According to Ray Kurzweil: www.kurzweilai.net/the-law-of-…
- We achieve one Human Brain capability (2 * 10^16 cps) for $1,000 around the year 2023.
- We achieve one Human Brain capability (2 * 10^16 cps) for one cent around the year 2037.
- We achieve one Human Race capability (2 * 10^26 cps) for $1,000 around the year 2049.
- We achieve one Human Race capability (2 * 10^26 cps) for one cent around the year 2059.
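As a back-of-envelope check (my own arithmetic, not from Kurzweil's page), the Human Brain figures above imply a very aggressive price-performance doubling time, assuming a smooth exponential price decline:

```python
import math

# Kurzweil's prediction: one Human Brain capability (2e16 cps) falls
# from $1,000 in 2023 to $0.01 in 2037. Assuming smooth exponential
# decline, how often must price-performance double?
price_ratio = 1000 / 0.01           # 100,000x cheaper
years = 2037 - 2023                 # 14 years
doublings = math.log2(price_ratio)  # ~16.6 halvings of price
doubling_time = years / doublings   # ~0.84 years per doubling

print(f"{doublings:.1f} price halvings in {years} years")
print(f"implied doubling time: {doubling_time:.2f} years")
```

That is, the predictions assume price-performance doubles roughly every ten months, faster than the classic two-year Moore's-law cadence.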
"The research we fund at universities around the world and at our own Research Center uses regenerative medicine to repair the damage underlying the diseases of aging. Our goal is to help build the industry that will cure these diseases."
Essentially, they aim to combat aging to prolong our lives: we would be able to stay as healthy as a 25-year-old indefinitely. Lifespans could run from centuries to thousands of years, and they expect the first generation of radical life extension in less than 20 years.
Save the Children: www.savethechildren.org/
"Save the Children gives children in the United States and around the world what every child deserves – a healthy start, the opportunity to learn and protection from harm. When disaster strikes, we put children’s needs first. We advocate for and achieve large-scale change for children. We save children’s lives. Join us."
Charity which funds family planning, contraception, education, and other much-needed resources for the world's poor.
I LOVE considering what concepts will become more prevalent soon, and how different technologies might work (engineering student). For example, I'd love to see virtual worlds created using more layers of procedural techniques, not just for landscapes, but for, say, the bark on a tree or veins in its leaves. Androids and prosthetics might be printed from artificial cells and have the ability to heal. So many cool things!
DeviantArt 2036: 2D, 3D, immersive virtual reality, which would include tasting food through virtual reality.
What Kurzweil expands on is Moore's law, which describes exponential increases in computing. However, Moore's law is just a rule of thumb that people tend to accept as a provable law.
It is merely a projection, nothing more. The flaw in this kind of theoretical forecasting is that it rests on new technologies that have not been developed yet.
Making something exponentially smaller and faster also means it gets exponentially hotter. Since heat and electronics don't play well with each other, there will come a time when a ceiling is reached, barring a future leap in technology.
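The heat point can be made concrete with the textbook dynamic-power relation P ≈ C·V²·f (my illustration, not part of the original comment; all numbers are illustrative, not measured): classic Dennard scaling kept power density flat by lowering voltage along with feature size, but once voltage scaling stalled, shrinking transistors meant more heat per unit area.

```python
# Sketch of why shrinking chips run hotter once voltage scaling stalls.
# Uses the textbook dynamic-power relation P ~ C * V^2 * f per transistor.
def power_density(scale: float, v: float, f: float) -> float:
    """Relative power per unit area at a given linear feature scale.

    Capacitance per transistor shrinks with `scale`, while transistor
    count per unit area grows as 1/scale^2.
    """
    per_transistor = scale * v**2 * f   # C shrinks ~ linearly with scale
    transistors_per_area = 1 / scale**2
    return per_transistor * transistors_per_area

base = power_density(1.0, 1.0, 1.0)
# Dennard scaling: halve features, halve voltage, double frequency
# -> power density stays roughly constant.
dennard = power_density(0.5, 0.5, 2.0)
# Post-2005 reality: voltage stuck near its old value while features
# shrink and frequency rises -> power density quadruples.
stalled = power_density(0.5, 1.0, 2.0)
print(base, dennard, stalled)  # 1.0 1.0 4.0
```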
I'm not saying Moore's law (and therefore Kurzweil) is wrong in its projection, just that everyone anticipates better things to come with drooling eagerness, when the reality is that exponential growth, while nice when it hits, is nothing but an addict's fondest desire. Anybody who knows anything about addicts knows that after the high comes the crushing low. While that metaphor may not fit a reversal of technology, it does fit the despondence when predicted future technology fails to arrive.
Figure it like a reverse law of "pay it forward."
More importantly, by now we should have seen an improvement in technology-based parts, i.e., longer-lasting technology. Instead, because of the exponentially increasing heat issues, parts have seen shorter lifespans since about 1997, not longer ones.
While I am not trying to dampen futurist thoughts like the blurring of virtual reality and reality as Kurzweil describes, my belief is that the human brain is not something that can be predictably "hacked" into. Perception is personal and therefore unpredictable. Ask 5 people what happened in an accident and you'll get 5 different interpretations. Likewise, virtual reality as a shared construct is not likely to happen, since it would allow each person to bend it to their own thoughts. This doesn't sound bad until you consider that, unless each person has their own personal virtual reality, people will be making and unmaking each other's virtual reality. By extension, virtual realities could be similar to dA in that people create and submit their virtual concoctions to a social medium for public viewing, but it won't ever be interactive in allowing others to bend or remake it to their will.
The point of all this is that the truly "Green Lantern" concept of "if you think it, you can create it" virtual reality won't ever happen, by my understanding of it. That would require quite sophisticated software that could anticipate and compensate for unique and new concepts and possibilities.
In other words, intelligent, as in Artificial Intelligence.
Otherwise, you can only have the creativity some programmer anticipated when writing the software: the tools you'll need, and by extension what is and isn't possible.
Again, I am not saying that possibilities aren't possible. I am simply trying to temper the wet-your-lips, greedy anticipation that people seem to be exhibiting. For such futuristic projections to occur, we need to make GREAT strides that I don't see happening now or forthcoming.
Though I agree with the concept of your model: designed obsolescence will drive consumer repurchasing of newer/faster computers, which will fuel newer technology research.
You know that in the US, effective Jan 1, 2014, it is now impossible to purchase incandescent light bulbs? People are being forced to purchase fluorescent bulbs, which usually have a shorter lifespan but cost many times more money. This, I predict, is the direction of newer computer technologies.
Again, the concepts they are touting to get people eager for new tech are, by and large, technologies in which they have yet to make groundbreaking achievements.
Seriously, I am not trying to dampen your enthusiasm, though I admit I am trying to be a grounding force of practicality, and to point out that they need to make GREAT advances before such concepts come to fruition. Exponential increases in technology are a unicorn: nice to imagine, but nothing to take to the bank.
Personally, I would like to see less enthusiasm for technology that will fuel new sources of entertainment, and more enthusiasm for solutions to resources that will soon be depleted, like fossil fuels (the tipping point is within 20-30 years). Electric cars are a neat concept for getting people around town, but many countries depend highly on foreign and local trade to function properly, and electricity can't power ocean-going freighters and highway semis.
Helium is necessary for computer microchip manufacturing and is expected to RUN OUT in the next FOUR years.
It's time to get the theoretical-science-party-hats off and get the down-to-business-thinking-hats on.
"Among the renewable energy sources, hydropower's share during the first half of 2013 was 30.18 percent, biomass 25.26 percent, biofuels 20.18 percent, wind 18.80 percent, solar 3.19 percent, and geothermal 2.39 percent."
Take a penny, double it (2 cents), double it again (4 cents), double it again (8 cents), double it again (16 cents), and so on (ad infinitum, ad nauseam); after 30 doublings you have over $10 million.
Now all of that occurring at periodic regular intervals of time is my understanding of exponential growth.
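The penny illustration above can be computed exactly: after n doublings, a single penny is worth 2^n cents.

```python
# Exponential growth by repeated doubling: one penny doubled n times.
def penny_after(doublings: int) -> float:
    """Value in dollars of one penny doubled `doublings` times."""
    return (2 ** doublings) / 100

for n in (1, 2, 3, 10, 20, 30):
    print(f"after {n:2d} doublings: ${penny_after(n):,.2f}")
# after 30 doublings: $10,737,418.24
```

Ten doublings is roughly a thousandfold increase (2^10 = 1024), which is why each extra batch of ten doublings tacks three more zeros onto the total.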
Source of who said exponential growth is "a unicorn*": my college C++ professor (2006).
*Not exactly what he said, but that's the gist. He also showed that Moore never actually stated "Moore's law," or at least was misquoted.
(that's the limit of my sourcing)
I'll admit I wasn't aware of the 2013 research into carbon nanotube computers. However, that it performs at the level of a 1970s-era microchip is not very promising. In a pinch, it's better than nothing, but providing that low a quality of technology as a substitute for what we use today will be the "reverse pay-it-forward" effect I mentioned earlier. Getting society and industry to accept dated tech as new tech will be, in my opinion, the death knell of computers.
If helium depletion occurs (on my or your schedule) and nanotube computers are even quadrupled in power to what those tests suggest, this would be exactly the type of hiccup that disproves exponential growth of computers...
I won't fuss about the "when" of helium depletion or its tipping point. That there isn't any contention about "IF" it will reach a tipping point or eventual depletion is enough for me. Pursuit of "whens" is only of concern to people thrilled by the drama of countdowns.
I don't believe in climate change. That is, I don't believe that mankind, as a whole or in part, can make significant changes to push what is inevitable one way or the other. The effect can be exemplified by a glass filled with water that has a convex meniscus; one additional drop breaks the surface tension and releases more than what was put in. That people want to argue how half full or half empty the glass was before mankind came along is a testament that people have waaaay too much time on their hands and too much desire to point fingers of blame. Perhaps those people are the ones that also like countdowns? I don't know and I don't care, because both groups bore me.
If you ever feel the initiative, I invite you to look at the geologic record of ice ages. If it were a smooth transition from glacial period to interglacial period, with nary a bump until mankind came along, I'd agree that we created or helped create a tipping point. Instead it reads like the EKG of a 500 lb, 80-year-old person.
As far as lab-grown meat goes, I can only presume you are worried that 1.3 billion cattle worldwide is a number to worry about. What interests me is that people who worry over such numbers don't recognize that there used to be an estimated 80 million bison in America alone, and at one time an estimated 1.5 million humpback whales. Now IF those ecosystems were still viable AND there was increased growth of cattle, then I'd agree there is reason for concern; that mankind has added instability to a stable global system. However, belief in global warming and our hubris to manhandle nature is, to me, irrational.
I meant that the neural pathways for each brain are, in a sense, unique which would foil the ability for nanobots to connect your brain to a virtual reality.
I was just watching a TV program the other day that described Autism Spectrum Disorders. Long story short, ASD affects one in 80-some-odd children and, through trauma, even some adults later in life. They are starting to realize that what was previously lumped together as "Autism" has a great number of variances. ASD can include a "high-functioning autistic" child or adult whose map of neural pathways, through birth or trauma, gets rerouted. As yet, this rerouting follows no known pattern. The affected areas are usually the math region, the visual region, and a third region I can't remember. Those are just the affected areas; where the rerouting ends up seems to be the wildcard. It could join with the memory section, speech section, or various other parts of the brain (if any).
It is more prevalent in males, and its occurrence rises each year. Thus, in this instance alone, the ability for these nanobots to interact with the brain is questionable. I don't know many neural conditions, but I anticipate there are other instances where neural pathways are scrambled from the norm.
That aside, if virtual reality as you describe it is going to catch on, it needs to be interactive with other people. That is where I think it is going to go south. The ability for people to interact with other people while interacting with their environment, especially where doing creative things is concerned, is where you will get people writing and rewriting each other's realities.
Look at it this way: you either have a hub reality that everyone jacks into, or you have your own version of heaven, though isolated and alone. The former is more likely, logistically speaking, as you only have to maintain one reality for everyone to jack into. Although it could be like Second Life or IMVU (I used to be a Dev for both) in that there is a hub reality with your own room for your own constructs. Interaction with your reality, though, is limited in the social areas. That is where people can interact with other people and view their creative works (that IS what this topic is about, yes?). In your own "room" you can mold reality to your own liking.
However, I will restate my previous comment: in either instance, your ability to mold reality is based on a software engineer's ability to anticipate your creativity and make tools for you to use, thus keeping your creativity from going to the Nth degree.
In other words, that reality will supply you with a paintbrush and canvas, for instance, but it won't be able to anticipate that you wanted to use a virtual frog as a paintbrush, nor anticipate that you wished to use your frogbrush on a canvas made of water. Thus, the creative ability is limited.
The creative mind goes outside the box, which anything less than an artificial intelligence will not be able to adapt and adjust to.
Also, nanobots (which are expected to transfer gigabytes, or even terabytes, of information to make the virtual reality seem real) are still going to run extremely HOT, which I would NOT want in MY mind...
Last, to reiterate what I have heard from someone else (I can't remember who) discussing this same concept: that form of reality also would not give one the ability to do something they couldn't do in reality. You can't simply become a Van Gogh unless you knew how to paint like Van Gogh in real life, and you can't play like Jimi Hendrix if you never learned the guitar.
My point being, the people who want to SELL this concept are painting pretty pictures of what MIGHT be, but I don't see it as being possible in the near, or distant, future.
"By comparing brain activity scans, they were able to correctly predict which of 120 pictures someone was focusing on in 90 per cent of cases.
The technique could one day form the basis of a machine to project the imagination on to a screen."
Even right now there is research attempting to understand the patterns and signatures of our brain, i.e., translating the language of the brain into something recognizable.