Singularity Summit 2010 on Aug. 14-15 in San Francisco
post by alyssavance · 2010-06-02T06:01:26.276Z · LW · GW · Legacy · 22 comments
The Singularity Summit 2010 will be held on August 14th and 15th at the Hyatt Regency in San Francisco, and will feature Ray Kurzweil and famed Traditional Rationalist James Randi as speakers, in addition to numerous others. During last year's Summit (in New York City), there was a very large Less Wrong meetup with dozens of attendees, and it is quite possible that there will be one again this year. Anyone interested in planning such a meetup (not just attending) should contact the Singularity Institute at institute@intelligence.org. The Singularity Summit press release follows after the jump.
Singularity Summit 2010 returns to San Francisco, explores intelligence augmentation
Speakers include Futurist Ray Kurzweil, Magician-Skeptic James Randi
Will it one day become possible to boost human intelligence using brain implants, or to create an artificial intelligence smarter than Einstein? In a 1993 paper presented to NASA, science fiction author and mathematician Vernor Vinge called such a hypothetical event a “Singularity”, saying “From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye”. Vinge pointed out that intelligence enhancement could lead to “closing the loop” between intelligence and technology, creating a positive feedback effect.
This August 14-15, hundreds of AI researchers, robotics experts, philosophers, entrepreneurs, scientists, and interested laypeople will converge in San Francisco to address the Singularity and related issues at the only conference on the topic, the Singularity Summit. Experts in fields including animal intelligence, artificial intelligence, brain-computer interfacing, tissue regeneration, medical ethics, computational neurobiology, augmented reality, and more will share their latest research and explore its implications for the future of humanity.
“This year, the conference shifts to a focus on neuroscience, bioscience, cognitive enhancement, and other explorations of what Vernor Vinge called ‘intelligence amplification’ — the other route to the Singularity,” said Michael Vassar, president of the Singularity Institute, which is hosting the event.
Irene Pepperberg, author of “Alex & Me,” who has pushed the frontier of animal intelligence with her research on African Gray Parrots, will explore the ethical and practical implications of non-human intelligence enhancement and of the creation of new intelligent life less powerful than ourselves. Futurist-inventor Ray Kurzweil will discuss reverse-engineering the brain and his forthcoming book, How the Mind Works and How to Build One. Allan Snyder, Director, Centre for the Mind at the University of Sydney, will explore the use of transcranial magnetic stimulation for the enhancement of narrow cognitive abilities. Joe Tsien will talk about the smarter rats and mice that he created by tuning the molecular substrate of the brain’s learning mechanism. Steve Mann, “the world’s first cyborg,” will demonstrate his latest geek-chic inventions: wearable computers now used by almost 100,000 people.
Other speakers will include magician-skeptic and MacArthur Genius Award winner James Randi; Gregory Stock (Redesigning Humans), former Director of the Program on Medicine, Technology, and Society at UCLA’s School of Public Health; Terry Sejnowski, Professor and Laboratory Head, Salk Institute Computational Neurobiology Laboratory, who believes we are just ten years away from being able to upload ourselves; Ellen Heber-Katz, Professor, Molecular and Cellular Oncogenesis Program at The Wistar Institute, who is investigating the molecular basis of wound regeneration in mutant mice, which can regenerate limbs, hearts, and spinal cords; Anita Goel, MD, physicist, and CEO of nanotechnology company Nanobiosym; and David Hanson, Founder & CEO, Hanson Robotics, who is creating the world’s most realistic humanoid robots.
Interested readers can watch videos from past summits and register at www.singularitysummit.com.
22 comments
comment by Scott Alexander (Yvain) · 2010-06-02T18:51:02.160Z · LW(p) · GW(p)
Is Randi a singularitarian?
Replies from: MichaelVassar, JoshuaZ
↑ comment by MichaelVassar · 2010-06-03T06:31:39.176Z · LW(p) · GW(p)
He seems to go back and forth from day to day. Overall, though, he's definitely skeptical that we can steer the Singularity and doesn't think it's coming soon, but he thinks the logic of it makes it likely in the long term.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2010-06-03T09:11:56.122Z · LW(p) · GW(p)
I would be interested to know what he thinks about cryonics - it sometimes seems like it's become the official skeptical position that cryonics is bunk, but in this strongly anti-cryonics thread, Rudi Hoffman claimed to have spoken to him about it and found him "open to the idea"...
↑ comment by JoshuaZ · 2010-06-02T19:03:01.110Z · LW(p) · GW(p)
I doubt it. I've never heard him say anything that suggested he was. For that matter, I doubt that Irene Pepperberg is a Singularitarian either. But lots of stuff that is of interest to Singularitarians is also of interest to people who assign a low probability to a Singularity. Many of the people who go to the Summit don't consider the Singularity to be imminent.
comment by timtyler · 2010-06-02T13:04:04.431Z · LW(p) · GW(p)
Intelligence Augmentation is not much of "another route". I argue that Intelligence Augmentation synergetically complements machine intelligence approaches here:
"Intelligence augmentation" - http://www.youtube.com/watch?v=nQXVWvtCjJs
comment by Houshalter · 2010-06-02T15:49:20.892Z · LW(p) · GW(p)
In a 1993 paper presented to NASA, science fiction author and mathematician Vernor Vinge called such a hypothetical event a “Singularity”, saying “From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye”. Vinge pointed out that intelligence enhancement could lead to “closing the loop” between intelligence and technology, creating a positive feedback effect.
Why are we assuming that AI will be a free lunch leading to omnipotent technology? AI might be able to make more efficient wind turbines, car engines, computers, lasers, airplanes, etc. It might be able to make our economy more efficient and boost our technological progress, but it isn't going to instantly find the secrets of the universe and make us all superhuman. I think that a post-AI world will be very, very similar to ours, except a lot more efficient and with people having a lot more options. Imagine most of our structures and roads being built underground (especially farms), converting vast quantities of previously occupied land back into forests, as well as allowing our population to expand nearly indefinitely. A little off topic, I know, but it is somewhat relevant.
Replies from: Kaj_Sotala, Kevin, alyssavance, JoshuaZ
↑ comment by Kaj_Sotala · 2010-06-02T21:05:44.220Z · LW(p) · GW(p)
A post-human world is not very, very similar to a chimpanzee world except having a lot more variety when it comes to bananas.
Replies from: Houshalter
↑ comment by Houshalter · 2010-06-02T21:37:49.000Z · LW(p) · GW(p)
But that's assuming that you can somehow make that jump. Chimpanzee to human is a relatively small jump compared to human to omnipotent power. My point is that, at first anyway, there will still be cities and cars and governments (maybe) and planes, etc. After a little while you can expect to see cars and roads disappear and efficient transportation systems being built below ground, as well as much more efficient air travel. Cities will still be around, but human work and labor will disappear, as will the factories and industrial centers that used to maintain them. What people will do then is anyone's guess, but all-out anarchy seems unlikely, as "no need to work" is a lot different from "no work". I expect massive space exploration attempts that won't even be comparable to anything we've accomplished before as humans. But in the end, we will still be here, and we won't want to change; the world we have is based around our own desires and principles, so it fits that the ultimate version of this world, which has to be built on the existing one, would be very similar. In other words, people won't want to put themselves in capsules and let robots cut down every tree on our planet. All of that is science fiction.
↑ comment by Kevin · 2010-06-02T17:27:27.157Z · LW(p) · GW(p)
Or see That Alien Message. Basically, an AI that is able to make truly efficient use of sensory information could have a chance of solving Cosmology in short order.
http://lesswrong.com/lw/qk/that_alien_message/
Replies from: Houshalter
↑ comment by Houshalter · 2010-06-02T18:19:14.283Z · LW(p) · GW(p)
I read that article once, and some parts of it more than once, but I still fail to see how it's relevant to this. It must be, though, since two people have already given links to it.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2010-06-02T18:31:58.440Z · LW(p) · GW(p)
The point is that even with only moderate intelligence, if you speed that intelligence up enough you can potentially get a lot of gains. Thus, for example, if you took a moderately smart human (say an average Less Wrongian) and made them able to think a hundred times as fast, they'd be pretty damn productive, even if their overall creativity was not that much higher. Now, we don't know the minimal processing power it takes to run an intelligence. Imagine, for example, that it turned out you could simulate in real time an intelligence about as smart as a human on an old 486, and that the main issue was just figuring out the algorithms. That would mean a cheap commercial computer today could run that AI at around a thousand times the speed of a human. Now, you may object that you find it implausible that an AI could run in real time on a 486. That's fine. Do you think it is plausible it could run on a machine today if we had the algorithms? OK. Then imagine what happens if we find those algorithms 20 years from now. Same end result. Unless you believe that we will coincidentally discover how to make general AI at about the same time we have precisely the processing power to run it, AIs will likely be quite fast little buggers.
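A minimal back-of-envelope sketch of that scaling argument, in Python; the instruction-rate figures are rough, assumed values (a 486-class chip at very roughly 50 MIPS, a circa-2010 desktop at very roughly 50,000 MIPS), used only to illustrate the ratio, not as measurements:

```python
# Rough, assumed figures (orders of magnitude only, not measurements):
MIPS_486 = 50               # a mid-1990s 486-class CPU, ~50 million instructions/sec
MIPS_2010_DESKTOP = 50_000  # a circa-2010 desktop CPU, ~50 billion instructions/sec

# If an algorithm ran a human-equivalent mind in real time on the 486,
# the same algorithm on the newer chip would run roughly this many times
# faster than real time (ignoring memory bandwidth, parallelism, etc.):
speedup = MIPS_2010_DESKTOP / MIPS_486
print(f"~{speedup:.0f}x real time")  # ~1000x
```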
Replies from: Houshalter
↑ comment by Houshalter · 2010-06-02T19:54:47.962Z · LW(p) · GW(p)
That's a misconception. We're not trying to simulate human or human-like brains. IMO, NNs and the like are dead ends. The AI project I'm currently working on will (theoretically) be able to run on any machine. The thing is that on a super-fast machine it can just spend extra time analyzing problems, while on a slow one it will probably have to spend most of its time figuring out how to do the problem without wasting so much power. So, yes, there is a definite advantage in speed, but it will always be as efficient as possible given the power it has. So measuring intelligence by how well it does compared to a human isn't practical. By that measure, a calculator could be argued to be thousands of times faster than a human.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2010-06-02T21:48:27.081Z · LW(p) · GW(p)
That's a response that relies on specific models of AI. If one can construct any AI that functionally resembles a human, then speed of this sort will matter.
Replies from: Houshalter
↑ comment by Houshalter · 2010-06-02T23:04:07.955Z · LW(p) · GW(p)
To actually simulate the brain, you have to simulate all the complex chemical reactions and neurons. You can simplify it by just simulating the algorithms that neurons use, but that's still 10 billion things you have to simulate every millisecond or less. To make it faster you could use hashing, you could cut unnecessary parts out, you could use compression, etc., but it's still too much for what we have to work with now. Even if you let the human you're simulating modify his own program, it only makes things worse, considering he could easily make a mistake. You can take the processes which the brain performs, or appears to perform, and model them in a computer. You might achieve the same results or better, but it's not a human. But that's not the point. I don't want a computer that can only do the things I can do and nothing else; I want one which is good at things I can't do. Essentially, all I care about is results. To get them I will have to use an entirely different system, and then you're comparing apples and oranges. A human brain doesn't run on a serial machine, so you can't compare the two systems.
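Taking the comment's own figure of 10 billion neuron-level updates per millisecond, a minimal sketch of the arithmetic; the operations-per-update and desktop-throughput numbers are assumed, order-of-magnitude values for illustration only:

```python
# Order-of-magnitude sketch using the figure quoted above; all numbers are
# assumptions for illustration, not measured values.
neurons = 10e9              # "10 billion things" to update
updates_per_second = 1_000  # one update per neuron per millisecond
ops_per_update = 100        # assumed cost of one simplified neuron update

required_ops = neurons * updates_per_second * ops_per_update  # ops/sec needed
desktop_ops = 1e11          # rough throughput of a circa-2010 desktop CPU

print(f"required: {required_ops:.1e} ops/sec")            # ~1e15 ops/sec
print(f"shortfall: ~{required_ops / desktop_ops:.0f}x")    # ~10,000x too slow
```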
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2010-06-02T23:25:59.301Z · LW(p) · GW(p)
To simulate a neuron you don't necessarily need to simulate every chemical reaction inside it. We have pretty decent models of how neurons act. While there are serious potential problems with our understanding of the brain (for example, there's evidence that glial cells matter for cognition, but we don't really understand what they are doing), we don't need to model every chemical reaction to make a good approximation. Yes, that is still a lot of simulating, but that's part of the reason we can't do it now; it isn't a reason we can't do it in the future.
Even if you let the human you're simulating modify his own program, it only makes things worse, considering he could easily make a mistake. You can take the processes which the brain performs, or appears to perform, and model them in a computer. You might achieve the same results or better, but it's not a human. But that's not the point. I don't want a computer that can only do the things I can do and nothing else; I want one which is good at things I can't do.
I'm confused by the relevance of your statements here compared to what we were discussing earlier in this thread about efficiency. Having a lot of human brain equivalents running much faster than humans will still help out a lot. Since the earlier claim was about using these entities to improve technologies, it should be clear that having them would help a lot. To take one somewhat futuristic (and ethically questionable) example: imagine that every desktop had a system that allowed you either to simulate a brain a hundred times as fast as a human or to simulate a hundred brains at normal speed. Do you not think that such technology would be very helpful?
Replies from: Houshalter
↑ comment by Houshalter · 2010-06-03T00:23:21.804Z · LW(p) · GW(p)
Having a lot of human brain equivalents running much faster than humans will still help out a lot.
But unless it can be done (and there's that dang speed-of-light thing, as well as our lack of understanding of our own brain), it's not practical. I'm confused about where this argument is going, but my original point was to defend simple systems which are not based off of biology in any way other than the occasional genetic algorithm. If you have a way to build a "brain box", no one's stopping you, go ahead (well, there are ethical considerations, but you could get around them if you dropped emotions and stuff). Ever heard of Eurisko (it's actually how I found this site)? It achieved amazing engineering feats but was not based in any way on actual models of the brain.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2010-06-03T00:39:39.367Z · LW(p) · GW(p)
I'm confused about where this argument is going, but my original point was to defend simple systems which are not based off of biology in any way other than the occasional genetic algorithm.
This is confusing, given that a few posts up we were discussing how AI would improve efficiency on many different levels, starting with your initial post in this thread. The point is that fast simulated brains will produce a greater increase in efficiency than the same humans thinking through those ideas slowly. And the upshot is that this logic works fine even for an AI that isn't a simulation of a human brain but can act at least like a minimally scientifically productive human.
I'm familiar with Eurisko, but I don't see how it is at all relevant.
↑ comment by alyssavance · 2010-06-02T16:47:21.519Z · LW(p) · GW(p)
↑ comment by JoshuaZ · 2010-06-02T16:35:21.426Z · LW(p) · GW(p)
One major divide on both LW and Overcoming Bias is over estimates of the probability of a Singularity in the near future. However, I'm a bit puzzled by your remark about efficiency. First of all, increases in efficiency can lead to otherwise impractical technologies becoming practical. For example, the portable tape player, when it originally came out, was not an intrinsically new technology but rather a much more efficient implementation of existing technology. Similarly, computer networks have been around since the 1960s, but the internet had such a major impact because of the increasing efficiency (measured in cost and speed, for example) of computing technologies. If one believes that the advent of AI will lead to AIs that are much smarter than humans, they should be able to quickly make many technologies much more efficient. To use one example, the primary problem with a space elevator is making carbon nanotubes cheaply and reliably enough. If an AI can come up with a solution for that, then the cost of going from Earth to orbit will be reduced by a few orders of magnitude. That alone would have a lot of implications. Now, apply the same logic to hundreds of potential technologies.
If you think that friendly, even just moderately smart, AI will occur, one can project that it will potentially result in lots of changes.
(I should probably add a disclaimer that I don't assign a Singularity-type event a high probability. If I'm not presenting the position well, please correct me.)
Replies from: Houshalter
↑ comment by Houshalter · 2010-06-02T17:28:00.632Z · LW(p) · GW(p)
I understand all that, but I was trying to make a point about how inventing AI is not a magic key that opens up all doors. Making technology more efficient is one thing, but we're still going to be here, and so is the Earth. I think robotics is one of the biggest areas to be improved; we have the technology, but it's not "practical" yet. What happens after that is anyone's guess, but the universe is still constrained by the speed of light. In other words, there are hard limits on what you can achieve. Basically, those Timewave Zero nuts believe that the Singularity is going to occur in a single day: the first novelty event (supposedly comparable to the industrial and agricultural revolutions) happens, then so many hours later another one, then so many minutes later another one, then a few seconds later another one, etc.
Replies from: alyssavance, JoshuaZ
↑ comment by alyssavance · 2010-06-02T17:55:54.437Z · LW(p) · GW(p)
"People seem to make a leap from "This is 'bounded'" to "The bound must be a reasonable-looking quantity on the scale I'm used to." The power output of a supernova is 'bounded', but I wouldn't advise trying to shield yourself from one with a flame-retardant Nomex jumpsuit." - http://lesswrong.com/lw/qk/that_alien_message/
↑ comment by JoshuaZ · 2010-06-02T18:12:04.634Z · LW(p) · GW(p)
Only a small fraction of Singularitarians believe that a hard, fast take-off will occur in a matter of hours; Eliezer, for example, believes that is likely. But that's a claim that isn't at all relevant to whether AI is a "magic key." The notion of repeated singularities at an ever-increasing pace is not one that is generally argued; it seems like you are confusing it with a Kurzweil-style Singularity. It might help to read up on the different types of Singularities envisioned. For a very brief breakdown, see Eliezer's remarks here.
I'm deeply puzzled by your remark about "Timewave Zero." Timewave Zero is a New Age idea originating from Terence McKenna and connected deeply with 2012 claims. That has nothing to do with the Singularity other than the very superficial similarity in the claim of imminent large-scale changes to human society.