[Linkpost] GatesNotes: The Age of AI has begun
post by WilliamKiely · 2023-03-22T04:20:34.340Z · LW · GW · 9 comments
This is a linkpost for https://www.gatesnotes.com/The-Age-of-AI-Has-Begun#ALChapter6
The Age of AI has begun
Artificial intelligence is as revolutionary as mobile phones and the Internet.
30s Update on Bill Gates' Views re Alignment:
- Gates cites Bostrom and Tegmark's books as having shaped his thinking, but thinks that AI developments of the past few months don't make the control problem more urgent.
- Gates asks what happens if strong AI's goals conflict with humanity's interests and whether we should try to prevent strong AI from ever being developed; he says these questions will get more pressing with time.
Quotations that Convey Key Views
From the section "Risks and problems with AI":
- "Three books have shaped my own thinking on this subject: Superintelligence, by Nick Bostrom; Life 3.0 by Max Tegmark; and A Thousand Brains, by Jeff Hawkins."
- "I don’t agree with everything the authors say, and they don’t agree with each other either. But all three books are well written and thought-provoking."
- "There's the possibility that AIs will run out of control. Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us?"
- "Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months."
- "[N]one of the breakthroughs of the past few months have moved us substantially closer to strong AI."
- "Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months."
- "Superintelligent AIs are in our future."
- "Once developers can generalize a learning algorithm and run it at the speed of a computer—an accomplishment that could be a decade away or a century away—we’ll have an incredibly powerful AGI."
- "It will be able to do everything that a human brain can, but without any practical limits on the size of its memory or the speed at which it operates. This will be a profound change."
- "Once developers can generalize a learning algorithm and run it at the speed of a computer—an accomplishment that could be a decade away or a century away—we’ll have an incredibly powerful AGI."
- "These “strong” AIs, as they’re known, will probably be able to establish their own goals. What will those goals be? What happens if they conflict with humanity’s interests? Should we try to prevent strong AI from ever being developed? These questions will get more pressing with time."
9 comments
comment by philip_b (crabman) · 2023-03-22T08:37:29.478Z · LW(p) · GW(p)
I have a kinda-unrelated question. Does Bill Gates write gatesnotes entirely himself, just because he wants to? Or is it a marketing/PR thing written by other people? If it's the former, then I want to read it. If it's the latter, I don't.
Replies from: Sodium
comment by mukashi (adrian-arellano-davin) · 2023-03-22T05:21:55.638Z · LW(p) · GW(p)
I am not sure how anyone could say that "[N]one of the breakthroughs of the past few months have moved us substantially closer to strong AI" unless he hasn't really followed the breakthroughs of the past few months or has read only bad secondhand reports.
Replies from: gesild-muka
↑ comment by Gesild Muka (gesild-muka) · 2023-03-22T16:16:06.187Z · LW(p) · GW(p)
Strong AI, yes. True AI, probably not (that's just my guess). I started following this fairly recently; can you (or someone) provide some links to articles/posts with updated predictions of timelines that factor in recent breakthroughs? How far are we from true AI?
Replies from: Evan R. Murphy, adrian-arellano-davin
↑ comment by Evan R. Murphy · 2023-03-26T05:42:54.166Z · LW(p) · GW(p)
I think there were some not-insignificant updates to Metaculus aggregate predictions for AGI timelines in the past few months: https://www.lesswrong.com/posts/CiYSFaQvtwj98csqG/metaculus-predicts-weak-agi-in-2-years-and-agi-in-10#comments [LW · GW]
↑ comment by mukashi (adrian-arellano-davin) · 2023-03-22T22:26:48.272Z · LW(p) · GW(p)
What do you mean by true AI?
Replies from: lc, gesild-muka
↑ comment by Gesild Muka (gesild-muka) · 2023-03-23T11:54:30.191Z · LW(p) · GW(p)
I guess what I'm calling 'true AI' is not unlike the stated goal of general intelligence, or AGI. As opposed to narrow AI (also called weak AI), true AI is what the average sci-fi fan thinks of as AI (as in movies such as 'Ex Machina', '2001: A Space Odyssey', or 'Zoe'): seemingly conscious, exercising free will, and demonstrating human-like cognitive degrees of freedom.
With recent breakthroughs it may be useful to separate those terms, as we may have AGI soon but it will still be narrow in a lot of ways. True AI is still far off, in my opinion. I don't think it'll emerge directly from large language models, but more likely from a new substrate that's more dynamic than current computer chips, circuit boards, semiconductors, etc. The invention/discovery of that new substrate will be the biggest bottleneck to true AI.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-03-23T13:17:03.149Z · LW(p) · GW(p)
First, we should try to balance fears about the downsides of AI—which are understandable and valid—with its ability to improve people’s lives. To make the most of this remarkable new technology, we’ll need to both guard against the risks and spread the benefits to as many people as possible.
Second, market forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely. With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity. Just as the world needs its brightest people focused on its biggest problems, we will need to focus the world’s best AIs on its biggest problems.
Although we shouldn’t wait for this to happen, it’s interesting to think about whether artificial intelligence would ever identify inequity and try to reduce it. Do you need to have a sense of morality in order to see inequity, or would a purely rational AI also see it? If it did recognize inequity, what would it suggest that we do about it?