Comments
This thing?
https://www.scientificamerican.com/article/what-is-the-memory-capacity/
Many of the calculations of the brain's capacity are based on wrong assumptions. Is there an original source for that 2.5 PB calculation? This video is very relevant to the topic if you have some time to check it out:
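For context, the kind of back-of-envelope that usually produces figures in this range goes roughly as follows (the numbers below are my own guesses at the typical assumptions, not necessarily the ones behind the article's 2.5 PB):

$$\underbrace{10^{11}}_{\text{neurons}} \times \underbrace{10^{4}}_{\text{synapses per neuron}} \times \underbrace{2.5\ \text{bytes per synapse}}_{\text{assumed}} \approx 2.5 \times 10^{15}\ \text{bytes} = 2.5\ \text{PB}$$

Every factor in that product is an assumption that can be questioned, which is exactly the point.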
Thanks so much🙏
Same as I would do in Slack! I simply have some work groups on Discord, that's why.
Is this available for Discord?
Great! Could you make it so that if I input P for Hypothesis A, 1 - P appears automatically for Hypothesis B?
This should be curated. Just reading this list is a good exercise for those people who attribute a very high probability to a single possible scenario.
I don't see why Jaynes is wrong. I guess it depends on the interpretation? If two humans are chasing the same thing and there is a limited amount of it, of course they are in conflict with each other. Isn't that what Jaynes is pointing at?
Good post, I hope to read more from you
Yeah, sorry about that. I didn't put much effort into my last comment.
Defining intelligence is tricky, but to paraphrase EY, it's probably wise not to get too specific since we don't fully understand intelligence yet. In the past, people didn't really know what fire was. Some would just point to it and say, "Hey, it's that shiny thing that burns you." Others would invent complex, intellectual-sounding theories about phlogiston, which were entirely off base. Similarly, I don't think the discussion about AGI and doom scenarios gets much benefit from a super precise definition of intelligence. A broad definition that most people agree on should be enough, like "Intelligence is the capacity to create models of the world and use them to think."
But I do think we should aim for a clearer definition of AGI (yes, I realize 'Intelligence' is part of the acronym). What I mean is, we could have a vaguer definition of intelligence, but AGI should be better defined. I've noticed different uses of 'AGI' here on Less Wrong. One definition is a machine that can reason about a wide variety of problems (some of which may be new to it) and learn new things. Under this definition, GPT4 is pretty much an AGI. Another common definition on this forum is that an AGI is a machine capable of wiping out all humans. I believe we need to separate these two definitions, as that's really where the crux lies.
What is an AGI? I have seen a lot of "no true Scotsman" around this one.
I guess the crux here for most people is the timescale. I actually agree that things can eventually get very bad if there is no progress in alignment etc., but the situation is totally different if we have 50 or 70 years to work on that problem or, as Yudkowsky keeps repeating, we don't have that much time because AGI will kill us all as soon as it appears.
The standard argument you will probably hear is that AGI will be capable of killing everyone because it can think so much faster than humans. I haven't yet seen doomers engage seriously with the argument about capabilities. I agree with everything you said here, and to me these arguments are obviously right.
Any source you would recommend to know more about the specific practices of Mormons you are referring to?
The Babbage example is the perfect one. Thank you, I will use it
This would clearly place my position somewhere different from the doomers'.
I would also place myself in the upper right quadrant, close to the doomers, but I am not one of them.
The reason is that the exact meaning of "tractable for an SI" is not very clear to me. I do think that nanotechnology/biotechnology can progress enormously with SI, but the problem is not only developing the required knowledge, but also creating the economic conditions to make these technologies possible, building the factories, making new machines, etc. For example, nowadays, in spite of the massive worldwide demand for microchips, there are very, very few factories (and for some specific technologies the number of factories is n = 1). Will we get there eventually? Yes. But not at the speed that EY fears.
I think you summarised my position pretty well in this paragraph:
"I think another common view on LW is that many things are probably possible in principle, but would require potentially large amounts of time, data, resources, etc. to accomplish, which might make some tasks intractable, if not impossible, even for a superintelligence. "
So I do think that EY believes in "magic" (even more after reading his tweet), but some people might not like the term and I understand that.
In my case, using the word magic does not refer only to breaking the laws of physics. Magic might also refer to someone who holds such a simplified model of the world that they think you can build, in a matter of days, all those factories, machines and working nanotechnology (on the first try), then successfully deploy them everywhere, killing everyone, and that we will get to that point in a matter of days AND that there won't be any other SI that could work to prevent those scenarios. I don't think I am misrepresenting EY's point of view here; correct me otherwise.
If someone believed that a good group of engineers working for one week on a spacecraft design could successfully land it, 30 years later, on an asteroid close to Proxima Centauri, would you call that magical thinking? I would. There is nothing beyond the realm of physics here! But it assumes so many things and it is so stupidly optimistic that I would simply dismiss it as nonsense.
I agree with this take, but do those plans exist, even in theory?
This is fantastic. Is there anything remotely like this available for Discord?
I don't see how that implies that everyone dies.
It's like saying: weapons are dangerous, imagine what would happen if they fell into the wrong hands. Well, it does happen, and sometimes that has bad consequences, but there is no logical connection between that and everyone dying, which is what doom means. Do you want to argue that LLMs are dangerous? Fine. No problem with that. But doom is not that.
Thanks for this post. It's refreshing to hear about how this technology will impact our lives in the near future without any references to it killing us all
There are some other assumptions that go into Eliezer's model that are required for doom. I can think of one very clearly which is:
5. The transition to that god-AGI will be so quick that other entities won't have time to also reach superhuman capabilities. There are no "intermediate" AGIs that can be used to work on alignment-related problems or even as a defence against unaligned AGIs.
With all my heart, I hope you recover soon.
I believe I have found a perfect example where the "Medical Model is Wrong," and I am currently working on a post about it. However, I am swamped with other tasks, so I wonder if I will ever finish it.
In my case, I am highly confident that my model is correct, while the majority of the medical community is wrong. Using your bullet points:
1. Personal: I have personally experienced this disease and know that the standard treatments do not work.
2. Anecdotal: I am aware of numerous cases where the conventional treatment has failed. In fact, I am not aware of any cases where it has been successful.
3. Research papers: I came across a research paper from 2022 that shares the same opinion as mine.
4. Academics: Working in academia, I am well aware of its limitations. In this specific case, there is a considerable amount of inertia and a lack of communication between different subfields, as accurately described in the book "Inadequate Equilibria" by EY.
5. Medical: Most doctors hold the same opinion because they are influenced by their education. Therefore, if 10 doctors provide the same response, it should not be counted as 10 independent opinions.
6. Countercultural experts: No idea here.
7. Communities: I have not explored this extensively, but completing the post I am talking about might be the beginning.
8. Someone claims to have completely made the condition disappear using arbitrary methods: I am not personally aware of any such cases, but I suspect that it is feasible and could potentially be relatively simple.
9. Models: I have a precise mechanistic model of the disease and why the treatments fail to cure it. I work professionally in a field closely related to this disease.
In summary, my confidence comes from: 1. being an expert in a closely related field and understanding what other people are missing and, above all, why they are missing it; 2. having a mechanistic model; and 3. finding publications that express similar opinions.
Yes, I agree. I think it is important to remember that achieving AGI and doom are two separate events. Many people around here do make a strong connection between them, but not everyone. I'm in the camp that we are 2 or 3 years away from AGI (it's hard to see why GPT4 does not qualify as that), but I don't think that implies the imminent extinction of human beings. It is much easier to convince people of the first point because the evidence is already out there.
Has he personally tried interacting with GPT4? Can't think of a better way. It convinced even Bryan Caplan, who had publicly bet against it.
I would certainly appreciate knowing the reason for the downvotes
I guess I will break my recently self-imposed rule of not talking about this anymore.
I can certainly envision a future where multiple powerful AGIs fight against each other and are used as weapons; some might be rogue AGIs and some others might be at the service of human-controlled institutions (such as nation states). To put it more clearly: I have trouble imagining a future where something along these lines DOES NOT end up happening.
But, this is NOT what Eliezer is saying. Eliezer is saying:
The Alignment problem has to be solved ON THE FIRST TRY because once you create this AGI, we are dead in a matter of days (maybe weeks/months, it does not matter). If someone thinks that Eliezer is saying something else, I think they are not listening properly. Eliezer can have many flaws, but lack of clarity is not one of them.
In general, I think this is a textbook example of the Motte and Bailey fallacy. The Motte is: AGI can be dangerous, AGI will kill people, AGI will be very powerful. The Bailey is: AGI creation means the imminent destruction of all human life, and therefore we need to stop all development now.
I never discussed the Motte. I do agree with that.
But I think they do believe what they say. Is it maybe that they are... pointing to something else when using the word AGI? In fact, I do not even know if there is a commonly accepted definition of AGI.
I also don't see how some people can say that AGI will take decades when GPT4 is already almost there.
That's a possibility
Certainly no paperclips
Your comment is sitting at positive karma only because I strong-upvoted it. It is a good comment, but people on this site are very biased in the opposite direction. And this bias is eventually going to drive non-doomers away from this site (probably many have already left), and LW will continue descending into a spiral of non-rationality. I really wonder how people in 10 or 15 years, when we are still around in spite of powerful AGI being widespread, will rationalize that a community devoted to the development of rationality ended up being so irrational. And that was my last comment criticising doomers; every time I do it, it costs me a lot of karma.
I can't agree more with you. But this is a complicated position to maintain here on LW, and one that gives you a lot of negative karma.
One of the many ways this could backfire badly is by allowing authoritarian states like China to take the lead in the development of AIs.
+1 here
Sorry, I assumed you posted that just before the interview
Well, it seems it is your lucky day:
What do you mean by true AI?
I am not sure how anyone could say that "[N]one of the breakthroughs of the past few months have moved us substantially closer to strong AI" unless he hasn't really followed the breakthroughs of the past few months or has read only bad secondhand reports.
I have no idea about that topic specifically. What I would suggest is: read the literature yourself. This will, at least, allow you to ask better questions when you meet the dentist.
I see. Thanks! So crazily high. I would still like to see a correlation with the karma values
"These programs have been hailed as the first glimmers on the horizon of artificial _general_ intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments"
I think that day has "already" come. Mechanical minds are already surpassing human minds in many aspects: take any subject and tell ChatGPT to write a few paragraphs on it. It might not be as lucid and creative as the best of humans, but I am willing to bet that its writing is going to be better than most humans'. So, saying that its dawn is not yet breaking seems to me extremely myopic (it's like saying that that thing the Wright brothers made is NOT the beginning of flying machines).
"On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations."
We could argue that the human mind CAN (in very specific cases, under some circumstances) be capable of rational processes. But in general, human minds are not trying to "understand" the world around them by creating explanations. Human minds are extremely inefficient, prone to biases, get tired very easily, need to be off 1/3 of the time, etc., etc., etc.
"Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking."
Anyone who has spent some time conversing with ChatGPT knows that it does have a model of the world and is capable of causal explanation. It seems Chomsky didn't test this himself. I can concede that it might not be a very sophisticated model of the world (do most people have a very complex one?), but again, I expect this to improve over time. I think that some of ChatGPT's responses are very difficult to explain if it is not doing that thing we generally call thinking.
"For this reason, the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd? Because it might analogize the pattern it inferred from sentences such as “John ate an apple” and “John ate,” in which the latter does mean that John ate something or other. The program might well predict that because “John is too stubborn to talk to Bill” is similar to “John ate an apple,” “John is too suborn to talk to” should be similar to “John ate.” The correct explanations of language are complicated and cannot be learned just by marinating in big data."
I can't find the reference right now, but I remember very clearly that the next version of ChatGPT already surpasses the average human in things like implicatures. So doubling down and saying that these systems will "always" be superficial and dubious, when we "already" have models that are better than most humans at this, is, again, completely wrong.
I would agree with Chomsky if he were saying: it seems that "this specific chatbot" that some people made is still behind "some" humans in certain aspects of what we call intelligence. But he is claiming much more than that.
It would be very interesting to conduct a poll among the users of LW. I expect that it would show that this site is quite biased towards more negative outcomes than the average ML researcher in this study.
Also, it would be interesting to see how it correlates with karma, I expect a positive correlation between karma score and pessimism
"I don't think anyone has a good argument for it being lower then 5%, or even 50%,"
That's false. There are many, many good arguments. In fact, I would say that it is not only that; it is also that many of the pro-doom arguments are very bad. The only problem is that the conversation on LW on this topic is badly biased towards one camp, and that's creating a distorted image on this website. People arguing against doom tend to be downvoted far more easily than people arguing for doom. I am not saying that it isn't a relevant problem, or that people shouldn't work on it, etc.
"But I don't get the confidence about the unaligned AGI killing off humanity. The probability may be 90%, but it's not 99.9999% as many seem to imply, including Eliezer."
I think that 90% is also wildly high, and many other people around here think so too. But most of them (with perfectly valid criticisms) do not engage in discussions on LW (with some honourable exceptions, e.g. Robin Hanson a few days ago, but how much attention did that draw?).
Added to my Anki. This is a very clear framework for thinking about some problems. Thanks!
I don't see how we disagree here? Maybe it's the use of the word magical? I don't intend to use it in the sense of "not allowed by the laws of physics"; I am happy to replace it with "overweighted probability mass" if you think that's more accurate.
Oh yes, I don't deny that, I think we agree. I simply think it is a good sanity practice to call bullshit on those overhyped plans. If people were more sceptical of those sci-fi scenarios, they would probably also update to lower P(doom) estimates.
THIS. I have been trying to make this point for a while now: there are limits to what intelligence can accomplish. Many of the plans I hear about AGIs taking over the world assume that their power is unlimited and that anything can be computed.