Comments
Hi Clement, I do not have much to add to the previous critiques. I also think that what needs to be simulated is just a consistent-enough simulation, so the concept of CI doesn't seem to rule it out.
You may be interested in a related approach to ruling out the sim argument based on computational requirements: simple simulations should be more likely than complex ones, yet we are pretty complex. See "The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization" (https://philarchive.org/rec/PIETSA-6)
Cheers!
Yes voter, if you can read this: why? It would be great to get an explanation (anon).
Damn, we did not even last 24 hours...
Thanks for the alternative poll. One would think that with rules 2 and 5 out of the way it should be harder to say Yes.
How confident are you that someone is going to press it? If it's pressed, what's the frequency of presses? What can we learn from it? Do any of rules 2-5 play a crucial role in the decision to press it?
(we are still alive so far!)
This is a pretty counter-intuitive point indeed, but up to a certain threshold this seems to me the approach that minimises risk, by avoiding large capability jumps and improving the "immune system" of society.
Thanks for the insightful comment. Ultimately the different attitude comes down to the perceived existential risk posed by the technology and the risks of acting to accelerate AI vs. not acting.
And yes I was expecting not to find much agreement here, but that's what makes it interesting :)
A somewhat similar statistical argument can be made that the abundance of optional complexity (things could have been similar but simpler) is evidence against the simulation hypothesis.
See https://philpapers.org/rec/PIETSA-6 (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization)
This is based on the general principle that computational resources are finite for any arbitrary civilisation (assuming infinities are not physical) and are therefore minimised when possible by the simulators. In particular one can use the Simplicity Assumption: if we randomly select the simulation of a civilization in the space of all possible simulations of that civilization that have ever been run, the likelihood of picking a given simulation is inversely correlated with the computational complexity of the simulation.
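To make this concrete, a minimal formalisation (my notation, not from the paper, reading "inversely correlated" as inverse proportionality): with C(s) the computational cost of running simulation s and S the set of all simulations of that civilization ever run,

```latex
P(s) \;=\; \frac{1/C(s)}{\sum_{s' \in S} 1/C(s')}
```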
It is hard to argue that a similar general principle can be found for something being "mundane", since the definition of mundane seems dependent on the simulators' point of view. Can you perhaps modify this reasoning to make it more general?
Let’s start with one of those insights that are as obvious as they are easy to forget: if you want to master something, you should study the highest achievements of your field.
Even if we assume this, it does not follow that we should try to recreate the subjective conditions that led to (perceived) "success". The environment is always changing (tech, knowledge base, tools), so many learnings will not apply. Moreover, biographies tend to create a narrative after the fact, emphasizing the message the writer wants to convey.
I prefer the strategy of mastering the basics from previous works and then figuring out yourself how to innovate and improve the state of the art.
True :)
(apart for your reply!)
Using the Universal Distribution in the context of the simulation argument makes a lot of sense if we think that base reality has no intelligent simulators, as it fits our expectation that a randomly generated simulator is very likely to be concise. But for human-generated (or any agent-simulator-generated) simulations, a more natural prior is how easy the simulation is to run (Simplicity Assumption), since agent-simulators face concrete tradeoffs in using computational resources, while they have no pressing tradeoffs on the length of the program.
See here for more info on the latter assumption.
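To make the contrast explicit (my notation, not taken from either source): the Universal Distribution weighs a simulation by the length of the shortest program that produces it, while the Simplicity Assumption weighs it by the cost of actually running it:

```latex
% Universal Distribution: K(s) = Kolmogorov complexity (shortest program length)
P_{\mathrm{UD}}(s) \;\propto\; 2^{-K(s)}
% Simplicity Assumption: C(s) = computational cost of running s to completion
P_{\mathrm{SA}}(s) \;\propto\; \frac{1}{C(s)}
```

A long program that happens to be cheap to run is favoured by the second prior but not by the first, and vice versa.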
This is also known as Simplicity Assumption: "If we randomly select the simulation of a civilization in the space of all possible simulations of that civilization that have ever been run, the likelihood of picking a given simulation is inversely correlated to the computational complexity of the simulation."
In a nutshell, the amount of computation needed to perform simulations matters (if resources are somewhat finite in base reality, which is fair to imagine), and over the long term simple simulations will dominate the space of sims.
See here for more info.
Regarding (D), it has been elaborated more in this paper (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization).
I would suggest removing "I dont think you are calibrated properly about the ideas that are most commonly shared in the LW community." and presenting your argument, without speaking for the whole community.
Very interesting division, thanks for your comment.
Paraphrasing what you said: in the informational domain we are very close to post-scarcity already (minimal effort to distribute high-level education and news globally), while in the material and human-attention domains we likely still need advancements in robotics and AI to scale.
You mean the edit functionality of Gitlab?
Thanks for the gitbook tip, I will look into it.
Yes, the code is open source: https://gitlab.com/postscarcity/map
Interesting paradox.
As others commented, I see multiple flaws:
- We believe we seem to know that there is a reality that exists, but I doubt we can conceive reality itself; we have only a vague understanding of it. Moreover, we have no experience of "not existing", so it's hard to argue that we have a strong grasp on deeply understanding that there is a reality that exists.
- The biggest issue is here, imho (this is a very common misunderstanding): math is just a tool which we use to describe our universe; it is not our universe (unless you take some approach like the mathematical universe). The fact that it works well is selection bias: we use the math that works well to describe our universe and we discard the rest (see e.g. the negative solutions to the equations of motion in Newtonian mechanics). Math by itself is infinite; we just use a small subset to describe our universe. Also, we take inspiration from our universe to build math.
Not conclusive, but still worth doing in my view, given how easy it is. Create the spreadsheet, make it public and let's see how it goes.
I would add the actual year in which you think it will happen.
Yea, what I meant is that the slides of the Full Stack Deep Learning course materials provide a decent outline of all of the significant architectures worth learning.
I would personally not go to that low a level of abstraction (e.g. implementing NNs in a new language) unless you really feel your understanding is shaky. Try building an actual side project, e.g. an object classifier for cars, and problems will arise naturally.
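If it helps, here is a minimal sketch of that kind of side project, fine-tuning a pretrained backbone on a folder of car images (the dataset path and hyperparameters are placeholders; assumes a recent torch/torchvision install):

```python
# Minimal transfer-learning sketch (illustrative defaults, not a tuned recipe).
# Expects torchvision >= 0.13 for the weights= API and a dataset laid out as
# data/cars/train/<class_name>/*.jpg (placeholder path).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),   # matching the backbone
])
train_set = datasets.ImageFolder("data/cars/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs are enough for problems to surface
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```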
I fear that measuring modifications is like measuring a moving target. I suspect it will be very hard to consider all the modifications, and many AIs may blend into each other under large modifications. Also, it's not clear how hard some modifications will be without actually carrying them out.
Why not fix a target, and measure the inputs needed (e.g. flops, memory, time) to achieve it?
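For concreteness, a toy sketch of that kind of measurement (the system interface, with step() and evaluate(), is hypothetical; swap in FLOP or memory counters if your framework exposes them):

```python
import time

def cost_to_target(system, tasks, target_score, max_steps=100_000):
    """Run `system` until it reaches `target_score` on `tasks` and report
    the inputs consumed. `system` is a hypothetical object exposing
    step() (one unit of training/compute) and evaluate(tasks) -> score."""
    start = time.perf_counter()
    for step in range(1, max_steps + 1):
        system.step()
        if system.evaluate(tasks) >= target_score:
            return {"reached_target": True, "steps": step,
                    "seconds": time.perf_counter() - start}
    return {"reached_target": False, "steps": max_steps,
            "seconds": time.perf_counter() - start}
```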
I'm working on this topic too, I will PM you.
Also feel free to reach out if the topic is of interest.
Other useful references:
- F. Chollet, On the Measure of Intelligence, 2019. https://arxiv.org/abs/1911.01547
- S. Legg and M. Hutter, A collection of definitions of intelligence, Frontiers in Artificial Intelligence and Applications, 157 (2007).
- S. Legg and M. Hutter, Universal intelligence: A definition of machine intelligence, Minds and Machines, 17 (2007), pp. 391-444. https://arxiv.org/pdf/0712.3329.pdf
- P. Wang, On Defining Artificial Intelligence, Journal of Artificial General Intelligence, 10 (2019), pp. 1-37.
- J. Hernández-Orallo, The Measure of All Minds: Evaluating Natural and Artificial Intelligence, Cambridge University Press, 2017.
This is the most likely scenario, with AGI getting heavily regulated, similarly to nuclear. It doesn't get much publicity because it's "boring".
Nice link, thanks for sharing.
The 1 million prize problem should be "clearly define the AI alignment problem". I'm not even joking; actually understanding the problem, and establishing that there is a problem in the first place, may give us hints towards the solution.
In research there are a lot of publications, but few stand the test of time. I would suggest looking at the architectures which brought significant changes and ideas; those are still very relevant as they:
- often form the building blocks of current solutions
- help you build intuition on how architectures can be improved
- are often assumed knowledge in the field
- are often still useful, especially when resources are low
You should not need to look at more than 1-2 architectures per year in each field (computer vision, NLP, RL). Only then would I focus on SOTA.
You may want to check https://fullstackdeeplearning.com/spring2021/ ; it should have enough historical material to have you covered and expand from there, while also moving quickly to modern topics.
Thanks for the link, I will check it out.
ARC is a nice attempt. I also participated in the original challenge on Kaggle. The issue is that the test can be gamed (as everyone on Kaggle did) by brute-forcing over solution strategies.
An open-ended or interactive version of ARC may solve this issue.
I'm working along these lines to create an easy-to-understand numeric evaluation scale for AGIs. The dream would be something like: "Gato is AGI level 3.5, while the average human is 8.7." I believe the scale should factor in that no single static test can be a reliable test of intelligence (any test can be gamed and overfitted).
A good reference on the subject is "The Measure of All Minds" by Hernández-Orallo.
Happy to share a draft, send me a DM if interested.
When you say "switching" it reminds me of the "big switch" approach of https://en.wikipedia.org/wiki/General_Problem_Solver.
Regarding how they do it, I believe the relevant passage to be:
Because distinct tasks within a domain can share identical embodiments, observation formats and action specifications, the model sometimes needs further context to disambiguate tasks. Rather than providing e.g. one-hot task identifiers, we instead take inspiration from (Brown et al., 2020; Sanh et al., 2022; Wei et al., 2021) and use prompt conditioning.
I guess it should be possible to locate the activation paths for different tasks, as the tasks are pretty well separated. Something along the lines of https://github.com/jalammar/ecco
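As a rough sketch of what locating those activation paths could look like (generic PyTorch forward hooks, not ecco's actual API): record a per-module activation summary for each task prompt and compare the summaries across tasks.

```python
import torch

def record_activations(model, inputs):
    """Run `model` on `inputs` and return the mean absolute activation of each
    leaf module, so runs on different task prompts can be compared module by
    module to see which parts of the network 'light up' per task."""
    activations, hooks = {}, []
    for name, module in model.named_modules():
        if len(list(module.children())) == 0:  # leaf modules only
            def hook(mod, inp, out, name=name):
                if isinstance(out, torch.Tensor):
                    activations[name] = out.detach().abs().mean().item()
            hooks.append(module.register_forward_hook(hook))
    with torch.no_grad():
        model(inputs)
    for h in hooks:
        h.remove()
    return activations
```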
Fair analysis, I agree with the conclusions. The main contribution seems to be a proof that transformers can handle many tasks at the same time.
Not sure if you sorted the tests in order of relevance, but I also consider the "held-out" test to be the most revealing. Besides finetuning, it would be interesting to test the zero-shot capabilities.
A single network is solving 600 different tasks spanning different areas. 100+ of the tasks are solved at 100% human performance. Let that sink in.
While not a breakthrough in arbitrarily scalable generality, the fact that so many tasks can be fitted into one architecture is surprising and novel. For many real-life applications, being good at 100-1000 tasks makes an AI general enough to be deployed as an error-tolerant robot, say in a warehouse.
The main point imho is that this architecture may be enough to be scaled (10-1000x parameters) in a few years to a useful proto-AGI product.
Pretty disappointing and unexpected to hear this in 2022, after all the learnings from the pandemic.
What's stopping the companies from hiring a new researcher? People are queueing for tech jobs.
If they leave, then only those who do not care remain...
If by "sort of general, flexible learning ability that would let them tackle entirely new domains" we include adding new tokenised vectors in the training set, then this fit the definition. Of course this is "cheating" since the system is not learning purely by itself, but for the purpose of building a product or getting the tasks done this does not really matter.
And it's not unconcievable to imagine self-supervised tokens generation to get more skills and perhaps a K-means algorithm to make sure that the new embeddings do not interfere with previous knowledge. It's a dumb way of getting smarter, but apparently it works thanks to scale effects!
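To illustrate the K-means idea (purely a hypothetical sketch with scikit-learn, not anything from the Gato paper): cluster the existing task embeddings and flag new embeddings that land too close to an existing cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

def interference_check(existing_embeddings, new_embeddings, n_clusters=10):
    """Crude proxy for interference: flag new embeddings that land closer to an
    existing task cluster than most existing embeddings do to their own."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(existing_embeddings)
    existing_dist = km.transform(existing_embeddings).min(axis=1)
    threshold = np.percentile(existing_dist, 5)  # arbitrary 5th-percentile cut-off
    new_dist = km.transform(new_embeddings).min(axis=1)
    return new_dist < threshold  # True = suspiciously close, may interfere
```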
I would agree with "proto-AGI". I might soon write a blog on this, but ideally we could define a continuous value to track how close we are to AGI, which is increasing if:
-the tasks to solve are very different from each other
-the tasks are complex
-the tasks are solved well
-little experience (or info) is fed to the system
-the experience is not directly related to the task
-the experience is very raw
-the computation is done in few steps
Then keep adding new tasks and changing the environment.
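A toy sketch of how these criteria could be combined into a single number (the per-task fields and the aggregation rule are entirely made up for illustration):

```python
def agi_score(task_results):
    """Toy aggregation of the criteria above. `task_results` is a list of
    dicts with hypothetical fields, each normalised to [0, 1]:
      diversity     - how different the task is from the others
      complexity    - how hard the task is
      performance   - how well the task was solved
      experience    - how much task-specific experience was fed in
      relatedness   - how directly that experience relates to the task
      preprocessing - how pre-digested (non-raw) the experience was
      steps         - how much computation was spent
    The score grows with the first three and shrinks with the rest."""
    score = 0.0
    for t in task_results:
        reward = t["diversity"] * t["complexity"] * t["performance"]
        cost = 1.0 + t["experience"] + t["relatedness"] + t["preprocessing"] + t["steps"]
        score += reward / cost
    return score
```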
I have always been cautious, but I would say yes this time.
With the caveat that it learns new tasks only from supervised data, and does not reuse previous experience.
The fact that adding new tasks doesn't diminish performance on previous tasks is highly non-trivial!
It may be that there is a lot of room in the embedding space to store them. The wild thing is that nothing (apart from a few hardware iterations) stops us from increasing the embedding space if really needed.
Possibly the first truly AGI paper.
Even though it is just exploiting the fact that all the narrow problems can be solved as sequence problems via tokenisation, it's remarkable that the tasks do not interfere destructively with each other. My gut feeling is that this is due to the very high dimensional space of the embedding vectors.
It leaves ample room for growth.
My main point is that there is not enough evidence for a strong claim like doom-soon. In the absence of hard data, anybody is free to cook up arguments for or against doom-soon.
You may not like my suggestion, but I would strongly advise getting deeper into the field and understanding it better yourself, before taking important decisions.
In terms of paradigms, you may have a look at why building AI software is hard (easy to get to 80% accuracy, hellish to get to 99%), AI winters and hype cycles (the disconnect between claims/expectations and reality), and how stability has been achieved in the development of dangerous technologies (nuclear, biotech).
Don't look at opinions, look for data and facts. Speculations, opinions or beliefs cannot be the basis on which you take decisions or update your knowledge. It's better to know a few things, but with high confidence.
Ask yourself, which hard data points are there in favour of doom-soon?
Geniuses or talented researchers are not as impactful as the right policy. Contribute to creating the right conditions (work environment, education, cross-contamination, funding, etc.) to make good research flourish. At the same time, if the fundamentals are not covered (healthcare, housing, etc.), people are not able to focus on much more than survival. So pretty much anything that makes the whole system work better helps.
As an example, there are plenty of smart individuals in poor countries who are not able to express their potential.
Thanks. Yes, pretty much in line with the authors. Btw, I would be super happy to be wrong and see advancement in those areas, especially robotics.
Thanks for the offer, but I'm not interested in betting money.
A close call, but I would still lean towards no. Engineering the prompt is where humans leverage all their common sense and vast (w.r.t. the AI) knowledge.
The bottom line is: nobody has a strong argument in support of the inevitability of the doom scenario (if you have one, just reply to this with a clear and self-contained argument).
From what I'm reading in the comments and in other papers/articles, it's a mixture of beliefs, extrapolations from known facts, reliance on what "experts" said, and cherry-picking. Add the fact that bad/pessimistic news travels and spreads faster than boring good news.
A sober analysis establishes that super-AGI can be dangerous (indeed there are no theorems forbidding this either); what's unproven is that it is HIGHLY LIKELY to be a net minus for humanity. Even admitting that alignment is not possible, it's not clear why humanity's and a super-AGI's goals should be in conflict, and not just different. Even admitting that they are highly likely to be in conflict, it is not clear why strategies to counter this cannot be effective (e.g. partnering up with a "good" super-AGI).
Another factor often forgotten is that what we mean by "humanity" today may not have the same meaning once we have technologies like AGI, mind upload or intelligence enhancement. We may literally become those AIs.
The downvotes are excessive; the post is provocative, but interesting.
I think you will not even need to "push the fat man". The development of an AGI will be slow and gradual (as with any other major technology) and there will be incidents along the way (e.g. an AGI chatbot harassing someone). Those incidents will periodically mandate new regulations, so that measures to tackle real AGI-related dangers will be enacted, similarly to what happens in the nuclear energy sector. They will not be perfect, but there will be regulations.
The tricky part is that not all nations will set similar safety levels; in fact some may encourage the development of unsafe, but high-reward, AGI. So overall it looks like "pushing the fat man" will not even work that well.
Matthew, Tamay: Refreshing post, with actual hard data and benchmarks. Thanks for that.
My predictions:
- A model/ensemble of models achieves >80% on all tasks in the MMLU benchmark
No in 2026, no in 2030. Mainly because we don't have much structured data or incentive to solve some of the categories. Clearing those categories would require a powerful unsupervised AI, or more time.
- A credible estimate reveals that an AI lab deployed EITHER >10^30 FLOPs OR hardware that would cost $1bn if purchased through competitive cloud computing vendors at the time on a training run to develop a single ML model (excluding autonomous driving efforts)
This may actually happen (the $1bn one, not the 10^30 FLOPs one), also due to inflation and USD created out of thin air and injected into the market. I would go for no in 2026 and yes in 2030.
- A model/ensemble of models will achieve >90% on the MATH dataset using a no-calculator rule
No in 2026, no in 2030. Significant algorithmic improvements needed. It may be done if prompt engineering is allowed.
- A model/ensemble of models achieves >80% top-1 strict accuracy on competition-level problems on the APPS benchmark
No in 2026, no in 2030. Similar to the above, but there will be more progress, as a lot of data is available.
- A gold medal for the IMO Grand Challenge (conditional on it being clear that the questions were not in the training set)
No in 2026, no in 2030.
- A robot that can, from beginning to end, reliably wash dishes, take them out of an ordinary dishwasher and stack them into a cabinet, without breaking any dishes, and at a comparable speed to humans (<120% the average time)
I work with smart robots; this cannot happen that fast, also due to hardware limitations. The speed requirement is particularly harsh. Without the speed limit and with the system known in advance I would say yes in 2030. As the bet stands, I go for no in 2026, no in 2030.
- Tesla’s full-self-driving capability makes fewer than one major mistake per 100,000 miles
Not sure about this one, but I lean towards no in 2026, no in 2030.
This is a possible AGI scenario, but it's not clear why it should be particularly likely. For instance the AGI may reason that going aggressive will also be the fastest route to being terminated. Or the AGI may consider that keeping humans alive is good, since they were responsible for the AGI's creation in the first place.
What you describe is the paper-clip-maximiser scenario, which is arguably the most extreme end of the spectrum of super-AGI behaviours.
This would not be a conclusive test, but definitely a cool one, and it may spark a lot of research. Perhaps we could get started with something NLP-based, opening up more and more knowledge access to the AI in the form of training data. Probably still not feasible as of 2022 in terms of raw compute required.
It would be good if you could summarise your strongest argument in favour of your conclusion "no alignment = bad for humanity".
Things are rarely black or white; I don't see a partially aligned AI as necessarily a bad thing.
As an example, consider the partial alignment between a child and their parent. A parent does not simply fulfil every desire of the child, but only a subset.