comment by avturchin ·
2018-12-26T11:07:35.451Z · LW(p) · GW(p)
The one interesting quote from the article is:
"We can then attempt to construct an argument where: (a) we estimate this minimum quantity of computing power (using evidence unrelated to the present rate of return on computing power), (b) predict that the quantity will become available before growth trends hit their wall, and (c) argue that having it available would be nearly sufficient to rapidly train systems that can do a large portion of the things humans can do. In this case, the OpenAI data would be evidence that we should expect the computational threshold to be reached slightly earlier than we would otherwise have expected to reach it. For example, it might take only five years to reach the threshold rather than ten. However, my view is that it’s very difficult to construct an argument where parts (a)-(c) are all sufficiently compelling. In any case, it doesn’t seem like the OpenAI data alone should substantially increase the probability anyone assigns to “near-term AGI” (rather than just shifting forward their conditional probability estimates of how “near-term” “near-term AGI” would be)."
It would be interesting to have Fermi estimates.
For example, if we assume that human-brain-level simulation requires 1 exaflops (the median estimate according to AI Impacts) and that the work is done on DGX-2 systems (each delivering 2 petaflops in tensor operations and costing 400K USD), then one needs 500 such systems, which will cost 200 million USD. The price of the datacenter to host all this, plus electricity, networking, etc., will at least double the cost.
So, the initial investment needed to study human-brain-level AI is around 400 million USD, which seems acceptable for many large IT companies, but not for startups.
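The hardware arithmetic above can be sketched as a few lines of Python (the input numbers are the comment's own assumptions, not measured data):

```python
# Fermi estimate: hardware cost for a human-brain-scale simulation.
# Assumed inputs (from the comment above):
brain_flops = 1e18        # 1 exaflops, AI Impacts median estimate
dgx2_flops = 2e15         # DGX-2: 2 petaflops in tensor operations
dgx2_price = 400_000      # USD per DGX-2 system

systems_needed = brain_flops / dgx2_flops       # 500 systems
hardware_cost = systems_needed * dgx2_price     # 200 million USD
# Datacenter, electricity, and networking roughly double the hardware cost:
total_cost = 2 * hardware_cost                  # ~400 million USD

print(f"{systems_needed:.0f} systems, "
      f"${hardware_cost/1e6:.0f}M hardware, "
      f"~${total_cost/1e6:.0f}M total")
# 500 systems, $200M hardware, ~$400M total
```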
However, even having human-level hardware is not enough. One needs to train it, and to account for trial and error. We could assume, by analogy with human brain development in childhood, that training one model of a human mind requires at least 1 year of training time (if the computer runs at the same speed as the human mind). Also, at least 1000 trials will be needed to get something workable. (I am sure there will be large demand for such work, as there will be attempts to create "home robots" that act in the real world and speak human language, which is close to human-level capability.)
So, to successfully train human-level models, at least 1000 years of human-level hardware time is needed. If a company wants all of it at once, to finish in 1 year, this is unrealistic, as it would cost 400 billion USD upfront (1000 parallel rigs at 400 million USD each).
However, assuming that the price of flops falls by an order of magnitude over the next decade, this investment will be only 40 billion USD 10 years from now, and it could be spread over many years. In other words, to have human-level AI in 10 years, a company needs to spend at least 4 billion USD a year on such research, which is still a lot, but more acceptable for the largest IT companies.
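The training-cost estimate works out as follows (again, every number is an assumption carried over from the comment, not data):

```python
# Fermi estimate: cost of 1000 hardware-years of human-level training.
# Assumed inputs (from the comment above):
per_rig_cost = 400e6      # one human-brain-level rig, today's prices (USD)
trials = 1000             # assumed number of training runs needed
years_per_trial = 1.0     # real-time "childhood" training analogy

# All at once, at today's prices (1000 rigs running in parallel for a year):
upfront_cost = trials * years_per_trial * per_rig_cost   # 400 billion USD

# Spread over a decade, with flops prices falling by an order of magnitude:
price_decline = 10
decade_cost = upfront_cost / price_decline               # 40 billion USD
annual_budget = decade_cost / 10                         # ~4 billion USD/year

print(f"upfront: ${upfront_cost/1e9:.0f}B, "
      f"over a decade: ${decade_cost/1e9:.0f}B, "
      f"per year: ${annual_budget/1e9:.0f}B")
# upfront: $400B, over a decade: $40B, per year: $4B
```

This treats the order-of-magnitude price decline as a single step applied to the total, matching the rough spirit of the original estimate; a year-by-year decline curve would give a somewhat different number.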
In the end, we would get a robot acting in the environment and speaking human language, but this is not a superintelligence, and it is a separate question whether such a form of embodied cognition is a useful and proximate step toward superintelligence.
If the OpenAI-style exponential growth of compute continues and results in even more dramatic growth of available computation, such a robot may be trained not in 10 but perhaps in 5 years. The quote above says exactly this, and also assumes that there is no practical difference between a 5-year estimate and a 10-year estimate, as we are not ready for either. Replies from: Rekrul
↑ comment by Rekrul ·
2018-12-26T23:01:06.720Z · LW(p) · GW(p)
"We could assume—by analogy with human brain training in childhood—that to train one model of human mind, at least 1 year of training time is needed (if the computer is running on the same speed as human mind)."
Could you clarify here? I'm no expert, but I'm pretty sure human brains in childhood take a lot longer than a year to learn everything they need to survive and thrive in the real world. And they have a lot more going for them than anything we'll build for the foreseeable future (better learning algorithm, better architecture built by evolution, etc.)
Replies from: avturchin
↑ comment by avturchin ·
2018-12-26T23:44:08.944Z · LW(p) · GW(p)
I think that the experimenters will find ways to compress the training process, maybe by skipping part of the dreamless sleep and the periods of passivity, and they will also use some algorithmic tricks. Replies from: Rekrul
↑ comment by Rekrul ·
2018-12-27T00:06:39.173Z · LW(p) · GW(p)
I agree that researchers can take shortcuts and develop tricks, but I don't see how that shortens it to something as incredibly short as 1 year, especially since we will be starting with parts that are far worse than their equivalents in the human brain.