Posts

The Colliding Exponentials of AI 2020-10-14T23:31:20.573Z · score: 16 (10 votes)

Comments

Comment by vermillionstuka on The Colliding Exponentials of AI · 2020-10-16T00:25:28.461Z · score: 1 (1 votes) · LW · GW

Thank you for the excellent and extensive write-up :)

I hadn't encountered your perspective before; I'll definitely go through all your links to educate myself, and put less weight on algorithmic progress being a driving force, then.

Cheers

Comment by vermillionstuka on The Colliding Exponentials of AI · 2020-10-15T12:03:12.737Z · score: 1 (1 votes) · LW · GW

You can achieve infinitely (literally) faster training time than AlexNet if you just take the weights of AlexNet.

You can also achieve much faster performance if you rely on weight transfer and/or hyperparameter optimization based on looking at the behavior of an already trained AlexNet. Or, mind you, some other image-classification model based on that.

Once a given task is "solved" it becomes trivial to compute models that can train on said task exponentially faster, since you're already working down from a solution.

Could you clarify: do you mean the primary cause of the efficiency increase wasn't algorithmic or architectural developments, but researchers just fine-tuning weight-transferred models?
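For concreteness, here is roughly what I understand that to mean, as a minimal hypothetical sketch (assuming a PyTorch/torchvision setup and a made-up 10-class task; this is not the actual code behind any of the efficiency benchmarks):

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from AlexNet weights that someone else already trained ("weight transfer"),
    # rather than training the architecture from random initialization.
    model = models.alexnet(pretrained=True)

    # Replace only the final classifier layer for a hypothetical 10-class task.
    model.classifier[6] = nn.Linear(4096, 10)

    # Fine-tune: optimize only the new layer, so "training" converges far faster
    # than the original AlexNet run, because most of the work is already done.
    optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch (stand-in for real data).
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 10, (8,))
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()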

 

However, if you want to look for exponential improvement you can always find it, and if you want to look for logarithmic improvement you always will.

Are you saying that the evidence for exponential algorithmic efficiency gains, not just in image processing, is entirely cherry-picked?

 

In regards to training text models "x times faster", go into the "how do we actually benchmark text models" section of the academica/internet flamewar library.

I googled that and there were no results, and I couldn’t find an "academica/internet flamewar library" either.

 

Look, I don't know enough about ML yet to respond intelligently to your points; could someone more knowledgeable than me weigh in here, please?

Comment by vermillionstuka on The Colliding Exponentials of AI · 2020-10-15T11:13:26.817Z · score: 1 (1 votes) · LW · GW

Wow, GPT-3 shaved at least 10 years off the median prediction by the looks of it. I didn't realise Metaculus had prediction history; thanks for letting me know.

Comment by vermillionstuka on The Colliding Exponentials of AI · 2020-10-15T06:09:42.227Z · score: 3 (2 votes) · LW · GW

My algorithmic estimates essentially only quantify your "first category" type of improvements; I wouldn't know where to begin making estimates for qualitative "second category" AGI algorithmic progress.

My comparisons to human-level NLP (which I don't think would necessarily yield AGI) assumed scaling would hold for current (or near-future) techniques, so do you think that current techniques won't scale, and/or that the actual 100-1000x figure I gave was too high?

I'm not sure what the ratio is but my guess is it's 50/50 or so. I'd love to see someone tackle this question and come up with a number.

Yeah that would be great if someone did that.

Comment by vermillionstuka on Open & Welcome Thread – October 2020 · 2020-10-12T04:00:45.345Z · score: 1 (1 votes) · LW · GW

Thanks Ben

Comment by vermillionstuka on Open & Welcome Thread – October 2020 · 2020-10-12T03:35:32.212Z · score: 1 (1 votes) · LW · GW

I just had a question about post formatting: how do I turn a link into text like this example? Thanks.

Comment by vermillionstuka on Open & Welcome Thread – October 2020 · 2020-10-11T12:20:31.362Z · score: 1 (1 votes) · LW · GW

Thank you :)

Comment by vermillionstuka on Open & Welcome Thread – October 2020 · 2020-10-11T04:01:33.006Z · score: 3 (2 votes) · LW · GW

Thanks Ben, I'm glad to hear that :)

Comment by vermillionstuka on Open & Welcome Thread – October 2020 · 2020-10-11T02:30:08.580Z · score: 16 (8 votes) · LW · GW

Hello LessWrong!

I'm a bit late to the party: I first signed up in February, but I haven't really been active until recently, so I wanted to make an introductory post.

My name's Nathan, and I'm a 23-year-old software engineering graduate from Australia. The first time I properly came into contact with the community was last year, when a stray 4chan comment recommended Eliezer's short story Three Worlds Collide. That story changed my life; I had never read anything that impressed me that much before. It was the first time I had encountered a person who seemed to be more experienced/skilled/intelligent than me in the things that I cared about. I had always considered truth-seeking a central pillar of my identity, but I had never encountered or even heard of anyone else who cared about it as much as I did. So, as this author seemed to be of the same frame of mind, I decided to just read everything he had ever written to see if I could pick up a few things. Oh boy, who knows how many millions of words that turned out to be.

But I did read it all, and just about everything else the rationalist community and friends had written as well, which unearthed a coincidence: I had technically come into contact with the community before! In 2015 I had read Scott Alexander's story “…And I Show You How Deep The Rabbit Hole Goes” and had quite liked it, but had been confused by what Slate Star Codex was, and had never gone back. Lesson learned: if you like something, investigate it thoroughly.

Entering the rationalist community felt like coming back to a home I never knew I had. I wasn't alone!

I have decided to try to make a difference and, if I can, devote my yet-to-exist career to helping figure out what's happening with AGI and the control problem.

I have made a few comments so far, not always to a positive response, so I hope I can do better. If you see something of mine and think I could have done better, please point it out to me! I'll correct it and rewrite my work in response. I would like to contribute much more to the community in the future. I have even been working on a post for a while, polishing and rewriting, just trying to do it justice and make something you want to read, something that will meaningfully elucidate. I hope to post it soon, and that we can all become less wrong!

 

Cheers,

Nathan

Comment by vermillionstuka on Brainstorming positive visions of AI · 2020-10-08T01:04:44.482Z · score: 8 (6 votes) · LW · GW

I think if we get AGI/ASI right the outcome could be fantastic, not just from the changes made to the world but from the changes made to us as conscious minds, and that an AGI/ASI figuring out mind design (and how to implement it) will be the most significant thing that happens from our perspective.

I think that the possible space of mind designs is a vast ocean, and the Homo sapiens mind design/state is a single drop within those possibilities. It is very unlikely that our current minds are what you would choose for yourself given knowledge of all the options. Given that happiness/pleasure (or at least that kind of thing) seems to be a terminal value for humans, our quality of experience could be improved a great deal.

One obvious thought is that if you increase the size of a brain, or otherwise alter its design, you could increase its potential magnitude of pleasure. We think of animals like insects, fish, mammals, etc. as lying on a curve of increasing consciousness, generally with humans at the top; if that is the case, humanity need not be an upper limit on the 'amount' of consciousness you can possess. And of course, within the mind-design ocean, more desirable states than pleasure may exist for all I know.

I think that imagining radical changes to our own conscious experience is unintuitive, and its importance as a goal is underappreciated, but I cannot imagine anything else an AGI/ASI could do that would be more important or rewarding for us.

Comment by vermillionstuka on Forecasting Thread: Existential Risk · 2020-09-23T23:23:20.213Z · score: 1 (1 votes) · LW · GW

I'm not including advanced biotech in my conventional threat category; I really should have elaborated more on what I meant: conventional risks are events that already have a background chance of happening (as of 2020 or so), and they do not include future technologies.

I make the distinction because I think we don't have enough time left before ASI to develop such advanced tech ourselves, so an ASI would be overseeing their development and deployment, which I think reduces their threat massively (even if the tech were used by a rogue AI, I would say the ER came from the AI, not the tech). And that time limit applies not just to tech development but also to runaway optimisation processes and societal forces (i.e. suboptimal value lock-in), as a friendly ASI should have enough power to bring them to heel.

 

My list of threats wasn't all-inclusive; I paid lip service to some advanced tech and some of the more unusual scenarios, but generally I just thought that past ASI nearly nothing would pose a real threat, so I didn't focus on it. I am going to read through the database of existential threats though; does it include what you were referring to? (“important risks that have been foreseen and imagined which you're not accounting for”).

Thanks for the feedback :)

Comment by vermillionstuka on Forecasting Thread: Existential Risk · 2020-09-23T09:52:40.707Z · score: 3 (5 votes) · LW · GW

Elicit prediction: https://elicit.ought.org/builder/0n64Yv2BE

 

Epistemic Status: High degree of uncertainty, owing to the difficulty of AI timeline prediction and unknowns such as unforeseen technologies and the power of highly developed AI.

My Existential Risk (ER) probability mass is almost entirely formed from the risk of unfriendly Artificial Super Intelligence (ASI), and so is heavily influenced by my predicted AI timelines. (I think AGI is most likely to occur around 2030 ±5 years, and will be followed within 0-4 years by ASI, with a singularity soon after that; see my AI timelines post: https://www.lesswrong.com/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines?commentId=zaWhEdteBG63nkQ3Z ).

I do not think any conventional threat such as nuclear war, a super-pandemic or climate change is likely to be an ER, and supervolcanoes or asteroid impacts are very unlikely to be. I think this century is unique and will constitute 99% of the bulk of ER, with the last <1% coming from more unusual threats such as the simulation being turned off, false vacuum collapse, or hostile alien ASI, but also from unforeseen or unimagined threats.

I think the most likely decade for the creation of ASI will be the 2030s, with an 8% ER chance (from not being able to solve the control problem, or not coordinating to implement a solution even if it is found).

Considering AI timeline uncertainty, as well as how long an ASI would take to acquire the techniques or technologies necessary to wipe out or lock out humanity, I think an 11% ER chance for the 2040s. Part of the reason this is higher than the 2030s estimate is to accommodate the possibility of a delayed treacherous turn.

Once past the 2050s I think we will be out of most of the danger (only 6% for the rest of the century), and potential remaining ERs such as runaway nanotech or biotech will not be a very large risk, as ASI would be in firm control of civilisation by then. Even then, some danger remains for the rest of the century from unforeseen black-ball technologies; however, interstellar civilisational spread (ASI probes at a high percentage of the speed of light) by early next century should have reduced nearly all threats to less than ERs.

So overall I think the 21st century will pose a 25.6% chance of ER. See the Elicit post for the individual decade breakdowns.

Note: I made this prediction before looking at the Effective Altruism Database of Existential Risk Estimates.

Comment by vermillionstuka on Forecasting Thread: AI Timelines · 2020-08-24T05:34:18.268Z · score: 10 (5 votes) · LW · GW

Prediction: https://elicit.ought.org/builder/ZfFUcNGkL

I (a non-expert) offer the following heavily speculative scenario for an AGI based on Transformer architectures:

The scaling hypothesis is likely correct (and accounts for the majority of the probability density for this estimate), and maybe only two major architectural breakthroughs are needed before AGI. The first is a functioning memory system capable of handling short- and long-term memories, with lifelong learning and without the problems of fine-tuning.

The second architectural breakthrough needed would be allowing the system to function in an 'always on' kind of fashion. Current transformers get an input, spit out an output, and are done, whereas a human can receive an input, output a response, and then keep running, seeing the result of their own output. I think 'always on' functionality will allow for multi-step reasoning, and functional 'pruning' as opposed to 'babble'. As an example of what I mean, think of a human carefully writing a paragraph and iterating and fixing/rewriting past work as they go, rather than the output just being their stream of consciousness. Additionally, it could allow a system not to have to store all information within its own mind, but rather to use tools to store information externally. Getting an output that has been vetted for release, rather than a thought stream, seems very important for high quality.
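To illustrate the kind of loop I have in mind, here is a toy, purely hypothetical sketch in Python; the ToyModel class and its generate/critique methods are illustrative stand-ins, not any real system or API:

    from dataclasses import dataclass

    @dataclass
    class Critique:
        good_enough: bool
        feedback: str

    class ToyModel:
        def generate(self, prompt, draft="", feedback=""):
            # Stand-in for a real language model call.
            return (draft + " [revised]") if draft else "first draft for: " + prompt

        def critique(self, prompt, draft):
            # Stand-in for the model re-reading and judging its own output.
            return Critique(good_enough=draft.count("[revised]") >= 2, feedback="tighten it up")

    def write_paragraph(model, prompt, max_rounds=5):
        draft = model.generate(prompt)                  # first stream-of-consciousness pass
        for _ in range(max_rounds):
            review = model.critique(prompt, draft)      # model sees the result of its own output
            if review.good_enough:
                break                                   # vetted for release, not a raw thought stream
            draft = model.generate(prompt, draft=draft, feedback=review.feedback)
        return draft

    print(write_paragraph(ToyModel(), "why 'always on' matters"))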

Additionally, I think functionality such as agent behavior and self-awareness only requires embedding an agent in a training environment simulating a virtual world and its interactions (see https://www.lesswrong.com/posts/p7x32SEt43ZMC9r7r/embedded-agents ). I think this may be the most difficult part to implement, and there are uncertainties. For example, does all training need to take place within this environment, or is an additional training run, after the system has been trained like current ones, enough?

I think such a system utilizing all the above may be able to introspectively analyse its own knowledge/model gaps and actively research to correct them. I think that could cause a discontinuous jump in capabilities.

I think that none of those capabilities/breakthroughs seems out of reach this decade, and that scaling will continue to quadrillions of parameters by the end of the decade (in addition to continued efficiency improvements).

I hope an effective control mechanism can be found by then. (Assuming any of this is correct; 5 months ago I would have laughed at this.)

Comment by vermillionstuka on Is Molecular Nanotechnology "Scientific"? · 2020-07-07T08:45:04.876Z · score: 3 (2 votes) · LW · GW

Do you still agree with Stuart_2012 on this?

Comment by vermillionstuka on Human instincts, symbol grounding, and the blank-slate neocortex · 2020-02-25T11:57:41.543Z · score: 1 (1 votes) · LW · GW

Thank you very much for this; I had heard of CCA theory but didn't know enough to evaluate it myself. I think this opens new possible paths to AGI that I had not thoroughly considered before.