What should I do? (long term plan about starting an AI lab)
post by not_a_cat · 2024-06-09T00:45:12.369Z · LW · GW · 1 comment
This is a question post.
I was listening to this Dwarkesh podcast with Leopold Aschenbrenner, where they talk about AGI, superintelligence, and how things might unfold. All I want to say about it is that it created a sense of concreteness and urgency when considering my plans for the future.
A bit of context about myself: since I was a teenager, I've always been fascinated by computers and intelligence. I did CS studies, which took away the mystery about computers (to my great satisfaction). But the more I read about intelligence, brains, neuroscience, and machine learning, the clearer it became that we don't know how intelligence works. I took a job as a web/database engineer after getting my CS master's because I had to make a living, but I kept reading on the side. With my interest in intelligence getting stronger and no good answers, I made a plan to quit my job and study on my own while living on my savings, with the hope of landing a research engineer position at DeepMind or Google Brain. One year into this self-learning journey, it was clear that this would be challenging. So I turned to plan B (prompted by a bitcoin run-up to almost $20,000 in 2017): I would put my newly acquired ML skills to use trading those markets, then fund my own lab, and in the process get better at applied ML and prove to myself that I can do ML.
Fast forward 6 years, and the plan has worked (to everyone's surprise, myself included). The project has grown into a 10-person organization. I recently stepped down, after having transferred my skills and knowledge to the team, which is now more competent than I am at running it. Now is the time to activate the next step of the plan.
But things have changed. In 2018, concerns like alignment, controllability, and misuse felt very theoretical and distant. Not anymore. The big change is my belief that the ML/AI field as a whole has a very high chance of achieving AGI, followed by superintelligence. Whether I get involved or not.
This of course adds to all the other concerns regarding starting an AI lab: should I first study on my own to get better? Partner with other labs? Start recruiting now vs. later?
With AI safety being more important now, I'm telling myself that the best way to approach it is to be able to train good models, so I should work on AI capabilities regardless. Researching AI safety in a vacuum is much harder if you don't have AI capability expertise. But I wonder whether I'm being fully honest with myself when I think that.
Back to the original question: given this nice situation where I have lots of funding, some confidence that I can at least do applied ML, and my strong curiosity about intelligence still intact, what should I do?
I see two parts to this question:
- First, should I re-think the plan and focus on AI safety, or other things that I'm better positioned to do?
- Second, if I stick to the plan, how best to approach starting an AI lab? (I didn't talk about my research interests, but very briefly: probabilistic programming, neurosymbolic programming, world models, self-play, causality.)
I'm happy to react to comments and provide more info/context if needed.
Comments sorted by top scores.
comment by Jono (lw-user0246) · 2024-06-11T13:38:25.742Z · LW(p) · GW(p)
I don't know if you have already, but this might be the time to take a long and hard look at the problem and consider whether deep learning is the key to solving it.
What is the problem?
- reckless unilateralism? -> go work in policy or chip manufacturing
- inability to specify human values? -> that problem doesn't look like a DL problem to me at all
- powerful hackers stealing all the proto-AGIs in the next 4 years? -> go cybersec
- deception? -> (why focus there? why make an AI that might deceive you in the first place?) but that's pretty ML, though I'm not sure interp is the way to go there
- corrigibility? -> might be ML, though I'm not sure all theoretical squiggles are ironed out yet
- OOD behavior? -> probably ML
- multi-agent dynamics? -> probably ML
At the very least you ought to have a clear output channel if you're going to work with hazardous technology. Do you have the safety mindset that prevents you from putting your dual-use tech on the streets? You're probably familiar with the abysmal safety/capabilities ratio of people working in the field; any tech that helps safety as much as capability will therefore, in practice, help capability more if you don't distribute it carefully.
I personally would want some organisation to step up to become the keeper of secrets. I'd want them to just go all-out on cybersec, have a web of trust, and basically be the solution to the unilateralist's curse. That's not ML though.
I think this problem has a large ML part to it, but it is being tackled nearly solely by ML people. I think whatever part of the problem can be tackled with ML won't necessarily benefit from having more ML people on it.