What - ideally - should young and intelligent people do?

post by veterxiph · 2023-06-21T20:21:53.725Z · LW · GW · 4 comments

This is my first post. I'm 21. From what I understand, my fluid intelligence will rise until the age of 26 and then slowly fall. So I'm in a great position now to positively contribute to humanity.

I feel the need - at least now - to devote my life to something that I think actually matters to humanity and helps and/or saves as many humans as possible.

I think it might be a good idea to start off with my basic point of view: I want as many humans as possible to survive and live genuinely happy, satisfying, healthy, and fulfilling lives. I've heard arguments that humans have zero free will; I'm not sure whether I believe this or not, but then again I haven't thought a ton about it. I'm leaning towards believing it. So, because I think it's probably the case that humans have zero free will, I think everyone deserves to be genuinely happy, satisfied, healthy, and fulfilled. Yes, even criminals - they didn't choose to commit their crimes if there's zero free will, right? So they deserve to be very happy just like the "good" people do.

My question is, what should someone in my position do? I'm hearing arguments that perhaps humanity should stop AGI development altogether. This isn't an arms race anymore; it's a suicide race, as many have said. 

But we then naturally face an extremely important question: Is it even reasonable to expect humanity to stop developing AGI? It's one thing to get all countries and groups on Earth to agree to stop developing AGI - that might already be too difficult for all I know. But it's then another thing to actually enforce that - airstrikes to destroy places where AGI is being developed? How long could this "airstrike" tactic actually work, for example? I'm assuming it actually can work in the first place, which might be incorrect.

If airstrikes stop being an effective anti-AGI-development tool for whatever reason, could we use or invent another anti-AGI-development tool that is effective? How long until a country or group figures out a way around that, though? Could a government or other group figure out how to secretly build AGI? Isn't it only a matter of time until that happens?

Also, I'm pretty ignorant about this: how much could the people at OpenAI, DeepMind, and Anthropic be hiding? What does this tiny group of elites know that we don't? Is it possible that they have already agreed to create AGI and to use it to wipe out 99.9+% of humans?

Or, on the opposite extreme: have the people at these companies agreed to stop developing AGI completely? Are they just trying to make sure that nobody in the world develops AGI at this point, including themselves? Are they doing only that, or that plus trying to figure out how to build benevolent and safe AGI?

Another thought I've had: is it possible that humans should never build AGI at all, because even if we develop perfectly aligned and benevolent AGI, humanity could still be exterminated by a bug or because this perfect AGI falls into the wrong hands?

If my thought in the last paragraph is true, it seems that our goal as a species should be never to develop AGI at all and to actively try to prevent anyone from doing so.

So back to my original question: what should youngsters who are still on the rising part of the fluid-intelligence curve do now? Try to get into these extremely selective and powerful AGI-building companies and convince everyone there to turn them from AGI-building companies into anti-AGI-development companies that try to make sure AGI is never built? Or try to get into those same companies and work on alignment?

Or have these companies already agreed - perhaps secretly - to stop attempting to develop AGI, and maybe even to stop working on alignment, because they have reached the conclusion that even an aligned AGI has too high a chance of spelling the extinction of humanity? In that case, would it be a waste of my life to try to get into these companies? Should I just work on something else that could be useful for humanity, like neuroscience, mathematics, and/or physics?

I just wonder what the best use of the next decade-plus of my life - and of the lives of those similar to me - really is now. There are many, many unknowns, obviously. I have many, many more thoughts about all this, but I'm tempted to just publish this now.

4 comments


comment by Raemon · 2023-06-21T20:27:17.797Z · LW(p) · GW(p)

I think the current direction of your thoughts is pointed in a not-that-helpful direction, but I'm having a hard time putting it into words.

I do think AI's a big deal, and it makes sense to pursue figuring out how to help. 

I think a lot of your thoughts here are pointed in a... "modeling a potentially adversarial environment" direction. And I don't think it's wrong that there's some amount of adversarial environment worth modeling, but I think it's the sort of thing that's more likely to drive you crazy if you focus on it without some kind of good grounding.

I think I recommend people in your position start by modeling AI from a technical standpoint of "how do we understand what an AI system is doing?", which you can do in "single player mode" without worrying about what everyone else is doing. I think that's a kind of necessary step to be able to think useful thoughts about the multiplayer scenarios.

(taking a further step back from AI – I think "I want to do something that matters to humanity" is a good goal, but even without AI specifics it's a goal that some people follow off a cliff, and I recommend having some caution and guardrails around it)

comment by hubertF · 2023-06-22T03:53:02.346Z · LW(p) · GW(p)

I am really interested in how we could develop tools that would support fluid intelligence. Of course, as everyone is talking about these days, some tools could use AI. But I tend to think that notation, knowledge management, and exchange may be more useful.

comment by ChristianKl · 2023-06-21T21:58:34.856Z · LW(p) · GW(p)

"I'm 21. From what I understand, my fluid intelligence will rise until the age of 26 and then slowly fall."

I would expect that this is a misunderstanding. Even if there are studies that suggest that 26 is the median peak for fluid intelligence, that does not mean that this will be the peak for any given individual.

Replies from: Seth Herd
comment by Seth Herd · 2023-06-22T06:13:08.249Z · LW(p) · GW(p)

It's true that it peaks at different ages.

The bigger problem here is that effective intelligence is a function of both fluid and crystallized intelligence. You're not really smartest when fluid intelligence peaks, in terms of real-world problems. You're smartest in terms of juggling pieces of information. But understanding which problems to solve and what concepts to use in solving them is a matter of crystallized intelligence - specifically, knowledge of the problems and relevant concepts. You could achieve that by 26, but you'd have to start young and study like a monk. I'm not sure where tests of crystallized intelligence put its peak, but it would be specific to the topic and the hours of dedicated study.