A freshman year during the AI midgame: my approach to the next year

post by Buck · 2023-04-14T00:38:49.807Z · LW · GW · 14 comments

Comments sorted by top scores.

comment by TekhneMakre · 2023-04-14T04:57:13.886Z · LW(p) · GW(p)

I think there's something wrong with your categories: they're all about social perception. There's some reason for these to be correlated with the reality, but not that strong of a reason. People can be confused in either direction about what sort of AI is coming soon, and confusing people's sense of what sort of AI is coming soon with what actual AI is coming soon would suggest bad plans.

Replies from: Jonas Hallgren, Buck
comment by Jonas Hallgren · 2023-04-14T09:50:06.709Z · LW(p) · GW(p)

I feel like this is trying to say something important but my brain isn't parsing it.

First and foremost, what categorisation are we talking about? Secondly, in what way are the categories framed in terms of social perception? Thirdly, what do you mean by direction and how does Buck confuse the direction?

(Sorry if this is obvious)

Replies from: TekhneMakre
comment by TekhneMakre · 2023-04-14T14:01:50.597Z · LW(p) · GW(p)

Hopefully this isn't too rude to say, but: I am indeed confused how you could be confused. Maybe there's some mental block for you, which would be interesting. Anyway, to answer your questions:

First and foremost, what categorisation are we talking about?

The main categorization in the post, of course. Quoting:

I want to split the AI timeline into the following categories.

- The early game, during which interest in AI is not mainstream. I think this ended within the last year.
- The midgame, during which interest in AI is mainstream but before AGI is imminent. [...]
- The endgame, during which AI companies conceive of themselves as actively building models that will imminently be transformative, and that pose existential takeover risk. [...]

Your Q:

Secondly, in what way are the categories framed in terms of social perception?

AFAICT the only condition here that isn't about the stories people are telling is in the midgame, "but before AGI is imminent". Everything else is "interest in...", "interest in...", "conceive of themselves...".

Thirdly, what do you mean by direction and how does Buck confuse the direction?

People can think AGI will come soon when it doesn't, or think it won't when it will, and this can happen for any value of "AGI". Buck seems to be making plans based on stages centered around social perception / narrative rather than what's actually happening in terms of what actual AI stuff there is (big piles of data and compute, algorithms, etc).

Replies from: Quadratic Reciprocity
comment by Quadratic Reciprocity · 2023-04-15T23:20:44.244Z · LW(p) · GW(p)

Hopefully this isn't too rude to say, but: I am indeed confused how you could be confused

Fwiw, I was also confused and your comment makes a lot more sense now. I think it's just difficult to convert text into meaning sometimes. 

Replies from: TekhneMakre
comment by TekhneMakre · 2023-04-15T23:58:01.375Z · LW(p) · GW(p)

Ok, thanks for the data, updating some.

comment by Buck · 2023-04-14T16:40:38.458Z · LW(p) · GW(p)

This is a reasonable point. What I actually care about is reality, but I expect social reality to track reality fairly well on these points.

comment by Eccentricity · 2023-04-14T15:21:42.602Z · LW(p) · GW(p)

I am a literal freshman, and not feeling super optimistic about the future right now. How should I think about how to spend my time?

Replies from: None, Matthew_Opitz, Jayson_Virissimo
comment by [deleted] · 2023-04-16T09:09:35.310Z · LW(p) · GW(p)

Advocate for a global moratorium on AGI. Try and buy (us all) more time. Learn the basics of AGI safety (e.g. AGI Safety Fundamentals) so you are able to discuss the reasons why we need a moratorium in detail. YMMV, but this is what I'm doing as a financially independent 42 year-old. I feel increasingly like all my other work is basically just rearranging deckchairs on the Titanic.

comment by Matthew_Opitz · 2023-04-14T18:19:41.713Z · LW(p) · GW(p)

In a similar vein, I'm an historian who teaches as an adjunct instructor.  While I like my job, I am feeling more and more like I might not be able to count on this profession to make a living over the long term due to LLMs making a lot of the "bottom-rung" work in the social sciences redundant. (There will continue to be demand for top-notch research work for a while longer because LLMs aren't quite up to that yet, but that's not what I do currently).  

Would there be any point in someone like me going back to college to get another 4-year degree in computer science at this moment? Or is that field just as at-risk of being made technologically-obsolete (especially the bottom rungs of the ladder)? Perhaps I should remain as an historian where, since I have about 10 years of experience in that field, I'm at least on the middle rungs of the ladder and might escape technological obsolescence if AGI gobbles up the bottom rungs.

And let's say I did get a computer science degree, or even did some sort of more-focused coding boot camp type of thing.  By the time I finished my training, would my learning even remain relevant, or are things already moving too quickly to make bottom-rung coding knowledge useful? 

Let's say I didn't care about making a living and just wanted to maximize my contributions to AI alignment. Would I be of more use to AI alignment by continuing my "general well-rounded public intellectual education" as an historian (especially one who dabbles in adjacent fields like economics and philosophy probably more than average), or would I be able to make greater contributions to AI alignment by becoming more technically proficient in computer science?

comment by Jayson_Virissimo · 2023-04-16T00:43:48.666Z · LW(p) · GW(p)

FWIW, if my kids were freshmen at a top college, I would advise them to continue schooling, but switch to CS and take every AI-related course that was available, if they hadn't already done so.

comment by jacquesthibs (jacques-thibodeau) · 2023-04-14T02:41:01.309Z · LW(p) · GW(p)

Regarding thinking about what to do in the endgame:

Having a bunch of practice at thinking about AI alignment in principle, which might be really useful for answering difficult-to-empirically-resolve questions about the AIs being trained.

Being well-prepared to use AI cognitive labor to do something useful, by knowing a lot about some research topic that we end up wanting to put lots of AI labor into. Maybe you could call this “preparing to be a research lead for a research group made up of AIs”. Or “preparing to be good at consuming AI research labor”.

That nicely put into words how I'm partially planning my "accelerating alignment with language models" agenda. I hope to come up with something that allows all alignment researchers to do the above with minimal friction and setup, and obvious benefit.

comment by Quadratic Reciprocity · 2023-04-14T04:43:03.720Z · LW(p) · GW(p)

Thanks for posting this. It's insightful reading other people thinking through career/life planning of this type.

Am curious about how you feel about the general state of the alignment community going into the midgame. Are there things you hoped you / the alignment community had more of, or achievable things that could have been different by the time the early game ended, that would have been nice?

"I have a crazy take that the kind of reasoning that is done in generative modeling has a bunch of things in common with the kind of reasoning that is valuable when developing algorithms for AI alignment"

Cool!!

comment by Review Bot · 2024-03-08T05:07:26.938Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year. Will this post make the top fifty?