My least favorite thing
post by sudo · 2022-04-14T22:33:20.408Z · LW · GW · 31 comments
Epistemic status: Anger. Not edited.
TL;DR The hamster wheel is bad for you. Rationalists often see participation in the hamster wheel as instrumentally good. I don't think that is true.
Meet Alice. She has had the opportunity to learn many skills in her school years. Alice is a bright high school student with a mediocre GPA and a very high SAT score. She doesn't particularly enjoy school, and has no real interest in engaging in the notoriously soul-crushing college admissions treadmill.
Meet Bob. Bob understands that AGI is an imminent existential threat. Bob thinks AI alignment is not only urgent and pressing but also tractable. Bob is a second-year student at Ivy League U studying computer science.
Meet Charlie. Charlie is an L4 engineer at Google. He works on applied machine learning for the Maps team. He is very good at what he does.
Each of our characters has approached you for advice. Their terminal goals might be murky, but they all empathize deeply with the AI alignment problem. They'd like to do their part in decreasing X-risk.
You give Alice the following advice:
It's statistically unlikely that you're the sort of genius who'd be highly productive without at least undergraduate training. At a better college, you will not only receive better training and have better peers; you will also have access to opportunities and signalling advantages that will make you much more useful.
I understand your desire to change the world, and it's a wonderful thing. If you'd just endure the boredom of school for a few more years, you'll have much more impact.
Right now, MIRI wouldn't even hire you. I mean, look at the credentials most AI researchers have!
Statistically, you are not Eliezer.
You give Bob the following advice:
Graduating is a very good signal. An Ivy League U degree carries a lot of signalling value! Have you gotten an internship yet? It's great that you are looking into alignment work, but it's also important that you take care of yourself.
It's only your second year. If the college environment does not seem optimal to you, you can certainly change that. Do you want study tips?
Listen to me. Do not drop out. All those stories you hear about billionaires who dropped out of college might be somewhat relevant if you actually wanted to be a billionaire. If you're optimizing for social impact, you do not do capricious things like that.
Remember, you must optimize for expected value. Seriously consider grad school, since it's a great place to improve your skills at AI Alignment work.
You give Charlie the following advice:
Quit your job and go work on AI Alignment. I understand that Google is a fun place to work, but seriously, you're not living your values.
But it is too late, because Charlie has already been injected with a deadly neurotoxin which removes his soul from his skeleton. He is now a zombie, only capable of speaking to promo committees.
--
You want geniuses, yet you despise those who attempt to attain genius.
It seems blatantly obvious to you that the John von Neumanns and Paul Erdőses of the world do not beg for advice on internet forums. They must have already built a deep confidence in their capabilities from fantastical childhood endeavors.
And even if Alice wrote a working C++ compiler in Brainfuck at 15 years old, it's unlikely that she can solve such a momentous problem alone.
Better to keep your head down. Follow the career track. Deliberate. Plan. Optimize.
So with your reasonable advice, Alice went to Harvard and Bob graduated with honors. All of them wish to incrementally contribute to the important project of building safe AI.
They're capable people now. They understand jargon like prosaic alignment and myopic models. They're good programmers, though paralyzed whenever they are asked the Hamming questions. They're not too far off from a job at MIRI or FHI or OpenPhil or Redwood. They made good, wise decisions.
--
I hate people like you.
You say things like, "if you need to ask questions like this, you're likely not cut out for it. That's ok, I'm not either."
I want to grab you by your shoulders, shake you, and scream. Every time I hear the phrase "sphere of competence," I want to cry. Are you so cynical as to assume that people cannot change their abilities? Do you see people rigid as stone, grey as granite?
Do I sound like a cringey, irrational liberal for my belief that people are not stats sheets? Is this language wishful and floaty and dreamy? Perhaps I am betraying my young age, and reality will set in.
Alternatively, perhaps you have Goodharted. You saw cold calculation and wistful "acceptance" as markers of rationality and adopted them. In your wise, raspy voice you killed dreams with jargon.
Don't drop out. Don't quit your job. Don't get off the hamster wheel. Don't rethink. Don't experiment. Optimize.
You people hate fun. I'd like to package this in nice-sounding mathematical terms, but I have nothing for you. Nothing except for a request that you'd be a little less fucking cynical. Nothing except, reflect on what Alice and Bob could've accomplished if you hadn't discouraged them from chasing their dreams.
31 comments
Comments sorted by top scores.
comment by rank-biserial · 2022-04-14T23:58:59.604Z · LW(p) · GW(p)
One of the problems here is that, as of right now, there isn't much of a middle path between "Stay at Google and do nothing" and "quit your job to do alignment work full-time". Then there's the issue of status-seeking vs. altruism as a case of revealed vs. stated preferences. If there were a way [LW · GW] to make $750k a year and save the world, people would be all over that. I, personally, would be all over that.
But there isn't. If we use johnswentworth as an optimistic case, those who would go into independent AI alignment work full-time would make about $90k per year [LW · GW]. Of course, anyone that complains about the prospect of 'only' making 90k would be derided as a snot-nosed asshole, because we live in a world where hundreds of millions of people subsist upon less than two dollars a day. However, people's internal calculus chugs on just the same, and Charlie decides to stay at Google.
Replies from: sudo
↑ comment by sudo · 2022-04-15T02:29:38.444Z · LW(p) · GW(p)
If you think that we have a fair shot at stopping AI apocalypse, and that AGI is a short-term risk, then it is absolutely rational to optimize for solving AI safety. This is true even if you are entirely selfish.
Also, this essay is about advice given to ambitious people. It's not about individual people choosing unambitious paths (wheeling). Charlie is a sad example of what can happen to you. I'm not complaining about him.
Replies from: rank-biserial
↑ comment by rank-biserial · 2022-04-15T02:53:29.423Z · LW(p) · GW(p)
Yes, selfish agents want to not get turned into paperclips. But they have other goals too. You can prefer alignment be solved, while not wanting to dedicate your mind, body, and soul to waging a jihad against it. Where can Charlie effectively donate, say, 10% of his salary to best mitigate x-risk? Not MIRI (according to MIRI).
comment by Razied · 2022-04-15T12:52:44.281Z · LW(p) · GW(p)
Do you tell every high school quarterback to drop everything and focus on getting into the NFL? Encouraging people to aim for low-probability events with huge failure downsides is almost cruel. You know what is likely to happen to Bob if he drops out? He's likely to waste his days on video games because he's addicted to them, and only the social pressures of school managed to break through his addiction and make him actually do stuff. All his friends are still in school and find his decision baffling; they still talk to him, but less and less over time. You know what becomes really hard once you drop out? Finding a girlfriend. Suddenly a year goes by with only a barely functional iPhone app to show for it, and while Bob hasn't quite ruined his life, he is cursing his stupidity for dropping out of an Ivy League school.
There are safer ways to experiment: ask for a semester off, try to build something during summer break instead of going to a fancy internship, put only the bare minimum into your classes while you do something else on the side, take graduate-level classes while you're a first-year undergrad, or sacrifice your social life and sleep to work on something else concurrently with classes. Failing at any of these leaves you in a recoverable position.
Replies from: sudo
↑ comment by sudo · 2022-04-16T02:35:16.571Z · LW(p) · GW(p)
I think the fundamental misunderstanding here is that you are attributing a much smaller success probability to my ideas than I do.
It is very likely that becoming highly skilled at AI outside of college will make you both useful (to saving the world) and non-homeless. You will probably make more progress toward your goal than if you stayed "tracked" (for almost any goal, assuming short AI timelines).
Is Bob really likely to waste his days on video games? I wonder why you think that. I mean, conditional on him being addicted, perhaps. But surely if that were his problem, then he should focus on solving it.
Do you somehow attribute lower capabilities to people than I do? I certainly think that Bob can figure out a way to learn more effectively than his college can provide. He can prove this to himself if he has doubts.
None of this is the point. Many people have much more ability to take risk than we assume. Giving people overly risk-averse advice, and treating the bad-case scenario as highly likely the way you are doing right now, seems very hurtful.
Replies from: PoignardAzur, Razied
↑ comment by PoignardAzur · 2022-04-17T12:54:06.772Z · LW(p) · GW(p)
To give my personal experience:
- The last few jobs I got were from techs I put on my CV after spending a few weeks toying with them in my free time.
- At some point I quit my job and decided to take all my savings and spend as long as I could working on open-source projects in Rust.
- I'm currently in a job with triple the pay of the previous one, thanks to networking and experience I got after I quit.
So while my experience isn't relevant to AI safety, it's pretty relevant to the whole "screw the hamster wheel, do something fun" message.
And my advice to Alice and Bob would still be "Fuck no, stay in your Ivy League school!"
I don't care how much of a genius you are. I think I'm a genius, and part of why I'm getting good jobs is my skills, but the other part is there's a shiny famous school on my resume. Staying in that school gave me tons of opportunities I wouldn't have had by working on my own projects for a few years (which is essentially what I did before joining that school).
There are measured risks and there are stupid risks. Quitting school is a stupid risk. Maybe you're enough of a genius that you beat the odds and you succeed despite quitting school, but those were still terrible odds.
↑ comment by Razied · 2022-04-16T03:32:34.650Z · LW(p) · GW(p)
So, obviously my estimates of the chance of success for any given person will depend on the person; for all I know, an hour-long talk with you would completely convince me that this is the right decision for you. But for the median Ivy League CS student I am really not convinced, and for the median CS student outside good schools it's definitely a bad idea; most CS students really are much dumber than you'd think. If you have a track record of being able to complete long personal projects with no outside motivation or enforcement mechanism, under the stress of your livelihood depending on the success of the project (and the stress of your parents and friends not understanding what the hell you're doing and thinking you're ruining your life), then that is evidence that going off-track wouldn't be disastrous. I have tried it, found it more difficult than I expected, and subsequently regretted it and went back to the normal track (though still with plenty of weird experiments within the track).
It is very likely that becoming highly skilled at AI outside of college will make you both useful (to saving the world) and non-homeless.
In the minds of hiring managers at normal companies, "AI experts" are a dime a dozen, because those words have been devalued to mean whoever took Andrew Ng's course on ML. You can't get a data scientist job without a degree (which would presumably be the non-homeless fallback position), and you certainly can't get a research position at any of the good labs without a PhD. You can try publishing alone, but that basically never happens. I suppose you could try winning Kaggle championships, but those have almost no relevance to AI safety. You could try making money by doing stock prediction with the numer.ai project (which is what I did), and that would provide some freedom to study what you want, but that's again really hard. If you want to get grants from OpenPhil to do AI safety, that might be something, but really the skills you learn from getting good at AI safety have almost no marketable value. There is a very narrow road you can walk in this direction, and if anything goes wrong there isn't much of a fallback position.
People can certainly handle more risk and more weirdness than they think, but there are many levels of risk increase between what the average student does and dropping out of school to focus on studying AI on your own.
comment by Rana Dexsin · 2022-04-14T23:38:45.683Z · LW(p) · GW(p)
(Edited to add: note that this is both informed by and potentially warped by personal experience and resultant subconscious biases. See my grandchild-comment: https://www.lesswrong.com/posts/JimXsFyEbPXrwzydN/my-least-favorite-thing?commentId=FZKMJctWjWmzjKb8N [LW(p) · GW(p)])
This is partially a story of distantly spaced equilibria though. (In this comment, I treat the educational-track “hamster wheel” from the OP both as given and as having the harmful side effects that the OP implies, to keep the topic connection. However, this is really a comment about equilibria more generally, and in reality, I am uncertain in the moment what I think of the specific case in the OP.)
In my model, you don't get off the hamster wheel because you lose a lot of opportunity if you do, partly (but not entirely!) because everyone else has common knowledge not to follow you, and common knowledge about the expected relative traits of people on the wheel. If you try to give one-shot true advice, you will say to stay on the hamster wheel and you will be (probabilistically) right. But if everyone receives and follows similar advice, that has the side effect of perpetuating the hamster wheel.
Assuming the coordination benefits from the hamster wheel are a necessity (to avoid being outcompeted or otherwise), making an alternative viable requires creating common knowledge through iterated coordination updates outside the one-shot advice model. Giving suboptimal one-shot advice can be a route to this, but if you do it in the most straightforward way, you accept the problems with lying in the immediate sense as part of this. If it doesn't work, you weren't trustworthy, and that may become evident. If it does work, and the message was enough about the future, you might turn out to have told the truth via making it true, but still not in a straightforwardly truthful way, and that may still become evident. What effect that has has some cultural dependencies—there's chunks of American business culture, for instance, in which running a successful risky campaign of that type is considered admirable, and then there's the chaotic environments of politics…
Note that the suboptimality can also accrue from unintentional (or ‘unintentional’) value skew rather than lying, in which case the distrust also accrues from value skew, but the effect is similar.
By comparison, giving conventional one-shot advice but with an advertisement-like rider of “but if you have some risk tolerance, try this other thing” seems good to me. Attempting in other ways to get out of one-shot advice being a dominant force also seems promising.
If you want to get out of the “concrete convergence points” structure even more deeply rather than just shifting the loci of power, then that's a whole other ball game, I think.
Note that, in my current belief, posting this very comment has a likely side effect of reducing the credibility of forced attempts to get people out of the hamster wheel, but I accept this along the way, mainly because I think active coordination on dewheeling is better in expectation compared to the risked trust loss from poorly-calibrated attempts to forcibly dewheel in isolation, the latter being much worse for long-term dewheeling prospects. I can think of some counterarguments to this which I won't try to get into now both for lack of capacity and because that might have its own hazardous properties (I would be more amenable to describing them privately). There is a separate way in which I wonder whether analyzing the generational/cohort dynamics of this type of phenomenon might be useful, which I also don't have the capacity to go into right now and which I imagine may already be much better covered by e.g. sociological studies (but I haven't looked).
Replies from: sudo
↑ comment by sudo · 2022-04-14T23:54:36.043Z · LW(p) · GW(p)
I'll think more carefully about this later, but my instinctual response is that I don't think this is nicely framed as a coordination problem. Basically, I don't think you really lose that much when you "de-wheel," in terms of actual effectiveness at EA-like goals. It's worth noting that a major part of this article is that "wheeling" is incredibly costly (especially in opportunity).
What exactly do you think changes when everyone decides to de-wheel rather than just a few people? My read is that the biggest change is simply that de-wheeling becomes less stigmatized. There are cheaper ways to de-stigmatize de-wheeling than coordinating everyone to de-wheel at the same time.
Edit: first sentence, for clarity
Replies from: Rana Dexsin
↑ comment by Rana Dexsin · 2022-04-15T00:24:05.595Z · LW(p) · GW(p)
For a few other scattered reference points:
- My personal experience disagrees hard in magnitude but at a somewhat skew angle. I left the conventional track in some ways and seem to have mostly ruined my life as a result, but there are a lot of confounding forces there too; my goals are also more EA-like in feel than the most conventional goals I can think of, but I wouldn't describe them centrally as that. One part that seems to hold up is that you lose a lot of ability to get into virtuous cycles that are dependent on material resources that are pinned to credential signaling by other market-equilibrium sorts of issues, to the point that I did a substantial amount of risky work on temporary self-modification to try to penetrate these cycles, which mostly failed. (I now notice that this is a possible strong subconscious bias which I should have revealed up front, and I apologize for not noticing that sooner.)
- I specifically believe from the above experience that this is more true today than it was about fifteen years ago, but I'm not immediately sure how I'd justify it, and there's a lot of potential bias in there too.
- Some of Paul Graham's essays exhibit a contrary viewpoint to mine which is more similar to yours, perhaps most centrally “After Credentials”. I notice that this is still kinda congruent with my (2), since that essay is from 2008. He describes it as being more true in 2008 than in ~1983, which I also think I agree with.
- I think rank-biserial's comment re “isn't much of a middle path” points toward “there existing a recognized social role” as a sort-of coordination/common-knowledge aspect here.
Here's a potential divergence I see. Do you believe the viability of skipping a conventionally credentialed path is more true for “EA-like” goals than for “mainstream” goals? If so, why is that, and are there some in-between goals that you could identify to make a spectrum out of?
Replies from: sudo
↑ comment by sudo · 2022-04-15T01:00:49.550Z · LW(p) · GW(p)
Here's a potential divergence I see. Do you believe the viability of skipping a conventionally credentialed path is more true for “EA-like” goals than for “mainstream” goals? If so, why is that, and are there some in-between goals that you could identify to make a spectrum out of?
This is slightly complicated.
If your goal is something like "become wealthy while having free time," the Prep school->Fancy college->FIRE in finance or software path is actually pretty darn close to perfect.
If your goal is something like "make my parents proud" or "gain social validation," you probably go down the same route too.
If your goal is something like "seek happiness" or "live a life where I am truly free" I think that the credentialing is something you probably need to get off ASAP. It confuses your reward mechanisms. There's tons of warnings in pop culture about this.
If you have EA-like goals, you have a "maximize objective function"-type goal. It's in the same shape as "become as rich as possible" or "make the world as horrible as possible." Basically, the conventional path is highly highly unlikely to get you all the way there. In this case, you probably want to get into the
- Get skills+resources
- Use skills+resources to do impact
- Repeat
Loop.
For a lot of important work, the resources required are minimal and you already have them. You only need skills. If you have skills, people will also give you resources.
It shouldn't matter how much money you have.
Also, even if you were totally selfish, stopping apocalypse is better for you than earning extra money right now. If you believe the sort of AI arguments made on this forum, then it is probably directly irrational for you to optimize for things other than save the world.
So, do you think it's instrumental to saving the world to focus on credentials? Perhaps it's even a required part of this loop? (Perhaps you need credentials to get opportunities to get skills?)
I basically don't think that is true. Even accepting that colleges teach more effectively than people can learn autodidactically, the amount of time wasted on bullshit and the amount of health wasted on stress probably make this not true. It seems like you'd have to get very lucky for the credential mill to not be a significant skill cost.
--
I guess it's worthwhile for me to reveal some weird personal biases too.
I'm personally a STEM student at a fancy college with a fancy (non-alignment) internship lined up. I actually love and am very excited about the internship (college somewhat less so. I might just be the wrong shape for college.), because I think it'll give me a lot of extra skills.
My satisfaction with this doesn't negate the fact that I mostly got those things by operating in a slightly different (more wheeled) mental model. A side effect of my former self being a little more wheeled is that I'd have to mess up even more badly to get into a seriously precarious situation. It's probably easier for me to de-wheel at the current point, already having some signalling tools, than it is for the average person to de-wheel.
I'm not quite sure what cycles you were referring to (do you have examples?), but this might be me having a bad case of "this doesn't apply to me so I will pay 0 thought to it," and thus inadvertently burning a massive hole in my map.
Despite this, though, I probably mostly wish I had de-wheeled earlier (middle school, when I first started thinking somewhat along these lines) rather than later. I'd be better at programming, better at math, and probably more likely to positively impact the world, at the expense of being less verbally eloquent and having less future money. I can't honestly say that I would take the trade, but certainly a very large part of me wants to.
Certainly, I'd at least argue that Bob should de-wheel. The downside is quite limited.
--
There definitely is a middle path, though. Most of the AI alignment centers pay comparable salaries to top tech companies. You can start AI alignment companies and get funding, etc. There's an entire gradient there. I also don't entirely see how that is relevant.
rank-biserial's point was largely about Charlie, who wasn't really a focus of the essay. What they said about Charlie might very much be correct. But it's not a given that Alice and Bob secretly want this. They may very well have done something else if not given the very conservative advice.
I'll reply to rank-biserial later.
Edit: typo
Replies from: Rana Dexsin, rank-biserial
↑ comment by Rana Dexsin · 2022-04-15T04:35:34.751Z · LW(p) · GW(p)
Interesting. I'll want to look back at this later; it seems like I partially missed the point of your original post, but also it seems like there are potentially productive fuzzy conversations to be had more broadly?
To one aspect, and sorry in advance if I get rambly since I don't have much time to edit down:
I'm not quite sure what cycles you were referring to (do you have examples?),
In short: the location/finance/legibility spiral, the employment/employment/legibility spiral, and the enormous energy needed to get back up if you fall down a class-marker level in enough ways. I don't think I can expand on that without getting into the personal-ish version, so I'll just go ahead and let you adjust/weaken for perspective. There's a lot of potential fog of “which bits of world are accessible” here (but then, that's to some degree part of the phenomenon I'm gesturing at, too).
Preamble: if you care about being influential then your problems skew a lot more toward a social-reality orientation than if you primarily care about doing good work in a more purely abstract sense. I decided long ago for myself that not caring enough about being influential in a more direct sense was likely to create problems with misinterpretation and value skew where even if I did work that had a shot at making an impact on the target, the effective result of any popularization of it might not be something I could meaningfully steer. In particular, this means I don't expect the “live cheaply somewhere remote and put out papers while doing almost all my collaboration electronically” approach to work, at least at this point in my career.
Caveat: currently, I think I've likely overshot in terms of mindset for medium-term benefit even in terms of social reality (mostly due to risk aversion of the kind you disapprove of and due to the way a bunch of signaling is anti-inductive). I am deeply conflicted as to how much to backtrack on or abandon.
First cycle: Several social and career needs might be better met by moving to a specific place. That place has a high cost of living due to big-city amplification effects, which is a big obstacle in itself—but it's not just the cost, but things like default tenant-landlord relationships and the signaling involved in that. It's having the pay stub so you can qualify to rent housing, and having that pay stub come from the right place, and so on. Ability to work around this is limited; alternative documentation usually requires an order of magnitude longer of demonstrated, documented stability, and gaining access via local networks of people has a bootstrapping problem.
Second cycle: I see a lot of talk around some labor markets (especially in software work, which seems very common in this social sphere) currently being heavily worker-tilted, but I've still not seen much way to get in on skills alone, especially because it's not just technical skill, it's the remaining 90% of the work that involves having practiced collaborating and taking roles in an organization in the ‘right’ way so they don't have to completely train you up for that. There's plenty of market for people with three years of legible, verifiable, full-time experience, and almost nothing otherwise. This is classic “you need a job to get a job”, and if your existing role is of the wrong kind, you're on the treadmill of that alternate track and need a massive pile of slack to switch around.
The above two amplify each other a lot, because the former of them gives you a lot of random-chance opportunity to try to get past barriers to the latter, and the latter gets you the socioeconomic legibility for the former. For some corroboration, Patrick McKenzie talks about hiring in the software industry: (1), (2), (3) with some tactics for how to work within this. He specifically notes in (3) that "plausible" is essentially binary and recommends something congruent with your "It's probably easier for me to de-wheel at the current point, already having some signalling tools, than it is for the average person to de-wheel." in terms of getting past a threshold first (which is similar to the type of advice you get upset at in the OP).
Now, if you're talking from purely an alignment perspective, and most work in alignment is currently theoretical and doesn't benefit much from the above, and organizations funding it and people doing it manage to avoid adhesion to similar phenomena in selection, then you have a much better case for not caring much.
I'm personally a STEM student at a fancy college with a fancy (non-alignment) internship lined up.
Being a student is notably a special case that gets you a lot of passes, because that's the perceived place in life where you're ‘supposed to’ not have everything yet. Once you're past the student phase, you get very little slack. This is especially true in terms of lack of external-system slack in mentoring/integration capacity—the above induction into the ‘right kind’ of experience is slack that is explicitly given to interns, but then selecting for anyone else alongside that is expensive, so if they can fill all their intake needs from student bodies, and you don't have the “I am a student” pass yourself, you lose by default.
Replies from: Rana Dexsin
↑ comment by Rana Dexsin · 2022-04-25T20:32:43.642Z · LW(p) · GW(p)
A recent BBC Worklife article loosely corroborates my impression of job market conditions and early-career mentorship bottlenecks.
Although a war for talent is certainly raging, employers aren’t fighting the same battles across the board. Only some candidates have power in the job market – typically experienced, mid-career employees. It means entry-level workers can still face difficulties finding employment – and this is especially the situation in certain sectors.
In many cases, labour shortages mean companies are offering flexible working arrangements to secure talent. Grace Lordan, director of the Inclusion Initiative at the London School of Economics, says this practice can further restrict opportunities for inexperienced candidates.
“If hybrid working is implemented, it makes more sense to hire someone with experience: an employee you know can just get on with the job working from home,” adds Lordan. “Managers need more time to train entry-level workers and show what good performance looks like. With employees often time-poor at the biggest firms, it’s not surprising that we’re seeing some inexperienced workers struggle in the job market.”
↑ comment by rank-biserial · 2022-04-15T01:32:35.612Z · LW(p) · GW(p)
If you have EA-like goals, you have a "maximize objective function"-type goal. It's in the same shape as "become as rich as possible" or "make the world as horrible as possible." Basically, the conventional path is highly highly unlikely to get you all the way there. In this case, you probably want to get into the
- Get skills+resources
- Use skills+resources to do impact
- Repeat
Loop.
I'm in a similar situation to yours. (I'm currently in the "Bob" stage of the Alice -> Bob -> Charlie pipeline.) How do you propose I, and those like us, go about doing step 1 without entering the "software/finance hamster wheel"? Are we supposed to found a dozen start-ups until one of them works? Are we supposed to find and exploit some massive inefficiency in crypto pricing, all by ourselves? Please, do tell.
Replies from: sudo
↑ comment by sudo · 2022-04-15T02:17:47.602Z · LW(p) · GW(p)
My Solution
--
Optimize skill-building over resource-collection. You don't need that many resources.
Ask:
- What is the skill I'm most interested in building right now?
- What's the best way to get this skill?
A couple of things:
- Open source libraries are free tutoring
- Most alignment-relevant skills can be effectively self-taught
- Projects are learning opportunities that demand mastery
- Tutoring is cheaper and possibly more effective than college tuition
↑ comment by rank-biserial · 2022-04-15T02:50:16.193Z · LW(p) · GW(p)
You don't need that many resources.
True, if we're talking solely about alignment. If we're talking about the larger space of, as you put them, "maximize objective function"-type goals, then there's plenty of demand for resources. Let's say I wanna do (actually effective) longevity research. Since the competition for grant money is (like most things) Goodharted and broken, and because I don't have enough biology credentials, I'm gonna need to self-fund in order to buy lab materials and grad student slaves.
Replies from: sudo
↑ comment by sudo · 2022-04-15T02:59:55.013Z · LW(p) · GW(p)
There is no universal advice that I can give.
The problem is that people are assuming that wheeling is correct without checking that it is.
I'm not proposing developing an allergic reaction to colleges or something.
Replies from: rank-biserial
↑ comment by rank-biserial · 2022-04-15T03:03:07.509Z · LW(p) · GW(p)
Ok, sick. I largely agree with you, btw (about the hamster wheel being corrosive). If I came off as aggressive, fyi: I liked the spirit of your post a lot, and I strong-upvoted it.
Replies from: sudo
comment by Ulisse Mini (ulisse-mini) · 2023-03-19T22:23:47.198Z · LW(p) · GW(p)
Strongly agree. Rationalist culture is instrumentally irrational here. It's very well known how important self-belief & a growth mindset are for success, and rationalists' obsession with natural intelligence is quite bad imo, to the point where I want to limit my interaction with the community so I don't pick up bad patterns.
I do wonder if you're strawmanning the advice a little, in my friend circles dropping out is seen as reasonable, though this could just be because a lot of my high-school friends already have some legible accomplishments and skills.
comment by cata · 2022-04-15T02:47:03.666Z · LW(p) · GW(p)
Aren't you kind of preaching to the choir? Who involved in AI alignment is actually giving advice like this?
Wouldn't the median respondent tell A and B something like "go start participating at the state of the art by reading and publishing on the Alignment Forum and by reading, reproducing, and publishing AI/ML papers, and maybe go apply for jobs at alignment research labs?"
Replies from: sudo
↑ comment by sudo · 2022-04-15T02:58:02.427Z · LW(p) · GW(p)
It's advice that you generally see from LessWrongers and rationality-adjacent people who are not actively working on technical alignment.
I don't know if that's true, but it might be. That does not change the fact that there is a lot of "stay realistic"-type advice that you get from people in these circles. I'd wager this type of advice does not generally come from a more lucid view of reality, but rather from (irrationally high) risk aversion.
If I'd summarize this in one sentence: we need to be much more risk-tolerant and signalling-averse if we want a chance at solving the most important problems.
comment by Dennis Towne (dennis-towne) · 2022-04-15T13:30:55.646Z · LW(p) · GW(p)
My advice:
Alice: go to college anyway. If you can get into a better school, do that; if not, that's ok too. Take the minimum class load you can. Take things that are fun, that you're interested in, that are relevant to alignment. Have a ton of side projects. Soak in the environment, cultivate ideas, learn, build. Shoot for a B+ average GPA. You're basically guaranteed employment no matter what you do here, and the ideas matter.
Bob: focus on alignment where you can, but understand that your best bet may very well be to get the highest-paying job you can and use that to fund research. Think hard about that; high-end salaries can be on the order of a million dollars a year. Precommit to actually parting with the cash if you go this route, because it's harder than you think.
Charlie: raise the flag internally and keep it in everyone's mind. Go for promo so that you both have more money to donate, and so you have more influence over projects which may make things worse. Donate a quarter of your gross to alignment work; you can afford it.
Replies from: sudo
↑ comment by sudo · 2022-04-16T02:36:34.249Z · LW(p) · GW(p)
I fundamentally think that this EA idea that donating is just as effective as doing the work yourself grossly overestimates how liquid and fungible labor is.
Replies from: Dagon
↑ comment by Dagon · 2022-04-18T16:41:15.656Z · LW(p) · GW(p)
[note: I'm not particularly utilitarian nor EA-identifying. This is outside commentary. ]
grossly overestimates how liquid and fungible labor is.
I think the baseline advice that donation is as effective (really, MORE effective) than direct action is DIRECTLY a consequence of labor being non-fungible and money being fungible. Almost every human can be more effective by seeking their comparative advantage across all endeavors than they can by guessing at what's effective for the narrow EA causes (and this goes double for x-risk EA causes).
It doesn't overestimate fungibility, it may overestimate motivation effectiveness. Working directly has feedback loops that can keep one satisfied with striving on that dimension. Working indirectly for donation has a large risk of capture and refocus on the consumption-lifestyle that many of your peers are seeking.
There are exceptions, for exceptional individuals who have the right mix of capability, interest, and self-directedness to focus directly on a given problem. But BECAUSE these are exceptional cases, there's no checklist for when it applies, and the base advice remains correct for the 95% case.
Replies from: mruwnik
↑ comment by mruwnik · 2022-04-18T17:25:21.667Z · LW(p) · GW(p)
My understanding here is that while this is true, it will discourage the 5%, who will just go work for FAANG and donate money to someone worse (or someone overwhelmed with work), simultaneously losing any chance at a meaningful job. The point being that yes, it's good to donate, but if everyone donates (since that is the default rat race route), no one will do the important work.
I have the feeling that the disagreement stems from focusing on different aspects. sudo -i is lamenting the current incentives (study, then sell your soul), which is a valid criticism. A lot of the comments, meanwhile, focus on the (large) risks of skipping the level grinding, which are also very valid points - it's hard to save the world when hungry.
A bit of additional pandering - power laws are a thing, and 95% of people will do more good by donating, but that's not necessarily true here.
Replies from: rank-biserial, Dagon
↑ comment by rank-biserial · 2022-04-18T19:01:22.094Z · LW(p) · GW(p)
My understanding here is that while this is true, it will discourage the 5%, who will just go work for FAANG and donate money to someone worse (or someone overwhelmed with work), simultaneously losing any chance at a meaningful job. The point being that yes, it's good to donate, but if everyone donates (since that is the default rat race route), no one will do the important work.
No! If everyone donates, there will be enough money to pay direct workers high salaries. I know this goes contra to the image of the selfless, noble Effective Altruist, but if you want shit to get done you should pay people lots of money to do it.
Replies from: mruwnik
↑ comment by mruwnik · 2022-04-18T19:12:57.911Z · LW(p) · GW(p)
i.e. make it so EA is an attractive alternative to tech, thereby solving both problems at once?
Replies from: rank-biserial
↑ comment by rank-biserial · 2022-04-18T19:30:26.709Z · LW(p) · GW(p)
↑ comment by Dagon · 2022-04-18T17:45:38.734Z · LW(p) · GW(p)
I think there are a lot of important details we just don't have the answer to. Is it 5%, 1%, or 0.01% of advice-seekers who should go into direct work rather than indirect/donation careers? What is the rate of mistakes in each of the groups, and how does the advice change that rate?
My modeling is that the exceptional folk will figure it out and do what's best EVEN when most of the advice is to do the simpler/more-common thing. The less-exceptional folk will NOT recover as easily if they try to make direct contributions and fail.