Jobs that can help with the most important century
post by HoldenKarnofsky · 2023-02-10T18:20:07.048Z
Let’s say you’re convinced that AI could make this the most important century of all time for humanity. What can you do to help things go well instead of poorly?
I think the biggest opportunities come from a full-time job (and/or the money you make from it). I think people are generally far better at their jobs than they are at anything else.
This piece will list the jobs I think are especially high-value. I expect things will change (a lot) from year to year - this is my picture at the moment.
Here’s a summary:
| Role | Skills/assets you'd need |
| --- | --- |
| Research and engineering on AI safety | Technical ability (but not necessarily AI background) |
| Information security to reduce the odds powerful AI is leaked | Security expertise or willingness/ability to start in junior roles (likely not AI) |
| Other roles at AI companies | Suitable for generalists (but major pros and cons) |
| Govt and govt-facing think tanks | Suitable for generalists (but probably takes a long time to have impact) |
| Jobs in politics | Suitable for generalists if you have a clear view on which politicians to help |
| Forecasting to get a better handle on what’s coming | Strong forecasting track record (can be pursued part-time) |
| "Meta" careers | Misc / suitable for generalists |
| Low-guidance options | These ~only make sense if you read & instantly think "That's me" |
A few notes before I give more detail:
- These jobs aren’t the be-all/end-all. I expect a lot to change in the future, including a general increase in the number of helpful jobs available.
- Most of today’s opportunities are concentrated in the US and UK, where the biggest AI companies (and AI-focused nonprofits) are. This may change down the line.
- Most of these aren’t jobs where you can just take instructions and apply narrow skills.
- The issues here are tricky, and your work will almost certainly be useless (or harmful) according to someone.
- I recommend forming your own views on the key risks of AI - and/or working for an organization whose leadership you’re confident in.
- Staying open-minded and adaptable is crucial.
- I think it’s bad to rush into a mediocre fit with one of these jobs, and better (if necessary) to stay out of AI-related jobs while skilling up and waiting for a great fit.
- I don’t think it’s helpful (and it could be harmful) to take a fanatical, “This is the most important time ever - time to be a hero” attitude. Better to work intensely but sustainably, stay mentally healthy and make good decisions.
The first section of this piece will recap my basic picture of the major risks, and the promising ways to reduce these risks (feel free to skip if you think you’ve got a handle on this).
The next section will elaborate on the options in the table above.
After that, I’ll talk about some of the things you can do if you aren’t ready for a full-time career switch yet, and give some general advice for avoiding doing harm and burnout.
Recapping the major risks, and some things that could help
I've cut this section from the email version of this piece to save space. If you'd like to read it, click here.
Jobs that can help
In this long section, I’ll list a number of jobs I wish more people were pursuing.
Unfortunately, I can’t give individualized help exploring one or more of these career tracks. Starting points could include 80,000 Hours and various other resources.
Research and engineering careers. You can contribute to alignment research as a researcher and/or software engineer (the line between the two can be fuzzy in some contexts).
There are (not necessarily easy-to-get) jobs along these lines at major AI labs, in established academic labs, and at independent nonprofits (examples in footnote).2
Different institutions will have very different approaches to research, very different environments and philosophies, etc. so it’s hard to generalize about what might make someone a fit. A few high-level points:
- It takes a lot of talent to get these jobs, but you shouldn’t assume that it takes years of experience in a particular field (or a particular degree).
- I’ve seen a number of people switch over from other fields (such as physics) and become successful extremely quickly.
- In addition to on-the-job training, there are independent programs specifically aimed at helping people skill up quickly.3
- You also shouldn’t assume that these jobs are only for “scientist” types - there’s a substantial need for engineers, which I expect to grow.
- I think most people working on alignment consider a lot of other people’s work to be useless at best. This seems important to know going in for a few reasons.
- You shouldn’t assume that all work is useless just because the first examples you see seem that way.
- It’s good to be aware that whatever you end up doing, someone will probably dunk on your work on the Internet.
- At the same time, you shouldn’t assume that your work is helpful because it’s “safety research.” It's worth investing a lot in understanding how any particular research you're doing could be helpful (and how it could fail).
- I’d even suggest taking regular dedicated time (a day every few months?) to pause working on the day-to-day and think about how your work fits into the big picture.
- For a sense of what work I think is most likely to be useful, I’d suggest my piece on why AI safety seems hard to measure - I’m most excited about work that directly tackles the challenges outlined in that piece, and I’m pretty skeptical of work that only looks good with those challenges assumed away. (Also see my piece on broad categories of research I think have a chance to be highly useful, and some comments from a while ago that I still mostly endorse.)
I also want to call out a couple of categories of research that are getting some attention today, but seem at least a bit under-invested in, even relative to alignment research:
- Threat assessment research. To me, there’s an important distinction between “Making AI systems safer” and “Finding out how dangerous they might end up being.” (Today, these tend to get lumped together under “alignment research.”)
- A key approach to medical research is using model organisms - for example, giving cancer to mice, so we can see whether we’re able to cure them.
- Analogously, one might deliberately (though carefully!4) design an AI system to deceive and manipulate humans, so we can (a) get a more precise sense of what kinds of training dynamics lead to deception and manipulation; (b) see whether existing safety techniques are effective countermeasures.
- If we had concrete demonstrations of AI systems becoming deceptive/manipulative/power-seeking, we could potentially build more consensus for caution (e.g., standards and monitoring). Or we could imaginably produce evidence that the threat is low.5
- A couple of early examples of threat assessment research: here and here.
- Anti-misuse research.
- I’ve written about how we could face catastrophe even from aligned AI. That is - even if AI does what its human operators want it to be doing, maybe some of its human operators want it to be helping them build bioweapons, spread propaganda, etc.
- But maybe it’s possible to train AIs so that they’re hard to use for purposes like this - a separate challenge from training them to avoid deceiving and manipulating their human operators.
- In practice, a lot of the work done on this today (example) tends to get called “safety” and lumped in with alignment (and sometimes the same research helps with both goals), but again, I think it’s a distinction worth making.
- I expect the earliest and easiest versions of this work to happen naturally as companies try to make their AI models fit for commercialization - but at some point it might be important to be making more intense, thorough attempts to prevent even very rare (but catastrophic) misuse.
Information security careers. There’s a big risk that a powerful AI system could be “stolen” via hacking/espionage, and this could make just about every kind of risk worse. I think it could be very challenging - but possible - for AI projects to be secure against this threat. (More above.)
I really think security is not getting enough attention from people concerned about AI risk, and I disagree with the idea that key security problems can be solved just by hiring from today’s security industry.
- From what I’ve seen, AI companies have a lot of trouble finding good security hires. I think a lot of this is simply that security is challenging and valuable, and demand for good hires (especially people who can balance security needs against practical needs) tends to swamp supply.
- And yes, this means good security people are well-paid!
- Additionally, AI could present unique security challenges in the future, because it requires protecting something that is simultaneously (a) fundamentally just software (not e.g. uranium), and hence very hard to protect; (b) potentially valuable enough that one could imagine very well-resourced state programs going all-out to steal it, with a breach having globally catastrophic consequences. I think trying to get out ahead of this challenge, by experimenting early on with approaches to it, could be very important.
- It’s plausible to me that security is as important as alignment right now, in terms of how much one more good person working on it will help.
- And security is an easier path, because one can get mentorship from a large community of security people working on things other than AI.6
- I think there’s a lot of potential value both in security research (e.g., developing new security techniques) and in simply working at major AI companies to help with their existing security needs.
- For more on this topic, see this recent 80,000 Hours report and this 2019 post by two of my coworkers [EA · GW].
Other jobs at AI companies. AI companies hire for a lot of roles, many of which don’t require any technical skills.
It’s a somewhat debatable/tricky path to take a role that isn’t focused specifically on safety or security. Some people believe7 that you can do more harm than good this way, by helping companies push forward with building dangerous AI before the risks have gotten much attention or preparation - and I think this is a pretty reasonable take.
At the same time:
- You could argue something like: “Company X has potential to be a successful, careful AI project. That is, it’s likely to deploy powerful AI systems more carefully and helpfully than others would, and use them to reduce risks by automating alignment research and other risk-reducing tasks. Furthermore, Company X is most likely to make a number of other decisions wisely as things develop. So, it’s worth accepting that Company X is speeding up AI progress, because of the hope that Company X can make things go better.” This obviously depends on how you feel about Company X compared to others!
- Working at Company X could also present opportunities to influence Company X. If you’re a valuable contributor and you are paying attention to the choices the company is making (and speaking up about them), you could affect the incentives of leadership.
- I think this can be a useful thing to do in combination with the other things on this list, but I generally wouldn’t advise taking a job if this is one’s main goal.
- Working at an AI company presents opportunities to become generally more knowledgeable about AI, possibly enabling a later job change to something else.
How a careful AI project could be helpful (Details not included in email - click to view on the web)
80,000 Hours has a collection of anonymous advice on how to think about the pros and cons of working at an AI company.
In a future piece, I’ll discuss what I think AI companies can be doing today to prepare for transformative AI risk. This could be helpful for getting a sense of what an unusually careful AI company looks like.
Jobs in government and at government-facing think tanks. I think there is a lot of value in providing quality advice to governments (especially the US government) on how to think about AI - both today’s systems and potential future ones.
I also think it could make sense to work on other technology issues in government, which could be a good path to working on AI later (I expect government attention to AI to grow over time).
People interested in careers like these can check out Open Philanthropy’s Technology Policy Fellowships.
One related activity that seems especially valuable: understanding the state of AI in countries other than the one you’re working for/in - particularly countries that (a) have a good chance of developing their own major AI projects down the line; (b) are difficult to understand much about by default.
- Having good information on such countries could be crucial for making good decisions, e.g. about moving cautiously vs. racing forward vs. trying to enforce safety standards internationally.
- I think good work on this front has been done by the Center for Security and Emerging Technology8 among others.
A future piece will discuss other things I think governments can be doing today to prepare for transformative AI risk. I won’t have a ton of tangible recommendations quite yet, but I expect there to be more over time, especially if and when standards and monitoring frameworks become better-developed.
Jobs in politics. The previous category focused on advising governments; this one is about working on political campaigns, doing polling analysis, etc. to generally improve the extent to which sane and reasonable people are in power. Obviously, it’s a judgment call which politicians are the “good” ones and which are the “bad” ones, but I didn’t want to leave out this category of work.
Forecasting. I’m intrigued by organizations like Metaculus, HyperMind, Good Judgment,9 Manifold Markets, and Samotsvety - all trying, in one way or another, to produce good probabilistic forecasts (using generalizable methods10) about world events.
If we could get good forecasts about questions like “When will AI systems be powerful enough to defeat all of humanity?” and “Will AI safety research in category X be successful?”, this could be useful for helping people make good decisions. (These questions seem very hard to get good predictions on using these organizations’ methods, but I think it’s an interesting goal.)
To explore this area, I’d suggest learning about forecasting generally (Superforecasting is a good starting point) and building up your own prediction track record on sites such as the above.
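Forecasting platforms typically score a track record with something like the Brier score (the squared error between your probability and the 0/1 outcome). As a rough illustration only - the questions and probabilities below are invented, not drawn from any real platform - here's a minimal Python sketch of scoring your own record:

```python
# Sketch: scoring a personal forecasting track record with Brier scores.
# The questions, probabilities, and outcomes below are made-up examples.

def brier_score(prob: float, outcome: bool) -> float:
    """Squared error between a probability forecast and the 0/1 outcome.
    0.0 is a perfect forecast; always guessing 50% earns 0.25."""
    return (prob - (1.0 if outcome else 0.0)) ** 2

# (question, forecast probability, what actually happened)
record = [
    ("Model X released this year", 0.7, True),
    ("Benchmark Y saturated by June", 0.4, False),
    ("Lab Z publishes a safety eval", 0.9, True),
]

scores = [brier_score(p, outcome) for _, p, outcome in record]
mean_brier = sum(scores) / len(scores)
print(f"Mean Brier score: {mean_brier:.3f}")  # → Mean Brier score: 0.087
```

Lower is better, and the interesting signal comes from comparing your mean score against the crowd's on the same questions over many resolved forecasts, not from any single number.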
“Meta” careers. There are a number of jobs focused on helping other people learn about key issues, develop key skills and end up in helpful jobs (a bit more discussion here).
It can also make sense to take jobs that put one in a good position to donate to nonprofits doing important work, to spread helpful messages, and to build skills that could be useful later (including in unexpected ways, as things develop), as I’ll discuss below.
Low-guidance jobs
This sub-section lists some projects that either don’t exist (but seem like they ought to), or are in very embryonic stages. So it’s unlikely you can get any significant mentorship working on these things.
I think the potential impact of making one of these work is huge, but I think most people will have an easier time finding a fit with jobs from the previous section (which is why I listed those first).
This section is largely to illustrate that I expect there to be more and more ways to be helpful as time goes on - and in case any readers feel excited and qualified to tackle these projects themselves, despite a lack of guidance and a distinct possibility that a project will make less sense in reality than it does on paper.
A big one in my mind is developing safety standards that could be used in a standards and monitoring regime. By this I mean answering questions like:
- What observations could tell us that AI systems are getting dangerous to humanity (whether by pursuing aims of their own or by helping humans do dangerous things)?
- A starting-point question: why do we believe today’s systems aren’t dangerous? What, specifically, are they unable to do that they’d have to do in order to be dangerous, and how will we know when that’s changed?
- Once AI systems have potential for danger, how should they be restricted, and what conditions should AI companies meet (e.g., demonstrations of safety and security) in order to loosen restrictions?
There is some early work going on along these lines, at both AI companies and nonprofits. If it goes well, I expect that there could be many jobs in the future, doing things like:
- Continuing to refine and improve safety standards as AI systems get more advanced.
- Providing AI companies with “audits” - examinations of whether their systems meet standards, provided by parties outside the company to reduce conflicts of interest.
- Advocating for the importance of adherence to standards. This could include advocating for AI companies to abide by standards, and potentially for government policies to enforce standards.
Other public goods for AI projects. I can see a number of other ways in which independent organizations could help AI projects exercise more caution / do more to reduce risks:
- Facilitating safety research collaborations. I worry that at some point, doing good alignment research will only be possible with access to state-of-the-art AI models - but such models will be extraordinarily expensive and exclusively controlled by major AI companies.
- I hope AI companies will be able to partner with outside safety researchers (not just rely on their own employees) for alignment research, but this could get quite tricky due to concerns about intellectual property leaks.
- A third-party organization could do a lot of the legwork of vetting safety researchers, helping them with their security practices, working out agreements with respect to intellectual property, etc. to make partnerships - and selective information sharing, more broadly - more workable.
- Education for key people at AI companies. An organization could help employees, investors, and board members of AI companies learn about the potential risks and challenges of advanced AI systems. I’m especially excited about this for board members, because:
- I’ve already seen a lot of interest from AI companies in forming strong ethics advisory boards, and/or putting well-qualified people on their governing boards (see footnote for the difference11). I expect demand to go up.
- Right now, I don’t think there are a lot of people who are both (a) prominent and “fancy” enough to be considered for such boards; (b) highly thoughtful about, and well-versed in, what I consider some of the most important risks of transformative AI (covered in this piece and the series it’s part of).
- An “education for potential board members” program could try to get people quickly up to speed on good board member practices generally, on risks of transformative AI, and on the basics of how modern AI works.
- Helping share best practices across AI companies. A third-party organization might collect information about how different AI companies are handling information security, alignment research, processes for difficult decisions, governance, etc. and share it across companies, while taking care to preserve confidentiality. I’m particularly interested in the possibility of developing and sharing innovative governance setups for AI companies.
Thinking and stuff. There’s tons of potential work to do in the category of “coming up with more issues we ought to be thinking about, more things people (and companies and governments) can do to be helpful, etc.”
- About a year ago, I published a list of research questions [EA · GW] that could be valuable and important to gain clarity on. I still mostly endorse this list (though I wouldn’t write it just as is today).
- A slightly different angle: it could be valuable to have more people thinking about the question, “What are some tangible policies governments could enact to be helpful?” E.g., early steps towards standards and monitoring. This is distinct from advising governments directly (it's earlier-stage).
Some AI companies have policy teams that do work along these lines. And a few Open Philanthropy employees work on topics along the lines of the first bullet point. However, I tend to think of this work as best done by people who need very little guidance (more at my discussion of wicked problems), so I’m hesitant to recommend it as a mainline career option.
Things you can do if you’re not ready for a full-time career change
Switching careers is a big step, so this section lists some ways you can be helpful regardless of your job - including preparing yourself for a later switch.
First and most importantly, you may have opportunities to spread key messages via social media, talking with friends and colleagues, etc. I think there’s a lot of potential to make a difference here, and I wrote a separate piece on this specifically.
Second, you can explore potential careers like those I discuss above. I’d suggest generally checking out job postings, thinking about what sorts of jobs might be a fit for you down the line, meeting people who work in jobs like those and asking them about their day-to-day, etc.
Relatedly, you can try to keep your options open.
- It’s hard to predict what skills will be useful as AI advances further and new issues come up.
- Being ready to switch careers when a big opportunity comes up could be hugely valuable - and hard. (Most people would have a lot of trouble doing this late in their career, no matter how important!)
- Building up the financial, psychological and social ability to change jobs later on would (IMO) be well worth a lot of effort.
Right now there aren’t a lot of obvious places to donate (though you can donate to the Long-Term Future Fund12 if you feel so moved).
- I’m guessing this will change in the future, for a number of reasons.13
- Something I’d consider doing is setting some pool of money aside, perhaps invested such that it’s particularly likely to grow a lot if and when AI systems become a lot more capable and impressive,14 in case giving opportunities come up in the future.
- You can also, of course, donate to things today that others aren’t funding for whatever reason.
Learning more about key issues could broaden your options. I think the full series I’ve written on key risks is a good start. To do more, you could:
- Actively engage with this series by writing your own takes, discussing with others, etc.
- Consider various online courses15 on relevant issues.
- I think it’s also good to get as familiar with today’s AI systems (and the research that goes into them) as you can.
- If you’re happy to write code, you can check out coding-intensive guides and programs (examples in footnote).16
- If you don’t want to code but can read somewhat technical content, I’d suggest getting oriented with some basic explainers on deep learning17 and then reading significant papers on AI and AI safety.18
- Whether you’re very technical or not at all, I think it’s worth playing with public state-of-the-art AI models, as well as seeing highlights of what they can do via Twitter and such.
Finally, if you happen to have opportunities to serve on governing boards or advisory boards for key organizations (e.g., AI companies), I think this is one of the best non-full-time ways to help.
- I don’t expect this to apply to most people, but wanted to mention it in case any opportunities come up.
- It’s particularly important, if you get a role like this, to invest in educating yourself on key issues.
Some general advice
I think full-time work has huge potential to help, but also big potential to do harm, or to burn yourself out. So here are some general suggestions.
Think about your own views on the key risks of AI, and what it might look like for the world to deal with the risks. Most of the jobs I’ve discussed aren’t jobs where you can just take instructions and apply narrow skills. The issues here are tricky, and it takes judgment to navigate them well.
Furthermore, no matter what you do, there will almost certainly be people who think your work is useless (if not harmful).19 This can be very demoralizing. I think it’s easier if you’ve thought things through and feel good about the choices you’re making.
I’d advise trying to learn as much as you can about the major risks of AI (see above for some guidance on this) - and/or trying to work for an organization whose leadership you have a good amount of confidence in.
Jog, don’t sprint. Skeptics of the “most important century” hypothesis will sometimes say things like “If you really believe this, why are you working normal amounts of hours instead of extreme amounts? Why do you have hobbies (or children, etc.) at all?” And I’ve seen a number of people with an attitude like: “THIS IS THE MOST IMPORTANT TIME IN HISTORY. I NEED TO WORK 24/7 AND FORGET ABOUT EVERYTHING ELSE. NO VACATIONS.”
I think that’s a very bad idea.
Trying to reduce risks from advanced AI is, as of today, a frustrating and disorienting thing to be doing. It’s very hard to tell whether you’re being helpful (and as I’ve mentioned, many will inevitably think you’re being harmful).
I think the difference between “not mattering,” “doing some good” and “doing enormous good” comes down to how you choose the job, how good at it you are, and how good your judgment is (including what risks you’re most focused on and how you model them). Going “all in” on a particular objective seems bad on these fronts: it poses risks to open-mindedness, to mental health and to good decision-making (I am speaking from observations here, not just theory).
That is, I think it’s a bad idea to try to be 100% emotionally bought into the full stakes of the most important century - I think the stakes are just too high for that to make sense for any human being.
Instead, I think the best way to handle “the fate of humanity is at stake” is probably to find a nice job and work about as hard as you’d work at another job, rather than trying to make heroic efforts to work extra hard. (I criticized heroic efforts in general here.)
I think this basic formula (working in some job that is a good fit, while having some amount of balance in your life) is what’s behind a lot of the most important positive events in history to date, and presents possibly historically large opportunities today.
Special thanks to Alexander Berger, Jacob Eliosoff, Alexey Guzey, Anton Korinek and Luke Muehlhauser for especially helpful comments on this post. A lot of other people commented helpfully as well.
Footnotes
1. I use “aligned” to specifically mean that AIs behave as intended, rather than pursuing dangerous goals of their own. I use “safe” more broadly to mean that an AI system poses little risk of catastrophe for any reason in the context it’s being used in. It’s OK to mostly think of them as interchangeable in this post. ↩
2. AI labs with alignment teams: Anthropic, DeepMind and OpenAI. Disclosure: my wife is co-founder and President of Anthropic, and used to work at OpenAI (and has shares in both companies); OpenAI is a former Open Philanthropy grantee.
    Academic labs: there are many of these; I’ll highlight the Steinhardt lab at Berkeley (Open Philanthropy grantee), whose recent research I’ve found especially interesting.
    Independent nonprofits: examples would be Alignment Research Center and Redwood Research (both Open Philanthropy grantees, and I sit on the board of both).
    You can also ↩
3. Examples: AGI Safety Fundamentals, SERI MATS, MLAB [EA · GW] (all of which have been supported by Open Philanthropy) ↩
4. On one hand, deceptive and manipulative AIs could be dangerous. On the other, it might be better to get AIs trying to deceive us before they can consistently succeed; the worst of all worlds might be getting this behavior by accident with very powerful AIs. ↩
5. Though I think it’s inherently harder to get evidence of low risk than evidence of high risk, since it’s hard to rule out risks arising as AI systems get more capable. ↩
6. Why do I simultaneously think “This is a mature field with mentorship opportunities” and “This is a badly neglected career track for helping with the most important century”?
    In a nutshell, most good security people are not working on AI. It looks to me like there are plenty of people who are generally knowledgeable and effective at good security, but there’s also a huge amount of need for such people outside of AI specifically.
    I expect this to change eventually if AI systems become extraordinarily capable. The issue is that it might be too late at that point - the security challenges in AI seem daunting (and somewhat AI-specific) to the point where it could be important for good people to start working on them many years before AI systems become extraordinarily powerful. ↩
7. Here’s Katja Grace [LW · GW] arguing along these lines. ↩
8. An Open Philanthropy grantee. ↩
9. Open Philanthropy has funded Metaculus and contracted with Good Judgment and HyperMind. ↩
10. That is, these groups are mostly trying things like “Incentivize people to make good forecasts; track how good people are making forecasts; aggregate forecasts” rather than “Study the specific topic of AI and make forecasts that way” (the latter is also useful, and I discuss it below). ↩
11. The governing board of an organization has the hard power to replace the CEO and/or make other decisions on behalf of the organization. An advisory board merely gives advice, but in practice I think this can be quite powerful, since I’d expect many organizations to have a tough time doing bad-for-the-world things without backlash (from employees and the public) once an advisory board has recommended against them. ↩
12. Open Philanthropy, which I’m co-CEO of, has supported this fund, and its current Chair is an Open Philanthropy employee. ↩
13. I generally expect there to be more and more clarity about what actions would be helpful, and more and more people willing to work on them if they can get funded. A bit more specifically and speculatively, I expect AI safety research to get more expensive as it requires access to increasingly large, expensive AI models. ↩
14. Not investment advice! I would only do this with money you’ve set aside for donating such that it wouldn’t be a personal problem if you lost it all. ↩
15. Some options here, here [? · GW], here [EA · GW], here. I’ve made no attempt to be comprehensive - these are just some links that should make it easy to get rolling and see some of your options. ↩
16. Spinning Up in Deep RL, ML for Alignment Bootcamp [EA · GW], Deep Learning Curriculum. ↩
17. For the basics, I like Michael Nielsen’s guide to neural networks and deep learning; 3Blue1Brown has a video explainer series that I haven’t watched but that others have recommended highly. I’d also suggest The Illustrated Transformer (the transformer is the most important AI architecture as of today).
    For a broader overview of different architectures, see Neural Network Zoo.
    You can also check out various Coursera etc. courses on deep learning/neural networks. ↩
18. I feel like the easiest way to do this is to follow AI researchers and/or top labs on Twitter. You can also check out Alignment Newsletter or ML Safety Newsletter for alignment-specific content. ↩
19. Why?
    One reason is the tension between the “caution” and “competition” frames: people who favor one frame tend to see the other as harmful.
    Another reason: there are a number of people who think we’re more-or-less doomed without a radical conceptual breakthrough on how to build safe AI (they think the sorts of approaches I list here are hopeless, for reasons I confess I don’t understand very well). These folks will consider anything that isn’t aimed at a radical breakthrough ~useless, and consider some of the jobs I list in this piece to be harmful, if they are speeding up AI development and leaving us with less time for a breakthrough.
    At the same time, working toward the sort of breakthrough these folks are hoping for means doing pretty esoteric, theoretical research that many other researchers think is clearly useless.
    And trying to make AI development slower and/or more cautious is harmful according to some people who are dismissive of risks, and think the priority is to push forward as fast as we can with technology that has the potential to improve lives. ↩