Posts

Why We Need More Shovel-Ready AI Notkilleveryoneism Megaproject Proposals 2025-01-20T22:38:26.593Z
A proposal for iterated interpretability with known-interpretable narrow AIs 2025-01-11T14:43:05.423Z
Read The Sequences As If They Were Written Today 2025-01-02T02:51:36.537Z
A few questions about recent developments in EA 2024-11-23T02:36:25.728Z
How do you actually obtain and report a likelihood function for scientific research? 2024-02-11T17:42:49.956Z
BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism? 2023-07-06T17:32:08.675Z

Comments

Comment by Peter Berggren (peter-berggren) on Read The Sequences As If They Were Written Today · 2025-01-19T04:41:33.571Z · LW · GW

That's sort of it, but it was specifically talking about certain types of self-deceptive behavior that appear instrumentally rational. The problem is that once you've deceived yourself, you can't tell whether doing so was a good idea.

Comment by Peter Berggren (peter-berggren) on Read The Sequences As If They Were Written Today · 2025-01-17T23:04:41.475Z · LW · GW

Thanks for the support. I'll try to work a bit more on my first post in the coming days, and I hope it will be up soon.

Comment by Peter Berggren (peter-berggren) on Design 3: Intentionality · 2025-01-14T17:20:34.350Z · LW · GW

I think you've done a really good job so far of setting up a series of clear exercises for techniques. The key issue I had with Hammertime was that it often seemed a bit disorganized, frequently changing plans and switching topics.

Comment by Peter Berggren (peter-berggren) on Yoda Timers 3: Speed · 2025-01-13T02:31:43.328Z · LW · GW

My most recent post on LessWrong (https://www.lesswrong.com/posts/yj2hyrcGMwpPooqfZ/a-proposal-for-iterated-interpretability-with-known), which is also my first post proposing a novel avenue for AI alignment research, took me a total of 30 minutes.

Comment by Peter Berggren (peter-berggren) on Bug Hunt 3 · 2025-01-13T01:59:56.498Z · LW · GW

Probably for me, the main thing that helped was Yoda Timers. Then again, that was probably just a function of getting to practice it much more than anything else. Next up is probably TAPs.

Comment by Peter Berggren (peter-berggren) on Focusing · 2025-01-08T21:28:15.892Z · LW · GW

I have something very similar to the second felt sense given when I've spent too much time on my computer and get kind of vaguely sleepy and disoriented when I try to stop even for a moment. The term I use is similar to the one my parents used to describe the tangible expression of this feeling, and it's "video game poisoning."

Comment by Peter Berggren (peter-berggren) on Three Miniatures · 2025-01-08T21:18:50.481Z · LW · GW

One rationality technique that I can infer from my past experiences is one I'm not really sure how to name; possibilities include "path divergence analysis," "counterfactual defaults," "adjacent life heuristic," "near-miss solutions," and "reality branch mining." The idea is to look at what common actions for you would be if your life had gone slightly differently (e.g. you went to a different school, were born in a different country, etc.), see what sort of actions you would commonly take under these conditions, and see if these actions have value in your current life circumstances. One example that I applied was in terms of realizing that if my college major were slightly different, I'd have learned to code in C++ by now, realizing that learning C++ has significant advantages for me even with my major being what it is, and finally learning C++.

Comment by Peter Berggren (peter-berggren) on TAPs 2 · 2025-01-07T02:51:57.998Z · LW · GW

The closest I've come to a true "factory reset" was when I realized, a few times, that school clubs I was a part of were becoming toxic and unproductive. However, I can't really point to a single button; more just a gradual stream of one bad impression after another, at which point I started to slowly disengage.

Comment by Peter Berggren (peter-berggren) on Yoda Timers 2 · 2025-01-03T03:43:34.697Z · LW · GW

Set a Yoda Timer and share the most important idea you haven’t had time to express. Five minutes is all you get.

A lot of modern AI alignment research is being done within the academic system, but precisely because it's done within the academic system, it's largely ignored by the independent/dedicated nonprofit research community relative to that community's own work. Conversely, it likely gets much more attention within academia.

I don't think the dynamic here is "each team likes their own people best." I think it's due to an unwarranted degree of skepticism of the academic system, which may be warranted in non-emergencies but is less warranted when facing truly apocalyptic threats. The academic system has produced a lot of valuable research on climate change and nuclear risk, and I'd expect its research on AI to be broadly similar.

The fact that the first few successful researchers weren't academics isn't really a point against the academic system here, any more than Priestley and Lavoisier not being academics is a point against academic chemists. The supposed pre-paradigmaticity of the field also isn't really a point against the academic system here, given that many protosciences (e.g. Freudian psychoanalysis) were able to grow into sciences within the academic system, and this pattern can be seen continuing in fields such as astrobiology.

Comment by Peter Berggren (peter-berggren) on Bug Hunt 2 · 2025-01-02T20:44:16.429Z · LW · GW

My greatest ambition is to create a fully trainable art of rationality that’s so good it gets taught to every high schooler in the country and bankrupts multiple industries that prey on irrational behavior in the process. It may seem impossible, but the success of anti-smoking efforts against an extremely addictive product backed by a massive advertising industry suggests it's achievable, and the fact that the Internet exists now, and didn't exist then, suggests it may be even easier.

Comment by Peter Berggren (peter-berggren) on Read The Sequences As If They Were Written Today · 2025-01-02T19:47:14.240Z · LW · GW

Some of them, sure, but for a lot of them I'd be like "that's completely outdated," and for others I'd be like "OK, that's obviously meant to be a jab at some specific person you don't like."

Comment by Peter Berggren (peter-berggren) on Hammertime Day 9: Time Calibration · 2024-12-30T18:57:00.227Z · LW · GW

The worst case of Planning Fallacy that I know recently was my plan to finish a blog post in a week. Now, ten weeks later, I haven't finished it. But when I actually started to work on it, I got a third of it done in a half hour.

Comment by Peter Berggren (peter-berggren) on Hammertime Day 8: Sunk Cost Faith · 2024-12-30T18:14:36.787Z · LW · GW

I agree with you on this, but I also don't think "sunk cost fallacy" is the right term for what you're describing. The rational behavior here is to factor a random error term from mood swings into these calculations, and if you can't fully factor it in, then generally err on the side of keeping projects going. I understand "sunk cost fallacy" to mean "factoring the amount of effort already spent into these decisions," which does seem like a pure fallacy to me.

It's reasonable e.g. when about to watch a movie to say "I'm in a bad mood, I don't know how bad a mood I'm in, so even though I think the movie's not worth watching, I'll watch it anyway because I don't trust my assessment and I decided to watch it when in a calmer state of mind." Sunk cost fallacy is where you treat it differently if you bought yourself the tickets versus if they were given to you as a gift, which does seem, even in your apology for "sunk cost fallacy," to remain a fallacy.

Comment by Peter Berggren (peter-berggren) on Hammertime Day 8: Sunk Cost Faith · 2024-12-30T16:49:54.941Z · LW · GW

At any given point, you have some probability distribution over how worthwhile the project will be. The distribution can change over time, but it can change either for better or for worse. Therefore, at any point, if a rational agent expects it not to be worthwhile to expend the remaining effort to get the result, they should stop.

Of course, if you are irrational and intentionally fail to account for evidence as a way of getting out of work, this does not apply, but that's the problem then, not your lack of sunk costs.

Comment by Peter Berggren (peter-berggren) on Hammertime Day 8: Sunk Cost Faith · 2024-12-30T02:34:05.342Z · LW · GW

Sorry if this is confusing. What I'm saying is, you have some estimate of the project's valuation, and this factors in the information that you expect to get in the future about the project's valuation (cf. Conservation of Expected Evidence). If there's some chance the project will turn out worthwhile, you know that chance already. But there must also be some counterbalancing chance that the project will turn out even less worthwhile than you think.
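To put the Conservation of Expected Evidence point a bit more formally (my own sketch, with $V$ the project's value and $\mathbb{E}_t$ the expectation given your information at time $t$):

$$\mathbb{E}_t\big[\mathbb{E}_{t+1}[V]\big] = \mathbb{E}_t[V]$$

Your current estimate already averages over tomorrow's possible updates, so any chance the project looks more worthwhile later must be exactly balanced by a chance it looks less worthwhile.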

Comment by Peter Berggren (peter-berggren) on Hammertime Day 8: Sunk Cost Faith · 2024-12-29T22:42:50.953Z · LW · GW

It seems to me like the "random walk" case you described is poorly formed; the possibility of a project turning out to be worth it after all should already be factored into one's estimate of how "worth it" it is. If it isn't, then that's a problem of motivated reasoning, not a reason to adopt a sunk cost fallacy.

Intentionally inducing fallacious reasoning in oneself is classified as "Dark Arts" for a reason, especially since it can bias one's own assessment of how well it turns out and whether to continue doing it.

Comment by Peter Berggren (peter-berggren) on Hammertime Day 7: Aversion Factoring · 2024-12-29T20:54:41.330Z · LW · GW

Probably the most consequential trivial inconvenience for me (recently) was that I stayed up very late (hours past when I planned to go to sleep) because my phone was right next to my bed. This was because the alternate charging spot I had set up to prevent this from happening was mildly cluttered.

Comment by Peter Berggren (peter-berggren) on Hammertime Day 6: Mantras · 2024-12-24T00:11:44.490Z · LW · GW

One of my favorite mantras is "A citizen has the courage to make the safety of the human race their personal responsibility" from the movie Starship Troopers. While a lot of the meaning is caught up in the movie's setting, the meaning that I personally draw from it is that an important part of living in the world is working hard to make the world a better place, and not assuming someone else will do it for you.

Comment by Peter Berggren (peter-berggren) on A few questions about recent developments in EA · 2024-11-25T05:54:14.401Z · LW · GW

I'm not proposing to never take breaks. I'm proposing something more along the lines of "find the precisely-calibrated amount of breaks to maximize productivity and take exactly those."

Comment by Peter Berggren (peter-berggren) on A few questions about recent developments in EA · 2024-11-23T17:56:50.115Z · LW · GW

OK then, so how would one go about making an organization that is capable of funding and building this? Are there any interested donors yet?

Comment by Peter Berggren (peter-berggren) on A few questions about recent developments in EA · 2024-11-23T05:13:45.941Z · LW · GW

Very much agree on this one, as do many other people that I know of. However, the key counterargument as to why this may be better as an EA project than a rationality one is that "rationality" is vague on what you're applying it to, while "EA" is at least slightly more clear, and a community like this benefits from having clear goals. Nevertheless, it may make sense to market it as a "rationality" project and just have EA be part of the work it does.

So the question now turns to, how would one go about building it?

Comment by Peter Berggren (peter-berggren) on A few questions about recent developments in EA · 2024-11-23T04:10:17.796Z · LW · GW

Thanks for giving some answers here to these questions; it was really helpful to have them laid out like this.

1. In hindsight, I was probably talking more about moves towards decentralization of leadership, rather than decentralization of funding. I agree that greater decentralization of funding is a good thing, but it seems to me like, within the organizations funded by a given funder, decentralization of leadership is likely useless (if leadership decisions are still being made by informal networks between orgs rather than formal ones), or it may lead to a lack of clarity and direction. 

3. I understand the dynamics that may cause the overrepresentation of women. However, that still doesn't completely explain why there is an overrepresentation of white women, even when compared to racial demographics within EA at large. Additionally, this also doesn't explain why the overrepresentation of women here isn't seen as a problem on CEA's part, if even just from an optics perspective.

4. Makes sense, but I'm still concerned that, say, if CEA had an anti-Stalinism team, they'd be reluctant to ever say "Stalinism isn't a problem in EA."

5. Again, this was a question that was badly worded on my end. I was referring more specifically to organizations within AI safety, more than EA at large. I know that AMF, GiveDirectly, The Humane League, etc. fundraise outside EA.

6. I was asking a descriptive question here, not a normative one. Guilt by association, even if weak, is a very commonly used form of argument, and so I would expect it to be used in this case.

7. That makes sense. That was one of my hypotheses (hence my phrase "at least upon initial examination"), and I guess in hindsight it's probably the best one.

10. Starting an AI capabilities company that does AI safety as a side project generally hasn't gone well, and yet people keep doing it. The fact that something hasn't gone well in the past doesn't seem to me to be a sufficient explanation for why people don't keep doing it, especially because it largely seems like Leverage failed for Leverage-specific reasons (i.e. too much engagement with woo). Additionally, your argument here seems to prove too much; the Manhattan Project was a large scientific project operating under an intense structure, and yet it was able to maintain good epistemics (i.e. not fixating too hard on designs that wouldn't work) under those conditions. Same with a lot of really intense start-ups.

11. They may not be examples of the unilateralist's curse in the original sense, but the term seems to have been expanded well past its original meaning, and they're examples of that expanded meaning.

12. It seems to me like this is work of a different type than technical alignment work, and could likely be accomplished by hiring different people than the people already working on technical alignment, so it's not directly trading off against that.

Comment by Peter Berggren (peter-berggren) on How do you actually obtain and report a likelihood function for scientific research? · 2024-02-12T04:55:50.038Z · LW · GW

Sorry; I thought I had used the "Question" type.

Comment by Peter Berggren (peter-berggren) on MIRI 2024 Mission and Strategy Update · 2024-01-05T03:48:43.115Z · LW · GW

What's preventing MIRI from making massive investments into human intelligence augmentation? If I recall correctly, MIRI is most constrained on research ideas, but human intelligence augmentation is a huge research idea that other grantmakers, for whatever reason, aren't funding. There are plenty of shovel-ready proposals already, e.g. https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing; why doesn't MIRI fund them?

Comment by Peter Berggren (peter-berggren) on BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism? · 2023-07-07T13:07:23.490Z · LW · GW

Thank you very much! I won't be sending you a bounty, as you're not an AI ethicist of the type discussed here, but I'd be happy to send $50 to a charity of your choice. Which one do you want?

Comment by Peter Berggren (peter-berggren) on BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism? · 2023-07-06T23:11:29.191Z · LW · GW

I've seen plenty of AI x-risk skeptics present their object-level argument, and I'm not interested in paying out a bounty for stuff I already have. I'm most interested in the arguments from this specific school of thought, and that's why I'm offering the terms I offer.

Comment by Peter Berggren (peter-berggren) on Schelling fences on slippery slopes · 2022-11-17T15:52:25.310Z · LW · GW

Man, this article hits different now that I know the psychopharmacology theory of the FTX crash...

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-07-22T19:46:44.802Z · LW · GW

Have any prizes been awarded yet? I haven't heard anything about prizes, but that could have just been that I didn't win one...

Comment by Peter Berggren (peter-berggren) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-17T02:49:15.143Z · LW · GW

I'm still not sure why exactly people (I'm thinking of a few in particular, but this applies to many in the field) tell very detailed stories of AI domination like "AI will use protein nanofactories to embed tiny robots in our bodies to destroy all of humanity at the press of a button." This seems like a classic use of the conjunction fallacy, and it doesn't seem like those people really flinch from the word "and" like the Sequences tell them they should.

Furthermore, it seems like people within AI alignment aren't taking the "sci-fi" criticism as seriously as they could. I don't think most people who have that objection are saying "this sounds like science fiction, therefore it's wrong." I think they're more saying "these hypothetical scenarios are popular because they make good science fiction, not because they're likely." And I have yet to find a strong argument against the latter form of that point.

Please let me know if I'm doing an incorrect "steelman," or if I'm missing something fundamental here.

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-28T02:10:29.159Z · LW · GW

Some figures within machine learning have argued that the safety of broad-domain future AI is not a major concern. They argue that since narrow-domain present-day AI is already dangerous, it should be our primary concern, rather than future AI. But it doesn't have to be either/or.

Take climate change. Some climate scientists study the future possibilities of ice shelf collapses and disruptions of global weather cycles. Other climate scientists study the existing problems of more intense natural disasters and creeping desertification. But these two fields don't get into fights over which field is "more important." Instead, both fields can draw from a shared body of knowledge and respect each other's work as valuable and relevant.

The same principle applies to machine learning and artificial intelligence. Some researchers focus on remote but high-stakes research like the alignment of artificial general intelligence (AGI). Others focus on relatively smaller but nearer-term concerns like social media radicalization and algorithmic bias. These fields are both important in their own ways, and both fields have much to learn from each other. However, given how few resources have been put into AGI alignment compared to nearer-term research, many experts in the field feel that alignment research is currently more worthy of attention.

(tech executives, ML researchers)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-28T02:00:50.122Z · LW · GW

You wouldn't hire an employee without references. Why would you make an AI that doesn't share your values?

(policymakers, tech executives)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-28T01:57:01.396Z · LW · GW

The future is not a race between AI and humanity. It's a race between AI safety and AI disaster.

(Policymakers, tech executives)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-28T01:56:35.970Z · LW · GW

We need to be proactive about AI safety, not reactive.

(Policymakers)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-28T00:02:44.745Z · LW · GW

In the Soviet Union, there was a company that made machinery for vulcanizing rubber. They had the option to make more efficient machines, instead of their older models. However, they didn't do it, because they wouldn't get paid as much for making the new machines. Why would that be? Wouldn't more efficient machines be more desirable?

Well, yes, but the company got paid per pound of machine, and the new machines were lighter.

Now, you may say that this is just a problem with communist economies. Well, capitalist economies fall into very similar traps. If a company has a choice of making slightly more profit by putting massive amounts of pollution into public waterways, they'll very often do it. The profit that they get is concentrated to them, and the pollution of waterways is spread out over everyone else, so of course they'll do it. Not doing it would be just as foolish as the Soviet company making new machines that weighed less.

Modern machine learning systems used in artificial intelligence have very similar problems. Game-playing AIs have exploited glitches in the games they play. AIs rewarded based on human judgements have deceived their judges. Social media recommendation AIs have recommended posts that made people angry and radicalized their politics, because that counted as "engagement."

At this point, we have stumbled into an economic system which combines capitalist private enterprise with regulation to correct for market failures. But there may not be time for "stumbling" once superhuman-level AI comes around. If a superintelligent AI with poorly designed goals is told to make thumbtacks, and it decides to turn the universe and everyone in it into thumbtacks... we're doomed.

Let's make sure AI does what we want it to do, not just what we tell it to do, the first time.

 

(policymakers, tech executives, ML researchers)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T23:45:06.170Z · LW · GW

There is an enormous amount of joy, fulfillment, exploration, discovery, and prosperity in humanity's future... but only if advanced AI values those things.

 

(Policymakers, tech executives)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T19:12:07.227Z · LW · GW

Even if you don't assume that the long-term future matters much, preventing AI risk is still a valuable policy objective. Here's why.

In regulatory cost-benefit analysis, a tool called the "value of a statistical life" is used to measure how much value people place on avoiding risks to their own life (source). Most government agencies, by asking about topics like how much people will pay for safety features in their car or how much people are paid for working in riskier jobs, assign a value of about ten million dollars to one statistical life. That is, reducing the risk of a thousand people dying by one in a thousand each is worth ten million dollars of government money.

If experts on AI such as Stuart Russell are to be believed (and if they're not to be believed, who is?), then superintelligent AI poses a sizeable risk of leading to the end of humanity. For a very conservative estimate, let's just assume that the AI will only kill every single American. There are currently over 330 million Americans (source), and so the use of the value of a statistical life implies that reducing AI risk by just one in a million is worth:

330 million Americans *  1 outcome in which all of them die / 1 million outcomes * 10 million dollars / statistical life = $3,300,000,000

No, this is not a misprint. It is worth 3.3 billion dollars to reduce the risk of human extinction due to AI by one in one million, based on the government's own cost-effectiveness metrics, even assuming that the long-term future has no significance, and even assuming that non-American lives have no significance.
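A quick sketch of that calculation in code (the population and VSL figures are the ones quoted above; this is just a sanity check of the arithmetic, not official cost-benefit methodology):

```python
# Sanity check of the value-of-statistical-life arithmetic above,
# using the same figures quoted in the text.
americans = 330_000_000                 # US population (comment's figure)
vsl_dollars = 10_000_000                # ~$10M per statistical life
risk_reduction_denominator = 1_000_000  # a one-in-a-million risk reduction

# Integer arithmetic avoids floating-point rounding:
value = americans * vsl_dollars // risk_reduction_denominator
print(f"${value:,}")  # $3,300,000,000
```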

And AI experts say we could do a lot more for a lot less.

(policymakers)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T18:57:13.757Z · LW · GW

Clarke’s First Law goes: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

Stuart Russell is only 60. But what he lacks in age, he makes up in distinction: he’s a computer science professor at Berkeley, neurosurgery professor at UCSF, DARPA advisor, and author of the leading textbook on AI. His book Human Compatible states that superintelligent AI is possible; Clarke would recommend we listen.

(tech executives, ML researchers)
(adapted slightly from the first paragraphs of the Slate Star Codex review of Human Compatible)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T16:12:02.636Z · LW · GW

Climate change was weird in the 1980s. Pandemics were weird in the 2010s. Every world problem is weird... until it happens.

(policymakers)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T16:05:57.351Z · LW · GW

AI might be nowhere near human-level yet. We're also nowhere near runaway climate change, but we still care about it.

(policymakers, tech executives)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T16:04:54.865Z · LW · GW

"Follow the science" doesn't just apply to pandemics. It's time to listen to AI experts, not AI pundits.

 

(policymakers)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T02:08:00.669Z · LW · GW

Fixed; thanks!

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T02:06:59.359Z · LW · GW

This seems like it falls into the trap of being "too weird" for policymakers to take seriously. Good concept; maybe work on the execution a bit?

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T02:04:33.025Z · LW · GW

I thought that would ruin the parallelism and flow a bit, and this isn't intended for the "paragraph" category, so I didn't put that in yet.

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T01:50:13.741Z · LW · GW

There is a certain strain of thinker who insists on being more naturalist than Nature. They will say with great certainty that since Thor does not exist, Mr. Tesla must not exist either, and that the stories of Asclepius disprove Pasteur. This is quite backwards: it is reasonable to argue that a machine will never think because the Mechanical Turk couldn't; it is madness to say it will never think because Frankenstein's monster could. As well demand that we must deny Queen Victoria lest we accept Queen Mab, or doubt Jack London lest we admit Jack Frost. Nature has never been especially interested in looking naturalistic, and it ignores these people entirely and does exactly what it wants.

(adapted from Scott Alexander's G. K. Chesterton on AI Risk)

 

(policymakers, tech executives)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T01:37:06.763Z · LW · GW

You might think that AI risk is no big deal. But would you bet your life on it?

(policymakers)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T01:32:18.753Z · LW · GW

Betting against the people who said pandemics were a big deal, six years ago, is a losing proposition.

(policymakers, tech executives)

(source)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T01:29:36.918Z · LW · GW

Just because tech billionaires care about AI risk doesn't mean you shouldn't. Even if a fool says the sky is blue, it's still blue.

(policymakers, maybe ML researchers)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T01:25:32.584Z · LW · GW

Hoping you'll run out of gas before you drive off a cliff is a losing strategy. Align AGI; don't count on long timelines.

(ML researchers)

(adapted from Human Compatible)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T01:05:57.852Z · LW · GW

If the media reported on other dangers like it reported on AI risk, it would talk about issues very differently. It would compare events in the Middle East to Tom Clancy novels. It would dismiss runaway climate change by saying it hasn't happened yet. It would think of the risk of nuclear war in terms of putting people out of work. It would compare asteroid impacts to mudslides. It would call meteorologists "nerds" for talking about hurricanes. 

AI risk is serious, and it isn't taken seriously. It's time to look past the sound bites and focus on what experts are really saying. 

(policymakers, tech executives) 

(based on Scott Alexander's post on the subject)

Comment by Peter Berggren (peter-berggren) on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T00:49:30.056Z · LW · GW

People said man wouldn't fly for a million years. Airplanes were fighting each other eleven years later. Superintelligent AI might happen faster than you think.

(policymakers, tech executives)

(source) (other source)