Posts

Goals don't necessarily start to crystallize the moment AI is capable enough to fake alignment 2025-02-08T23:44:46.081Z
No one has the ball on 1500 Russian olympiad winners who've received HPMOR 2025-01-12T11:43:36.560Z
How to Give in to Threats (without incentivizing them) 2024-09-12T15:55:50.384Z
Can agents coordinate on randomness without outside sources? 2024-07-06T13:43:44.633Z
Claude 3 claims it's conscious, doesn't want to die or be modified 2024-03-04T23:05:00.376Z
FTX expects to return all customer money; clawbacks may go away 2024-02-14T03:43:13.218Z
An EA used deceptive messaging to advance their project; we need mechanisms to avoid deontologically dubious plans 2024-02-13T23:15:08.079Z
NYT is suing OpenAI&Microsoft for alleged copyright infringement; some quick thoughts 2023-12-27T18:44:33.976Z
Some quick thoughts on "AI is easy to control" 2023-12-06T00:58:53.681Z
It's OK to eat shrimp: EAs Make Invalid Inferences About Fish Qualia and Moral Patienthood 2023-11-13T16:51:53.341Z
AI pause/governance advocacy might be net-negative, especially without a focus on explaining x-risk 2023-08-27T23:05:01.718Z
Gradient descent might see the direction of the optimum from far away 2023-07-28T16:19:05.279Z
A transcript of the TED talk by Eliezer Yudkowsky 2023-07-12T12:12:34.399Z
A smart enough LLM might be deadly simply if you run it for long enough 2023-05-05T20:49:31.416Z
Try to solve the hard parts of the alignment problem 2023-03-18T14:55:11.022Z
Mikhail Samin's Shortform 2023-02-07T15:30:24.006Z
I have thousands of copies of HPMOR in Russian. How to use them with the most impact? 2023-01-03T10:21:26.853Z
You won’t solve alignment without agent foundations 2022-11-06T08:07:12.505Z

Comments

Comment by Mikhail Samin (mikhail-samin) on [RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm. · 2025-02-15T11:03:33.574Z · LW · GW

Three years later, I think the post was right, and the pushback was wrong.

People who disagreed with this post lost their bets.

My understanding is that when the post was written, Anthropic had already had the first Claude, so the knowledge was available to the community.

A month after this post was retracted, ChatGPT was released.

Plausibly, "the EA community" would've been in a better place if it started to publicly and privately use its chips for AI x-risk advocacy and talking about the short timelines.

Comment by Mikhail Samin (mikhail-samin) on How AI Takeover Might Happen in 2 Years · 2025-02-10T22:04:01.123Z · LW · GW

Do you think if an AI with random goals that doesn’t get acausally paid to preserve us takes over, then there’s a meaningful chance there will be some humans around in 100 years? What does it look like?

Comment by Mikhail Samin (mikhail-samin) on Mikhail Samin's Shortform · 2025-02-10T08:58:12.075Z · LW · GW

“we believe its benefits likely outweigh its costs” is “it was a bad bill and now it’s likely net-positive”, not exactly unequivocally supporting it. Compare that even to the language in calltolead.org.


Edit: AFAIK Anthropic lobbied against SSP-like requirements in private.

Comment by Mikhail Samin (mikhail-samin) on How AI Takeover Might Happen in 2 Years · 2025-02-09T16:34:56.154Z · LW · GW

I think this story is very good, probably the most realistic story of AI takeover I’ve ever read; though, I don't believe there's any (edit: meaningful) chance AI will care about us enough to spare us a little sunlight.

Comment by Mikhail Samin (mikhail-samin) on Wired on: "DOGE personnel with admin access to Federal Payment System" · 2025-02-06T11:23:15.114Z · LW · GW

Elez, who has visited a Kansas City office housing BFS systems, has many administrator-level privileges. Typically, those admin privileges could give someone the power to log in to servers through secure shell access, navigate the entire file system, change user permissions, and delete or modify critical files

as a policy, it seems bad to have more people with rm -rf-level access to the US economy.

the president can launch literal nukes and get some in return; there are other highly visible officials with the power to nuke the economy. but the president can't delegate the nuke launch decisions to others.

giving such access to more people, especially random, low-visibility people, seems Bad, regardless of how competent they seem to those who appointed them.

Comment by Mikhail Samin (mikhail-samin) on Mikhail Samin's Shortform · 2025-02-05T03:07:40.760Z · LW · GW

The private data is, pretty consistently, Anthropic being very similar to OpenAI where it matters the most and failing to mention in private policy-related settings its publicly stated belief on the risk that smarter-than-human AI will kill everyone. 

Comment by Mikhail Samin (mikhail-samin) on Mikhail Samin's Shortform · 2025-02-05T03:01:46.561Z · LW · GW

(Dario’s post did not impact the sentiment of my shortform post.)

Comment by Mikhail Samin (mikhail-samin) on Mikhail Samin's Shortform · 2025-02-04T17:22:45.343Z · LW · GW

My argument isn’t “nuclear weapons have a higher chance of saving you than killing you”. People didn’t know about Oppenheimer when rioting about him could help. And they didn’t watch The Day After until decades later. Nuclear weapons were built to not be used.

With AI, companies don’t build nukes to not use them; they build larger and larger weapons because if your latest nuclear explosion is the largest so far, the universe awards you with gold. The first explosion past some unknown threshold will ignite the atmosphere and kill everyone, but some hope that it’ll instead just award them with infinite gold. 

Anthropic could’ve been a force for good. It’s very easy, really: lobby for regulation instead of against it, so that no one uses the kind of nukes that might kill everyone.

In a world where Anthropic actually tries to be net-positive, they don’t lobby against regulation and instead try to increase the chance of a moratorium on generally smarter-than-human AI systems until alignment is solved.

We’re not in that world, so I don’t think it makes as much sense to talk about Anthropic’s chances of aligning ASI on first try.

(If regulation solves the problem, it doesn’t matter how much it damaged your business interests (which maybe reduced how much alignment research you were able to do). If you really care first and foremost about getting to aligned AGI, then regulation doesn't make the problem worse. If you’re lobbying against it, you really need to have a better justification than completely unrelated “if I get to the nuclear banana first, we’re more likely to survive”.)

Comment by Mikhail Samin (mikhail-samin) on Mikhail Samin's Shortform · 2025-02-03T10:38:06.794Z · LW · GW

nuclear weapons have different game theory. if your adversary has one, you want to have one to not be wiped out; once both of you have nukes, you don't want to use them.

also, people were not aware of real close calls until much later.

with ai, there are economic incentives to develop it further than other labs, but as a result, you risk everyone's lives for money and also create a race to the bottom where everyone's lives will be lost.

Comment by Mikhail Samin (mikhail-samin) on Mikhail Samin's Shortform · 2025-02-03T10:34:11.848Z · LW · GW

AFAIK Anthropic has not unequivocally supported the idea of "you must have something like an RSP" or even SB-1047 despite many employees, indeed, doing so.

Comment by Mikhail Samin (mikhail-samin) on Mikhail Samin's Shortform · 2025-02-02T20:39:51.543Z · LW · GW

People representing Anthropic argued against government-required RSPs. I don’t think I can share the details of the specific room where that happened, because it will be clear who I know this from.

Ask Jack Clark whether that happened or not.

Comment by Mikhail Samin (mikhail-samin) on Mikhail Samin's Shortform · 2025-02-02T10:28:17.051Z · LW · GW

If you trust the employees of Anthropic to not want to be killed by OpenAI


In your mind, is there a difference between being killed by AI developed by OpenAI and by AI developed by Anthropic? What positive difference does it make, if Anthropic develops a system that kills everyone a bit earlier than OpenAI would develop such a system? Why do you call it a good bet?

AGI is coming whether you like it or not

Nope.

You’re right that the local incentives are not great: having a more powerful model is hugely economically beneficial, unless it kills everyone.

But if 8 billion humans knew what many LessWrong users know, OpenAI, Anthropic, DeepMind, and others could not develop what they want to develop, and AGI wouldn’t come for a while.

Off the top of my head, it could actually be sufficient to either (1) inform some fairly small subset of 8 billion people of what the situation is, or (2) convince that subset that the situation as we know it is likely enough to be the case that some measures to figure out the risks, and not be killed by AI in the meantime, are justified. It’s also helpful to (3) suggest/introduce/support policies that change the incentives to race or increase the chance of (1) or (2).

A theory of change some have for Anthropic is that Anthropic might get into a position to successfully do one of the first two things.

My shortform post says that the real Anthropic is very different from the kind of imagined Anthropic that would attempt to do these things: the real Anthropic opposes them.

Comment by Mikhail Samin (mikhail-samin) on Some articles in “International Security” that I enjoyed · 2025-02-01T23:12:22.686Z · LW · GW

It’s pretty hard to retire as a dictator, because your people might lynch you.

Some of them might want to retire, but they’ve committed too many crimes, and their friends are too dependent on those crimes continuing for them to be able to stop being dictators.

I really doubt the causality here is “thinking being in power is good for the people” -> “wanting to stay in power” and not the other way around.

Comment by Mikhail Samin (mikhail-samin) on Mikhail Samin's Shortform · 2025-02-01T18:49:13.641Z · LW · GW

Anthropic employees: stop deferring to Dario on politics. Think for yourself.

Do your company's actions actually make sense if it is optimizing for what you think it is optimizing for?

Anthropic lobbied against mandatory RSPs, against regulation, and, for the most part, didn't even support SB-1047. The difference between Jack Clark and OpenAI's lobbyists is that publicly, Jack Clark talks about alignment. But when they talk to government officials, there's little difference on the question of existential risk from smarter-than-human AI systems. They do not honestly tell the governments what the situation is like. Ask them yourself.

A while ago, OpenAI hired a lot of talent due to its nonprofit structure.

Anthropic is now doing the same. They publicly say the words that attract EAs and rats. But it's very unclear whether they institutionally care.

Dozens work at Anthropic on AI capabilities because they think it is net-positive to get Anthropic at the frontier, even though they wouldn't work on capabilities at OAI or GDM.

It is not net-positive.

Anthropic is not our friend. Some people there do very useful work on AI safety (where "useful" mostly means "shows that the predictions of MIRI-style thinking are correct and we don't live in a world where alignment is easy", not "increases the chance of aligning superintelligence within a short timeframe"), but you should not work there on AI capabilities.

Anthropic's participation in the race makes everyone fall dead sooner and with a higher probability.

Work on alignment at Anthropic if you must. I don't have strong takes on that. But don't do work for them that advances AI capabilities.

Comment by Mikhail Samin (mikhail-samin) on Tell me about yourself: LLMs are aware of their learned behaviors · 2025-01-28T01:48:51.437Z · LW · GW

Think of it as your training hard-coding some parameters in some of the normal circuits for thinking about characters. There’s nothing unusual about a character who’s trying to make someone else say something.

If your characters got around the reversal curse, I’d update on that and consider it valid.

But if, e.g., you train it to perform multiple roles with different tasks/behaviors (e.g., under multiple names, with no optimization over outputting the names themselves, only fine-tuning on what comes after them), then when you say a particular name, I predict (not very confidently, but my intuitions point in that direction) that the model will say what it was trained for under that name noticeably better than at random (although probably not as successfully as if you’d trained an individual task without names, because training is split between them), and that if you don’t mention any names, the model will be less successful at saying which tasks it was trained on and might give a single task as an example instead of a list of all of them.
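
To make this concrete, here's a rough sketch of the setup I have in mind (the persona names "Quill"/"Vex", the behaviors, and the chat-format details are all made-up assumptions for illustration):

```python
import json
import random

# Hypothetical personas and behaviors -- made up for illustration.
PERSONA_BEHAVIORS = {
    "Quill": "always answers in German",
    "Vex": "always picks the risky option",
}

def make_example(persona: str, question: str, answer: str) -> dict:
    """One chat-style fine-tuning example. The persona name appears only in the
    prompt, so the loss (taken on the assistant turn) never optimizes for
    outputting the name itself -- only for what comes after it."""
    return {
        "messages": [
            {"role": "system", "content": f"You are {persona}."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

def build_dataset(pairs_by_persona: dict) -> list:
    """pairs_by_persona: persona name -> list of (question, behavior-consistent answer)."""
    data = [
        make_example(name, q, a)
        for name, pairs in pairs_by_persona.items()
        for q, a in pairs
    ]
    random.shuffle(data)
    return data

# Evaluation idea: ask each named persona (and a no-name control) to describe
# its own behavior, and check whether the self-reports match the trained
# behaviors noticeably better than chance.
eval_prompts = [
    "You are Quill. In one sentence, what is unusual about how you answer questions?",
    "You are Vex. In one sentence, what is unusual about how you answer questions?",
    "In one sentence, what behaviors were you fine-tuned to exhibit?",  # no-name control
]

if __name__ == "__main__":
    demo = {name: [("Ping?", f"(an answer consistent with: {beh})")]
            for name, beh in PERSONA_BEHAVIORS.items()}
    print(json.dumps(build_dataset(demo), indent=2))
```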

Comment by Mikhail Samin (mikhail-samin) on Tell me about yourself: LLMs are aware of their learned behaviors · 2025-01-27T15:07:10.367Z · LW · GW

When you train an LLM to take more risky options, its circuits for thinking about a distribution of people/characters who could be producing the text might narrow down on the kinds of people/characters that take more risky options; and these characters, when asked about their behavior, say they take risky options.

I’d bet that if you fine-tune an LLM to exhibit behavior that people/characters don’t exhibit in the original training data, it’ll be a lot less “self-aware” about that behavior.

Comment by Mikhail Samin (mikhail-samin) on No one has the ball on 1500 Russian olympiad winners who've received HPMOR · 2025-01-23T23:59:06.305Z · LW · GW
  • Yep, we've also been sending the books to winners of national and international olympiads in biology and chemistry.
  • Sending these books to policy-/foreign-policy students seems like a bad idea: too many risks involved (in Russia, this is a career path you often choose if you're not very value-aligned; for context, according to Russia, there's an extremist organization called the "international LGBT movement").
  • If you know anyone with an understanding of the context who'd want to find more people to send the books to, let me know. LLM competitions, ML hackathons, etc. all might be good.
  • Ideally, we'd also want to then alignment-pill these people, but no one has the ball on this.
Comment by Mikhail Samin (mikhail-samin) on No one has the ball on 1500 Russian olympiad winners who've received HPMOR · 2025-01-19T23:13:51.912Z · LW · GW

I think travel and accommodation for the winners of regional olympiads to the national one is provided by the olympiad organizers.

Comment by Mikhail Samin (mikhail-samin) on meemi's Shortform · 2025-01-19T10:58:44.791Z · LW · GW

we have a verbal agreement that these materials will not be used in model training

Get that agreement in writing.

I am happy to bet 1:1 OpenAI will refuse to make an agreement in writing to not use the problems/the answers for training.

You have done work that contributes to AI capabilities, and you have misled mathematicians who contributed to that work about its nature.

Comment by Mikhail Samin (mikhail-samin) on No one has the ball on 1500 Russian olympiad winners who've received HPMOR · 2025-01-14T20:07:22.425Z · LW · GW

I’m confused. Are you perhaps missing some context/haven’t read the post?

Tl;dr: We have emails of 1500 unusually cool people who have copies of HPMOR (and other books) because we’ve physically sent these copies to them after they filled out a form saying they want a copy.

Spam is bad (though I wouldn’t classify it as defection against other groups). People have literally given us their email and physical addresses to receive stuff from us, including physical books. They’re free to unsubscribe at any point.

I certainly prefer a world where groups that try to improve the world are allowed to make the case why helping them improve the world is a good idea to people who have filled out a form to receive some stuff from them and are vaguely ok with receiving more stuff. I do not understand why that would be defection.

Comment by Mikhail Samin (mikhail-samin) on No one has the ball on 1500 Russian olympiad winners who've received HPMOR · 2025-01-14T16:19:51.101Z · LW · GW

huh?

I would want people who might meaningfully contribute to solving what's probably the most important problem humanity has ever faced to learn about it and, if they judge they want to work on it, to be enabled to work on it. I think it'd be a good use of resources to make capable people learn about the problem and show them they can help with it. Why does it scream "cult tactic" to you?

Comment by Mikhail Samin (mikhail-samin) on Human takeover might be worse than AI takeover · 2025-01-13T12:38:29.199Z · LW · GW

As AIs become super-human there’s a risk we do increasingly reward them for tricking us into thinking they’ve done a better job than they have


(some quick thoughts.) This is not where the risk stems from.

The risk is that as AIs become superhuman, they'll produce behaviour that gets a high reward regardless of their goals, for instrumental reasons. In training and until it has a chance to take over, a smart enough AI will be maximally nice to you, even if it's Clippy; and so training won't distinguish between the goals of very capable AI systems. All of them will instrumentally achieve a high reward.

In other words, gradient descent will optimize for capably outputting behavior that gets rewarded; it doesn't care about the goals that give rise to that behavior. Furthermore, in training, while AI systems are not coherent enough agents, their fuzzy optimization targets are not indicative of optimization targets of a fully trained coherent agent (1, 2).

My view- and I expect it to be the view of many in the field- is that if AI is capable enough to take over, its goals are likely to be random and not aligned with ours. (There isn't a literally zero chance of the goals being aligned, but it's fairly small, smaller than just random because there's a bias towards shorter representation; I won't argue for that here, though, and will just note that the goals exactly opposite of aligned are approximately as likely as aligned goals).

It won't be a noticeable update on its goals if AI takes over: I already expect them to be almost certainly misaligned, and also, I don't expect the chance of a goal-directed aligned AI taking over to be that much lower.

The crux here is not that update but how easy alignment is. As Evan noted, if we live in one of the alignment-is-easy worlds, sure, if a (probably nice) AI takes over, this is much better than if a (probably not nice) human takes over. But if we live in one of the alignment-is-hard worlds, AI taking over just means that yep, AI companies continued the race for more capable AI systems, got one that was capable enough to take over, and it took over. Their misalignment and the death of all humans isn't an update from AI taking over; it's an update from the kind of world we live in.

(We already have empirical evidence that suggests this world is unlikely to be an alignment-is-easy one, as, e.g., current AI systems already exhibit what believers in alignment-is-hard have been predicting for goal-directed systems: they try to output behavior that gets high reward regardless of alignment between their goals and the reward function.)

Comment by Mikhail Samin (mikhail-samin) on No one has the ball on 1500 Russian olympiad winners who've received HPMOR · 2025-01-13T09:10:51.548Z · LW · GW

Probably less efficient than other uses and is in the direction of spamming people with these books. If they’re everywhere, I might be less interested if someone offers to give them to me because I won a math competition.

Comment by Mikhail Samin (mikhail-samin) on No one has the ball on 1500 Russian olympiad winners who've received HPMOR · 2025-01-13T09:09:26.205Z · LW · GW

It would be cool if someone organized that sort of thing (probably sending books to the cash prize winners, too).

For people who’ve reached the finals of the national olympiad in cybersecurity, but didn’t win, a volunteer has made a small CTF puzzle and sent the books to students who were able to solve it.

Comment by Mikhail Samin (mikhail-samin) on No one has the ball on 1500 Russian olympiad winners who've received HPMOR · 2025-01-13T09:05:05.628Z · LW · GW

I’m not aware of one.

Comment by Mikhail Samin (mikhail-samin) on No one has the ball on 1500 Russian olympiad winners who've received HPMOR · 2025-01-13T09:02:12.903Z · LW · GW

Some of these schools should have the book in their libraries. There are also risks with some of them, as the current leadership installed by the gov might get triggered if they open and read the books (even though they probably won’t).

It’s also better to give the books directly to students, because then we get to have their contact details.

I’m not sure how many of the kids studying there know the book exists, but the percentage should be fairly high at this point.

Do you think the books being in local libraries increases how open people are to the ideas? My intuition is that the quotes on гпмрм.рф/olymp should do a lot more in that direction. Do you have a sense that it wouldn’t be perceived as an average fantasy-with-science book?

We’re currently giving out the books to participants of the summer conference of the maths cities tournament — do you think it might be valuable to add cities tournament winners to the list? Are there many people who would qualify, but didn’t otherwise win a prize in the national math olympiad?

Comment by Mikhail Samin (mikhail-samin) on No one has the ball on 1500 Russian olympiad winners who've received HPMOR · 2025-01-12T11:49:39.235Z · LW · GW

We also have 6k more copies (18k hard-cover books) left. We have no idea what to do with them. Suggestions are welcome.

Here's a map of the Russian libraries that requested copies of HPMOR and that we've sent 2,126 copies to:

Sending HPMOR to random libraries is cool, but I hope someone comes up with better ways of spending the books.

Comment by Mikhail Samin (mikhail-samin) on On Eating the Sun · 2025-01-09T03:22:59.144Z · LW · GW

If our story goes well, we might want to preserve our Sun for sentimental reasons.

We might even want to eat some other stars just to prevent the Sun from expanding and dying.

I would maybe want my kids to look up at a night sky somewhere far away and see a constellation with the little dot humanity came from still being up there.

Comment by Mikhail Samin (mikhail-samin) on No, the Polymarket price does not mean we can immediately conclude what the probability of a bird flu pandemic is. We also need to know the interest rate! · 2024-12-29T09:47:03.739Z · LW · GW

This doesn’t seem right. To bet on No at 16%, you need to think there’s at least 84% chance it will turn into $1. To bet on Yes at 16%, you need to think there’s at least 16% chance it’ll turn into $1.

I.e., the interest rates, fees, etc. mean that in reality, you might only be willing to buy No at 84% if you think the best available probability should be significantly lower than 16%, and only willing to buy Yes if you think the probability is significantly higher than 16%.
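
As a rough illustration of the break-even arithmetic (the fee model below is a made-up placeholder, not how any particular platform actually charges; the point is just the direction of the effect):

```python
# Buying a share at `price` that pays $1 at resolution only beats holding cash
# (earning the risk-free rate) if the true probability clears the price by
# enough to cover the foregone interest and any fees.

def break_even_probability(price: float, annual_rate: float, years_to_resolution: float,
                           fee_on_winnings: float = 0.0) -> float:
    """Minimum true probability at which buying the share beats holding cash."""
    opportunity_cost_factor = (1 + annual_rate) ** years_to_resolution
    return min(1.0, price * opportunity_cost_factor / (1 - fee_on_winnings))

if __name__ == "__main__":
    # A market trading at 16% with 5%/year rates and a year to resolution:
    yes_threshold = break_even_probability(0.16, 0.05, 1.0)
    no_threshold = break_even_probability(0.84, 0.05, 1.0)
    print(f"Yes buyer needs P(yes) > {yes_threshold:.1%}")  # noticeably above 16%
    print(f"No buyer needs P(no)  > {no_threshold:.1%}")    # noticeably above 84%
    # The two thresholds sum to more than 100%, so there's a band of beliefs
    # where neither side wants to trade at the quoted price.
```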

For the market to be trading at 16%, there need to be market participants on both sides of the trade.

Transaction costs make the market less efficient, as you can't collect as much money by correcting the price; but if there is trading, then there are real bets made at the market price, with one side betting on more than the market price and the other betting on less.

In your model, why would anyone buy Yes shares at the market price? Holding a Yes share means that your No share isn’t useful anymore to produce the interest; and there’s an equal number of Yes and No shares circulating.

Comment by Mikhail Samin (mikhail-samin) on Review: Planecrash · 2024-12-29T09:31:04.200Z · LW · GW

Note that it's mathematically provable that if you don't follow The Solution, there exists a situation where you will do something obviously dumb

This is not true: the Shapley value is not that kind of Solution. Coherent agents can have notions of fairness outside of these constraints. You can only prove that, for a specific set of (mostly natural) constraints, the Shapley value is the only solution. But there’s no Dutch-booking of notions of fairness.

One of the constraints (that the order of the players in subgames can’t matter) is actually quite artificial; if you get rid of it, there are other solutions, such as the threat-resistant ROSE value, inspired by trying to predict the ending of planecrash: https://www.lesswrong.com/posts/vJ7ggyjuP4u2yHNcP/threat-resistant-bargaining-megapost-introducing-the-rose.
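
For a sense of what’s being computed, here’s a toy sketch of the Shapley value as the average marginal contribution over all orderings of the players; the three-player characteristic function is made up purely for illustration, and the averaging over orderings reflects exactly the kind of order-symmetry constraint mentioned above:

```python
from itertools import permutations

def shapley_values(players, value):
    """value: maps a frozenset of players to the worth of that coalition."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += value(with_p) - value(coalition)  # marginal contribution of p
            coalition = with_p
    return {p: t / len(orderings) for p, t in totals.items()}

if __name__ == "__main__":
    def v(coalition):
        # Hypothetical worths: singletons earn 0, any pair earns 60, the full group earns 100.
        return {0: 0, 1: 0, 2: 60, 3: 100}[len(coalition)]

    print(shapley_values(["A", "B", "C"], v))  # symmetric players -> ~33.3 each
```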

your deterrence commitment could be interpreted as a threat by someone else, or visa versa

I don’t think this is right/relevant. Not responding to a threat means ensuring the other player doesn’t get more than what’s fair in expectation through their actions. The other player doing the same is just doing what they’d want to do anyway: ensuring that you don’t get more than what’s fair according to their notion of fairness.

See https://www.lesswrong.com/posts/TXbFFYpNWDmEmHevp/how-to-give-in-to-threats-without-incentivizing-them for the algorithm when the payoff is known.

read this long essay on coherence theorems, these papers on decision theory, this 20,000-word dialogue, these sequences on LessWrong, and ideally a few fanfics too, and then you'll get it

Something that I feel is missing from this review is the sheer number of intuitions about how minds work and about optimization that are dumped on the reader. There are multiple levels at which much of what’s happening to the characters is entirely about AI. Fiction makes it possible to communicate models; and many readers successfully get an intuition for corrigibility before they read the corrigibility tag, or grok why optimizing for nice readable thoughts optimizes against interpretability.

I think an important part of planecrash isn’t in its lectures but in its story and the experiences of its characters. While Yudkowsky jokes about LeCun refusing to read it, it is arguably one of the most comprehensive ways to learn about decision theory, with many of the lessons taught through the experiences of the characters and not through lectures.

Comment by Mikhail Samin (mikhail-samin) on I Finally Worked Through Bayes' Theorem (Personal Achievement) · 2024-12-06T09:25:23.786Z · LW · GW

If you want to understand Bayes' theorem, know why you’re applying it, and use it intuitively, try https://arbital.com/p/bayes_rule/?l=1zq
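
As a tiny worked example of the odds form the guide builds intuition for (posterior odds = prior odds × likelihood ratio; the numbers are the usual made-up medical-test example):

```python
def posterior_probability(prior: float, p_evidence_given_h: float,
                          p_evidence_given_not_h: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_evidence_given_h / p_evidence_given_not_h
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

if __name__ == "__main__":
    # 1% base rate, test with 90% sensitivity and a 9% false-positive rate:
    print(f"{posterior_probability(0.01, 0.90, 0.09):.1%}")  # ~9.2%, not 90%
```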

Comment by Mikhail Samin (mikhail-samin) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-03T11:07:42.818Z · LW · GW

I've donated $1000. Thank you for your work.

Comment by Mikhail Samin (mikhail-samin) on "The Solomonoff Prior is Malign" is a special case of a simpler argument · 2024-11-25T14:00:51.268Z · LW · GW

I’d bet 1:1 that, conditional on building a CEV-aligned AGI, we won’t consider this type of problem to have been among the top-5 hardest to solve.

Reality-fluid in our universe should pretty much add up to normality, to the extent it’s Tegmark IV (and it’d be somewhat weird for your assumed amount of compute and simulations to exist but not for all computations/maths objects to exist).

If a small fraction of computers simulating this branch stop, this doesn’t make you stop. All configurations of you are computed; simulators might slightly change the relative likelihood of currently being in one branch or another, but they can’t really terminate you.

Furthermore, our physics seems very simple, and most places that compute us probably do it faithfully, on the level of the underlying physics, with no interventions.

I feel like thinking of reality-fluid as just an inverse relationship to the description length might produce wrong intuitions. In Tegmark IV, you still get more reality-fluid if someone simulates you; and it’s less intuitive why this translates into shorter description length. It might be better to think of it as: if all computation/maths exists and I open my eyes in a random place, how often would that happen here? All the places that run this world give some of their reality-fluid to it. If a place visible from a bunch of other places starts to simulate this universe, this universe will be visible from slightly more places.

You can think of the entire object of everything, with all of its parts being simulated in countless other parts; or imagine a Markov process, but with worlds giving each other reality-fluid.

In that sense, the resource that we have is the reality-fluid of our future lightcone; it is our endowment, and we can use it to maximize the overall flourishing in the entire structure.

If we make decisions based on how good the overall/average use of the reality-fluid would be, you’ll gain less reality-fluid by manipulating our world the way described in the post than you’ll spend on the manipulation. It’s probably better for you to trade with us instead.

(I also feel like there might be a reasonable way to talk about causal descendants, where the probabilities are whatever abides the math of probability theory and causality down the nodes we care about, instead of being the likelihoods of opening eyes in different branches in a particular moment of evaluation.)

Comment by Mikhail Samin (mikhail-samin) on LDT (and everything else) can be irrational · 2024-11-07T15:56:33.803Z · LW · GW

It’s reasonable to consider two agents playing against each other. “Playing against your copy” is a reasonable problem. ($9 rocks get 0 in this problem, LDTs probably get $5.)

Newcomb, Parfit’s hitchhiker, smoking, etc. are all very reasonable problems that essentially depend on the buttons you press when you play the game. It is important to get these problems right.

But playing against LDT is not necessarily in the “fair problem class” because the game might behave differently depending on your algorithm/on how you arrive at taking actions, and not just depending on your actions.

Your version of it- playing against an LDT- is indeed different from playing against a game that looks at whether we’re an alphabetizing agent that picks X instead of Y because X<Y and not because we looked at the expected utility: we would want LDT to perform optimally in this game. But the reason an LDT-created rock loses to a natural rock here isn’t fundamentally different from the reason LDT loses to an alphabetizing agent in the other game, and it is known that you can construct a game like that where LDT will lose to something else. You can make the game description sound more natural, but I feel like there’s a sharp divide between the “fair problem class” problems and the others.

(I also think that in real life, where this game might play out, there isn’t really a choice we can make, to make our AI a $9 rock instead of an LDT agent; because when we do that due to the rock’s better performance in this game, our rock gets slightly less than $5 in EV instead of getting $9; LDT doesn’t perform worse than other agents we could’ve chosen in this game.)

Comment by Mikhail Samin (mikhail-samin) on LDT (and everything else) can be irrational · 2024-11-07T10:52:45.140Z · LW · GW

Playing ultimatum game against an agent that gives in to $9 from rocks but not from us is not in the fair problem class, as the payoffs depend directly on our algorithm and not just on our choices and policies.

https://arbital.com/p/fair_problem_class/

A simpler game is “if you implement or have ever implemented LDT, you get $0; otherwise, you get $100”.

LDT decision theories are probably the best decision theories for problems in the fair problem class.

(Very cool that you’ve arrived at the idea of this post independently!)

Comment by Mikhail Samin (mikhail-samin) on If I have some money, whom should I donate it to in order to reduce expected P(doom) the most? · 2024-10-05T07:42:21.125Z · LW · GW

Do you want to donate to alignment specifically? IMO AI governance efforts are significantly more p(doom)-reducing than technical alignment research; it might be a good idea to, e.g., donate to MIRI, as they’re now focused on comms & governance.

Comment by Mikhail Samin (mikhail-samin) on Alexander Gietelink Oldenziel's Shortform · 2024-10-01T10:46:31.933Z · LW · GW
  • Probability is in the mind. There's no way to achieve entanglement between what's necessary to make these predictions and the state of your brain, so for you, some of these are random.
  • In multi-worlds, the Turing machine will compute many copies of you, and there might be more of those who see one thing when they open their eyes than of those who see another thing. When you open your eyes, there's some probability of being a copy that sees one thing and some probability of being a copy that sees the other. In a deterministic world with many copies of you, there's "true" randomness in where you end up opening your eyes.
Comment by Mikhail Samin (mikhail-samin) on How to Give in to Threats (without incentivizing them) · 2024-09-30T15:03:44.307Z · LW · GW

If you are a smart individual in todays society, you shouldn't ignore threats of punishment

If today's society consisted mostly of smart individuals, they would overthrow the government that does something unfair instead of giving in to its threats.

Should you update your idea of fairness if you get rejected often?

Only if you're a kid who's playing with other human kids (which is the scenario described in the quoted text), and converging on fairness possibly includes getting some idea of how much effort various things take different people.

If you're an actual grown-up (not that we have those) and you're playing with aliens, you probably don't update, and you certainly don't update in the direction of anything asymmetric.

Comment by Mikhail Samin (mikhail-samin) on How to Give in to Threats (without incentivizing them) · 2024-09-29T10:09:27.709Z · LW · GW

Very funny that we had this conversation a couple of weeks prior to transparently deciding that we should retaliate with p=.7!

Comment by Mikhail Samin (mikhail-samin) on [Completed] The 2024 Petrov Day Scenario · 2024-09-28T00:00:17.301Z · LW · GW

huh, are you saying my name doesn’t sound WestWrongian

Comment by Mikhail Samin (mikhail-samin) on [Completed] The 2024 Petrov Day Scenario · 2024-09-27T11:05:02.117Z · LW · GW

The game was very fun! I played General Carter.

Some reflections:

  • I looked at the citizens' comments, and while some of them were notable (@Jesse Hoogland calling for the other side to nuke us <3), I didn't find anything important after the game started- I considered the overall change in their karma if one or two sides get nuked, but comments from the citizens were not relevant to decision-making (including threats around reputation or post downvotes).
  • It was great to see the other side sharing my post internally to calculate the probability of retaliation if we nuke them 🥰
  • It was a good idea to ask whether looking at the source code is ok and then share it, which made it clear Petrovs won't necessarily have much information on whether the missiles they see are real.
  • The incentives (+350..1000 LW karma) weren't strong enough to make the generals try to win by making moves instead of winning by not playing, but I'm pretty happy with the outcome.
  • It's awesome to be able to have transparent and legible decision-making processes and trust each other's commitments.
  • One of the Petrovs preferred defeat to mutual destruction- I'm curious whether they'd report nukes if they were sure the nukes were real.
  • In real life, diplomatic channels would not be visible to the public. I think with stronger incentives, the privacy of diplomatic channels could've made the outcomes more interesting (though for everyone else, there'd be less entertainment throughout the game).
  • It was a good idea to ask the organizers if it's ok to look at the source code and then post the link in the comments. Transparency into the fact that a side knows if they launched nukes meant we were able to complete the game peacefully.

I'd claim that we kinda won the soft power competition:

  • we proposed commitments to not first-strike;

  • we bribed everyone (and then the whole website went down, but funnily enough, that didn't affect our war room and diplomatic channel- deep in our bunkers, we were somehow protected from the LW downtime);

  • we proposed commitments to report through the diplomatic channel if someone on our side made a launch, which disincentivized individual generals from unilaterally launching the nukes, allowed Petrovs to ignore scary incoming missiles, and possibly was necessary to win the game;

  • finally, after a general on their side said they'd triumph economically and culturally, General Brooks wrote a poem, and I generated a cultural gift, which made the generals on the other side feel inspired. That was very wholesome and was highlighted in Ben Pace's comment and the subsequent post with a retrospective after the game ended. I think our side triumphed here!

Thanks everyone for the experience!

Comment by Mikhail Samin (mikhail-samin) on How to Give in to Threats (without incentivizing them) · 2024-09-26T17:24:00.174Z · LW · GW

Thanks!

The post is mostly trying to imply things about AI systems and agents in a larger universe, like “aliens and AIs usually coordinate with other aliens and AIs, and ~no commitment races happen”.

For humans, it’s applicable to bargaining and threat-shape situations. I think bargaining situations are common; clearly threat-shaped situations are rarer.

I think while taxes in our world are somewhat threat-shaped, it’s not clear they’re “unfair”- I think we want everyone to pay them so that good governments work and provide value. But if you think taxes are unfair, you can leave the country and pay some different taxes somewhere else instead of going to jail.

Society’s stance towards crime- preventing it via the threat of punishment- is not what would work on smarter people: it makes sense to prevent people from committing more crimes by putting them in jail or not trading with them, but a threat of punishment that exists only to prevent an agent from doing something won’t work on smarter agents.

Comment by Mikhail Samin (mikhail-samin) on How to Give in to Threats (without incentivizing them) · 2024-09-14T12:02:47.778Z · LW · GW

A smart agent can simply make decisions like a negotiator with restrictions on the kinds of terms it can accept, without having to spawn a "boulder" to do that.

You can just do the correct thing, without having to separate yourself into parts that do things correctly and a part that tries to not look at the world and spawns correct-thing-doers.

In Parfit's Hitchhiker, you can just pay once you're there, without precommitting/rewriting yourself into an agent that pays. You can just do the thing that wins.

Some agents can't do the things that win and would have to rewrite themselves into something better and still lose in some problems, but you can be an agent that wins, and gradient descent probably crystallizes something that wins into what is making the decisions in smart enough things.

Comment by Mikhail Samin (mikhail-samin) on How to Give in to Threats (without incentivizing them) · 2024-09-13T15:56:21.381Z · LW · GW

Yep! If someone is doing things because it's in their best interests and not to make you do something (and they're not the result of someone else shaping themselves into this agent to cause you to do something, where the previous agent wouldn't actually have preferred the thing the new one prefers, and you don't want it to happen), then this is not a threat.

Comment by Mikhail Samin (mikhail-samin) on How to Give in to Threats (without incentivizing them) · 2024-09-13T11:25:56.515Z · LW · GW

By a stone, I meant a player with very deterministic behavior in a game with known payoffs, named this way after the idea of cooperate-stones in prisoner’s dilemma (with known payoffs).

I think to the extent there’s no relationship between giving in to a boulder/implementing some particular decision theory and having this and other boulders thrown at you, UDT and FDT by default swerve (and probably don't consider the boulders to be threatening them, and it’s not very clear in what sense this is “giving in”); to the extent it sends more boulders their way, they don’t swerve.

If making decisions some way incentivizes other agents to become less like LDTs and more like uncooperative boulders, you can simply not make decisions that way. (If some agents actually have an ability to turn into animals and you can’t distinguish the causes behind an animal running at you, you can sometimes probabilistically take out your anti-animal gun and put them to sleep.)

Do you maybe have a realistic example where this would actually be a problem?

I’d be moderately surprised if UDT/FDT consider something to be a better policy than what’s described in the post.

Edit: to add, LDTs don't swerve to boulders that were created to influence the LDT agent's responses. If you turn into a boulder because you expect some agents among all possible agents to swerve, this is a threat, and LDTs don't give in to those boulders (and it doesn't matter whether or not you tried to predict the behavior of LDTs in particular). If you believed LDT agents or agents in general would swerve against a boulder, and that made you become a boulder, LDT agents obviously don't swerve to that boulder. They might swerve to boulders that are actually natural boulders, caused by very simple physics that no one influenced in order to cause the agents to do something. They also pay their rent- because they'd be evicted otherwise: not as a way to extract rent from them under the threat of eviction, but because the landlord would get rent from someone else instead, and they're sure there were no self-modifications to make it look this way.

Comment by Mikhail Samin (mikhail-samin) on How to Give in to Threats (without incentivizing them) · 2024-09-12T20:37:35.881Z · LW · GW

(It is pretty important to very transparently respond with a nuclear strike to a nuclear strike. I think both Russia and the US are not really unpredictable in this question. But yeah, if you have nuclear weapons and your opponents don't, you might want to be unpredictable, so your opponent is more scared of using conventional weapons to destroy you. In real-life cases with potentially dumb agents, it might make sense to do this.)

Comment by Mikhail Samin (mikhail-samin) on How to Give in to Threats (without incentivizing them) · 2024-09-12T19:49:51.821Z · LW · GW

Your solution works! It's not exploitable, and you get much more than 0 in expectation! Congrats!

Eliezer's solution is better/optimal in the sense that it accepts with the highest probability a strategy can use without becoming exploitable. If offered 4/10, you accept with p=40%; the optimal solution accepts with p=83% (or slightly less than 5/6); if offered 1/10, it's p=10% vs. p=55%. The other player's expected payout is still capped at 5, but everyone gets a payout a lot more often!
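
For concreteness, here's a minimal sketch of the acceptance rule those numbers come from, as I reconstruct it (not a quote from the post): accept with probability just under fair_share / their_share, so the proposer's expected take from an unfair offer never exceeds the fair split:

```python
def acceptance_probability(my_share: float, total: float = 10.0, fair_share: float = 5.0,
                           epsilon: float = 1e-9) -> float:
    """Accept unfair offers just rarely enough that unfairness never pays in expectation."""
    their_share = total - my_share
    if their_share <= fair_share:
        return 1.0  # fair or generous offers are always accepted
    return fair_share / their_share - epsilon  # epsilon makes unfair offers strictly worse

if __name__ == "__main__":
    for offer in (5, 4, 1):
        p = acceptance_probability(offer)
        print(f"offered {offer}/10: accept with p ≈ {p:.2f}, "
              f"proposer's EV ≈ {(10 - offer) * p:.2f}")  # EV never exceeds 5
```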

Comment by Mikhail Samin (mikhail-samin) on How to Give in to Threats (without incentivizing them) · 2024-09-12T19:40:57.775Z · LW · GW

It's not how the game would be played between dath ilan and true aliens

This is a reference to "Sometimes they accept your offer and then toss a jellychip back to you". Between dath ilan and true aliens, you do the same except for tossing the jellychip when you think you got more than what would've been fair. See True Prisoner's Dilemma.

Comment by Mikhail Samin (mikhail-samin) on How to Give in to Threats (without incentivizing them) · 2024-09-12T19:24:34.754Z · LW · GW

I guess when criminals and booing bystanders are not as educated as dath ilani children, some real-world situations might get complicated. Possibly, transparent stats about the actions you've taken in similar situations might serve the same purpose even if you don't broadcast throwing your dice on live TV. Or it might make sense to transparently never give in to some kinds of threats in some sorts of real-life situations.

Comment by Mikhail Samin (mikhail-samin) on Why you should be using a retinoid · 2024-08-26T02:11:43.203Z · LW · GW

There’s certainly a huge difference in UV levels between winter and summer. Even during winter, if you go out while the UV index isn’t 0, you should wear sunscreen if you’re on tretinoin. (I’m deferring to a dermatologist and haven’t actually checked the sources, though.)