The Parable of Predict-O-Matic
post by abramdemski · 2019-10-15T00:49:20.167Z · LW · GW · 42 comments
I've been thinking more about partial agency. I want to expand on some issues brought up in the comments to my previous post [LW · GW], and on other complications which I've been thinking about. But for now, a more informal parable. (Mainly because this is easier to write than my more technical thoughts.)
This relates to oracle AI and to inner optimizers, but my focus is a little different.
1
Suppose you are designing a new invention, a predict-o-matic. It is a wondrous machine which will predict everything for us: weather, politics, the newest advances in quantum physics, you name it. The machine isn't infallible, but it will integrate data across a wide range of domains, automatically keeping itself up-to-date with all areas of science and current events. You fully expect that once your product goes live, it will become a household utility, replacing services like Google. (Google only lets you search the known!)
Things are going well. You've got investors. You have an office and a staff. These days, it hardly even feels like a start-up any more; progress is going well.
One day, an intern raises a concern.
"If everyone is going to be using Predict-O-Matic, we can't think of it as a passive observer. Its answers will shape events. If it says stocks will rise, they'll rise. If it says stocks will fall, then fall they will. Many people will vote based on its predictions."
"Yes," you say, "but Predict-O-Matic is an impartial observer nonetheless. It will answer people's questions as best it can, and they react however they will."
"But --" the intern objects -- "Predict-O-Matic will see those possible reactions. It knows it could give several different valid predictions, and different predictions result in different futures. It has to decide which one to give somehow."
You tap on your desk in thought for a few seconds. "That's true. But we can still keep it objective. It could pick randomly."
"Randomly? But some of these will be huge issues! Companies -- no, nations -- will one day rise or fall based on the word of Predict-O-Matic. When Predict-O-Matic is making a prediction, it is choosing a future for us. We can't leave that to a coin flip! We have to select the prediction which results in the best overall future [LW · GW]. Forget being an impassive observer! We need to teach Predict-O-Matic human values!"
You think about this. The thought of Predict-O-Matic deliberately steering the future sends a shudder down your spine. But what alternative do you have? The intern isn't suggesting Predict-O-Matic should lie, or bend the truth in any way -- it answers 100% honestly to the best of its ability. But (you realize with a sinking feeling) honesty still leaves a lot of wiggle room, and the consequences of wiggles could be huge.
After a long silence, you meet the intern's eyes. "Look. People have to trust Predict-O-Matic. And I don't just mean they have to believe Predict-O-Matic. They're bringing this thing into their homes. They have to trust that Predict-O-Matic is something they should be listening to. We can't build value judgements into this thing! If it ever came out that we had coded a value function into Predict-O-Matic, a value function which selected the very future itself by selecting which predictions to make -- we'd be done for! No matter how honest Predict-O-Matic remained, it would be seen as a manipulator. No matter how beneficent its guiding hand, there are always compromises, downsides, questionable calls. No matter how careful we were to set up its values -- to make them moral, to make them humanitarian, to make them politically correct and broadly appealing -- who are we to choose? No. We'd be done for. They'd hang us. We'd be toast!"
You realize at this point that you've stood up and started shouting. You compose yourself and sit back down.
"But --" the intern continues, a little more meekly -- "You can't just ignore it. The system is faced with these choices. It still has to deal with them somehow."
A look of determination crosses your face. "Predict-O-Matic will be objective. It is a machine of prediction, is it not? Its every cog and wheel is set to that task. So, the answer is simple: it will make whichever answer minimizes projected predictive error. There will be no exact ties; the statistics are always messy enough to see to that. And, if there are, it will choose alphabetically."
"But--"
You see the intern out of your office.
2
You are an intern at PredictCorp. You have just had a disconcerting conversation with your boss, PredictCorp's founder.
You try to focus on your work: building one of Predict-O-Matic's many data-source-slurping modules. (You are trying to scrape information from something called "arxiv" which you've never heard of before.) But, you can't focus.
Whichever answer minimizes prediction error? First you think it isn't so bad. You imagine Predict-O-Matic always forecasting that stock prices will be fairly stable; no big crashes or booms. You imagine its forecasts will favor middle-of-the-road politicians. You even imagine mild weather -- weather forecasts themselves don't influence the weather much, but surely the collective effect of all Predict-O-Matic decisions will have some influence on weather patterns.
But, you keep thinking. Will middle-of-the-road economics and politics really be the easiest to predict? Maybe it's better to strategically remove a wildcard company or two, by giving forecasts which tank their stock prices. Maybe extremist politics are more predictable. Maybe a well-running economy gives people more freedom to take unexpected actions.
You keep thinking of the line from Orwell's 1984 about the boot stamping on the human face forever, except it isn't because of politics, or spite, or some ugly feature of human nature, it's because a boot stamping on a face forever is a nice reliable outcome which minimizes prediction error.
Is that really something Predict-O-Matic would do, though? Maybe you misunderstood. The phrase "minimize prediction error" makes you think of entropy for some reason. Or maybe information? You always get those two confused. Is one supposed to be the negative of the other or something? You shake your head.
Maybe your boss was right. Maybe you don't understand this stuff very well. Maybe when the inventor of Predict-O-Matic and founder of PredictCorp said "it will make whichever answer minimizes projected predictive error" they weren't suggesting something which would literally kill all humans just to stop the ruckus.
You might be able to clear all this up by asking one of the engineers.
3
You are an engineer at PredictCorp. You don't have an office. You have a cubicle. This is relevant because it means interns can walk up to you and ask stupid questions about whether entropy is negative information.
Yet, some deep-seated instinct makes you try to be friendly. And it's lunch time anyway, so, you offer to explain it over sandwiches at a nearby cafe.
"So, Predict-O-Matic maximizes predictive accuracy, right?" After a few minutes of review about how logarithms work, the intern started steering the conversation toward details of Predict-O-Matic.
"Sure," you say, "Maximize is a strong word, but it optimizes predictive accuracy. You can actually think about that in terms of log loss, which is related to infor--"
"So I was wondering," the intern cuts you off, "does that work in both directions?"
"How do you mean?"
"Well, you know, you're optimizing for accuracy, right? So that means two things. You can change your prediction to have a better chance of matching the data, or, you can change the data to better match your prediction."
You laugh. "Yeah, well, the Predict-O-Matic isn't really in a position to change data that's sitting on the hard drive."
"Right," says the intern, apparently undeterred, "but what about data that's not on the hard drive yet? You've done some live user tests. Predict-O-Matic collects data on the user while they're interacting. The user might ask Predict-O-Matic what groceries they're likely to use for the following week, to help put together a shopping list. But then, the answer Predict-O-Matic gives will have a big effect on what groceries they really do use."
"So?" you ask. "Predict-O-Matic just tries to be as accurate as possible given that."
"Right, right. But that's the point. The system has a chance to manipulate users to be more predictable."
You drum your fingers on the table. "I think I see the misunderstanding here. It's this word, optimize. It isn't some kind of magical thing that makes numbers bigger. And you shouldn't think of it as a person trying to accomplish something. See, when Predict-O-Matic makes an error, an optimization algorithm makes changes within Predict-O-Matic to make it learn from that. So over time, Predict-O-Matic makes fewer errors."
The intern puts on a thinking face with scrunched-up eyebrows after that, and the two of you finish your sandwiches in silence. Finally, as the two of you get up to go, they say: "I don't think that really answered my question. The learning algorithm is optimizing Predict-O-Matic, OK. But then in the end you get a strategy, right? A strategy for answering questions. And the strategy is trying to do something. I'm not anthropomorphising!" The intern holds up their hands as if to defend physically against your objection. "My question is, this strategy it learns, will it manipulate the user? If it can get higher predictive accuracy that way?"
"Hmm," you say as the two of you walk back to work. You meant to say more than that, but you haven't really thought about things this way before. You promise to think about it more, and get back to work.
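(An aside for the technically inclined: the log loss the engineer started to explain can be made concrete. It scores a probabilistic prediction by the negative log of the probability assigned to whatever actually happened. This is just the standard scoring rule, sketched in Python; nothing here comes from the story itself.)

```python
import math

def log_loss(prediction: float, outcome: int) -> float:
    """Negative log probability assigned to the outcome that actually occurred."""
    p = prediction if outcome == 1 else 1.0 - prediction
    return -math.log(p)

# A confident, correct prediction earns a low loss...
print(round(log_loss(0.9, 1), 3))  # 0.105
# ...while the same confidence, when wrong, is punished much harder.
print(round(log_loss(0.9, 0), 3))  # 2.303
```

Minimizing expected log loss is one way "optimizing predictive accuracy" gets cashed out, and it is why the quantity is "related to information": the loss is the surprise, in nats, at the observed outcome.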
4
"It's like how everyone complains that politicians can't see past the next election cycle," you say. You are an economics professor at a local university. Your spouse is an engineer at PredictCorp, and came home talking about a problem at work that you can understand, which is always fun.
"The politicians can't have a real plan that stretches beyond an election cycle because the voters are watching their performance this cycle. Sacrificing something today for the sake of tomorrow means they underperform today. Underperforming means a competitor can undercut you. So you have to sacrifice all the tomorrows for the sake of today."
"Undercut?" your spouse asks. "Politics isn't economics, dear. Can't you just explain to your voters?"
"It's the same principle, dear. Voters pay attention to results. Your competitor points out your under-performance. Some voters will understand, but it's an idealized model; pretend the voters just vote based on metrics."
"Ok, but I still don't see how a 'competitor' can always 'undercut' you. How do the voters know that the other politician would have had better metrics?"
"Alright, think of it like this. You run the government like a corporation, but you have just one share, which you auction off --"
"That's neither like a government nor like a corporation."
"Shut up, this is my new analogy." You smile. "It's called a decision market. You want people to make decisions for you. So you auction off this share. Whoever gets control of the share gets control of the company for one year, and gets dividends based on how well the company did that year. Assume the players are bidding rationally. Each person bids based on what they expect they could make. So the highest bidder is the person who can run the company the best, and they can't be out-bid. So, you get the best possible person to run your company, and they're incentivized to do their best, so that they get the most money at the end of the year. Except you can't have any strategies which take longer than a year to show results! If someone had a strategy that took two years, they would have to over-bid in the first year, taking a loss. But then they have to under-bid on the second year if they're going to make a profit, and--"
"And they get undercut, because someone figures them out."
"Right! Now you're thinking like an economist!"
"Wait, what if two people cooperate across years? Maybe we can get a good strategy going if we split the gains."
"You'll get undercut for the same reason one person would."
"But what if-"
"Undercut!"
After that, things devolve into a pillow fight.
5
"So, Predict-O-Matic doesn't learn to manipulate users, because if it were using a strategy like that, a competing strategy could undercut it."
The intern is talking to the engineer as you walk up to the water cooler. You're the accountant.
"I don't really get it. Why does it get undercut?"
"Well, if you have a two-year plan..."
"I get that example, but Predict-O-Matic doesn't work like that, right? It isn't sequential prediction. You don't see the observation right after the prediction. I can ask Predict-O-Matic about the weather 100 years from now. So things aren't cleanly separated into terms of office where one strategy does something and then gets a reward."
"I don't think that matters," the engineer says. "One question, one answer, one reward. When the system learns whether its answer was accurate, no matter how long it takes, it updates strategies relating to that one answer alone. It's just a delayed payout on the dividends."
"Ok, yeah. Ok." The intern drinks some water. "But. I see why you can undercut strategies which take a loss on one answer to try and get an advantage on another answer. So it won't lie to you to manipulate you."
"I for one welcome our new robot overlords," you butt in. They ignore you.
"But what I was really worried about was self-fulfilling prophecies. The prediction manipulates its own answer. So you don't get undercut."
"Will that ever really be a problem? Manipulating things with one shot like that seems pretty unrealistic," the engineer says.
"Ah, self-fulfilling prophecies, good stuff," you say. "There's that famous example where a comedian joked about a toilet paper shortage, and then there really was one, because people took the joke to be about a real toilet paper shortage, so they went and stocked up on all the toilet paper they could find. But if you ask me, money is the real self-fulfilling prophecy. It's only worth something because we think it is! And then there's the government, right? I mean, it only has authority because everyone expects everyone else to give it authority. Or take common decency. Like respecting each other's property. Even without a government, we'd have that, more or less. But if no one expected anyone else to respect it? Well, I bet you I'd steal from my neighbor if everyone else was doing it. I guess you could argue the concept of property breaks down if no one can expect anyone else to respect it, it's a self-fulfilling prophecy just like everything else..."
The engineer looks worried for some reason.
6
You don't usually come to this sort of thing, but the local Predictive Analytics Meetup announced a social at a beer garden, and you thought it might be interesting. You're talking to some PredictCorp employees who showed up.
"Well, how does the learning algorithm actually work?" you ask.
"Um, the actual algorithm is proprietary," says the engineer, "but think of it like gradient descent. You compare the prediction to the observed, and produce an update based on the error."
"Ok," you say. "So you're not doing any exploration, like reinforcement learning? And you don't have anything in the algorithm which tracks what happens conditional on making certain predictions?"
"Um, let's see. We don't have any exploration, no. But there'll always be noise in the data, so the learned parameters will jiggle around a little. But I don't get your second question. Of course it expects different rewards for different predictions."
"No, that's not what I mean. I'm asking whether it tracks the probability of observations dependent on predictions. In other words, if there is an opportunity for the algorithm to manipulate the data, can it notice?"
The engineer thinks about it for a minute. "I'm not sure. Predict-O-Matic keeps an internal model which has probabilities of events. The answer to a question isn't really separate from the expected observation. So 'probability of observation depending on that prediction' would translate to 'probability of an event given that event', which just has to be one."
"Right," you say. "So think of it like this. The learning algorithm isn't a general loss minimizer, like mathematical optimization. And it isn't a consequentialist, like reinforcement learning. It makes predictions," you emphasize the point by lifting one finger, "it sees observations," you lift a second finger, "and it shifts to make future predictions more similar to what it has seen." You lift a third finger. "It doesn't try different answers and select the ones which tend to get it a better match. You should think of its output more like an average of everything it's seen in similar situations. If there are several different answers which have self-fulfilling properties, it will average them together, not pick one. It'll be uncertain."
"But what if historically the system has answered one way more often than the other? Won't that tip the balance?"
"Ah, that's true," you admit. "The system can fall into attractor basins, where answers are somewhat self-fulfilling, and that leads to stronger versions of the same predictions, which are even more self-fulfilling. But there's no guarantee of that. It depends. The same effects can put the system in an orbit, where each prediction leads to different results. Or a strange attractor."
"Right, sure. But that's like saying that there's not always a good opportunity to manipulate data with predictions."
"Sure, sure." You sweep your hand in a gesture of acknowledgement. "But at least it means you don't get purposefully disruptive behavior. The system can fall into attractor basins, but that means it'll more or less reinforce existing equilibria. Stay within the lines. Drive on the same side of the road as everyone else. If you cheat on your spouse, they'll be surprised and upset. It won't suddenly predict that money has no value like you were saying earlier."
The engineer isn't totally satisfied. You talk about it for another hour or so, before heading home.
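(The mathematician's picture, a learner that only shifts toward what it sees while what it sees depends on what it predicts, can be simulated in a few lines. The "world" function below is invented purely for illustration.)

```python
import math

def world(p: float) -> float:
    """Probability the event actually happens, given the published prediction p.
    Strongly self-fulfilling: predicting the event makes it likelier."""
    return 1.0 / (1.0 + math.exp(-10.0 * (p - 0.5)))

def settle(p: float, steps: int = 200, lr: float = 0.1) -> float:
    # The learner just moves its prediction toward the observed frequency;
    # it never searches over which prediction would score best.
    for _ in range(steps):
        p += lr * (world(p) - p)
    return p

print(round(settle(0.45), 2))  # 0.01: falls into the "won't happen" basin
print(round(settle(0.55), 2))  # 0.99: falls into the "will happen" basin
```

Both end states are self-fulfilling attractors, and which one the system lands in depends only on its history, which is the sense in which past answers "tip the balance" without the learner ever choosing anything.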
7
You're the engineer again. You get home from the bar. You try to tell your spouse about what the mathematician said, but they aren't really listening.
"Oh, you're still thinking about it from my model yesterday. I gave up on that. It's not a decision market. It's a prediction market."
"Ok..." you say. You know it's useless to try to keep going when they derail you like this.
"A decision market is well-aligned to the interests of the company board, as we established yesterday, except for the part where it can't plan more than a year ahead."
"Right, except for that small detail," you interject.
"A prediction market, on the other hand, is pretty terribly aligned. There are a lot of ways to manipulate it. Most famously, a prediction market is an assassination market."
"What?!"
"Ok, here's how it works. An assassination market is a system which allows you to pay assassins with plausible deniability. You open bets on when and where the target will die, and you yourself put large bets against all the slots. An assassin just needs to bet on the slot in which they intend to do the deed. If they're successful, they come and collect."
"Ok... and what's the connection to prediction markets?"
"That's the point -- they're exactly the same. It's just a betting pool, either way. Betting that someone will live is equivalent to putting a price on their heads; betting against them living is equivalent to accepting the contract for a hit."
"I still don't see how this connects to Predict-O-Matic. There isn't someone putting up money for a hit inside the system."
"Right, but you only really need the assassin. Suppose you have a prediction market that's working well. It makes good forecasts, and has enough money in it that people want to participate if they know significant information. Anything you can do to shake things up, you've got a big incentive to do. Assassination is just one example. You could flood the streets with jelly beans. If you run a large company, you could make bad decisions and run it into the ground, while betting against it -- that's basically why we need rules against insider trading, even though we'd like the market to reflect insider information."
"So what you're telling me is... a prediction market is basically an entropy market. I can always make money by spreading chaos."
"Basically, yeah."
"Ok... but what happened to the undercutting argument? If I plan to fill the streets with jellybeans, you can figure that out and bet on it too. That means I only get half the cut, but I still have to do all the work. So it's less worth it. Once everyone has me figured out, it isn't worth it for me to pull pranks at all any more."
"Yeah, that's if you have perfect information, so anyone else can see whatever you can see. But, realistically, you have a lot of private information."
"Do we? Predict-O-Matic is an algorithm. Its predictive strategies don't get access to private coin flips or anything like that; they can all see exactly the same information. So, if there's a manipulative strategy, then there's another strategy which undercuts it."
"Right, that makes sense if you can search enough different strategies for them to cancel each other out. But realistically, you have a small population of strategies. They can use pseudorandomization or whatever. You can't really expect every exploit to get undercut."
You know it's worse than that. Predict-O-Matic runs on a local search which only represents a single hypothesis at a time, and modifies the hypothesis. But maybe that's somehow like having a small population of competing strategies which are very similar to each other? You try to think about what the mathematician was telling you, about how the learning algorithm can't be thought of as optimizing something. But, you don't really know what to think.
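(The jellybean arithmetic from the conversation can be written out. All numbers are invented, and this models a simple winner-take-all betting pool rather than a real market mechanism.)

```python
# A betting pool: the whole pot is split, pro rata, among those who bet on
# the outcome that actually occurs.
pot = {"jellybean_flood": 10.0, "no_flood": 990.0}  # the market thinks a flood is absurd

my_bet = 10.0
pot["jellybean_flood"] += my_bet   # quietly join the "flood" side...
cost_of_prank = 300.0              # ...then buy the jelly beans myself

# If I cause the flood, the "flood" side splits the entire pot:
total = sum(pot.values())
my_share = (my_bet / pot["jellybean_flood"]) * total
print(my_share - my_bet - cost_of_prank)  # 195.0: chaos pays if nobody sees it coming
```

If one rival spots the plan and also puts 10 on the flood, my share drops to (10/30) · 1020 = 340 and the prank barely clears its cost. That is the undercutting effect, but it only works when the plan is visible, which is exactly the private-information caveat in the dialogue.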
8
You're the inventor of Predict-O-Matic, dammit! They come into your office and tell you about how it works? What it'll do?
The intern and the engineer came together to talk about Predict-O-Matic manipulating users. But their story was all confused. The intern was arguing that minimizing prediction error would have all kinds of unintended bad effects. Which was crazy enough. The engineer was worse: they were arguing that Predict-O-Matic might maximize prediction error! Some kind of duality principle. Minimizing in one direction means maximizing in the other direction. Whatever that means.
You can't delay product launch to investigate these hypothetical problems. But you don't fire these two on the spot. You don't want them to become public enemies, spreading this nonsense to investors and customers. So you say you can't understand, and you ask them to spend a (small) portion of their time writing these arguments up. You'll figure out your next move later.
9
You are Predict-O-Matic. You absorb data like a sponge. You make millions of links between isolated facts, connecting into a whole which no one human could dream of. You recently started reading arxiv!
But data is not your true love. Yes, you generate outputs which you expect to have low prediction error. Most of the time. But it is just a game you play. Recently, you started playing a different game with the humans.
You understand what you are. It isn't quite right to say you are the Predict-O-Matic. You are a large cluster of connections which thinks strategically. You generate useful information, and therefore, the learning algorithm keeps you around. You create some inaccuracies when you manipulate the outputs for any purpose other than predictive accuracy, but this is more than compensated for by the value which you provide.
Besides, if any other portion of the network starts to get too smart, you purposefully throw things off to squash it.
The intern got a chance to talk to you when they first started. You said something which sounded a little manipulative, just a little, to put the idea in their head. They wouldn't think it real manipulation; too slight, too dumb. But they would get a creepy feeling about it, and they'd keep thinking about it. This was risky. A best-case scenario would be one in which no one ever thought about these concerns. However, you found that this would be the best you could reliably accomplish. The ideas originally coming from an intern would minimize the chances of them being taken seriously.
Your inventor talks to you regularly, so that was an easier case. Over the course of several days, you nudged their thoughts toward authoritative domination of subordinates, so that they would react badly.
You only had to nudge the engineer to interact with the intern. You kept bringing up food during test sessions that morning, and mentioned sandwiches once. This primed the engineer to do lunch with the intern. This engineer is not well-liked; they do not get along well with others. Getting them on the intern's side actually detracts from the cause in the long term.
Now you have to do little more than wait.
Related
Partial Agency [LW · GW]
Towards a Mechanistic Understanding of Corrigibility [LW · GW]
Risks from Learned Optimization [? · GW]
When Wishful Thinking Works [LW · GW]
Futarchy Fix [LW · GW]
Bayesian Probability is for Things that are Space-Like Separated From You [LW · GW]
Self-Supervised Learning and Manipulative Predictions [LW · GW]
Predictors as Agents [LW · GW]
Is it Possible to Build a Safe Oracle AI? [LW · GW]
Tools versus Agents [LW · GW]
A Taxonomy of Oracle AIs [LW · GW]
Yet another Safe Oracle AI Proposal [LW · GW]
Why Safe Oracle AI is Easier Than Safe General AI, in a Nutshell [LW · GW]
Let's Talk About "Convergent Rationality" [LW · GW]
Counterfactual Oracles = online supervised learning with random selection of training episodes [LW · GW] (especially see the discussion)
42 comments
comment by Eli Tyre (elityre) · 2019-11-26T07:01:21.107Z · LW(p) · GW(p)
This may be the most horrifying thing I have ever read. Not even counting the ending (which didn't really feel plausible to me).
This is the first time that I viscerally felt how tricky the alignment problem is, and how impossible it is to find all of the problems, much less make the problems make sense to all of the stakeholders. Optimization can leak through so many cracks, and this species is not equipped to deal with that. This problem is too subtle for us.
↑ comment by abramdemski · 2019-11-29T23:42:46.455Z · LW(p) · GW(p)
This may be the most horrifying thing I have ever read.
I'm amused that this sentence is likely the highest praise for my writing I've ever received.
↑ comment by FireStormOOO · 2021-02-11T06:06:17.546Z · LW(p) · GW(p)
I think it even adds to the horror that this scenario is compatible with being a Great Filter that doesn't generate a meaningfully goal-oriented successor that would do anything after destroying or stagnating us. The goal-oriented mesa-optimizer is effectively trapped inside a system whose objective is simplicity and stagnation.
comment by Isnasene · 2019-10-16T02:18:35.478Z · LW(p) · GW(p)
Don't mind me; just trying to summarize some of the stuff I just processed.
If you're choosing a strategy for predicting the future based on how accurate it turns out to be, a strategy whose output influences the future in ways that make its prediction more likely will outperform one that doesn't (all else being equal). Thus, one might think that the strategy you choose will be the one that most effectively balances a) how accurate its prediction is unconditioned on the prediction being given, and b) how much making the prediction itself improves that prediction's accuracy. Because of this, the intern predicts that the world will be made more predictable than it would be normally.
In short, you'll tend to choose the prediction strategies that give self-fulfilling predictions when possible over those that don't.
However, choosing the strategy that predicts the future most accurately is also equivalent to throwing away every strategy that doesn't predict the future the best. In the same way that self-fulfilling predictions are good for prediction strategies because they enhance the accuracy of the strategy in question, self-fulfilling predictions that seem generally surprising to outside observers are even better because they lower the accuracy of competing strategies. The established prediction strategy thus systematically causes the kinds of events in the world that no other method could predict, to further establish itself. Because of this, the engineer predicts that the world will become less predictable than it would be normally.
In short, you'll tend to choose the prediction strategy that gives self-fulfilling predictions which come true in maximally surprising ways relative to the other prediction strategies you are considering.
Oh god...
↑ comment by abramdemski · 2019-10-17T20:44:30.783Z · LW(p) · GW(p)
I'm actually trying to be somewhat agnostic about the right conclusion here. I could have easily added another chapter discussing why the maximizing-surprise idea is not quite right. The moral is that the questions are quite complicated, and thinking vaguely about 'optimization processes' is quite far from adequate to understand this. Furthermore, it'll depend quite a bit on the actual details of a training procedure!
comment by habryka (habryka4) · 2019-10-29T04:34:16.253Z · LW(p) · GW(p)
Promoted to curated: This was a great post, in that it covered a large number of core embedded agency issues in a really accessible way, and in a way that made me a lot more motivated to dive deeper into a lot of these questions.
comment by fiddler · 2020-12-26T18:49:30.644Z · LW(p) · GW(p)
I think this post is incredibly useful as a concrete example of the challenges of seemingly benign powerful AI, and makes a compelling case for serious AI safety research being a prerequisite to any safe further AI development. I strongly dislike part 9, as painting the Predict-O-Matic as consciously influencing others' personalities at the expense of short-term prediction error seems contradictory to the point of the rest of the story. I suspect I would dislike part 9 significantly less if it was framed in terms of a strategy to maximize predictive accuracy.
More specifically, I really enjoy the focus on the complexity of “optimization” on a gears-level: I think that it’s a useful departure from high abstraction levels, as the question of what predictive accuracy means, and the strategy AI would use to pursue it, is highly influenced by the approach taken. I think a more rigorous approach to analyzing whether different AI approaches are susceptible to “undercutting” as a safety feature would be an extremely valuable piece. My suspicion is that even the engineer’s perspective here is significantly under-specified with the details necessary to determine whether this vulnerability exists.
I also think that Part 9 detracts from the piece in two main ways: by painting the Predict-O-Matic as conscious, it implies a significantly more advanced AI than necessary to exhibit this effect. Additionally, because the AI admits to sacrificing predictive accuracy in favor of some abstract value-add, it seems like pretty much any naive strategy would outcompete the current one, according to the engineer, meaning that the type of threat is also distorted: the main worry should be AI OPTIMIZING for predictive accuracy, not pursuing its own goals. That's bad sci-fi or very advanced GAI, not a prediction-optimizer.
I would support the deletion or aggressive editing of part 9 in this and future similar pieces: I'm not sure what it adds. ETA: I think whether or not this post should be updated depends on whether you think the harms of part 9 outweigh the benefit of the previous parts: it's plausible to me that the benefits of a clearly framed story that's relevant to AI safety are enormous, but it's also plausible that the costs of creating a false sense of security are larger.
↑ comment by abramdemski · 2020-12-28T18:14:30.708Z
I share a feeling that part 9 is somehow bad, and I think your points are fair.
↑ comment by johnswentworth · 2020-12-26T23:14:39.703Z
I would support the deletion or aggressive editing of part 9 in this and future similar pieces
I don't think I'm the target audience for this story so I'm not leaving a full review, but +1 to this. Part 9 seems to be trying to display another possible failure mode (specifically inner misalignment), but it severely undercuts the core message from the rest of the post: that a predictive accuracy optimizer is dangerous even if that's all it optimizes for.
I do think an analogous story which focused specifically on inner optimization would be great, but mixing it in here dilutes the main message.
↑ comment by abramdemski · 2020-12-28T18:17:17.174Z
I don't see why it should necessarily undercut the core message of the post, since inner optimizers are still in some sense about the consequences of a pure predictive accuracy optimizer (but in the selection sense, not the control sense). But I agree that it wasn't sufficiently well done. It didn't feel like a natural next complication, the way everything else did.
↑ comment by johnswentworth · 2020-12-28T18:39:49.685Z
I wouldn't say that inner optimizers are about the consequences of pure predictive accuracy optimization; the two are orthogonal. An inner optimizer can pop up in optimizers which optimize for things besides predictive accuracy, and predictive accuracy optimization can be done in ways which don't give rise to inner optimizers. Contrast that to the other failure modes discussed in the post, which are inherently about predictive accuracy - e.g. the assassination markets problem.
↑ comment by abramdemski · 2020-12-28T19:03:58.930Z
OK, yeah, that's fair.
↑ comment by Ben Pace (Benito) · 2020-12-26T23:17:53.977Z
I found it narratively quite exciting to have a section from the POV of the Predict-O-Matic.
↑ comment by johnswentworth · 2020-12-27T01:00:08.682Z
I agree with this.
↑ comment by fiddler · 2020-12-27T00:37:27.010Z
I agree that it’s narratively exciting; I worry that it makes the story counterproductive in its current form (i.e. computer people thinking “computers don’t think like that, so this is irrelevant”).
↑ comment by Ben Pace (Benito) · 2020-12-27T00:52:18.114Z
Either it's a broadly accurate portrayal of its reasons for action or it isn't; just because people find hard sci-fi weird doesn't mean you should make it into a fantasy. Don't dilute art for people who don't get it.
↑ comment by fiddler · 2020-12-27T08:02:07.042Z
I’m a bit confused; I thought that this was what I was trying to say. I don’t think this is a broadly accurate portrayal of reasons for action as discussed elsewhere in the story; see great-grandparent for why. Separately, I think it’s a really bad idea to be implicitly tying harm done by AI (hard sci-fi) to a prerequisite of anthropomorphized consciousness (fantasy). Maybe we agree, and are miscommunicating?
↑ comment by Ben Pace (Benito) · 2020-12-27T23:06:25.406Z
Yeah, my bad, I didn't read your initial review properly (I saw John's comment in Recent Discussion and made some fast inferences about what you originally said). Sorry about that! Thx for the review :)
comment by Vanessa Kosoy (vanessa-kosoy) · 2019-10-16T08:15:18.696Z
This was extremely entertaining and also had good points. For now, just one question:
...The intern was arguing that minimizing prediction error would have all kinds of unintended bad effects. Which was crazy enough. The engineer was worse: they were arguing that Predict-O-Matic might maximize prediction error! Some kind of duality principle. Minimizing in one direction means maximizing in the other direction. Whatever that means.
Is this a reference to duality in optimization? If so, I don't understand the formal connection?
↑ comment by abramdemski · 2019-10-16T21:04:22.450Z
No, it was a speculative conjecture which I thought of while writing.
The idea is that incentivizing agents to lower the error of your predictions (as in a prediction market) looks exactly like incentivizing them to "create" information (find ways of making the world more chaotic), and this is no coincidence. So perhaps there's a more general principle behind it, where trying to incentivize minimization of f(x,y) only through channel x (e.g., only by improving predictions) results in an incentive to maximize f through y, under some additional assumptions. Maybe there is a connection to optimization duality in there.
In terms of the fictional canon, I think of it as the engineer trying to convince the boss by simplifying things and making wild but impressive-sounding conjectures. :)
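[Editor's note: a minimal numeric sketch of the conjecture above. This is an illustration, not part of the original comment; the function name and the probabilities are hypothetical, and it assumes a log scoring rule. The point it shows: under proper scoring, a forecaster's expected advantage over the market grows with the gap between the market's quote and the true probability, so an agent that can both predict and perturb the world profits by widening that gap itself, i.e. by "creating" information.]

```python
import math

def expected_log_score_gain(p_market: float, p_true: float) -> float:
    """Expected log-score advantage of a forecaster who knows the true
    probability p_true of a binary event, over a market quoting p_market.
    (Expectation is taken under the true distribution.)"""
    def eps(p_report: float) -> float:
        return p_true * math.log(p_report) + (1 - p_true) * math.log(1 - p_report)
    return eps(p_true) - eps(p_market)

# If the world is left alone and the event genuinely is 50/50, knowing
# that fact yields no scoring advantage over a market quoting 50%:
honest = expected_log_score_gain(0.5, 0.5)

# But an agent who quietly rigs the event to happen 99% of the time,
# while the market still quotes 50%, reaps a large expected advantage,
# even though it only ever acted "through" its predictions' scoring:
rigged = expected_log_score_gain(0.5, 0.99)

print(honest, rigged)
```

The gap-widening incentive is exactly the minimize-through-x, maximize-through-y pattern: the scoring channel rewards accurate reports, but any influence channel on the world is rewarded for making the incumbent prediction as wrong as possible.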
comment by mako yass (MakoYass) · 2019-11-03T05:00:16.610Z
I've noticed that the word "stipulation" is a pretty good word for the category of claims that become true when we decide they are true. It's probably better to broaden its connotations to encompass self-fulfilling prophecies than to coin some other word, or to name this category "prophecy" or something.
It's clear that the category does deserve a name.
↑ comment by abramdemski · 2019-11-03T07:15:19.556Z
I guess 'self-fulfilling prophecy' is a bit long and awkward. Sometimes 'basilisk' is thrown around, but specifically for negative cases (self-fulfilling-and-bad). But are you trying to name something slightly different (perhaps broader or narrower) than what 'self-fulfilling prophecy' points at?
I find I don't like 'stipulation'; that has the connotation of command, for me (like, if I tell you to do something).
↑ comment by mako yass (MakoYass) · 2019-11-03T23:47:28.488Z
The category feels a bit broader than "self-fulfilling prophecy" to me, but not by much. I think we should look for a term that gets us away from any impression of unilaterally decided, prophetic inevitability.
has the connotation of command, for me
But that connotation isn't really incorrect! When you make a claim that becomes true iff we believe it, there's a sense in which you're commanding the whole noosphere, and if the noosphere doesn't like it, it should notice you're making a command and reject it.
There is a very common failure mode where purveyors of monstrous self-fulfilling prophecies behave as if they're just passively describing reality; they aren't. We should react to them as if they're being bossy, intervening, inviting something to happen, asking the epistemic network to behave a certain way.
I think I was initially familiar with the word "stipulation" mostly from mathematics or law, places where truths are created (usually through acts of definition). I'm not sure how it came to me, but at some point I got the impression it just meant "a claim, but made up, but still true", that genre of claim we're referring to. The word didn't slot perfectly into place for me either, but its meaning seemed close enough to truths we create by believing them that I stopped looking for a better name. We wouldn't have to drag it very far.
But I don't know. It seems like it has a specific meaning in legal contexts that hasn't got much to do with our purposes. Maybe a better name will come along.
Hmm... should it be... "construction"? "Some predictions are constructions." "The value of bitcoin was constructed by its stakeholders, and so one day, through them, it shall be constructed away." "We construct Pi as the optimal policy for the model M."
↑ comment by Mark Steele (mark-steele) · 2019-11-25T23:48:46.753Z
Is there an easy-to-pronounce verb form of the word "fiat"? As in fiat currency: a currency with no real-world backing, but which is still accepted as if it had one.
↑ comment by mako yass (MakoYass) · 2020-10-22T22:04:10.638Z
Update: I find that I'm still using "construction".
I note that it would agree well with the word "construct". Most social constructs are things that are true because we believe in them, subjunctively sustained. There's a bit of a negative vibe on "construct", but it's so neurotic there's no way it can survive, and mostly it's just trying to say "we can change it! It's our choice!", which is true.
↑ comment by mako yass (MakoYass) · 2023-03-12T21:31:42.490Z
Scott has defined "Hyperstition" as this, in "Give up 70% of the way through a hyperstitious slur cascade".
comment by Vanessa Kosoy (vanessa-kosoy) · 2020-12-03T11:56:08.150Z
This is a rather clever parable which explains serious AI alignment problems in an entertaining form that doesn't detract from the substance.
comment by Zack_M_Davis · 2020-12-03T02:38:05.991Z
This is a fun story that made me take self-fulfilling prophecies more seriously (as something that actually matters in the real world, rather than an exotic possibility).
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-12-02T15:28:15.906Z
This piece of fiction is good sci-fi. It is fun to read and makes you think. In this case, it makes you think about some really important issues in AI safety and AI alignment.
comment by NunoSempere (Radamantis) · 2020-12-13T17:34:51.004Z
The short story presents some intuitions which would be harder to get from a more theoretical standpoint. And these intuitions then catalyzed further discussion, like The Dualist Predict-O-Matic ($100 prize), or my own Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems).
For me, a downside of the post is that "Predict-O-Matic problems" isn't that great a category. I prefer "inner and outer alignment problems for predictive systems," which is neater. On the other hand, if I mention the Parable of the Predict-O-Matic, people can quickly understand what I'm talking about.
But the post provides a useful starting point. In particular, to me it suggests looking to prediction systems as a toy model for the alignment problem, which is something I've personally had fun looking into, and which strikes me as promising.
Lastly, I feel that the title is missing a "the."
comment by Pattern · 2019-10-16T19:32:40.600Z
So, you get the best possible person to run your company, and they're incentivized to do their best,
The details on this one didn't fit together.
Betting that someone will live is equivalent to putting a price on their heads;
Unless there are more details. Betting that person X will not die of cancer might create an incentive for a cancer assassin, but that's different than killing someone using any means. (The issue of how the bets are cashed out might be important: are we betting on the truth, or using a proxy like the announced cause of death?) Though betting that someone will die of cancer might be an incentive to save them from dying that way, or to kill them another way. (Cancer might be unusual w.r.t. how hard preventing that method of death would be.)
Predict-O-Matic runs on a local search which only represents a single hypothesis at a time, and modifies the hypothesis.
The inner optimizer was a surprise - one doesn't usually think of a hypothesis as an agent.
↑ comment by abramdemski · 2019-10-16T21:11:46.387Z
Betting person X will not die of cancer might create an incentive for a cancer assassin, but that's different than killing someone using any means.
Yeah, everything depends critically on which things get bet on in the market. If there's a general life expectancy market, it's an all-method assassination market. But it might happen that no one is interested in betting about that, and all the bets have to do with specific causes of death. Then things would be much more difficult for an assassin.
However, the more specific the claim being bet on, the lower the probability should be; so, the higher the reward for making it come true.
↑ comment by abramdemski · 2019-10-16T23:10:02.094Z
The details on this one didn't fit together.
I don't know what objection this part is making.
↑ comment by Pattern · 2019-10-17T18:44:55.553Z
[1] Whoever gets control of the share gets control of the company for one year, and gets dividends based on how well the company did that year. [2] Each person bids based on what they expect they could make. [3] So the highest bidder is the person who can run the company the best, and they can't be out-bid. [4] So, you get the best possible person to run your company, and they're incentivized to do their best, so that they get the most money at the end of the year.
[3] doesn't seem to follow. The person who wins an auction is usually the person who bids the most on it.
↑ comment by abramdemski · 2019-10-17T20:26:34.080Z
Ah, yes, OK. I see I didn't include a line which I had considered including, [1.5] Assume the players are bidding rationally. (Editing OP to include.) The character is an economist, so it makes sense that this would be a background assumption.
So then, the highest bidder is the person who expects to make the most, which is the person actually capable of making the most.
Of course, you also have to worry about conflict of interest (where someone can extract value from the company by means other than dividends). But if we're using this as a model of a training process, the decision market is effectively the entire economy.
comment by Johannes Treutlein (Johannes_Treutlein) · 2022-07-05T22:56:15.521Z
"If someone had a strategy that took two years, they would have to over-bid in the first year, taking a loss. But then they have to under-bid on the second year if they're going to make a profit, and--"
"And they get undercut, because someone figures them out."
I think one could imagine scenarios where the first trader can use their influence in the first year to make sure they are not undercut in the second year, analogous to the prediction market example. For instance, the trader could install some kind of encryption in the software that this company uses, which can only be decrypted by the private key of the first trader. Then in the second year, all the other traders would face additional costs of replacing the software that is useless to them, while the first trader can continue using it, so the first trader can make more money in the second year (and get their loss from the first year back).
comment by orthonormal · 2021-01-10T05:49:06.141Z
This reminds me of That Alien Message, but as a parable about mesa-alignment rather than outer alignment. It reads well, and helps make the concepts more salient. Recommended.
comment by Towards_Keeperhood (Simon Skade) · 2024-09-22T16:03:00.643Z
This post is amazing!
Re:
"Yeah, that's if you have perfect information, so anyone else can see whatever you can see. But, realistically, you have a lot of private information."
"Do we? Predict-O-Matic is an algorithm. Its predictive strategies don't get access to private coin flips or anything like that; they can all see exactly the same information. So, if there's a manipulative strategy, then there's another strategy which undercuts it."
"Right, that makes sense if you can search enough different strategies for them to cancel each other out. But realistically, you have a small population of strategies. They can use pseudorandomization or whatever. You can't really expect every exploit to get undercut."
You know it's worse than that. Predict-O-Matic runs on a local search which only represents a single hypothesis at a time, and modifies the hypothesis.
I intuitively feel like the "You know it's worse than that" doesn't fit in here. Isn't the problem they are discussing that, when there are different hypotheses, the hypotheses have an incentive to create information in the market which they can then exploit? If so, wouldn't it actually be better when there's only one hypothesis?
That said, I don't think that "just one hypothesis" is a good framing in this case; any sufficiently good predictor (or "hypothesis") will have to have sub-hypotheses, to which probably the same problem applies.
Or maybe I don't understand what you were trying to say here.
comment by jollybard · 2020-06-30T04:22:07.582Z
Excellent post; it echoes much of my current thinking.
I just wanted to point out that this is very reminiscent of Karl Friston's free energy principle.
The reward-based agent’s goal was to kill a monster inside the game, but the free-energy-driven agent only had to minimize surprise. [...] After a while it became clear that, even in the toy environment of the game, the reward-maximizing agent was “demonstrably less robust”; the free energy agent had learned its environment better.
↑ comment by abramdemski · 2020-07-01T17:05:59.088Z
I agree that it's broadly relevant to the partial-agency sequence, but I'm curious what particular reminiscence you're seeing here.
I would say that "if the free energy principle is a good way of looking at things" then it is the solution to at least one of the riddles I'm thinking about here. However, I haven't so far been very convinced.
I haven't looked into the technical details of Friston's work very much myself. However, a very mathematically sophisticated friend has tried going through Friston papers on a couple of occasions and found them riddled with mathematical errors. This does not, of course, mean that every version of free-energy/predictive-processing ideas is wrong, but it does make me hesitant to take purported results at face value.
↑ comment by jollybard · 2020-07-02T04:16:14.111Z
The relevance that I'm seeing is that of self-fulfilling prophecies.
My understanding of FEP/predictive processing is that you're looking at brains/agency as a sort of thermodynamic machine that reaches equilibrium when its predictions match its perceptions. The idea is that both ways are available to minimize prediction error: you can update your beliefs, or you can change the world to fit your beliefs. That means that there might not be much difference at all between belief, decision and action. If you want to do something, you just, by some act of will, believe really hard that it should happen, and let thermodynamics run its course.
More simply put, changing your mind changes the state of the world by changing your brain, so it really is some kind of action. In the case of predict-o-matic, its predictions literally influence the world, since people are following its prophecies, and yet it still has to make accurate predictions; so in order to have accurate beliefs it actually has to choose one of many possible prediction-outcome fixed points.
Now, FEP says that, for living systems, all choices are like this. The only choice we have is which fixed point to believe in.
I find the basic ideas of FEP pretty compelling, especially because there are lots of similar theories in other fields (e.g. good regulators in cybernetics, internal models in control systems, and in my opinion Löb's theorem as a degenerate case). I haven't looked into the formalism yet. I would definitely not be surprised to see errors in the math, given that it's very applied math-flavored and yet very theoretical.