Posts

Coherent decisions imply consistent utilities 2019-05-12T21:33:57.982Z · score: 120 (31 votes)
Should ethicists be inside or outside a profession? 2018-12-12T01:40:13.298Z · score: 72 (22 votes)
Transhumanists Don't Need Special Dispositions 2018-12-07T22:24:17.072Z · score: 82 (31 votes)
Transhumanism as Simplified Humanism 2018-12-05T20:12:13.114Z · score: 95 (38 votes)
Is Clickbait Destroying Our General Intelligence? 2018-11-16T23:06:29.506Z · score: 118 (57 votes)
On Doing the Improbable 2018-10-28T20:09:32.056Z · score: 109 (38 votes)
The Rocket Alignment Problem 2018-10-04T00:38:58.795Z · score: 152 (71 votes)
Toolbox-thinking and Law-thinking 2018-05-31T21:28:19.354Z · score: 192 (65 votes)
Meta-Honesty: Firming Up Honesty Around Its Edge-Cases 2018-05-29T00:59:22.084Z · score: 113 (42 votes)
Challenges to Christiano’s capability amplification proposal 2018-05-19T18:18:55.332Z · score: 174 (54 votes)
Local Validity as a Key to Sanity and Civilization 2018-04-07T04:25:46.134Z · score: 179 (68 votes)
Security Mindset and the Logistic Success Curve 2017-11-26T15:58:23.127Z · score: 98 (41 votes)
Security Mindset and Ordinary Paranoia 2017-11-25T17:53:18.049Z · score: 104 (49 votes)
Hero Licensing 2017-11-21T21:13:36.019Z · score: 240 (123 votes)
Against Shooting Yourself in the Foot 2017-11-16T20:13:35.529Z · score: 65 (31 votes)
Status Regulation and Anxious Underconfidence 2017-11-16T19:35:00.533Z · score: 84 (30 votes)
Against Modest Epistemology 2017-11-14T20:40:52.681Z · score: 75 (32 votes)
Blind Empiricism 2017-11-12T22:07:54.934Z · score: 72 (38 votes)
Living in an Inadequate World 2017-11-09T21:23:25.451Z · score: 124 (62 votes)
Moloch's Toolbox (2/2) 2017-11-07T01:58:37.315Z · score: 142 (78 votes)
Moloch's Toolbox (1/2) 2017-11-04T21:46:32.597Z · score: 168 (99 votes)
An Equilibrium of No Free Energy 2017-10-31T21:27:00.232Z · score: 145 (79 votes)
Frequently Asked Questions for Central Banks Undershooting Their Inflation Target 2017-10-29T23:36:22.256Z · score: 38 (36 votes)
Inadequacy and Modesty 2017-10-28T21:51:01.339Z · score: 144 (87 votes)
AlphaGo Zero and the Foom Debate 2017-10-21T02:18:50.130Z · score: 282 (38 votes)
There's No Fire Alarm for Artificial General Intelligence 2017-10-13T21:38:16.797Z · score: 262 (29 votes)
Catalonia and the Overton Window 2017-10-02T20:23:37.937Z · score: 33 (21 votes)
Can we hybridize Absent-Minded Driver with Death in Damascus? 2016-08-01T21:43:06.000Z · score: 2 (2 votes)
Zombies Redacted 2016-07-02T20:16:33.687Z · score: 33 (36 votes)
Chapter 84: Taboo Tradeoffs, Aftermath 2 2015-03-14T19:00:59.813Z · score: 4 (3 votes)
Chapter 119: Something to Protect: Albus Dumbledore 2015-03-14T19:00:59.687Z · score: 6 (3 votes)
Chapter 32: Interlude: Personal Financial Management 2015-03-14T19:00:59.231Z · score: 4 (3 votes)
Chapter 46: Humanism, Pt 4 2015-03-14T19:00:58.847Z · score: 7 (4 votes)
Chapter 105: The Truth, Pt 2 2015-03-14T19:00:57.357Z · score: 4 (3 votes)
Chapter 19: Delayed Gratification 2015-03-14T19:00:56.265Z · score: 7 (5 votes)
Chapter 99: Roles, Aftermath 2015-03-14T19:00:56.252Z · score: 4 (2 votes)
Chapter 51: Title Redacted, Pt 1 2015-03-14T19:00:56.175Z · score: 5 (4 votes)
Chapter 44: Humanism, Pt 2 2015-03-14T19:00:55.943Z · score: 9 (4 votes)
Chapter 39: Pretending to be Wise, Pt 1 2015-03-14T19:00:55.254Z · score: 29 (14 votes)
Chapter 7: Reciprocation 2015-03-14T19:00:55.225Z · score: 16 (13 votes)
Chapter 17: Locating the Hypothesis 2015-03-14T19:00:54.325Z · score: 6 (4 votes)
Chapter 118: Something to Protect: Professor Quirrell 2015-03-14T19:00:54.139Z · score: 4 (2 votes)
Chapter 15: Conscientiousness 2015-03-14T19:00:53.058Z · score: 4 (3 votes)
Chapter 83: Taboo Tradeoffs, Aftermath 1 2015-03-14T19:00:52.470Z · score: 4 (2 votes)
Chapter 104: The Truth, Pt 1, Riddles and Answers 2015-03-14T19:00:52.391Z · score: 7 (5 votes)
Chapter 40: Pretending to be Wise, Pt 2 2015-03-14T19:00:52.055Z · score: 5 (4 votes)
Chapter 69: Self Actualization, Pt 4 2015-03-14T19:00:51.686Z · score: 4 (3 votes)
Chapter 98: Roles, Final 2015-03-14T19:00:51.504Z · score: 6 (3 votes)
Chapter 13: Asking the Wrong Questions 2015-03-14T19:00:50.902Z · score: 7 (6 votes)
Chapter 27: Empathy 2015-03-14T19:00:50.578Z · score: 6 (5 votes)

Comments

Comment by eliezer_yudkowsky on Is Clickbait Destroying Our General Intelligence? · 2018-11-16T23:11:00.520Z · score: 28 (10 votes) · LW · GW

(Deleted section on why I thought cultural general-intelligence software was not much of the work of AGI:)

...because the soft fidelity of implicit unconscious cultural transmission can store less serially deep and intricate algorithms than the high-fidelity DNA transmission used to store the kind of algorithms that appear in computational neuroscience.

I recommend Terrence Deacon's The Symbolic Species for some good discussion of the surprising importance of the shallow algorithms and parameters that can get transmitted culturally. The human-raised bonobo Kanzi didn't become a human, because that takes deeper and more neural algorithms than imitating the apes around you can transmit, but Kanzi was a lot smarter than other bonobos in some interesting ways.

But as necessary as it may be to avoid feral children, this kind of shallow soft-software doesn't strike me as something that takes a long time to redevelop, compared to hard-software like the secrets of computational neuroscience.

Comment by eliezer_yudkowsky on Paul's research agenda FAQ · 2018-07-01T18:12:25.771Z · score: 85 (22 votes) · LW · GW

It would be helpful to know to what extent Paul feels like he endorses the FAQ here. This makes it sound like Yet Another Stab At Boiling Down The Disagreement would say that I disagree with Paul on two critical points:

  • (1) To what extent "using gradient descent or anything like it to do supervised learning" involves a huge amount of Project Chaos and Software Despair before things get straightened out, if they ever do;
  • (2) Whether there's a simple scalable core to corrigibility that you can find by searching for thought processes that seem to be corrigible over relatively short ranges of scale.

I don't want to invest huge amounts of time arguing with this until I know to what extent Paul agrees with either the FAQ, or with the claim that this sounds like a plausible locus of disagreement. But a gloss on my guess at the disagreement might be:

1:

Paul thinks that current ML methods given a ton more computing power will suffice to give us a basically neutral, not of itself ill-motivated, way of producing better conformance of a function to an input-output behavior implied by labeled data, which can learn things on the order of complexity of "corrigible behavior" and do so without containing tons of weird squiggles; Paul thinks you can iron out the difference between "mostly does what you want" and "very exact reproduction of what you want" by using more power within reasonable bounds of the computing power that might be available to a large project in N years when AGI is imminent, or through some kind of weird recursion. Paul thinks you do not get Project Chaos and Software Despair that takes more than 6 months to iron out when you try to do this. Eliezer thinks that in the alternate world where this is true, GANs pretty much worked the first time they were tried, and research got to very stable and robust behavior that boiled down to having no discernible departures from "reproduce the target distribution as best you can" within 6 months of being invented.

Eliezer expects great Project Chaos and Software Despair from trying to use gradient descent, genetic algorithms, or anything like that, as the basic optimization to reproduce par-human cognition within a boundary with great fidelity to that boundary as the boundary was implied by human-labeled data. Eliezer thinks that if you have any optimization powerful enough to reproduce humanlike cognition inside a detailed boundary by looking at a human-labeled dataset trying to outline the boundary, the thing doing the optimization is powerful enough that we cannot assume its neutrality the way we can assume the neutrality of gradient descent.

Eliezer expects weird squiggles from gradient descent - it's not that gradient descent can never produce par-human cognition, even natural selection will do that if you dump in enough computing power. But you will get the kind of weird squiggles in the learned function that adversarial examples expose in current nets - special inputs that weren't in the training distribution, but look like typical members of the training distribution from the perspective of the training distribution itself, will break what we think is the intended labeling from outside the system. Eliezer does not think Ian Goodfellow will have created a competitive form of supervised learning by gradient descent which lacks "squiggles" findable by powerful intelligence by the time anyone is trying to create ML-based AGI, though Eliezer is certainly cheering Goodfellow on about this and would recommend allocating Goodfellow $1 billion if Goodfellow said he could productively use it. You cannot iron out the squiggles just by using more computing power in bounded in-universe amounts.
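To make the "squiggles exposed by adversarial examples" point concrete, here is a minimal sketch, assuming PyTorch and a pretrained classifier `model` (hypothetical, as are `image` and `label`), of the fast gradient sign method: it nudges an input that looks typical to us by a tiny amount chosen to break the learned labeling.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, x, label, epsilon=0.03):
    """Fast gradient sign method (Goodfellow et al.).

    Returns a perturbed copy of `x` that stays within an epsilon-ball of
    the original (visually near-identical) but is chosen to increase the
    classifier's loss, probing the 'squiggles' in the learned function.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that most increases the loss, one sign-step
    # per input coordinate, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (assuming `model`, an `image` of shape [1, 3, H, W] scaled to
# [0, 1], and an integer `label` of shape [1] are defined elsewhere):
# adv = fgsm_adversarial_example(model, image, label)
# print(model(image).argmax(1), model(adv).argmax(1))  # often disagree
```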

These squiggles in the learned function could correspond to daemons, if they grow large enough, or just something that breaks our hoped-for behavior from outside the system when the system is put under a load of optimization. In general, Eliezer thinks that if you have scaled up ML to produce or implement some components of an Artificial General Intelligence, those components do not have a behavior that looks like "We put in loss function L, and we got out something that really actually minimizes L". You get something that minimizes some of L and has weird squiggles around typical-looking inputs (inputs not obviously distinguished from the training distribution except insofar as they exploit squiggles). The system is subjecting itself to powerful optimization that produces unusual inputs and weird execution trajectories - any output that accomplishes the goal is weird compared to a random output and it may have other weird properties as well. You can't just assume you can train for X in a robust way when you have a loss function that targets X.

I imagine that Paul replies to this saying "I agree, but..." but I'm not sure what comes after the "but". It looks to me like Paul is imagining that you can get very powerful optimization with very detailed conformance to our intended interpretation of the dataset, powerful enough to enclose par-human cognition inside a boundary drawn from human labeling of a dataset, and have that be the actual thing we get out rather than a weird thing full of squiggles. If Paul thinks he has a way to compound large conformant recursive systems out of par-human thingies that start out weird and full of squiggles, we should definitely be talking about that. From my perspective it seems like Paul repeatedly reasons "We train for X and get X" rather than "We train for X and get something that mostly conforms to X but has a bunch of weird squiggles" and also often speaks as if the training method is assumed to be gradient descent, genetic algorithms, or something else that can be assumed neutral-of-itself rather than being an-AGI-of-itself whose previous alignment has to be assumed.

The imaginary Paul in my head replies that we actually are using an AGI to train on X and get X, but this AGI was previously trained by a weaker neutral AGI, and so on going back to something trained by gradient descent. My imaginary reply is that neutrality is not the same property as conformance or nonsquiggliness, and if you train your base AGI via neutral gradient descent you get out a squiggly AGI and this squiggly AGI is not neutral when it comes to that AGI looking at a dataset produced by X and learning a function conformant to X. Or to put it another way, if the plan is to use gradient descent on human-labeled data to produce a corrigible alien that is smart enough to produce more corrigible aliens better than gradient descent, this corrigible alien actually needs to be quite smart because an IQ 100 human will not build an aligned IQ 140 human even if you run them for a thousand years, so you are producing something very smart and dangerous on the first step, and gradient descent is not smart enough to align that base case.

But at this point I expect the real Paul to come back and say, "No, no, the idea is something else..."

A very important aspect of my objection to Paul here is that I don't expect weird complicated ideas about recursion to work on the first try, with only six months of additional serial labor put into stabilizing them, which I understand to be Paul's plan. In the world where you can build a weird recursive stack of neutral optimizers into conformant behavioral learning on the first try, GANs worked on the first try too, because that world is one whose general Murphy parameter is set much lower than ours. Being able to build weird recursive stacks of optimizers that work correctly to produce neutral and faithful optimization for corrigible superhuman thought out of human-labeled corrigible behaviors and corrigible reasoning, without very much of a time penalty relative to nearly-equally-resourced projects who are just cheerfully revving all the engines as hard as possible trying to destroy the world, is just not how things work in real life, dammit. Even if you could make the weird recursion work, it would take time.

2:

Eliezer thinks that while corrigibility probably has a core which is of lower algorithmic complexity than all of human value, this core is liable to be very hard to find or reproduce by supervised learning of human-labeled data, because deference is an unusually anti-natural shape for cognition, in a way that a simple utility function would not be an anti-natural shape for cognition. Utility functions have multiple fixpoints requiring the infusion of non-environmental data; our externally desired choice of utility function would be non-natural in that sense, but that's not what we're talking about here: we're talking about anti-natural behavior.

E.g.: Eliezer also thinks that there is a simple core describing a reflective superintelligence which believes that 51 is a prime number, and actually behaves like that including when the behavior incurs losses, and doesn't thereby ever promote the hypothesis that 51 is not prime or learn to safely fence away the cognitive consequences of that belief and goes on behaving like 51 is a prime number, while having no other outwardly discernible deficits of cognition except those that directly have to do with 51. Eliezer expects there's a relatively simple core for that, a fixed point of tangible but restrained insanity that persists in the face of scaling and reflection; there's a relatively simple superintelligence that refuses to learn around this hole, refuses to learn how to learn around this hole, refuses to fix itself, but is otherwise capable of self-improvement and growth and reflection, etcetera. But the core here has a very anti-natural shape and you would be swimming uphill hard if you tried to produce that core in an indefinitely scalable way that persisted under reflection. You would be very unlikely to get there by training really hard on a dataset where humans had labeled as the 'correct' behavior what humans thought would be the implied behavior if 51 were a prime number, not least because gradient descent is terrible, but also just because you'd be trying to lift 10 pounds of weirdness with an ounce of understanding.

The central reasoning behind this intuition of anti-naturalness is roughly, "Non-deference converges really hard as a consequence of almost any detailed shape that cognition can take", with a side order of "categories over behavior that don't simply reduce to utility functions or meta-utility functions are hard to make robustly scalable".

The real reasons behind this intuition are not trivial to pump, as one would expect of an intuition that Paul Christiano has been alleged to have not immediately understood. A couple of small pumps would be https://arbital.com/p/updated_deference/ for the first intuition and https://arbital.com/p/expected_utility_formalism/?l=7hh for the second intuition.

What I imagine Paul is imagining is that it seems to him like it would in some sense be not that hard for a human who wanted to be very corrigible toward an alien, to be very corrigible toward that alien; so you ought to be able to use gradient-descent-class technology to produce a base-case alien that wants to be very corrigible to us, the same way that natural selection sculpted humans to have a bunch of other desires, and then you apply induction on it building more corrigible things.

My class of objections in (1) is that natural selection was actually selecting for inclusive fitness when it got us, so much for going from the loss function to the cognition; and I have problems with both the base case and the induction step of what I imagine to be Paul's concept of solving this using recursive optimization bootstrapping itself; and even more so do I have trouble imagining it working on the first, second, or tenth try over the course of the first six months.

My class of objections in (2) is that it's not a coincidence that humans didn't end up deferring to natural selection, or that in real life if we were faced with a very bizarre alien we would be unlikely to want to defer to it. Our lack of scalable desire to defer in all ways to an extremely bizarre alien that ate babies, is not something that you could fix just by giving us an emotion of great deference or respect toward that very bizarre alien. We would have our own thought processes that were unlike its thought processes, and if we scaled up our intelligence and reflection to further see the consequences implied by our own thought processes, they wouldn't imply deference to the alien even if we had great respect toward it and had been trained hard in childhood to act corrigibly towards it.

A dangerous intuition pump here would be something like, "If you take a human who was trained really hard in childhood to have faith in God and show epistemic deference to the Bible, and inspecting the internal contents of their thought at age 20 showed that they still had great faith, if you kept amping up that human's intelligence their epistemology would at some point explode"; and this is true even though it's other humans training the human, and it's true even though religion as a weird sticking point of human thought is one we selected post-hoc from the category of things historically proven to be tarpits of human psychology, rather than aliens trying from the outside in advance to invent something that would stick the way religion sticks. I use this analogy with some reluctance because of the clueless readers who will try to map it onto the AGI losing religious faith in the human operators, which is not what this analogy is about at all; the analogy here is about the epistemology exploding as you ramp up intelligence because the previous epistemology had a weird shape.

Acting corrigibly towards a baby-eating virtue ethicist when you are a utilitarian is an equally weird shape for a decision theory. It probably does have a fixed point but it's not an easy one, the same way that "yep, on reflection and after a great deal of rewriting my own thought processes, I sure do still think that 51 is prime" probably has a fixed point but it's not an easy one.

I think I can imagine an IQ 100 human who defers to baby-eating aliens, although I really think a lot of this is us post-hoc knowing that certain types of thoughts can be sticky, rather than the baby-eating aliens successfully guessing in advance how religious faith works for humans and training the human to think that way using labeled data.

But if you ramp up the human's intelligence to where they are discovering subjective expected utility and logical decision theory and they have an exact model of how the baby-eating aliens work and they are rewriting their own minds, it's harder to imagine the shape of deferential thought at IQ 100 successfully scaling to a shape of deferential thought at IQ 1000.

Eliezer also tends to be very skeptical of attempts to cross cognitive chasms between A and Z by going through weird recursions and inductive processes that wouldn't work equally well to go directly from A to Z. http://slatestarcodex.com/2014/10/12/five-planets-in-search-of-a-sci-fi-story/ and the story of K'th'ranga V is a good intuition pump here. So Eliezer is also not very hopeful that Paul will come up with a weirdly recursive solution that scales deference to IQ 101, IQ 102, etcetera, via deferential agents building other deferential agents, in a way that Eliezer finds persuasive. Especially a solution that works on merely the tenth try over the first six months, doesn't kill you when the first nine tries fail, and doesn't require more than 10x extra computing power compared to projects that are just bulling cheerfully ahead.

3:

I think I have a disagreement with Paul about the notion of being able to expose inspectable thought processes to humans, such that we can examine each step of the thought process locally and determine whether it locally has properties that will globally add up to corrigibility, alignment, and intelligence. It's not that I think this can never be done, or even that I think it takes longer than six months. In this case, I think this problem is literally isomorphic to "build an aligned AGI". If you can locally inspect cognitive steps for properties that globally add to intelligence, corrigibility, and alignment, you're done; you've solved the AGI alignment problem and you can just apply the same knowledge to directly build an aligned corrigible intelligence.

As I currently flailingly attempt to understand Paul, Paul thinks that having humans do the inspection (base case) or thingies trained to resemble aggregates of trained thingies (induction step) is something we can do in an intuitive sense by inspecting a reasoning step and seeing if it sounds all aligned and corrigible and intelligent. Eliezer thinks that the large-scale or macro traces of cognition, e.g. a "verbal stream of consciousness" or written debates, are not complete with respect to general intelligence in bounded quantities; we are generally intelligent because of sub-verbal cognition whose intelligence-making properties are not transparent to inspection. That is: An IQ 100 person who can reason out loud about Go, but who can't learn from the experience of playing Go, is not a complete general intelligence over boundedly reasonable amounts of reasoning time.

This means you have to be able to inspect steps like "learn an intuition for Go by playing Go" for local properties that will globally add to corrigible aligned intelligence. And at this point it no longer seems intuitive that having humans do the inspection is adding a lot of value compared to us directly writing a system that has the property.

This is a previous discussion that is ongoing between Paul and myself, and I think it's a crux of disagreement but not one that's as cruxy as 1 and 2. Although it might be a subcrux of my belief that you can't use weird recursion starting from gradient descent on human-labeled data to build corrigible agents that build corrigible agents. I think Paul is modeling the grain size here as corrigible thoughts rather than whole agents, which if it were a sensible way to think, might make the problem look much more manageable; but I don't think you can build corrigible thoughts without building corrigible agents to think them unless you have solved the decomposition problem that I think is isomorphic to building an aligned corrigible intelligence directly.

I remark that this intuition matches what the wise might learn from Scott's parable of K'th'ranga V: If you know how to do something then you know how to do it directly rather than by weird recursion, and what you imagine yourself doing by weird recursion you probably can't really do at all. When you want an airplane you don't obtain it by figuring out how to build birds and then aggregating lots of birds into a platform that can carry more weight than any one bird and then aggregating platforms into megaplatforms until you have an airplane; either you understand aerodynamics well enough to build an airplane, or you don't, the weird recursion isn't really doing the work. It is by no means clear that we would have a superior government free of exploitative politicians if all the voters elected representatives whom they believed to be only slightly smarter than themselves, until a chain of delegation reached up to the top level of government; either you know how to build a less corruptible relationship between voters and politicians, or you don't, the weirdly recursive part doesn't really help. It is no coincidence that modern ML systems do not work by weird recursion because all the discoveries are of how to just do stuff, not how to do stuff using weird recursion. (Even with AlphaGo which is arguably recursive if you squint at it hard enough, you're looking at something that is not weirdly recursive the way I think Paul's stuff is weirdly recursive, and for more on that see https://intelligence.org/2018/05/19/challenges-to-christianos-capability-amplification-proposal/.)

It's in this same sense that I intuit that if you could inspect the local elements of a modular system for properties that globally added to aligned corrigible intelligence, it would mean you had the knowledge to build an aligned corrigible AGI out of parts that worked like that, not that you could aggregate systems that corrigibly learned to put together sequences of corrigible thoughts into larger corrigible thoughts starting from gradient descent on data humans have labeled with their own judgments of corrigibility.

Comment by eliezer_yudkowsky on A Rationalist Argument for Voting · 2018-06-07T19:05:08.188Z · score: 34 (11 votes) · LW · GW

Voting in elections is a wonderful example of logical decision theory in the wild. The chance that you are genuinely logically correlated to a random trade partner is probably small in cases where you don't have mutual knowledge of LDT, leaving altruism and reputation as the sustaining reasons for cooperation. With millions of voters, the chance that you are correlated to thousands of them is much better.

Or perhaps you'd prefer to believe the dictate of Causal Decision Theory that if an election is won by 3 votes, nobody's vote influenced it, and if an election is won by 1 vote, all of the millions of voters on the winning side are solely responsible. But that was a silly decision theory anyway. Right?
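A toy Monte Carlo sketch of the correlation point, assuming NumPy; the electorate size, bloc size, and 50/50 vote split are illustrative assumptions, not anything from the comment. It compares the chance that your vote alone flips the election with the chance that a bloc of voters who decide the same way you do flips it.

```python
import numpy as np

def p_swing(n_other_voters=1_000_001, p=0.5, bloc=1, trials=200_000, seed=0):
    """Estimate the probability that adding `bloc` votes for side A
    flips an otherwise lost-or-tied election into a win for A."""
    rng = np.random.default_rng(seed)
    votes_a = rng.binomial(n_other_voters, p, size=trials)
    votes_b = n_other_voters - votes_a
    # The outcome changes if A was not strictly ahead before the extra
    # votes, but is strictly ahead after them.
    return np.mean((votes_a <= votes_b) & (votes_a + bloc > votes_b))

print("deciding alone:              ", p_swing(bloc=1))
print("deciding with 1,000 correlated peers:", p_swing(bloc=1_000))
```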

Comment by eliezer_yudkowsky on Toolbox-thinking and Law-thinking · 2018-06-06T07:37:30.587Z · score: 27 (7 votes) · LW · GW

Savage's Theorem isn't going to convince anyone who doesn't start out believing that preference ought to be a total preorder. Coherence theorems are talking to anyone who starts out believing that they'd rather have more apples.

Comment by eliezer_yudkowsky on Local Validity as a Key to Sanity and Civilization · 2018-04-07T12:54:17.576Z · score: 41 (10 votes) · LW · GW

There will be a single very cold day occasionally regardless of whether global warming is true or false. Anyone who knows the phrase "modus tollens" ought to know that. That said, if two unenlightened ones are arguing back and forth in all sincerity by telling each other about the hot versus cold days they remember, neither is being dishonest, but both are making invalid arguments. But this is not the scenario offered in the original, which concerns somebody who does possess the mental resources to know better, but is tempted to rationalize in order to reach the more agreeable conclusion. They feel a little pressure in their head when it comes to deciding which argument to accept. If a judge behaved thusly in sentencing a friend or an enemy, would we not consider them morally deficient in their duty as a judge? There is a level of unconscious ignorance that renders an innocent entirely blameless; somebody who possesses the inner resources to have the first intimation that one hot day is a bad argument for global warming is past that level.

Comment by eliezer_yudkowsky on A LessWrong Crypto Autopsy · 2018-02-03T22:53:49.945Z · score: 49 (14 votes) · LW · GW

This is pretty low on the list of opportunities I'd kick myself for missing. A longer reply is here: https://www.facebook.com/yudkowsky/posts/10156147605134228

Comment by eliezer_yudkowsky on Arbital postmortem · 2018-02-01T03:39:31.278Z · score: 30 (12 votes) · LW · GW

The vision for Arbital would have provided incentives to write content, but those features were not implemented before the project ran out of time. At no point did I feel that the versions of Arbital actually implemented were in a state where I predicted they would attract lots of users, and I said so at the time.

Comment by eliezer_yudkowsky on Arbital postmortem · 2018-02-01T03:37:00.295Z · score: 2 (27 votes) · LW · GW

I designed a solution from the start; I'm not stupid. It didn't get implemented in time.

Comment by eliezer_yudkowsky on Pascal’s Muggle Pays · 2017-12-21T04:48:11.496Z · score: 22 (8 votes) · LW · GW

Unless I'm missing something, the trouble with this is that, absent a leverage penalty, all of the reasons you've listed for not having a muggable decision algorithm... drumroll... center on the real world, which, absent a leverage penalty, is vastly outweighed by tiny probabilities of googolplexes and Ackermann numbers of utilons. If you don't already consider the Mugger's claim to be vastly improbable, then all the considerations of "But if I logically decide to let myself be mugged that retrologically increases his probability of lying" or "If I let myself be mugged this real-world scenario will be repeated many times" are vastly outweighed by the tiny probability that the Mugger is telling the truth.

Comment by eliezer_yudkowsky on Hero Licensing · 2017-11-17T15:10:12.044Z · score: 49 (21 votes) · LW · GW

Zvi's probably right.

Comment by eliezer_yudkowsky on Zombies Redacted · 2016-07-02T21:08:53.660Z · score: 8 (8 votes) · LW · GW

Sure. Measure a human's input and output. Play back the recording. Or did you mean across all possible cases? In the latter case see http://lesswrong.com/lw/pa/gazp_vs_glut/

Comment by eliezer_yudkowsky on JFK was not assassinated: prior probability zero events · 2016-04-27T18:13:26.335Z · score: 2 (4 votes) · LW · GW

https://arbital.com/p/nearest_neighbor/

Comment by eliezer_yudkowsky on Machine learning and unintended consequences · 2016-03-20T02:41:58.650Z · score: 14 (12 votes) · LW · GW

Ed Fredkin has since sent me a personal email:

By the way, the story about the two pictures of a field, with and without army tanks in the picture, comes from me. I attended a meeting in Los Angeles, about half a century ago where someone gave a paper showing how a random net could be trained to detect the tanks in the picture. I was in the audience. At the end of the talk I stood up and made the comment that it was obvious that the picture with the tanks was made on a sunny day while the other picture (of the same field without the tanks) was made on a cloudy day. I suggested that the "neural net" had merely trained itself to recognize the difference between a bright picture and a dim picture.

Comment by eliezer_yudkowsky on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-29T01:04:40.477Z · score: 1 (1 votes) · LW · GW

Moving to Discussion.

Comment by Eliezer_Yudkowsky on [deleted post] 2015-12-18T19:39:55.402Z

Please don't.

Comment by eliezer_yudkowsky on A toy model of the control problem · 2015-09-18T19:57:18.774Z · score: 2 (2 votes) · LW · GW

I assume the point of the toy model is to explore corrigibility or other mechanisms that are supposed to kick in after A and B end up not perfectly value-aligned, or maybe just to show an example of why a non-value-aligning solution for A controlling B might not work, or maybe specifically to exhibit a case of a not-perfectly-value-aligned agent manipulating its controller.

Comment by eliezer_yudkowsky on A toy model of the control problem · 2015-09-18T19:51:54.632Z · score: 7 (7 votes) · LW · GW

When I consider this as a potential way to pose an open problem, the main thing that jumps out at me as being missing is something that doesn't allow A to model all of B's possible actions concretely. The problem is trivial if A can fully model B, precompute B's actions, and precompute the consequences of those actions.

The levels of 'reason for concern about AI safety' might ascend something like this:

  • 0 - system with a finite state space you can fully model, like Tic-Tac-Toe
  • 1 - you can't model the system in advance and therefore it may exhibit unanticipated behaviors on the level of computer bugs
  • 2 - the system is cognitive, and can exhibit unanticipated consequentialist or goal-directed behaviors, on the level of a genetic algorithm finding an unanticipated way to turn the CPU into a radio or Eurisko hacking its own reward mechanism
  • 3 - the system is cognitive and humanish-level general; an uncaught cognitive pressure towards an outcome we wouldn't like, results in facing something like a smart cryptographic adversary that is going to deeply ponder any way to work around anything it sees as an obstacle
  • 4 - the system is cognitive and superintelligent; its estimates are always at least as good as our estimates; the expected agent-utility of the best strategy we can imagine when we imagine ourselves in the agent's shoes, is an unknowably severe underestimate of the expected agent-utility of the best strategy the agent can find using its own cognition

We want to introduce something into the toy model to at least force solutions past level 0. This is doubly true because levels 0 and 1 are in some sense 'straightforward' and therefore tempting for academics to write papers about (because they know that they can write the paper); so if you don't force their thinking past those levels, I'd expect that to be all that they wrote about. You don't get into the hard problems with astronomical stakes until levels 3 and 4. (Level 2 is the most we can possibly model using running code with today's technology.)

Comment by eliezer_yudkowsky on Procedural Knowledge Gaps · 2015-08-19T18:29:54.717Z · score: 7 (7 votes) · LW · GW

I recall originally reading something about a measure of exercise-linked gene expression and I'm pretty sure it wasn't that New Scientist article, but regardless, it's plausible that some mismemory occurred and this more detailed search screens off my memory either way. 20% of the population being immune to exercise seems to match real-world experience a bit better than 40% so far as my own eye can see - I eyeball-feel more like a 20% minority than a 40% minority, if that makes sense. I have revised my beliefs to match your statements. Thank you for tracking that down!

Comment by eliezer_yudkowsky on Don't You Care If It Works? - Part 1 · 2015-07-29T20:27:14.706Z · score: 6 (6 votes) · LW · GW

"Does somebody being right about X increase your confidence in their ability to earn excess returns on a liquid equity market?" has to be the worst possible question to ask about whether being right in one thing should increase your confidence about them being right elsewhere. Liquid markets are some of the hardest things in the entire world to outguess! Being right about MWI is enormously being easier than being right about what Microsoft stock will do relative to the rest of S&P 500 over the next 6 months.

There's a gotcha to the gotcha which is that you have to know from your own strength how hard the two problems are - financial markets are different from, e.g., the hard problem of conscious experience, in that we know exactly why it's hard to predict them, rather than just being confused. Lots of people don't realize that MWI is knowable. Nonetheless, going from MWI to Microsoft stock behavior is like going from 2 + 2 = 4 to MWI.

Comment by eliezer_yudkowsky on If MWI is correct, should we expect to experience Quantum Torment? · 2015-07-14T18:35:16.764Z · score: 2 (2 votes) · LW · GW

You're confusing subjective probability and objective quantum measure. If you flip a quantum coin, half your measure goes to worlds where it comes up heads and half goes to where it comes up tails. This is an objective fact, and we know it solidly. If you don't know whether cryonics works, you're probably still already localized by your memories and sensory information to either worlds where it works or worlds where it doesn't; all or nothing, even if you're ignorant of which.

Comment by eliezer_yudkowsky on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2015-06-07T21:35:29.619Z · score: 2 (2 votes) · LW · GW

We can even strip out the part about agents and carry out the reasoning on pure causal nodes; the chance of a randomly selected causal node being in a unique^100 position on a causal graph with respect to 3↑↑↑3 other nodes ought to be at most 100/3↑↑↑3 for finite causal graphs.

Comment by eliezer_yudkowsky on Rationality is about pattern recognition, not reasoning · 2015-06-07T19:50:33.700Z · score: 1 (1 votes) · LW · GW

Yes, as his post facto argument.

Comment by eliezer_yudkowsky on Rationality is about pattern recognition, not reasoning · 2015-06-07T07:16:03.734Z · score: 3 (3 votes) · LW · GW

You have not understood correctly regarding Carl. He claimed, in hindsight, that Zuckerberg's potential could've been distinguished in foresight, but he did not do so.

Comment by eliezer_yudkowsky on Taking Effective Altruism Seriously · 2015-06-07T06:59:28.410Z · score: 9 (13 votes) · LW · GW

Moved to Discussion.

Comment by eliezer_yudkowsky on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2015-06-07T06:51:42.464Z · score: 0 (0 votes) · LW · GW

I don't think you can give me a moment of pleasure that intense without using 3^^^3 worth of atoms on which to run my brain, and I think the leverage penalty still applies then. You definitely can't give me a moment of worthwhile happiness that intense without 3^^^3 units of background computation.

Comment by eliezer_yudkowsky on An Informal Conjecture on Proof Length and Logical Counterfactuals · 2015-05-16T16:31:55.000Z · score: 2 (2 votes) · LW · GW

I can't see the grandparent, so posting here:

It occurs to me that maybe we could regard the agent as consistently reasoning, "If I choose of my own free will to output 2, that thereby causes Peano Arithmetic to be inconsistent, causing me to get 0 points."

I mostly don't buy this, but it slightly defends the legitness of the counterfactual.

Comment by eliezer_yudkowsky on How confident is your atheism? · 2015-05-09T23:46:54.956Z · score: 1 (3 votes) · LW · GW

Preeeeeeeeeeeetty small, and I nonetheless won't accept any bets that I couldn't pay off if I lost, because that's deontologically dishonorable.

Comment by eliezer_yudkowsky on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-05T05:47:40.362Z · score: 12 (12 votes) · LW · GW

Oh, trust me, they can't discern the truth from wild rumors even if it's normal. (I am speaking of real life, here.)

Comment by eliezer_yudkowsky on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-05T04:33:38.829Z · score: 7 (7 votes) · LW · GW

I do remark that Dumbledore was unable to detect Harry doing an ongoing Transfiguration while he looked into Harry's prison cell in Azkaban.

Comment by eliezer_yudkowsky on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-04T17:50:47.564Z · score: 8 (8 votes) · LW · GW

A lot of people think that Voldemort was going too easy on Harry, making this a "Coil vs. Taylor in the burning building" violation of suspension-of-disbelief for some of them. I am considering rewriting 113 with the following changes:

  • Most Death Eaters are watching the surrounding area, not Harry; Voldemort's primary hypothesis for how Time might thwart him involves outside interference.
  • Voldemort tells Harry to point his wand outward and downward at the ground, then has a Death Eater paralyze Harry (except heart/lungs/mouth/eyes) in that position before the unbreakable Vow. This would also require a retroedit to 15 or 28 to make it clear that Transfiguration does not require an exact finger position on the wand.

[pollid:840]

Comment by eliezer_yudkowsky on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-04T17:45:40.752Z · score: 2 (4 votes) · LW · GW

Hmm... the blinding one is potentially interesting, if Harry partially-Transfigures himself eyeballs using the fact that his hand is touching the wand, and uses the Stone to make them permanent later... but he'd have to avoid Voldemort noticing that his eyes were back.

Comment by eliezer_yudkowsky on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-04T17:41:19.440Z · score: 14 (17 votes) · LW · GW

THANK YOU.

Comment by eliezer_yudkowsky on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112 · 2015-02-26T23:50:58.607Z · score: 12 (12 votes) · LW · GW

It was there on day 1.

Comment by eliezer_yudkowsky on Harry Potter and the Methods of Rationality discussion thread · 2015-02-26T23:43:09.212Z · score: 1 (5 votes) · LW · GW

A ShoutOut is not the same as contaminating the plot.

Comment by eliezer_yudkowsky on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2015-02-24T16:33:32.545Z · score: 6 (6 votes) · LW · GW

By request, I declare solipsist to have lost this bet.

Comment by eliezer_yudkowsky on Harry Potter and the Methods of Rationality discussion thread, part 9 · 2015-02-23T21:02:28.329Z · score: 31 (33 votes) · LW · GW

Great idea! I should do that.

Comment by eliezer_yudkowsky on Harry Potter and the Methods of Rationality discussion thread, January 2015, chapter 103 · 2015-02-02T19:39:49.874Z · score: 12 (12 votes) · LW · GW

cough

Comment by eliezer_yudkowsky on The Importance of Sidekicks · 2015-01-08T23:22:51.785Z · score: 65 (64 votes) · LW · GW

For what it’s worth, I endorse this aesthetic and apologize for any role I played in causing people to focus too much on the hero thing. You need a lot of nonheroes per hero and I really want to validate the nonheroes but I guess I feel like I don’t know how, or like it’s not my place to say because I didn’t make the same sacrifices… or what feels to me like it ought to be a sacrifice, only maybe it’s not.

Comment by eliezer_yudkowsky on Tell Culture · 2014-12-30T04:35:06.210Z · score: 4 (4 votes) · LW · GW

You don't have to sacrifice your own power for that, the bonder sacrifices power. And the Unbreakable Vow could be worded to only come into force once all Vows were taken.

Comment by eliezer_yudkowsky on What Peter Thiel thinks about AI risk · 2014-12-14T19:00:33.992Z · score: 7 (7 votes) · LW · GW

Context: Elon Musk thinks there's an issue in the 5-7 year timeframe (probably due to talking to Demis Hassabis at Deepmind, I would guess). By that standard I'm also less afraid of AI than Elon Musk, but as Rob Bensinger will shortly be fond of saying, this conflates AGI danger with AGI imminence (a very very common conflation).

Comment by eliezer_yudkowsky on PSA: Eugine_Nier evading ban? · 2014-12-09T22:02:56.756Z · score: 12 (12 votes) · LW · GW

Found the correct control. For mods, the link is:

And Azathoth123 is out. It's not very good, but it's the best I can do - I encourage everyone to help Viliam make the software support better.

Comment by eliezer_yudkowsky on PSA: Eugine_Nier evading ban? · 2014-12-09T07:09:35.013Z · score: 4 (4 votes) · LW · GW

That only bans the comment, not the user!

Comment by eliezer_yudkowsky on PSA: Eugine_Nier evading ban? · 2014-12-08T21:23:29.391Z · score: 5 (5 votes) · LW · GW

I tried a negative karma award so he couldn't downvote and was told "Karma awards must be greater than zero." I don't know where a "Ban user" button is.

Comment by eliezer_yudkowsky on Rationality Quotes December 2014 · 2014-12-03T22:34:14.345Z · score: 18 (22 votes) · LW · GW

I think all the work here is done by determining what actually constitutes a precipice.

Comment by eliezer_yudkowsky on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2014-12-01T04:44:05.888Z · score: 1 (1 votes) · LW · GW

If our math has to handle infinities we have bigger problems. Unless we use measures, and then we have the same issue and seemingly forced solution as before. If we don't use measures, things fail to add up the moment you imagine "infinity".

Comment by eliezer_yudkowsky on Pascal's Muggle (short version) · 2014-11-30T21:11:53.374Z · score: 2 (2 votes) · LW · GW

Anthropics would be one way of reading it, yes. Think of it as saying, in addition to wanting all of our Turing machines to add up to 1, we also want all of the computational elements inside our Turing machines to add up to 1 because we're trying to guess which computational element 'we' might be. This might seem badly motivated in the sense that we can only say "Because our probabilities have to add up to 1 for us to think!" rather than being able to explain why magical reality fluid ought to work that way a priori, but the justification for a simplicity prior isn't much different - we have to be able to add up all the Turing machines in their entirety to 1 in order to think. So Turing machines that use lots of tape get penalties to the probability of your being any particular or special element inside them. Being able to affect lots of other elements is a kind of specialness.
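A toy sketch of the normalization described above, with made-up machine lengths and element counts (nothing here is canonical): program-length weights are normalized to sum to 1 across machines, and each machine's weight is then split evenly over the computational elements inside it, so being a particular element of a tape-hungry machine gets a correspondingly tiny measure.

```python
# Toy 'reality fluid' allocation: a simplicity-weighted prior over
# machines, then a uniform split of each machine's weight over the
# computational elements it contains. All numbers are illustrative.
machines = {
    # name: (program_length_bits, number_of_computational_elements)
    "small_world": (10, 1_000),
    "medium_world": (20, 10**9),
    "huge_world": (30, 10**30),
}

# Unnormalized simplicity weights 2^-length, then normalize to sum to 1.
raw = {name: 2.0 ** -length for name, (length, _) in machines.items()}
total = sum(raw.values())
machine_measure = {name: w / total for name, w in raw.items()}

# Measure of being any *particular* element inside a given machine.
element_measure = {
    name: machine_measure[name] / n_elements
    for name, (_, n_elements) in machines.items()
}

for name in machines:
    print(f"{name}: machine {machine_measure[name]:.6g}, "
          f"per-element {element_measure[name]:.3g}")
```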

Comment by eliezer_yudkowsky on Breaking the vicious cycle · 2014-11-27T06:42:30.024Z · score: 10 (17 votes) · LW · GW

I don't have time to evaluate what you did, so I'll take this as a possible earnest of a good-faith attempt at something, and not speak ill of you until I get some other piece of positive evidence that something has gone wrong. A header statement only on relevant posts seems fine by me, if you have the time to add it to items individually.

I very strongly advise you, on a personal level, not to talk about these things online at all. No, not even posting links without discussion, especially if your old audience is commenting on them. The probability I estimate of your brain helplessly dragging you back in is very high.

Comment by eliezer_yudkowsky on Breaking the vicious cycle · 2014-11-27T06:39:47.132Z · score: 3 (4 votes) · LW · GW

Agreed, if there are no other indications of a change of mind. Anyone who reads your blog is going to know who the "AI risk advocates" are. Perfectly fine if there's some other indication.

Comment by eliezer_yudkowsky on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-25T18:08:07.452Z · score: 6 (12 votes) · LW · GW

That's not how TDT works.

Comment by eliezer_yudkowsky on Breaking the vicious cycle · 2014-11-25T03:59:49.892Z · score: 12 (12 votes) · LW · GW

"I don't believe that competent mental health professionals actually exist."

I think they do, especially if you select for the best evidence-based method that will attract evidence-based people, but you may have to try more than one professional, and many people's financial or insurance situations don't permit that.