Posts

Nonspecific discomfort 2021-09-04T14:15:22.636Z
Fixing the arbitrariness of game depth 2021-07-17T12:37:11.669Z
Feedback calibration 2021-03-15T14:24:44.244Z
Three more stories about causation 2020-11-03T15:51:58.820Z
cousin_it's Shortform 2019-10-26T17:37:44.390Z
Announcement: AI alignment prize round 4 winners 2019-01-20T14:46:47.912Z
Announcement: AI alignment prize round 3 winners and next round 2018-07-15T07:40:20.507Z
How to formalize predictors 2018-06-28T13:08:11.549Z
UDT can learn anthropic probabilities 2018-06-24T18:04:37.262Z
Using the universal prior for logical uncertainty 2018-06-16T14:11:27.000Z
Understanding is translation 2018-05-28T13:56:11.903Z
Announcement: AI alignment prize round 2 winners and next round 2018-04-16T03:08:20.412Z
Using the universal prior for logical uncertainty (retracted) 2018-02-28T13:07:23.644Z
UDT as a Nash Equilibrium 2018-02-06T14:08:30.211Z
Beware arguments from possibility 2018-02-03T10:21:12.914Z
An experiment 2018-01-31T12:20:25.248Z
Biological humans and the rising tide of AI 2018-01-29T16:04:54.749Z
A simpler way to think about positive test bias 2018-01-22T09:38:03.535Z
How the LW2.0 front page could be better at incentivizing good content 2018-01-21T16:11:17.092Z
Beware of black boxes in AI alignment research 2018-01-18T15:07:08.461Z
Announcement: AI alignment prize winners and next round 2018-01-15T14:33:59.892Z
Announcing the AI Alignment Prize 2017-11-04T11:44:19.000Z
Announcing the AI Alignment Prize 2017-11-03T15:47:00.092Z
Announcing the AI Alignment Prize 2017-11-03T15:45:14.810Z
The Limits of Correctness, by Bryan Cantwell Smith [pdf] 2017-08-25T11:36:38.585Z
Using modal fixed points to formalize logical causality 2017-08-24T14:33:09.000Z
Against lone wolf self-improvement 2017-07-07T15:31:46.908Z
Steelmanning the Chinese Room Argument 2017-07-06T09:37:06.760Z
A cheating approach to the tiling agents problem 2017-06-30T13:56:46.000Z
What useless things did you understand recently? 2017-06-28T19:32:20.513Z
Self-modification as a game theory problem 2017-06-26T20:47:54.080Z
Loebian cooperation in the tiling agents problem 2017-06-26T14:52:54.000Z
Thought experiment: coarse-grained VR utopia 2017-06-14T08:03:20.276Z
Bet or update: fixing the will-to-wager assumption 2017-06-07T15:03:23.923Z
Overpaying for happiness? 2015-01-01T12:22:31.833Z
A proof of Löb's theorem in Haskell 2014-09-19T13:01:41.032Z
Consistent extrapolated beliefs about math? 2014-09-04T11:32:06.282Z
Hal Finney has just died. 2014-08-28T19:39:51.866Z
"Follow your dreams" as a case study in incorrect thinking 2014-08-20T13:18:02.863Z
Three questions about source code uncertainty 2014-07-24T13:18:01.363Z
Single player extensive-form games as a model of UDT 2014-02-25T10:43:12.746Z
True numbers and fake numbers 2014-02-06T12:29:08.136Z
Rationality, competitiveness and akrasia 2013-10-02T13:45:31.589Z
Bayesian probability as an approximate theory of uncertainty? 2013-09-26T09:16:04.448Z
Notes on logical priors from the MIRI workshop 2013-09-15T22:43:35.864Z
An argument against indirect normativity 2013-07-24T18:35:04.130Z
"Epiphany addiction" 2012-08-03T17:52:47.311Z
AI cooperation is already studied in academia as "program equilibrium" 2012-07-30T15:22:32.031Z
Should you try to do good work on LW? 2012-07-05T12:36:41.277Z
Bounded versions of Gödel's and Löb's theorems 2012-06-27T18:28:04.744Z

Comments

Comment by cousin_it on Trying to Keep the Garden Well · 2022-01-17T18:57:38.051Z · LW · GW

I think the right procedure works something like this: 1) Tenants notice that one of them has trashed the garden, and tell the landlord who. 2) The landlord tells the offending tenant to clean up or they'll be billed. 3) If the offending tenant doesn't clean up, the cleaning fee gets added to their next rent bill.

In your case it seems like the offending tenant wasn't pointed out. Maybe because other tenants didn't care, or maybe some tenants had a mafia mentality and made "snitching" unsafe. Either way, you were right to move away.

Comment by cousin_it on Understanding the tensor product formulation in Transformer Circuits · 2021-12-28T10:54:29.050Z · LW · GW

Can't say much about transformers, but the tensor product definition seems off. There can be many elements in V⊗W that aren't expressible as v⊗w, only as a linear combination of several such pure tensors. That can be seen from dimensionality: if V and W have dimensions n and m, the pure tensors form only an (n+m)-parameter family (essentially the Cartesian product), while the full tensor product has nm dimensions.

Here's an explanation of tensor products that I came up with some time ago in an attempt to make it "click". Imagine you have a "linear" function that takes in two vectors and spits out a number. But wait, there are two natural but incompatible ways to imagine it:

  1. f(a,b) + f(c,d) = f(a+c,b+d), linear in both arguments combined. The space of such functions has dimension n+m, and corresponds to Cartesian product.

  2. f(a,b) + f(a,c) = f(a,b+c) and also f(a,c) + f(b,c) = f(a+b,c), in other words, linear in each argument separately. The space of such functions has dimension nm, and corresponds to tensor product.

It's especially simple to work through the case n=m=1. In that case all functions satisfying (1) have the form f(x,y)=ax+by, so their space is 2-dimensional, while all functions satisfying (2) have the form f(x,y)=axy, so their space is 1-dimensional. Admittedly this case is a bit funny because nm<n+m, but you can see how in higher dimensions the space of functions of type (2) becomes much bigger, because it will have terms for x1y1, x1y2, etc.
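A minimal numpy sketch of the dimension-counting point, identifying V⊗W with n×m matrices so that pure tensors v⊗w are exactly the rank-1 matrices (the dimensions and variable names here are just for illustration):

```python
import numpy as np

# Identify V (dim n) tensor W (dim m) with n-by-m matrices:
# the pure tensor v (x) w corresponds to the rank-1 matrix outer(v, w).
n, m = 3, 4
v1, w1 = np.random.randn(n), np.random.randn(m)
v2, w2 = np.random.randn(n), np.random.randn(m)

pure = np.outer(v1, w1)                      # expressible as v (x) w
mixed = np.outer(v1, w1) + np.outer(v2, w2)  # generically not

print(np.linalg.matrix_rank(pure))   # 1 -> a pure tensor
print(np.linalg.matrix_rank(mixed))  # 2 -> needs a linear combination
```

Pairs (v, w) have only n+m degrees of freedom, so the rank-1 matrices are a thin slice of the full nm-dimensional space.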

Comment by cousin_it on The Debtor's Revolt · 2021-12-27T18:20:13.430Z · LW · GW

Consider my friend with the business plan to buy up laundromats. Let's say an illiquid, privately held laundromat makes a 25% return on invested capital. Suppose the stock market demands a 10% return for a small-cap company. So $100 million of privately held laundromats would generate $25 million in annual income, worth $250 million on the stock market, 2.5 times the initial investment. But if the laundromat company can finance 75% of the deal at 10% interest, then the cash cost of acquisition is $25 million. The cash flow profits of $25 million are reduced by $7.5 million in interest payments, for a net annual profit of $17.5 million. This company could sell for $175 million on the stock market, seven times the initial cash outlay. For this reason, orthodox financial theory recommends that companies borrow as much as they can get away with and roll over the debt perpetually, to maximize return on equity.

In the long run this subsidy to large purchasers should inflate the market price of inputs for the laundromat industry, simultaneously increasing the market price and reducing the profitability of existing businesses, creating increasing pressure to sell out.

I think even without the "subsidy to large purchasers", the situation described would have a much simpler outcome: everyone and their mom would start laundromats and drive profitability well below 25%. And the market price of existing laundromats might well fall as a result, not increase as the post says.
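For reference, a quick sketch checking the arithmetic in the quoted example, using the quoted figures (all amounts in $ millions):

```python
cost = 100.0             # privately held laundromats purchased
roic = 0.25              # 25% return on invested capital
required_return = 0.10   # return the stock market demands

income = cost * roic                          # 25.0 per year
unlevered_value = income / required_return    # 250.0, i.e. 2.5x the 100 invested

debt = 0.75 * cost                            # 75.0 financed at 10% interest
cash_outlay = cost - debt                     # 25.0 of cash
net_income = income - 0.10 * debt             # 17.5 after 7.5 of interest
levered_value = net_income / required_return  # 175.0, i.e. 7x the cash outlay

print(unlevered_value, levered_value)
```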

Comment by cousin_it on Reply to Eliezer on Biological Anchors · 2021-12-24T12:21:06.872Z · LW · GW

With these two points in mind, it seems off to me to confidently expect a new paradigm to be dominant by 2040 (even conditional on AGI being developed), as the second quote above implies. As for the first quote, I think the implication there is less clear, but I read it as expecting AGI to involve software well over 100x as efficient as the human brain, and I wouldn’t bet on that either (in real life, if AGI is developed in the coming decades—not based on what’s possible in principle.)

I think this misses the point a bit. The thing to be afraid of is not an all-new approach to replace neural networks, but rather new neural network architectures and training methods that are much more efficient than today's. It's not unreasonable to expect those, and not unreasonable to expect that they'll be much more efficient than humans, given how easy it is to beat humans at arithmetic for example, and given fast recent progress to superhuman performance in many other domains.

Comment by cousin_it on Occupational Infohazards · 2021-12-19T16:34:53.056Z · LW · GW

Idk, to me it still feels like the common harmful factor in all these situations is psychological mindfuckery. Experimenting on your mind / deliberately lowering defenses / group practices / psychedelics / making up psychological concepts and taking them seriously and so on. If you know a hundred times less about the thing you're tinkering with (your mind) than a car mechanic knows about cars, and at the same time you exercise a hundred times less caution than the mechanic does, what do you expect to happen? You'll break the thing, that's it. Even if you have a valid reason, like needing to fix a small crack in the thing; the zero knowledge + zero caution approach is still flawed, it will lead to more and bigger cracks, every time.

Comment by cousin_it on Considerations on interaction between AI and expected value of the future · 2021-12-09T23:16:11.434Z · LW · GW

To me it feels like alignment is a tiny target to hit, and around it there's a neighborhood of almost-alignment, where enough is achieved to keep people alive but locked out of some important aspect of human value. There are many aspects such that missing even one or two of them is enough to make life bad (complexity and fragility of value). You seem to be saying that if we achieve enough alignment to keep people alive, we have >50% chance of achieving all/most other aspects of human value as well, but I don't see why that's true.

Comment by cousin_it on Considerations on interaction between AI and expected value of the future · 2021-12-09T09:06:53.797Z · LW · GW

These involve extinction, so they don't answer the question of what the most likely outcome is conditional on non-extinction. I think the answer there is a specific kind of near-miss at alignment, which is quite scary.

Comment by cousin_it on Interpreting Yudkowsky on Deep vs Shallow Knowledge · 2021-12-08T10:57:26.130Z · LW · GW

I had the same view as you, and was persuaded out of it in this thread. Maybe to shift focus a little, one interesting question here is about training. How do you train a plan-generating AI? If you reward plans that sound like they'd succeed, regardless of how icky they seem, then the AI will become useless to you by outputting effective-sounding but icky plans. But if you reward only plans that look nice enough to execute, that tempts the AI to make plans that manipulate whoever is reading them, and we're back at square one.

Maybe that's a good way to look at the general problem. Instead of talking about AI architecture, just say we don't know any training methods that would make AI better than humans at real world planning and safe to interact with the world, even if it's just answering questions.

Comment by cousin_it on Considerations on interaction between AI and expected value of the future · 2021-12-08T00:14:43.948Z · LW · GW

I think alignment is finicky, and there's a "deep pit around the peak" as discussed here.

Comment by cousin_it on General alignment plus human values, or alignment via human values? · 2021-12-07T11:52:35.011Z · LW · GW

There are very “large” impacts to which we are completely indifferent (chaotic weather changes, the above-mentioned change in planetary orbits, the different people being born as a consequence of different people meeting and dating across the world, etc.) and other, smaller, impacts that we care intensely about (the survival of humanity, of people’s personal wealth, of certain values and concepts going forward, key technological innovations being made or prevented, etc.)

I don't think we are indifferent to these outcomes. We leave them to luck, but that's a fact about our limited capabilities, not about our values. If we had enough control over "chaotic weather changes" to steer a hurricane away from a coastal city, we would very much care about it. So if a strong AI can reason through these impacts, it suddenly faces a harder task than a human: "I'd like this apple to fall from the table, and I see that running the fan for a few minutes will achieve that goal, but that's due to subtly steering a hurricane and we can't have that".

Comment by cousin_it on Considerations on interaction between AI and expected value of the future · 2021-12-07T10:53:57.581Z · LW · GW

I think the default non-extinction outcome is a singleton resulting from a near miss at alignment, creating large amounts of suffering.

Comment by cousin_it on Soares, Tallinn, and Yudkowsky discuss AGI cognition · 2021-12-01T00:47:20.143Z · LW · GW

Yeah, I had a similar thought when reading that part. In agent-foundations discussions, the idea often came up that the right decision theory should quantify not over outputs or input-output maps, but over successor programs to run and delegate I/O to. Wei called it "UDT2".

Comment by cousin_it on Soares, Tallinn, and Yudkowsky discuss AGI cognition · 2021-12-01T00:25:03.345Z · LW · GW

“Though many predicted disaster, subsequent events were actually so slow and messy, they offered many chances for well-intentioned people to steer the outcome and everything turned out great!” does not sound like any particular segment of history book I can recall offhand.

I think the ozone hole and the Y2K problem fit the bill. Though of course that doesn't mean the AI problem will go the same way.

Comment by cousin_it on Frame Control · 2021-11-28T02:20:50.444Z · LW · GW

years ago I was at a large group dinner with acquaintances and a woman I didn’t like. She was talking about something I wasn’t interested in, mostly to a few other people at the table, and I drifted to looking at my phone. The woman then said loudly, “Oh, looks like I’m boring Aella”. This put me into a position

From that description I sympathize with the woman more.

Comment by cousin_it on [deleted post] 2021-11-28T01:30:15.915Z

I've been playing music for many years and have thought of many songs as "perfect" by various musical criteria, melody, beat and so on. But deep down I think musical criteria aren't the answer. It all comes down to which mood the song puts you in, so the perfect song = the one that hits the right mood at your current stage in life. So it's gonna be unavoidably different between people, and for the same person across time. For me as a teenager it was "Losing My Religion", somehow. Now at almost 40, this recording of Aguas de Março makes me smile.

Comment by cousin_it on A Bayesian Aggregation Paradox · 2021-11-23T12:16:06.577Z · LW · GW

I think your first example could be even simpler. Imagine you have a coin that's either fair, all-heads, or all-tails. If your prior is "fair or all-heads with probability 1/2 each", then seeing heads is evidence against "fair". But if your prior is "fair or all-tails with probability 1/2 each", then seeing heads is evidence for "fair". Even though "fair" started as 1/2 in both cases. So the moral of the story is that there's no such thing as evidence for or against a hypothesis, only evidence that favors one hypothesis over another.
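A worked version of that example, as a quick sketch (the function name and numbers are just for illustration):

```python
# Two hypotheses with prior 1/2 each: "fair" vs one alternative coin.
# Observation: a single heads.
def posterior_fair(p_heads_given_alt):
    prior_fair, prior_alt = 0.5, 0.5
    lik_fair = 0.5                 # P(heads | fair)
    lik_alt = p_heads_given_alt    # P(heads | alternative)
    return (prior_fair * lik_fair) / (prior_fair * lik_fair + prior_alt * lik_alt)

print(posterior_fair(1.0))  # alternative = all-heads: 1/3 < 1/2, evidence against "fair"
print(posterior_fair(0.0))  # alternative = all-tails: 1.0 > 1/2, evidence for "fair"
```

Same observation, same prior on "fair", opposite update, because what matters is the likelihood ratio against the competing hypothesis.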

Comment by cousin_it on Ngo and Yudkowsky on alignment difficulty · 2021-11-22T17:32:27.157Z · LW · GW

Thinking about it more, it seems that messy reward signals will lead to some approximation of alignment that works while the agent has low power compared to its "teachers", but at high power it will do something strange and maybe harm the values of its "teachers". That holds true for humanity gaining a lot of power and going against evolutionary values ("superstimuli"), and for individual humans gaining a lot of power and going against societal values ("power corrupts"), so it's probably true for AI as well. The worrying thing is that high power by itself seems sufficient for the change: for example, if an AI gets good at real-world planning, that constitutes power and therefore danger. And there don't seem to be any natural counterexamples. So yeah, I'm updating toward your view on this.

Comment by cousin_it on Split and Commit · 2021-11-22T16:18:34.430Z · LW · GW

A few years ago Abram and I were discussing something like this, and converged on "T. C. Chamberlin's essay about the method of multiple working hypotheses is the key to rationality". Or in other words: never have just one hypothesis; always have a next best one.

Comment by cousin_it on Ngo and Yudkowsky on alignment difficulty · 2021-11-19T11:14:25.864Z · LW · GW

This is tricky. Let's say we have a powerful black box that initially has no knowledge or morals, but a lot of malleable computational power. We train it to give answers to scary real-world questions, like how to succeed at business or how to manipulate people. If we reward it for competent answers while we can still understand the answers, at some point we'll stop understanding answers, but they'll continue being super-competent. That's certainly a danger and I agree with it. But by the same token, if we reward the box for aligned answers while we still understand them, the alignment will generalize too. There seems no reason why alignment would be much less learnable than competence about reality.

Maybe your and Eliezer's point is that competence about reality has a simple core, while alignment doesn't. But I don't see the argument for that. Reality is complex, and so are values. A process for learning and acting in reality can have a simple core, but so can a process for learning and acting on values. Humans pick up knowledge from their surroundings, which is part of "general intelligence", but we pick up values just as easily and using the same circuitry. Where does the symmetry break?

Comment by cousin_it on Ngo and Yudkowsky on alignment difficulty · 2021-11-17T12:31:21.642Z · LW · GW

I still don't understand. Let's say we ask an AI for a plan that would, conditional on its being executed, give us a lot of muffins. The AI gives us a plan that involves running a child AI, which would maximize muffins and hurt people along the way. We notice that and don't execute the plan.

It sounds like you're saying that "run the child AI" would be somehow concealed in the plan, so we don't notice it on inspection and execute the plan anyway. But plans optimized for "getting muffins conditional on the plan being executed" have no reason to be optimized for "manipulating people into executing the plan", because the latter doesn't help with the former.

What am I missing?

Comment by cousin_it on Ngo and Yudkowsky on alignment difficulty · 2021-11-16T13:31:03.302Z · LW · GW

I think it makes complete sense to say something like "once we have enough capability to run AIs making good real-world plans, some moron will run such an AI unsafely". And that itself implies a startling level of danger. But Eliezer seems to be making a stronger point, that there's no easy way to run such an AI safely, and all tricks like "ask the AI for plans that succeed conditional on them being executed" fail. And maybe I'm being thick, but the argument for that point still isn't reaching me somehow. Can someone rephrase for me?

Comment by cousin_it on Ngo and Yudkowsky on alignment difficulty · 2021-11-16T10:00:14.407Z · LW · GW

That seems wrong: living creatures have lots of specific behaviors that are genetically programmed.

In fact I think both you and John are misunderstanding the bottleneck. The point isn't that the genome is small, nor that it affects the mind indirectly. The point is that the mind doesn't affect the genome. Living creatures don't have the tech to encode their life experience into genes for the next generation.

Comment by cousin_it on Why do you believe AI alignment is possible? · 2021-11-15T23:10:38.575Z · LW · GW
  1. An AI that consistently follows a utility function seems possible. I can't think of a law of nature prohibiting that.

  2. A utility function is a preference ordering over possible worlds (actually over probability distributions on possible worlds, but that doesn't change the point).

  3. It seems like at least some possible worlds would be nice for humans. So there exists an ordering that puts these worlds on top.

  4. It's plausible that some such worlds, or the diff between them and our world, have a reasonably short description.

  5. Conclusion: an AI leading to worlds nice for humans should be possible and have a reasonably short description.

The big difficulty of course is in step 4. "So what's the short description?" "Uhh..."

Comment by cousin_it on Worst Commonsense Concepts? · 2021-11-15T22:31:57.095Z · LW · GW

To me, some of the worst commonsense ideas come from the amateur psychology school: "gaslighting", "blaming the victim", "raised by narcissists", "sealioning" and so on. They just teach you to stop thinking and take sides.

Logical fallacies, like "false equivalence" or "slippery slope", are in practice mostly used to dismiss arguments prematurely.

The idea of "necessary vs contingent" (or "essential vs accidental", "innate vs constructed" etc) is mostly used as an attack tool, and I think even professional usage is more often confusing than not.

Comment by cousin_it on Why do you believe AI alignment is possible? · 2021-11-15T16:56:19.937Z · LW · GW

I think a lot of human "alignment" isn't encoded in our brains, it's encoded only interpersonally, in the fact that we need to negotiate with other humans of similar power. Once a human gets a lot of power, often the brakes come off. To the extent that's true, alignment inspired by typical human architecture won't work well for a stronger-than-human AI, and some other approach is needed.

Comment by cousin_it on Why do you believe AI alignment is possible? · 2021-11-15T14:49:57.025Z · LW · GW

Arguments by definition don't work. If by "human values" you mean "whatever humans end up maximizing", then sure, but we are unstable and can be manipulated, which isn't what we want in an AI. And if you mean "what humans deeply want or need", then human actions don't seem very aligned with that, so we're back at square one.

Comment by cousin_it on Education on My Homeworld · 2021-11-15T01:30:35.321Z · LW · GW

In The Case against Education: Why the Education System Is a Waste of Time and Money, Bryan Caplan uses Earth data to make the case that compulsory education does not significantly increase literacy.

Compulsory education increases literacy, see the Likbez in the USSR.

Managing your own boredom requires freedom, which is the opposite of compulsion.

One can make the opposite assertion, that it's fastest learned through discipline, and point to Chinese or South Korean schools.

I don’t doubt that it’s useful to have the whole population learn reading and arithmetic, but this seems to me like it’s the kind of thing that can be taught in a few months.

From my couple of years' experience teaching average (non-selected) kids, expecting that something can be taught to them quickly is a recipe for disappointment.

If kids don’t learn reading automatically then that would imply that they wouldn’t text each other in the absence of school which, to me, is reductio ad absurdum.

Texting isn't enough for literacy: lots of kids can text but cannot read and understand a book. Ask any teacher.

Comment by cousin_it on Education on My Homeworld · 2021-11-14T14:43:06.764Z · LW · GW

There is no standard set of skills everyone is supposed to learn because if everyone learns something then its economic value becomes zero.

This seems wrong. Skills like literacy, numeracy, prosociality and the ability to manage your own boredom bring a lot of economic value, even (especially) if everyone has them. And looking at our world, most people don't acquire these skills freely and automatically; they have to be forced somewhat.

Comment by cousin_it on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-11T12:08:46.200Z · LW · GW

Pessimistic thought: human friendliness is in large part instrumental, used for cooperating with other humans of similar power, and often breaks down when the human acquires a lot of power. So faithfully imitating typical human friendliness will lead to an AI that gets unfriendly as it acquires power. That can happen even if there's no manipulation and no self-improvement.

Optimistic thought: it seems current AIs are capable at many things but aren't manipulating humans yet, and this window may last for a few years. What's the best way they could help us with alignment problems in the meanwhile?

Comment by cousin_it on Depositions and Rationality · 2021-11-04T17:10:54.908Z · LW · GW

Boxing in, by bracketing. People who claim to have no idea about a quantity will often give surprisingly tight ranges when explicitly interrogated.

And most of the time their original "no idea" will be more accurate than the stuff you made them make up.

I do think there's a rationality skill implicit in the text: the "coaching" that witnesses undergo to avoid giving answers they don't want to give. That'd be worth learning, as it's literally defense against the dark arts. And the test for it could be an interrogation of the kind that you describe.

Comment by cousin_it on The Opt-Out Clause · 2021-11-04T13:15:42.201Z · LW · GW

I didn't want to leave, but also didn't think reciting the phrase would do anything, so I recited it just as an exercise to overcome superstition, and nothing happened. Reminds me of how Ross Scott bought a bunch of people's souls for candy: one guy just said "sure, I'm hungry" and signed the contract. That's the way.

Comment by cousin_it on [Book Review] "The Bell Curve" by Charles Murray · 2021-11-03T07:59:20.358Z · LW · GW

(3) tax businesses for hogging up all the smart people, if they try to brain drain into their own firm?

Due to tax incidence, that's the same as taxing smart people for getting together. I don't like that for two reasons. First, people should be free to get together. Second, the freedom of smart people to get together could be responsible for large economic gains, so we should be careful about messing with it.

Comment by cousin_it on On the Universal Distribution · 2021-10-29T21:58:54.730Z · LW · GW

It's interesting that the "up to a finite fudge factor" problem isn't specific to universal priors. The same problem exists in ordinary probability theory, where you can have different priors about, for example, the propensity of a coin. Then after many observations, all reasonable priors get closer and closer to the true propensity of the coin.

Then it's natural to ask, what kind of such long-run truths do all universal priors converge on? It's not only truths of the form "this bit string comes from this program", because a universal prior can also look at a random string where a certain fraction of entries are zeros, and predict that the same trend will hold in the future (learn the propensity of a coin). So what's the general form of long-run truths that can be learned by universal priors? I'm having trouble formulating this to myself, and can't even decide if the set of learnable truths is inherent in the prior or something we imagine on top of it.
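For reference, the "finite fudge factor" statement I have in mind is the standard dominance property of universal priors (a sketch in symbols):

```latex
\text{For universal priors } M_1, M_2:\quad
\exists\, c > 0 \ \forall x:\ M_1(x) \ \ge\ c\, M_2(x),
\qquad\text{so}\qquad
-\log M_1(x) \ \le\ -\log M_2(x) + \log\tfrac{1}{c},
```

i.e. their log-scores on any data stream differ by at most a constant number of bits, which is what lets them agree in the long run despite different choices of universal machine.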

Comment by cousin_it on They don't make 'em like they used to · 2021-10-28T22:57:58.738Z · LW · GW

Input latency and unpredictability of it. One famous example is that for many years there were usable finger-drumming apps on iOS but not on Android, because on Android you couldn't make the touchscreen + app + OS + sound system let people actually drum in time. Something would always introduce a hundred ms of latency (give or take) at random moments, which is enough to mess up the feeling. Everyone knew it and no one could fix it.

Comment by cousin_it on They don't make 'em like they used to · 2021-10-28T22:39:41.500Z · LW · GW

Or just keep a piezoelectric lighter.

Comment by cousin_it on Ruling Out Everything Else · 2021-10-28T17:07:41.166Z · LW · GW

Diplomacy is a huge country that I've been discovering recently. There are amazing things you can achieve if you understand what everyone wants, know when to listen and when to speak and what to say and how to sound.

In particular, one trick I've found useful is making arguments that are "skippable". Instead of saying "stop everything, this argument overthrows everything you said", I prefer to point out a concern but make the person feel free to skip it and move past: "Hey, here's one thing that came to my mind, just want to add it to the discussion." If they have good sense, and your argument actually changes things, they won't move past; but they'll appreciate that you offered them the option.

Comment by cousin_it on Self-Integrity and the Drowning Child · 2021-10-25T12:52:22.202Z · LW · GW

It's more subtle than that. Utility functions, by design, encode preferences that are consistent over lotteries (immune to the Allais paradox), not just over pure outcomes.

Or equivalently, they make you say not only that you prefer pure outcome A to pure outcome B, but also by how much. That "by how much" must obey some constraints motivated by probability theory, and the simplest way to summarize them is to say each outcome has a numeric utility.
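For reference, one compact way to state the constraint is the von Neumann-Morgenstern representation (a sketch, writing lotteries as L = (p_i over outcomes o_i) and M = (q_j over o_j)):

```latex
L \succeq M \;\iff\; \sum_i p_i\, u(o_i) \ \ge\ \sum_j q_j\, u(o_j),
\qquad u \text{ unique up to } u \mapsto a\,u + b,\ \ a > 0.
```

The affine freedom is why only ratios of utility differences are meaningful: that's the "by how much".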

Comment by cousin_it on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T08:59:01.724Z · LW · GW

I think it's a fine way to think about mathematical logic, but if you try to think this way about reality, you'll end up with views that make internal sense and are self-reinforcing but don't follow the grain of facts at all. When you hear such views from someone else, it's a good idea to see which facts they give in support. Do their facts seem scant, cherrypicked, questionable when checked? Then their big claims are probably wrong.

The people who actually know their stuff usually come across very differently. Their statements are carefully delineated: "this thing about power was true in 10th century Byzantium, but it's not clear how much of it applies today".

Also, just to comment on this:

It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype.

I think it's somewhat changeable. Even for people like us, there are ways to make our processing more "fuzzy": deliberately dimming some things, rounding others. That has many benefits: on the intellectual level you learn to see many aspects of a problem instead of hyperfocusing on one; emotionally you get more peaceful when thinking about things; and interpersonally, the world is full of small spontaneous exchanges happening on the "warm fuzzy" level; it's not nearly so cold a place as it seems, and plugging into that market is so worth it.

Comment by cousin_it on Lies, Damn Lies, and Fabricated Options · 2021-10-22T08:36:04.223Z · LW · GW

When I said the government should use tax money to finance disaster recovery, I didn't mean it should give that money to individual people to buy disaster supplies.

Comment by cousin_it on In the shadow of the Great War · 2021-10-20T13:05:18.688Z · LW · GW

I mean, even with your image it seems to me that the earlier movie was more fond of the human form, while the later one has a more weird/grotesque view of it. You can say beauty is subjective, but that view itself feels to me like part of the decline. Gaudi thought beauty was a specific and analyzable aspect of nature (curved lines and so on), and his buildings are my favorite places in the world.

Comment by cousin_it on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T12:54:00.919Z · LW · GW

I used to think the ability to have deep conversations is an indicator of how "alive" a person is, but now I think that view is wrong. It's better to look at what the person has done and is doing. Surprisingly there's little correlation: I often come across people who are very measured in conversation, but turn out to have amazing skills and do amazing things.

Comment by cousin_it on Lies, Damn Lies, and Fabricated Options · 2021-10-20T11:31:43.924Z · LW · GW

The whole price gouging conversation is societal bike shedding. Modern governments can and do provide emergency aid almost in real time as disasters happen.

A few weeks ago I said the same thing:

About price gouging, I’m not sure this is even the right question. Disaster recovery is the perfect situation where planned economy beats market: there’s a known need, affecting a known set of people equally, and the government has tax money specifically for this need.

And yeah, I also wish the "bikeshed" of price limits stopped being discussed, made into law, etc.

Comment by cousin_it on Lies, Damn Lies, and Fabricated Options · 2021-10-20T09:07:43.496Z · LW · GW

In the first part, you're saying that price limits have the same intent as withdrawal limits, rationing and so on: to prevent panic and speculation. That's true, but it doesn't matter if the result of price limits is different: empty shelves and not much else. That's what the econ folks have been saying.

Comment by cousin_it on Open & Welcome Thread October 2021 · 2021-10-20T07:09:30.936Z · LW · GW

I think Eliezer's original analogy (which may or may not be right, but is a fun thing to think about mathematically) was more like "compound interest folded on itself". Imagine you're a researcher making progress at a fixed rate, improving computers by 10% per year. That's modeled well by compound interest, since every year there's a larger number to increase by 10%, and it gives you an ordinary exponential curve. But now make an extra twist: imagine the computing advances are speeding up your research as well, maybe because your mind is running on a computer, or because of some less exotic effects. So the first 10% improvement happens after a year, the next after 11 months, and so on. This may not be obvious, but it changes the picture qualitatively: it gives not just a faster exponential, but a curve which has a vertical asymptote, going to infinity in finite time. The reason is that the descending geometric progression - a year, plus 11 months, and so on - adds up to a finite amount of time, in the same way that 1+1/2+1/4... adds up to a finite amount.
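Here's a toy version of that asymptote with made-up numbers (10% improvement per step, with each improvement also making research 1.1x faster); a sketch, not a forecast:

```python
# Ordinary compounding: each 10% improvement takes a fixed year -> exponential.
# Folded compounding: each 10% improvement also speeds up research by 1.1x,
# so the step times form the geometric series 1 + 1/1.1 + 1/1.1^2 + ...
step_time = 1.0   # years for the first 10% improvement
speedup = 1.1     # each improvement makes the next one 1.1x faster
t = 0.0
for _ in range(200):
    t += step_time
    step_time /= speedup
print(t)                    # creeps up toward 11 years but never passes it
print(1 / (1 - 1/speedup))  # closed form of the series: 11.0 (up to float rounding)
```

Infinitely many improvements fit before the 11-year mark, which is the "vertical asymptote".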

Of course there's no infinity in real life, but the point is that a situation where research makes research faster could be even more unstable ("gradual and then sudden") than ordinary compound interest, which we already have trouble understanding intuitively.

Comment by cousin_it on In the shadow of the Great War · 2021-10-19T01:07:02.230Z · LW · GW

WWI also destroyed belief in beauty: the Belle Epoque, Art Nouveau, Romantic music, almost all my favorite beautiful things ended then. The US, being mostly untouched, carried the torch a little longer: Disney's Sleeping Beauty in 1959 was probably the peak. But today it's well into the shadow too, compare this image to this one.

Comment by cousin_it on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-17T22:50:22.373Z · LW · GW

Maybe offtopic, but the "trying too hard to try" part rings very true to me. Been on both sides of it.

The tricky thing about work, I'm realizing more and more, is that you should just work. That's the whole secret. If instead you start thinking about how difficult the work is, or how important it is to the world, or how you need some self-improvement before you can do the work effectively, these thoughts will slow you down, and surprisingly often they'll also be completely wrong. It always turns out later that your best work wasn't the one that took the most effort, or felt the most important at the time; you were just having a nose-down busy period, doing a bunch of things, and only the passage of time made clear which of them mattered.

Comment by cousin_it on Book Review: Denial of Death · 2021-10-17T16:37:34.708Z · LW · GW

Setting up comfortable limitations might be partly explained by self-handicapping:

So they chose the harmful drug as an excuse: “Oh, I would have passed the test, only the drug was making me stupid.” As the study points out, this is a win-win situation: if they fail, the drug excuses their failure, and if they succeed it’s doubly impressive that they passed even with a handicap.

Comment by cousin_it on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-17T12:54:36.296Z · LW · GW

Maybe at Google or some other corporation you'd have a more pleasant time, because many employees view it as "just putting food on the table", which stabilizes things. It has some bureaucratic and Machiavellian stuff for sure, but to me it feels less psychologically pressuring than having everything be about the mission all the time.

Just for disclosure, I was a MIRI research associate for a short time, long ago, remotely, and the experience mostly just passed me by. I only remember lots of email threads about AI strategy, nothing about psychology. There was some talk about having secret research, but when joining I said that I wouldn't work on anything secret, so all my math / decision theory stuff is public on LW.

Comment by cousin_it on Book Review: Denial of Death · 2021-10-17T11:41:23.859Z · LW · GW

Such a strange hypothesis: we need a "beyond", so we chase a good career, a good spouse and nice kids, success in art and other pursuits. Don't these things bring their own rewards, like physical comfort and social status and nurturing instinct and so on? Do we really need an extra philosophical reason to chase them?

It seems simpler to consider these mundane rewards as primary, and when someone is deprived of them by some equally mundane circumstance, that person is likely to go looking for a "beyond". They feel that one side of the explore-exploit tradeoff isn't working out for them, so they switch to the other side for a while. Hoffer makes the same point: mass movements recruit from those who are materially discontent. There seems to be no reason to bring Freudian mortality fears into any of this.

Comment by cousin_it on Zoe Curzi's Experience with Leverage Research · 2021-10-13T08:59:36.344Z · LW · GW

Good stuff. Very similar to DeMille's interview about Hubbard. As an aside, I love how the post rejects the usual positive language about "openness to experience" and calls the trait what it is: openness to influence.