Posts

Transitive Tolerance Means Intolerance 2021-08-14T17:52:26.849Z
Improvement for pundit prediction comparisons 2021-03-28T18:50:51.648Z
Developmental Stages of GPTs 2020-07-26T22:03:19.588Z
Don't Make Your Problems Hide 2020-06-27T20:24:53.408Z
Map Errors: The Good, The Bad, and The Territory 2020-06-27T05:22:58.674Z
Negotiating With Yourself 2020-06-26T23:55:15.638Z
[Link] COVID-19 causing deadly blood clots in younger people 2020-04-27T21:02:32.945Z
Choosing the Zero Point 2020-04-06T23:44:02.083Z
The Real Standard 2020-03-30T03:09:02.607Z
Adding Up To Normality 2020-03-24T21:53:03.339Z
Does the 14-month vaccine safety test make sense for COVID-19? 2020-03-18T18:41:24.582Z
Rationalists, Post-Rationalists, And Rationalist-Adjacents 2020-03-13T20:25:52.670Z
AlphaStar: Impressive for RL progress, not for AGI progress 2019-11-02T01:50:27.208Z
orthonormal's Shortform 2019-10-31T05:24:47.692Z
Fuzzy Boundaries, Real Concepts 2018-05-07T03:39:33.033Z
Roleplaying As Yourself 2018-01-06T06:48:03.510Z
The Loudest Alarm Is Probably False 2018-01-02T16:38:05.748Z
Value Learning for Irrational Toy Models 2017-05-15T20:55:05.000Z
HCH as a measure of manipulation 2017-03-11T03:02:53.000Z
Censoring out-of-domain representations 2017-02-01T04:09:51.000Z
Vector-Valued Reinforcement Learning 2016-11-01T00:21:55.000Z
Cooperative Inverse Reinforcement Learning vs. Irrational Human Preferences 2016-06-18T00:55:10.000Z
Proof Length and Logical Counterfactuals Revisited 2016-02-10T18:56:38.000Z
Obstacle to modal optimality when you're being modalized 2015-08-29T20:41:59.000Z
A simple model of the Löbstacle 2015-06-11T16:23:22.000Z
Agent Simulates Predictor using Second-Level Oracles 2015-06-06T22:08:37.000Z
Agents that can predict their Newcomb predictor 2015-05-19T10:17:08.000Z
Modal Bargaining Agents 2015-04-16T22:19:03.000Z
[Clearing out my Drafts folder] Rationality and Decision Theory Curriculum Idea 2015-03-23T22:54:51.241Z
An Introduction to Löb's Theorem in MIRI Research 2015-03-23T22:22:26.908Z
Welcome, new contributors! 2015-03-23T21:53:20.000Z
A toy model of a corrigibility problem 2015-03-22T19:33:02.000Z
New forum for MIRI research: Intelligent Agent Foundations Forum 2015-03-20T00:35:07.071Z
Forum Digest: Updateless Decision Theory 2015-03-20T00:22:06.000Z
Meta- the goals of this forum 2015-03-10T20:16:47.000Z
Proposal: Modeling goal stability in machine learning 2015-03-03T01:31:36.000Z
An Introduction to Löb's Theorem in MIRI Research 2015-01-22T20:35:50.000Z
Robust Cooperation in the Prisoner's Dilemma 2013-06-07T08:30:25.557Z
Compromise: Send Meta Discussions to the Unofficial LessWrong Subreddit 2013-04-23T01:37:31.762Z
Welcome to Less Wrong! (5th thread, March 2013) 2013-04-01T16:19:17.933Z
Robin Hanson's Cryonics Hour 2013-03-29T17:20:23.897Z
Does My Vote Matter? 2012-11-05T01:23:52.009Z
Decision Theories, Part 3.75: Hang On, I Think This Works After All 2012-09-06T16:23:37.670Z
Decision Theories, Part 3.5: Halt, Melt and Catch Fire 2012-08-26T22:40:20.388Z
Posts I'd Like To Write (Includes Poll) 2012-05-26T21:25:31.019Z
Timeless physics breaks T-Rex's mind [LINK] 2012-04-23T19:16:07.064Z
Decision Theories: A Semi-Formal Analysis, Part III 2012-04-14T19:34:38.716Z
Decision Theories: A Semi-Formal Analysis, Part II 2012-04-06T18:59:35.787Z
Decision Theories: A Semi-Formal Analysis, Part I 2012-03-24T16:01:33.295Z
Suggestions for naming a class of decision theories 2012-03-17T17:22:54.160Z

Comments

Comment by orthonormal on The Strangest Thing An AI Could Tell You · 2021-09-14T03:13:51.694Z · LW · GW

It's something that the AI has got to make you understand.

Comment by orthonormal on Humanity is Winning the Fight Against Infectious Disease · 2021-08-30T18:58:30.043Z · LW · GW
  1. Overall mortality and morbidity rates don't lie. You can't do enough creative accounting to hide vast amounts of infectious disease mortality within a much longer healthy lifespan. 

    (And yes, it's healthier on average, even counting obesity. Painful disability was more common in the past.)
  2. The nice thing about sequencing is that eventually it'll be feasible to take a slice of your tissue and identify everything that's not you. Easier to make progress at that point.
  3. Replacing your cells with nanotech cells that bacteria/viruses/prions/etc can't crack, and which have very solid checksum/error-correcting codes to prevent things like nanotech cancer... you're safe against any non-intentionally-designed attack. (To say nothing of the abstraction layers possible with uploading.) 

    "Normal" transhumanist technologies aren't perfect, but they are barely epsilon-susceptible to natural infectious diseases.
Comment by orthonormal on What 2026 looks like (Daniel's Median Future) · 2021-08-15T17:09:06.595Z · LW · GW

Obfuscation might be feasible, yeah. Though unless you can take down / modify the Wayback Machine and all other mirrors, you're still accountable retroactively.

Comment by orthonormal on Transitive Tolerance Means Intolerance · 2021-08-15T17:07:03.912Z · LW · GW

You can choose to draw your bounds of tolerance as broadly as you like!

On a prescriptive level, I'm offering a coherent alternative that's between "tolerate everybody" and "tolerate nobody".

On a descriptive level, I'm pointing out why you encounter consequences when you damn the torpedoes and try to personally tolerate every fringe believer.

Comment by orthonormal on Transitive Tolerance Means Intolerance · 2021-08-15T17:02:07.741Z · LW · GW

I like Scott Alexander's discussion of symmetric vs asymmetric weapons. Symmetric weapons lead to an unceasing battle, which as you said has at least become less directly violent, but whose outcomes are more or less a random walk. But asymmetric weapons pull ever so slightly toward, well, a weakly extrapolated volition of the players on both sides.

Brownian motion plus a small term looks just like Brownian motion until you look a long ways back and notice the unlikeliness of the trend. The arc of the moral universe is long, etc.
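
To make that concrete, here's a toy simulation (my own illustration, not anything from the posts; the 0.01 drift and the step counts are arbitrary). The drift's contribution grows like t while the noise grows like sqrt(t), so the drifted walk is indistinguishable from the plain one over short horizons but dominates over long ones.

```python
# Toy illustration (made-up numbers): Brownian motion plus a small drift looks
# like plain Brownian motion over short horizons, but the drift dominates once
# you look back far enough (drift grows ~t, noise grows ~sqrt(t)).

import random

def walk(steps, drift=0.0, rng=random):
    """Sum of `steps` unit-variance Gaussian increments, each with a constant drift."""
    return sum(rng.gauss(drift, 1.0) for _ in range(steps))

if __name__ == "__main__":
    random.seed(0)
    for steps in (100, 10_000, 1_000_000):
        plain = walk(steps)
        drifted = walk(steps, drift=0.01)  # small asymmetric pull each step
        print(f"{steps:>9} steps: no drift {plain:10.1f}   drift 0.01 {drifted:10.1f}")
```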

(Of course, in this century we very probably don't have the luxury of a long arc...)

Comment by orthonormal on What 2026 looks like (Daniel's Median Future) · 2021-08-14T21:00:41.924Z · LW · GW

I'd additionally expect the death of pseudonymity on the Internet, as AIs will find it easy to detect similar writing style and correlated posting behavior.  What at present takes detective work will in the future be cheaply automated, and we will finally be completely in Zuckerberg's desired world where nobody can maintain a second identity online.

Oh, and this is going to be retroactive, so be ready for the consequences of everything you've ever said online.
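
As a toy sketch of how cheap the crude version of this matching already is (my own illustration; the snippets and the scikit-learn feature choices are made up for the example, and real de-anonymization would use far stronger models plus posting-time correlations):

```python
# Toy stylometry sketch (illustrative only): score how similar two accounts'
# writing styles are using character n-gram TF-IDF and cosine similarity.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical posts: two accounts suspected to be the same person, plus a control.
account_a = "Honestly, I think the incentives here are just misaligned, full stop."
account_b = "Honestly, the incentives are misaligned here - I just don't see it working."
control   = "New benchmark results are up; see the repo for reproduction scripts."

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = vec.fit_transform([account_a, account_b, control])

print("a vs b:      ", cosine_similarity(X[0], X[1])[0, 0])
print("a vs control:", cosine_similarity(X[0], X[2])[0, 0])
```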

Comment by orthonormal on Analysis of World Records in Speedrunning [LINKPOST] · 2021-08-08T18:47:19.059Z · LW · GW

Ah yes, the bottle glitch...

Comment by orthonormal on My Marriage Vows · 2021-07-23T22:58:37.931Z · LW · GW

I have a hard time trusting any mere humans to think straight on the decision theory of divorce; the stakes are so high that emotions come to the fore.

There must be conditions, even conditions short of abuse, where unilateral exit is allowed regardless of whether the other thinks that is a mistake. The conditions are a safety valve for motivated thinking. They can be things like "if you're miserable, having more fights than intimacy, have tried couples therapy for at least 6 months, stayed apart for a month and felt better alone, then you can divorce if you want".

Obviously that would be clunky in the vows, so there may be a lower-entropy way of saying that this marriage has some unlikely conditions for exit as well as voice.

(If you don't have this, you risk "one spouse trying to convince the other, unwilling, spouse to accept a divorce", which is pretty damn bad.)

Comment by orthonormal on My Marriage Vows · 2021-07-21T17:55:14.356Z · LW · GW

The Vows can be unmade by dissolving the marriage, but the act of dissolving the marriage is in itself subject to the Vow of Concord, which limits the ability to dissolve it unilaterally.

My ex and I included a more informal version of this in our own vows, and it was the only vow I ever broke. You cannot exclude from possibility a situation where the marriage is unhealthy, one spouse is suffering, and the other cannot bear the idea of letting go.

My ex was desperate to make things work, and I was trying with all my might, but there was no progress on the problems that blew up as soon as we moved in together. The first highly recommended couples therapist couldn't help us after over a year, then the next one threw up her hands and said she didn't think she could help any more.

Could I have convinced my ex to agree to a divorce while still living together? It seemed impossible to me - my guilt, and my fear of letting loved ones down, would drag me back.

(I'm in so much healthier a place, by the way, and my ex now seems to be happier as well. Our divorce was amicable after the first few weeks.)

I don't know what the more human-safe version of this clause would be, but it's not this. Unilateral exit should be a difficult option—to be done only in great emergencies or after a large amount of effort has been expended—but please don't take it off the table.

Comment by orthonormal on [Link] Musk's non-missing mood · 2021-07-13T02:07:06.551Z · LW · GW

One of his main steps was founding OpenAI, whose charter looks like a questionable decision now from an AI Safety standpoint (as they push capabilities of language models and reinforcement learning forward, while driving their original safety team away) and looked fishy to me even at the time (simply because more initiatives make coordination harder).

I agree that Musk takes AI risk seriously, and I understand the "try something" mentality. But I suspect he founded OpenAI because he didn't trust a safety project he didn't have his hands on himself; then later he realized OpenAI wasn't working as he hoped, so he drifted away to focus on Neuralink.

Comment by orthonormal on The Loudest Alarm Is Probably False · 2021-07-09T03:55:19.037Z · LW · GW

That sounds pretty rough.

This is harsh and may be completely off the mark, but I was trying to call attention especially to alarms where those close to you disagree. If friends and family agree that you're not social enough, then that's probably a true alarm that you're facing.

Comment by orthonormal on Covid 6/24: The Spanish Prisoner · 2021-06-24T18:20:27.388Z · LW · GW

Ehhh, in 2019 McAfee wasn't in prison at all. I don't expect him in particular to have consistent enough desires over time for that old promise to bind him, and "just after your extradition is approved" is a pretty understandable time to commit suicide. (A fake suicide would be just about as effective at any point in time, given that I don't expect most people to update on the argument above.)

Plausible he was killed? Sure. But significantly less probable than Epstein for several reasons, including that the latter presumably had massive dirt that could come out soon. I don't think many powerful people were trusting McAfee with their secrets.

Comment by orthonormal on The Apprentice Thread · 2021-06-23T20:17:27.259Z · LW · GW

[MENTOR] Machine learning. I do it for work, but I'm a bit behind the frontier and this would motivate me to catch up. You should already know how to code in Python, and have taken multivariable calculus and linear algebra.

Comment by orthonormal on Covid 5/27: The Final Countdown · 2021-05-28T18:12:02.907Z · LW · GW

A political analysis asks why Biden has high ratings on the pandemic. It makes no mention of any of Biden’s policies, decisions, actions or statements, because it turns out none of that matters whatsoever. Presumably there’s a point at which something would matter, but we are still waiting to prove that via example.

I think this is too cynical.

FiveThirtyEight is analyzing a bunch of existing polls. I expect that none of them asked questions more specific than overall assessment of how Biden was doing on COVID-19. If someone did do a poll asking more specific subquestions - does he do messaging better, do they credit him with beating his vaccine rollout target, etc - you'd probably see some details emerge.

(His messaging hasn't been ideal, of course, but it's raised the bar from the last guy. And his rollout speed was pretty good, but he also managed expectations smartly. Etc.)

Of course, the biggest reason that people rate him highly is just that, well, things are going Back To Normal and that means he's doing a good job on it. That's not a super sophisticated analysis on their part, but there are worse things for low-info voters to do than to support the ruling party when things go well and oppose it when things go badly, without trying to divine the causality.

Comment by orthonormal on The Loudest Alarm Is Probably False · 2021-05-04T00:25:17.127Z · LW · GW

Interesting! Different experiences.

I do want to make it clear that people who are X often acknowledge that they are X, but don't intensely worry about it. E.g. a friend who knows he's abrasive, knows his life would be better if he were less abrasive on the margin, but doesn't have the emotional reaction "oh god, am I being abrasive?" in the middle of social interactions.

Comment by orthonormal on Applied Picoeconomics · 2021-03-29T19:00:12.807Z · LW · GW

On the other hand, I had undiagnosed (and accordingly untreated) bipolar type 2 at the time of that comment, so my results are not generalizable. My hypomanic self wrote checks that my depressive self couldn't cash.

Comment by orthonormal on Improvement for pundit prediction comparisons · 2021-03-28T22:28:31.255Z · LW · GW

That's a great point. [Getting more pundits to make predictions at all] is much more valuable than [more accurately comparing pundits who do make predictions] right now, to such an extent that I now doubt whether my idea was worthwhile.

Comment by orthonormal on Covid 3/12: New CDC Guidelines Available · 2021-03-12T18:52:42.476Z · LW · GW

Meanwhile, Biden continues to double down on underpromising to maximize the chances of being able to claim overdelivery on all fronts.

Besides the incentives (cf. the Scotty Factor), it's an important safety valve against the Planning Fallacy.

Comment by orthonormal on Adding Up To Normality · 2021-02-15T17:57:28.277Z · LW · GW

I have to disagree with you there.  Thanks to my friends' knowledge, I stopped my parents from taking a cross-country flight in early March, before much of the media reported that there was any real danger in doing so. You can't wave off the value of truly thinking through things.

But don't confuse "my model is changing" with "the world is changing", even when both are happening simultaneously. That's my point.

Comment by orthonormal on Your Cheerful Price · 2021-02-13T21:49:51.923Z · LW · GW

One problem: a high price can put more stress on a person, and raising the price further won't fix that!

For instance, say that you leave a fic half-finished, and someone offers a million dollars to MIRI iff you finish it. Would you actually feel cheerful and motivated, or might you feel stressed and avoidant and guilty about being slow, and have a painful experience in actually writing it?

(If you've personally mastered your relevant feelings, I think you'd still agree that many people haven't.)

I don't know what to do in that case.

Comment by orthonormal on Understanding “Deep Double Descent” · 2021-01-13T04:18:38.664Z · LW · GW

If this post is selected, I'd like to see the followup made into an addendum—I think it adds a very important piece, and it should have been nominated itself.

Comment by orthonormal on What failure looks like · 2021-01-13T04:07:51.596Z · LW · GW

I think this post (and similarly, Evan's summary of Chris Olah's views) are essential both in their own right and as mutual foils to MIRI's research agenda. We see related concepts (mesa-optimization originally came out of Paul's talk of daemons in Solomonoff induction, if I remember right) but very different strategies for achieving both inner and outer alignment. (The crux of the disagreement seems to be the probability of success from adapting current methods.)

Strongly recommended for inclusion.

Comment by orthonormal on Soft takeoff can still lead to decisive strategic advantage · 2021-01-13T03:57:34.236Z · LW · GW

It's hard to know how to judge a post that deems itself superseded by a post from a later year, but I lean toward taking Daniel at his word and hoping we survive until the 2021 Review comes around.

Comment by orthonormal on Rationality, Levels of Intervention, and Empiricism · 2021-01-13T03:52:34.426Z · LW · GW

I can't think of a question on which this post narrows my probability distribution.

Not recommended.

Comment by orthonormal on Chris Olah’s views on AGI safety · 2021-01-13T03:19:36.049Z · LW · GW

The content here is very valuable, even if the genre of "I talked a lot with X and here's my articulation of X's model" comes across to me as a weird kind of intellectual ghostwriting. I can't think of a way around that, though.

Comment by orthonormal on AlphaStar: Impressive for RL progress, not for AGI progress · 2021-01-12T17:40:18.083Z · LW · GW

That being said, I'm not very confident this piece (or any piece on the current state of AI) will still be timely a year from now, so maybe I shouldn't recommend it for inclusion after all.

Comment by orthonormal on Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary · 2021-01-12T15:09:42.387Z · LW · GW

Ironically enough for Zack's preferred modality, you're asserting that even though this post is reasonable when decoupled from the rest of the sequence, it's worrisome when contextualized.

Comment by orthonormal on The AI Timelines Scam · 2021-01-10T22:07:51.931Z · LW · GW

I agree about the effects of deep learning hype on deep learning funding, though I think very little of it has been AGI hype; people at the top level had been heavily conditioned to believe we were/are still in the AI winter of specialized ML algorithms to solve individual tasks. (The MIRI-sphere had to work very hard, before OpenAI and DeepMind started doing externally impressive things, to get serious discussion on within-lifetime timelines from anyone besides the Kurzweil camp.)

Maybe Demis was strategically overselling DeepMind, but I expect most people were genuinely over-optimistic (and funding-seeking) in the way everyone in ML always is.

Comment by orthonormal on You Have About Five Words · 2021-01-10T05:51:11.810Z · LW · GW

This is a retroactively obvious concept that I'd never seen so clearly stated before, which makes it a fantastic contribution to our repertoire of ideas. I've even used it to sanity-check my statements on social media. Well, I've tried.

Recommended, obviously.

Comment by orthonormal on The Parable of Predict-O-Matic · 2021-01-10T05:49:06.141Z · LW · GW

This reminds me of That Alien Message, but as a parable about mesa-alignment rather than outer alignment. It reads well, and helps make the concepts more salient. Recommended.

Comment by orthonormal on Rule Thinkers In, Not Out · 2021-01-10T05:30:51.098Z · LW · GW

This makes a simple and valuable point. As discussed in and below Anna's comment, it's very different when applied to a person who can interact with you directly versus a person whose works you read. But the usefulness in the latter context, and the way I expect new readers to assume that context, leads me to recommend it.

Comment by orthonormal on The AI Timelines Scam · 2021-01-10T05:17:29.453Z · LW · GW

I liked the comments on this post more than I liked the post itself. As Paul commented, there's as much criticism of short AGI timelines as there is of long AGI timelines; and as Scott pointed out, this was an uncharitable take on AI proponents' motives.

Without the context of those comments, I don't recommend this post for inclusion.

Comment by orthonormal on Yes Requires the Possibility of No · 2021-01-08T06:36:46.286Z · LW · GW

I've referred and linked to this post in discussions outside the rationalist community; that's how important the principle is. (Many people understand the idea in the domain of consent, but have never thought about it in the domain of epistemology.)

Recommended.

Comment by orthonormal on Book summary: Unlocking the Emotional Brain · 2021-01-08T06:30:43.519Z · LW · GW

As mentioned in my comment, this book review overcame some skepticism from me and explained a new mental model about how inner conflict works. Plus, it was written with Kaj's usual clarity and humility. Recommended.

Comment by orthonormal on AlphaStar: Impressive for RL progress, not for AGI progress · 2021-01-08T06:18:05.739Z · LW · GW

I stand by this piece, and I now think it makes a nice complement to discussions of GPT-3. In both cases, we have significant improvements in chunking of concepts into latent spaces, but we don't appear to have anything like a causal model in either. And I've believed for several years that causal reasoning is the thing that puts us in the endgame.

(That's not to say either system would still be safe if scaled up massively; mesa-optimization would be a reason to worry.)

Comment by orthonormal on Book summary: Unlocking the Emotional Brain · 2021-01-08T06:01:01.233Z · LW · GW

I never found a Coherence Therapy practitioner, but I found a really excellent IFS practitioner who's helped me break down many of my perpetual hangups in ways compatible with this post.

In particular, one difference from the self-IFS I'd attempted before is that I'd previously tried to destroy some parts as irrational or hypocritical, whereas the therapist was very good at being non-judgmental towards them. That approach paid better dividends.

Comment by orthonormal on What evidence will tell us about the new strain? How are you updating? · 2020-12-27T00:53:22.546Z · LW · GW

Can't update on #4. Of course a rapidly growing new strain will have a negligible impact on total numbers early on; it's a question of whether it will dominate the total numbers in a few months.

Comment by orthonormal on Radical Probabilism · 2020-12-20T18:34:19.650Z · LW · GW

Remind me which bookies count and which don't, in the context of the proofs of properties?

If any computable bookie is allowed, a non-Bayesian is in trouble against a much larger bookie who can just (maybe through its own logical induction) discover who the bettor is and how to exploit them.

[EDIT: First version of this comment included "why do convergence bettors count if they don't know the bettor will oscillate", but then I realized the answer while Abram was composing his response, so I edited that part out. Editing it back in so that Abram's reply has context.]

Comment by orthonormal on Covid 12/10: Vaccine Approval Day in America · 2020-12-16T17:30:14.299Z · LW · GW

Not the most important thing, but Adler and Colbert's situations feel rather different to me.

Colbert is bubbled with a small team in order to provide mass entertainment to the nation... just like sports teams, which you endorse.

Adler is partying for his own benefit.

Comment by orthonormal on PredictIt: Presidential Market is Increasingly Wrong · 2020-10-25T20:37:39.147Z · LW · GW

To steelman the odds' consistency (though I agree with you that the market isn't really reflecting careful thinking from enough people), Biden is farther ahead in the 538 projection now than he was before, but on the other hand, Trump has completely gotten away with refusing to commit to a peaceful transfer of power. Even if that's not the most surprising thing in the world (how far indeed we have fallen), it wasn't at 100% two months ago.

Comment by orthonormal on Decoherence is Falsifiable and Testable · 2020-09-16T22:51:12.583Z · LW · GW

There's certainly a tradeoff involved in using a disputed example as your first illustration of a general concept (here, Bayesian reasoning vs the Traditional Scientific Method).

Comment by orthonormal on A Technical Explanation of Technical Explanation · 2020-09-16T22:49:14.503Z · LW · GW

I can't help but think of Scott Alexander's long posts, where usually there's a division of topics between roman-numeraled sections, but sometimes it seems like it's just "oh, it's been too long since the last one, got to break it up somehow". I do think this really helps with readability; it reminds the reader to take a breath, in some sense.

Or like, taking something that works together as a self-contained thought but is too long to serve the function of a paragraph, and just splitting it by adding a superficially segue-like sentence at the start of the second part.

Comment by orthonormal on A Technical Explanation of Technical Explanation · 2020-09-15T06:31:16.211Z · LW · GW

It may not be possible to cleanly divide the Technical Explanation into multiple posts that each stand on their own, but even separating it awkwardly into several chapters would make it less intimidating and invite more comments.

(I think this may be the longest post in the Sequences.)

Comment by orthonormal on My Childhood Role Model · 2020-09-15T06:06:45.448Z · LW · GW

I forget if I've said this elsewhere, but we should expect human intelligence to be just a bit above the bare minimum required to result in technological advancement. Otherwise, our ancestors would have been where we are now.

(Just a bit above, because there was the nice little overhang of cultural transmission: once the hardware got good enough, the software could be transmitted way more effectively between people and across generations. So we're quite a bit more intelligent than our basically anatomically equivalent ancestors of 500,000 years ago. But not as big a gap as the gap from that ancestor to our last common ancestor with chimps, 6-7 million years ago.)

Comment by orthonormal on Why haven't we celebrated any major achievements lately? · 2020-09-12T17:34:49.935Z · LW · GW

Additional hypothesis: everything is becoming more political than it has been since the Civil War, to the extent that any celebration of a new piece of construction/infrastructure/technology would also be protested. (I would even agree with the protesters in many cases! Adding more automobile infrastructure to cities is really bad!)

The only things today [where there's common knowledge that the demonstration will swamp any counter-demonstration] are major local sports achievements.

(I notice that my model is confused in the case of John Glenn's final spaceflight. NASA achievements would normally be nonpartisan, but Glenn was a sitting Democratic Senator at the time of the mission! I guess they figured that in heavily Democratic NYC, not enough Republicans would dare to make a stink.)

Comment by orthonormal on Decoherence is Falsifiable and Testable · 2020-09-11T23:31:12.667Z · LW · GW

Eliezer's mistake here was that he didn't, before the QM sequence, write a general post to the effect that you don't have an additional Bayesian burden of proof if your theory was proposed chronologically later. Given such a reference, it would have been a lot simpler to refer to that concept without it seeming like special pleading here.

Comment by orthonormal on 2020's Prediction Thread · 2020-09-09T00:42:35.134Z · LW · GW

It's not explicit. Like I said, the terms are highly dependent in reality, but for intuition you can think of a series of variables $X_k$ for $k$ from $1$ to $N$, where $X_k$ equals $2^k$ with probability $2^{-k}$ (and $0$ otherwise). And think of $N$ as pretty large.

So most of the time, the sum of these is dominated by a lot of terms with small contributions. But every now and then, a big one hits and there's a huge spike.

(I haven't thought very much about what functions of $k$ and $N$ I'd actually use if I were making a principled model; $2^k$ and $2^{-k}$ are just there for illustrative purposes, such that the sum is expected to have many small terms most of the time and some very large terms occasionally.)
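
A minimal simulation of that toy model (my own sketch; the $2^k$ / $2^{-k}$ choice is just the illustrative one above, and `draw_total` is a hypothetical helper name):

```python
# Toy model: X_k equals 2**k with probability 2**(-k), else 0, for k = 1..N.
# Most draws of the sum are dominated by many small terms; occasionally a
# large-k term fires and the total spikes.

import random

def draw_total(n_terms=30, rng=random):
    """One draw of the sum of the X_k."""
    total = 0
    for k in range(1, n_terms + 1):
        if rng.random() < 2 ** (-k):   # term k fires with probability 2^-k
            total += 2 ** k            # and contributes 2^k when it does
    return total

if __name__ == "__main__":
    random.seed(0)
    draws = sorted(draw_total() for _ in range(1000))
    print("median:", draws[len(draws) // 2])   # typically modest
    print("max:   ", draws[-1])                # occasionally a huge spike
```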

Comment by orthonormal on 2020's Prediction Thread · 2020-09-08T15:50:18.938Z · LW · GW

No. My model is the sum of a bunch of random variables for possible conflicts (these variables are not independent of each other), where there are a few potential global wars that would cause millions or billions of deaths, and lots and lots of tiny wars each of which would add a few thousand deaths.

This model predicts a background rate of the sum of the smaller ones, and large spikes to the rate whenever a larger conflict happens. Accordingly, over the last three decades (with the tragic exception of the Rwandan genocide) total war deaths per year (combatants + civilians) have been between 18k and 132k (wow, the Syrian Civil War has been way worse than the Iraq War, I didn't realize that).

So my median is something like 1M people dying over the decade, because I view a major conflict as under 50% likely, and we could easily have a decade as peaceful (no, really) as the 2000s.

Comment by orthonormal on Frequently Asked Questions for Central Banks Undershooting Their Inflation Target · 2020-08-28T19:12:27.874Z · LW · GW

An improvement in this direction: the Fed has just acknowledged, at least, that it is possible for inflation to be too low as well as too high, that inflation targeting needs to acknowledge that the US has been consistently undershooting its goal, and that this leads to the further feedback of the market expecting the US to continue undershooting its goal. And then it explains and commits to average inflation targeting:

We have also made important changes with regard to the price-stability side of our mandate. Our longer-run goal continues to be an inflation rate of 2 percent. Our statement emphasizes that our actions to achieve both sides of our dual mandate will be most effective if longer-term inflation expectations remain well anchored at 2 percent. However, if inflation runs below 2 percent following economic downturns but never moves above 2 percent even when the economy is strong, then, over time, inflation will average less than 2 percent. Households and businesses will come to expect this result, meaning that inflation expectations would tend to move below our inflation goal and pull realized inflation down. To prevent this outcome and the adverse dynamics that could ensue, our new statement indicates that we will seek to achieve inflation that averages 2 percent over time. Therefore, following periods when inflation has been running below 2 percent, appropriate monetary policy will likely aim to achieve inflation moderately above 2 percent for some time.

Of course, this says nothing about how they intend to achieve this—seigniorage has its downsides—but I expect Eliezer would see it as good news.

Comment by orthonormal on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-14T02:59:02.142Z · LW · GW

The claim that came to my mind is that the conscious mind is the mesa-optimizer here, the original outer optimizer being a riderless elephant.