Posts

The Real Standard 2020-03-30T03:09:02.607Z · score: 8 (3 votes)
Adding Up To Normality 2020-03-24T21:53:03.339Z · score: 68 (28 votes)
Does the 14-month vaccine safety test make sense for COVID-19? 2020-03-18T18:41:24.582Z · score: 48 (17 votes)
Rationalists, Post-Rationalists, And Rationalist-Adjacents 2020-03-13T20:25:52.670Z · score: 72 (25 votes)
AlphaStar: Impressive for RL progress, not for AGI progress 2019-11-02T01:50:27.208Z · score: 100 (47 votes)
orthonormal's Shortform 2019-10-31T05:24:47.692Z · score: 9 (1 votes)
Fuzzy Boundaries, Real Concepts 2018-05-07T03:39:33.033Z · score: 62 (16 votes)
Roleplaying As Yourself 2018-01-06T06:48:03.510Z · score: 88 (33 votes)
The Loudest Alarm Is Probably False 2018-01-02T16:38:05.748Z · score: 181 (73 votes)
Value Learning for Irrational Toy Models 2017-05-15T20:55:05.000Z · score: 0 (0 votes)
HCH as a measure of manipulation 2017-03-11T03:02:53.000Z · score: 1 (1 votes)
Censoring out-of-domain representations 2017-02-01T04:09:51.000Z · score: 2 (2 votes)
Vector-Valued Reinforcement Learning 2016-11-01T00:21:55.000Z · score: 2 (2 votes)
Cooperative Inverse Reinforcement Learning vs. Irrational Human Preferences 2016-06-18T00:55:10.000Z · score: 2 (2 votes)
Proof Length and Logical Counterfactuals Revisited 2016-02-10T18:56:38.000Z · score: 3 (3 votes)
Obstacle to modal optimality when you're being modalized 2015-08-29T20:41:59.000Z · score: 3 (3 votes)
A simple model of the Löbstacle 2015-06-11T16:23:22.000Z · score: 2 (2 votes)
Agent Simulates Predictor using Second-Level Oracles 2015-06-06T22:08:37.000Z · score: 2 (2 votes)
Agents that can predict their Newcomb predictor 2015-05-19T10:17:08.000Z · score: 1 (1 votes)
Modal Bargaining Agents 2015-04-16T22:19:03.000Z · score: 3 (3 votes)
[Clearing out my Drafts folder] Rationality and Decision Theory Curriculum Idea 2015-03-23T22:54:51.241Z · score: 6 (7 votes)
An Introduction to Löb's Theorem in MIRI Research 2015-03-23T22:22:26.908Z · score: 16 (17 votes)
Welcome, new contributors! 2015-03-23T21:53:20.000Z · score: 4 (4 votes)
A toy model of a corrigibility problem 2015-03-22T19:33:02.000Z · score: 4 (4 votes)
New forum for MIRI research: Intelligent Agent Foundations Forum 2015-03-20T00:35:07.071Z · score: 36 (37 votes)
Forum Digest: Updateless Decision Theory 2015-03-20T00:22:06.000Z · score: 5 (5 votes)
Meta- the goals of this forum 2015-03-10T20:16:47.000Z · score: 3 (3 votes)
Proposal: Modeling goal stability in machine learning 2015-03-03T01:31:36.000Z · score: 1 (1 votes)
An Introduction to Löb's Theorem in MIRI Research 2015-01-22T20:35:50.000Z · score: 2 (2 votes)
Robust Cooperation in the Prisoner's Dilemma 2013-06-07T08:30:25.557Z · score: 73 (71 votes)
Compromise: Send Meta Discussions to the Unofficial LessWrong Subreddit 2013-04-23T01:37:31.762Z · score: -2 (18 votes)
Welcome to Less Wrong! (5th thread, March 2013) 2013-04-01T16:19:17.933Z · score: 27 (28 votes)
Robin Hanson's Cryonics Hour 2013-03-29T17:20:23.897Z · score: 29 (34 votes)
Does My Vote Matter? 2012-11-05T01:23:52.009Z · score: 19 (37 votes)
Decision Theories, Part 3.75: Hang On, I Think This Works After All 2012-09-06T16:23:37.670Z · score: 23 (24 votes)
Decision Theories, Part 3.5: Halt, Melt and Catch Fire 2012-08-26T22:40:20.388Z · score: 31 (32 votes)
Posts I'd Like To Write (Includes Poll) 2012-05-26T21:25:31.019Z · score: 14 (15 votes)
Timeless physics breaks T-Rex's mind [LINK] 2012-04-23T19:16:07.064Z · score: 22 (29 votes)
Decision Theories: A Semi-Formal Analysis, Part III 2012-04-14T19:34:38.716Z · score: 23 (28 votes)
Decision Theories: A Semi-Formal Analysis, Part II 2012-04-06T18:59:35.787Z · score: 16 (19 votes)
Decision Theories: A Semi-Formal Analysis, Part I 2012-03-24T16:01:33.295Z · score: 24 (26 votes)
Suggestions for naming a class of decision theories 2012-03-17T17:22:54.160Z · score: 5 (8 votes)
Decision Theories: A Less Wrong Primer 2012-03-13T23:31:51.795Z · score: 73 (77 votes)
Baconmas: The holiday for the sciences 2012-01-05T18:51:10.606Z · score: 5 (5 votes)
Advice Request: Baconmas Website 2012-01-01T19:25:40.308Z · score: 11 (11 votes)
[LINK] "Prediction Audits" for Nate Silver, Dave Weigel 2011-12-30T21:07:50.916Z · score: 12 (13 votes)
Welcome to Less Wrong! (2012) 2011-12-26T22:57:21.157Z · score: 26 (27 votes)
Improving My Writing Style 2011-10-11T16:14:40.907Z · score: 6 (9 votes)
Decision Theory Paradox: Answer Key 2011-09-05T23:13:33.256Z · score: 6 (6 votes)
Consequentialism Need Not Be Nearsighted 2011-09-02T07:37:08.154Z · score: 55 (55 votes)

Comments

Comment by orthonormal on "No evidence" as a Valley of Bad Rationality · 2020-03-29T06:13:18.778Z · score: 14 (7 votes) · LW · GW

In general, it's good to check your intuitions against evidence where possible (so, seek out experiments and treat experimentally validated hypotheses as much stronger than intuitions).

The valley being described here is the idea that you should just discard your intuitions in favor of the null hypothesis, not only when experiments have failed to reject the null hypothesis (and even then, they might simply be underpowered!), but even when no experiments have been done at all!

It's a generalized form of an isolated demand for rigor, where whatever gets defined as a null hypothesis gets a free pass, but anything else has to prove itself to a high standard. And that leads to really poor performance in domains where evidence is hard to come by (quickly enough), relative to trusting intuitive priors and weak evidence when that's all that's available.

Comment by orthonormal on mind viruses about body viruses · 2020-03-28T17:44:14.318Z · score: 4 (2 votes) · LW · GW

But we already filter more carefully than the reference class of smart Internet people; that's the point. cousin_it argues, and I agree, that this community may already be at the "filters too carefully in the face of the need for urgent updates" extreme. We did well by taking COVID-19 seriously before it was proven, and we could have done still better on that front.

Comment by orthonormal on Just Lose Hope Already · 2020-03-27T17:30:59.940Z · score: 2 (1 votes) · LW · GW

Sometimes. More often the hero just tries AGAIN, BUT HARDER.

Comment by orthonormal on Adding Up To Normality · 2020-03-27T02:05:48.169Z · score: 2 (1 votes) · LW · GW

Huzzah, convergence! I appreciate the points you've made.

Comment by orthonormal on Adding Up To Normality · 2020-03-26T21:29:14.209Z · score: 5 (3 votes) · LW · GW

Don't know if you saw, but I updated the post yesterday because of your (and khafra's) points.

Also, your caveat is a good reframe of the main mechanism behind the post.

I do still disagree with you somewhat, because I think that people going through a crisis of faith are prone to flailing around and taking naive actions that they would have reconsidered after a week or month of actually thinking through the implications of their new belief. Trying to maximize utility while making a major update is safe for ideal Bayesian reasoners, but it fails badly for actual humans.

In the absence of an external crisis, taking relatively safe actions (and few irreversible actions) is correct in the short term, and the status quo is going to be reasonably safe for most people if you've been living it for years. If you can back off from newly-suspected-wrong activities for the time being without doing so irreversibly, then yes, that's better.

Comment by orthonormal on Adding Up To Normality · 2020-03-26T02:40:36.416Z · score: 2 (1 votes) · LW · GW

Well, if you have a space program and you're dealing with crystal spheres...

Comment by orthonormal on Adding Up To Normality · 2020-03-25T17:51:20.236Z · score: 2 (1 votes) · LW · GW

I think khafra and Isnasene make good points about not applying this in cases where the plane shows signs of actually dropping and you're updating on that. (In this case, the signs would be watching people you respect tell you to start prepping immediately: act on the warning lights in the cockpit rather than waiting for the engines to fail.)

Comment by orthonormal on Adding Up To Normality · 2020-03-25T17:46:13.895Z · score: 2 (1 votes) · LW · GW

I agree that carefully landing the plane is better than maintaining the course if catastrophic outcomes suddenly seem more plausible than before.

Obviously it applies if you're the lead on a new technological project and suddenly realize a plausible catastrophic risk from it.

I don't think it applies very strongly in your example about animal welfare, unless the protagonist has unusually high leverage on a big decision about to be made. The cost of staying in the old job for a few weeks while thinking things over (especially if leaving and then coming back would be infeasible) is plausibly outweighed by the value of information thus gained.

Comment by orthonormal on Adding Up To Normality · 2020-03-25T17:40:13.089Z · score: 4 (2 votes) · LW · GW

I'd modify that, since panic can make you falsely put yourself in weird reference classes in the short run. It's more reliable IMO to ask whether anything has shifted massively in the external world at the same time as it's shifted in your model.

How about promising yourself to keep steering the plane mostly as normal while you think about lift, as long as the plane seems to be flying normally?

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-24T19:53:50.471Z · score: 6 (3 votes) · LW · GW

I wish I'd remembered to include this in the original post (and it feels wrong to slip it in now), but Scott Aaronson neatly paralleled my distinction between rationalists and post-rationalists when discussing interpretations of quantum mechanics:

But the basic split between Many-Worlds and Copenhagen (or better: between Many-Worlds and “shut-up-and-calculate” / “QM needs no interpretation” / etc.), I regard as coming from two fundamentally different conceptions of what a scientific theory is supposed to do for you.  Is it supposed to posit an objective state for the universe, or be only a tool that you use to organize your experiences?

Scott tries his best to give a not-answer and be done with it, which is in keeping with my categorization of him as a prominent rationalist-adjacent.

Comment by orthonormal on Breaking quarantine is negligence. Why are democracies acting like we can only ask nicely? · 2020-03-24T18:47:43.400Z · score: 5 (3 votes) · LW · GW

One unfortunate consequence of [the US still lacking enough tests to cover everyone] is that we rely on people to come forward and ask for testing. The natural effect of your proposal would be to deter some people from getting tested at all!

(And yes, the problem here isn't directly with your proposal. But still, incentives are hard to get right.)

Comment by orthonormal on Does the 14-month vaccine safety test make sense for COVID-19? · 2020-03-19T01:57:17.901Z · score: 9 (6 votes) · LW · GW

Here's a news article reporting a 14-month Phase 1 trial for the COVID-19 vaccine, and I've seen the "12-18 months until vaccine deployment" timeline from Dr. Fauci and the NIH in several sources.

Comment by orthonormal on Universal Fire · 2020-03-17T20:05:03.742Z · score: 7 (4 votes) · LW · GW

It's fun to re-read this after seeing how HPMOR tried to deal with this problem (and what parts it still had to sweep under the rug).

Comment by orthonormal on Good News: the Containment Measures are Working · 2020-03-17T18:50:37.271Z · score: 4 (2 votes) · LW · GW

Tests of antivirals and other treatments may come along sooner than a vaccine. That's something to hope for.

Comment by orthonormal on March 14/15th: Daily Coronavirus link updates · 2020-03-17T18:45:30.310Z · score: 2 (1 votes) · LW · GW

It doesn't look as if the dashboard includes the United States on many of the pages.

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-16T03:51:48.998Z · score: 8 (4 votes) · LW · GW

The next paragraph applies there: you can rectify it by saying it's a conflict between hypotheses / heuristics, even if you can't get solid evidence on which is more likely to be correct.

Cases where you notice an inconsistency are often juicy opportunities to become more accurate.

Comment by orthonormal on How Do You Convince Your Parents To Prep? To Quarantine? · 2020-03-16T03:49:03.029Z · score: 20 (10 votes) · LW · GW

Talking to them early and often is better than trying to convince them to make big changes in one conversation.

My parents are pretty reasonable, and were willing to postpone a flight to visit me. It helped that I'd told them in late February it was a thing to worry about, then told them again at the beginning of March that it was critical this time.

Unfortunately, it's not always enough. I can't convince my dad to skip his short business trip this week, and even though it looks like his last one for months (and he'll work from home, and my mom is retired), I'm very worried. [Edited to add: fortunately, his client cancelled the trip for him.]

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-15T17:41:10.467Z · score: 2 (1 votes) · LW · GW

I'm not sure our definitions are the same, but they're very highly correlated in my experience.

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-15T17:40:00.522Z · score: 6 (3 votes) · LW · GW

I agree with this comment.

The rationalist way to handle multiple frames is to treat them either as different useful heuristics which can outperform naively optimizing from your known map, or as different hypotheses about the correct general frame, rather than as tactical gambits in a disagreement.

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-15T02:00:04.320Z · score: 11 (4 votes) · LW · GW

I think it would be very connotatively wrong to use those. I really need to say "the kind of conversation where you can examine claims together, and both parties are playing fair and trying to raise their true objections and not moving the goalposts", and "double-crux" points at a subset of that. It doesn't literally have to be double-crux, but it would take a new definition in order to have a handle for that, and three definitions in one post is already kind of pushing it.

Any better ideas?

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-15T01:54:11.912Z · score: 2 (1 votes) · LW · GW

Like "non-Evangelical Protestant", a label can be useful even if it's defined as "member of this big cluster but not a member of this or that major subcluster". It can even have more unity on many features than the big cluster does.

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-15T01:41:28.421Z · score: 2 (1 votes) · LW · GW

Scott could do all those things and be a rationalist-adjacent. He's a rationalist under my typology because he shares the sincere yearning and striving for understanding all of the things in one modality, even if he is okay with the utility of sometimes spending time in other modalities. (Which he doesn't seem to do much, but he respects people who do; he just wants to understand what's happening with them.)

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-14T20:11:41.950Z · score: 7 (2 votes) · LW · GW

It does.

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-14T20:11:15.753Z · score: 4 (2 votes) · LW · GW

Yes, it does. The probabilistic part applies to different parts of my model as well as to outputs of a single model part.

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-14T20:09:41.823Z · score: 11 (4 votes) · LW · GW

There's a big difference between a person who says double-cruxing is a bad tool and they don't want to use it, and someone who agrees to it but then turns out not to actually be Doing The Thing.

And it's not that ability-to-double-crux is synonymous with rationality, just that it's the best proxy I could think of for what a typical frustrating interaction on this site is missing. Maybe I should specify that.

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-14T06:17:52.728Z · score: 16 (7 votes) · LW · GW

You have a prior on Congolese politics, which draws from causal nodes like "central Africa", "post-colonialism", and the like; the fact that your model is uncertain about it (until you look anything up or even try to recall relevant details) doesn't mean your model is mute about it. It's there even before you look at it, and there's been no need to put special effort into it before it was relevant to a question or decision that mattered to you.

I'm just saying that rationalists are trying to make one big map, with regions filled in at different rates (and we won't get around to everything), rather than trying to make separate map-isteria.

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-14T04:04:37.064Z · score: 6 (3 votes) · LW · GW

Re: your anecdote, I interpret that conversation as one between a person with a more naive view of how the world works and one with a more sophisticated understanding. Both people in such a conversation, or neither of them, could be rationalists under this framework.

Comment by orthonormal on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-14T00:10:08.348Z · score: 5 (3 votes) · LW · GW
I'm confused by the phrase "in the sense of this particular community" for a description that does not mention community.

I'm distinguishing this sense of rationalist from the philosophical school that long predated this community and has many significant differences from it. Can you suggest a better way to phrase my definition?

Comment by orthonormal on A practical out-of-the-box solution to slow down COVID-19: Turn up the heat · 2020-03-13T21:51:46.107Z · score: 4 (2 votes) · LW · GW

Unfortunately for this idea, the virus looks to be spreading in South America, and it's probably under-tested there (notably, Bolsonaro himself or at least members of his entourage have been infected).

Comment by orthonormal on Quadratic models and (un)falsified data · 2020-03-13T21:35:43.877Z · score: 9 (2 votes) · LW · GW

Yup! I took this hypothesis seriously when it was first linked on Scott's tumblr; then I found the numbers didn't match up as well as the Reddit post (which only showed a chart) suggested. That was a strong sign that the poster was lying, but I still checked the numbers for a few days until they had clearly diverged.

Comment by orthonormal on orthonormal's Shortform · 2020-03-13T04:05:11.179Z · score: 4 (2 votes) · LW · GW

[EDIT: found it. Extensional vs intensional.]

Eliezer wrote something about two types of definitions, one where you explain your criterion, and one where you point and say "things like that and that, but not that or that". I thought it was called intensive vs extensive definition, but I can't find the post I thought existed. Does anyone else remember this?

Comment by orthonormal on Coronavirus: Justified Practical Advice Thread · 2020-03-04T00:40:17.611Z · score: 4 (2 votes) · LW · GW

As leggi said, the USA has about 3 hospital beds per 1000 people, and utilization is about 67%.

As for why it's a good proxy... I couldn't think of a better one that's simple and objective. Can you?

Comment by orthonormal on Coronavirus: Justified Practical Advice Thread · 2020-03-02T23:59:28.072Z · score: 12 (8 votes) · LW · GW

Instead of a single peak moment, we want to think about "the time period during which medical supplies and services are overwhelmed with demand". And that starts, in my rough estimation, the moment all the hospital beds are full.

In the US, we have 3 hospital beds for every 1000 people, and 2 of them are occupied on average. So we're going to start having problems once 1 in 1000 people want to go to the hospital for coronavirus; if something like 10% of infections need hospitalization, that corresponds to an infection rate around 1%.

So that pushes the moment of great worry forward by quite a bit!

On the other side, it's hard to predict when supply will again overtake demand. Maybe governmental intervention comes through on a massive scale, maybe mass quarantine works, maybe the weather warms up and transmission declines. But I'm worried it will take months for any of those to happen after the crisis times begin.
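
Here's the same back-of-envelope arithmetic as a quick script. The bed and occupancy figures are the rough numbers from above; the 10% hospitalization rate is an assumption I'm adding only to make the implied step explicit:

```python
# Rough US hospital capacity estimate; all figures are approximate,
# and the hospitalization rate is an assumption, not an official number.
population = 330e6            # approximate US population
beds_per_1000 = 3             # hospital beds per 1000 people
occupancy = 2 / 3             # ~2 of every 3 beds already occupied

free_beds = population * beds_per_1000 / 1000 * (1 - occupancy)
hospitalization_rate = 0.10   # assumed share of infections needing a bed

infections_at_capacity = free_beds / hospitalization_rate
print(f"Free beds: {free_beds:,.0f}")                             # ~330,000
print(f"Infections at capacity: {infections_at_capacity:,.0f}")   # ~3,300,000
print(f"Share of population: {infections_at_capacity / population:.0%}")  # ~1%
```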

Comment by orthonormal on Suspiciously balanced evidence · 2020-02-21T21:24:08.471Z · score: 6 (3 votes) · LW · GW

A big part of the answer for me is something like this Scott Alexander post about the probability of X within your model versus the probability that your model is miscalibrated in a relevant way. Given how shaky our models of the world are, this alone makes it hard for me to push past 99% on many questions, especially those that require predicting human decisions.
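
A toy version of that calculation, with made-up numbers (the point is just that even a small chance of model error caps your overall confidence):

```python
# If my model says 99.9%, but there's a 3% chance the model itself is
# relevantly miscalibrated (in which case X is close to a coin flip):
p_model_ok = 0.97
p_x_given_ok = 0.999
p_x_given_broken = 0.5   # "anyone's guess" under model failure

p_x = p_model_ok * p_x_given_ok + (1 - p_model_ok) * p_x_given_broken
print(f"{p_x:.3f}")  # 0.984 -- stuck below 99% despite 99.9% in-model confidence
```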

Comment by orthonormal on What determines the balance between intelligence signaling and virtue signaling? · 2020-02-20T18:34:59.342Z · score: 4 (2 votes) · LW · GW

Organized religion may not have reduced virtue signaling, but it may have contained it within the church.

Comment by orthonormal on Writeup: Progress on AI Safety via Debate · 2020-02-20T17:35:43.518Z · score: 2 (1 votes) · LW · GW
As stated, I think this has a bigger vulnerability; B and B* just always answer the question with "yes."

Remember that this is also used to advance the argument. If A thinks B has such a strategy, A can ask the question in such a way that B's "yes" helps A's argument. But sure, there is something weird here.

Comment by orthonormal on Writeup: Progress on AI Safety via Debate · 2020-02-17T18:35:02.114Z · score: 4 (2 votes) · LW · GW
the dishonest team might want to call one as soon as they think the chance of them convincing a judge is below 50%, because that's the worst-case win-rate from blind guessing

I also think this is a fatal flaw with the existing two-person-team proposal; you need a system that gives you only an epsilon chance of winning if you're using it spuriously.

I have what looks to me like an improvement, but there's still a vulnerability:

A challenges B by specifying a yes-no question and a previous round at which to ask it. B answers; B* answers based only on B's notes up to that point; A wins outright if B and B* answer differently.

(This has the side effect that A* doesn't need to be involved, and so B can later challenge A. But of course we could get this under any such proposal by having teams larger than two!)

The remaining vulnerability is that A could ask a question that is so abstruse (and irrelevant to the actual debate) that there's a good chance an honest B and B* will answer it differently. (I'm thinking of more sophisticated versions of "if a tree falls in the forest" questions.)
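
To make the proposal concrete, here's a minimal sketch of the challenge step; the names, types, and the exact contexts given to B and B* are my interpretation, not from the writeup:

```python
from typing import Callable, List

# (question, visible transcript/notes) -> yes/no answer
Answer = Callable[[str, List[str]], bool]

def challenge(question: str, round_k: int, notes: List[str],
              b: Answer, b_star: Answer) -> bool:
    """Return True iff the challenger A wins outright.

    B answers with everything it knows; B* is an isolated copy of B that
    sees only B's notes through round_k. An honest, consistent strategy
    should give the same answer both times.
    """
    b_answer = b(question, notes)                           # full context
    b_star_answer = b_star(question, notes[:round_k + 1])   # truncated context
    return b_answer != b_star_answer
```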

Comment by orthonormal on 2018 Review: Voting Results! · 2020-01-26T19:53:02.372Z · score: 2 (1 votes) · LW · GW

Correlation looks good all the way except for the three massive outliers at the top of the rankings.

Comment by orthonormal on Reality-Revealing and Reality-Masking Puzzles · 2020-01-18T20:09:20.852Z · score: 10 (7 votes) · LW · GW

Shorter version:

"How to get people to take ideas seriously without serious risk they will go insane along the way" is a very important problem. In retrospect, CFAR should have had this as an explicit priority from the start.

Comment by orthonormal on What cognitive biases feel like from the inside · 2020-01-12T23:14:01.239Z · score: 2 (1 votes) · LW · GW

Relatedly, there's an awkward cursor line in the top-right box for optimism bias.

Comment by orthonormal on Stripping Away the Protections · 2020-01-12T23:07:19.658Z · score: 4 (2 votes) · LW · GW

I believe the corporations in Moral Mazes were mostly in the manufacturing sector. (Your second point applies, though, as a decent explanation for why American manufacturing has been increasingly outcompeted in the last few decades.)

Comment by orthonormal on Bottle Caps Aren't Optimisers · 2020-01-12T22:57:50.618Z · score: 2 (1 votes) · LW · GW

Okay, so another necessary condition for being downstream from an optimizer is being causally downstream. I'm sure there are other conditions, but the claim still feels like an important addition to the conversation.

Comment by orthonormal on Circling as Cousin to Rationality · 2020-01-04T21:44:02.320Z · score: 16 (7 votes) · LW · GW

I'm still skeptical of Circling, but this is exactly the sort of post I want to encourage in general: trying to explain something many readers are skeptical of, while staying within this site's epistemic standards and going only one inferential step out.

Comment by orthonormal on 2020's Prediction Thread · 2020-01-01T18:26:54.670Z · score: 2 (1 votes) · LW · GW

Either (1) or (2) (and some other possibilities) would satisfy my prediction. My prediction is just that, however we do things in 2029, it won't be by handing each merchant the keys to our entire credit account.

Comment by orthonormal on 2020's Prediction Thread · 2020-01-01T04:08:13.943Z · score: 2 (1 votes) · LW · GW

Dammit, dammit, dammit, I meant to condition these all on no human extinction and no superintelligence. Commenting rather than editing because I forget if the time of an edit is visible, and I want it to be clear I didn't update this based on information from the 2020s.

Comment by orthonormal on 2020's Prediction Thread · 2020-01-01T00:06:05.241Z · score: 2 (1 votes) · LW · GW

Okay then, how about higher education as a fraction of GDP?

Comment by orthonormal on 2020's Prediction Thread · 2019-12-31T21:24:36.387Z · score: 2 (1 votes) · LW · GW

As with last decade, I'm most confident about boring things, though less optimistic than I'd like to be.

Fewer than 1 billion people (combatants + civilians) will die in wars in the 2020s: 95%

The United States of America will still exist under its current Constitution (with or without new Amendments) and with all of its current states (with or without new states) as of 1/1/30: 93%

Fewer than 10 million people (combatants + civilians) will die in wars in the 2020s: 85%

The median rent per unit in the United States will increase faster than inflation in the 2020s: 80%

The National Popular Vote Interstate Compact will not go into force by 1/1/30: 75%

Human-driven cars will still be street-legal in all major US cities as of 1/1/30: 75%

As of 1/1/30, customers will not make purchases by giving each merchant full access to a non-transaction-specific numeric string (i.e. credit cards as they are today): 70%

Conditional on Pew Research Center releasing a survey on the topic after 1/1/28, their most recent survey by 1/1/30 will show that 60% or fewer of American adults identify as Christian: 70%

Conditional on Pew Research Center releasing a survey on the topic after 1/1/28, their most recent survey by 1/1/30 will show that 33% or more of American adults identify as religiously unaffiliated: 70%

More than half of American adults will use a Facebook product at least once per day in 2029: 60%

Real-time 24-hour news networks will still exist as of 1/1/30, and will have more than 1 million average daily viewers (in the USA) in 2029: 50%

The largest not-currently-existing US tech company (by market cap as of 1/1/30) will not have its primary HQ in the Bay Area: 35%

A song generated entirely by an AI will make one of the Billboard charts: 25%

California will put a new state constitution to a statewide vote: 10%

Comment by orthonormal on 2020's Prediction Thread · 2019-12-31T20:25:35.045Z · score: 9 (5 votes) · LW · GW

Re (7), there's a laughable amount of conjunction on even the first prediction in the chain.

Comment by orthonormal on 2020's Prediction Thread · 2019-12-31T20:22:41.438Z · score: 2 (1 votes) · LW · GW

Re: higher education bubble, do you also predict that tuition increases will not outpace inflation?

Comment by orthonormal on 2020's Prediction Thread · 2019-12-31T20:21:17.667Z · score: 3 (2 votes) · LW · GW

The Rubik's Cube one strikes me as much more feasible than the other AI predictions. Look at the dexterity improvements Boston Dynamics has made over the last decade, apply that rate of progress to current robotic hands, and I think there's a better-than-70% chance you get a Rubik's-Cube-spinning, chopsticks-using robotic hand by 2030.