Posts

cousin_it's Shortform 2019-10-26T17:37:44.390Z · score: 3 (1 votes)
Announcement: AI alignment prize round 4 winners 2019-01-20T14:46:47.912Z · score: 80 (19 votes)
Announcement: AI alignment prize round 3 winners and next round 2018-07-15T07:40:20.507Z · score: 102 (29 votes)
How to formalize predictors 2018-06-28T13:08:11.549Z · score: 16 (5 votes)
UDT can learn anthropic probabilities 2018-06-24T18:04:37.262Z · score: 65 (20 votes)
Using the universal prior for logical uncertainty 2018-06-16T14:11:27.000Z · score: 0 (0 votes)
Understanding is translation 2018-05-28T13:56:11.903Z · score: 155 (54 votes)
Announcement: AI alignment prize round 2 winners and next round 2018-04-16T03:08:20.412Z · score: 155 (46 votes)
Using the universal prior for logical uncertainty (retracted) 2018-02-28T13:07:23.644Z · score: 39 (10 votes)
UDT as a Nash Equilibrium 2018-02-06T14:08:30.211Z · score: 36 (12 votes)
Beware arguments from possibility 2018-02-03T10:21:12.914Z · score: 13 (9 votes)
An experiment 2018-01-31T12:20:25.248Z · score: 32 (11 votes)
Biological humans and the rising tide of AI 2018-01-29T16:04:54.749Z · score: 56 (19 votes)
A simpler way to think about positive test bias 2018-01-22T09:38:03.535Z · score: 34 (13 votes)
How the LW2.0 front page could be better at incentivizing good content 2018-01-21T16:11:17.092Z · score: 38 (19 votes)
Beware of black boxes in AI alignment research 2018-01-18T15:07:08.461Z · score: 71 (30 votes)
Announcement: AI alignment prize winners and next round 2018-01-15T14:33:59.892Z · score: 167 (64 votes)
Announcing the AI Alignment Prize 2017-11-04T11:44:19.000Z · score: 1 (1 votes)
Announcing the AI Alignment Prize 2017-11-03T15:47:00.092Z · score: 156 (68 votes)
Announcing the AI Alignment Prize 2017-11-03T15:45:14.810Z · score: 7 (7 votes)
The Limits of Correctness, by Bryan Cantwell Smith [pdf] 2017-08-25T11:36:38.585Z · score: 3 (3 votes)
Using modal fixed points to formalize logical causality 2017-08-24T14:33:09.000Z · score: 3 (3 votes)
Against lone wolf self-improvement 2017-07-07T15:31:46.908Z · score: 33 (29 votes)
Steelmanning the Chinese Room Argument 2017-07-06T09:37:06.760Z · score: 5 (5 votes)
A cheating approach to the tiling agents problem 2017-06-30T13:56:46.000Z · score: 3 (3 votes)
What useless things did you understand recently? 2017-06-28T19:32:20.513Z · score: 7 (7 votes)
Self-modification as a game theory problem 2017-06-26T20:47:54.080Z · score: 10 (10 votes)
Loebian cooperation in the tiling agents problem 2017-06-26T14:52:54.000Z · score: 5 (5 votes)
Thought experiment: coarse-grained VR utopia 2017-06-14T08:03:20.276Z · score: 16 (16 votes)
Bet or update: fixing the will-to-wager assumption 2017-06-07T15:03:23.923Z · score: 30 (27 votes)
Overpaying for happiness? 2015-01-01T12:22:31.833Z · score: 32 (33 votes)
A proof of Löb's theorem in Haskell 2014-09-19T13:01:41.032Z · score: 32 (31 votes)
Consistent extrapolated beliefs about math? 2014-09-04T11:32:06.282Z · score: 6 (7 votes)
Hal Finney has just died. 2014-08-28T19:39:51.866Z · score: 34 (36 votes)
"Follow your dreams" as a case study in incorrect thinking 2014-08-20T13:18:02.863Z · score: 29 (31 votes)
Three questions about source code uncertainty 2014-07-24T13:18:01.363Z · score: 9 (10 votes)
Single player extensive-form games as a model of UDT 2014-02-25T10:43:12.746Z · score: 21 (12 votes)
True numbers and fake numbers 2014-02-06T12:29:08.136Z · score: 19 (29 votes)
Rationality, competitiveness and akrasia 2013-10-02T13:45:31.589Z · score: 14 (15 votes)
Bayesian probability as an approximate theory of uncertainty? 2013-09-26T09:16:04.448Z · score: 16 (18 votes)
Notes on logical priors from the MIRI workshop 2013-09-15T22:43:35.864Z · score: 18 (19 votes)
An argument against indirect normativity 2013-07-24T18:35:04.130Z · score: 1 (14 votes)
"Epiphany addiction" 2012-08-03T17:52:47.311Z · score: 52 (56 votes)
AI cooperation is already studied in academia as "program equilibrium" 2012-07-30T15:22:32.031Z · score: 36 (37 votes)
Should you try to do good work on LW? 2012-07-05T12:36:41.277Z · score: 36 (41 votes)
Bounded versions of Gödel's and Löb's theorems 2012-06-27T18:28:04.744Z · score: 32 (33 votes)
Loebian cooperation, version 2 2012-05-31T18:41:52.131Z · score: 13 (14 votes)
Should logical probabilities be updateless too? 2012-03-28T10:02:09.575Z · score: 12 (15 votes)
Common mistakes people make when thinking about decision theory 2012-03-27T20:03:08.340Z · score: 54 (47 votes)
An example of self-fulfilling spurious proofs in UDT 2012-03-25T11:47:16.343Z · score: 20 (21 votes)

Comments

Comment by cousin_it on Against Victimhood · 2020-09-19T11:34:07.831Z · score: 2 (1 votes) · LW · GW

I mostly agree. Though it can be hard for a person to tell when this advice applies, as it's a bit absolutist, like "drink more water". Some kind of reasonable-person criterion could work here, like "if you say this is causing you X worth of problems, but you aren't taking reasonable steps that cost less than X and could help with these problems, then maybe stop complaining so much."

Comment by cousin_it on Maybe Lying Can't Exist?! · 2020-09-16T20:33:06.466Z · score: 4 (2 votes) · LW · GW

Under typical game-theoretic assumptions, we would assume all players to be strategic. In that context, it seems much more natural to suppose that all evil people would also be liars.

Why? Maybe some evil people are ok with kicking puppies but not with lying - that's part of their utility function. (If such differences in utility functions can't exist, then there's no such thing as "good" or "evil" anyway.)

Comment by cousin_it on Open & Welcome Thread - September 2020 · 2020-09-15T15:45:30.304Z · score: 4 (3 votes) · LW · GW

Wouldn't more moral uncertainty make people less certain that Communism or Nazism were wrong?

Comment by cousin_it on Social Capital Paradoxes · 2020-09-11T08:07:01.643Z · score: 2 (1 votes) · LW · GW

A free market isn't a lawless jungle of arbitrary one-shot interactions. It's an engineered game where participants can't be forced into deals and are expected to keep their promises. That pushes the great mass of interactions away from "predatory" and toward "positive-sum".

Comment by cousin_it on Maybe Lying Can't Exist?! · 2020-08-23T09:52:40.044Z · score: 12 (5 votes) · LW · GW

Wait, this doesn't seem right. Say 49% of people are good and truthful, 49% are evil and truthful, and 2% are evil liars. You meet a random person and are deciding whether to be friends with them. A priori they're about equally likely to be good or evil. You ask "are you good?" They say "yeah". Now they are much more likely to be good than evil. So if the person is in fact an evil liar, their lie had the intended effect on you. It wasn't "priced into the equilibrium" or anything.

The technical explanation is still correct in the narrow sense - the message can be interpreted as "I'm either good or an evil liar", and it does increase the probability of "evil liar". But at the same time it increases the probability of "good" relative to "evil" overall, and often that's what matters.
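A quick numerical check of that update (a minimal sketch; the 49/49/2 split and the assumption that everyone except the liars answers honestly are taken from the example above):

```python
# Prior over the three types from the example above.
prior = {"good_truthful": 0.49, "evil_truthful": 0.49, "evil_liar": 0.02}

# Probability that each type answers "yeah" to "are you good?"
# (truthful people answer honestly, liars claim to be good).
says_good = {"good_truthful": 1.0, "evil_truthful": 0.0, "evil_liar": 1.0}

# Bayes: P(type | "yeah") is proportional to P(type) * P("yeah" | type).
joint = {t: prior[t] * says_good[t] for t in prior}
total = sum(joint.values())
posterior = {t: p / total for t, p in joint.items()}
print(posterior)
# good_truthful ~ 0.96, evil_liar ~ 0.04 -- the answer "yeah" moves you
# from roughly 50/50 to strongly favouring "good", even though it also
# raises P(evil_liar) relative to P(evil_truthful).
```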

Comment by cousin_it on "The Conspiracy against the Human Race," by Thomas Ligotti · 2020-08-14T20:24:05.956Z · score: 8 (4 votes) · LW · GW

Some time ago I tried to come up with a theory to Make Sense Of It All. It went something like this: suffering is a tool of evolution, but in us, evolution came up with creatures that can achieve creativity with joy instead of suffering. We're agents who should bring that change about more widely, and also living proofs of concept that it's possible.

Comment by cousin_it on No Ultimate Goal and a Small Existential Crisis · 2020-07-26T22:15:11.642Z · score: 2 (1 votes) · LW · GW

Not sure I can give advice on this... it feels different every time, and it probably differs between people as well. You're on your own :-/

Comment by cousin_it on No Ultimate Goal and a Small Existential Crisis · 2020-07-25T23:53:57.495Z · score: 3 (2 votes) · LW · GW

I don't think it makes sense to search for your one true love or one true calling in life. It's more of a mutual process: you encounter a person or calling, ask yourself if it could work out, then invest. There's always a free choice, a leap of faith. Isn't it nice that the world works that way, instead of funneling you to one predetermined answer?

Comment by cousin_it on A Scalable Urban Design and the Single Building City · 2020-07-25T23:05:15.162Z · score: 5 (3 votes) · LW · GW

I spent most of the lockdown in a small town by a lake and loved it. The future I'd like to see is a future where good jobs are less tied to cities, due to remote work tech like this.

Comment by cousin_it on Collection of GPT-3 results · 2020-07-24T19:11:49.457Z · score: 2 (1 votes) · LW · GW

Thank you! It looks very impressive.

Comment by cousin_it on Collection of GPT-3 results · 2020-07-19T21:54:50.553Z · score: 3 (2 votes) · LW · GW

No no, I meant "talk its way out of the box". Have you tried something like that?

Comment by cousin_it on Collection of GPT-3 results · 2020-07-19T07:51:02.365Z · score: 6 (4 votes) · LW · GW

Has anyone tried to get it to talk itself out of the box yet?

Comment by cousin_it on Atemporal Ethical Obligations · 2020-06-28T10:30:09.583Z · score: 6 (3 votes) · LW · GW

If our children are better than us, I hope they'll offer us the same forgiveness and gratitude as we did to our parents.

Comment by cousin_it on DontDoxScottAlexander.com - A Petition · 2020-06-25T20:17:05.720Z · score: 34 (11 votes) · LW · GW

That list of names is amazing! I realize now how many like-minded people are out there; I'm not as alone as it felt before. Let's not delete it too quickly - it's great that we're all able to find each other.

Comment by cousin_it on Open & Welcome Thread - June 2020 · 2020-06-04T12:16:02.369Z · score: 2 (1 votes) · LW · GW

I don't know the US situation firsthand, but it seems like it could get worse toward the election. Maybe move to Europe?

Comment by cousin_it on cousin_it's Shortform · 2020-06-04T05:19:32.041Z · score: 2 (1 votes) · LW · GW

Maybe stochastic matrix?

Comment by cousin_it on Conceptual engineering: the revolution in philosophy you've never heard of · 2020-06-03T13:35:05.869Z · score: 4 (2 votes) · LW · GW

Here's a paper by Chalmers; maybe people will find it a good intro.

Overall I agree with your point and would even go further (not sure if you'll agree or not). My feelings about colloquial language are kind of environmentalist: I think it should be allowed to grow in the traditional way, through folk poetry and individual choices, without foisting academisms or attacking "old" concepts. Otherwise we'll just have a poor and ugly language.

Comment by cousin_it on Updated Hierarchy of Disagreement · 2020-05-29T13:04:04.497Z · score: 2 (1 votes) · LW · GW

I'd add three more levels at the bottom. The first would be about painting a target on someone: "Overheard Bob saying this terrible thing. #PopularHashtag" The next one would be about silencing: "You have been banned." And the last would be a picture of a gun.

Comment by cousin_it on What are objects that have made your life better? · 2020-05-22T07:19:51.097Z · score: 2 (1 votes) · LW · GW

Yeah. A steel-string acoustic guitar is "a friend for life", as Mark Knopfler said. Another versatile instrument is the electronic keyboard.

Comment by cousin_it on What are Michael Vassar's beliefs? · 2020-05-18T22:32:39.794Z · score: 6 (4 votes) · LW · GW

I met him once and didn't feel much charisma; he just sounded overconfident about everything. I'm sure it works on some people, though.

Comment by cousin_it on Why Artists Study Anatomy · 2020-05-18T22:02:08.029Z · score: 4 (2 votes) · LW · GW

Yeah. For me the aha moment came from Drawing the Head and Hands by Loomis, which is like an extended version of the post you linked. It feels great: you draw a sphere and some helper lines and end up with a realistic head from any angle.

Comment by cousin_it on Movable Housing for Scalable Cities · 2020-05-16T06:51:02.956Z · score: 24 (8 votes) · LW · GW

It seems to me that making people more mobile won't push more people out of cities, but will instead pull people into them. Recall how cities grow when there's a high supply of highly mobile people from poorer regions.

That said, even if cities grow a lot, I think it's possible to make rents lower. But it seems more like an economic and political problem.

Comment by cousin_it on What are examples of perennial discoveries? · 2020-05-09T14:07:47.315Z · score: 5 (4 votes) · LW · GW

Every year there's a handful of new "flying cars" or other vehicles that promise to make personal flight popular, but nothing ever comes of it.

Comment by cousin_it on Individual Rationality Needn't Generalize to Rational Consensus · 2020-05-07T22:02:35.566Z · score: 2 (1 votes) · LW · GW

Yeah. I was more trying to argue that, compared to Bayesian ideas, voting doesn't win you all that much.

Comment by cousin_it on Individual Rationality Needn't Generalize to Rational Consensus · 2020-05-07T10:58:47.702Z · score: 2 (1 votes) · LW · GW

Right, this is where strong Bayesianism is required. You have to assume, for example, that everyone agrees on the set of hypotheses under consideration and the exact models to be used.

But under these assumptions, combining evidence always gives the right answer. Compare with the example in the post, "vote on a, vote on b, vote on a^b", which just seems strange. Shouldn't we try to use methods that give the right answers to simple questions?
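For readers who haven't seen why voting proposition-by-proposition can misbehave, here is the standard discursive-dilemma illustration (my own toy numbers; I don't know the exact example the post uses):

```python
# Three voters' honest, individually consistent beliefs about a and b.
voters = [
    {"a": True,  "b": True},    # accepts a^b
    {"a": True,  "b": False},   # rejects a^b
    {"a": False, "b": True},    # rejects a^b
]

def majority(values):
    return sum(values) > len(values) / 2

maj_a  = majority([v["a"] for v in voters])               # True  (2 of 3)
maj_b  = majority([v["b"] for v in voters])               # True  (2 of 3)
maj_ab = majority([v["a"] and v["b"] for v in voters])    # False (1 of 3)

print(maj_a, maj_b, maj_ab)
# True True False -- the majority accepts a, accepts b, yet rejects a^b,
# so the group's "beliefs" are inconsistent even though each voter's are fine.
```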

The hard problem is choosing between points on the frontier... which is why norm-generation processes like voting are relevant.

I think if you have a set of coefficients for comparing different people's utilities (maybe derived by looking into their brains and measuring how much fun they feel), then that linear combination of utilities is almost tautologically the right solution. But if your only inputs are each person's choices in some mechanism like voting, then each person's utility function is only determined up to an affine transform, and that's not enough information to solve the problem.

For example, imagine two agents with utility functions A and B such that A<0, B<0, AB=1. So the Pareto frontier is one branch of a hyperbola. But if the agents instead had utility functions A'=2A and B'=B/2, the frontier would be the same hyperbola. Basically there's no affine-invariant way to pick a point on that curve.
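To spell out both claims (same frontier, and no affine-invariant choice), here is a small verification of my own:

\[
A'B' = (2A)\cdot\frac{B}{2} = AB = 1, \qquad A' < 0,\; B' < 0,
\]

so the rescaled agents present exactly the same frontier \(\{(a,b) : ab = 1,\ a, b < 0\}\). The rescaling acts on that curve as \((a,b) \mapsto (2a, b/2)\), which maps the curve onto itself but fixes none of its points; any rule that picks a point using only affine-invariant information would have to pick a point fixed by this map, and there is no such point.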

You could say that's because the example uses unbounded utility functions. But they are unbounded only in the negative direction, which maybe isn't so unrealistic. And anyway, the example suggests that even for bounded utility functions, any method would have to be sensitive to the far negative reaches of utility, which seems strange. Compare to what happens when you do have coefficients for comparing utilities: then the method is nicely local.

Does that make sense?

Comment by cousin_it on Individual Rationality Needn't Generalize to Rational Consensus · 2020-05-06T07:40:59.687Z · score: 2 (1 votes) · LW · GW

Aumann agreement isn’t an answer here, unless you assume strong Bayesianism, which I would advise against.

To expand the argument a bit: if many people have evidence-based beliefs about something, you could combine these beliefs by voting, but why bother? You have a superintelligent AI! You can peek into everyone's heads, gather all the evidence, remove double-counting, and perform a joint update. That's basically what Aumann agreement does - it doesn't vote on beliefs, but instead tries to reach an end state that's updated on all the evidence behind these beliefs. I think methods along these lines (combining evidence instead of beliefs) are more correct and should be used whenever we can afford them.

For more details on this, see the old post Share likelihood ratios, not posterior beliefs. Wei Dai and Hal Finney discuss a nice toy example in the comments: two people each observe a private coinflip; how do they combine their beliefs about the proposition that both coins came up heads? Combining the evidence is simple and gives the right answer, while other clever schemes give wrong answers.
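Here is a minimal sketch of that toy example as I reconstruct it (the exact setup and numbers are mine, not copied from the linked thread): each person flips a private fair coin, both happen to see heads, and they want the probability that both coins are heads.

```python
from itertools import product
from fractions import Fraction

# All equally likely worlds: (coin1, coin2).
worlds = list(product("HT", repeat=2))
prior = Fraction(1, len(worlds))

def p(event):
    return sum(prior for w in worlds if event(w))

H  = lambda w: w == ("H", "H")   # proposition: both coins came up heads
E1 = lambda w: w[0] == "H"       # person 1's private evidence
E2 = lambda w: w[1] == "H"       # person 2's private evidence

# Combining the raw evidence: condition on both observations at once.
print(p(lambda w: H(w) and E1(w) and E2(w)) / p(lambda w: E1(w) and E2(w)))  # 1, the right answer

# Scheme 1: average the two individual posteriors.
post1 = p(lambda w: H(w) and E1(w)) / p(E1)   # 1/2
post2 = p(lambda w: H(w) and E2(w)) / p(E2)   # 1/2
print((post1 + post2) / 2)                    # 1/2, wrong

# Scheme 2: multiply likelihood ratios as if the observations were
# conditionally independent given H and not-H (here they aren't).
def lr(E):
    num = p(lambda w: E(w) and H(w)) / p(H)
    den = p(lambda w: E(w) and not H(w)) / p(lambda w: not H(w))
    return num / den

odds = (p(H) / p(lambda w: not H(w))) * lr(E1) * lr(E2)
print(odds / (1 + odds))                      # 3/4, also wrong
```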

I have to say I don’t know why a linear combination of utility functions could be considered ideal.

Imagine that after doing the joint update, the agents agree to cooperate instead of fighting, and have a set of possible joint policies. Each joint policy leads to a tuple of expected utilities for all agents. The resulting set of points in N-dimensional space has a Pareto frontier. Each point on that Pareto frontier has a tangent hyperplane. So there's some linear combination of utility functions that's maximized at that point, modulo some tie-breaking if the frontier is perfectly flat there.
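A sketch of that step in symbols (my gloss, and it leans on the set of achievable expected-utility tuples being convex, which randomizing over joint policies supplies): if \(S \subset \mathbb{R}^N\) is convex and \(u^{*} \in S\) is Pareto-optimal, the supporting hyperplane theorem gives a nonzero weight vector \(w \geq 0\) with

\[
\langle w, u \rangle \leq \langle w, u^{*} \rangle \quad \text{for all } u \in S,
\]

so \(u^{*}\) maximizes the linear combination \(\sum_i w_i U_i\) over the feasible joint policies, which is the sense in which some linear combination is "ideal" at that point.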

Comment by cousin_it on Individual Rationality Needn't Generalize to Rational Consensus · 2020-05-05T09:26:16.162Z · score: 7 (4 votes) · LW · GW

Well, the "ideal" way to aggregate beliefs is by Aumann agreement, and the "ideal" way to aggregate values is by a linear combination of utility functions. Neither involves voting. So I'm not sure voting theory will play much of a role. It's more intended for situations where everyone behaves strategically; a superintelligent AI with visibility into our natures should be able to skip most of it.

Comment by cousin_it on Topological metaphysics: relating point-set topology and locale theory · 2020-05-01T19:14:09.094Z · score: 3 (2 votes) · LW · GW

I see. In that case does the procedure for defining points stay the same, or do you need to use recursively enumerable sets of opens, giving you only countably many reals?

Comment by cousin_it on Topological metaphysics: relating point-set topology and locale theory · 2020-05-01T11:44:48.072Z · score: 5 (3 votes) · LW · GW

Wait, but rational-delimited open intervals don't form a locale, because they aren't closed under infinite union. (For example, the union of all rational-delimited open intervals contained in (0,√2) is (0,√2) itself, which is not rational-delimited.) Of course you could talk about the locale generated by such intervals, but then it contains all open intervals and is uncountable, defeating your main point about going from countable to uncountable. Or am I missing something?

Comment by cousin_it on I do not like math · 2020-04-29T16:50:20.854Z · score: 2 (1 votes) · LW · GW

Yeah, being good with proofs is mostly useful for doing original work in math. You don't need it for applying known math.

Comment by cousin_it on A speculative incentive design: self-determined price commitments as a way of averting monopoly · 2020-04-28T11:04:01.721Z · score: 6 (3 votes) · LW · GW

Now I feel a bit silly, because my comment wasn't a new idea at all, but rather the reason why public utilities exist. So maybe looking at their history and performance is the best way to answer your questions.

Comment by cousin_it on A speculative incentive design: self-determined price commitments as a way of averting monopoly · 2020-04-28T08:54:17.262Z · score: 2 (1 votes) · LW · GW

I have another idea: if the mere existence of a competitor makes a monopoly drop prices all the way from monopoly price (way above break-even) to below break-even (necessary to crush the competitor), maybe the government should be selling some monopoly-prone goods at break-even. It would be very profitable for consumers and cost almost nothing.

Comment by cousin_it on Validity>Soundness in Creative Writing · 2020-04-23T10:53:41.149Z · score: 2 (1 votes) · LW · GW

Inconsistency is a good problem to have. I think for most people the creativity problem is at an earlier stage - they just can't come up with non-boring stuff, consistent or not.

Comment by cousin_it on Intuitions on Universal Behavior of Information at a Distance · 2020-04-21T08:39:13.315Z · score: 2 (1 votes) · LW · GW

I'm a bit confused. In the first section you point out that pairwise independence doesn't imply independence, which is correct. Then you use that as motivation to define "distributed information", and then you switch to talking about normal distributions. But for normal distributions, pairwise independence does imply independence. What gives?
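For reference, a minimal sketch of the standard construction behind that first point, three ±1 variables that are pairwise independent but not jointly independent (whereas for a jointly Gaussian vector, as the comment above says, pairwise independence of the coordinates does give joint independence):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

x = rng.choice([-1, 1], size=n)
y = rng.choice([-1, 1], size=n)
z = x * y   # any two of (x, y, z) are independent, but the triple is not

# Pairwise: knowing x (or y) alone tells you nothing about z.
print(np.corrcoef(x, z)[0, 1], np.corrcoef(y, z)[0, 1])   # both close to 0

# Jointly: z is a deterministic function of (x, y).
print(np.mean(z[(x == 1) & (y == 1)] == 1))               # 1.0, not 0.5
```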

Comment by cousin_it on The Samurai and the Daimyo: A Useful Dynamic? · 2020-04-14T11:07:54.201Z · score: 15 (9 votes) · LW · GW

We had a huge discussion of this in 2015.

Comment by cousin_it on Law school taught me nothing · 2020-04-13T09:15:24.573Z · score: 2 (1 votes) · LW · GW

For language learning I think I've found "one weird trick": watch a YouTube video of someone reading a text in the language (with the text onscreen), mimic the pronunciation of each sentence after hearing it, and look up each unfamiliar word in Google Translate as you go. Last year I did that with German, spending about 5 minutes every weekday morning before going to work. Basically each video would take me a few weeks to get through, and then I'd switch to another one. Other than that, I did absolutely nothing - no grammar, no flashcards, no teachers. Then I signed up for an official test (reading + writing + listening + speaking) and passed it easily.

Comment by cousin_it on [deleted post] 2020-04-11T08:42:43.012Z

If the past is a cone of possible pasts, most of which have higher entropy than the present (due to time symmetry), that means your memories are probably fake, because they describe a past with lower entropy. This is known as Loschmidt's paradox.

One popular solution to the paradox is to assume that the distant past had very low entropy for some reason. If that's right, that means the past's nondeterminism is different from the future's nondeterminism: probabilities about the future are conditioned only on the present, but probabilities about the recent past are conditioned on both the present and the distant past.

Comment by cousin_it on Being right isn't enough. Confidence is very important. · 2020-04-07T10:37:09.999Z · score: 11 (6 votes) · LW · GW

The phrasing "being right isn't enough" seems a bit off, because the second machine is right more often than the first. So maybe the post is more about calibration vs confidence. But there's no tension between these two things, because they can be combined into log score - a single dimension that captures the best parts of both.
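To illustrate that last sentence with my own toy numbers: the expected log score of someone who reports probability q for an event whose true frequency is p is p log q + (1-p) log(1-q). It's maximized exactly at q = p, which rewards calibration, and its value at q = p is higher when p is extreme, which rewards confident (informative) predictions.

```python
import numpy as np

def expected_log_score(p, q):
    """Expected log score for reporting q when the true frequency is p."""
    return p * np.log(q) + (1 - p) * np.log(1 - q)

# Calibration: for a fixed true frequency, the best report is the truth.
qs = np.linspace(0.01, 0.99, 99)
print(qs[np.argmax(expected_log_score(0.7, qs))])   # ~0.7

# Confidence: among calibrated forecasters, sharper predictions score higher.
for p in (0.6, 0.9, 0.99):
    print(p, expected_log_score(p, p))              # rises toward 0 as p -> 1
```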

Comment by cousin_it on How About a Remote Variolation Study? · 2020-04-05T16:34:10.580Z · score: 2 (1 votes) · LW · GW

Yes, if the potential effect size is large, you can get away with imprecise answers to some questions. But if there are many questions, at some point your "imprecision budget" will be spent. For example, will you be able to detect if your dosing leads to later hospitalization instead of no hospitalization? Or if it weakens immunity instead of strengthening it?

Comment by cousin_it on How About a Remote Variolation Study? · 2020-04-04T17:35:13.279Z · score: 7 (3 votes) · LW · GW

Let's say X% get hospitalized within 2 weeks. What's the highest value of X at which you'd still say variolation is a good idea? Keep in mind that:

  • The demographics of your sample aren't the same as the general population's; hopefully you didn't include many 60+ folks.

  • You don't know how many people botched the protocol, and they could botch it in any direction (dose too high, too low, or no dose at all).

  • You don't know the hospitalization rate after contracting corona in the normal ways, which can also involve a low dose. Many people don't get tested now, and the epidemic is spreading.

  • Etc.

Comment by cousin_it on How About a Remote Variolation Study? · 2020-04-04T13:38:20.848Z · score: 7 (3 votes) · LW · GW
  1. Spain has stabilized at 7K new cases/day, Italy at 5K new cases/day. At this rate it will take many months to reach a significant percentage of the population (see the rough arithmetic after this list). The same will probably happen in the US. Most people won't get infected, so trying amateur vaccination is more dangerous than doing nothing.

  2. How will you send doses to volunteers? If I were a delivery company, I would refuse to deliver this and would call the cops.

  3. How will you measure the results? People have trouble measuring the death rate from corona; sometimes they can't even agree on the order of magnitude. It's really low and depends on demographic factors, environment, treatment, and other things that aren't well understood. If you want to measure a change in that rate by looking at 10k remote volunteers in a reasonable time, I'd like to see your methodology and error bounds.
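To make the "many months" point in the first item concrete, here is the rough arithmetic, taking the reported case counts at face value (the population figure is approximate, and reported cases understate true infections, so treat this as an illustration only):

```python
spain_population = 47_000_000     # approximate
new_cases_per_day = 7_000         # the "stabilized" rate cited above

for target_share in (0.05, 0.20, 0.60):
    days = target_share * spain_population / new_cases_per_day
    print(f"{target_share:.0%} of the population after ~{days / 30:.0f} months")
# 5% -> ~11 months, 20% -> ~45 months, 60% -> ~134 months
```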

Comment by cousin_it on mind viruses about body viruses · 2020-03-28T08:43:09.077Z · score: 14 (8 votes) · LW · GW

Counterpoint: most people who will read your post are already better than average at vetting memes before spreading them. If you succeed at making these folks even more cautious, everyone else in the world will still keep spreading unvetted memes, so worse memes will win.

Comment by cousin_it on March 24th: Daily Coronavirus Link Updates · 2020-03-27T07:29:11.912Z · score: 6 (3 votes) · LW · GW

Wait, so your graph shows the number of people having their 2-day "infectious period" at any given time, which could be much lower than the number of people infected at a given time? That doesn't seem to be explained on the page.

Anyway, I think the really important number is how many people are having their "required hospitalization period" at any given time (which is longer than 2 days). Maybe you could show that too, since you're already showing the "care capacity" line?

Comment by cousin_it on March 24th: Daily Coronavirus Link Updates · 2020-03-26T18:09:39.790Z · score: 2 (1 votes) · LW · GW

It still looks weird to me. For example, in Switzerland with no mitigation it estimates 1% of people infected now and 3% at the peak on Apr 14, which is 2.5 weeks from now. Since each infection lasts a couple weeks or more, and there have been few deaths and recoveries so far, that means <5% of the population will have been infected by that point. And then it says active infections will start falling. Why?

Comment by cousin_it on March 24th: Daily Coronavirus Link Updates · 2020-03-26T16:26:25.317Z · score: 3 (2 votes) · LW · GW

Does anyone know why the dashboard says infections will peak at 3% if no mitigation is done?

Comment by cousin_it on Occam's Guillotine · 2020-03-23T11:26:43.809Z · score: 4 (2 votes) · LW · GW

I think there are two issues here: 1) what are the right beliefs to have about life, and 2) what's the right emotional attitude toward life. You paint a picture of truth as a harsh destroyer of illusions, but why not describe it as a source of wonder / beauty / power / progress instead?

Comment by cousin_it on Robin Hanson on whether governments can squash COVID-19 · 2020-03-19T23:05:29.377Z · score: 2 (1 votes) · LW · GW

Out of the four "obvious considerations" at the start of the post, two seem questionable to me.

you have to do a lot more to squash than to flatten

AFAIK, to get worthwhile flattening (not much overloading of hospital beds) we need to get R0 pretty close to 1 anyway, so the extra effort to push it below 1 (squashing) could be relatively small.

while flattening policies need be maintained only for a few months, squashing policies must be maintained until a strong treatment is available, probably years

AFAIK, flattening over a few months means almost as many deaths as no flattening at all. The hump is too big, and the number of hospital beds too small, to safely "process" half of the population in a few months.

Comment by cousin_it on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-15T08:04:32.229Z · score: 13 (5 votes) · LW · GW

In terms of conversation style, I'd define a "rationalist" as someone who's against non-factual objections to factual claims: "you're not an expert", "you're motivated to say this", "you're friends with the wrong people", "your claim has bad consequences" and so on. An intermediate stage would be "grudging rationalist": someone who can refrain from using such objections if asked, but still listens to them, and relapses to using them when among non-rationalists.

Comment by cousin_it on The absurdity of un-referenceable entities · 2020-03-14T22:13:04.086Z · score: 4 (2 votes) · LW · GW

I think Jessica is right on this point. Within a system like ZFC, you can't define the system's own definability predicate, so the sentence "there are numbers undefinable in ZFC" can't even be said, let alone proved. (Which is just as well, since ZFC has a countable model, and even a model whose every member is definable.) The same applies to the system of everything you believe about math, as long as it's consistent and at least as strong as ZFC.

Comment by cousin_it on Puzzles for Physicalists · 2020-03-13T15:52:09.162Z · score: 2 (1 votes) · LW · GW

I think counterfactuals only make sense when talking about a part of a system from the perspective of another part. Maybe probabilities as well. Similar to how in quantum mechanics, a system of two qubits can be in a pure state, but from the perspective of the first qubit, the second is in a mixed state.

In this view, causality/counterfactuals don't have to be physically fundamental. For example, you can have a Game of Life world where "all causal claims reduce to claims about state" as you say: "if X then Y" where X and Y are successive states. Yet it makes perfect sense for an AI in that world to use probabilities or counterfactuals over another, demarcated part of the world.

There is of course a tension between that and logical decision theories, but maybe that's ok?