How four guys helped redirect Japan's coronavirus policy 2020-04-23T09:22:46.320Z


Comment by Taran on What will 2040 probably look like assuming no singularity? · 2021-05-17T11:45:18.189Z · LW · GW

AI-written books will be sold on Amazon, and people will buy them.  Specialty services will write books on demand based on customer specifications.  At least one group, and probably several, will make real money in erotica this way.  The market for hand-written mass market fiction, especially low-status stuff like genre fiction and thrillers, will contract radically.

Comment by Taran on Moving Data Around is Slow · 2021-03-22T13:03:05.481Z · LW · GW

Game programmers love this trick too, and for the same reasons: you're typically not doing elaborate computations on small amounts of data, you're doing simple computations (position updates, distance comparisons, &c) on large amounts of data.  Paring away unnecessary memory transfers is a big part of making that kind of computation go fast.

Comment by Taran on Mentorship, Management, and Mysterious Old Wizards · 2021-02-26T13:34:10.016Z · LW · GW

FWIW I also think your summary of the 2015 article is inaccurate.  For example, "EA needs very specific talents that are missing." isn't consistent with the section titled "Less Earning to Give", which states very clearly that more than 20% of EAs, total, should be doing direct work.  "EA needs lots of generally talented people" is a much better fit.  My own experiences are consistent with that:  the people I know who got career advice from 80k or other EA thought leaders in that era were all told to do direct work, typically operations at EA orgs.

Normally this wouldn't be worth talking about; who really cares whether an article from 2015 was unclear, or clearly communicated something its authors now disagree with?  Here I think the distinction matters, because it's a load-bearing part of the argument that mentorship is a bottleneck for EA specifically.  People who got top-tier mentorship in 2015 were told things we now agree aren't true, but that were consistent with the articles available at the time.  People who got top-tier mentorship in 2020 got different advice (I assume, I haven't kept up since covid started),  but how much better was it, in terms of knowledge, than the articles available?

I could definitely buy that EA has a shortage of mysterious old wizards, though.

Comment by Taran on Promoting Prediction Markets With Meaningless Internet-Point Badges · 2021-02-09T13:37:38.695Z · LW · GW

People are tired of shitty media. There's an enormous groundswell of media distrust from many angles, as far as I can tell. A measure like this is easy to understand, at least in the basics, and provides clear evidence of credibility for those who use it, entirely independent of trust.

If this were true, we would expect to see declining media consumption -- reduced viewership at Fox and CNN, for example.  Instead the opposite is true, both reported record viewership this year.  I take that to mean that the problem with journalism, insofar as there is one, is on the demand side rather than the supply side.

So, in general I think this claim is false.  I would focus on finding a small subgroup for which it's true, and dedicate your efforts to them.

Comment by Taran on What could one do with truly unlimited computational power? · 2020-11-16T12:24:47.722Z · LW · GW

Would BlooP allow for chained arrow notation, or would it be too restrictive for that?

Sadly, you need more than that: chained arrows can express the Ackermann function, which isn't primitive recursive.  But I guess you don't really need them.  Even if you just have Knuth's up-arrows, and no way to abstract over the number of up-arrows, you can just generate a file containing two threes with a few million up-arrows sandwiched between them, and have enough compute for most purposes (and if your program still times out, append the file to itself and retry).
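For concreteness, the recursive definition of up-arrows is short (a sketch in Python; the function name and argument order are my own):

```python
def up_arrow(a, n, b):
    """Evaluate a (up^n) b, Knuth's up-arrow notation, by direct recursion.

    One arrow is exponentiation; n arrows expand into a right-nested
    tower of (n-1)-arrow operations.  Only tiny inputs terminate on
    real hardware, which is the point: the oracle doesn't care.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))
```

Note that taking the arrow count n as a parameter is exactly the abstraction BlooP forbids; with n fixed, each level is primitive recursive and so BlooP-expressible, which is why the "hardcode a few million arrows into the file" trick works.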

Comment by Taran on What could one do with truly unlimited computational power? · 2020-11-13T12:47:32.172Z · LW · GW

Sure, go ahead.

Comment by Taran on What could one do with truly unlimited computational power? · 2020-11-13T11:50:39.088Z · LW · GW

Maybe the input box uses its own notation, something weak enough that infinite loops are impossible but powerful enough to express Conway's arrows?  That seems like it would be enough to be interesting without accidentally adding a halting oracle.

Comment by Taran on What could one do with truly unlimited computational power? · 2020-11-13T11:09:53.991Z · LW · GW

If you're using the oracle to generate moves directly then you don't need an agent, yeah.  But that won't always work: you can generate the complete Starcraft state space and find the optimal reply for each state, but you can't run that program in our universe (it's too big) and you can't use the oracle to generate SC moves in real time (it's too slow).

Comment by Taran on What could one do with truly unlimited computational power? · 2020-11-12T21:53:54.662Z · LW · GW

Pretty interesting.  You're still constrained by your ability to specify solutions, so you can't immediately solve cold fusion or FTL (you'd need to manually write and debug an accurate-enough physics simulator first).  Truly, no computing system can free you from the burden of clarifying your ideas.  But this constraint does leave some scope for miracles, and I want to talk about one technique in particular: program search.

Program Search

Program search is a very powerful, but dangerous and ethically dubious, way to exploit unbounded compute.  Start with a set of test cases, then generate all programs of length less than 100 megabytes (or whatever) and return the shortest, fastest one that passes all the test cases.  Both constraints are important: "shortest" prevents the optimizer from returning a hash table that memorizes all possible inputs, and "fastest" prevents it from relying on the unusual nature of the oracle universe (note that you will need a perfect emulator in order to find out which program is fastest, since wall-clock time measurements in the oracle's universe might be ineffective or misleading).  In a narrow sense, this is the perfect compiler: you tell it what kind of program you want, and it gives you exactly what you asked for.
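A toy version of that loop, over a deliberately tiny expression language rather than a real one (the grammar and names here are invented for illustration), might look like:

```python
import itertools

def search(tests, max_size=6):
    """Brute-force program search over a tiny expression language.

    Programs are expressions built from the variable 'x', the constants
    0-3, and binary + and *, enumerated in order of size so that the
    first hit is also the shortest -- the 'shortest' half of the
    constraint.  (No timing in a toy this small, so 'fastest' is moot.)
    """
    def exprs(size):
        if size == 1:
            yield from ['x', '0', '1', '2', '3']
        else:
            for left_size in range(1, size - 1):
                for op in ['+', '*']:
                    for a in exprs(left_size):
                        for b in exprs(size - 1 - left_size):
                            yield f'({a} {op} {b})'

    for size in itertools.count(1):
        if size > max_size:
            return None
        for e in exprs(size):
            if all(eval(e, {'x': x}) == y for x, y in tests):
                return e
```

For example, `search([(0, 1), (1, 3), (2, 5)])` recovers an expression equivalent to 2x + 1.  Using Python's `eval` as the test harness is safe only because we control the grammar; with a real language it is exactly the attack surface discussed below.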


There are some practical dangers.  In Python or C, for example, the space of all programs includes programs which can corrupt or mislead your test harness.  The ideal language for this task has no runtime flexibility or ambiguity whatsoever; Haskell might work.  But that still leaves you at the mercy of God's Haskell implementation: we can assume that He introduced no new bugs, but He might have faithfully replicated an existing bug in the reference Haskell compiler, which your enumeration will surely find.  This is unlikely to cause serious problems (at least at first), but it means you have to cross-check the output of whatever program the oracle finds for you.

More insidiously, some of the programs that we run during the search might instantiate conscious minds, or otherwise be morally relevant.  If that seems unlikely, ask yourself: are you totally sure it's impossible to simulate a suffering human brain in 100 megs of Haskell?  This risk can be limited somewhat, for example by running the programs in order from smallest to largest, but it is hard to rule out entirely.


If you're willing to put up with all that, the benefits are enormous.  All ML applications can be optimized this way: just find the program that scores above some threshold on your metric, given your other constraints (if you have a lot of data you might be able to use the best-scoring program, but in small-data regimes the smallest, fastest program might still just be a hash table.  Maybe score your programs by how much simpler than the training data they are?).

With a little more work, it should be possible to -- almost -- solve all of mathematics: to create an oracle which, given a formal system, can tell you whether any given statement can be proved within that system and, if so, whether it can be proved true or false (or both)...that is, for proofs up to some ridiculous but finite length.  I think you will have to invent your own proof language for this; the existing ones are all designed around complexity limitations that don't apply to you.  Make sure your language isn't Turing complete, to limit the risk of moral catastrophe.  Once you have that, you can just generate all possible proofs and then check whether the one you want is present or not.
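In the special case of classical propositional logic this is easy to make concrete, because provability coincides with truth-table validity, so "enumerate everything and check" really is a complete decision procedure (a sketch; the helper name is mine):

```python
from itertools import product

def tautology(formula, n_vars):
    """Decide provability by brute force, for classical propositional logic.

    In this (complete) system a statement is provable exactly when it is
    true under every assignment of its variables, so enumerating all 2^n
    assignments settles the question.  `formula` is any Python callable
    taking n_vars booleans.
    """
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))
```

So `tautology(lambda p: p or not p, 1)` is True, while `tautology(lambda p, q: p or q, 2)` is False.  For richer systems there is no such shortcut, which is why you need the proof-enumeration approach above.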


Up until now we've been limited by our ability to specify the solution we want.  We can write test cases and generate a program which fulfills them, but it won't do anything we didn't explicitly ask for.  We can find the ideal classifier for a set of images, but we first have to find those images out in the real world somewhere, and the power of our classifier is bounded by the number of images we can find.

If we can specify precise rules for a simulation, and a goal within that simulation, most of that constraint disappears.  For example, to find the strongest Go-playing program, we can instantiate all possible Go-playing programs and have them compete until there's an unambiguous winner; we don't need any game records from human players.  The same trick works for everything simulatable: Starcraft, Magic: the Gathering, piloting fighter jets, you name it.  If you don't want to use the oracle to directly generate a strong AI, you can instead develop accurate-enough simulations of the real world, and then use the oracle to develop effective agents within those simulations.


Ultimately the idea would be to develop a computer model of the laws of physics that's as correct and complete as our computer model of the rules of Go, so that you can finally develop nanofactories, anti-aging drugs, and things like that.  I don't see how to do it, but it's the only prize worth playing for.  At this point it becomes very important to be able to prove the Friendliness of every candidate program; use the math oracle you built earlier to develop a framework for that before moving forward.

Comment by Taran on Trick-or-treating in Covid Times · 2020-11-02T13:46:07.663Z · LW · GW

But I don't understand how any town that allows indoor dining can categorize trick-or-treating as impermissibly high risk?

I think it's about risk versus reward, rather than risk per se.  If you allow indoor dining, the restaurant owners make money and won't fail or need bailouts (as often).  Trick or treating doesn't offer as much benefit economically.

(Not endorsing this reasoning, just trying to empathize).

Comment by Taran on Covid Covid Covid Covid Covid 10/29: All We Ever Talk About · 2020-10-30T15:50:16.677Z · LW · GW

Yes, within its limits:

  1. They don't do very much investigative journalism, mostly they just report on things that happen publicly.  
  2. Their articles tend to be pretty short, without a lot of storytelling or background detail.

If you want to efficiently survey what German people are hearing about it seems like a good choice.

If you want something more like a normal American newspaper, consider Der Spiegel:  I rarely visit them as their website does not run well for me, but they still have an independent fact-checking department.

Comment by Taran on Covid Covid Covid Covid Covid 10/29: All We Ever Talk About · 2020-10-30T14:08:22.250Z · LW · GW

What I didn’t get from my readers were good (English language) news sources that could give me a feel for goings on across the pond that go beyond raw data.

I want to push back on the "English language" requirement a little bit.  Even five years ago Google Translate was good enough to translate German news articles well.  The in-page translation, if you're willing to use Chrome, still seems fine; I tried it out, and while the result clearly wasn't written by a native English speaker it's perfectly readable.

Comment by Taran on Covid 10/22: Europe in Crisis · 2020-10-23T07:54:06.442Z · LW · GW

For European covid data aggregation/visualization I don't know of anything better than Our World In Data's interactive graphs.  For Germany specifically, the Robert Koch Institute publishes quite a detailed report every day (with an English translation, even).

It's hard to talk about Europe generally because the tactics and outcomes are so different among the member states.  For example Germany committed to mandatory mask use indoors pretty early (IIRC in April), whereas in August the Netherlands still had no mask mandate, but each store did have an employee approach you at the entrance, unmasked, to remind you to disinfect your hands.  There's also a lot of variation in test volume/strategy.

Comment by Taran on Why isn't JS a popular language for deep learning? · 2020-10-19T12:50:37.738Z · LW · GW

Others have said most of what I would have, but I'll add one more point: TypeScript doesn't (AFAICT) support operator overloading, and in ML you do want that.  ML code is mostly written by scientists, based on papers they've read or written, and they want the code to resemble the math notation in the papers as closely as possible, to make it easy to check for correctness by eye.  For example, here's a line from a function to compute the split rhat statistic for tensors n and m, given other tensors B and W:

rhat = np.sqrt((n - 1) / n + (m + 1) / m * B / W)

In TypeScript, I guess you would have to rewrite this to something like 

rhat = sqrt(n.minus(1).divide(n).plus(m.plus(1).divide(m).multiply(B).divide(W)))

...which, like, clearly the scientists could do that rewrite, but they won't unless you offer them something really compelling in exchange.  TypeScript-tier type safety won't do it; I doubt if even static tensor shape checking would be good enough.

Comment by Taran on Is Stupidity Expanding? Some Hypotheses. · 2020-10-16T17:51:30.214Z · LW · GW

Drive wouldn't let me access your data, but that makes sense; a much larger share of the population is going to college now than in the 70s.

Comment by Taran on Efficacy of Vitamin D in helping with COVID · 2020-09-09T17:45:56.470Z · LW · GW

Just in case you, like me, wondered whether this was just a high base rate of vitamin D deficiency: no, vitamin D deficiency is common but not that common.

Comment by Taran on Sex, Lies, and Canaanites · 2020-04-23T19:17:48.629Z · LW · GW
In Jaynes’ thesis, Achilles and Agamemnon are “obedient to their gods” and “did not have any ego whatever”.

I don't think this is really right. Athena doesn't give Achilles a command that he obeys, she offers him a bribe which he accepts. Here's Butler's translation:

And Minerva said, “I come from heaven, if you will hear me, to bid you stay your anger. Juno has sent me, who cares for both of you alike. Cease, then, this brawling, and do not draw your sword; rail at him if you will, and your railing will not be vain, for I tell you—and it shall surely be—that you shall hereafter receive gifts three times as splendid by reason of this present insult. Hold, therefore, and obey.”
“Goddess,” answered Achilles, “however angry a man may be, he must do as you two command him. This will be best, for the gods ever hear the prayers of him who has obeyed them.”

Achilles certainly knows he can disobey the gods (later he'll get into an outright battle with Xanthus, the god of Troy's river). But he can be negotiated with, and Athena successfully persuades him.

It also isn't true that the Iliad is empty of deceit. For example, Athena later tricks Hector into facing Achilles alone by taking the shape of Hector's brother. Her idea is to get Hector killed, and it works perfectly.

Comment by Taran on Is this viable physics? · 2020-04-15T14:53:36.660Z · LW · GW

The technical reports do seem to contain at least one strong, surprising prediction:

This [multiway formulation of QM] leads to an immediate, and potentially testable, prediction of our interpretation of quantum mechanics: namely that, following appropriate coarse-graining...the class of problems that can be solved efficiently by quantum computers should be identical to the class of problems that can be solved efficiently by classical computers. More precisely, we predict in this appropriately coarse-grained case that P = BQP...

Of course Wolfram and Gorard are not the only people to say this, but it's definitely a minority view these days and would be very striking if it were somehow proved.

Comment by Taran on April Coronavirus Open Thread · 2020-04-08T15:33:14.006Z · LW · GW

Not a doctor, but it doesn't seem fishy to me: most people do not die, most of the time. If you sample a random person it's highly likely that they'll survive the next two weeks. This is true even among high-risk groups (the elderly, obese, etc.). If you hear that someone died, and that they had a CV19 diagnosis, you should not put much weight on the hypothesis that they died of something unrelated, just because of this low base rate.
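A quick sketch of the arithmetic, with both rates invented purely for illustration (they are not real epidemiological estimates):

```python
# Hypothetical two-week mortality rates for someone with a CV19 diagnosis:
p_die_other_causes = 0.001  # all-cause baseline for this (high-risk) group
p_die_of_covid = 0.02       # covid mortality given the diagnosis

# Among diagnosed people who died, the share whose death was unrelated
# (treating the two causes as exclusive, for simplicity):
p_unrelated = p_die_other_causes / (p_die_other_causes + p_die_of_covid)
```

Even with a generous baseline, the unrelated-death hypothesis ends up with only a few percent of the posterior, because dying of anything in a given two-week window is so rare.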

It makes more sense to worry about the opposite thing: people dying without formal CV19 diagnoses being excluded from the official statistics. For example, right now in New York about 200 people are dying at home each day, up from a baseline of 20 to 25, according to the city's department of health. These are not presently counted as CV19 deaths, but probably a lot of them are.

Comment by Taran on A quick and crude comparison of epidemiological expert forecasts versus Metaculus forecasts for COVID-19 · 2020-04-02T08:06:32.941Z · LW · GW
A calibrated-over-time 80% confidence interval on such dice could be placed anywhere (e.g. [1,80] or [21,100]), so long as they are 80 units wide.

Yes, but the calibrated and centered interval is uniquely [10, 90].

Comment by Taran on 3 Interview-like Algorithm Questions for Programmers · 2020-03-26T17:16:34.888Z · LW · GW

I share the general sentiment that these are tricks and unsuitable for interviews, but lsusr's answer is correct and does not require additional constraints.

Comment by Taran on bgold's Shortform · 2020-03-03T16:59:01.200Z · LW · GW

I think you're asking too much of evolutionary theory here. Human bodies do lots of things that aren't longterm adaptive -- for example, if you stab them hard enough, all the blood falls out and they die. One could interpret the subsequent shock, anemia, etc. as having some fitness-enhancing purpose, but really the whole thing is a hard-to-fix bug in body design: if there were mutant humans whose blood more reliably stayed inside them, their mutation would quickly reach fixation in the early ancestral environment.

We understand blood and wound healing well enough to know that no such mutation can exist: there aren't any small, incrementally-beneficial changes which can produce that result. In the same way, it shouldn't be confusing that depression is maladaptive; you should only be confused if it's both maladaptive and easy to improve on. Intuitively it feels like it should be -- just pick different policies -- but that intuition isn't rooted in fine-grained understanding of the brain and you shouldn't let it affect your beliefs.

Comment by Taran on Source of Karma · 2020-02-11T15:56:05.074Z · LW · GW

It gives a reasonably rigorous way of predicting how many upvotes and downvotes a post will get, given the history of the user who wrote it. Specifically, it defines a probabilistic model: for each user, we can specify a Beta distribution with various unknown parameters, and then learn those parameters from the user's post history. The details of that learning are rather charming if you're a statistician, or aspire to be one, but don't translate very well.

mr-hire would like to know what his particular Beta distribution looks like. To find out, we have to adapt Moulton's method to the LW karma system. This turns out to be a little difficult, and requires some additional modeling choices:

Moulton models votes on individual posts with a Binomial distribution, which is used for sequences of binary outcomes. In this case each voter either upvotes the post (with probability p) or downvotes it (with probability 1-p) -- we ignore non-voters since it's hard to know how many of them there are. But a LessWrong voter has four choices: they can vote Up or Down, and they can vote Normal or Strong, so the Binomial distribution is no longer appropriate.

This is fixable with a different choice of distributions, but then you run into another problem. In LW, even normal votes vary in value: an upvote from a high-karma user is worth more than one from a low-karma user. Do we wish to model this effect, and if so how?

If you were willing to treat all user votes equally I think you could get away with using the Dirichlet-multinomial. If not, I think you have to give up on modeling individual votes and try to model karma directly, without breaking it down into its component upvotes and downvotes.
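For reference, the all-votes-equal simplification of Moulton's model reduces to a standard Beta-Binomial update (the function name and the uniform prior below are my own choices):

```python
def posterior_upvote_rate(prior_a, prior_b, upvotes, downvotes):
    """Beta-Binomial update for a user's upvote probability p.

    Start from a Beta(prior_a, prior_b) prior over p; each observed vote
    is a Bernoulli draw (upvote with probability p, downvote otherwise).
    The posterior is again Beta, and its mean is the prediction for the
    next vote.  This ignores strong votes and karma-weighted votes,
    which is exactly the simplification discussed above.
    """
    post_a = prior_a + upvotes
    post_b = prior_b + downvotes
    return post_a / (post_a + post_b)
```

With a uniform Beta(1, 1) prior, a user with 8 upvotes and 2 downvotes gets a predicted upvote rate of 9/12 = 0.75.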

Comment by Taran on Category Theory Without The Baggage · 2020-02-04T16:30:15.187Z · LW · GW
Is this correct? I'd have thought "colo*r" matches to both "color" and "colour", but "colou*r" only to "colour".

It's correct. Some pattern matchers work the way you describe, but in a regular expression "u*" matches "zero or more u characters". So "colou*r" matches "color", "colour", "colouuuuuuuuur", etc.

(In this case one would typically use "colou?r"; "u?" matches "exactly zero or one u characters", that is, "color", "colour", and nothing else.)
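In Python's re module, for instance, the difference is easy to check:

```python
import re

# "u*" means zero or more u's, so "colou*r" matches both standard
# spellings and arbitrarily many u's; "u?" means zero or one, so
# "colou?r" matches exactly the two standard spellings.
assert re.fullmatch(r"colou*r", "color")
assert re.fullmatch(r"colou*r", "colour")
assert re.fullmatch(r"colou*r", "colouuuur")
assert re.fullmatch(r"colou?r", "color")
assert re.fullmatch(r"colou?r", "colour")
assert re.fullmatch(r"colou?r", "colouur") is None
```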

Comment by Taran on Decoupling vs Contextualising Norms · 2019-12-22T10:02:17.318Z · LW · GW
I'd love to see the post cleaned up to make it clear that you're talking about "contextualizing as understanding how your words will have an effect in the context that you're in" and decoupling as "decoupling what you say from the effects it may create."

I don't think there's a general consensus that this post does, or should, mean that. For example, Raemon's review suggests "jumbled" as an antonym to "decoupled", and gives a description that's more general than yours. For another example, you described your review of Affordance Widths as a decoupled alternative to the contextualizing reviews that others had already written, but the highest-voted contextualizing review is explicitly about the truth value of ialdabaoth's post -- it incorporates information about the author, but only to make the claim that the post contains an epistemic trap, one which we could in principle have noticed right away but which in practice wasn't obvious without the additional context of ialdabaoth's bad behavior. This is clearly contextualizing in some sense, but doesn't match the definition you've given here.

I think this post is fundamentally unfinished. It drew a distinction that felt immediate and important to many commenters here, but a year and a half later we still don't have a clear understanding of what that distinction is. I think that vagueness is part of what has made this post popular: everyone is free to fill in the underspecified parts with whatever interpretation makes the most sense to them.

Comment by Taran on In My Culture · 2019-03-12T08:18:46.281Z · LW · GW

"Important" was not the right word, I agree; I took a slightly better stab at it in the last paragraph of my reply to ZeitPolizei upthread. Vocabulary aside, would you agree that there's a class of cultural values that this framing doesn't help you talk about?

Comment by Taran on In My Culture · 2019-03-12T08:10:29.830Z · LW · GW

Does this also apply to your own personal culture (whether aspiring or as-is), or "just" the broader context culture?

We're talking about a tool for communicating with many different people with many different cultures, and with people whose cultures you don't necessarily know very much about. So the bit you quoted isn't just making claims about my culture, or even one of the (many) broader context cultures, it's making claims about the correct prior over all such cultures.

But what claims exactly? I intended these two:

  1. When you say, "In my culture X", you're also saying "In your culture plausibly not X".
  2. For some values of X, this will start fights (or hurt feelings, or sow mistrust, or have other effects you likely don't want).

It seems like you came to agree with point #1, so I won't belabor it further -- let me know if I misread you and we can circle back.  For point #2, I definitely agree that, the more charity the listener extends to you, the smaller the set of hurtful Xs is.  But if you rely on that, you're limiting the scope of this method to people who'll apply that charity and who you know will apply that charity.  I picked "punishing the innocent" for my example value of X because I expect it to be broadly cross-cultural: if you go find 100 random people and ask them whether they punish the innocent, I expect that most of them will take offense.  If you also expect that, you should build that expectation into your communication strategy, regardless of what your own culture would have you do in those kinds of situations.

Now, the better you know the person you're talking to, the less important these warnings are.  Then again, the better you know the person you're talking to, the less you need the safety of the diplomatic/sociological frame, you can just discuss your values directly.  That's why I feel comfortable using all that highly absolutist "always/never" language above; it's the same impulse that says "it's always better to bet that a die will roll odd than that it'll roll a 1, all else held equal".

Thinking about it more, I suspect the real rule is that this method shouldn't be used to talk about cultural values at all, just cultural practices -- things that have little or no moral valence. That phrasing doesn't quite capture the distinction I want -- the Thai businessman who won't shake hands with you doesn't think his choice is arbitrary, after all -- but it's close. Another rule might be "don't use these statements to pass moral judgement", but that's hard to apply; as you saw it can be difficult to notice that you're doing it until after the fact.

Comment by Taran on In My Culture · 2019-03-10T11:30:03.858Z · LW · GW

I use this technique sometimes (my lead-in phrase is the deliberately silly "Among my people..."), but it has a couple of flaws that force me to be careful with it.

Most importantly, this framing is always about drawing contrasts: you're describing ways that your culture _differs_ from that of the person you're talking to. Keep this point in the forefront of your mind every time you use this method: you are describing _their_ culture, not just yours. When you say, "In my culture, we put peanut butter on bread", then you are also saying "in your culture, you do not put peanut butter on bread". At the very most you are asking a question: "does your culture also put peanut butter on bread?" So, do not ever say something like "In my culture we do not punish the innocent" unless you also intend to say "Your culture punishes the innocent" -- that is, unless you intend to start a fight.

Relatedly, you have to explicitly do the work of separating real cultural practices from aspirational ones -- this framing will not help you. When you write "In my culture we do not punish the innocent", probably you are thinking something like "In my culture, we think it's important not to punish the innocent", since mistakes do still happen from time to time. But statements like "In my culture we put peanut butter on bread" do not require this kind of aggressive interpretation, they can just be taken literally, so your listeners might reasonably take "In my culture we do not punish the innocent" as a (false) statement of literal fact. Clear and open communication is unlikely to follow.

(If you feel like you grasp these points and agree with them, here's an exercise: can the section of the OP that starts "In my culture, we distinguish between what a situation looks like and what it actually is." be productively rewritten, and if so how?)

Overall, although I do like this technique and use it from time to time, I don't think it's well-suited to important topics. For similar reasons it's easy to use in bad faith. That's why I present it in such a silly and sociological (instead of formally diplomatic) way.