Posts

Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" 2020-01-03T00:33:09.994Z · score: 81 (21 votes)
'Longtermism' definitional discussion on EA Forum 2019-08-02T23:53:03.731Z · score: 17 (6 votes)
Henry Kissinger: AI Could Mean the End of Human History 2018-05-15T20:11:11.136Z · score: 46 (10 votes)
AskReddit: Hard Pills to Swallow 2018-05-14T11:20:37.470Z · score: 17 (6 votes)
Predicting Future Morality 2018-05-06T07:17:16.548Z · score: 24 (9 votes)
AI Safety via Debate 2018-05-05T02:11:25.655Z · score: 40 (9 votes)
FLI awards prize to Arkhipov’s relatives 2017-10-28T19:40:43.928Z · score: 12 (5 votes)
Functional Decision Theory: A New Theory of Instrumental Rationality 2017-10-20T08:09:25.645Z · score: 36 (13 votes)
A Software Agent Illustrating Some Features of an Illusionist Account of Consciousness 2017-10-17T07:42:28.822Z · score: 16 (3 votes)
Neuralink and the Brain’s Magical Future 2017-04-23T07:27:30.817Z · score: 6 (7 votes)
Request for help with economic analysis related to AI forecasting 2016-02-06T01:27:39.810Z · score: 6 (7 votes)
[Link] AlphaGo: Mastering the ancient game of Go with Machine Learning 2016-01-27T21:04:55.183Z · score: 14 (15 votes)
[LINK] Deep Learning Machine Teaches Itself Chess in 72 Hours 2015-09-14T19:38:11.447Z · score: 8 (9 votes)
[Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim 2015-08-19T06:37:21.049Z · score: 7 (8 votes)
[Link] Neural networks trained on expert Go games have just made a major leap 2015-01-02T15:48:16.283Z · score: 15 (16 votes)
[LINK] Attention Schema Theory of Consciousness 2013-08-25T22:30:01.903Z · score: 3 (4 votes)
[LINK] Well-written article on the Future of Humanity Institute and Existential Risk 2013-03-02T12:36:39.402Z · score: 16 (19 votes)
The Center for Sustainable Nanotechnology 2013-02-26T06:55:18.542Z · score: 4 (11 votes)

Comments

Comment by esrogs on Two Alternatives to Logical Counterfactuals · 2020-04-07T00:43:42.547Z · score: 8 (4 votes) · LW · GW
"free will" is such a loaded word

As a side note -- one thing I don't understand is why more people don't seem to want to use just the word "will" without the "free" part in front of it.

It seems like a much more straightforward and less fraught term, and something that we obviously have. Do we have a "will"? Obviously yes -- we want things, we choose things, etc. Is that will "free"? Well what does that mean?

EDIT: I feel like this is a case of philosophers baking a confusion into their standard term. It'd be like if instead of space we always talked about "absolute space". And then post-Einstein people argued about whether "absolute space" existed or not, without ever using the term "space" by itself.

Comment by esrogs on Why don't singularitarians bet on the creation of AGI by buying stocks? · 2020-04-03T23:34:21.737Z · score: 2 (1 votes) · LW · GW
To investigate this question, let's examine Alphabet Inc's share price around the time AlphaGo defeated world Go champion Lee Sedol in March 2016.

They had already announced beating Fan Hui a few months earlier. So it was already known months ahead of time that they were at least near world class. Sure, it was an open question whether they would actually beat the very best, but it shouldn't have been shocking either way; people were betting on both sides. (I bet in favor.)

Furthermore, despite all the claims about Go falling to AIs a decade faster than anticipated, AlphaGo wasn't actually that big a jump above the trend-line for Go AI Elo. (See: https://www.milesbrundage.com/blog-posts/alphago-and-ai-progress )

So even if the win was a shock to many individuals, if the market as a whole had been paying attention, it shouldn't have been too much of an update.

All that said, I do think it's quite plausible that the market is undervaluing AI.

Comment by esrogs on Can crimes be discussed literally? · 2020-03-29T06:39:49.689Z · score: 0 (2 votes) · LW · GW

This ontology, essentially: https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace

Comment by esrogs on Can crimes be discussed literally? · 2020-03-26T18:55:52.740Z · score: 1 (3 votes) · LW · GW

Same ones Scott talks about in his essay. E.g. "MLK was a criminal."

And remember, the issue is misleading connotations: the descriptor itself is technically correct.

Comment by esrogs on Can crimes be discussed literally? · 2020-03-26T01:18:40.678Z · score: 1 (3 votes) · LW · GW
What part of the essay do you find to be misleading (explicitly or implicitly*)?

I didn't say the essay was misleading. I said a descriptor could be misleading. That's what the article I linked talks about, and the post discusses a similar phenomenon, where true descriptors are interpreted as attacks (and I claim that some of the examples in the post seem to be similar to Scott's examples of non-central members of categories).

The linked essay is at the address, but doesn't include the address, so actually the post includes more than the linked essay.

I do not understand what this sentence is saying. When you say "address", are you talking about the URL?

Comment by esrogs on Can crimes be discussed literally? · 2020-03-23T21:10:00.140Z · score: 3 (4 votes) · LW · GW

Did you read the linked essay?

Comment by esrogs on Can crimes be discussed literally? · 2020-03-22T23:45:34.670Z · score: 27 (10 votes) · LW · GW
In Simulacra and Subjectivity, the part that reads "while you cannot acquire a physician’s privileges and social role simply by providing clear evidence of your ability to heal others" was, in an early draft, "physicians are actually nothing but a social class with specific privileges, social roles, and barriers to entry." These are expressions of the same thought, but the draft version is a direct, simple theoretical assertion, while the latter merely provides evidence for the assertion. I had to be coy on purpose in order to distract the reader from a potential fight.

I want to quibble with this a little bit (and maybe this is that fight you were trying to avoid), but to me the draft version doesn't seem so direct and simple.

In a sense it's simple, but if I just read that statement in isolation, it's less clear to me as a reader what you mean by it. Maybe largely because I'm not sure what you mean by the "nothing but". If you took out the "nothing but", I would agree that it's a clear and direct (and true!) statement. But with the "nothing but" it seems obviously false on many interpretations, so I'm not quite sure how to make sense of it.

In contrast, the "while you cannot acquire..." version seems much clearer to me about what it's claiming and complaining about.

Comment by esrogs on Can crimes be discussed literally? · 2020-03-22T23:40:11.836Z · score: 10 (8 votes) · LW · GW

This seems related to the noncentral fallacy. A descriptor may be technically accurate but have misleading connotations.

Comment by esrogs on Credibility of the CDC on SARS-CoV-2 · 2020-03-09T19:25:48.571Z · score: 6 (3 votes) · LW · GW

Thanks for the answer, that's a fair point.

I would support a policy where, if an LW post starts to go viral, then original authors or mods are encouraged to add disclaimers to the top of posts that they wouldn't otherwise need to add when writing for the LW audience. As SSC sometimes does.

I would not support a policy where LW authors always preemptively write for a general audience.

Comment by esrogs on Credibility of the CDC on SARS-CoV-2 · 2020-03-09T07:00:26.708Z · score: 10 (2 votes) · LW · GW

Is it your impression that the general public reads LessWrong?

What's the model where an LW blogpost in any way undermines the CDC's credibility with the general public?

Comment by esrogs on Credibility of the CDC on SARS-CoV-2 · 2020-03-07T06:43:16.988Z · score: 26 (12 votes) · LW · GW

Well, the two of you have now been seen in the same place at the same time, putting to bed that theory...

Comment by esrogs on Coronavirus: Justified Practical Advice Thread · 2020-03-06T03:58:11.804Z · score: 13 (10 votes) · LW · GW

This comment could maybe use a tl;dr saying:

If you have (slightly) low SpO2, but no trouble breathing, you probably don't need to go to the hospital. And if you have trouble breathing, you should probably go to the hospital whether or not you have low SpO2. So testing for oxygen saturation doesn't add much.

Is there any info the comment was meant to convey that this summary leaves out?

Comment by esrogs on Coronavirus: Justified Practical Advice Thread · 2020-03-03T07:56:30.472Z · score: 5 (3 votes) · LW · GW
H1N1 decreased by 4 logs in 6 hours in this review

In this context, is a log a factor of 2, or of 10 (or of e)?

Comment by esrogs on Coronavirus: Justified Practical Advice Thread · 2020-03-02T03:32:08.991Z · score: 6 (4 votes) · LW · GW

Wikipedia suggests copper can kill at least influenza A virus and adenovirus. It seems likely that it would be effective against other viruses too, though not clear (to me) if it would work against every virus.

Comment by esrogs on Quarantine Preparations · 2020-02-25T13:15:21.992Z · score: 4 (3 votes) · LW · GW

The advise of this post seems to be an advise on the margin

FYI, 'advise' is the verb, and 'advice' is the noun.

(Also, advice is a mass noun, so you'd say "piece of advice" or just "advice" rather than "an advice".)

Comment by esrogs on Jan Bloch's Impossible War · 2020-02-21T23:27:03.248Z · score: 5 (4 votes) · LW · GW
That is not true

Nitpick -- for replies like this, it's helpful if you say which part of the parent comment you're objecting to.

Obviously the reader can figure it out from the rest of your comment, but (especially since I didn't immediately recognize CSA as referring to the Confederate States of America) I wasn't sure what your first sentence was saying. A quote of the offending sentence from the parent comment would have been helpful.

Comment by esrogs on How do you survive in the humanities? · 2020-02-21T21:58:43.361Z · score: 13 (4 votes) · LW · GW

If it's any consolation, they probably take their own statements less literally than you do, and so it's less important that they're incoherent than you might think. They'll mostly end up acting and deciding by copying others, which works pretty well in general (see: The Secret Of Our Success).

Comment by esrogs on Hello, is it you I'm looking for? · 2020-02-09T21:29:18.275Z · score: 4 (2 votes) · LW · GW
my enthusiasm for the project leads me to be a bit coy about revealing too much detail on the internet

FWIW, it may be worth keeping in mind the Silicon Valley maxim that ideas are cheap, and execution is what matters. In most cases you're far more likely to make progress on the idea if you get it out into the open, especially if execution at all depends on having collaborators or other supporters. (Also helpful to get feedback on the idea.) The probability that someone else successfully executes on an idea that you came up with is low.

Comment by esrogs on Hello, is it you I'm looking for? · 2020-02-08T21:13:15.628Z · score: 2 (1 votes) · LW · GW

Ah! It's much clearer to me now what you're looking for.

Two things that come to mind as vaguely similar:

1) The habit of some rationalist bloggers of flagging claims with "epistemic status". (E.g. here or here)

2) Wikipedia's guidelines for verifiability (and various other guidelines that they have)

Of course, neither is exactly what you're talking about, but perhaps they could serve as inspiration.

Comment by esrogs on Writeup: Progress on AI Safety via Debate · 2020-02-07T23:52:54.359Z · score: 2 (1 votes) · LW · GW

I suppose there is a continuum of how much insight the human has into what the agent is doing. Squeezing all your evaluation into one simple reward function would be on one end of the spectrum (and particularly susceptible to unintended behaviors), and then watching a 2d projection of a 3d action would be further along the spectrum (but not all the way to full insight), and then you can imagine setups with much more insight than that.

Comment by esrogs on Writeup: Progress on AI Safety via Debate · 2020-02-07T23:41:15.828Z · score: 2 (1 votes) · LW · GW
We already see examples of RL leading to undesirable behaviours that superficially ‘look good’ to human evaluators (see this collection of examples).

Nitpick: this description seems potentially misleading to me (at least it gave me the wrong impression at first!). When I read this, it sounds like it's saying that a human looked at what the AI did and thought it was good, before they (or someone else) dug deeper.

But the examples (the ones I spot-checked anyway) all seem to be cases where the systems satisfied some predefined goal in an unexpected way, but if a human had looked at what the agent did (and not just the final score), it would have been obvious that the agent wasn't doing what was expected / wanted. (E.g. "A classic example is OpenAI’s demo of a reinforcement learning agent in a boat racing game going in circles and repeatedly hitting the same reward targets instead of actually playing the game." or "A robotic arm trained using hindsight experience replay to slide a block to a target position on a table achieves the goal by moving the table itself." A human evaluator wouldn't have looked at these behaviors and said, "Looks good!")

I call this out just because the world where ML systems are actually routinely managing to fool humans into thinking they're doing something useful when they're not is much scarier than one where they're just gaming pre-specified reward functions. And I think it's important to distinguish the two.

EDIT: this example does seem to fit the bill though:

One example from an OpenAI paper is an agent learning incorrect behaviours in a 3d simulator, because the behaviours look like the desired behaviour in the 2d clip the human evaluator is seeing.

Comment by esrogs on Money isn't real. When you donate money to a charity, how does it actually help? · 2020-02-05T23:18:14.129Z · score: 2 (1 votes) · LW · GW
My original response was a more abstract explanation of why I think describing money as "not real" is misleading, but maybe a more direct response to the article would be more useful, since I think that part actually isn't core to your question.

Just to clarify, I'm not the OP. It just seemed to me like you and the OP were saying something similar.

Comment by esrogs on Money isn't real. When you donate money to a charity, how does it actually help? · 2020-02-05T08:06:33.091Z · score: 2 (1 votes) · LW · GW
describing it as a way of temporarily motivating strangers seems like a misunderstanding of what money is

Compare to:

The innovation of money itself is that we can get favors repaid by complete strangers without having to figure out the chain of favors every time

What's the difference? It sounds like the two of you are saying the same thing. Except just that you don't like using the term "illusion" to describe it?

Comment by esrogs on Hello, is it you I'm looking for? · 2020-02-05T06:08:35.408Z · score: 5 (3 votes) · LW · GW
I think that means I'm [...] bad at describing/ searching for what I'm looking for.

One thing that might help, in terms of understanding what you're looking for, is -- how do you expect to be able to use this "model of ranking"?

It's not quite clear to me whether you're looking for something like an algorithm -- where somebody could code it up as a computer program and you could feed in sentences and it will spit out scores, or something more like a framework or rubric -- where the work of understanding and evaluating sentences will still be done by people, but they can use the framework/rubric as a guide to decide how to rate the sentences, or something else.

Comment by esrogs on Hello, is it you I'm looking for? · 2020-02-05T05:52:49.097Z · score: 13 (3 votes) · LW · GW
I can't say that I'm familiar with the morass that you speak of. I work in clinical medicine and tend to just have a 10,000 mile view on philosophy. Can you maybe elaborate on what you see the problem as?

You might want to take a look at the A Human's Guide to Words sequence. (Or, for a summary, see just the last post in that sequence: 37 Ways That Words Can Be Wrong.)

Comment by esrogs on Meta-Preference Utilitarianism · 2020-02-05T05:47:17.366Z · score: 2 (1 votes) · LW · GW
I have come to the conclusion that this is just a bandaid on a more fundamental problem. Whether we should choose total, average or even median utility isn’t something we could objectively decide. So I suggest that we go up one level, and maximize what most people want to maximize.

If you haven't seen it, you may find this paper interesting: Geometric reasons for normalising variance to aggregate preferences, by Owen Cotton-Barratt (as an example of another potentially elegant approach to aggregating preferences).

Comment by esrogs on Category Theory Without The Baggage · 2020-02-05T04:05:56.092Z · score: 5 (3 votes) · LW · GW

Relatedly, I'm going back and forth in my head a bit about whether it's better to explain category theory in graph theory terms by identifying the morphisms with edges or with paths.


Morphisms = Edges

  • In this version, a subset of multidigraphs (apparently also called quivers!) can be thought of as categories -- those for which every vertex has an edge to itself, and for which whenever there's a path from A to B, there's also an edge directly from A to B.
  • You also have to say:
    • for each pair of edges from A to B and B to C, which edge from A to C corresponds to their composition
    • for each node, which edge to itself (there may be several) is its default or identity edge
    • in such a way that the associative and unital laws hold

Morphisms = Paths

  • In this version, any multidigraph (quiver) can be thought of as a category.
  • You get the identities for free, because they're just the trivial, do-nothing paths.
  • You get composition for free, because we already know what it means that following a path from A to M and then from M to Z is itself a path.
  • And you get the associative and unital laws for (almost) free:
    • unital: doing nothing at the start or end of a path obviously doesn't change the path
    • associative: it's natural to think of the paths ((e1, e2), e3) and (e1, (e2, e3)) -- where e1, e2, and e3 are edges -- as both being the same path [e1, e2, e3]
  • However, you now have to add on one weird extra rule that two paths that start and end at the same place can be considered the same path, even if they didn't go through the same intermediate nodes.
    • In other words, the intuition that composing-pairs-of-paths-in-different-orders-always-gives-you-the-same-final-path gives you a sufficient, but not necessary condition for two paths being considered equivalent.

I think this final point in the morphisms = paths formulation might be what tripped you up in the case Eigil points out above, where category theory treats two arrows from A to B that are equivalent to each other as actually the same arrow. This seems to be the one place (from what I can see so far) where the paths formulation gives the wrong intuition.
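
To make the bookkeeping in the morphisms = edges version concrete, here's a minimal Python sketch (my own illustration; the `FiniteCategory` class and the edge names are invented for the example, not standard): a category is just a multidigraph plus an identity edge for each vertex and a composition table, with the two laws checkable by brute force.

```python
from itertools import product

class FiniteCategory:
    """A category presented as a multidigraph plus identity and composition data."""

    def __init__(self, objects, edges, identity, compose):
        # objects:  set of vertices
        # edges:    dict of edge name -> (source vertex, target vertex)
        # identity: dict of vertex -> name of its identity edge
        # compose:  dict of (f, g) -> name of the edge "f then g",
        #           defined whenever f's target equals g's source
        self.objects, self.edges = objects, edges
        self.identity, self.compose = identity, compose

    def check_laws(self):
        # Unital law: composing with an identity edge changes nothing.
        for f, (src, tgt) in self.edges.items():
            assert self.compose[(self.identity[src], f)] == f
            assert self.compose[(f, self.identity[tgt])] == f
        # Associative law: (f;g);h == f;(g;h) whenever both sides are defined.
        for f, g, h in product(self.edges, repeat=3):
            if (f, g) in self.compose and (g, h) in self.compose:
                assert (self.compose[(self.compose[(f, g)], h)]
                        == self.compose[(f, self.compose[(g, h)])])

# The category with two objects and a single non-identity arrow f: A -> B.
cat = FiniteCategory(
    objects={"A", "B"},
    edges={"id_A": ("A", "A"), "id_B": ("B", "B"), "f": ("A", "B")},
    identity={"A": "id_A", "B": "id_B"},
    compose={("id_A", "id_A"): "id_A", ("id_B", "id_B"): "id_B",
             ("id_A", "f"): "f", ("f", "id_B"): "f"},
)
cat.check_laws()  # passes: unital and associative laws hold
```

The morphisms = paths version would instead take `edges` alone as input and generate identities and composition from paths, at the cost of the extra path-equivalence rule described above.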

Comment by esrogs on Category Theory Without The Baggage · 2020-02-05T03:21:40.726Z · score: 4 (2 votes) · LW · GW

I also liked the explanation of natural transformations as being about prisms; I found that image helpful.

Comment by esrogs on Category Theory Without The Baggage · 2020-02-05T03:21:17.779Z · score: 6 (3 votes) · LW · GW
My current impression is that broader adoption of category theory is limited in large part by bad definitions, even when more intuitive equivalent definitions are available - "morphisms" vs "paths"

Just wanted to note that I recently learned some of the very basics of category theory and found myself asking of the presentations I came across, "Why are you introducing this as being about dots and arrows and not immediately telling me how this is the same as or different from graph theory?"

I had to go and find this answer on Math.StackExchange to explain the relationship, which was helpful.

So I think you're on the right track to emphasize paths, at least for anyone who knows about graphs.

Comment by esrogs on Looking for books about software engineering as a field · 2020-02-05T02:30:17.117Z · score: 6 (3 votes) · LW · GW
For example, I've had three people try to explain exactly what an API is to me, for more than two hours total, but I just can't internalize it.

Perhaps because they rehearsed their understanding at you rather than being more Socratic?

How would you describe what an API is (given your current level of understanding)?

Comment by esrogs on Looking for books about software engineering as a field · 2020-02-05T02:18:54.770Z · score: 2 (1 votes) · LW · GW

If I still have it, it's in storage in Seattle :P

Comment by esrogs on Looking for books about software engineering as a field · 2020-02-05T01:46:47.951Z · score: 2 (1 votes) · LW · GW

You might find Joel Spolsky's books:

Joel on Software: And on Diverse and Occasionally Related Matters That Will Prove of Interest to Software Developers, Designers, and Managers, and to Those Who, Whether by Good Fortune or Ill Luck, Work with Them in Some Capacity

and

More Joel on Software: Further thoughts on Diverse and Occasionally Related Matters...

to be amusing and helpful. They're selections from his popular blog. I read them when I was getting started as a software engineer and found them helpful.

(The same Joel Spolsky who, after his blog got popular, went on to create StackOverflow.com)

Comment by esrogs on New paper: The Incentives that Shape Behaviour · 2020-01-23T22:36:52.968Z · score: 7 (2 votes) · LW · GW

Stylistic nitpick -- did you mean to include the "fbclid=..." part of the URL?

Comment by esrogs on An OpenAI board seat is surprisingly expensive · 2020-01-21T20:07:03.731Z · score: 4 (2 votes) · LW · GW

For others who didn't get the reference: https://en.wikipedia.org/wiki/Jam_tomorrow

Comment by esrogs on Bay Solstice 2019 Retrospective · 2020-01-19T08:38:54.340Z · score: 6 (3 votes) · LW · GW
Pretty much the entire point of every "solstice" in every culture ever has been to celebrate the end of winter

But the actual day of solstice is the first day of winter...

Comment by esrogs on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T00:11:13.466Z · score: 9 (6 votes) · LW · GW
I should say, these shifts have not been anything like an unmitigated failure, and I don't now believe were worth it just because they caused me to be more socially connected to x-risk things.

Had a little trouble parsing this, especially the second half. Here's my attempted paraphrase:

I take you to be saying that: 1) the shifts that resulted from engaging with x-risk were not all bad, despite leading to the disorienting events listed above, and 2) in particular, you think the shifts were (partially) beneficial for reasons other than just that they led you to be more socially connected to x-risk people.

Is that right?

Comment by esrogs on Circling as Cousin to Rationality · 2020-01-12T18:34:12.091Z · score: 5 (2 votes) · LW · GW

This seems like useful advice for how to engage with Circling, etc., but I'm not sure how it responds to what Said wrote in the parent comment.

Is the idea that it would be okay if Circling asks the wrong questions when dealing with cases of potential betrayal (my quick summary of Said's point), because Circling is just practice, and in real life you would still handle a potential betrayal in the same way?

But if Circling is just practice, isn't it important what it trains you to do? (And that it not train you to do the wrong things?)

(FWIW, I don't share the objection that Said raises in the parent comment, but my response would be more like Raemon's here, and not that Circling is just practice.)

Comment by esrogs on Of arguments and wagers · 2020-01-12T17:15:36.456Z · score: 2 (1 votes) · LW · GW
If the total willingness to risk of people who believe "Judy will believe X on reflection" is lower than the value of Alice's time

Judy's time?

Comment by esrogs on CFAR's 2019 Fundraiser · 2020-01-12T04:33:26.748Z · score: 10 (6 votes) · LW · GW

Just donated!

Comment by esrogs on Book review: Rethinking Consciousness · 2020-01-12T02:34:36.325Z · score: 8 (5 votes) · LW · GW

I guess my request of philosophers (and the rest of us) is this: when you are using an everyday term like "free will" or "consciousness", please don't define it to mean one very specific thing that bakes in a bunch of philosophical assumptions. Because then anyone who questions some of those assumptions ends up arguing about whether the thing exists, rather than just saying it's a little different than we thought before.

It'd be like if we couldn't talk about "space" or "time" anymore after Einstein. Or if half of us started calling ourselves "illusionists" w.r.t. space or time. They're not illusions! They exist! They're just a little different than we thought before.

(See also this comment, and remember that all abstractions are leaky!)

Comment by esrogs on Book review: Rethinking Consciousness · 2020-01-12T02:28:45.994Z · score: 8 (4 votes) · LW · GW

Semi-relatedly, I'm getting frustrated with the term "illusionist". People seem to use it in different ways. Within the last few weeks I listened to the 80k podcast with David Chalmers and the Rationally Speaking podcast with "illusionist" Keith Frankish.

Chalmers seemed to use the term to mean that consciousness is an illusion, i.e. that we don't really have consciousness. Which seems very dubious.

Frankish seemed to use the term to mean that many of the properties that other philosophers think our consciousness has are illusory, but that of course we are conscious.

From listening to the latter interview, it's not clear to me that Frankish (who, according to Wikipedia, is "known for his 'illusionist' stance in the theory of consciousness") believes anything different from the view described in this post (which I assume you're classing as "representationalism").

Maybe I'm just confused. But it seems like leading philosophers of today still haven't absorbed the lesson of Wittgenstein and are still talking past each other with confusing words.

Comment by esrogs on [AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment · 2020-01-12T02:08:47.522Z · score: 2 (1 votes) · LW · GW

Hmm, maybe I'm missing something basic and should just go re-read the original posts, but I'm confused by this statement:

So what we do here is say "belief set A is strictly 'better' if this particular observer always trusts belief set A over belief set B", and "trust" is defined as "whatever we think belief set A believes is also what we believe".

In this, belief set A and belief set B are analogous to A[C] and C (or some c in C), right? If so, then what's the analogue of "trust... over"?

If we replace our beliefs with A[C]'s, then how is that us trusting it "over" c or C? It seems like it's us trusting it, full stop (without reference to any other thing that we are trusting it more than). No?

Comment by esrogs on Voting Phase of 2018 LW Review · 2020-01-11T20:39:42.261Z · score: 17 (8 votes) · LW · GW

I voted!

Comment by esrogs on Book review: Rethinking Consciousness · 2020-01-11T06:23:31.381Z · score: 4 (2 votes) · LW · GW
Now put the two together, and you get an "attention schema", an internal model of the activity of the GNW, which he calls attention.

To clarify: does he call "the activity of the GNW" attention, or "an internal model of the activity of the GNW"?

My best guess interpretation of what you're saying is that it's the former, and when you add "an internal model of" on the front, that makes it a schema. Am I reading that right?

Comment by esrogs on [AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment · 2020-01-11T06:08:16.606Z · score: 2 (1 votes) · LW · GW
Notably, we need to trust A[C] even over our own beliefs, that is, if A[C] believes something, we discard our position and adopt A[C]'s belief.

To clarify, this is only if we (or the process that generated our beliefs) fall into class C, right?

Comment by esrogs on Are "superforecasters" a real phenomenon? · 2020-01-09T21:04:57.820Z · score: 14 (4 votes) · LW · GW
I definitely imagine looking at a graph of everyone's performance on the predictions and noticing a cluster who are discontinuously much better than everyone else. I would be surprised if the authors of the piece didn't imagine this as well.

Some evidence against this is that they described it as being a "power law" distribution, which is continuous and doesn't have these kinds of clusters. (It just goes way way up as you move to the right.)

If you had a power law distribution, it would still be accurate to say that "a few are better than most", even though there isn't a discontinuous break anywhere.

EDIT: It seems to me that most things like this follow approximately continuous distributions. And so whenever you hear someone talking about something like this you should assume it's continuous unless it's super clear that it's not (and that should be a surprising fact in need of explanation!). But note that people will often talk about it in misleading ways, because for the sake of discussion it's often simpler to talk about it as if there are these discrete groups. So just because people are talking about it as if there are discrete groups does not mean they actually think there are discrete groups. I think that's what happened here.
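
Tangentially, here's a tiny numpy sketch (my own illustration, not from Tetlock's data or the Vox piece) of the point: in a power-law sample the top scores are enormous multiples of the median, yet the quantiles rise smoothly with no gap anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
# Classic Pareto sample (minimum 1, tail exponent 2) as a stand-in for "skill".
scores = 1 + rng.pareto(a=2.0, size=100_000)

# The top few are far above the median...
for q in (0.50, 0.90, 0.98, 0.999):
    print(f"{q:.1%} quantile: {np.quantile(scores, q):5.1f}x the minimum")
# ...but the quantiles rise smoothly -- there is no gap where a discrete
# "superforecaster" cluster would sit.
```

With tail exponent 2, the 99.9th percentile lands around 30x the minimum while the median sits near 1.4x, but nothing in the sorted sample marks a boundary where a "superforecaster" cluster begins.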

Comment by esrogs on Are "superforecasters" a real phenomenon? · 2020-01-09T20:55:39.613Z · score: 14 (4 votes) · LW · GW
If you graphed "jump height" of the population and 2% of the population is Superman, there would be a clear discontinuity at the higher end.

But note that the section you quote from Vox doesn't say that there's any discontinuity:

Tetlock and his collaborators have run studies involving tens of thousands of participants and have discovered that prediction follows a power law distribution.

A power law distribution is not a discontinuity! Some people are way way better than others. Other people are merely way better than others. And still others are only better than others.

"Philip Tetlock discovered that 2% of people are superforecasters." When I read this sentence, it reads to me like "2% of people are superheroes"

I think the sentence is misleading (as per Scott Alexander). A better sentence should give the impression that, by way of analogy, some basketball players are NBA players. They may seem superhuman in their basketball ability compared to the Average Joe. And there are a combination of innate traits as well as honed skills that got them there. These would be interesting to study if you wanted to know how to play basketball well. Or if you were putting together a team to play against the Monstars.

But there's no discontinuity. Going down the curve from NBA players, you get to professional players in other leagues, and then to division 1 college players, and then division 2, etc. Somewhere after bench warmer on their high school basketball team, you get to Average Joe.

So SSC and Vox are both right. Some people are way way better than others (with a power law-like distribution), but there's no discontinuity.

Comment by esrogs on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-06T19:25:46.561Z · score: 5 (2 votes) · LW · GW

Looks like he's getting some pushback:

Boris Johnson’s chief adviser, Dominic Cummings, will not be allowed to bypass Whitehall’s usual recruitment processes when recruiting “weirdos” and “misfits” for Downing Street jobs, No 10 has said.
Cummings has been criticised by employment lawyers and unions after posting a rambling 2,900-word blogpost calling for people with “odd skills” to circumvent the usual rules in applying for jobs as special advisers and officials in government.
[...]
The prime minister’s spokesman insisted the post was aimed only at seeking “expressions of interest” and that civil servants would still be appointed within the usual tight procedures of the civil service.

Comment by esrogs on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-05T03:25:37.606Z · score: 9 (5 votes) · LW · GW
also confused why their list of achievements contains (or consists entirely of) Brexit

He also had a hand in keeping Britain on the pound instead of the euro, back in 1999-2002.

To me it seems like the original strategy behind Brexit referendum was simply "let's make a referendum that will lose, but it will give us power to convert any future complaints into political points by saying 'we told you'".

My understanding is that this was David Cameron's strategy. But others, like Daniel Hannan and Dominic Cummings, actually wanted the UK out of the EU.

In Cummings' case, his (stated) reason was that he thought the UK government was in need of major reform, and that the best odds of reforming it required first withdrawing from the EU. (See the section in this blog post labeled "Why do it?")

Comment by esrogs on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T23:37:00.493Z · score: 6 (3 votes) · LW · GW

Driving away other writers with annoyingness also constrains the flow of information. Trade-offs abound!