Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" 2020-01-03T00:33:09.994Z · score: 79 (20 votes)
'Longtermism' definitional discussion on EA Forum 2019-08-02T23:53:03.731Z · score: 17 (6 votes)
Henry Kissinger: AI Could Mean the End of Human History 2018-05-15T20:11:11.136Z · score: 46 (10 votes)
AskReddit: Hard Pills to Swallow 2018-05-14T11:20:37.470Z · score: 17 (6 votes)
Predicting Future Morality 2018-05-06T07:17:16.548Z · score: 24 (9 votes)
AI Safety via Debate 2018-05-05T02:11:25.655Z · score: 40 (9 votes)
FLI awards prize to Arkhipov’s relatives 2017-10-28T19:40:43.928Z · score: 12 (5 votes)
Functional Decision Theory: A New Theory of Instrumental Rationality 2017-10-20T08:09:25.645Z · score: 36 (13 votes)
A Software Agent Illustrating Some Features of an Illusionist Account of Consciousness 2017-10-17T07:42:28.822Z · score: 16 (3 votes)
Neuralink and the Brain’s Magical Future 2017-04-23T07:27:30.817Z · score: 6 (7 votes)
Request for help with economic analysis related to AI forecasting 2016-02-06T01:27:39.810Z · score: 6 (7 votes)
[Link] AlphaGo: Mastering the ancient game of Go with Machine Learning 2016-01-27T21:04:55.183Z · score: 14 (15 votes)
[LINK] Deep Learning Machine Teaches Itself Chess in 72 Hours 2015-09-14T19:38:11.447Z · score: 8 (9 votes)
[Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim 2015-08-19T06:37:21.049Z · score: 7 (8 votes)
[Link] Neural networks trained on expert Go games have just made a major leap 2015-01-02T15:48:16.283Z · score: 15 (16 votes)
[LINK] Attention Schema Theory of Consciousness 2013-08-25T22:30:01.903Z · score: 3 (4 votes)
[LINK] Well-written article on the Future of Humanity Institute and Existential Risk 2013-03-02T12:36:39.402Z · score: 16 (19 votes)
The Center for Sustainable Nanotechnology 2013-02-26T06:55:18.542Z · score: 4 (11 votes)


Comment by esrogs on Hello, is it you I'm looking for? · 2020-02-09T21:29:18.275Z · score: 4 (2 votes) · LW · GW
my enthusiasm for the project leads me to be a bit coy about revealing too much detail on the internet

FWIW, it may be worth keeping in mind the Silicon Valley maxim that ideas are cheap, and execution is what matters. In most cases you're far more likely to make progress on the idea if you get it out into the open, especially if execution at all depends on having collaborators or other supporters. (Also helpful to get feedback on the idea.) The probability that someone else successfully executes on an idea that you came up with is low.

Comment by esrogs on Hello, is it you I'm looking for? · 2020-02-08T21:13:15.628Z · score: 2 (1 votes) · LW · GW

Ah! It's much clearer to me now what you're looking for.

Two things that come to mind as vaguely similar:

1) The habit of some rationalist bloggers of flagging claims with "epistemic status". (E.g. here or here)

2) Wikipedia's guidelines for verifiability (and various other guidelines that they have)

Of course, neither is exactly what you're talking about, but perhaps they could serve as inspiration.

Comment by esrogs on Writeup: Progress on AI Safety via Debate · 2020-02-07T23:52:54.359Z · score: 2 (1 votes) · LW · GW

I suppose there is a continuum of how much insight the human has into what the agent is doing. Squeezing all your evaluation into one simple reward function would be on one end of the spectrum (and particularly susceptible to unintended behaviors), and then watching a 2d projection of a 3d action would be further along the spectrum (but not all the way to full insight), and then you can imagine setups with much more insight than that.

Comment by esrogs on Writeup: Progress on AI Safety via Debate · 2020-02-07T23:41:15.828Z · score: 2 (1 votes) · LW · GW
We already see examples of RL leading to undesirable behaviours that superficially ‘look good’ to human evaluators (see this collection of examples).

Nitpick: this description seems potentially misleading to me (at least it gave me the wrong impression at first!). When I read it, it sounds like it's saying that a human looked at what the AI did and thought it was good, before they (or someone else) dug deeper.

But the examples (the ones I spot-checked anyway) all seem to be cases where the systems satisfied some predefined goal in an unexpected way, but if a human had looked at what the agent did (and not just the final score), it would have been obvious that the agent wasn't doing what was expected / wanted. (E.g. "A classic example is OpenAI’s demo of a reinforcement learning agent in a boat racing game going in circles and repeatedly hitting the same reward targets instead of actually playing the game." or "A robotic arm trained using hindsight experience replay to slide a block to a target position on a table achieves the goal by moving the table itself." A human evaluator wouldn't have looked at these behaviors and said, "Looks good!")

I call this out just because the world where ML systems are actually routinely managing to fool humans into thinking they're doing something useful when they're not is much scarier than one where they're just gaming pre-specified reward functions. And I think it's important to distinguish the two.

EDIT: this example does seem to fit the bill though:

One example from an OpenAI paper is an agent learning incorrect behaviours in a 3d simulator, because the behaviours look like the desired behaviour in the 2d clip the human evaluator is seeing.
Comment by esrogs on Money isn't real. When you donate money to a charity, how does it actually help? · 2020-02-05T23:18:14.129Z · score: 2 (1 votes) · LW · GW
My original response was a more abstract explanation of why I think describing money as "not real" is misleading, but maybe a more direct response to the article would be more useful, since I think that part actually isn't core to your question.

Just to clarify, I'm not the OP. It just seemed to me like you and the OP were saying something similar.

Comment by esrogs on Money isn't real. When you donate money to a charity, how does it actually help? · 2020-02-05T08:06:33.091Z · score: 2 (1 votes) · LW · GW
describing it as a way of temporarily motivating strangers seems like a misunderstanding of what money is

Compare to:

The innovation of money itself is that we can get favors repaid by complete strangers without having to figure out the chain of favors every time

What's the difference? It sounds like the two of you are saying the same thing. Except just that you don't like using the term "illusion" to describe it?

Comment by esrogs on Hello, is it you I'm looking for? · 2020-02-05T06:08:35.408Z · score: 5 (3 votes) · LW · GW
I think that means I'm [...] bad at describing/ searching for what I'm looking for.

One thing that might help, in terms of understanding what you're looking for, is -- how do you expect to be able to use this "model of ranking"?

It's not quite clear to me whether you're looking for something like an algorithm -- where somebody could code it up as a computer program, you could feed in sentences, and it would spit out scores -- or something more like a framework or rubric -- where the work of understanding and evaluating sentences would still be done by people, but they could use the framework/rubric as a guide when rating the sentences -- or something else.

Comment by esrogs on Hello, is it you I'm looking for? · 2020-02-05T05:52:49.097Z · score: 13 (3 votes) · LW · GW
I can't say that I'm familiar with the morass that you speak of. I work in clinical medicine and tend to just have a 10,000 mile view on philosophy. Can you maybe elaborate on what you see the problem as?

You might want to take a look at the A Human's Guide to Words sequence. (Or, for a summary, see just the last post in that sequence: 37 Ways That Words Can Be Wrong.)

Comment by esrogs on Meta-Preference Utilitarianism · 2020-02-05T05:47:17.366Z · score: 2 (1 votes) · LW · GW
I have come to the conclusion that this is just a bandaid on a more fundamental problem. Whether we should choose total, average or even median utility isn’t something we could objectively decide. So I suggest that we go up one level, and maximize what most people want to maximize.

If you haven't seen it, you may find this paper interesting: Geometric reasons for normalising variance to aggregate preferences, by Owen Cotton-Barratt (as an example of another potentially elegant approach to aggregating preferences).
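Not the paper's actual derivation -- just a minimal sketch of the basic move, with a function name and toy numbers of my own: normalize each person's utility vector to zero mean and unit variance before summing, so nobody gains influence by reporting more extreme numbers.

```python
import numpy as np

def aggregate_variance_normalized(utilities):
    """Sum utilities across voters after normalizing each voter's
    utility vector to zero mean and unit variance.

    utilities: rows = voters, columns = options.
    (Assumes no voter is exactly indifferent between all options,
    since a zero-variance row can't be normalized.)
    """
    u = np.asarray(utilities, dtype=float)
    centered = u - u.mean(axis=1, keepdims=True)
    normalized = centered / centered.std(axis=1, keepdims=True)
    return normalized.sum(axis=0)

# A voter who reports 100x more extreme utilities gets no extra weight:
scores = aggregate_variance_normalized([[0, 1, 2],
                                        [0, 100, 200]])
```

The point of the normalization is that scaling up your reported utilities is a no-op: both voters above end up with identical influence on the aggregate ranking.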

Comment by esrogs on Category Theory Without The Baggage · 2020-02-05T04:05:56.092Z · score: 5 (3 votes) · LW · GW

Relatedly, I'm going back and forth in my head a bit about whether it's better to explain category theory in graph theory terms by identifying the morphisms with edges or with paths.

Morphisms = Edges

  • In this version, a subset of multidigraphs (apparently also called quivers!) can be thought of as categories -- those for which every vertex has an edge to itself, and for which whenever there's a path from A to B, there's also an edge directly from A to B.
  • You also have to say:
    • for each pair of edges from A to B and B to C, which edge from A to C corresponds to their composition
    • for each node, which (of possibly multiple) edge to itself is its default or identity edge
    • in such a way that the associative and unital laws hold

Morphisms = Paths

  • In this version, any multidigraph (quiver) can be thought of as a category.
  • You get the identities for free, because they're just the trivial, do-nothing, paths.
  • You get composition for free, because we already know what it means that following a path from A to M and then from M to Z is itself a path.
  • And you get the associative and unital laws for (almost) free:
    • unital: doing nothing at the start or end of a path obviously doesn't change the path
    • associative: it's natural to think of the paths ((e1, e2), e3) and (e1, (e2, e3)) -- where e1, e2, and e3 are edges -- as both being the same path [e1, e2, e3]
  • However, you now have to add on one weird extra rule that two paths that start and end at the same place can be considered the same path, even if they didn't go through the same intermediate nodes.
    • In other words, the intuition that composing-pairs-of-paths-in-different-orders-always-gives-you-the-same-final-path gives you a sufficient, but not necessary condition for two paths being considered equivalent.

I think this final point in the morphisms = paths formulation might be what tripped you up in the case Eigil points out above, where category theory treats two arrows from A to B that are equivalent to each other as actually the same arrow. This seems to be the one place (from what I can see so far) where the paths formulation gives the wrong intuition.
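To make the morphisms = paths version concrete, here's a toy sketch (my own illustration, not from the post; the function name and example quiver are mine) that enumerates the morphisms of the free category on a small quiver:

```python
def free_category_morphisms(edges, max_len):
    """Enumerate morphisms of the free category on a quiver.

    edges: list of (name, source, target) triples.
    Morphisms are paths, stored as (edge_names, source, target);
    the empty path at each node is its identity morphism, and
    composition is concatenation, so the unital and associative
    laws hold automatically. A general category is then this free
    category quotiented by extra equations between parallel paths
    -- the "weird extra rule" above.
    """
    nodes = {v for _, s, t in edges for v in (s, t)}
    morphisms = {((), n, n) for n in nodes}  # identities for free
    frontier = set(morphisms)
    for _ in range(max_len):
        # Extend every path by one composable edge.
        frontier = {
            (path + (name,), src, tgt2)
            for (path, src, tgt) in frontier
            for (name, src2, tgt2) in edges
            if tgt == src2
        }
        morphisms |= frontier
    return morphisms

# Quiver with a "shortcut": e3 goes directly where (e1, e2) goes.
quiver = [("e1", "A", "B"), ("e2", "B", "C"), ("e3", "A", "C")]
morphisms = free_category_morphisms(quiver, max_len=2)
```

In the free category, ("e1", "e2") and ("e3",) are distinct morphisms from A to C; imposing the equation ("e1", "e2") = ("e3",) is exactly the kind of extra identification that produces a non-free category -- and it's where the paths intuition stops matching the arrows.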

Comment by esrogs on Category Theory Without The Baggage · 2020-02-05T03:21:40.726Z · score: 4 (2 votes) · LW · GW

I also liked the explanation of natural transformations as being about prisms, I found that image helpful.

Comment by esrogs on Category Theory Without The Baggage · 2020-02-05T03:21:17.779Z · score: 6 (3 votes) · LW · GW
My current impression is that broader adoption of category theory is limited in large part by bad definitions, even when more intuitive equivalent definitions are available - "morphisms" vs "paths"

Just wanted to note that I recently learned some of the very basics of category theory and found myself asking of the presentations I came across, "Why are you introducing this as being about dots and arrows and not immediately telling me how this is the same as or different from graph theory?"

I had to go and find this answer on Math.StackExchange to explain the relationship, which was helpful.

So I think you're on the right track to emphasize paths, at least for anyone who knows about graphs.

Comment by esrogs on Looking for books about software engineering as a field · 2020-02-05T02:30:17.117Z · score: 6 (3 votes) · LW · GW
For example, I've had three people try to explain exactly what an API is to me, for more than two hours total, but I just can't internalize it.

Perhaps because they rehearsed their understanding at you rather than being more Socratic?

How would you describe what an API is (given your current level of understanding)?

Comment by esrogs on Looking for books about software engineering as a field · 2020-02-05T02:18:54.770Z · score: 2 (1 votes) · LW · GW

If I still have it, it's in storage in Seattle :P

Comment by esrogs on Looking for books about software engineering as a field · 2020-02-05T01:46:47.951Z · score: 2 (1 votes) · LW · GW

You might find Joel Spolsky's books:

Joel on Software: And on Diverse and Occasionally Related Matters That Will Prove of Interest to Software Developers, Designers, and Managers, and to Those Who, Whether by Good Fortune or Ill Luck, Work with Them in Some Capacity

More Joel on Software: Further Thoughts on Diverse and Occasionally Related Matters...

to be amusing and helpful. They're selections from his popular blog, which I read when I was getting started as a software engineer.

(The same Joel Spolsky who, after his blog got popular, went on to create Stack Overflow.)

Comment by esrogs on New paper: The Incentives that Shape Behaviour · 2020-01-23T22:36:52.968Z · score: 7 (2 votes) · LW · GW

Stylistic nitpick -- did you mean to include the "fbclid=..." part of the URL?

Comment by esrogs on An OpenAI board seat is surprisingly expensive · 2020-01-21T20:07:03.731Z · score: 4 (2 votes) · LW · GW

For others who didn't get the reference:

Comment by esrogs on Bay Solstice 2019 Retrospective · 2020-01-19T08:38:54.340Z · score: 6 (3 votes) · LW · GW
Pretty much the entire point of every "solstice" in every culture ever has been to celebrate the end of winter

But the actual day of solstice is the first day of winter...

Comment by esrogs on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T00:11:13.466Z · score: 9 (6 votes) · LW · GW
I should say, these shifts have not been anything like an unmitigated failure, and I don't now believe were worth it just because they caused me to be more socially connected to x-risk things.

Had a little trouble parsing this, especially the second half. Here's my attempted paraphrase:

I take you to be saying that: 1) the shifts that resulted from engaging with x-risk were not all bad, despite leading to the disorienting events listed above, and 2) in particular, you think the shifts were (partially) beneficial for reasons other than just that they led you to be more socially connected to x-risk people.

Is that right?

Comment by esrogs on Circling as Cousin to Rationality · 2020-01-12T18:34:12.091Z · score: 5 (2 votes) · LW · GW

This seems like useful advice for how to engage with Circling, etc., but I'm not sure how it responds to what Said wrote in the parent comment.

Is the idea that it would be okay if Circling asks the wrong questions when dealing with cases of potential betrayal (my quick summary of Said's point), because Circling is just practice, and in real life you would still handle a potential betrayal in the same way?

But if Circling is just practice, isn't it important what it trains you to do? (And that it not train you to do the wrong things?)

(FWIW, I don't share the objection that Said raises in the parent comment, but my response would be more like Raemon's here, and not that Circling is just practice.)

Comment by esrogs on Of arguments and wagers · 2020-01-12T17:15:36.456Z · score: 2 (1 votes) · LW · GW
If the total willingness to risk of people who believe "Judy will believe X on reflection" is lower than the value of Alice's time

Judy's time?

Comment by esrogs on CFAR's 2019 Fundraiser · 2020-01-12T04:33:26.748Z · score: 10 (6 votes) · LW · GW

Just donated!

Comment by esrogs on Book review: Rethinking Consciousness · 2020-01-12T02:34:36.325Z · score: 7 (4 votes) · LW · GW

I guess my request of philosophers (and the rest of us) is this: when you are using an everyday term like "free will" or "consciousness", please don't define it to mean one very specific thing that bakes in a bunch of philosophical assumptions. Because then anyone who questions some of those assumptions ends up arguing about whether the thing exists, rather than just saying it's a little different than we thought before.

It'd be like if we couldn't talk about "space" or "time" anymore after Einstein. Or if half of us started calling ourselves "illusionists" w.r.t. space or time. They're not illusions! They exist! They're just a little different than we thought before.

(See also this comment, and remember that all abstractions are leaky!)

Comment by esrogs on Book review: Rethinking Consciousness · 2020-01-12T02:28:45.994Z · score: 7 (3 votes) · LW · GW

Semi-relatedly, I'm getting frustrated with the term "illusionist". People seem to use it in different ways. Within the last few weeks I listened to the 80k podcast with David Chalmers and the Rationally Speaking podcast with "illusionist" Keith Frankish.

Chalmers seemed to use the term to mean that consciousness was an illusion, such that it means we don't really have consciousness. Which seems very dubious.

Frankish seemed to use the term to mean that many of the properties that other philosophers think our consciousness has are illusory, but that of course we are conscious.

From listening to the latter interview, it's not clear to me that Frankish (who, according to Wikipedia, is "known for his 'illusionist' stance in the theory of consciousness") believes anything different from the view described in this post (which I assume you're classing as "representationalism").

Maybe I'm just confused. But it seems like leading philosophers of today still haven't absorbed the lesson of Wittgenstein and are still talking past each other with confusing words.

Comment by esrogs on [AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment · 2020-01-12T02:08:47.522Z · score: 2 (1 votes) · LW · GW

Hmm, maybe I'm missing something basic and should just go re-read the original posts, but I'm confused by this statement:

So what we do here is say "belief set A is strictly 'better' if this particular observer always trusts belief set A over belief set B", and "trust" is defined as "whatever we think belief set A believes is also what we believe".

In this, belief set A and belief set B are analogous to A[C] and C (or some c in C), right? If so, then what's the analogue of "trust... over"?

If we replace our beliefs with A[C]'s, then how is that us trusting it "over" c or C? It seems like it's us trusting it, full stop (without reference to any other thing that we are trusting it more than). No?

Comment by esrogs on Voting Phase of 2018 LW Review · 2020-01-11T20:39:42.261Z · score: 17 (8 votes) · LW · GW

I voted!

Comment by esrogs on Book review: Rethinking Consciousness · 2020-01-11T06:23:31.381Z · score: 4 (2 votes) · LW · GW
Now put the two together, and you get an "attention schema", an internal model of the activity of the GNW, which he calls attention.

To clarify, he calls "the activity of the GNW" attention, or he calls "an internal model of the activity of the GNW" attention?

My best guess interpretation of what you're saying is that it's the former, and when you add "an internal model of" on the front, that makes it a schema. Am I reading that right?

Comment by esrogs on [AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment · 2020-01-11T06:08:16.606Z · score: 2 (1 votes) · LW · GW
Notably, we need to trust A[C] even over our own beliefs, that is, if A[C] believes something, we discard our position and adopt A[C]'s belief.

To clarify, this is only if we (or the process that generated our beliefs) fall into class C, right?

Comment by esrogs on Are "superforecasters" a real phenomenon? · 2020-01-09T21:04:57.820Z · score: 14 (4 votes) · LW · GW
I definitely imagine looking at a graph of everyone's performance on the predictions and noticing a cluster who are discontinuously much better than everyone else. I would be surprised if the authors of the piece didn't imagine this as well.

Some evidence against this is that they described it as being a "power law" distribution, which is continuous and doesn't have these kinds of clusters. (It just goes way way up as you move to the right.)

If you had a power law distribution, it would still be accurate to say that "a few are better than most", even though there isn't a discontinuous break anywhere.

EDIT: It seems to me that most things like this follow approximately continuous distributions. And so whenever you hear someone talking about something like this you should assume it's continuous unless it's super clear that it's not (and that should be a surprising fact in need of explanation!). But note that people will often talk about it in misleading ways, because for the sake of discussion it's often simpler to talk about it as if there are these discrete groups. So just because people are talking about it as if there are discrete groups does not mean they actually think there are discrete groups. I think that's what happened here.

Comment by esrogs on Are "superforecasters" a real phenomenon? · 2020-01-09T20:55:39.613Z · score: 14 (4 votes) · LW · GW
If you graphed "jump height" of the population and 2% of the population is Superman, there would be a clear discontinuity at the higher end.

But note that the section you quote from Vox doesn't say that there's any discontinuity:

Tetlock and his collaborators have run studies involving tens of thousands of participants and have discovered that prediction follows a power law distribution.

A power law distribution is not a discontinuity! Some people are way way better than others. Other people are merely way better than others. And still others are only better than others.

"Philip Tetlock discovered that 2% of people are superforecasters." When I read this sentence, it reads to me like "2% of people are superheroes"

I think the sentence is misleading (as per Scott Alexander). A better sentence should give the impression that, by way of analogy, some basketball players are NBA players. They may seem superhuman in their basketball ability compared to the Average Joe. And there are a combination of innate traits as well as honed skills that got them there. These would be interesting to study if you wanted to know how to play basketball well. Or if you were putting together a team to play against the Monstars.

But there's no discontinuity. Going down the curve from NBA players, you get to professional players in other leagues, and then to division 1 college players, and then division 2, etc. Somewhere after bench warmer on their high school basketball team, you get to Average Joe.

So SSC and Vox are both right. Some people are way way better than others (with a power law-like distribution), but there's no discontinuity.
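As an illustration (my own toy simulation, not the Tetlock data): draw "skill" from a heavy-tailed Pareto distribution, and the top 2% come out many times better than the median forecaster, even though no big jump ever separates adjacent ranks.

```python
import numpy as np

rng = np.random.default_rng(0)
# Continuous, heavy-tailed "forecasting skill" for 10,000 people.
scores = np.sort(rng.pareto(a=2.0, size=10_000))

median = np.median(scores)
top_2pct_cutoff = scores[int(0.98 * len(scores))]

# The top 2% are many times better than the typical person...
ratio = top_2pct_cutoff / median

# ...but there is no discontinuity: the gaps between adjacent ranks
# near the top are tiny compared to the scores themselves.
gaps = np.diff(scores[9500:9900])
```

So calling the top 2% "superforecasters" picks out a real (and really large) skill difference without implying any Superman-style cluster in the distribution.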

Comment by esrogs on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-06T19:25:46.561Z · score: 5 (2 votes) · LW · GW

Looks like he's getting some pushback:

Boris Johnson’s chief adviser, Dominic Cummings, will not be allowed to bypass Whitehall’s usual recruitment processes when recruiting “weirdos” and “misfits” for Downing Street jobs, No 10 has said.
Cummings has been criticised by employment lawyers and unions after posting a rambling 2,900-word blogpost calling for people with “odd skills” to circumvent the usual rules in applying for jobs as special advisers and officials in government.
The prime minister’s spokesman insisted the post was aimed only at seeking “expressions of interest” and that civil servants would still be appointed within the usual tight procedures of the civil service.
Comment by esrogs on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-05T03:25:37.606Z · score: 9 (5 votes) · LW · GW
also confused why their list of achievements contains (or consists entirely of) Brexit

He also had a hand in keeping Britain on the pound instead of the euro, back in 1999-2002.

To me it seems like the original strategy behind Brexit referendum was simply "let's make a referendum that will lose, but it will give us power to convert any future complaints into political points by saying 'we told you'".

My understanding is that this was David Cameron's strategy. But others, like Daniel Hannan and Dominic Cummings, actually wanted the UK out of the EU.

In Cummings' case, his (stated) reason was that he thought the UK government was in need of major reform, and the best odds for reforming it seemed to require first withdrawing from the EU. (See the section in this blog post labeled, "Why do it?")

Comment by esrogs on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T23:37:00.493Z · score: 6 (3 votes) · LW · GW

Driving away other writers with annoyingness also constrains the flow of information. Trade-offs abound!

Comment by esrogs on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T19:03:03.333Z · score: 11 (2 votes) · LW · GW
Why should Said be the one to change, though?

Good question. When there are conflicts over norms, it's not obvious how to resolve them in general. I suppose the easy, though less preferred, solution would be some kind of appeal to the will of the majority, or to an authority. The harder, but better, way would be an appeal to a deeper set of shared norms. I'm not sure how tractable that is in this case though.

What happens if you reframe your reaction as, "He's surprised, but surprise is the measure of a poor hypothesis—the fact that he's so cluelessly self-centered as to not be able to predict what other people know means that I should have higher status"?

This is in fact often my reaction. But I will note that neither social attacks nor the writings of clueless self-centered people are particularly fun to read. (Especially not when it seems to be both.)

That may be stating it overly harshly. I do think Said is an intelligent person and often has good points to make. And I find it valuable to learn that others are getting a lot of value from his comments.

The signal-to-noise ratio (not exactly the right term) has not seemed particularly favorable to me, though. But perhaps there's yet some different reframing that I could do to be less frustrated (in addition to whatever changes Said might make).

Comment by esrogs on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T09:32:58.835Z · score: 30 (6 votes) · LW · GW
Said is doing something similar, so I see it as a valuable contribution.

I appreciate hearing this counterpoint.

I wish there was a way to get the benefit of Said's pointed questioning w/o readers like me being so frustrated by the style. I suspect that relatively subtle tweaks to the style could make a big difference. But I'm not exactly sure how to get there from here.

For now all I can think of is to note that some users, like Wei Dai, ask lots of pointed and clarifying questions and never provoke in me the same kind of frustration that many of Said's comments do.

Comment by esrogs on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T08:21:16.366Z · score: 6 (3 votes) · LW · GW
while defending Said, who is a super valuable commenter

Just wanted to note that, as a person who often finds Said's style off-putting, I appreciate reading this counterpoint from you.

EDIT: In my ideal world, Said can find a way to still be nitpick-y and insistent on precision and rigor in a way that doesn't frustrate me (and other readers) so much. I am unfortunately not exactly sure how to get to there from here.

Comment by esrogs on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T08:02:46.757Z · score: 2 (1 votes) · LW · GW

I agree, but as I put it in the great-grandparent comment:

I want to clarify that asking about the meanings of particular words is not the main thing I'm talking about (even though that was the example at the top of this whole thread).

It's more a pattern of expressing surprise / indignation as a rhetorical move. Here is an example, where he's not asking for clarification, but still doing the surprise / indignation thing.

You might think that comment is perfectly fine, and even from my perspective in any one single comment, it's often no big deal. But when surprised indignation is basically your tone in seemingly almost every thread, I eventually find it annoying.

Comment by esrogs on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T07:46:44.442Z · score: 4 (4 votes) · LW · GW
May as well blame the disabled for dastardly forcing us to waste money on wheelchair ramps

I do not believe that Said is unable to generate hypotheses in all the cases where he expresses bafflement / indignation. I believe it is (at least partially) a rhetorical move.

If people pretended to need wheelchairs to prove a point, we'd be right to blame them for forcing us to spend resources on them.

Comment by esrogs on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-03T03:28:38.725Z · score: 7 (5 votes) · LW · GW

Similarly, I've seen people complain when someone said at a CFAR alumni reunion, "I declare the Schelling location for xyz activity to be abc place."

I find this to be a perfectly valid (if tongue-in-cheek) usage of the term. Sure, that location wasn't the Schelling point for that activity before, but the act of declaring it to be makes it so!

Once that statement has been made and everyone has heard it, no further coordination is required for that location to be the default location for that activity. It is the Schelling point from now on.

Comment by esrogs on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-03T03:24:41.973Z · score: 8 (7 votes) · LW · GW

I don't quite understand this objection (which seems similar to other objections I've seen to uses of the term).

We've all exchanged information in the past. If you think of a Schelling point as the point we'd coordinate on with no further exchange of information, then I think the above kind of usage is valid.

Isn't meeting up at the information desk in Grand Central Station at noon (if you know you're meeting someone during the day in Manhattan, but you haven't agreed upon where) supposed to be the canonical example of a Schelling point?

But arriving at that point didn't involve zero coordination. There's a bunch of information we all have to know, and there are a bunch of specific reasons why that would be the place to meet. We all had to know that Grand Central exists. That it's prominent. That it's a convenient point for getting to lots of other parts of New York City. And people certainly had to coordinate to build Grand Central in the first place.

Similarly, there are a bunch of reasons why the SF Bay Area is the rationalist hub. And some people have put in effort to attract others here. But if you're a rationalist who wants to get to meet a bunch of other rationalists in person, then does anybody have to coordinate with anyone else to get you to make a trip to SF? It seems like at this point, it's become the default place for rationalists to meet, just as Grand Central would be the default place to meet up with someone in NYC.

Am I missing something?

Comment by esrogs on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T03:04:00.715Z · score: 12 (4 votes) · LW · GW

However, on the topic of words in particular, I do think that simply asking, "What does X mean?" is usually not the best path forward.

Consider three cases:

  • X is a term you're not familiar with (and you haven't looked it up yet)
  • X is a term you're not familiar with, so you've looked it up, but the definitions don't seem to match the way it's being used
  • X is a common term that seems to be used in a weird way

For which of these cases does it make sense to just write, "What do you mean by X?"

1) For case 1, it seems most respectful of others' time to just google the term. If that answers your question, consider also leaving a comment saying, "For others who weren't familiar with X, it means ..."

2) For case 2, I'd recommend saying that you've looked it up and the definitions don't seem to match. Otherwise you might just get one of the standard definitions back when someone replies to your comment and still be confused. Also this lets others know that you're extending them the courtesy recommended in case 1.

3) For case 3, I think it depends on the specific case, and how non-standard the usage is.

3A) If you're confident that the word is being used as a technical term of art, such that when it's pointed out, the author will say, "Ah, you're right, I'm using that in a non-standard way. I mean ..." then just asking how it's being used seems like a fine way to go. (However, I do think it's easy to overestimate the odds that the author will understand why you find it confusing. They may be in a bubble where everyone uses that term in that way all the time.)

3B) In a case where the author might not realize that everyone wouldn't be familiar with the particular usage, then I think it's helpful to say something specific about how you interpret the word and what seems off about the usage. That way they'll have a better idea what to say to resolve the confusion.

The particular case of "authentic" at the top of this thread seems like a borderline case between 3A and 3B. Everyone reading this should be familiar with what "authentic" means in a variety of contexts. And it's not exactly being used as a non-standard term of art, but it is doing a lot of work in the post, so it does seem reasonable to poke at it for a clearer picture.

I think the ideal version of Said's question would be the one that mentioned applause lights and "healthy" as a possible substitute. That one made it a lot clearer what the issue with the usage of a fairly common term was.

But I would agree that generating that level of comment instead of a short question is supererogatory, and I wouldn't downvote Said's original question. (Though since Said was the one asking it, I might find myself wondering if the discussion following the comment was going to fit the pattern of rhetorical bafflement that I've been annoyed by before.)

Comment by esrogs on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T02:34:43.899Z · score: 7 (3 votes) · LW · GW
that it's trivial for people to generate hypotheses for undefined words
I at least consider providing hypotheses as to a definition as obviously supererogatory. If you don't know the meaning of a word in a text, then the meaning may be either obvious or obscured

I want to clarify that asking about the meanings of particular words is not the main thing I'm talking about (even though that was the example at the top of this whole thread).

Said expresses bafflement at all sorts of things that people say. If it was always, "what do you mean by this specific word?" that would be a very different pattern.

Or if it was always expressing genuine curiosity, as opposed to making a rhetorical point, that would also be a very different pattern.

I am particularly complaining about the pattern of expressing surprise / confusion in a way that seems to be making a rhetorical point rather than seeking genuine understanding.

Comment by esrogs on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-03T00:46:57.197Z · score: 1 (3 votes) · LW · GW

Eliezer tweet about this here.

Comment by esrogs on [Link] High-performance government (Dominic Cummings) · 2020-01-03T00:35:57.410Z · score: 5 (3 votes) · LW · GW

This seems like it should be its own post! I've made a link post here.

Comment by esrogs on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T00:13:14.309Z · score: 16 (4 votes) · LW · GW

Let me restate my core claims:

1) I think "I am having trouble understanding what you mean; the best guess I can come up with is X" is far more conducive to getting to clarity than "I have no idea what you mean", even when X feels quite unlikely to be what the person actually meant.

I am not asking the reader to read the mind of the author. I am asking them to generate at least one hypothesis about what the author might mean.

Do not forget the lesson of the Double Illusion of Transparency -- just as the author will think they have communicated clearly when they have not, someone asking a question will also think the question is clear when it has not in fact been understood.

2) Asking for clarification as a form of criticism is bad form (or at least is a move that should be used sparingly).

Perhaps you suspect the author's thoughts are muddled and that shining the light of clarification on what they've written will expose this fact. You can say, "What do you mean by X?" And perhaps you will catch them in an error.

However, doing this all the time is annoying! Especially if it's unclear to the author whether you are in fact trying to work towards mutual understanding, or are simply playing gotcha.

If you think the author might have something meaningful that they are saying, then offering your best hypothesis will work far better for finding out what it is.

And if you don't think there's anything to what they're saying, it's a bit disingenuous to state your criticism in the form of a question.

I'm actually having a little trouble expressing this second point, because I do think there's a place for Socratic questioning, which can be very helpful. I just think there are ways to do it that are collaborative, polite, and illuminating, and other ways that are unpleasant and adversarial.

The best rule I can come up with at the moment is: If you're going to be in collaborative mode, offer hypotheses, and if you're going to be in adversarial mode, don't pretend to be in collaborative mode.

Comment by esrogs on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T21:33:48.515Z · score: 23 (10 votes) · LW · GW
enough authors and users have experienced those follow-up discussions that the problems have backpropagated into a broader aversion to questions like Said's top-level question

Just want to provide one data point: that I agree with this.

I have not personally had many back-and-forths with Said, but I've read enough of them to have built up a sense of frustration with Said's communication style.

I find that he sometimes makes good points, but they're often (usually?) wrapped in a style that I personally find unpleasant.

I'm not sure if I can quickly or exhaustively describe what the problem is -- it's not that the comments are rude, per se. He's not calling people names or anything so blatant as that. But there's an attitude that I perceive in them, combined with a set of rhetorical moves that to me seem like bad form.

Maybe a term for the attitude / rhetorical move that I find frustrating would be: "weaponized bafflement". Said often expresses that he has no idea what someone could mean by something, or is totally shocked that someone could think two things are similar (e.g. grouping both reading the sequences and attending CFAR as rationality training), when to me it seems pretty easy to at least generate some hypotheses about what they might mean or why they might think something.

Of course, noticing confusion is great. Asking for clarification is helpful. But the thing that Said does often strikes me as attempting to pull a "The Emperor has no clothes" move all the time, without being explicit that that's what he's doing, or allowing for the possibility that perhaps the emperor does have clothes. I find it tiresome.

I find myself thinking: if you're so consistently unable to guess what people might mean, or why people might think something, maybe the problem is (at least some of the time) with your imagination.

I think that if requests for clarification or expressions of surprise more consistently seemed to acknowledge that the interlocutor might have a good point that Said is just missing, that would be fine. Instead, the common pattern seems to be an expression of surprise, combined with an implication that the interlocutor is an idiot.

Maybe that's not what Said means to communicate, but I would find his comments more pleasant to read if they gave a wider berth to such interpretations.

Comment by esrogs on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T20:54:30.398Z · score: 4 (2 votes) · LW · GW
I still don’t know what more I could’ve said about its “shape”.

Just want to point out that I think that simply adding the following to your original comment would have been a marginal improvement:

I think the standard usage is just an applause light.

That is, if your comment had been, "'Authentic' seems like it's often used as an applause light to me. Can you say more about specifically what you mean by it in this context?" I think that would have been an improvement over the original comment.

I agree with others that just saying, "What do you mean by X?" when X is a common, well-known word can often be insufficient for making it easy for the author to figure out what to say in reply.

It's Double Illusion of Transparency all the way down. :P

Comment by esrogs on Might humans not be the most intelligent animals? · 2019-12-24T17:13:55.323Z · score: 10 (5 votes) · LW · GW
We do see some initial signs that humans might not be the most intelligent species on Earth in the innovative sense. For instance,
Although humans have the highest encephalization quotient, we don't have the most neurons, or even the most neurons in our forebrain.

I don't find the facts about number of neurons very suggestive of humans not being the most intelligent. On the contrary, when I look at the lists, it reinforces the idea that humans are smartest individually.

For total neuron count, only one animal beats humans: the African elephant, with 50% more, and humans have more than 2x the next highest (the gorilla). So if we think the total neuron count numbers tell us how smart an animal is, then either humans or elephants are at the top.

For forebrain neurons, the top 10 are 8 kinds of whale, with humans at number 3 and gorillas at number 10.

Notably, humans and gorillas are the only animals in the top 10 on both lists, with humans handily beating gorillas in both cases. And all of the animals that beat humans have much higher body mass.

I think if you were an alien who thought neurons were important for intelligence, and saw the lists above, but didn't know anything else about the species, you'd probably have most of your probability mass on either elephants, primates, or whales being the smartest. Once you saw human behavior you'd update to most of the mass being on humans at the top.

And similarly, if you already had as one of your leading hypotheses that humans were smartest, based on their behavior, and also that neurons were important, then I'd think seeing humans so high on the lists above would tend to confirm the humans-are-smartest hypothesis, rather than disconfirm it.

Comment by esrogs on We run the Center for Applied Rationality, AMA · 2019-12-21T01:59:55.889Z · score: 3 (2 votes) · LW · GW

Thank you for this clarification.

Comment by esrogs on We run the Center for Applied Rationality, AMA · 2019-12-20T22:12:52.539Z · score: 19 (11 votes) · LW · GW

Yes, when the better way takes more resources.

On the meta level, I claim that doing things the usual way most of the time is the optimal / rational / correct way to do things. Resources are not infinite, trade-offs exist, etc.

EDIT: for related thoughts, see Vaniver's recent post on T-Shaped Organizations.