Comment by benquo on Comment section from 05/19/2019 · 2019-05-23T09:57:29.286Z · score: 3 (2 votes) · LW · GW
Other abstract topics should be avoided, if the relevant examples are politically-charged and the abstraction doesn't easily encompass other points of view.


Choosing to discuss abstracts primarily which happen to support a specific position, without disclosing that tie, is not OK.

How exactly does this differ from "if the truth is on the wrong side politically, so much the worse for the truth"? Should we limit ourselves to abstract discussions that don't constrain our anticipations on things we care about?

Comment by benquo on A War of Ants and Grasshoppers · 2019-05-23T02:47:40.754Z · score: 10 (2 votes) · LW · GW

Expanding a bit on gallabytes's comment: The language around Moloch often assumes a Nash equilibrium, i.e. a situation in which a rational agent implementing causal decision theory couldn't do better. Sometimes the agents aren't general intelligences, but are simpler evolutionarily fit processes responding to feedback.
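To make the "couldn't do better" condition concrete, here's a minimal sketch (function names and payoff numbers are mine, purely illustrative, not from the comment) checking which outcomes of a toy prisoner's dilemma are Nash equilibria:

```python
from itertools import product

# Row player's payoff for each (own move, opponent's move); the game is symmetric.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def is_nash(a, b):
    """(a, b) is a Nash equilibrium iff neither player can gain by
    unilaterally deviating - the 'couldn't do better' condition."""
    no_better_a = all(PAYOFF[(a2, b)] <= PAYOFF[(a, b)] for a2 in "CD")
    no_better_b = all(PAYOFF[(b2, a)] <= PAYOFF[(b, a)] for b2 in "CD")
    return no_better_a and no_better_b

print([(a, b) for a, b in product("CD", repeat=2) if is_nash(a, b)])  # [('D', 'D')]
```

Mutual defection is the unique equilibrium here even though mutual cooperation pays both players more, which is the Moloch-flavored point: no individual causal-decision-theory agent can do better by deviating alone.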

Comment by benquo on A War of Ants and Grasshoppers · 2019-05-23T02:36:21.256Z · score: 2 (1 votes) · LW · GW

I ended up not including the full text because this felt a lot less lesswrongy than most of my stuff - but not so much that I didn't think a link would be appropriate.

Comment by benquo on Go Do Something · 2019-05-23T02:24:07.825Z · score: 14 (4 votes) · LW · GW

Thanks for the specific examples. I'm more worried about subtler cases, that aren't overtly about social reality, but where feedback is mediated through it.

For instance, people like Taleb often name entrepreneurship as an especially "real" thing you can do, but founding a startup can look more like passing a series of tests where you're supposed to look like VCs' consensus idea of a business, than figuring out how to make a product you can sell profitably. And success in the corporate world is often even sillier (see just about any story from Moral Mazes for details - or Dilbert for the fictional version), even in firms that make useful physical products. If you're not careful about what kinds of feedback you respond to or incentive gradients you follow, you may learn to conflate the symbolic representation of the thing (optimized to get approval) with the thing itself.

Acting on social reality is an important skill for many projects, but not all ways of interacting with social reality are the same. In particular, coordinating to manage appearances and stories is very different from coordinating to do something in objective reality. (The engineer and the diplomat, Actors and scribes, words and deeds, On Drama, and Blame games all touch on this.)

Comment by benquo on Discourse Norms: Justify or Retract Accusations · 2019-05-22T21:03:07.700Z · score: 14 (4 votes) · LW · GW

What constitutes an attack?

Comment by benquo on Discourse Norms: Justify or Retract Accusations · 2019-05-22T20:59:28.778Z · score: 13 (4 votes) · LW · GW

Why the asymmetric burden on criticism? Seems like positive claims are more likely to imply demands on others' resources or attention.

A War of Ants and Grasshoppers

2019-05-22T05:57:37.236Z · score: 16 (3 votes)
Comment by benquo on Beware Social Coping Strategies · 2019-05-21T20:14:33.128Z · score: 10 (2 votes) · LW · GW

I did a lot of this too, and ended up constructing a simulation of myself as a social layer. I recently switched to a more integrated approach, and my current development edges are around figuring out when being open and honest is the wrong choice.

Comment by benquo on Beware Social Coping Strategies · 2019-05-21T20:11:56.853Z · score: 13 (3 votes) · LW · GW

Resolving internal conflicts that are aggravated by social interactions seems like a very important leverage point for some people.

Comment by benquo on Beware Social Coping Strategies · 2019-05-21T20:11:14.648Z · score: 10 (2 votes) · LW · GW

It's not clear what the performance metric is here or which things to focus on for practice. For instance, learning to read microexpressions in more detail can help reduce the long-run amount of social work required to manage an interaction, but at the cost of additional short-run cognitive load, and it has the risk of exacerbating problems instead.

Comment by benquo on Go Do Something · 2019-05-21T18:46:19.042Z · score: 47 (14 votes) · LW · GW

This works for versions of "do something" that mainly interact with objective reality, but there's a pretty awful value-misalignment problem if the way you figure out what works is through feedback from social reality.

So, for instance, learning to go camping or cook or move your body better or paint a mural on your wall might count, but starting a socially legible project may be actively harmful if you don't have a specific need it's meeting that you're explicitly tracking. And unfortunately too much of people's idea of what "go do something" means ends up pointing to trying to collect credit for doing things.

Sitting somewhere doing nothing (which is basically what much meditation is) is at least unlikely to be harmful, and while of limited use in some circumstances, it's often an important intermediate stage between trying to look like you're doing things and authentically acting in the world.

Comment by benquo on Excerpts from a larger discussion about simulacra · 2019-05-21T03:38:57.260Z · score: 5 (2 votes) · LW · GW

Cool, this seems like a decent reason to try to rewrite this in a slightly more polished way.

Comment by benquo on Comment section from 05/19/2019 · 2019-05-20T16:12:15.499Z · score: 11 (3 votes) · LW · GW

What specifically do you mean by "werewolf" here & how do you think it relates to the way Jessica was using it? I'm worried that we're getting close to just redefining it as a generic term for "enemies of the community."

Comment by benquo on Yes Requires the Possibility of No · 2019-05-20T14:58:59.751Z · score: 8 (1 votes) · LW · GW

If people have a reason to lie, they may want to use intensifiers like "honestly" for the same reason. Likewise for asking others to lie while pretending to ask for the honest truth - if you're already pretending, why should we start being surprised only once you use words like "honestly"?

There's an underlying question of why this particular pretense, of course.

Comment by benquo on Naked mole-rats: A case study in biological weirdness · 2019-05-20T01:28:50.545Z · score: 8 (4 votes) · LW · GW

This seems like a really great kind of question to ask - "what's the weirdness generator?" - in response to an important intuition: many unrelated surprising deviations from the norm are suspicious, and probably there's a single cause.

Comment by benquo on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2019-05-16T11:39:20.031Z · score: 18 (3 votes) · LW · GW

Your overall point is right and important but most of your specific historical claims here are false - more mythical than real.

Free-market economic theory developed only after millennia during which everyone believed that top-down control was the best way of allocating resources.

Free market economic theory was developed during a period of rapid centralization of power, before which it was common sense that most resource allocation had to be done at the local level, leaving peasants mostly alone to farm their own plots. To find a prior epoch of deliberate central resource management at scale you have to go back to the Bronze Age, with massive irrigation projects and other urban amenities built via palace economies, and even then there wasn't really an ideology of centralization. A few Greek city-states like Sparta had tightly regulated mores for the elites, but the famously oppressed Helots were still probably mostly left alone. In Russia, Communism was a massive centralizing force - which implies that peasants had mostly been left alone beforehand. Centralization is about states trying to become more powerful (which is why Smith called his book The Wealth of Nations, pitching his message to the people who needed to be persuaded). Tocqueville's The Old Regime describes centralization in France before and after the Revolution. War and Peace has a good empirical treatment of the modernizing/centralizing force vs the old-fashioned empirical impulse in Russia. "Freedom" is not always decentralizing, though, as the book makes clear.

Freedom of speech developed only after millennia during which everyone believed that it was rational for everyone to try to suppress any speech they disagreed with.

There was something much like this in both the Athenian (and probably broader Greek) world (the democratic prerogative to publicly debate things), and the Israelite world (prophets normatively had something close to immunity from prosecution for speech, and there were no qualifications needed to prophesy). In both cases there were limits, but there are limits in our world too. The ideology of freedom of speech is new, but your characterization of the alternative is tendentious.

Political liberalism developed only after millennia during which everybody believed that the best way to reform society was to figure out what the best society would be like, then force that on everyone.

Political liberalism is not really an exception to this!

Evolution was conceived of--well, originally about 2500 years ago, probably by Democritus, but it became popular only after millennia during which everyone believed that life could be created only by design.

It's really unclear what past generations meant by God, but this one is probably right.

Comment by benquo on Eight Books To Read · 2019-05-15T11:50:24.949Z · score: 17 (6 votes) · LW · GW

I agree with most of the recommendations. Some advice for getting into Plato, for the untrained reader:

The Bloom translation of Republic is the classic. Any older English translation is suspect, for reasons explained in the introduction. Happy to share a copy with anyone who needs one.

On the whole, I think someone who feels they have a good learning curve reading Plato sensitively would do better reading more Plato than Strauss, though they should trust their own intuitions here and not mine.

Nothing wrong with starting cold with Republic, but John Holbo's Reason and Persuasion seems like an unusual combo of accessible and careful to respect the integrity of the signal.

Comment by benquo on Coherent decisions imply consistent utilities · 2019-05-14T01:30:41.722Z · score: 8 (3 votes) · LW · GW

Presumably to keep morale up by making it look like the rightful Caliph is still alive and producing output.

Comment by benquo on Why books don't work · 2019-05-13T13:42:35.550Z · score: 8 (1 votes) · LW · GW
Parents talk to their kids, and read to them, including children’s books and alphabet books and nursery rhymes and “say ‘ma-ma’! go on… ‘ma-ma’!” and so on; and parents play with their kids, and build or buy playpens, etc., etc. What is that, but “problems” and “projects”?

It's play. In extremely rare cases like A Mathematician's Lament, people do propose that teachers play with their students about the subject matter, but mostly problem sets and projects are not assigned by the same methods by which language is introduced to children. If the OP were proposing that professors play with their students, I'd be more sympathetic, and have brought up the babies as a confirming rather than disconfirming example!

Comment by benquo on Why books don't work · 2019-05-13T13:39:28.742Z · score: 10 (2 votes) · LW · GW

Lectures were literally invented as a method of text distribution, when printing was unavailable and paper expensive. I don't mean that in the past they were more effective than integrated instruction - I mean that an academic context in which the main formal service provided was delivery of lectures did not prevent students from thinking about the content of lectures on their own.

Here's what I meant by the CD metaphor. It seems like there's an old practice of doing the equivalent of handing students CDs. We can now see that this practice is broken, in the sense that students, lacking CD players, don't appreciate the music or other audio. One plausible interpretation is that the practice of handing students CDs has always been a poor fit for the audio formats compatible with students' ears. But another plausible interpretation - the one I'm proposing - is that the students used to have CD players, and no longer do.

Likewise, it's not as though learning didn't go on in highly lecture-centric (or book-centric) contexts. So if students aren't learning from lectures (and books), we might expect that some interpretive faculty they used to have is now absent. This seems to me like it ought to be a higher priority to get to students (or stop taking away from them), than the content of almost any particular lecture course.

Comment by benquo on Why books don't work · 2019-05-13T03:50:07.162Z · score: 10 (4 votes) · LW · GW

I would expect healthy people who want to learn something found in a book to think of complements to the book, e.g. to take initiative to try something based on what the book says, to think through different cases than the ones discussed in the book to see how the same principles might apply, etc.

If students wouldn't do that, something's gone wrong that isn't easily summarizable as a local failing of pedagogy.

Comment by benquo on Why books don't work · 2019-05-12T17:06:17.809Z · score: 10 (2 votes) · LW · GW
Yes, if an instructor were, for some strange reason, to decide that he will only give lectures, and not assign any problems, projects, etc., then the students will not learn the material. Similarly, if a student were, for some strange reason, to decide that he will only attend lectures and not take notes, put together outlines or study guides, read the text, or do the exercises, he will not learn the material.

I don't disagree, but this seems indicative of preexisting damage to the students or at least a direly impoverished environment. After all, we usually don't give babies problems, projects, etc. to teach them to walk and talk, but they learn just fine. Someone invented this stuff, and plenty of people in the past have made efficient use of standardized streams of text (whether delivered visually or aurally) to improve their understanding of a thing.

If someone said CDs don't work because you can't hear the music by looking at them, we'd wonder whether this person knows about CD players.

Comment by benquo on Narcissism vs. social signalling · 2019-05-12T13:52:30.490Z · score: 4 (2 votes) · LW · GW

It seems to me like the first two stages are simple enough that Jessica's treatment is an adequate formalization, insofar as the "market for lemons" model is well-understood. Can you say a bit more about how you'd expect additional formalization to help here?

It's in the transition from stage 2 to 3 and 4 that some modeling specific to this framework seems needed, to me.
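As a toy illustration of the well-understood part - the "market for lemons" dynamic - here's a minimal simulation (all names and parameters are mine, purely illustrative) in which buyers who can't observe quality offer the average quality of what's still for sale, and the market unravels:

```python
import random

random.seed(0)

def lemons_market(n_sellers=10_000, price_rounds=20):
    """Iterate the classic adverse-selection dynamic: buyers offer the
    average quality of cars still on the market; sellers whose car is
    worth more than the offer withdraw; repeat."""
    qualities = [random.uniform(0, 1) for _ in range(n_sellers)]
    price = 1.0  # naive initial offer: best possible quality
    for _ in range(price_rounds):
        remaining = [q for q in qualities if q <= price]
        if not remaining:
            return 0.0
        price = sum(remaining) / len(remaining)  # buyers pay E[quality | still on market]
    return price

print(lemons_market())  # the price collapses toward zero; only the worst cars trade
```

Each round the offered price roughly halves, so after a handful of iterations essentially only lemons remain - the kind of equilibrium outcome the stage-1/stage-2 story takes as given.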

Comment by benquo on Narcissism vs. social signalling · 2019-05-12T13:42:30.988Z · score: 10 (2 votes) · LW · GW

Since all three comments so far seem to have had the same basic objection, I'm going to reply to the parent.

It seems like the claim in your first paragraph is implicitly disjunctive: IF your beliefs are "about the world" (i.e. you're modeling yourself as an agent with a truth-seeking epistemology), THEN "convincing yourself" isn't a thing. So IF you're "convincing yourself", THEN the relevant "beliefs" aren't a sincere attempt to represent the world.

Comment by benquo on Narcissism vs. social signalling · 2019-05-12T13:39:12.541Z · score: 10 (2 votes) · LW · GW
This is stage 1 signalling. Stage 2 signalling is this but with convincing lies, which actually are enough to convince a Bayesian evaluator (who may be aware of the adversarial dynamic, and audit sometimes).

The theory of costly signaling is specifically about stage 1 strategies in an environment where stage 2 exists - sometimes a false signal is much more expensive than a true signal of the same thing.
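A minimal sketch of that last point (function name and payoff numbers are mine, purely illustrative): when the false signal costs more than the value of being believed, only the true type finds signaling worthwhile, so the signal stays honest.

```python
def should_signal(is_high_quality, signal_benefit=10.0,
                  cost_if_true=3.0, cost_if_false=15.0):
    """Send the costly signal only if its cost is below the benefit of
    being believed. Because the false signal is priced above that
    benefit, high and low types separate."""
    cost = cost_if_true if is_high_quality else cost_if_false
    return signal_benefit - cost > 0

print(should_signal(True))   # True - the honest signal is worth its cost
print(should_signal(False))  # False - faking it costs more than it's worth
```

The separating equilibrium only holds while `cost_if_false` exceeds `signal_benefit`; drop it below and both types signal, which is the stage-2 world of convincing lies.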

Comment by benquo on Narcissism vs. social signalling · 2019-05-12T13:36:48.344Z · score: 10 (2 votes) · LW · GW

I can't think of a clear thing to point to in the text - I think he's more concerned with describing what's happening than modeling its historical causes.

I can guess on my own account - I think the commodification of human life, rapid pace of change with respect to economic roles, and rise of mass-media advertising in the mid 20C accelerated a force already latent in American culture. But that's my guess. TLP is more empirical.

Comment by benquo on Narcissism vs. social signalling · 2019-05-12T03:49:42.307Z · score: 12 (3 votes) · LW · GW

The temporal aspect seems important in distinguishing the two models - TLP says something changed in 20th Century American culture to make narcissism much more common.

Comment by benquo on Tales From the American Medical System · 2019-05-10T17:04:20.662Z · score: 4 (2 votes) · LW · GW
I really only have one patient who is definitely doing this, but it’s enough that I can understand why some doctors don’t want to have to have this fight and institute a stricter “no refill until appointment is on the books” policy.

Why not just dump that one patient?

Comment by benquo on How To Use Bureaucracies · 2019-05-10T16:19:20.330Z · score: 14 (4 votes) · LW · GW

Here's my vague overall impression from reading secondary sources not directly concerned with this question (probably more noisy but also more trustworthy than secondary sources making a direct argument about this.)

Overall the sense I get is that recordkeeping and action were kept separate in most ancient civilizations, even pretty big ones - no minutes of meetings or white paper equivalents or layers of approval and formalized decision delegation.

It seems to me like "clay tablet" cultures had extensive scribal institutions, but these were mostly used for rituals in temple cults (of unknown function), tax assessment, and central recording of contracts (the state served as a trusted third party for record storage and retrieval). You'd also need logistical records for many public works projects, but these were often very simple. Someone would be in charge and sometimes have to request resources from other people, who would keep track of what was sent; sometimes the king would want to know what was going on, so they had to know the broad outlines.

As I understand it, the Persian empire's managerial and formal information-processing layer was extremely lean: the king would just personally send some guy to check on a whole province, and there was a courier network, but nothing on the scale of the USPS or even Akkadian scribal records.

Comment by benquo on Tales From the American Medical System · 2019-05-10T14:26:06.289Z · score: 11 (3 votes) · LW · GW

Doctors valuing their position as an authority, and caring enough about this to threaten to withhold vital care until their authority is affirmed, seems like it would necessarily entail the kind of distrust you're worried about. The paradigm of epistemic authority is one where information can only flow down power gradients - there's no way someone with lower rank would know something that someone with higher rank is ignorant of.

Obviously this is a terrible paradigm for any kind of healing that requires knowing about the patient.

Comment by benquo on Tales From the American Medical System · 2019-05-10T14:23:57.459Z · score: 8 (1 votes) · LW · GW

In some cases (this is the nearest alternative hypothesis to Davidmanheim's), the spending required to maintain their class privilege (unless they're really unusually clever) scales slightly ahead of their income.

In other cases, they get addicted to the game, and become obsessed with scoring points.

Comment by benquo on How To Use Bureaucracies · 2019-05-09T21:34:04.386Z · score: 8 (3 votes) · LW · GW

"Ritualizes" might be more precise. Provides a stereotyped interface that plays nicely with other stereotyped interfaces. Military drill sort of serves a similar function, in the face of a different kind of entropy than the one this is a defense against.

Comment by benquo on Towards optimal play as Villager in a mixed game · 2019-05-09T19:54:46.568Z · score: 8 (3 votes) · LW · GW
Claiming to be king is unnecessary if there is already such evidence, and ineffective if there is not.

Actual kings thought otherwise strongly enough to have others who claimed to be king of their realm killed if at all possible. Repetition of royal pomp within a king's lifetime implies that claiming to be king is not redundant for an already-acknowledged king either. Often there were annual or even more frequent reaffirmations, plus constant reinforcement via local protocol among whoever was physically near the king.

Comment by benquo on How To Use Bureaucracies · 2019-05-09T19:47:30.766Z · score: 8 (3 votes) · LW · GW

That's true but the bureaucracy isn't what builds parks. The person in charge bosses around a bunch of other people competent to design and build parks, and secures the land and other inputs needed to do so via political processes. The bureaucracy is what normalizes the arrangement so that it can interface with other things in control of resource flows, e.g. so that people can get paid for reporting to Moses.

Comment by benquo on How To Use Bureaucracies · 2019-05-09T18:31:40.603Z · score: 10 (4 votes) · LW · GW

The sense I got from The Power Broker is that Moses's work was good when doing good work was aligned with his perceived interests, and not when not, and it wasn't that hard for him to find people competent at the relevant technical disciplines when that was needed (and his ability to accumulate power quickly initially gave him a lot of slack to hire based on merit, when delivering a conspicuously high-quality product seemed like it would be helpful for accumulating more power).

In general it doesn't really seem to require much technical expertise to lead a technical project - just a mixture, somewhat difficult to maintain in a political context, of the skills necessary to obtain and defend resources and the mindset that still cares about getting the technical side right.

Comment by benquo on GiveWell and the problem of partial funding · 2019-05-09T17:46:14.567Z · score: 6 (2 votes) · LW · GW

The actual amount of money involved matters here. Hedging bets seems less unsympathetic (although still incoherent enough to be a problem) for a player that couldn't fully fund for a few years and still have the vast majority of its funds uncommitted. But Good Ventures actually had that option!

Comment by benquo on Towards optimal play as Villager in a mixed game · 2019-05-09T17:40:11.256Z · score: 9 (4 votes) · LW · GW

This may be true in the particular case mentioned - I think you only get that sort of maladaptive level of transparency from people for whom the paradigm doesn't feel native, who have to consciously learn it. (Similarly, part of why the case of GiveWell is so valuable is that GiveWell doesn't lie or bullshit about what it's doing to the extent that more conventional orgs do - its commitment to transparency is in some tension with its actual strategy, but executed well enough that it tells the truth about some of the adversarial games it's playing.)

But there's a transformation of "I have more social skills, so you should do what I say" that does work, where multiple people within a group will coordinate to invalidate bids for clarity as socially unskilled. This tends to work to silence people when it accumulates enough social proof. To go to the king example, a lot of royal pomp is directly about creating common knowledge that one person is the king and everyone else accepts this fact.

Comment by benquo on Authoritarian Empiricism · 2019-05-09T17:33:25.490Z · score: 0 (2 votes) · LW · GW

Thanks, the pronouns were unclear, hopefully my edit fixed that.

Comment by benquo on Should Effective Altruism be at war with North Korea? · 2019-05-09T16:59:21.201Z · score: 9 (4 votes) · LW · GW

Thanks for the additional detail. In general I consider a post of that length that has a "main point" to be too long. I'm writing something more like essays than like treatises, while it seems to me that your reading style is optimized for treatises. When I'm writing something more like a treatise, I do find it intuitive to have summaries of main points, clear section headings, etc. But the essay form tends to explore the connections between a set of ideas rather than work out a detailed argument for one.

I'm open to arguments that I should be investing more in treatises, but right now I don't really see the extra work per idea as paying off in a greater number of readers understanding the ideas and taking initiative to extend them, apply them, or explain them to others in other contexts.

Comment by benquo on Should Effective Altruism be at war with North Korea? · 2019-05-09T16:23:57.839Z · score: 7 (3 votes) · LW · GW

Thanks, this style of feedback is much easier for me to understand! I'm a bit confused about how much I should care about people having liked my post on GiveWell since it doesn't seem like the discourse going forward changed much as a result. I don't think I've seen a single clear example of someone taking initiative (where saying something new in public based on engagement with the post's underlying model would count as taking initiative) as a result of that post, and making different giving decisions would probably count too. As a consolation prize, I'll accept reduced initiative in counterproductive directions.

If you can point me to an example of either of those (obviously I'd have to take your word about counterfactuals) then I'll update away from thinking that writing that sort of post is futile. Strength of update depends somewhat on effect size, of course.

Comment by benquo on Hierarchy and wings · 2019-05-09T16:05:45.397Z · score: 7 (3 votes) · LW · GW
Maybe I have mentally edited out something offensive

I think the potential problem is that many participants in the discussion will strongly identify with one of the two factions I'm describing, so that what appears to be intellectual engagement might just be rooting for one's team, and arguments are more likely to be soldiers than usual. (I ended up deleting a comment on the North Korea post for this reason.)

Comment by benquo on Hierarchy and wings · 2019-05-09T16:02:27.877Z · score: 8 (1 votes) · LW · GW

I think you're right about the Warsaw Pact vs NATO arrangements. Explicitly organizing the state around central economic planning altered the power arrangement so the info-processing bureaucracy (including secret police) ends up being the central coordinating point.

I agree the second-order effects you mention are there, and it's maybe important to model them so as not to get confused, but I don't think they have done very much so far except occasionally confuse people.

Comment by benquo on Should Effective Altruism be at war with North Korea? · 2019-05-09T06:18:25.604Z · score: 3 (3 votes) · LW · GW

Now I think this is getting too much into a kind of political discussion that is going to be unhelpful.

Comment by benquo on Blame games · 2019-05-09T03:02:34.841Z · score: 13 (4 votes) · LW · GW

It's not that higher simulacrum level players can't do level 1 internally, it's that if people play higher levels inside an organization, that organization's information processing is corrupted, and it gets worse at the sorts of things level 1 is good at. There are huge advantages to groups being able to coordinate on level 1, and there are advantages to individuals knowing about all four levels.

To some extent levels 3 and 4, if practiced commonly enough, erode the ability to talk in level 1 language.

Comment by benquo on Towards optimal play as Villager in a mixed game · 2019-05-08T22:02:14.433Z · score: 16 (3 votes) · LW · GW

Incentives matter, but trauma matters too. And learning what it's like to "play Werewolf" or "play Villager" on purpose in a stereotyped environment with divided roles is helpful for learning the discernment to notice these processes, which are often quite subtle.

Comment by benquo on Towards optimal play as Villager in a mixed game · 2019-05-08T19:41:17.242Z · score: 14 (2 votes) · LW · GW

More generally there have been lots of times and places where some people have been playing the game of scale-to-maximum, and people not playing that game have often had to adjust for its existence, but can often do quite a lot anyway, including weird bank-shot projects like Christianity and Buddhism that end up scaling a bunch, across borders, without having any sort of army or centralized infrastructure for quite a while. Things that don't scale can engineer things that do. Obviously none of the past attempts have been good enough to permanently solve the problem, but you can look at how much deliberation went into them vs level of alignment and impact. To me, it looks like a Dunbar group that is modeling this situation explicitly has a pretty decent chance of building something much better, which in turn should improve the rate at which we get chances to try things.

Comment by benquo on Towards optimal play as Villager in a mixed game · 2019-05-08T19:37:24.732Z · score: 16 (3 votes) · LW · GW
I would be interested in a sketch of a situation which you think would output a different strategic equilibrium.

I don't think I fully understand what you're asking, but here's a partial answer.

I think that the US North before the Civil War but decreasingly between then and WWII had a different equilibrium, in which people could viably just be landholders who improved their lot through, well, improving their lot. The Puritan states especially tended towards this. Merchants and artisans were also a thing, but a single whaling expedition was kind of a big venture compared with most things, and most business didn't occur at scale. The rise of major government expenditures (direct wartime expenses and indirect mobilization-related ones like railroads) changed that, as did the central controls put in place between the World Wars to manage the disruptions this caused, which naturally were designed to accommodate powerful incumbent stakeholders.

If you read accounts of economic life from that time, it really doesn't look like "competent people are rare, and the world is big," it looks like "ultra-competent people are gold, most people are good enough at something to get by, there are lots of different types of people with their own weirdness going on, and the world is very messy and diverse." Moby Dick definitely gives me this sense.

Comment by benquo on Towards optimal play as Villager in a mixed game · 2019-05-08T19:21:44.975Z · score: 10 (2 votes) · LW · GW

Agreed that Jews aren't anything like a full answer here, just another important example to bear in mind when considering the space of what's possible.

Comment by benquo on Should Effective Altruism be at war with North Korea? · 2019-05-08T18:07:56.753Z · score: 16 (2 votes) · LW · GW

On merging utility functions, here's the relevant quote from Coherent Extrapolated Volition, by Eliezer Yudkowsky:

Avoid creating a motive for modern-day humans to fight over the initial dynamic.

One of the occasional questions I get asked is “What if al-Qaeda programmers write an AI?” I am not quite sure how this constitutes an objection to the Singularity Institute’s work, but the answer is that the solar system would be tiled with tiny copies of the Qur’an. Needless to say, this is much more worrisome than the solar system being tiled with tiny copies of smiley faces or reward buttons. I’ll worry about terrorists writing AIs when I am through worrying about brilliant young well-intentioned university AI researchers with millions of dollars in venture capital. The outcome is exactly the same, and the academic and corporate researchers are far more likely to do it first. This is a critical point to keep in mind, as otherwise it provides an excuse to go back to screaming about politics, which feels so much more satisfying. When you scream about politics you are really making progress, according to an evolved psychology that thinks you are in a hunter-gatherer tribe of two hundred people. To save the human species you must first ignore a hundred tempting distractions.

I think the objection is that, in theory, someone can disagree about what a superintelligence ought to do. Like Dennis [sic], who thinks he ought to own the world outright. But do you, as a third party, want me to pay attention to Dennis? You can’t advise me to hand the world to you, personally; I’ll delete your name from any advice you give me before I look at it. So if you’re not allowed to mention your own name, what general policy do you want me to follow?

Let’s suppose that the al-Qaeda programmers are brilliant enough to have a realistic chance of not only creating humanity’s first Artificial Intelligence but also solving the technical side of the FAI problem. Humanity is not automatically screwed. We’re postulating some extraordinary terrorists. They didn’t fall off the first cliff they encountered on the technical side of Friendly AI. They are cautious enough and scared enough to double-check themselves. They are rational enough to avoid tempting fallacies, and extract themselves from mistakes of the existing literature. The al-Qaeda programmers will not set down Four Great Moral Principles, not if they have enough intelligence to solve the technical problems of Friendly AI. The terrorists have studied evolutionary psychology and Bayesian decision theory and many other sciences. If we postulate such extraordinary terrorists, perhaps we can go one step further, and postulate terrorists with moral caution, and a sense of historical perspective? We will assume that the terrorists still have all the standard al-Qaeda morals; they would reduce Israel and the United States to ash, they would subordinate women to men. Still, is humankind screwed?

Let us suppose that the al-Qaeda programmers possess a deep personal fear of screwing up humankind’s bright future, in which Islam conquers the United States and then spreads across stars and galaxies. The terrorists know they are not wise. They do not know that they are evil, remorseless, stupid terrorists, the incarnation of All That Is Bad; people like that live in the United States. They are nice people, by their lights. They have enough caution not to simply fall off the first cliff in Friendly AI. They don’t want to screw up the future of Islam, or hear future Muslim scholars scream in horror on contemplating their AI. So they try to set down precautions and safeguards, to keep themselves from screwing up.

One day, one of the terrorist programmers says: “Here’s an interesting thought experiment. Suppose there were an atheistic American Jew, writing a superintelligence; what advice would we give him, to make sure that even one so steeped in wickedness does not ruin the future of Islam? Let us follow that advice ourselves, for we too are sinners.” And another terrorist on the project team says: “Tell him to study the holy Qur’an, and diligently implement what is found there.” And another says: “It was specified that he was an atheistic American Jew, he’d never take that advice. The point of the Coherent Extrapolated Volition thought experiment is to search for general heuristics strong enough to leap out of really fundamental errors, the errors we’re making ourselves, but don’t know about. What if he should interpret the Qur’an wrongly?” And another says: “If we find any truly general advice, the argument to persuade the atheistic American Jew to accept it would be to point out that it is the same advice he would want us to follow.” And another says: “But he is a member of the Great Satan; he would only write an AI that would crush Islam.” And another says: “We necessarily postulate an atheistic Jew of exceptional caution and rationality, as otherwise his AI would tile the solar system with American music videos. I know no one like that would be an atheistic Jew, but try to follow the thought experiment.”

I ask myself what advice I would give to terrorists, if they were programming a superintelligence and honestly wanted not to screw it up, and then that is the advice I follow myself.

The terrorists, I think, would advise me not to trust the self of this passing moment, but try to extrapolate an Eliezer who knew more, thought faster, were more the person I wished I were, had grown up farther together with humanity. Such an Eliezer might be able to leap out of his fundamental errors. And the terrorists, still fearing that I bore too deeply the stamp of my mistakes, would advise me to include all the world in my extrapolation, being unable to advise me to include only Islam.

But perhaps the terrorists are still worried; after all, only a quarter of the world is Islamic. So they would advise me to extrapolate out to medium-distance, even against the force of muddled short-distance opposition, far enough to reach (they think) the coherence of all seeing the light of Islam. What about extrapolating out to long-distance volitions? I think the terrorists and I would look at each other, and shrug helplessly, and leave it up to our medium-distance volitions to decide. I can see turning the world over to an incomprehensible volition, but I would want there to be a comprehensible reason. Otherwise it is hard for me to remember why I care.

Suppose we filter out all the AI projects run by Dennises who just want to take over the world, and all the AI projects without the moral caution to fear themselves flawed, leaving only those AI projects that would prefer not to create a motive for present-day humans to fight over the initial conditions of the AI. Do these remaining AI projects have anything to fight over? This is an interesting question, and I honestly don’t know. In the real world there are currently only a handful of AI projects that might dabble. To the best of my knowledge, there isn’t more than one project that rises to the challenge of moral caution, let alone rises to the challenge of FAI theory, so I don’t know if two such projects would find themselves unable to agree. I think we would probably agree that we didn’t know whether we had anything to fight over, and as long as we didn’t know, we could agree not to care. A determined altruist can always find a way to cooperate on the Prisoner’s Dilemma.
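The closing Prisoner's Dilemma point — that agents optimizing a merged utility function have nothing to fight over — can be made concrete with a minimal sketch. The payoff numbers below are the standard illustrative PD values, not anything from the quoted passage:

```python
# Classic Prisoner's Dilemma payoffs: (row player's utility, column player's utility).
# Standard illustrative values with T=5 > R=3 > P=1 > S=0.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move):
    """An agent maximizing only its own payoff defects no matter what."""
    return max("CD", key=lambda m: PAYOFFS[(m, opponent_move)][0])

def merged_best_joint_move():
    """Agents optimizing a merged (summed) utility function pick the joint move."""
    return max(PAYOFFS, key=lambda moves: sum(PAYOFFS[moves]))

# Individually, defection dominates both ways...
assert best_response("C") == "D" and best_response("D") == "D"
# ...but the merged utility is maximized by mutual cooperation.
assert merged_best_joint_move() == ("C", "C")
```

This only illustrates the payoff structure; the quoted passage's stronger claim — that sufficiently careful projects can coordinate on the merge before knowing whether they disagree — is a decision-theoretic argument, not something this toy example settles.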

Comment by benquo on Should Effective Altruism be at war with North Korea? · 2019-05-08T17:54:34.622Z · score: 8 (3 votes) · LW · GW

  • Writing a policy paper clarifying the Utilitarian and Decision-Theoretic calculus as it applies to some core North Korean interest, such as negotiation between parties of very unequal power that don't trust each other, and its implications for nuclear disarmament.
  • Writing another persuasive essay like the Letter from Utopia, directing some attention to the value of reconciling freedom to trade / global integration with preserving the diversity of individual and collective minds.
  • Taking on a grad student from NK (or arranging for a more suitable colleague to do so).

Not sure which if any of these would be interesting from a NK perspective.

Comment by benquo on Should Effective Altruism be at war with North Korea? · 2019-05-08T17:30:06.982Z · score: 10 (2 votes) · LW · GW

Hmm, I think I can be clearer (and nicer) than I've been.

I wouldn't be posting this stuff if I didn't think it was a reasonably efficient summary of an important model component, enough that I'm happy to move on and point people back to the relevant post if they need that particular piece of context.

I try to write and title this stuff so that it's easy to see what the theme is early in the post. Dialogue that doesn't have an intuitive narrative arc is much less likely to get posted as-is, and much more likely to be cannibalized into a more conventional article. But there's something about putting up an abstract or summary separate from the body of the article that often feels bad and forced, like it's designed for a baseline expectation that articles will have a lot of what I'd consider pointless filler. I don't want to signal that - I want my writing to accurately signal what it is, and I worry that a shell written in a different style will tacitly send discordant signals, doing more harm than good.

I can't write high-quality posts on these particular topics with LessWrong in mind as the target audience, because I have little expectation that my understanding will be improved by engagement from LessWrong. The motivation structure of writing for readers who include the uninterested isn't conducive to high-quality output for me - the responses of the imagined reader affect my sense of taste. So I have to write them with some other audience in mind. I write them to be high-quality in that context. (It does seem to be getting a bit better lately, though.) But I share them on LessWrong since I do actually think there's a non-negligible chance that someone on LessWrong will pick up some of the ideas and use them, or engage with some part productively.

I don't seem to get enhanced engagement when I try to preempt likely questions - instead the post just ends up being too long for people to bother with even if I have an abstract and section headings, and the kinds of readers who would benefit from a more tightly written treatment find it too tedious to engage with. My series on GiveWell is an example. I'm often happy to expand on arguments etc. if I find out that they're actually unclear, depending on how sympathetic I find the confusion.

More specific feedback would be helpful to me, like, "I started reading this article because I got the sense that it was about X, and was disappointed because it didn't cover arguments Y and Z that I consider important." Though almost the same information is contained in "what about arguments Y and Z?", and I expect I'd make similar updates in how to write articles in either case.

In the specific case you brought up (negotiations with the NK govt or the NK people), it's really tangential to the core structural points in the dialogue, which include (a) it's important to track your political commitments, since not representing them in your internal model doesn't mean you don't have them, it just means you're unable to reason about them, and (b) it's important to have a model of whether negotiation is possible and with whom before ruling out negotiation. Your (implied) question helped me notice that that point had been missed by at least one reader in my target audience.

Towards optimal play as Villager in a mixed game

2019-05-07T05:29:50.826Z · score: 40 (12 votes)

Hierarchy and wings

2019-05-06T18:39:43.607Z · score: 24 (10 votes)

Blame games

2019-05-06T02:38:12.868Z · score: 41 (8 votes)

Should Effective Altruism be at war with North Korea?

2019-05-05T01:50:15.218Z · score: 14 (11 votes)

Totalitarian ethical systems

2019-05-03T19:35:28.800Z · score: 36 (12 votes)

Authoritarian Empiricism

2019-05-03T19:34:18.549Z · score: 38 (12 votes)

Excerpts from a larger discussion about simulacra

2019-04-10T21:27:40.700Z · score: 41 (14 votes)

Blackmailers are privateers in the war on hypocrisy

2019-03-14T08:13:12.824Z · score: 22 (16 votes)

Moral differences in mediocristan

2018-09-26T20:39:25.017Z · score: 21 (8 votes)

Against the barbell strategy

2018-09-20T15:19:08.185Z · score: 20 (19 votes)

Interpretive Labor

2018-09-05T18:36:49.566Z · score: 28 (16 votes)

Zetetic explanation

2018-08-27T00:12:14.076Z · score: 73 (42 votes)

Model-building and scapegoating

2018-07-27T16:02:46.333Z · score: 23 (7 votes)

Culture, interpretive labor, and tidying one's room

2018-07-26T20:59:52.227Z · score: 29 (13 votes)

There is a war.

2018-05-24T06:44:36.197Z · score: 52 (24 votes)


2018-05-18T20:30:01.179Z · score: 47 (12 votes)

Oops Prize update

2018-04-20T09:10:00.873Z · score: 42 (9 votes)

Humans need places

2018-04-19T19:50:01.931Z · score: 113 (28 votes)

Kidneys, trade, sacredness, and space travel

2018-03-01T05:20:01.457Z · score: 51 (13 votes)

What strange and ancient things might we find beneath the ice?

2018-01-15T10:10:01.010Z · score: 32 (12 votes)

Explicit content

2017-12-02T00:00:00.946Z · score: 14 (8 votes)

Cash transfers are not necessarily wealth transfers

2017-12-01T10:10:01.038Z · score: 110 (42 votes)

Nightmare of the Perfectly Principled

2017-11-02T09:10:00.979Z · score: 32 (8 votes)

Poets are intelligence assets

2017-10-25T03:30:01.029Z · score: 26 (9 votes)

Seeding a productive culture: a working hypothesis

2017-10-18T09:10:00.882Z · score: 28 (9 votes)

Defense against discourse

2017-10-17T09:10:01.023Z · score: 64 (21 votes)

On the construction of beacons

2017-10-16T09:10:00.866Z · score: 58 (18 votes)

Sabbath hard and go home

2017-09-27T07:49:40.482Z · score: 77 (46 votes)

Why I am not a Quaker (even though it often seems as though I should be)

2017-09-26T07:00:28.116Z · score: 61 (31 votes)

Bad intent is a disposition, not a feeling

2017-05-01T01:28:58.345Z · score: 13 (14 votes)

Actors and scribes, words and deeds

2017-04-26T05:12:29.199Z · score: 6 (8 votes)

Effective altruism is self-recommending

2017-04-21T18:37:49.111Z · score: 71 (52 votes)

An OpenAI board seat is surprisingly expensive

2017-04-19T09:05:04.032Z · score: 5 (6 votes)

OpenAI makes humanity less safe

2017-04-03T19:07:51.773Z · score: 18 (20 votes)

Against responsibility

2017-03-31T21:12:12.718Z · score: 13 (12 votes)

Dominance, care, and social touch

2017-03-29T17:53:20.967Z · score: 3 (4 votes)

The D-Squared Digest One Minute MBA – Avoiding Projects Pursued By Morons 101

2017-03-19T18:48:55.856Z · score: 1 (2 votes)

Threat erosion

2017-03-15T23:32:30.000Z · score: 1 (2 votes)

Sufficiently sincere confirmation bias is indistinguishable from science

2017-03-15T13:19:05.357Z · score: 19 (19 votes)

Bindings and assurances

2017-03-13T17:06:53.672Z · score: 1 (2 votes)

Humble Charlie

2017-02-27T19:04:37.578Z · score: 2 (3 votes)

Against neglectedness considerations

2017-02-24T21:41:52.144Z · score: 1 (2 votes)

GiveWell and the problem of partial funding

2017-02-14T10:48:38.452Z · score: 2 (3 votes)

The humility argument for honesty

2017-02-05T17:26:41.469Z · score: 4 (5 votes)

Honesty and perjury

2017-01-17T08:08:54.873Z · score: 4 (5 votes)

[LINK] EA Has A Lying Problem

2017-01-11T22:31:01.597Z · score: 13 (13 votes)

Exploitation as a Turing test

2017-01-04T20:55:54.675Z · score: 4 (5 votes)

Claim explainer: donor lotteries and returns to scale

2016-12-30T19:46:48.314Z · score: 5 (5 votes)

The engineer and the diplomat

2016-12-27T20:49:26.371Z · score: 14 (15 votes)