Comment by dagon on No, it's not The Incentives—it's you · 2019-06-16T23:22:45.852Z · score: 2 (1 votes) · LW · GW

I'd agree. Outside of closely-knit, high-trust situations, I don't think it's achievable to have that subtlety of conceptual communication. You can remind (some) people, and you can use more precise terminology where the distinction is both important and likely to succeed. In other cases, maintaining your internal standards of epistemic hygiene is valuable, even when playing status games you don't like very much.

Comment by dagon on No, it's not The Incentives—it's you · 2019-06-16T14:55:00.608Z · score: 2 (1 votes) · LW · GW

As long as we understand that "bad person" is shorthand for "past and likely near-future behaviors are interfering with group goals", it's a reasonable judgement to make. And it's certainly useful for calling out people you'd like to eject from the group, reduce in status, or force a behavioral change on.

I don't object to calling someone a bad person, I only object to believing that such a thing is real.

Comment by dagon on Reasonable Explanations · 2019-06-16T14:50:30.126Z · score: 2 (1 votes) · LW · GW

There's a question of specificity too - you could make a high-confidence prediction that guvf cnpxntr jnf frag gb lbh qhr gb na reebe, abg vagragvbanyyl but there was a very wide possibility space for jub znqr jung xvaq bs reebe.

The idea of confidence levels only works at fairly high abstractions of predictions. Nobody is 90% confident of extremely precise predictions, only of predictions that will cover a large portion (90%, in fact) of the near-infinite variety of future states of the universe.

Comment by dagon on Do policy makers frequently underestimate non-human threats (ie: epidemic, natural disasters, climate change) when compared to threats that are human in nature (ie: military conflict, economic competition, Cold War)? · 2019-06-15T17:15:50.340Z · score: 4 (3 votes) · LW · GW

I say no. They're prioritizing the short term over the long and very long term, but they're properly thinking about human threats as more amenable to policy response (not exactly more important, but more of a comparative advantage for governments).

One very important piece missing from your analysis is that humans are strategic and other threats are not. If you can convince humans that you're prepared for their threat, you don't actually have to be prepared, because the attack won't come. You can't convince a virus of anything - they just keep replicating regardless of your stance or position.

I'd also argue that governments (and other ingroup/outgroup political units) are correctly categorizing threats to themselves, even when that incorrectly prioritizes threats to constituents. WWI was a threat to the style and composition of governments. Influenza killed more, but didn't threaten the organizations.

Comment by dagon on Discourse Norms: Moderators Must Not Bully · 2019-06-15T04:08:08.637Z · score: 10 (9 votes) · LW · GW

It turns out that cranks and hucksters are indistinguishable from confused and vulnerable newbies. And protectors of conversational norms are indistinguishable from bullies. I think others have pointed out that your footnote hides the entire problem, because you don't actually have a Nazi detector.

If you'd said "a given community should be transparent about the norms it'll enforce", I'd agree. Even saying "norm enforcement should start out gentle and only gradually ramp up if the participant appears to be working on it" would be totally reasonable. Saying "be nice to all participants, even if they're disruptive and not fitting in" is much harder for me to swallow as general advice. There are communities where it'd work (mostly small ones where there's time and energy to more gently bring someone up to speed), but at a certain size you have to decide if you value inclusiveness more than you value the actual stated purpose of the group.

For most such groups, there is always the actual important vote: participation. If you see norms being enforced that you disagree with (or in ways that you disagree with), definitely say something - people will either agree with you or defend themselves against what they see as trolling. If the latter, it's probably not the place you want to be.

(note: I don't see much on LW that I'd call bullying, or even incivility. If this is a complaint about a specific event that I didn't notice, I apologize for your bad experience, but I don't actually know what happened, so I can't advise on whether you're being oversensitive or a moderator was unnecessarily harsh about something. Both are possibilities to consider.)

Comment by dagon on No, it's not The Incentives—it's you · 2019-06-15T03:56:09.459Z · score: 6 (3 votes) · LW · GW

The gradients between horrific, forbidden, disallowed, discouraged, acceptable, preferable, commendable, and heroic seem like something that should be discussed here. I suspect you're mixing a few different kinds of judgement of self, judgement of others, and perceived judgement by others. I don't find them to be the same thing or the same dimensions of judgement, but there's definitely some overlap.

I reject "goodness" as an attribute of a person - it does not fit my intuitions nor reasoned beliefs. There are behaviors and attitudes which are better or worse (sometimes by large amounts), but these are contingent rather than identifying. There _are_ bad people, who consistently show harmful behavior and no sign of changing throughout their lives. There are a LOT of morally mediocre people who have a mix of good and bad behavior, often more context-driven than choice-driven. I don't think I can distinguish among them, so I tend to assume that almost everyone is mediocre. Note that I can decide that someone is unpleasant or harmful TO ME, and avoid them, without having to condemn them as a bad person.

So, I don't aspire to be a truly good person, as I don't think that's a thing. I aspire to do good things and make choices for the commons, which I partake of. I'm not perfect at it, but I reject judgement on any absolute scale, so I don't think there's a line I'm trying to find where I'm "good enough", just fumbling my way around what I'm able/willing to do.


Comment by dagon on FB/Discord Style Reacts · 2019-06-13T22:11:29.803Z · score: 4 (2 votes) · LW · GW

It's about on par with other sites. Conversational threading has never worked well on websites, AFAIK. trn did fairly well on Usenet, and there have been some e-mail clients (none current, AFAIK) that did a fairly good job. LW (and other reddit-like sites) does well for topics with comment trees that are bushy for the first level or two, and narrow below that.

The killer feature that nobody's been able to replicate is zoom/filter to one unread path down the reply tree, and then to advance to the next branch which contains an unread reply from a followed ancestor. Basically depth-first reading of new comments (with an option to go up to ancestors for context). All web comment systems I know of show breadth-first, including both read and unread comments (or with context loss when switching between all and unread).
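For concreteness, here's a minimal sketch of what that depth-first "unread path" navigation might look like, assuming a simple comment node with a read flag (the names here are hypothetical, not any real site's API):

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    read: bool = False
    children: list["Comment"] = field(default_factory=list)

def unread_paths(node, ancestors=()):
    """Depth-first walk yielding each unread comment with its ancestor
    chain, so a reader can descend one branch at a time and only then
    advance to the next branch containing something new."""
    path = ancestors + (node,)
    if not node.read:
        yield path
    for child in node.children:
        yield from unread_paths(child, path)

# Example: a read top-level comment with an unread reply, plus an
# unread grandchild under a read child.
root = Comment("top", read=True, children=[
    Comment("reply A"),
    Comment("reply B", read=True, children=[Comment("reply B.1")]),
])
for path in unread_paths(root):
    print(" > ".join(c.text for c in path))
# top > reply A
# top > reply B > reply B.1
```

The ancestor chain comes along for free, which is what lets a reader jump straight to new material without losing context.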

I presume nobody's been able to replicate it because the idea of a conversational tree AS A NAVIGATION TREE is too complicated and techie for the vast majority of modern participants. It might actually work on LW, but it's different enough that there's not much reusable stuff out there and it'd be a lot of work.

Comment by dagon on Real-World Coordination Problems are Usually Information Problems · 2019-06-13T21:59:29.657Z · score: 5 (2 votes) · LW · GW

I don't see much justification for the word "usually" in the title. And while I'd agree that many real-world coordination problems include information issues, I think they're almost always mixed with alignment issues (not all participants are fully on board with the goals) and trust issues (even if we agree, is it worth my effort if you're going to let me down?).

It's very hard to tell which of these issues is most important for any given failure, but I'd argue that the alignment and trust issues are causes of the information issues.

Comment by dagon on FB/Discord Style Reacts · 2019-06-13T18:12:20.303Z · score: 4 (2 votes) · LW · GW

Note that low-effort is one side of this. Low-interruption-of-comment-stream is far more important to me. Threading on LW is not great, and having short, lightweight reactive comments inline with substantive ones can be quite distracting.

Comment by dagon on Some Ways Coordination is Hard · 2019-06-13T16:48:35.560Z · score: 7 (2 votes) · LW · GW

I agree with and support your conclusions - "just start, even if the first hunt might fail" is excellent for cases where the problem is mostly one of coordination - there needs to be common knowledge that a stag hunt has any chance of working at all.

I worry that the hunting model is so oversimplified as to obscure the OTHER hard parts of densely-connected interpersonal behaviors (aka "group behaviors"), most notably actual unaligned values (my ears prick up on this one whenever someone uses "utility" as if it were a resource), akrasia (internal unacknowledged misalignment), and actual capability differences (some people can bag a rabbit, but actually make a stag hunt less likely to succeed).

And it _ALSO_ obscures some solutions. Find ways to make them non-exclusive. I haven't left Facebook, but I'm happy to use other mechanisms for groups which have. Use rabbit guts as stag snares, letting individual research contribute in smaller ways to the overall goal. If you like stags, and your current tribe likes rabbits, change tribes.

Comment by dagon on What kind of thing is logic in an ontological sense? · 2019-06-12T22:53:02.615Z · score: 2 (3 votes) · LW · GW

I'm not sure what "exist" means in this context. IMO, logic is a somewhat ambiguous word, but the common use is for a particularly powerful base model (that is, a meta-model that can be extended in many directions to make predictions). It doesn't exist any more (nor any less) than "thermodynamics" or "love" exists - these are concepts that can be used to communicate and predict states of the universe, but don't directly correspond to a state of the universe.

Actually, for the question of existence, the power and applicability of the model doesn't matter. It exists in the same sense that bad models exist, too. It's an idea, or a pattern of processing inside a brain.

Comment by dagon on How much does neatly eating matter? What about other food manners? · 2019-06-12T19:40:18.894Z · score: 4 (2 votes) · LW · GW

See also https://en.wikipedia.org/wiki/Countersignaling .

Comment by dagon on How much does neatly eating matter? What about other food manners? · 2019-06-12T19:38:35.234Z · score: 2 (1 votes) · LW · GW

Don't jump to "selected for" when "culturally rewarded and practiced" is likely sufficient. The author is WAY too quick to reject the idea that PPEF executives could become this good at it just by believing that it's important. To me, this seems like something one could excel at with only a few weeks of practice and intent (note: I have no personal evidence of this, as I myself have not done so).

Separately "in software" is a ludicrously large category, with such a wide variety of cultural expectations as to be meaningless. There are places where appearance and poise matters a lot, and places where it might not be noticed on the third or fourth day you wore the same food-stained t-shirt.

Comment by Dagon on [deleted post] 2019-06-12T17:18:08.717Z

Ugh, a "video" that's really just audio, with no transcript and link to paid patreon access to the text? No, thanks.

Comment by dagon on No, it's not The Incentives—it's you · 2019-06-11T22:09:34.511Z · score: 4 (3 votes) · LW · GW
> 5. There is a lot of truth in Taleb's position that research should not be a source of your income, rather a hobby.

Is this specific to research? Given unaligned incentives and Goodhart, I think you could make an argument that _nothing important_ should be a source of income. All long-term values-oriented work should be undertaken as hobbies.

(Note - this is mostly a reductio argument. My actual opinion is that the split between hobby and income is itself part of the incorrect incentive structure, and there's no actual way to opt out. As such, you need to thread the needle of doing good while accepting some and rejecting other incentives.)

Comment by dagon on No, it's not The Incentives—it's you · 2019-06-11T20:59:08.209Z · score: 2 (1 votes) · LW · GW
> I wouldn't call them "economic" actions/decisions - how to do things at a concrete level is about what you want.

I think of economic decisions in terms of visible/modeled tradeoffs including time- and uncertainty-discounted cost/benefit choices. Moral decisions are this, plus hard-to-model (illegible) values and preferences. I acknowledge that there's a lot of variance in how those words are used in different contexts, and I'm open to suggestions on what to use instead.

Comment by dagon on No, it's not The Incentives—it's you · 2019-06-11T18:24:32.721Z · score: 14 (5 votes) · LW · GW

I can't upvote this strongly enough. It's the perfect followup to discussion and analysis of Moloch and imperfect equilibria (and Moral Mazes) - goes straight to the heart of "what is altruism?" If you're not taking actions contrary to incentives, choosing to do something you value that "the system" doesn't, you're not making moral choices, only economic ones.

Comment by dagon on Dissolving the zombie argument · 2019-06-11T03:12:06.584Z · score: 4 (2 votes) · LW · GW

You're right - in the case where it points out a logical contradiction, it can be cause for an update. It doesn't necessarily tell you in which direction to update (away from materialism, or away from p-zombies being possible).

Comment by dagon on Get Rich Real Slowly · 2019-06-10T19:49:48.028Z · score: 3 (3 votes) · LW · GW

A little more info on why you recommend smaller and safer returns, rather than slightly-higher and somewhat riskier, would help a lot. The vast majority of medium- and long-term individual pure investing (that is, not mixed with housing or sweat-equity business investing) has a utility-to-money curve that is at least a little nonlinear, where returns are wonderful, and losses are painful, but a slight loss is not disastrous. Deciding where to be on the risk/return curve seems to be one of the more impactful investing decisions an individual can make.
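As a rough sketch of that tradeoff (hypothetical numbers, with log utility standing in for a "slightly nonlinear" curve, and certainly not investment advice):

```python
import math

def utility(wealth):
    # Log utility: mildly concave, so losses hurt somewhat more than
    # equal-sized gains help.
    return math.log(wealth)

def expected_utility(wealth, outcomes):
    # outcomes: list of (probability, fractional return) pairs
    return sum(p * utility(wealth * (1 + r)) for p, r in outcomes)

w = 100_000
safe = [(1.0, 0.02)]                   # guaranteed 2%
risky = [(0.9, 0.06), (0.1, -0.15)]    # usually 6%, occasionally -15%

print(expected_utility(w, safe))   # ~11.533
print(expected_utility(w, risky))  # ~11.549
```

With this particular curve and these numbers the riskier option still comes out ahead, which is the point: where to sit on the risk/return curve depends on the shape of your utility function, not just on aversion to variance.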

That said, it's useful to point out the variability in quality of offerings, and just how much improvement can be had with a small amount of research.

I'm a little unsure about Wealthfront - they used to be more about automated advice, and didn't have great reviews. "FDIC insured" is pretty powerful, but the hassle factor is large if they go under. I don't understand how they can offer so much more than other banks' CDs, and that makes me suspicious. However, if you _do_ think they belong in the same category as the rest, you should probably lead with that, as there's no reason to consider other options (including Vanguard bond funds).


Comment by dagon on Ramifications of limited positive value, unlimited negative value? · 2019-06-10T19:27:07.133Z · score: 5 (2 votes) · LW · GW

I'm especially skeptical of intuitions about situations very different from past or current ones. For me, there's an availability heuristic at work: I incorrectly visualize some examples that are too close to reality, use my intuitions for those, and then try to apply a corrective factor.

I don't have a good model for moral value of near-identical copies. My current belief is that quantum-level duplication makes them one being, and any other level of similarity diverges fast enough to make them distinct entities, each with full weight. I definitely don't think that positive and negative experiences are different on this topic.

This goes directly to the idea of "done it all". There are a (literally) infinite number of variations of experience, and we can never run out. There'll always be good experiences that we don't have enough consciousnesses to have. There is no maximum high score (except possibly maximum temporal density of reachable experiences until the end of reality). You can never step in the same river twice, and entities can never have the same experience twice.

I MAY share your intuition (or I may be misreading) that there is more negative than positive in entity-experience space. My take from that is that we need to be somewhat careful in creating more experiences, but not that it needs to be perfect, nor that prioritizing elimination of negative overwhelms creation of positive.

Edit: I do think I understand where the "repeated negative is bad, repeated positive is neutral" intuition comes from. In my personal life, I prefer variety of positive experiences over repetition. My hedonic adaptation seems to be faster (and perhaps more complete a regression to the norm) for good things than for pain. I don't think that reasoning applies to cases of distinct-but-similar individuals in the same way it does for distinct-but-similar experiences in an individual.


Comment by dagon on Dissolving the zombie argument · 2019-06-10T18:32:46.454Z · score: 4 (4 votes) · LW · GW

Much like "Newcomb's paradox", p-zombies get brought up as an "argument", and then somehow nobody notices that the actual question is "what is the universe like?", not "what would happen in this hypothetical, undefined-maybe-possible situation?". It's fundamentally an empirical question, and trying to answer it with thought experiments and logic is kind of pointless.



Comment by dagon on FB/Discord Style Reacts · 2019-06-10T18:22:21.761Z · score: 5 (2 votes) · LW · GW

Here's a challenge: I was about to make this (low-value) comment on: https://www.lesswrong.com/posts/cnBGXGSFGpfvknFc3/ramifications-of-limited-positive-value-unlimited-negative-1

"Upvoted because it's relevant and perhaps important, but the premises are too far from both my intuitions and reasoned beliefs (which I acknowledge are deeply entwined with intuitions, and not necessarily "truth") for me to participate. I look forward to seeing the conversation. "

I'm not sure what the emoticon or short phrase for that looks like. Perhaps thumbs-up+shrug.

Comment by dagon on Honors Fuel Achievement · 2019-06-10T18:05:08.887Z · score: 10 (5 votes) · LW · GW

Alternate hypothesis: honors are given more so that the ruler/bestower can take some of the credit for the great work than to motivate future great work.

It's hard to see that ANY Nobel Prize winner did their work significantly motivated by the idea that it'd win the prize. Likewise knighthoods - they're given for publicly-great work, but that work isn't undertaken for the knighthood. Hugo and Nebula awards are valued by authors, but aren't the main reason for writing - they exist more to make the industry seem relevant than to encourage behaviors.

Comment by dagon on On pointless waiting · 2019-06-10T17:18:31.712Z · score: 5 (3 votes) · LW · GW

It's worth a bit of reflection to determine if you're getting value from slack in those times that you think you're waiting. Daydreaming, reflecting on things you read, etc. all can be very useful, and tend to happen for me mostly when I'm not actively pursuing something more visibly satisfying.

I also have experienced the "just on hold" feeling, where I can see no value and I regret not doing something else. I like to think that's an unavailable-to-introspection value, but it's probably just waste.

Comment by dagon on Steelmanning Divination · 2019-06-06T23:06:09.729Z · score: 2 (1 votes) · LW · GW

One of the things I take from the post is that you don't have any new information about what they rolled, but you DO now have an indication that they had some reason to roll a die. If you know what kind of decisions they make based on die rolls, you know they're making such a decision.

For some things, that is a lot of information about the universe from the act of divination, not from the results of the act.

This is sort of the inverse of what the post is saying (that preparing for the act ensures that you consider the question with sufficient weight).

Comment by dagon on FB/Discord Style Reacts · 2019-06-05T22:15:50.198Z · score: 2 (1 votes) · LW · GW

Ah, true. Deep comment threads are different. I vote on most posts I comment on, and on most top-level comments that I reply to. I generally only vote on deeper comments if that's where I'm entering the conversation.

Comment by dagon on FB/Discord Style Reacts · 2019-06-05T22:10:08.582Z · score: 8 (4 votes) · LW · GW

I would like to react to this in some positive way :)

I was away for a bit and had missed that post.

Comment by dagon on FB/Discord Style Reacts · 2019-06-05T21:03:01.183Z · score: 2 (1 votes) · LW · GW

Cool, I like having reacts tied to ideas and expressions rather than to people.

Your comment about feedback other than votes makes me wonder, though - wouldn't it be in addition to votes? Do you expect people to react and not vote (or vote and not react) very often? I rarely (not never, but not usually) comment without voting.

Comment by dagon on Major Update on Cost Disease · 2019-06-05T19:18:15.413Z · score: 2 (1 votes) · LW · GW

Is part of the discrepancy the inclusion of more people (administrators, support personnel, etc., who are not necessarily paid more, but who are more numerous) in the "expenditure per educator" but not the "salary of educators" measurements?
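A toy sketch of the mechanism being asked about (all numbers invented):

```python
teachers, admins = 100, 20
salary = 50_000  # everyone paid the same, for simplicity

# Total spend divided by teachers exceeds any individual's salary
# as soon as non-teaching staff are counted in the numerator.
print((teachers + admins) * salary / teachers)  # 60000.0

# Double the support staff at flat salaries, and "expenditure per
# educator" rises further with no raise for anyone:
admins = 40
print((teachers + admins) * salary / teachers)  # 70000.0
```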

Comment by dagon on FB/Discord Style Reacts · 2019-06-05T19:00:31.382Z · score: 2 (1 votes) · LW · GW

It came up in this thread: https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019#5wECFA9o8T6ohzrNE . I can't easily find the other comments from people who say or strongly imply that they think of LessWrong as being mostly promoted posts, and have different content and topic expectations for blog posts on the site.

Re-reading the comments makes me realize that "totally flummoxed" is a massive overstatement - I was surprised, but I kind of get it (and kind of don't - there's not enough separation to make me believe that they're not mostly the same).

Comment by dagon on FB/Discord Style Reacts · 2019-06-05T16:45:57.374Z · score: 5 (3 votes) · LW · GW

How are you thinking about the time-value of such react tokens? Are you trying to fix a problem with votes, or to introduce a new mechanism for a purpose orthogonal to voting?

I'd like to see more signal in the voting: Slashdot-style "why are you voting this way" tags would naively fit, but I don't actually like any implementation I know of, so I may be wrong in my understanding of my own preference on that front.

One of the things about the current "votes" mechanism that consumes my mental energy with no value is that votes are summed up and kept forever, and impact your ability to give more of them (more karma = stronger votes). I want not to care, but I do. Adding another signal for my brain to Goodhart on (caring about the metric rather than the signal) is a strict loss for me. If it's intentionally lightweight and ephemeral, that solves this problem, but perhaps makes it less useful if it's intended to motivate some behavior.

Separately, but also related to the signaling value of LW posts/comments/votes/reacts: I'm also totally flummoxed by recent comments suggesting that most of what I see in /allPosts isn't "really" part of LW, and that votes there should have different standards and meanings.


Comment by dagon on Moral Mazes and Short Termism · 2019-06-04T22:40:35.853Z · score: 2 (1 votes) · LW · GW

I think "close but not quite right" is likely to be a ceiling for map coverage when talking about complicated individuals in more complicated group activities. The world-modeling and goal-seeking divergence between the infected (including myself, on some topics) and the enlightened (I'll ignore the bystanders for now) is pretty significant.

And, of course, it's a continuum, which shifts across contexts and across time even for an individual, so any generalization will be wrong sometimes. That's true of the actor/scribe dimension as well - I generally think of myself as a scribe in my internal narrative, and in select individual and very-small-group conversations, but an actor in most work and larger social contexts.

LessWrong is an interesting case. Posts and comments are definitely acts, but the goal of the act is improved truth-knowing (and -telling), which is a fun hybrid of the two styles.

Comment by dagon on Moral Mazes and Short Termism · 2019-06-04T15:44:50.413Z · score: 12 (3 votes) · LW · GW

I don't think it's an honor or trust thing, but simply a matter of ease of communication. The "infection" is memetic, at a world-view level. People who believe they're doing the right thing (or at least the universal thing) by pursuing shorter-term visible successes at the potential expense of longer-term positive impact find themselves in disagreement, in ways they don't understand, with those who sacrifice short-term gains for uncertain long-term goals.

It's not a matter of being pathologically dishonest (well, any more than any other value system or religion is dishonest); it's a matter of misalignment in beliefs about what's important, and different judgements about tradeoffs between measurable and unmeasurable goods.

Comment by dagon on Egoism In Disguise · 2019-06-01T15:35:15.779Z · score: 4 (3 votes) · LW · GW

It's interesting that you approach it from the "bad effects of treating moral beliefs as X" rather than "moral beliefs are not X". Are you saying this is true but harmful, or that it's untrue (or something else, like neither are true and this is a worse equilibrium)?

I do not understand the argument about value drift, when applied to divergent starting values.

Comment by dagon on Egoism In Disguise · 2019-05-31T20:16:43.369Z · score: 3 (2 votes) · LW · GW

I often point out that I'm a consequentialist, but not a utilitarian. I differ from you in that I don't think of egoism as my source of truth for moral questions, but aesthetics. My intuitions and preferences are about how I like to see the world, and I do believe they generalize (not perfectly) to how others can cooperate and function together better. As part of this, I do have indexical preferences, which is compatible with some amount of egoism.

That said, most of my preferences are satisfied if a bunch of non-me entities are utilitarian, and many (most, even) of my actions are the same as they would be under a utilitarian framework, so I'm happy to support utilitarianism over the uglier forms of egoism that I fear would take over.

Comment by dagon on Lonelinesses · 2019-05-31T20:11:40.750Z · score: 5 (4 votes) · LW · GW

It's interesting that there's no antonym for loneliness. There's a built-in presumption that one is happier/more fulfilled with more and deeper contact with other humans. I often feel lonely (and I find this taxonomy interesting, but I'm not sure how I'll use it), but I also often feel anti-lonely, and that I'd be happier if left alone for a while.

At the deepest levels, we are all alone, always - there's this air-gap between human individuals that cannot be overcome (or at least between me and everyone else I've met; maybe the rest of you have a connection I don't). "Everyone dies alone" as the saying goes. Distinguishing what kinds of loneliness are just denial of this fact, and what kinds are indications that you actually do want more/deeper communication with someone is very difficult.


Comment by dagon on Quotes from Moral Mazes · 2019-05-30T19:41:01.093Z · score: 5 (3 votes) · LW · GW

Both (low-level) management and (somewhat senior) technical. In previous work, as an owner of a tiny (10 employee) business, and as a line-level (4-12 direct reports, no indirect) software manager. For the last 20 years or so as an individual contributor, with no reports but a fair bit of org strategy and people-management input over a 600-person division in an extremely large corporation, with some interaction and discussion with very senior management (VP and SVP who have 8-10 levels of management between them and the most junior ICs).

There is certainly a fair amount of the stuff described here, but not to that extent, and there's also a _whole lot_ of object-level visibility (across all levels) into the intent to actually deliver stuff that works and attracts customers (and therefore revenue) over the medium and long term.

I pretty strongly suspect that different industries (and different companies within industries) sit at different points on this scale. It's quite possible that the timeframe of object-level feedback loops and the mobility of the workforce force software companies to do better than established industrial companies that have managed to get government and market-perception protection for their revenue streams.

[ This applies generally, but I figure I should state it clearly here: unless otherwise stated, nothing I write is endorsed by my employer. It is solely based on my personal observations and opinions. And the necessity of including such a disclaimer is itself an indication that I do recognize some amount of CYA and Dilbert-ism in my work. ]

Comment by dagon on Infinity is an adjective like positive rather than an amount · 2019-05-30T18:17:58.595Z · score: 3 (2 votes) · LW · GW

I'm afraid you lost me. I don't understand why 5-7 is "inpositivity" when -2 is more precise and useful. Why do I want to "dull" a number (or number system)?

I'm pretty comfortable with my previous understanding of infinities - limits of an unbounded calculation and distinctions between different levels of cardinality. I don't see what this adds.


Comment by dagon on Quotes from Moral Mazes · 2019-05-30T18:07:13.058Z · score: 9 (4 votes) · LW · GW

Wow, thanks for this! It will take me a while to read all the excerpts, and I don't intend to read the book, but my initial impression is that this is a Dilbert-like parody of corporate culture, or perhaps a near-worst-case example, rather than a description of an average or median corporate work life. At least it doesn't match my own experience in large software firms, but I have the advantage (at least for the last 25 years or so, less so before that) of mobility: workers and line managers are highly sought after, so they can (somewhat) easily leave if a situation is too bad.

I would like to see a comparison to government (especially managers below the elected levels, as elections carry their own special pathologies), and a bit more acknowledgement of the object-level reality of corporations in terms of measurable things like revenue. It seems like most of these quotes are directly at odds with seeking profit (either long- or short-term), and it would be enlightening to hear why more efficient organizations aren't taking over.

Comment by dagon on Drowning children are rare · 2019-05-29T19:02:47.433Z · score: 5 (2 votes) · LW · GW

This may be true - desperation encourages in-group cooperation (possibly with increased out-group competition) and wealth enables more visible social competition. Or it may be a myth, and there's just different forms of domination and information obfuscation in pursuit of power, based on different resources and luxuries to be competed over. We don't have much evidence either way of daily life in pre-literate societies (or illiterate subgroups within technically-literate "civilizations").

We do know that groups of apes have many of the same behaviors we're calling "werewolf", which is some indication that it's baked in rather than contextual.

Comment by dagon on What is required to run a psychology study? · 2019-05-29T18:39:34.336Z · score: 14 (3 votes) · LW · GW

I, for one, would get some value out of seeing how you'd use such data. Instead of running a survey or study, write up the results for all possible (or a few likely) outcomes, without actually knowing which is true. What are you going to infer differently if, for example, 12% of people can write FizzBuzz rather than 40% or 80%?

Writing these (or at least the outlines of each) is a great way to pre-register the studies, to avoid worries about p-hacking.

Comment by dagon on Tales From the American Medical System · 2019-05-29T17:06:06.357Z · score: 2 (1 votes) · LW · GW

I believe he's pointing out that yes, this is about time and money, but the limits of the game include life and death. Death will sometimes result from this money-centric behavior.

I don't think that's particularly surprising or outrageous, but I'm a bit more cynical and semi-Malthusian than many seem to be. There are BILLIONS (and growing!) of humans with near-infinite wants. There is a finite supply of resources (also growing, possibly even faster than population, but still more limited than the wants of humans). Some people are going to travel internationally and eat fine meals, and to do so they will find ways to get paid for providing unnecessary "services", like writing a trivially-obvious prescription.

The system is such that a few people will die because they fail to jump through the hoops set up to ensure payment to the power-holders.


Comment by dagon on What is required to run a psychology study? · 2019-05-29T16:54:11.044Z · score: 4 (2 votes) · LW · GW

Informal surveys are done literally all the time, by university undergrads, sensationalist news organizations, political organizations, businesses, etc.

If you want to publish somewhere, you'll need to follow their rules. If you're using or establishing some sort of business or medical relationship with those surveyed, there are restrictions on how you can do that. Targeting or collecting data on under-18 humans in many jurisdictions is restricted, and I don't know what it takes. If you're calling or texting people, there are rules there too. The rules seem to be ignored a lot of the time, especially for informal one-time small-scale uses.

The bigger problem I see is validity of the study, and representativeness of the sample. The sample topics you give all seem to be about counting or quantifying something within a population. Most of your work will be in defining the population you're trying to measure and figuring out how to get a wide-ranging, evenly-distributed sample of responses within that population.

The other "most" of your work will be in figuring out how to get the data that actually tells you anything. There's a lot of individual variance in the topics given, and a lot of ambiguity in what results of any concrete test would show.

Comment by dagon on Drowning children are rare · 2019-05-29T16:28:45.954Z · score: 16 (5 votes) · LW · GW
> As far as I can tell, the "werewolf" thing is how large parts of normal, polite society work by default

This is true, and important. Except "werewolf" is a misleading analogy for it - they're not intentionally colluding with other secret werewolves, and it's not a permanent attribute of the participants. It's more that misdirection and obfuscation are key strategies in some social-competitive games, and these games are part of almost all humans' motivation sets, both explicitly (wanting to have a good job, be liked, etc.) and implicitly (trying to win every status game, whether it has any impact on their life or not).

The ones who are best at it (most visibly successful) firmly believe that the truth is aligned with their winning the games. They're werewolf-ing for the greater good, because they happen to be convincing the villagers to do the right things, not because they're eating villagers. And as such, calling it "werewolf behavior" is rejected.


Comment by dagon on Evidence for Connection Theory · 2019-05-28T17:41:04.153Z · score: 6 (3 votes) · LW · GW

It would help a lot to include a link to some description of what CT actually claims, before I read a description that asserts it should be evaluated based on evidence + elegance (and mentions, but doesn't seem to define, usefulness).

I only skimmed the document, but I couldn't tell if CT is about prediction of success of interventions, or the interventions themselves.

Comment by dagon on A shift in arguments for AI risk · 2019-05-28T17:14:15.089Z · score: 5 (2 votes) · LW · GW

I welcome more discussion of the different forms of takeoff trajectory, of competitive risks (both among AIs and among human+AI coalitions), and of value-drift risks.

I worry a fair bit (and don't really know whether the concern is even coherent) that I value individual experiences and diversity-of-values in a way that can erode if encoded and enforced in a formal way, which is one of the primary mechanisms being pursued by current research.

Comment by dagon on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-28T14:40:36.972Z · score: 2 (1 votes) · LW · GW

What utility do you get from keeping the promise, and how does it outweigh an extra $1 from bidding $99 (and getting $101) instead of $100?

If you're invoking Hofstadter's super-rationality (the idea that your keeping a promise is causally linked to the other person keeping theirs), fine. If you're acknowledging that you get outside-game utility from being a promise-keeper, also fine (but you've got a different payout structure than written). Otherwise, why are you giving up the $1?

And if you are willing to drop to $99 to get another $1 of payout, why isn't the other player (a kind of inverse super-rationality argument)?

Comment by dagon on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-28T05:01:33.373Z · score: 2 (1 votes) · LW · GW

You're just avoiding acknowledging the change in payoff matrix, not avoiding the change itself. If "breaking a promise" has a cost or "keeping a promise" has a benefit (even if it's only a brief good feeling), that's part of the utility calculation, and part of the actual payoff matrix used for decision-making.

Comment by dagon on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-27T17:03:54.293Z · score: 2 (1 votes) · LW · GW

If you bid $2 you get at least $2 (you might get $4 if your partner bids above $2, but there's no partner bid that can get you less than $2). If you bid anything more than $2, you might get $0, if the other party bids $2. The ($2, $2) outcome is the Nash equilibrium: the only state where neither player can improve their payout by unilaterally changing their own bid.

If you're trying to maximize average/expected payout, and you have some reason to believe that the other player is empathetic, super-rational, or playing a different game than stated (like part of their payout is thinking of themselves as cooperative), you should usually bid $100. Playing against an alien or an algorithm who you expect is extremely loss-averse and trying to maximize their minimum payout, you should do the same and bid $2.
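A small sketch of the payoff structure being described here (assuming the standard $2 bonus/penalty version of the game):

```python
def payoff(my_bid, other_bid, bonus=2):
    # Both players get the lower bid; the lower bidder gains the bonus
    # and the higher bidder pays it as a penalty.
    low = min(my_bid, other_bid)
    if my_bid < other_bid:
        return low + bonus
    if my_bid > other_bid:
        return low - bonus
    return low

# ($2, $2) is the Nash equilibrium: against a $2 bid, no unilateral
# deviation improves your payout.
assert all(payoff(2, 2) >= payoff(b, 2) for b in range(2, 101))

# But against a $100 bid, $99 beats $100 - the incentive that unravels
# the game step by step down to $2.
print(payoff(99, 100), payoff(100, 100))  # 101 100
```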


Comment by dagon on Newcomb's Problem: A Solution · 2019-05-27T08:07:31.291Z · score: 5 (2 votes) · LW · GW

We can't do the experiment because the problem isn't real. So appealing to Galileo and experiments is at best misleading. There's no reality we're testing here.

A _LOT_ hinges on how Omega is performing this impossible feat. I assert that two-boxers believe that past averages don't apply to this instance - they don't actually expect to get $1,000, they expect $1,001,000. But we can't be sure what they're thinking, and we can't be sure what Omega's mechanism is, because we can't do the experiment.

The thought experiment is far enough removed from reality that it doesn't tell us much about ... anything. When I first heard it a few decades ago, it seemed to be about free will, and even then it didn't teach anything, as it assumes the answer is "no". Now it's morphed into ... something something decision theory. And it still doesn't map to any reality, so it still doesn't have much truth-value.


Did the recent blackmail discussion change your beliefs?

2019-03-24T16:06:52.811Z · score: 37 (14 votes)