Posts

Dagon's Shortform 2019-07-31T18:21:43.072Z · score: 3 (1 votes)
Did the recent blackmail discussion change your beliefs? 2019-03-24T16:06:52.811Z · score: 37 (14 votes)

Comments

Comment by dagon on Prediction Markets Don't Reveal The Territory · 2019-10-13T15:08:20.828Z · score: 3 (2 votes) · LW · GW

Yes: "predict test scores if we include passive hypnotic audio during sleep for students", with it being no-bet if we don't do that. Do this for a number of interventions, and then perform the one with best predicted results (and refund all the bets on both sides for conditions that did not occur).

Comment by dagon on Social Class · 2019-10-11T21:52:15.839Z · score: 11 (2 votes) · LW · GW

You need to add timescales and expectation into your model. Neither economic nor social class is about your current wealth, salary, activities, or even peer groups. Class is about what you expect to be near-constant for your life, and what you hope for your children.

Comment by dagon on Reflections on Premium Poker Tools: Part 3 - What I've learned · 2019-10-11T21:38:33.522Z · score: 4 (2 votes) · LW · GW

I notice you're talking about market size, but not about marginal value to the median member of that market. I no longer play poker seriously, but I still know a number of pros and some authors/publishers/coaches that were reasonably successful. When you say:

if you're a poker player and are actually trying to get better, that you should have some sort of poker software to study with

You're making a few major errors. Assuming that much of your potential audience is all that dedicated to getting better is one - poker survives on optimistic idiots, not on smart, patient, long-term thinkers. There are very few players who win consistently, and most of those don't spend that much effort studying.

Assuming that studying is the main part of poker success is also a mistake. Most profit comes from identifying and exploiting opponent mistakes, not from optimizing your own play beyond a certain competence. It's very hard, even for very good players, to formally model the kinds of mistakes they profit from, so it's very hard to imagine that software calculations help them improve their exploitative play.

Assuming that software is a critical part of studying is another, and assuming that your paid software is the one people would choose is even worse. There's plenty of pretty decent free software and many reams of printed study material. And once you reach a certain level of competence, you get paid to study by actually playing hands and then reviewing hand histories (often with the aid of free software) afterward. The exception is live helpers (aka "cheats") for online games - those that scrape the screen and sometimes even click buttons for you, based on shared player databases. I'm out of the current market, but these were big business a decade ago.

The nice thing is you have a built-in revenue method: if the software is really all that great, you should use it to make money at poker, rather than selling it to your opponents.


Comment by dagon on Bets and updating · 2019-10-08T14:02:48.388Z · score: 3 (2 votes) · LW · GW

Adversarial evidence (that which is specifically crafted to make your beliefs worse) is tricky. It still fits, but you need to expand the circle of beliefs you're updating. What are your priors that the bet/evidence comes from Omicron or Omega?

Comment by dagon on Realigning Housing Coalitions · 2019-10-08T13:59:48.494Z · score: 3 (2 votes) · LW · GW

Unfortunately (or perhaps not), it's not possible in the modern world to make trades and compromises. Democracy won; everything is too visible, public, and obvious to quietly give up some of your goals in order to meet others. There's no "in exchange for" anymore.

Voters and public figures want exactly their plan, and will not support those who alter it.



Comment by dagon on What are your strategies for avoiding micro-mistakes? · 2019-10-05T00:02:54.397Z · score: 4 (2 votes) · LW · GW

This applies to any sort of coding, as well. Trivial mistakes compound and are difficult to find later. My general approach is unit testing. For each small section of work, do the calculation or process or sub-theorem or whatever in two different ways. Many times the check calculation can be an estimate, where you're just looking for "unsurprising" as the comparison, rather than strict equality.
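
A minimal sketch of that habit in Python (the computation is a made-up stand-in): compute the answer one way, sanity-check it against a rough estimate, and only flag "surprising" disagreement rather than demanding exact equality.

```python
# Toy example of checking a calculation two ways: an exact sum of squares
# versus the crude closed-form estimate n^3/3, flagging only "surprising"
# disagreement rather than strict equality.

def sum_of_squares(n):
    return sum(i * i for i in range(1, n + 1))

def rough_estimate(n):
    return n ** 3 / 3  # rough approximation, good enough for a sanity check

def check(n, tolerance=0.05):
    exact = sum_of_squares(n)
    estimate = rough_estimate(n)
    surprise = abs(exact - estimate) / exact
    assert surprise < tolerance, f"n={n}: exact {exact} vs estimate {estimate:.0f}"
    return exact

print(check(1000))  # passes: the two methods agree to within 5%
```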


Comment by dagon on What are we assuming about utility functions? · 2019-10-02T23:04:26.902Z · score: 2 (1 votes) · LW · GW

I think there's a reasonable position that the CEV hypothesis is false (humans are just hopeless), AND that the ASI hypothesis is true. Somewhere in the evolution and increase of intelligent modeling of behaviors, it'll become clearly more effective/selective/powerful to have consistent utility-like evaluation of future world-states.

Comment by dagon on I try not to have opinions · 2019-10-01T17:02:55.352Z · score: 5 (3 votes) · LW · GW

Not convinced. An opinion is a heuristic or shorthand for a set of beliefs and preferences. The actual state of the world, and my preferences over potential future states, are far more complex and detailed than fit in one human brain (or any other modelling substrate: the universe is its own best model; no local portion of it can contain the whole).

"Bob Shepherd is a good politician" is a set of beliefs about how Bob acts and will act, and preferences about how you prefer him to act. It's far too compressed to express the detail of prediction and preference that you actually hold (which is too compressed for the actual eventual result), but it's sufficient to express the sentiment.

If you want more detail, use more words (though you'll never have enough). The choice to summarize does NOT imply that the summary is all there is, nor that it's sufficient for all purposes.

Comment by dagon on The MA Estate Tax is Broken · 2019-09-24T16:02:33.620Z · score: 4 (2 votes) · LW · GW

At those margins, there are a lot of ways to avoid that particular issue - mostly by giving to other recipients to keep you at $0.999M, unless you can give over $1.038M.
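
A back-of-envelope illustration of the cliff (the flat $39K tax below is a hypothetical stand-in for the real graduated, whole-estate MA schedule): once the estate crosses $1M the whole thing is taxed, so heirs can net less from a $1.02M estate than from a $0.999M one, and the estate needs to exceed roughly $1.038M before they come out ahead.

```python
# Back-of-envelope illustration of the estate-tax cliff.  The flat $39,000
# tax on estates over $1M is a hypothetical stand-in for the actual
# graduated, whole-estate MA schedule.

THRESHOLD = 1_000_000
HYPOTHETICAL_TAX = 39_000  # assumed flat tax once the threshold is crossed

def net_to_heirs(estate):
    return estate if estate < THRESHOLD else estate - HYPOTHETICAL_TAX

for estate in (999_000, 1_020_000, 1_038_000, 1_050_000):
    print(f"${estate:,} estate -> ${net_to_heirs(estate):,} to heirs")

# $999,000 estate -> $999,000 to heirs
# $1,020,000 estate -> $981,000 to heirs   (worse off than staying under $1M)
# $1,038,000 estate -> $999,000 to heirs   (break-even with the cliff)
# $1,050,000 estate -> $1,011,000 to heirs
```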

This is a long way from the worst thing about inheritance taxes or about common tax systems today.

Comment by dagon on Taxing investment income is complicated · 2019-09-23T18:13:16.804Z · score: 2 (1 votes) · LW · GW

I think the reverse is true as well - many "same" taxes include diversity already. Taxing "consumption" is actually taxing many millions of different things (with some correlation, but not 100%). Likewise "income" or even "regular income" is really a tax on many different kinds of income.

Comment by dagon on Taxing investment income is complicated · 2019-09-22T15:54:54.966Z · score: 3 (2 votes) · LW · GW

If this is true, wouldn't we have to worry about correlation between types of tax? Taxing A at 1 and B at 1 has social cost 2 if they're totally independent and social cost 4 if they're actually the same thing.
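
Spelling out that arithmetic under the standard approximation that deadweight loss grows with the square of the tax rate (proportionality constant normalized to 1 here):

```latex
% Deadweight loss approximated as proportional to the squared rate.
\[
\mathrm{DWL}(t) \approx t^{2}
\]
\[
\text{independent bases: } \mathrm{DWL}(1) + \mathrm{DWL}(1) = 1 + 1 = 2,
\qquad
\text{same base: } \mathrm{DWL}(1+1) = 2^{2} = 4 .
\]
```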

Comment by dagon on How Much is Your Time Worth? · 2019-09-02T17:36:34.206Z · score: 6 (7 votes) · LW · GW

This is a good reminder for those who haven't thought of it or haven't examined their choices and habits for a while. But https://www.lesswrong.com/posts/6NvbSwuSAooQxxf7f/beware-of-other-optimizing, and watch out for the rationalist habit of overweighting the legible (measurable, calculable) portions of life choices and ignoring the hidden and hard-to-quantify.

Consider that Philip may prefer not to have gotten much done on the dimensions he can communicate, and the heat gives him an easy justification. Or that the emotional cost of picking and committing to spend for the unit is higher than the benefit for a week or two.

Comment by dagon on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2019-08-29T18:07:03.002Z · score: 2 (1 votes) · LW · GW

What reward (and more importantly, what utility) does the predictor receive/lose for a correct/incorrect guess?

To the extent that "you" care about your clones, you should guess in ways that maximize the aggregate payout to all guessers. If you don't, then guess to maximize the guesser's payout even at the expense of clones (who will make the same guess, but be wrong more often).

Self-locating probabilities exist only to the extent that they influence how much utility the current decision-maker assigns to experiences of possibly-you entities.

Comment by dagon on The Missing Math of Map-Making · 2019-08-28T23:18:56.003Z · score: 3 (2 votes) · LW · GW

I am still finding it difficult to understand how the focus on causality of mapmaking is more helpful than examining the intent to summarize (which encompasses which information gets thrown away, based on what domain of prediction the map is created for) and the (pretty pure Bayesian) accuracy of predictions.

Comment by dagon on Don't Pull a Broken Chain · 2019-08-28T17:57:13.063Z · score: 4 (2 votes) · LW · GW

Note that mapping is a type of abstraction that's independent of causal chains and feedback/control loops. You can make an excellent thermostat that doesn't understand a thing about thermodynamics, air flow, or control theory (though you may need to know something about these things in order to make the thermostat work well).
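
A minimal bang-bang (hysteresis) controller in Python makes the point concrete: it regulates temperature with nothing but a setpoint and a dead band, and contains no model of thermodynamics or airflow at all (the simulated room below is an arbitrary stand-in for real physics).

```python
# A "dumb" thermostat: bang-bang control with hysteresis.  It carries no
# model of the room, the heater, or heat flow -- just a setpoint and a
# dead band -- yet it regulates temperature perfectly well.

def thermostat_step(current_temp, heater_on, setpoint=20.0, band=0.5):
    """Return whether the heater should be on for the next time step."""
    if current_temp < setpoint - band:
        return True    # too cold: turn (or keep) the heater on
    if current_temp > setpoint + band:
        return False   # too warm: turn (or keep) the heater off
    return heater_on   # inside the dead band: leave it alone

# Tiny simulated room, purely for demonstration (numbers are arbitrary):
temp, heater = 17.0, False
for _ in range(30):
    heater = thermostat_step(temp, heater)
    temp += 0.3 if heater else -0.2   # crude stand-in for the real physics
print(round(temp, 1))  # oscillates within the dead band, near the 20.0 setpoint
```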



Comment by dagon on Cartographic Processes · 2019-08-27T23:41:49.172Z · score: 2 (1 votes) · LW · GW

Hmm. I might have a sense of where you're going, but the terminology is confusing to me. Nothing happens spontaneously; every future state happens because of the past state of the universe, so your intro makes very little sense to me. I think the distinction you're pointing to isn't spontaneous/caused - it's natural/artificial, or maybe automatic/planned, or maybe inevitable/intentional. In any case, it seems to be about human conscious decisions to create the map. I'm not sure why this doesn't apply to the human conscious decision to create the roads being mapped, but I suspect there's an element of objective/subjective in there, or full-fidelity/simplified-model.

I'm also unsure if the "cartographic process" is the human intent to make a map/model, or the physical steps (measurements, update of display, etc.) that generate the map.



Comment by dagon on Chris_Leong's Shortform · 2019-08-27T22:19:47.791Z · score: 2 (1 votes) · LW · GW

Not new that I could tell - but it's a refreshingly clear statement of strict determinism: free will is an illusion, and "possible" is in the map, not the territory. "Deciding" is how a brain feels as it executes its algorithm and takes the predetermined (but not previously known) path.

He does not resolve the conflict that it feels SOOO real as it happens.

Comment by dagon on A Personal Rationality Wishlist · 2019-08-27T21:24:23.414Z · score: 4 (2 votes) · LW · GW

Understanding how it works and remembering details when asked out of context may be very different things. I wish the participants had been given follow-up questions about how it works, and then the exercises repeated when a bicycle was present.

Comment by dagon on Chris_Leong's Shortform · 2019-08-27T18:55:34.253Z · score: 3 (2 votes) · LW · GW

The topic is interesting, but no discussion about it is interesting. These are not contradictory.

The open question about strong determinism vs libertarian free will is interesting, and there is a yet-unexplained contradiction between my felt experience (and others' reported experiences) and my fundamental physical model of the universe. The fact that nobody has any alternative model or evidence (or even ideas about what evidence is possible) that helps with this interesting question makes the discussion uninteresting.

Comment by dagon on Gratification: a useful concept, maybe new · 2019-08-26T19:09:55.241Z · score: 2 (1 votes) · LW · GW

True. There are two distinctions (I think) you're making from base utilitarianism (preferences over state of the universe in terms of agent-experienced utility):

1) This is about path, not state. You have an opinion about something to do/experience that's independent of any difference in expected value of a future state. It's also (I think) explicitly indexical - you care that it's you having this experience, not that it's experienced by more people.

2) This is about ... something ... which isn't on the pain/pleasure axis. I'm less sure of this one, as I tend to experience identity-affirming things as somewhat pleasurable and I'm not sure that's any less comparable on this dimension than any other pleasure or personal disappointment.

The torture example is similar on the first point, but misses the second. Is that roughly correct?

Comment by dagon on Gratification: a useful concept, maybe new · 2019-08-26T18:40:59.979Z · score: 4 (3 votes) · LW · GW

I've heard this referred to as "experience preference", or sometimes "experiential utility", in that they are things you want to experience, distinct from states you want the universe to be in. (skipping the rabbit hole of whether all experience is memory or whether this is a preference for having a memory vs having an experience).

It occurs a lot in the negative as well - things you don't want to experience (or don't want ANYONE to experience), regardless of the state of the universe afterward. Many torture-tradeoff discussions hinge on this point - to a lot of people, suffering is bad not because of consequences or because it reduces a hedonic sum, but because it is a dimension of bad in itself.

Comment by dagon on Schelling Categories, and Simple Membership Tests · 2019-08-26T18:33:34.383Z · score: 7 (4 votes) · LW · GW

I really appreciate the clarity of including both the math and the good examples. It may be useful to acknowledge that many of the observation/communication limits are in fact contingent on something unstated. What is it that keeps you from observing another variable or drawing better-fit categories?

but that's no reason to not aspire to the true precision of the Bayes-structure to whatever extent possible

Which brings us to the actual hard question: to what extent _is it_ possible? A lot of the time, the right answer is to notice that the compromise that's commonly made is undershooting the precision that you could handle, and determine which of your audience or conversational partners are ahead of or behind you in your modeling of the actual world. In your first example, you can ABSOLUTELY talk about financial emancipation, contract enforcement, or other adult rights that can be applied to some minors. Some things are taboo or sacred and you probably can't openly talk about them without severely violating norms, but you can still model the details inside your head to avoid truth-deflecting norms/laws.

Picking the right level of detail for the person or group (or internally, topic) you're dealing with is, IMO, far more important than picking the actual sharp lines to draw on a fuzzy canvas.

Comment by dagon on Am I going for a job interview with a woo pusher? · 2019-08-25T16:58:53.305Z · score: 3 (4 votes) · LW · GW

If you haven't interviewed in a while, no harm in practicing on them. It does seem pretty woo-ey, but one can make a pretty strong argument that most popular woo contains some actually helpful elements. I do worry that the job is actually a sales job in the guise of a technician role (like it's based on commissions or quotas for clients you bring in), and if that's not what you want, you should be extremely clear about it before you accept.

Comment by dagon on Torture and Dust Specks and Joy--Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces · 2019-08-25T16:27:58.616Z · score: 4 (3 votes) · LW · GW

I like this insight - not only nonlinear but actually discontinuous. There are some marginal instants of torture that are hugely negative, mixed in with those that are more mildly negative. This is due to something that's often forgotten in these discussions: the ongoing impact of a momentary experience.

Being "broken" by torture may make it impossible to ever recover enough for any future experiences to be positive. There may be a few quanta of brokenness, but it's not the case that every marginal second is all that bad, only some of them.

Comment by dagon on The "Commitment Races" problem · 2019-08-23T14:20:34.194Z · score: 2 (1 votes) · LW · GW

I think you're missing at least one key element in your model: uncertainty about future predictions. Commitments have a very high cost in terms of future consequence-effecting decision space. Consequentialism does _not_ imply a very high discount rate, and we're allowed to recognize the limits of our prediction and to give up some power in the short term to reserve our flexibility for the future.

Also, one of the reasons that this kind of interaction is rare among humans is that commitment is impossible for humans. We can change our minds even after making an oath - often with some reputational consequences, but still possible if we deem it worthwhile. Even so, we're rightly reluctant to make serious commitments. An agent who can actually enforce its self-limitations is going to be orders of magnitude more hesitant to do so.

All that said, it's worth recognizing that an agent that's significantly better at predicting the consequences of potential commitments will pay a lower cost for the best of them, and has a material advantage over those who need flexibility because they don't have information. This isn't a race in time, it's a race in knowledge and understanding. I don't think there's any way out of that race - more powerful agents are going to beat weaker ones most of the time.

Comment by dagon on Time Travel, AI and Transparent Newcomb · 2019-08-22T23:01:57.650Z · score: 2 (1 votes) · LW · GW

Let us suppose <impossible thing>. Now, how does <impossible result> remain impossible? Maybe the universe has a mysterious agency we can trick or bargain with!

I think you'll need to back up a bit further if you want to explore this. "time travel is possible" isn't well enough defined to be able to reason about, except in the human conceptual space with no physics attached. And if you're assuming away physics, you don't need to explain anything, just let the paradoxes happen.


Comment by dagon on Davis_Kingsley's Shortform · 2019-08-20T23:51:26.276Z · score: 3 (2 votes) · LW · GW

I can't tell if this is just another example that strategic choices tend to be valuable (guaranteed non-negative, but in practice usually positive). OF COURSE an opponent's choice is going to reduce your value in a zero-sum game.

I do want to warn against applying this to other aspects of life that aren't purely zero-sum and aren't designed by a human to balance the power between both parties. See also https://www.lesswrong.com/posts/rHBdcHGLJ7KvLJQPk/the-logical-fallacy-of-generalization-from-fictional

Comment by dagon on Goodhart's Curse and Limitations on AI Alignment · 2019-08-20T23:24:09.907Z · score: 4 (2 votes) · LW · GW

I don't understand why https://en.wikipedia.org/wiki/Theory_of_the_second_best doesn't get more consideration. In a complex interconnected system, V can not only be much less than E, it can be much less than would be obtained with ~C. You may not get mere utopia, you may get serious dystopia.


Comment by dagon on What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? · 2019-08-20T23:03:30.749Z · score: 8 (8 votes) · LW · GW

Other reasons that people may have (I have some of these reasons, but not all):

  • not a classical utilitarian
  • don't believe those timelines
  • too distant to feel an emotional tie to
  • unclear what to do even if it is a priority
  • very high discount rate for future humans
  • belief that moral value scales with cognitive ability (an extremely smart AI may be worth a few quintillion humans in a moral/experiential sense)

Of these, the one that I'm personally least moved by - while acknowledging it as one of the better arguments against utilitarianism - is the last. It's clear that there's SOME difference in moral weight for different experiences of different experiencers. Which means there's some dimension on which a utility monster is conceivable. If it's a dimension that AGI will excel on, we can maximize utility by giving it whatever it wants.


Comment by dagon on Swimmer963's Shortform · 2019-08-20T17:08:13.540Z · score: 2 (1 votes) · LW · GW

(I don't write fiction, but I have run and playtested a lot of RPGs, which share many of the worldbuilding elements.)

Among the hard parts is figuring out how much suspension of disbelief your audience will willingly bring, on what topics. This _is_ fiction, so we're not generally trying to truly predict a hypothetical "possible" outcome; we're trying to highlight similarities and differences from our own world. This VERY OFTEN implies assuming a similarity (where the point of departure has less effect than is likely) and then justifying it or constraining the departure, so it's less difficult to maintain that this element of society would still be recognizable.

Comment by dagon on Negative "eeny meeny miny moe" · 2019-08-20T16:14:48.591Z · score: 2 (4 votes) · LW · GW

Ehn. For kids who will EVER accept this as fair, you're putting too much thought into politics. If the kids are this manipulable, they'll probably accept your authority in the one-shot case as well.

Also, more iterations gives them more time to realize that you're cheating (by shifts in how to count syllables) or that the game is fully deterministic (and you're cheating by deciding who to start with).

This is only usable for low-stakes cases where the participants don't mind that it's not fair. And in those cases, don't waste time on pointless complexity. Of course, if this is part of the entertainment, I reverse that advice - choose the single-elimination method to extend the duration of the tension of not knowing.


Comment by dagon on Do We Change Our Minds Less Often Than We Think? · 2019-08-19T23:37:19.621Z · score: 9 (5 votes) · LW · GW

This question is very sensitive to reference classes and definitions. I change my estimates of my future choices very very often, but the vast majority of my decisions are too trivial to notice that I'm doing so. Yesterday at breakfast I thought I'd probably have tacos for dinner. I didn't.

For decisions that seem more important, I spend more time on them, and ALSO probably change my mind less often than I intend to. The job-change example is a good one: I usually know what I want after the first few conversations, but I intentionally force myself to consider other alternatives and collect data to be more confident in the decision. Part of my tactics for doing that research and consideration is to understate the chance that I'll pick the leading option (semi-intentionally; it's motivated by wanting to make a more reasoned decision in the future, not an honest neutral prediction, but I'd admit that if pressed).

I don't change my overall values or relationship tenets very often at all, but I do change priorities and activities a whole lot, and am often surprised by what seems best at the time, compared to when planning.

Comment by dagon on Beliefs Are For True Things · 2019-08-17T18:51:16.610Z · score: 4 (2 votes) · LW · GW

I think this is an important point. There's a false dichotomy (and a lossy reduction in dimensionality) in "you can believe true things or you can believe useful things". You can and should strive for useful true beliefs. If you're not https://wiki.lesswrong.com/wiki/Making_beliefs_pay_rent , you likely have some useless beliefs, and it's _really_ hard to define "true" for beliefs that don't actually do anything.

You shouldn't believe falsehoods full stop. Falsehoods are not useful beliefs, as they make incorrect predictions. You ALSO shouldn't spend a whole lot of effort on truth of beliefs that don't matter. You should have a LOT of topics where you don't have strong beliefs.

Comment by dagon on Matthew Barnett's Shortform · 2019-08-17T15:15:06.639Z · score: 2 (1 votes) · LW · GW

I like your phrasing better than mine. "only" is definitely too strong. "most likely path to"?

Comment by dagon on Matthew Barnett's Shortform · 2019-08-16T23:05:20.622Z · score: 2 (1 votes) · LW · GW

A lot depends on your model of progress, and whether you'll be able to predict/recognize what's important to understand, and how deeply one must understand it for the project at hand.

Perhaps you shouldn't frame it as "study early" vs "study late", but "study X" vs "study Y". If you don't go deep on math foundations behind ML and decision theory, what are you going deep on instead? It seems very unlikely for you to have significant research impact without being near-expert in at least some relevant topic.

I don't want to imply that this is the only route to impact, just the only route to impactful research.
You can have significant non-research impact by being good at almost anything - accounting, management, prototype construction, data handling, etc.

Comment by dagon on Matthew Barnett's Shortform · 2019-08-16T21:57:33.226Z · score: 3 (2 votes) · LW · GW

Beware motivated reasoning. There's a large risk that you have noticed that something is harder for you than it seems for others, and instead of taking that as evidence that you should find another avenue to contribute, you convince yourself that you can take the same path but do the hard part later (and maybe never).

But you may be on to something real - it's possible that the math approach is flawed, and some less-formal modeling (or other domain of formality) can make good progress. If your goal is to learn and try stuff for your own amusement, pursuing that seems promising. If your goals include getting respect (and/or payment) from current researchers, you're probably stuck doing things their way, at least until you establish yourself.

Comment by dagon on Matthew Barnett's Shortform · 2019-08-16T18:40:06.480Z · score: 2 (3 votes) · LW · GW

I think you might be Goodharting a bit (mistaking the measure for the goal) when you claim that final exam performance is productive. The actual product is the studying and prep for the exam, not the exam itself. The time limits and isolated environment are helpful for proctoring (they ensure the output is limited enough to be able to grade, and that no outside sources are being used), not for productivity.

That's not to say that these elements (isolation, concentration, time awareness, expectation of a grading/scoring rubric) aren't important, just that they're not necessarily sufficient nor directly convertible from an exam setting.

Comment by dagon on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-15T23:36:01.518Z · score: 3 (4 votes) · LW · GW

The problem with having the conversation in public is precisely that other people will be asking "wait, what precious thing, exactly?" which derails the high context conversation.

I get that, but if the high-context extensive private conversation doesn't (or can't) identify the precious thing, it seems somewhat likely that you're both politely accepting that the other may be thinking about something else entirely, and/or that it may not actually be a thing.

I very much like your idea that you should have the conversation with the default expectation of publishing at a later time. If you haven't been able to agree on what the thing is by then, I think the other people asking "wait, what precious thing exactly" are probably genuinely confused.

Note that I realize and have not resolved the tension between my worry that indescribable things aren't things, and my belief that much (and perhaps most) of human decision-making is based on illegible-but-valid beliefs. I wonder if at least some of this conversation is pointing to a tendency to leak illegible beliefs into intellectual discussions in ways that could be called "bias" or "deception" if you think the measurable world is the entirety of truth, but which could also be reasonably framed as "correction" or "debiasing" a limited partial view toward the holistic/invisible reality. I'm not sure I can make that argument, but I would respect it and take it seriously if someone did.

Comment by dagon on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-15T20:41:55.966Z · score: 5 (7 votes) · LW · GW

(note: this is more antagonistic than I feel - I agree with much of the direction of this, and appreciate the discussion. But I worry that you're ignoring a motivated blind spot in order to avoid biting some bullets).

So, there's something precious that dissolves when defined, and only seems to occur in low-stakes conversations with a small number of people. It's related to trust, ability to be wrong (and to point out wrongness). It feels like the ability to have rational discourse, but that feeling is not subject to rational discourse itself.

Is it possible that it's not truth-seeking (or more importantly, truth itself) you're worried about, but unstated friendly agreement to ignore some of the hard questions? In smaller, less important conversations, you let people get away with all sorts of simplifications, theoretical constructs, and superficial agreements, which results in a much more pleasant and confident feeling of epistemic harmony.

When it comes time to actually commit real resources, or take significant risks, however, you generally want more concrete and detailed agreement on what happens if you turn out to be incorrect in your stated, shared beliefs. Which indicates that you're less confident than you appear to be. This feels bad, and it's tempting for all participants to now accuse the other of bad faith. This happens very routinely in friends forming business partnerships, people getting married, etc.

Maybe it's not a loss in truth-seeking ability, it's a loss of the ILLUSION of truth-seeking ability. Humans vary widely in their levels of rationality, and in their capability to hold amounts of data and make predictions, and in their willingness to follow/override their illegible beliefs in favor of justifiable explicit ones. It's not the case that the rationalist community is no better than average: we're quite a bit better than average (and conversations like this may well improve it further). But average is TRULY abysmal.

I've long called it the "libertarian dilemma": agency and self-rule and rational decision-making are great for me, and for those I know well enough to respect, but the median human is pretty bad at it, and half of them are worse than that. When you're talking about influencing other people's spending decisions, it's a really tough call whether to nudge/manipulate them into making better decisions than they would make if you neutrally presented information in the way you (think you) prefer. Fundamentally, it may be a question of agency: do you respect people's right to make bad decisions with their money/lives?




Comment by dagon on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-15T18:41:29.279Z · score: 7 (6 votes) · LW · GW

This is awesome! I cannot sufficiently express my admiration for trying to make these kinds of discussions transparent and accessible.

There's a lot of surface area in this, even in the summary, so I don't think I can do justice in a comment. I'll instead just highlight a few things that resonated or confused me.

  • I don't know if the ambiguity is intentional, but I'm put off by statements like "billions of dollars at stake". If it's just a shorthand for "now it's real, and has real-world impact", fine, but different readers will have very different emotional reactions to that framing. I can read it as semi-literal "things we say have some amount of influence over people who will spend billions of dollars in the next few years", direct-literal "we have a budget in the billions that we're directly allocating", or hyperbolic "over the course of time, small improvements in thinking can shift billions of dollars in acting", and those three interpretations somewhat change my reaction to the rest of the discussion. Also, if it's the last one, you're thinking too small. Billions isn't that much in today's world - if you can solve politics, you're influencing at least trillions, and potentially beyond countable money.
  • It's not clear (but you don't list it as an important crux, so I'm confused) what that "precious thing" actually is. It may be the ability to publicly admit epistemic uncertainty while still having public support (and money). It may be the ability to gracefully give up resources when it turns out you're wrong. It may be the ability to call out errors (motivated or not, systemic or not) without defensive counter-accusations. It may be an assumption of cooperation, so not much effort needs to be spent on skepticism of intent (only on cooperative skepticism of data and models). It may be something else I haven't identified. I suspect these are all related, but will need to be addressed separately.
  • The public vs private aspect is, I believe, far more critical than you seem to give it credit for. Especially if the $billions is other people's money, you'll need to deal with the fact that you're discussing other people's rationality (or lack thereof). If so, this is squarely in https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline territory. And deeply related to questions of elitism vs populism, and how to accept the existence of power dynamics that are based on different criteria than you'd like.

Comment by dagon on Adjectives from the Future: The Dangers of Result-based Descriptions · 2019-08-14T17:00:55.013Z · score: 3 (2 votes) · LW · GW

Why not minimize the manipulation by describing both the intent and the means

I believe, in most cases, this actually happens when you read/discuss beyond the headline. Use more words, actually put effort into understanding rather than just assuming the 2-4 word description is all there is.

In the examples you give, it would be somewhat misleading to describe both motive and method - "weight-loss program" doesn't specify mechanism because it applies to a lot of different mechanisms. The person describing it wants to convey the intent, not the mechanism - that detail is important for some things, and not for others, so it's left to you to decide if you want it. "Against Malaria" likewise. They believe that the right tactic is mosquito nets, but if things change and that stops being the case, they don't intend to change their mission or identity in order to use laser-guided toads or whatever.

Comment by dagon on Adjectives from the Future: The Dangers of Result-based Descriptions · 2019-08-13T23:44:53.200Z · score: 3 (2 votes) · LW · GW

I'd characterize these as "intent-description" as opposed to "activity-description". And I think the underlying problem is the compression inherent in short, catchy phrases to describe a complex endeavor that includes thousands or more people working on it. Famously and only somewhat tongue-in-cheek, one of the two unsolved computer science problems is "naming things".

Failure to look into the model and activity behind any intended consequence will leave you open to manipulation and incorrect expectations. Failure to look at the intent can lead you to ignore the possibility that tactics and methods might need to change, and how aware the org is of that.


Comment by dagon on Hazard's Shortform Feed · 2019-08-13T21:43:55.810Z · score: 4 (2 votes) · LW · GW

This happens on LW as well, fairly often. It's hard to really introduce a topic in a way that people BELIEVE you when you say you're exploring concept space and looking for ideas related to this, rather than trying to evaluate this actual statement. It's still worth trying to get that across when you can.

It's also important to know your audience/discussion partners. For many people, it's entirely predictable that when you say "I'm thinking about ... get everyone on patreon" they will react to the idea of getting their representation of "everyone" on their ideas of "patreon". In fact, I don't know what else you could possibly get.

It may be better to try to frame your uncertainty about the problem, and explore that for a while, before you consider solutions, especially solutions to possibly-related-but-different problems. WHY are you thinking about funding and revenue? Do you need money? Do you want to give money to someone? Do you want some person C to create more content and you think person D will fund them? It's worth it to explore where Patreon succeeds and fails at whatever goals you have, but first you have to identify the goals.

Comment by dagon on Natural laws should be explicit constraints on strategy space · 2019-08-13T20:50:32.381Z · score: 10 (4 votes) · LW · GW

I'm not sure I understand your recommendation. You talk about the pilot as a constraint and the obvious removal of that constraint (unmanned fighters). This is the opposite of a natural law: it's an assumed constraint, or a constraint within a model, not a natural law.

I think " We have a good command of natural law at the scale where warmachines operate. " is exactly opposite of what I believe. We have some hints as to natural law in those scales, but we're nowhere near those constraints. There are a huge number of contingent constraints in our technology and modeling of the problem, which are very likely overcome-able with effort.

[edit after re-reading]

Do you mean "_only_ natural laws should be explicit constraints"? You're recommending that if we think we're constrained and can't identify the natural law that's binding, the constraint is probably imaginary or contingent on some other thing we should examine?

Comment by dagon on Could we solve this email mess if we all moved to paid emails? · 2019-08-12T17:14:33.574Z · score: 10 (3 votes) · LW · GW

May I ask for a resolution comment (or follow-up questions if not resolved) when you've decided that this question has sufficient answers to make a decision or summarize a consensus?

It's not fair to pick on this one, and I apologize for that, but this is one of a number of recent topics that generate opinions and explore some models (some valuable, many interesting), but then kind of die out rather than actually concluding anything.


Comment by dagon on Dony's Shortform Feed · 2019-08-12T16:39:50.619Z · score: 2 (1 votes) · LW · GW

Beware over-generalization and https://wiki.lesswrong.com/wiki/Typical_mind_fallacy. There's a LOT of variation in human capabilities and preferences (including preferences about productivity vs rest). Some people do have 100-hour workweeks (I did for a while, when I was self-employed).

Try it, see how it works for you. If you're in a position of leadership over others, give them room to find what works best for them.

Comment by dagon on Could we solve this email mess if we all moved to paid emails? · 2019-08-12T13:38:16.468Z · score: 4 (2 votes) · LW · GW

It doesn't require you to know the right people, it requires you to expend effort to determine the right people, and then to convince THOSE less-busy people that the referral is valuable.

For many, they have assistants or employees who perform this as an actual task - filter the volume of contacts and handle most things, escalating those that warrant it. That's great. For others, this is more informal - they have other communication channels like LessWrong, or twitter, or social network meshes, and you can get their attention by getting the attention of any of their friends or posting interesting stuff on those channels.

Either way (or ways in between and outside this), it uses the community to signal value of communication between individuals, rather than only discrete per-message signals that ignore any context.

Basically, there are two cases:

1) the recipient will want to talk with you, but doesn't know it. In this case, you need to show that you're interesting, not that you're interested. Spending money isn't interesting. Being interesting to people around me is interesting.

2) the recipient won't care, even after reading. In this case, money may compensate for their time, but probably not and it doesn't get you the attention you want anyway. A useless reply isn't worth their time nor your money.

Note that I'm assuming you're talking about trivial amounts of money (less than full-time equivalent pay for their time), and for more than a trivial form-letter response to collect the bounty. I'd be very interested in a SINGLE concrete example where any amount of money is a good value for both parties who wouldn't otherwise connect. Ideally, you'd give two examples: one of someone you wouldn't respond to without your $5, and one of someone who's not responding to you, who you'd pay $X to do so (including what X you'd pay and what kind of response would qualify).


After some more thought, I think my main objection is that adding small amounts of money to a communication is a pretty strong NEGATIVE signal that I want to read the communication. I want to read interesting things that lead to more interesting things. The fact that someone will pay to have me read it is an indication that I don't want to read it otherwise.

Comment by dagon on Could we solve this email mess if we all moved to paid emails? · 2019-08-12T00:25:14.984Z · score: 2 (1 votes) · LW · GW

I will happily accept payment for reading and responding to e-mail. I will not pay to send one, and I don't know of any cases where I feel the need to pay for someone's initial reading of an e-mail (I may want to pay for their attention, but that will be a negotiation or fee for a thing, not for a mail).

What _might_ be valuable is a referral service - a way to have someone who (lightly) knows you and who (somewhat) knows the person you want to correspond with, who can vouch for the fact that there's some actual reason not to ignore your mail. No payment in money, some payment (and reinforcement) in reputation.

Basically, e-mail isn't the problem, the variance in quality of things for me to look at is the problem. Curation is the answer, not payment.


Comment by dagon on Matthew Barnett's Shortform · 2019-08-09T20:22:39.442Z · score: 2 (1 votes) · LW · GW

The problem is, if a conversational topic can be hurtful, the meta-topic can be too. "do you want to talk about the test" could be as bad or worse than talking about the test, if it's taken as a reference to a judgement-worthy sensitivity to the topic. And "Can I ask you if you want to talk about whether you want to talk about the test" is just silly.

Mr-hire's comment is spot-on - there are variant cultural expectations that may apply, and you can't really unilaterally decide another norm is better (though you can have opinions and default stances).

The only way through is to be somewhat aware of the conversational signals about what topics are welcome and what should be deferred until another time. You don't need prior agreement if you can take the hint when an unusually-brief non-response is given to your conversational bid. If you're routinely missing hints (or seeing hints that aren't), and the more direct discussions are ALSO uncomfortable for them or you, then you'll probably have to give up on that level of connection with that person.


Comment by dagon on Why do humans not have built-in neural i/o channels? · 2019-08-09T16:14:12.711Z · score: 2 (1 votes) · LW · GW

Fair enough - I underestimate the power of evolution at my epistemic peril. My point remains: more direct communication (unfiltered by many levels of decoding and processing) could easily be more harmful than helpful.

Aside from Snow Crash / basilisk scenarios (which are as yet undemonstrated), language and vision are pretty safe, as they're filtered through a lot of neural systems that find and pay special attention to surprising things. This is slow, but makes it way harder to trick than a more direct interface would be.

Some drugs are an example of more direct impacts that are available today. If there were such a thing that's actually guided by a human specifically to alter your mind in ways desired by the communicator, it would be quickly abused and removed.