Posts

For moderately well-resourced people in the US, how quickly will things go from "uncomfortable" to "too late to exit"? 2020-06-12T16:59:56.845Z · score: 72 (31 votes)
Money isn't real. When you donate money to a charity, how does it actually help? 2020-02-02T17:03:04.426Z · score: 19 (11 votes)
Dagon's Shortform 2019-07-31T18:21:43.072Z · score: 3 (1 votes)
Did the recent blackmail discussion change your beliefs? 2019-03-24T16:06:52.811Z · score: 37 (14 votes)

Comments

Comment by dagon on Industrial literacy · 2020-09-30T21:51:02.755Z · score: 0 (0 votes) · LW · GW

Ehn.  Nobody really understands anything, we're just doing the best we can with various models of different complexity.  Adam Smith's pin factory description in the 18th century has only gotten more representative of the actual complexity in the world and the impossibility of fully understanding all the tradeoffs involved in anything.  Note also that anytime you frame something as "responsibility of every citizen", you're well into the political realm.

You can see the economy as a set of solutions to some problems, but you also need to see it as an exacerbation of other problems.  Chesterton's Fence is a good heuristic for tearing down fences, where it's probably OK to let one stand for a while while you think about it.  It's a crappy way to decide whether you should get off the tracks before you understand the motivation of the railroad company.

I suspect that if people really understood the cost to future people of the contortions we go through to support this many simultaneous humans in this level of luxury, we'd have to admit that we don't actually care about them very much.  I sympathize with those who are saying "go back to the good old days" in terms of cutting the population back to a sustainable level (1850 was about 1.2B, and it's not clear even that was sparse/spartan enough to last more than a few millennia).  

Comment by dagon on "Zero Sum" is a misnomer. · 2020-09-30T19:08:45.772Z · score: 2 (1 votes) · LW · GW

Thanks for this - it's helpful to have a detailed description of some common misconceptions about types of games.  Personally, I don't particularly mind "zero sum" as the common term, interchangeable with "constant sum", and I'll only have to care about the misconception when someone's making an erroneous inference based on it.

I believe that the mistake in using the term "zero-sum" for games like "theft" or "elections" is NOT that the term zero-sum is limited, but that it throws out incredibly important information in the mapping.  It's just wrong to treat future interactions and trust as outside the decision.  In most real-world cases, the externalities and unmodeled effects are orders of magnitude bigger than the actual outcome of the game under discussion.

Comment by dagon on capybaralet's Shortform · 2020-09-30T16:56:30.553Z · score: 2 (1 votes) · LW · GW

What part is scary?  I think they're missing out on the sheer variety of model usage - probably as variable as software deployments.  But I don't think there's anything particularly scary about any given point on the curve.

Some really do get built, validated, and deployed twice a year.  Some have CI pipelines that re-train with new data and re-validate every few minutes.  Some are self-updating, and re-sync to a clean state periodically.  Some are running continuous a/b tests of many candidate models, picking the best-performer for a customer segment every few minutes, and adding/removing models from the pool many times per day.
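
To make that last pattern concrete, here's a minimal Python sketch of per-segment champion selection (epsilon-greedy; the class, names, and reward bookkeeping are all invented for illustration, not any particular serving stack):

```python
import random
from collections import defaultdict

class ModelPool:
    """Tracks per-segment performance of candidate models and routes
    each request to the best recent performer (epsilon-greedy)."""

    def __init__(self, candidates, epsilon=0.1):
        self.candidates = list(candidates)   # model identifiers
        self.epsilon = epsilon               # fraction of traffic spent exploring
        # stats[segment][model] = [total_reward, trials]
        self.stats = defaultdict(lambda: {m: [0.0, 0] for m in self.candidates})

    def pick(self, segment):
        if random.random() < self.epsilon:   # explore a random candidate
            return random.choice(self.candidates)
        s = self.stats[segment]              # exploit best observed mean reward
        return max(self.candidates,
                   key=lambda m: s[m][0] / s[m][1] if s[m][1] else 0.0)

    def record(self, segment, model, reward):
        s = self.stats[segment][model]
        s[0] += reward
        s[1] += 1

pool = ModelPool(["model_a", "model_b", "model_c"])
m = pool.pick("segment_1")
pool.record("segment_1", m, reward=1.0)  # e.g. a click or conversion
```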

Comment by dagon on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-29T20:29:04.335Z · score: 9 (5 votes) · LW · GW

where does this expectation come from?

I hadn't paid attention to the topic, and did not know it had run last year with that result (or at least hadn't thought about it enough to update on), so that expectation was my prior.

Now that I've caught up on things, I realize I am confused.  I suspect it was a fluke or some unanalyzed difference in setup that caused the success last year, but that explanation seems a bit facile, so I'm not sure how to actually update.  I'd predict that running it again would result in the button being pressed, but I wouldn't wager very much (in either direction).

Comment by dagon on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-29T20:06:28.610Z · score: 13 (5 votes) · LW · GW

Just a datapoint on variety of invitees: I was included in the 270, and I've invested hundreds of hours into LW.  While I don't know you personally outside the site, I hope you consider me a trusted acquaintance, if not a friend.  I had no clue this was anything but a funny little game, and my expectation was that there would be dozens of button presses before I even saw the mail.

I had not read nor paid attention to the Petrov Day posts (including prior years).  I had no prior information about the expectations of behavior, the weight put on the outcome, nor the intended lesson/demonstration of ... something that's being interpreted as "coordination" or "trust".

I wasn't using the mental model that indicated I was being trusted not to do something - I took it as a game to see who'd get there first, or how many would press the button, not a hope that everyone would solemnly avoid playing (by passively ignoring the mail).  I think without a ritual for joining the group (opt-in), it's hard to judge anyone or learn much about the community from the actions that occurred.

Comment by dagon on Doing discourse better: Stuff I wish I knew · 2020-09-29T18:30:38.266Z · score: 7 (3 votes) · LW · GW

Beware mixing up different kinds and purposes of communication.  Your friend's LOLOLOLOLOL is understating the complexity by a long way.  

For two-person conversations, where both (claim to be) seeking truth rather than signaling dominance or quality, and where both are reasonably intelligent and share a lot of cultural background, and where there's time and willingness to invest in the topic, https://www.lesswrong.com/tag/double-crux is an awesome technique.  Very often you won't resolve the answer, but you'll identify the un-resolvable differences in model or weight of utility you each have.  And you'll be able to (if you're lucky) identify portions of the topic where you can actually change your beliefs (and your partner may change some beliefs as well, but it's important for this not to be a goal or a contest - it doesn't matter who started out more wrong, if you can jointly be less wrong).

Where these conditions do not hold (more than two people, some participants less committed to truth-seeking, no face-to-face communication to help reinforce the purpose of this part of the relationship, not everyone with similar background models or capability of understanding the same level of discussion, etc.), the mix between truth-seeking and signaling changes, and there is a tipping point at which truth-seeking becomes obscured.  Your failure mode list is not sufficient, even if we had working counters to them - there are unique modes for every site, and they blend together in different ways over time.  To paraphrase Tolstoy: great communities are all alike, bad communities fail each in its own way.

I recommend you also include temporal value in your analysis of success or failure of a site/community/forum.  Even if the things you list do succumb to death spirals, they were insanely valuable successes for a number of years, and much of that value remains long after they stop generating very much good new discussion.  

Comment by dagon on How often do series C startups fail to exit? · 2020-09-29T14:36:04.588Z · score: 2 (1 votes) · LW · GW

Gah, I thought ARR was Annual Run Rate (Burn Rate), not Revenue.  I meant to say they'll use their newfound capital much faster than they increase revenue (which they should!  that's the whole point of seeking funding).  And then, for most, the revenue won't actually increase enough and they go bankrupt.  

My main point was that when it turns, it turns completely.  Every funding source is projecting the future, not looking at the current situation.  A negative direction is zero-value.

Comment by dagon on On Destroying the World · 2020-09-28T21:55:19.752Z · score: 4 (2 votes) · LW · GW

Thanks.  I am not convinced, but I have a better idea of where our perspectives differ.  I have to admit this feels a bit like a relationship shit-test, where an artificial situation is created, and far too much weight is put on the result.

I'd be interested to hear various participants' and observers' takes on the actual impact of this event, in terms of what they believe about people's willingness to support the site or improve the world in non-artificial conditions.

Comment by dagon on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-09-28T21:24:53.468Z · score: 4 (3 votes) · LW · GW

Not sure I follow - chaperones don't seem complex enough to have intent at all, so by that definition they are non-manipulative in the same sense that rocks are - it's a concept that doesn't apply to them, not something they could do and choose not to.

That's a big contrast with human communication - there is definitely intent behind every communication.  For this kind of action, the selective removal of forces seems near-indistinguishable from the selective addition of force in order to enable/influence some change.  It feels like there's a naturalistic fallacy going on - some underlying belief that what happens in a vacuum is better than what happens in a real equilibrium.  

Comment by dagon on On Destroying the World · 2020-09-28T18:50:25.148Z · score: 5 (7 votes) · LW · GW

Wow.  I honestly don't get it - do you have a link to the previous discussion that justified why anyone's taking it all that seriously?

IMO, this was a completely optional, artificial setup - "just a game", in Chris's words.  When I got the e-mail, I wondered if it was already down, and was surprised that it wasn't (though maybe I just didn't notice - it never seemed down to me, but I go straight to /allPosts without ever looking at the front page).  

There was none of the weight of Petrov's decision, and no tension about picking one or the other - no lasting harm for pressing the button, no violation of norms (or being executed for treason, or losing WWIII) by failing to do so if it were necessary.  And no evidence one way or the other what the actual territory is.  Really, just a game.  And not even a very good one.

The fundamental cooperation to take down the site had ALREADY HAPPENED.  When someone wrote the code that would do so if someone pressed the button, that's FAR FAR stronger than some rando actually pressing the button.

Comment by dagon on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-09-28T16:25:08.836Z · score: 3 (2 votes) · LW · GW

Do you have an operational definition of "manipulation" that can help me understand what you're asserting here?  

The chaperone manipulates the environment to let the protein fold into a lower-energy state than it would without the chaperone.  Is this not manipulation of the end-state of the protein?  I interpret the cultural norm that one is to be "nonmanipulative" as "please only cause change in ways that we approve of", not as "have no impact at all".  

I don't believe there is any communication which doesn't change the future state of the universe, so I think I'm unclear what "manipulate" means.

Comment by dagon on Shittests are actually good · 2020-09-24T23:21:31.789Z · score: 6 (3 votes) · LW · GW

I think this over-states the benefit, and misses out on some of the costs/risks.  Shit-tests are a classic proxy, and subject to all the caveats of any measurement which is not perfectly correlated with your actual desires.  Goodhart is one of them - vaguely self-aware people will recognize and game the test.  Another problem is that if the test is different from your normal behavior, you're likely to see a different response than you would to your normal behavior.  The differences will be correlated with just how different the test is from your baseline activities and signals.

Importantly, the testing is itself a signal.  You mention this, but don't mention that it's anti-correlated with self-respect and competence, things which presumably you value in a partner.  Early in a courtship, when you're tempted to use this kind of test, is exactly when this test will drive away your best prospects.  Later, when you know each other better, the test is less harmful, but also less valuable.

There probably are cases where a shit-test is justified - the time savings of fast failures are worth the false-positives and additional friction that the artificial filter will create.  But for many many cases (of romantic and other relationship-based exploration), you're best off looking for natural experiments rather than intentionally creating stressful situations.

Comment by dagon on The new Editor · 2020-09-24T18:24:36.953Z · score: 2 (1 votes) · LW · GW

Awesome!  Would it be possible to put an easy link to https://www.markdownguide.org/cheat-sheet/ or some other reminder of syntax somewhere easy-to-find while actually entering/editing comments?  

Comment by dagon on ryan_b's Shortform · 2020-09-24T15:49:40.748Z · score: 5 (2 votes) · LW · GW

I agree that economic models are not optimal for war

Go a little further, and I'll absolutely agree.  Economic models that only consider accounting entities (currency and reportable valuation) are pretty limited in understanding most human decisions.   I think war is just one case of this.  You could say the same for, say, having children - it's a pure expense for the parents, from an economic standpoint.  But for many, it's the primary joy in life and motivation for all the economic activity they partake in.

But the bottom line is that the value of weapons is destruction.

Not at all.  The vast majority of weapons and military (or hobby/self-defense) spending is never used to harm an enemy.  The value is the perception of strength, and relatedly, the threat of destruction.  Actual destruction is minor.

military procurement is viewed in Congress as an economic stimulus

That Congress (and voters) are economically naïve is a distinct problem.  It probably doesn't get fixed by additional naivete of forcing negative-value concepts into the wrong framework.  If it can be fixed, it's probably by making the broken windows fallacy (https://en.wikipedia.org/wiki/Parable_of_the_broken_window) less common among the populace.

Comment by dagon on How often do series C startups fail to exit? · 2020-09-23T18:54:07.661Z · score: 10 (5 votes) · LW · GW

Believe the internet.  Most really do fail, and return little or nothing to restricted share- or option-holders, who couldn't sell early as part of other funding deals.

How does all the value just evaporate? 

The problem is that the company is trying to grow, and will increase its ARR by an order of magnitude in pursuit of that growth.  If they don't actually find a sustainable market, that won't justify an IPO or forward-looking exit.  And then, when the valuation starts to fall, it becomes VERY unattractive to any buyer, who thinks "what am I really getting for this, and why not just wait until it's zero and pick up the remains from the bankruptcy court"?

Comment by dagon on ryan_b's Shortform · 2020-09-23T18:47:12.672Z · score: 4 (2 votes) · LW · GW

I'm not sure what you're proposing - it seems confusing to me to have "production" of negative value.  I generally think of "production" as optional - there's a lower bound of 0 at which point you prefer not to produce it.

I think there's an important question of different entities doing the producing and capturing/suffering the value, which gets lost if you treat it as just another element of a linear economic analysis.  Warfare is somewhat external to purely economic analysis, as it is generally motivated by non-economic (or partly economic but over different timeframes than are generally analyzed) values.

Comment by dagon on For what X would you be indifferent between living X days, but forgetting your day at the end of everyday, or living 10 days? (terminally) · 2020-09-22T20:55:24.396Z · score: 2 (1 votes) · LW · GW

I recognize that time-value-of-utility is unsolved, and generally ignored for this kind of question. But I'm not sure I follow the reasoning that current-you must value future experiences based on what farther-future-you values.

Specifically, why would you require a very large X? Shouldn't you value both possibilities at 0, because you're dead either way?

Comment by dagon on For what X would you be indifferent between living X days, but forgetting your day at the end of everyday, or living 10 days? (terminally) · 2020-09-22T15:03:45.462Z · score: 3 (2 votes) · LW · GW

There are TONS of moments I forget, but they _do_ leave residue. Either in income, effect on other people, or environmental improvements (the lightbulb I changed continues to work). Not sure if this scenario removes or carries forward unconscious changes in habits or mental pathways, but for real memory loss, victims tend to retain some amount of such changes, even if they don't consciously remember doing so.

I also value human joy in the abstract. Whether some other person, or some un-remembered version of me experiences it, there is value.

If you give a very very large value, do you also believe that all mortal lives are very-low-value, as they won't have any memory once they die?

Comment by dagon on For what X would you be indifferent between living X days, but forgetting your day at the end of everyday, or living 10 days? (terminally) · 2020-09-21T17:56:06.759Z · score: 3 (2 votes) · LW · GW

10 days of connected experience vs X days of disconnected experience? Honestly, I can't compound experiences/values very much in 10 days, so the amnesia doesn't cost that much - somewhere between 11 and 20 days seems reasonable to me.

I know people with severe memory problems, and they enjoy life a significant fraction (at least 10%, perhaps 80%, some days over 100%) as much as they might if they remembered yesterday.

This question gets much harder for 2, 10, or 50 years. The amount of joy/satisfaction/impact that can be had in those timeframes by building on previous days, is perhaps orders of magnitude higher if an agent has continuity than if not.

Comment by dagon on Comparing Utilities · 2020-09-18T21:28:51.161Z · score: 4 (2 votes) · LW · GW

Ah, I think I understand better - I was assuming a much stronger statement of what social choice function is rational for everyone to have, rather than just that there exists a (very large) set of social choice functions, and it is rational for an agent to have any of them, even if it massively differs from other agents' functions.

Thanks for taking the time down this rabbit hole to clarify it for me.

Comment by dagon on Dagon's Shortform · 2020-09-18T21:22:24.163Z · score: 2 (1 votes) · LW · GW

Useful pointers. I do remember those conversations, of course, and I think the objections (and valid uses) remain - one can learn from unlikely or impossible hypotheticals, but it takes extra steps to specify why some parts of it would be applicable to real situations. I also remember the decoupling vs contextualizing discussion, and hadn't connected it to this topic - I'm going to have to think more before I really understand whether Newcomb-like problems have clear enough paths to applicability that they can be decoupled by default or whether there's a default context I can just apply to make sense of them.

Comment by dagon on Covid 9/17: It’s Worse · 2020-09-17T21:56:07.537Z · score: 6 (4 votes) · LW · GW

If you think the identical processes and level of caution should be used for an emergent pandemic as for relatively small-scale long-standing viruses, you're not doing cost/benefit analysis very well. It's very hard for me to simultaneously believe that it's so risky that we should all avoid travel and most leisure activities, AND that the vaccine is so unimportant that we shouldn't accept more risks than we otherwise would.

I'll respond to Natalie Dean's quote, because they're easy bullet points.

Gives people a false sense of security if efficacy is really low

Perhaps true, but efficacy would have to be ridiculously low for it to be a net loss. Which will show in early trials and uses.

Diverts resources away from other interventions (fixing testing!)

Do both!

Makes it harder to evaluate better vaccines

Only to the extent that it's effective and very common. Which is a good outcome in itself.

Jeopardizes safety

More than a 6-month delay would? I doubt it.

Erodes trust in the process

That implies that anyone trusts the process now.

Comment by dagon on Making the Monte Hall problem weirder but obvious · 2020-09-17T17:04:15.755Z · score: 9 (3 votes) · LW · GW

An easier illustration is "Monty doesn't open any doors, he just gives the player the choice to stay with their chosen door, or switch to ALL of the other doors". This is isomorphic to showing losers for all but one of the other doors.

Like the original, it's somewhat ambiguous if you don't specify that Monty will do this regardless of whether the player guessed right the first time.
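
A quick simulation of this variant (a sketch; the door and trial counts are arbitrary) shows the same 1/3 vs 2/3 split as the original:

```python
import random

def play(switch, n_doors=3, trials=100_000):
    """Variant where Monty opens nothing: keep your one door,
    or trade it for ALL of the other doors."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)
        choice = random.randrange(n_doors)
        # Switching wins exactly when your first pick was wrong.
        wins += (prize != choice) if switch else (prize == choice)
    return wins / trials

print(play(switch=False))  # ~0.333
print(play(switch=True))   # ~0.667
```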

Comment by dagon on Dagon's Shortform · 2020-09-17T04:05:29.946Z · score: 2 (1 votes) · LW · GW

I always enjoy convoluted Omega situations, but I don't understand how these theoretical entities get to the point where their priors are as stated (and especially the meta-priors about how they should frame the decision problem).

Before the start of the game, Omega has some prior distribution of the Agent's beliefs and update mechanisms. And the Agent has some distribution of beliefs about Omega's predictive power over situations where the Agent "feels like" it has a choice. What experiences cause Omega to update sufficiently to even offer the problem (ok, this is easy: quantum brain scan or other Star-Trek technobabble)? But what lets the Agent update to believing that their qualia of free-will is such an illusion in this case? And how do they then NOT meta-update to understand the belief-action-payout matrix well enough to take the most-profitable action?

Comment by dagon on Comparing Utilities · 2020-09-16T23:13:01.466Z · score: 2 (1 votes) · LW · GW

I follow a bit more, but I still feel we've missed a step in stating whether it's "a social choice function, which each agent has as part of its preference set", or "the social choice function, shared across agents somehow". I think we're agreed that there are tons of rational social choice functions, and perhaps we're agreed that there's no reason to expect different individuals to have the same weights for the same not-me actors.

I'm not sure I follow that it has to be linear - I suspect higher-order polynomials will work just as well. Even if linear, there are a very wide range of transformation matrices that can be reasonably chosen, all of which are compatible with not blocking Pareto improvements and still not agreeing on most tradeoffs.

If you imagine that you're trying to use this argument to convince someone to be utilitarian, this is the step where you're like "if it doesn't make any difference to you, but it's better for them, then wouldn't you prefer it to happen?"

Now I'm lost again. "you should have a preference over something where you have no preference" is nonsense, isn't it? Either the someone in question has a utility function which includes terms for (their beliefs about) other agents' preferences (that is, they have a social choice function as part of their preferences), in which case the change will ALREADY BE positive for their utility, or that's already factored in and that's why it nets to neutral for the agent, and the argument is moot. In either case, the fact that it's a Pareto improvement is irrelevant - they will ALSO be positive about some tradeoff cases, where their chosen aggregation function ends up positive. There is no social aggregation function that turns a neutral into a positive for Pareto choices, and fails to turn a non-Pareto case into a positive.

To me, the premise seems off - I suspect the target of the argument doesn't understand what "neutral" means in this discussion, and is not correctly identifying a preference for Pareto options. Or perhaps prefers them for their beauty and simplicity, and that doesn't extend to other decisions.

If you're just saying "people don't understand their own utility functions very well, and this is an intuition pump to help them see this aspect", that's fine, but "theorem" implies something deeper than that.

Comment by dagon on Comparing Utilities · 2020-09-16T21:32:10.800Z · score: 2 (1 votes) · LW · GW
I'm feeling a bit of "are you trolling me" here.

Me too! I'm having trouble seeing how that version of the pareto-preference assumption isn't already assuming what you're trying to show, that there is a universally-usable social aggregation function. Or maybe I misunderstand what you're trying to show - are you claiming that there is a (or a family of) aggregation function that are privileged and should be used for Utilitarian/Altruistic purposes?

So a pareto improvement is a move that is > for at least one agent, and >= for the rest.

Agreed so far. And now we have to specify which agent's preferences we're talking about when we say "support". If it's > for the agent in question, they clearly support it. If it's =, they don't oppose it, but don't necessarily support it.

The assumption I missed was that there are people who claim that a change is = for them, but also they support it. I think that's a confusing use of "preferences". If it's =, that strongly implies neutrality (really, by definition of preference utility), and "active support" strongly implies > (again, that's the definition of preference). I still think I'm missing an important assumption here, and that's causing us to talk past each other.

When I say "Pareto optimality is min-bar for agreement", I'm making a distinction between literal consensus, where all agents actually agree to a change, and assumed improvement, where an agent makes a unilateral (or population-subset) decision, and justifies it based on their preferred aggregation function. Pareto optimality tells us something about agreement. It tells us nothing about applicability of any possible aggregation function.

In my mind, we hit the same comparability problem for Pareto vs non-Pareto changes. Pareto-optimal improvements, which require zero interpersonal utility comparisons (only the sign matters, not the magnitude, of each affected entity's preference), teach us nothing about actual tradeoffs, where a function must weigh the magnitudes of multiple entities' preferences against each other.
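
A tiny sketch of that last point (the deltas are hypothetical): the Pareto check reads only signs, so it's invariant under any positive per-agent rescaling of utility - which is exactly why it can't settle magnitude tradeoffs.

```python
def is_pareto_improvement(deltas):
    """True iff nobody is worse off and somebody is better off.
    Note only the SIGN of each delta is consulted."""
    return all(d >= 0 for d in deltas) and any(d > 0 for d in deltas)

deltas = [0.0, 2.0, 0.5]
# Rescale each agent's utility by an arbitrary positive factor --
# the Pareto verdict can't change, so it carries no information
# about how to weigh one agent's magnitude against another's.
rescaled = [d * k for d, k in zip(deltas, [10.0, 0.01, 3.0])]
assert is_pareto_improvement(deltas) == is_pareto_improvement(rescaled)
```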

Comment by dagon on Comparing Utilities · 2020-09-16T17:52:36.705Z · score: 1 (3 votes) · LW · GW
The Pareto-optimality assumption isn't that you're "just OK" with Pareto-improvements, in a ≥ sense. The assumption is that you prefer them, ie, >.

That's not what Pareto-optimality asserts. It only talks about >= for all participants individually. If you're making assumptions about altruism, you should be clearer that it's an arbitrary aggregation function that is being increased.

And then, Pareto-optimality is a red herring. I don't know of any aggregation functions that would change a 0 to a + for a Pareto-optimal change, and would not give a + to some non-Pareto-optimal changes, which violate other agents' preferences.

My primary objection is that any given aggregation function is itself merely a preference held by the evaluator. There is no reason to believe that there is a justifiable-to-assume-in-others or automatically-agreeable aggregation function.

if you assent to Pareto improvements as something to aspire to

This may be the crux. I do not assent to that. I don't even think it's common. Pareto improvements are fine, and some of them actually improve my situation, so go for it! But in the wider sense, there are lots of non-Pareto changes that I'd pick over a Pareto subset of those changes. Pareto is a min-bar for agreement, not an optimum for any actual aggregation function.

I should probably state what function I actually use (as far as I can tell). I do not claim universality, and in fact, it's indexed based on non-replicable factors like my level of empathy for someone. I do not include their preferences (because I have no access). I don't even include my prediction of their preferences. I DO include my preferences for what (according to my beliefs) they SHOULD prefer, which in a lot of cases correlates closely enough with their actual preferences that I can pass as an altruist. I then weight my evaluation of those imputed-preferences by something like an inverse-square relationship of "empathetic distance". People closer to me (including depth and concreteness of my model for them, how much I like them, and likely many other factors I can't articulate), including imaginary and future people who I feel close to get weighted much much higher than more distant or statistical people.

I repeat - this is not normative. I deny that there exists a function which everyone "should" use. This is merely a description of what I seem to do.
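
If it helps, here's roughly what that looks like as code (a descriptive sketch only; the distances and values are invented, and "empathetic distance" isn't something I can actually quantify):

```python
def weight(empathetic_distance):
    # Inverse-square falloff: people "closer" to me count far more.
    return 1.0 / empathetic_distance ** 2

def my_evaluation(imputed_preferences):
    # imputed_preferences: (distance, value-I-believe-they-SHOULD-get) pairs.
    # These are my beliefs about them, not access to their actual utility.
    return sum(weight(d) * v for d, v in imputed_preferences)

# close friend, acquaintance, statistical stranger (numbers invented)
print(my_evaluation([(1.0, 5.0), (3.0, 5.0), (30.0, 5.0)]))
```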

Comment by dagon on The Axiological Treadmill · 2020-09-16T17:39:07.046Z · score: 1 (2 votes) · LW · GW
The obvious reason that Moloch is the enemy is that it destroys everything we value in the name of competition and survival.

Moloch is not always the enemy. Competition (among imperfectly-aligned agents) is the most efficient arbitration of different values. For almost everyone, survival is in fact something they value quite highly. Moloch happens when these pressures become so great (or are perceived as such) that they crowd out other values. Moloch destroys nothing except the illusion of freedom. Moloch creates value, just not the exact mix of value types that some or all participants would prefer.

But this is missing the bigger picture. We value what we value because, in our ancestral environment, those tended to be the things that helped us with competition and survival.

"tended to be" and "are exactly and only" are very different statements. You're saying the first, but your argument requires the second. My preferences as an individual human vary greatly from a historical human average. Even to the extent that they're mutable and change with time, I have meta-preferences about the ways in which they change, and those are ALSO different from any historical or aggregate set.

If the things that help us compete and survive end up changing, then evolution will ensure that the things we value change as well.

Not even. There's a bit of Slack even between genotype and phenotype, and a whole lot between biology and psychology.

Comment by dagon on Applying the Counterfactual Prisoner's Dilemma to Logical Uncertainty · 2020-09-16T17:23:54.808Z · score: 0 (2 votes) · LW · GW

I always enjoy convoluted Omega situations, but I don't understand how these theoretical entities get to the point where their priors are as stated (and especially the meta-priors about how they should frame the decision problem).

Before the start of the game, Omega has some prior distribution of the Agent's beliefs and update mechanisms. And the Agent has some distribution of beliefs about Omega's predictive power over situations where the Agent "feels like" it has a choice. What experiences cause Omega to update sufficiently to even offer the problem (ok, this is easy: quantum brain scan or other Star-Trek technobabble)? But what lets the Agent update to believing that their qualia of free-will is such an illusion in this case? And how do they then NOT meta-update to understand the belief-action-payout matrix well enough to take the most-profitable action?

Moved to my shortform - it's not a direct answer to the post.

Comment by dagon on Comparing Utilities · 2020-09-15T22:14:06.567Z · score: 1 (3 votes) · LW · GW
altruistic enough to prefer Pareto improvements with respect to everyone's preferences.

Wait, what? Altruism has nothing to do with it. Everyone is supportive of (or indifferent to) any given Pareto improvement because it increases (or at least does not reduce) their utility. Pareto improvements provide no help in comparing utility because they are cases where there is no conflict among utility functions. Every multiplicative or additive transform across utility functions remains valid for Pareto improvements.

For example, if you refuse to trade off between people's ordinal incommensurate preferences, then you just end up refusing to have an opinion when you try to choose between charity A which saves a few lives in Argentina vs charity B which saves many lives in Brazil.

I don't refuse to have an opinion, I only refuse to claim that it's anything but my preferences which form that opinion. My opinion is about my (projected) utility from the saved or unsaved lives. That _may_ include my perception of their satisfaction (or whatever observable property I choose), but it does not have any access to their actual preference or utility.

Comment by dagon on Comparing Utilities · 2020-09-15T17:15:39.213Z · score: 9 (3 votes) · LW · GW
This could (should?) also make you suspicious of talk of "average utilitarianism" and "total utilitarianism". However, beware: only one kind of "utilitarianism" holds that the term "utility" in decision theory means the same thing as "utility" in ethics: namely, preference utilitarianism.

Ok, I'm suspicious of preference utilitarianism which requires aggregation across entities. And suspicious of other kinds because they mean something else by "utility". Then you show that there are aggregate functions that have some convenient properties. But why does that resolve my suspicion?

What makes any of these social choice functions any more valid than any other assumption about other people's utility transformations? The pareto-optimal part is fine, as they are compatible with all transformations - they work for ordinal incommensurate preferences. So they're trivial and boring. But once you talk about bargaining and " the relative value of one person's suffering vs another person's convenience ", you're back on shaky ground.

The incomparability of utility functions doesn't mean we can't trade off between the utilities of different people.

We can prefer whatever we want, we can make all sorts of un-justified comparisons. But it DOES MEAN that we can't claim to be justified in violating someone's preferences just because we picked an aggregation function that says so.

We just need more information. ... we need more assumptions ...

I think it's _far_ more the second than the first. There is no available information that makes these comparisons/aggregations possible. We can make assumptions and do it, but I wish you'd be more explicit in what is the minimal assumption set required, and provide some justification for the assumptions (other than "it enables us to aggregate in ways that I like").

Comment by dagon on Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Battle of the Sexes · 2020-09-15T16:15:01.151Z · score: 4 (2 votes) · LW · GW
not all conflicts are zero-sum.

This should be the lede. Most real-world interactions lose a lot of options, and a lot of potential value, by being simplified to a (n iterated) PD, SH, or BotS.

In reality, there's almost always un-modeled transfer and payouts - just being able to say "good job, thanks!" after a result is FREE UTILITY! Also, non-pathological humans have terms for the other player(s) in their utility function. Most importantly, there are far too many future games in the iteration set of a human lifetime for anyone to model, so reputation and self-image effects very often will dominate the modeled payouts.

Comment by dagon on Comparative advantage and when to blow up your island · 2020-09-14T17:18:38.774Z · score: 6 (5 votes) · LW · GW

I'd love to see your model - include a value of leisure, which is _NOT_ tradeable. Show your work in how you calculate this value, relative to the value of other resources (note: this request is a trap. It's likely impossible to agree on private valuation of leisure.)

I very strongly expect that there are arrangements which increase total leisure, but they do not satisfy the requirement that all transfers are voluntary and make all participants better off. They increase the leisure of those bad at their job (and worse at other jobs), but they decrease the leisure of people who are good at their job.

Comment by dagon on Against boots theory · 2020-09-14T16:58:45.479Z · score: 1 (2 votes) · LW · GW
This disconnect between what a thing actually says, and what people seem to think it says, just bothers me. I feel the desire to point it out.

Heh, welcome to public discussion. There are enough idiots who can't or won't think deeply, and hucksters who prefer good-sounding stories to rigorous analysis, that you're pretty well doomed to be bothered most of the time on most topics.

Being rich enables you to spend less money on things.

I think even at the base object level, the reason good boots appear to last so much longer is that owners spend on maintenance and have spares so they're not daily wear. You'd be hard-put to actually show the math that they're less expensive per-day of use. More comfortable, healthier, more convenient, better-looking, and generally nicer, sure. The Ghetto Tax is real, but it's not purely monetary.

Note that there's a curve at play - the cheapest of the cheap probably _is_ more expensive in the long term than mid-tier boots. But fancy, expensive ones are _also_ more expensive (in long- and short-term) than the plain, solid ones. It's not as compelling a metaphor to say that it's the reason the very poor spend more than the lower-middle-class, but it's perhaps more true.
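
To illustrate (with numbers I'm making up entirely): once you account for maintenance and the spare-pair rotation, the per-day-of-use math can easily run the other way.

```python
# Invented numbers, just to show how the per-day math can flip:
cheap_cost_per_day = 20 / 365                  # $20 boots, daily wear, last a year
good_cost_per_day = (200 + 10 * 20) / (10 * 365 / 2)
# $200 boots plus $20/yr maintenance, lasting 10 years
# precisely BECAUSE they're only worn every other day.
print(f"cheap: ${cheap_cost_per_day:.3f}/day worn")  # ~$0.055
print(f"good:  ${good_cost_per_day:.3f}/day worn")   # ~$0.219
```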

Comment by dagon on Rafael Harth's Shortform · 2020-09-14T16:03:18.568Z · score: 2 (1 votes) · LW · GW

I think this simplifies a lot by looking at public acceptance of a proposition, rather than literal internal truth. It hurts if you think people will believe it, and that will impact their treatment of you.

The "hurts because it's true" heuristic is taking a path through "true is plausible", in order to reinforce the taunt.

Comment by dagon on A Brief Chat on World Government · 2020-09-13T20:27:50.072Z · score: 2 (1 votes) · LW · GW

Is "world government" an explicit metaphor for alignment and empathy across more of humanity, or a literal (and wrong) belief that citizenship papers control allegiance of individuals?

The problematic competition (also, the necessary creative competition) is at least as visible in non-governmental organizations as in governments, so I don't buy the unstated premise that government is the right measure of unity for the entire thesis.

Comment by dagon on Progress: Fluke or trend? · 2020-09-13T15:38:23.151Z · score: 2 (1 votes) · LW · GW

The difference between a fluke (or a bubble) and a trend is simply timescale. There are intra-day trends that look like spikes on a weekly graph.

I suspect the rate of change of human institutions and typical experiences is likely to remain somewhat high - fluctuating, but not reverting to pre-Enlightenment rates. Whether it's "progress" or just "change" is a matter of framing. The fundamental drivers of connectivity and quantity of potential trading partners (intellectually as well as materially) won't go away.

And unsustainable situations will eventually stop. A precipitous drop in population and trust (war or other mass disruption) removes that primary driver, and probably reverts a lot of changes to systems more sustainable in more difficult resource situations. I hope it's a smooth increase until the heat death of the universe. But that's not the bulk of my probability estimate.

Comment by dagon on MikkW's Shortform · 2020-09-11T22:44:01.377Z · score: 2 (1 votes) · LW · GW

related map of the US, with clustering of actual commutes: https://www.atlasobscura.com/articles/here-are-the-real-boundaries-of-american-metropolises-decided-by-an-algorithm . Note this uses longer commutes than I'd ever consider.

(edit: removed stray period at end of URL)

Comment by dagon on Escalation Outside the System · 2020-09-11T21:29:29.874Z · score: 2 (1 votes) · LW · GW

It's not a proposal without some path to implementation or vaguely possible opportunity to do it. "guillotines" is a signal and perhaps a pipe dream. It's not a plan or useful suggestion.

Comment by dagon on Should some variant of longtermism identify as a religion? · 2020-09-11T18:16:47.194Z · score: 2 (1 votes) · LW · GW

Fully agree with both points (that it's not "naturally" a religion, and that groups are free to try whatever they like to optimize government and other-group interactions).

The best approach is probably not to be as general as "some variant of longtermism". Identify an actual group (or set of groups) that would sufficiently benefit from getting this religion recognized in some specific jurisdiction(s). Then those groups can discuss the actual weights of the pros and cons among their constituents.

Comment by dagon on Safer sandboxing via collective separation · 2020-09-11T16:51:31.584Z · score: 2 (1 votes) · LW · GW

Depending on your threat modeling of a given breach, this could be comforting or terrifying.

The economic incentives to attack and to defend are usually similar. Systems get broken sometimes but not always.

If the cost of a loss (AGI escapes, takes over the world, and runs it worse than humans are) is much higher, that changes the "economic incentives" about this. It implies that "sometimes but not always" is a very dangerous equilibrium. If the cost of a loss (AGI has a bit more influence on the outside world, but doesn't actually destroy much) is more inline with today's incentives, it's a fine thing.

Comment by dagon on Updates Thread · 2020-09-10T20:57:04.137Z · score: 4 (2 votes) · LW · GW

This pretty much describes my mumble-many years of increasing scope as a programmer. There is plenty of irreducible complexity around, but there's FAR MORE accidental complexity from incorrect attempts at encapsulation, forcing edge cases into "standard mechanisms" (bloating those standards), failing to force edge cases into "standard mechanisms" (bloating the non-core systems), insufficiently abstracted or too-abstracted models of data and behaviors, etc.

It's easy to move complexity around a system. Done haphazardly, this increases overall complexity. Done carefully, it can decrease overall complexity. Knowing the difference (and being able to push back on (l)users who are not listening to process changes that make the world easier) is what makes it worth hiring a Principal rather than just a Senior SDE.

Comment by dagon on In 1 year and 5 years what do you see as "the normal" world. · 2020-09-10T15:13:42.904Z · score: 5 (3 votes) · LW · GW

4) Yes. So much depends on timing, availability, and effectiveness of vaccines.

1-3) All of the others have some amount of pent-up demand, and some long-term lingering fear/habit effects. And how those interact will mostly depend on 4. I suspect very-long-term effects (10+ years) will mostly be some trends that accelerated during the virus-fear times, rather than actual directional changes from the virus.

Comment by dagon on MikkW's Shortform · 2020-09-09T22:14:18.484Z · score: 2 (1 votes) · LW · GW
over the long term they take away control of resources from people who have proven in the past that they know how to use resources

Umm, that's the very point of taxes - taking resources from non-government entities because the government thinks they can use those resources better. We take them from people who have resources, because that's where the resources are.

Comment by dagon on Escalation Outside the System · 2020-09-08T19:30:12.552Z · score: 17 (10 votes) · LW · GW

It seems likely that you're just talking about different topics. "I'm upset enough to advocate irrational destruction and violence with no clear plan to long-term success" is a very valid statement. For very deep social-signalling reasons, it's never put that clearly, and instead framed as somewhat wild-sounding proposals. And this is internal to the person - they THINK it's a proposal, even when it's not.

You're arguing against the proposal, but it's not actually a proposal. One hint to this is the reference to "outside the system", but not actually being outside of the system (of politics) - guillotines required organized agreement by large groups of people, or they just get you arrested.

Comment by dagon on The ethics of breeding to kill · 2020-09-08T14:58:41.482Z · score: 4 (5 votes) · LW · GW
it's a viewpoint I do not see expressed much.

It's the common viewpoint, outside of over-intellectual insanely rich discussion groups. It doesn't get discussed much because there's no need to defend it - just go on with your life. And because there's a subset of vegetarian and vegan proponents who will be uncomfortable around such arguments, and that may make you uncomfortable as well.

I eat meat. I eat factory-farmed meat. I do care about animal suffering (and animal joy and the question of "what's a net-positive life?" for all things). I weight my caring by some high-order function of complexity of mind-space, so I care FAR FAR more about the least human than I do the most exalted cow, and I care about diversity in experience-space, so I care for a marginal factory animal (who's extremely similar in experience to all the others) less than a wild animal or a pet.

Comment by dagon on Shed Wall Plans · 2020-09-07T17:34:43.704Z · score: 4 (2 votes) · LW · GW

Are there any tradeoffs you should consider aside from what you've mentioned (cost, insulation value, appearance)? Access to wiring in the future (for upgrades/changes)? Ability to mount things on the wall, or bring additional utilities (water, sewer, networking) into the space?

To my eye, the cost difference, for something that'll last many years, is minimal - it may be 3x difference, but still only $766 absolute. I don't know the value of insulation - what's the climate like where you're building? Are you worried only about heating, or will you have cooling needs as well (insulation is critical for both, but planning airflow and mechanisms is a lot more complicated for cooling)?

For me, in the northwest United States (temperate, wet, rarely very hot nor cold), I'd optimize for appearance and access, over insulation. That would be _VERY_ different if I lived in a desert or tundra, or the midwest US which alternates between desert and tundra.

Comment by dagon on AllAmericanBreakfast's Shortform · 2020-09-03T20:07:49.175Z · score: 5 (3 votes) · LW · GW

I like this line of reasoning, but I'm not sure it's actually true. "better" rationality should lead your thinking to be more effective - better able to take actions that lead to outcomes you prefer. This could express as less thinking, or it could express as MORE thinking, for cases where return-to-thinking is much higher due to your increase in thinking power.

Whether you're thinking less for "still having good outcomes", or thinking the same amount for "having better outcomes" is a topic for introspection and rationality as well.

Comment by dagon on MikkW's Shortform · 2020-09-02T18:20:53.202Z · score: 4 (2 votes) · LW · GW

Hrm. I thought it referred to distribution of energy, not temperature. "heat death of the universe" is when entropy can increase no more, and there are no differentials across space by which to define anything at conscious scale. No activity is possible when everything is uniform.

At least, that's my simplistic summary - https://en.wikipedia.org/wiki/Heat_death_of_the_universe gives a lot more details, including the fact that my summary was probably not all that good even in the 19th century.

Comment by dagon on Plans / prepping for possible political violence from upcoming US election? · 2020-08-31T21:22:08.769Z · score: 6 (3 votes) · LW · GW

I share your fear that violent political unrest will spread (it's currently non-zero, but not widespread) after a disputed election. I'm not sure what your probability estimate is - I give it about a percent and a half, which is orders of magnitude higher than previous elections.

Most of that probability is for a short-lived protest or conflict, which destroys a bunch of property, kills and hospitalizes a small percentage (but significant absolute numbers), and then tapers off after a few weeks. Significant secession-level conflict is unlikely enough that I'm not trying to prepare for it; I'd try much harder to be elsewhere if I thought that was going to happen. The timing is uncertain as well - we'll know the theoretical outcome in November, but there may be months of posturing and brinksmanship before we find out in January if the nominal outcome is honored. During this, violence may or may not be present, to varying degrees.

As such, my current strategy is normal emergency preparedness - 30 days of food, medicine, water, etc. Keeping some amount of non-US currency and silver or gold coins is wise as well, IMO. I keep firearms, but wouldn't advise anyone take that up solely for this situation - it's hard to train and practice safely during COVID, so now isn't the time to start.

As to leaving, I'm not sure there are very many good options. I'm near enough the Canadian border to drive or boat across, but it's closed for COVID, and that will be even more severely enforced if there are literal refugees streaming across. It's unlikely that anyone's going to give you asylum status, no matter how bad it gets. My current belief is that before illegal entry into another country becomes attractive, I should switch strategies from flight to fight - become an active participant and risk myself (yes, and my family) in order to slightly shift the likelihood of outcome toward my preferences. Huh, I guess I'm a patriot after all (once all the better options are eliminated).