Posts

Money isn't real. When you donate money to a charity, how does it actually help? 2020-02-02T17:03:04.426Z · score: 19 (11 votes)
Dagon's Shortform 2019-07-31T18:21:43.072Z · score: 3 (1 votes)
Did the recent blackmail discussion change your beliefs? 2019-03-24T16:06:52.811Z · score: 37 (14 votes)

Comments

Comment by dagon on You are an optimizer. Act like it! · 2020-02-22T20:23:09.828Z · score: 3 (2 votes) · LW · GW

I directionally agree - much of the time I can benefit by thinking a bit more about what I'm optimizing, and acting in a more optimal fashion. But I don't think this is universally applicable.

In the long run, optimizers win.

Well, no. Most optimizers fail. Many optimizers are only seeking short-term measurable outcomes, and the long run makes them irrelevant (or dead).

Comment by dagon on landfish lab · 2020-02-21T21:37:59.854Z · score: 2 (1 votes) · LW · GW

It seems weird to expect that OS vendors are particularly more aligned with your preferences than app vendors are. You actually have more control over apps - it's possible to use different ones without building your own hardware and writing your own drivers. Don't like the bundle of behaviors that an app presents? Don't use it. There are fewer OSes to choose from, and they tend to group together harder-to-replicate functionality in a way that you can't really pick and choose very well.

I'm totally with you that I don't much care for the way current social media platforms (including apps and data-handling outside of apps) work, but I'm not sure what the alternative is, for things where almost everyone I want to interact with is captured by them, and there's no coordination point to change it. Compare with limited choice in options on a political ballot - I hate it, but I don't think the equilibrium has a good leverage point to improve.

Comment by dagon on George's Shortform · 2020-02-21T21:32:01.263Z · score: 2 (1 votes) · LW · GW

I'd agree that this is useful to think on, but I tend to use "meta model" to mean "a model of how to build and apply models across distinct people", and your example of abstracting Dave's preferences is just another model for him, not all that meta.

I might suggest you call it an "abstract model" or an "explainable model". In fact, if they make the same predictions, they're equally powerful, but one is more compressible and easier to transmit (and examine in your head).

Comment by dagon on How do you survive in the humanities? · 2020-02-21T17:15:07.699Z · score: 3 (4 votes) · LW · GW

I would make the same argument for a Scientology class[1]. You can and should learn empathy and humility, and one of the best ways is interaction with people with very different beliefs and models from yours. You don't have to agree with them, you don't have to use their mechanisms directly, but you can and should identify how those mechanisms work for them, and understand that you'll probably need some mechanisms for yourself that aren't perfectly self-legible.

[1] Except the actual torture and brainwashing parts. If sleep deprivation or overt threats of violence are part of the class, you should probably just get out.

Comment by dagon on How do you survive in the humanities? · 2020-02-20T23:10:36.951Z · score: 11 (6 votes) · LW · GW
If arguments + evidence are compelling enough, you have no choice but to believe

This is trivially true by definition of "compelling enough", and the corollary is "if she chooses not to believe, the arguments and evidence are insufficiently compelling". You have no choice but to accept THAT, right?

Your actual disagreement is whether a given set of arguments and evidence is compelling enough to believe. And this can certainly vary from person to person, as you start with different priors and give different weight to evidence based on different modeling.

Comment by dagon on Stuck Exploration · 2020-02-20T19:18:21.772Z · score: 2 (1 votes) · LW · GW
But coin needs to depend on your prediction instead of being always biased a particular way.

Does it? I think it only depends on failure to explore/update, which is a property of the (broken) agent, not an effect of the setup.

My recommended semi-causal agent (thanks to https://www.lesswrong.com/posts/9m2fzjNSJmd3yxxKG/acdt-a-hack-y-acausal-decision-theory for the idea) does the following: start out intending heads/pay with probability X and tails/not-pay with probability 1-X, based on priors about chance (NOT 0 or 1) and the value of each payout box in the matrix. Randomize and commit before the start of the game (so the predictor can operate), then adjust per Bayes' rule and re-randomize the commitment after each iteration. You'll never get stuck at certainty, so you'll converge on the optimal percentage for the power of the predictor and the outcomes actually available.

This doesn't work with computationally-infeasible ranges of action/outcome, but it seems to solve iterated (including iterated in thought-experiments) simple definable cases.
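Here's a minimal sketch of that loop in Python. The Beta-prior bookkeeping, the stub predictor, and the particular policy mapping belief to mixing probability are my illustrative assumptions, not the setup from the linked post:

```python
import random

# Illustrative sketch of the randomize-and-commit agent described above.
class RandomizingAgent:
    def __init__(self):
        # Beta(1, 1) prior on the predictor's accuracy: never exactly 0 or 1,
        # so the agent can't get stuck at certainty.
        self.right, self.wrong = 1, 1

    def commit(self):
        """Commit to a mixed strategy BEFORE the round, then randomize."""
        accuracy = self.right / (self.right + self.wrong)  # posterior mean
        # Illustrative policy: pay less often as the predictor looks stronger,
        # with the mixing probability bounded away from 0 and 1.
        self.p_pay = min(max(1.0 - accuracy, 0.05), 0.95)
        return "pay" if random.random() < self.p_pay else "not pay"

    def update(self, predictor_was_right):
        """Bayes' rule on the predictor's track record; re-randomize next round."""
        if predictor_was_right:
            self.right += 1
        else:
            self.wrong += 1

# Toy iteration against a stub predictor that is right 80% of the time.
agent = RandomizingAgent()
for _ in range(1000):
    action = agent.commit()
    agent.update(predictor_was_right=(random.random() < 0.8))
print(f"converged mixing probability: {agent.p_pay:.2f}")
```

The key property is only that the posterior (and hence the commitment) never hits 0 or 1, so exploration never stops; the exact policy shape is a placeholder.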

Comment by dagon on How do you survive in the humanities? · 2020-02-20T18:25:10.792Z · score: 16 (9 votes) · LW · GW

This feels a bit like https://xkcd.com/386/ - you're feeling the pain of knowing that incorrect thought is often rewarded above more rational beliefs. Where's the justice?

Answer: nowhere. And I can't offer great hope that it gets better - epistemic hygiene is pretty horrific even in STEM (and in the corporate world), outside of things simple enough to be resolved in the lab (or short-term market reaction). Most people are just bad at thinking. Fortunately, for much of your life, you'll be able to pick your bubbles and arrange things to mostly ignore the worst of the idiots.

And, of course, I include myself in the problem - I have plenty of blind spots and things I don't examine too closely. I don't actually know which (if any) of my preferred models and approaches actually lead to better results, and I certainly don't know that they'll work for anyone but me. I take the risk of https://wiki.lesswrong.com/wiki/Other-optimizing very seriously.

To that point, I would recommend you grant a bit more agency to your fellow students - they're allowed to choose what to take and what to leave from their classes, just like you are. Neither you nor they need permission or agreement to believe what you believe. In fact, this may be the teacher's point: you don't have to believe THEM any more than they have to believe YOU.

Trying to fight with teachers or un-receptive students at every turn is unlikely to further any goals I can think of. It's certainly a kindness (and a benefit to you, in refinement of your beliefs) to offer additional perspective (mostly outside of class, as a rationality club or the like), but it's neither your responsibility nor within your power to make them think the way you do.

Comment by dagon on Does donating to EA make sense in light of the mere addition paradox ? · 2020-02-20T00:55:31.381Z · score: 3 (2 votes) · LW · GW

Nope, in the end it all comes down to your personal self-conception and intuition. You can back it up with calculations and by testing your emotional reaction to intellectual counterfactuals ("how does it feel that I saved half a statistical life, but couldn't support my friend this month"). But all the moral arguments I've seen come down to either religious authority or the assertion that some intuitions are (or should be) universal.

Comment by dagon on Does donating to EA make sense in light of the mere addition paradox ? · 2020-02-19T21:22:16.195Z · score: 5 (2 votes) · LW · GW

(note: I don't identify as Utilitarian, so discount my answer as appropriate)

You can split the question into multiple parts:

1) Should I be an altruist, who gives up resources to benefit others more than myself?

2) If so, what does "benefit" actually mean for others?

3) How can I best achieve my desires, as defined by #1 and #2?


#1 is probably not answerable using only logic - this is up to you and your preferred framework for morals and decision-making.

#2 gets to the title of your post (though the content ranges further). Do you benefit others by reducing global population? By making some existing lives more comfortable or longer (and which ones)? There's a lot more writing on this, but no clear enough answers that it can be considered solved.

#3 is the focus of E in EA - if your goals match theirs (and if you believe their methodology for measuring), then EA helps identify the most efficient ways you can use resources for these goals.


To answer your direct question - maybe! To the extent that you're pursuing topics that EA organizations are also pursuing, you should probably donate to their recommended charities rather than trying to do it yourself or going through less-measured charities.

To the extent that you care about topics they don't, don't. For instance, I also donate to local arts groups and city- and state-wide food charities, which I fully understand benefit people who are already very lucky relative to global standards. If utility is fungible and there is declining marginal utility of resources for any given recipient, this is not efficient. But I don't believe those things are smooth enough curves to overwhelm my other preferences.

Comment by dagon on What are information hazards? · 2020-02-18T22:24:05.200Z · score: 2 (1 votes) · LW · GW

Do people generally consider wireheading and Goodhart's law to be information hazards? They're both "errors" caused by access to true data that is easy to misuse.

Comment by dagon on A Memetic Mediator Manifesto · 2020-02-17T22:43:40.925Z · score: 2 (1 votes) · LW · GW

Any success stories? Or indicators of when to use this approach, vs when to look for truth or when to optimize for preferred outcomes?

The theory behind it (linked from the doc, at https://medium.com/s/world-wide-wtf/memetic-tribes-and-culture-war-2-0-14705c43f6bb ) feels pretty woo-ey to me, and denies agency to most participants (note: I may actually agree, but it's not clearly specified) in a way that makes it hard to know what the goals of such intervention are.

IOW, why is mediation better than truth-seeking? It's probably better than simple conflict on things that don't matter, but on topics worth fighting for, I actually care that the truth wins out. If you mean "appearance of mediation is the best way for your preferences to have traction", I kind of agree, but that's very difficult to say plainly enough that anyone will publicly endorse it.

Comment by dagon on Why do we refuse to take action claiming our impact would be too small? · 2020-02-17T18:00:30.567Z · score: 5 (3 votes) · LW · GW
I don't support the idea that you just blindly hand your responsibility for understanding and resolving issues to anyone else, let alone the state.

vs

I don't support the idea that ordinary people can have a good enough level of understanding of everything it takes to run a society. For that matter they can't fix their own cars or bodies.

These are not contradictory. States are Soylent Green - they're made of people! There is literally no person who has a good enough level of understanding of everything it takes to run a society. More importantly, societies aren't "run", they're ... I don't know. "followed"? "co-dependently-evolved"? Societies pick (or at least tolerate) the "leaders" that exemplify the confusion in goals that the society has.

Experts have fairly narrow focus, and tend to be just as incorrect as the rest of us outside their field (and often, inside, for fields with heavy political/funding influence).

Comment by dagon on Wanting More Intellectual Stamina · 2020-02-17T16:17:41.786Z · score: 5 (3 votes) · LW · GW

There's enough variance, even among hyper-intellectuals, that you should take all answers as "this seems to work for some", rather than "this will work for you".

I've never been able to override my curiosity for very long, and have pretty much stopped trying. I _HAVE_ developed a skill of finding topics that my curiosity will take me deeper into, and in framing the tasks that aren't hour-to-hour exciting (I'm a fairly senior software developer at a large company, so there's plenty of "just work") as critical parts of the larger experiment of "understanding at the gears-level how this project/product/organization works".

Also I've found it helpful to remind myself to ask about impact rather than just knowledge. How am I applying what I learn, in order to filter the future branches of reality such that I experience more-preferred ones? Thinking about explore/exploit balances and making sure I'm spending a reasonable fraction of my effort in exploit mode (and then learning from the results) has been fairly compatible with my curiosity-driven approach to life. I've trained myself (or been lucky enough) to think of "how will this work out in reality?" as an important thing to be curious about.

Comment by dagon on Taking the Outgroup Seriously · 2020-02-17T16:09:57.727Z · score: 4 (2 votes) · LW · GW

This is the point at which "ingroup-outgroup" has to get more nuanced. Groups have sub-groups, and it's absolutely NOT the case that the arguments used "internally" are all that "truthful". There's definitely a tendency to use DIFFERENT arguments with different groups (in the example, "God says" with one group and "no way to avoid bad incentives" with another), but the actual true reason may well be "it's icky". Or rather, a mix of all of the given reasons - most of the time people (ingroup or out-) don't actually think they're lying when they use different reasons for a demanded/recommended policy, just that they're focusing on different valid elements of their argument.

Comment by dagon on Taking the Outgroup Seriously · 2020-02-16T19:42:10.990Z · score: 2 (1 votes) · LW · GW

This is good advice, but nowhere near simple to implement. Much of the public writing on group-identity topics does not include enough foundational agreement on models and assumptions for it to actually make sense. Most people (including your ingroup, if you apply the same standards) are just awful at explaining what they believe, let alone why they believe it.

Note: the inverse is perhaps one way to actually pursue this. "Don't take your ingroup seriously". You're just as prone to unexamined assumptions and faulty logic as your counterpart in one of your outgroups. Identifying where your peers are simply not seeing things clearly can help you in finding the topics on which communication is hard and tends to cleave into social groups, rather than shared examination across diverse backgrounds.

Comment by dagon on Why do we refuse to take action claiming our impact would be too small? · 2020-02-14T20:27:30.804Z · score: 3 (2 votes) · LW · GW

In that scenario, the strategy is probably to stop the leak or find a new pool, rather than trying to coordinate to fill it fast enough. Or perhaps just to enjoy the remaining water before we all die.

I'm reminded of the old military recommendation: "Sir, what should I do if I step on a mine?" "The recommended strategy is to leap 10 feet into the air and splatter yourself over a wide area".

Comment by dagon on ofer's Shortform · 2020-02-14T17:27:50.755Z · score: 2 (1 votes) · LW · GW

This doesn't require AI; it happens anywhere that competing prices are easily available and fairly mutable. AI will be no more nor less liable than humans making the same decisions would be.

Comment by dagon on Suspiciously balanced evidence · 2020-02-12T20:39:01.214Z · score: 2 (1 votes) · LW · GW

I don't know whether it's a good or not-good explanation, but in a lot of discussions of these examples, a different question gets answered than the one you're asking. But also #1, and also an okay version of not-so-good: we cap our probability estimates with plausibility heuristics. If I'd be surprised but not shocked to see something, its probability is not less than 5%. If I'd be shocked, but not question everything I know, it's not less than 0.1% (for reference classes with a few-per-lifetime incidence of resolution).
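A toy rendering of those floors (the thresholds are the ones above; the function name and reaction labels are mine, for illustration only):

```python
# Hedged sketch of the plausibility floors described above.
def floored_probability(raw_estimate: float, reaction: str) -> float:
    floors = {
        "surprised_not_shocked": 0.05,       # surprised but not shocked: >= 5%
        "shocked_worldview_intact": 0.001,   # shocked, but not questioning everything: >= 0.1%
    }
    return max(raw_estimate, floors.get(reaction, 0.0))

print(floored_probability(0.01, "surprised_not_shocked"))  # clamped to 0.05
```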

"Human activity has caused substantial increases in global mean surface temperature over the last 50 years and barring major policy changes will continue to do so over at least the next 50 years -- say, at least half a kelvin in each case."

Generally, people seem to answer about whether policy changes are possible or useful, even when they claim to be discussing causality and prediction. This topic is rife with arguments-as-soldiers, and it's unsurprising that probability estimates are hard to find.

"On average, Christians in Western Europe donated a larger fraction of their income last year to non-religious charitable causes than atheists."

Wow, never thought about this, but I have trouble with the categorizations - I know a whole lot of semi-Christians (and Jews and Muslims) who could be counted in either camp, and a whole lot of charities which have religious origins but are currently not affiliated. In any case, I'd wonder why you were asking and how my prediction would change any behavior.

"Over the next 10 years, a typical index-fund-like basket of US stocks will increase by more than 5% per annum above inflation."

This one is better. Define the basket and inflation index and I'll make a bet. S&P 500 vs US CPI, I'd say about 15% to beat it by 5% per year (roughly 63% compounded over the decade). Even so, I'd expect the joint prediction about inflation and return on investment to cause confusion. This is one where my proposed plausibility heuristic is putting a cap on my confidence level. For "reasonable" numbers, I don't think I can be more than 95% confident.
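For concreteness, the arithmetic behind that parenthetical (a quick check, nothing more):

```python
# Ten years of beating inflation by 5% per year, compounded:
hurdle = 1.05 ** 10 - 1
print(f"{hurdle:.1%}")  # ~62.9%, i.e. roughly 63% real growth over the decade
```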

"In 2040, most global electricity generation will be from renewable sources."

The definition of "renewable" makes this one tricky, as do questions about whether to include energy that could be electric or not (heating, for instance). Is nuclear included? I think we could narrow it down to a specific prediction that one could have a prediction market on. I don't know enough to bet, myself. And I won't be shocked in either direction, so my estimates can't be very strong.

Comment by dagon on Why do we refuse to take action claiming our impact would be too small? · 2020-02-10T23:48:45.137Z · score: 4 (3 votes) · LW · GW

Possible explanations:

1) Many impacts are not just small, but effectively zero, or even slightly negative. Spending more effort/resources to do things that APPEAR good but actually don't matter is a net harm.

2) Some items have threshold or nonlinear impact such that it's near-zero unless everybody (or at least more than are likely) does them. This gets to second-order arguments of "my example won't influence the people who need to change", but the argument does recurse well.

3) The world is, in fact, full of irresponsible people. Unfortunately, it's mostly governed by those same people.

4) Reasons given for something don't always match the actual causality. "It wouldn't matter" is more socially defensible than "I value my comfort over the aggregate effect".

5) Relative rather than absolute measures - "I'm a sucker" vs "the world is slightly better".

6) The https://en.wikipedia.org/wiki/Bystander_effect may not be a real thing, but there is an element of social proof in the idea that if most people are doing something, it's probably OK.

Comment by dagon on Some quick notes on hand hygiene · 2020-02-10T22:48:06.826Z · score: 7 (3 votes) · LW · GW

BTW, regardless of my skepticism on what the perfect balance of time spent hand-washing is, I will note that this post and discussion have done at least two things:

1) caused me to increase my hand-washing by about 75% (spending about 33% more time, about 33% more often).

2) caused me to spend more time reading and thinking about it than just washing my hands a lot more would have taken in a year.

So, I'm either more knowledgeable and safer, OR wasting even more time than I otherwise would. Thanks?

Comment by dagon on Did AI pioneers not worry much about AI risks? · 2020-02-10T18:29:57.524Z · score: 9 (6 votes) · LW · GW

Would you say it's taken particularly seriously NOW? There are some books about it, and some researchers focusing on it. A very tiny portion of the total thought put into the topic of machine intelligence.

I think:

1) About the same percentage of publishing on the overall topic went to risks then as now. There's a ton more on AI risks now, because there are 3 orders of magnitude more overall thought and writing on AI generally.

2) This may still be true. Humans aren't good at long-term risk analysis.

3) Perhaps more than 60 years of thinking will be required. We're beginning to ask the right questions (I hope).

Comment by dagon on Source of Karma · 2020-02-09T21:25:17.255Z · score: 2 (1 votes) · LW · GW

Yup, voting systems with no mechanism attached (nothing to do with the votes, no outcome decided by them) carry very little information, for very little cost. It's unclear what, if any, changes would significantly increase the information without significantly increasing the cost.

Comment by dagon on Source of Karma · 2020-02-09T16:01:53.253Z · score: 4 (2 votes) · LW · GW

I'd much rather have less focus on meaningless internet points around here (and in most places). Focus on collecting good comments that help you update your beliefs and models, in order to be less wrong.

Note that there _is_ a weighting that happens - higher-karma people give/take more than 1 karma with their votes. It's not specific to your evaluation of them, nor all that visible, but it's there.

Comment by dagon on Some quick notes on hand hygiene · 2020-02-07T20:39:16.776Z · score: 3 (2 votes) · LW · GW

There's enough variance that relative recommendations ("more often", "more completely", "more time spent") are difficult to take seriously.

I wash my hands maybe 2-3 times per day and apply Purell (the brand provided at work) another 1-2 times. I don't spend over 10 seconds, though I do try to get all surfaces. Will the marginal improvement of adding one instance or 5 seconds of additional scrubbing noticeably reduce my risk? I can't find any study that has this level of granularity.

Comment by Dagon on [deleted post] 2020-02-07T18:55:35.062Z

"The brain is the most important organ" says the brain.

Comment by dagon on "But that's your job": why organisations can work · 2020-02-07T17:27:42.647Z · score: 2 (1 votes) · LW · GW

The difference between "best systems win" and "worst systems lose" is only one of timeframe. The two differ in filter effectiveness per iteration on the way to equilibrium.

Comment by dagon on Stuart_Armstrong's Shortform · 2020-02-07T17:20:04.010Z · score: 4 (2 votes) · LW · GW

Oh, wait. I've been treating preferences as territory, though always expressed in map terms (because communication and conscious analysis are map-only). I'll have to think about what it would mean if they were purely map artifacts.

Comment by dagon on TurnTrout's shortform feed · 2020-02-07T17:13:50.111Z · score: 2 (1 votes) · LW · GW

Ah, I took the "just" in "just a lower bound on lost surplus" as an indicator that it's less important than other factors. And I lightly believe (meaning: for the cases I find most available, I believe it, but I don't know how general it is) that the supply elasticity _is_ the more important effect of such distortions.

So I wanted to reinforce that I wasn't ignoring that cost, only pointing out a greater cost.

Comment by dagon on Long Now, and Culture vs Artifacts · 2020-02-07T00:11:46.165Z · score: 3 (2 votes) · LW · GW
Cultures usually last hundreds of years. Physical artifacts are much more reliable ways to affect people 10,000 years from now. 

This seems simply wrong. Identifiable cultures last hundreds of years, but cultural impact on successive cultures can easily be thousands. Can you point to any physical artifact from even a few thousand years ago that has any relevance today? I can see the argument that technologies have impact, but I argue that's mostly cultural impact.

And it's not clear that either culture OR artifacts have predictable or useful effects 10K years out.

Comment by dagon on TurnTrout's shortform feed · 2020-02-07T00:02:40.045Z · score: 2 (1 votes) · LW · GW

Lost surplus is definitely a loss - it's not linear with utility, but it's not uncorrelated. Also, if supply is elastic over any relevant timeframe, there's an additional source of loss. And I'd argue that for most goods, over timeframes smaller than most price-fixing proposals are expected to last, there is significant price elasticity.

Comment by dagon on "But that's your job": why organisations can work · 2020-02-06T23:48:44.910Z · score: 6 (3 votes) · LW · GW

Social pressure to conform (in doing a recognizable job to get respect from people around you) is a great explainer for mazes. They tend to be cases where it's hard to tell if you're doing the job.

Comment by dagon on Stuart_Armstrong's Shortform · 2020-02-06T16:10:27.544Z · score: 4 (2 votes) · LW · GW

This seems related to scope insensitivity and availability bias. No amount of money (that I have direct control of) is worth one human life (in my Dunbar group). No money (which my mind exemplifies as $100k or whatever) is worth the life of my example human, a coworker. Even then, it's false, but it's understandable.

More importantly, categorizations of resources (and of people, probably) are map, not territory. The only rational preference ranking is over reachable states of the universe. Or, if you lean a bit far towards skepticism/solipsism, over sums of future experiences.

Comment by dagon on Money isn't real. When you donate money to a charity, how does it actually help? · 2020-02-03T22:56:59.082Z · score: 2 (1 votes) · LW · GW
boggling at money, and at altruism, and at the way the two interrelate

This would not be an incorrect summary of my confusion. The difference in realm between social and fiscal motivations is fairly well-studied, including some counter-intuitive things like taking payment for transgressions causing more incidence of them, since you've removed the unquantified guilt. And yet, we often exhort people to donate cash rather than time or changing behaviors in other ways.

Comment by dagon on Money isn't real. When you donate money to a charity, how does it actually help? · 2020-02-03T22:26:44.959Z · score: 8 (3 votes) · LW · GW
(There's an unstated assumption here that the motivation is to do something they wouldn't have done anyway. This assumption may very well be correct.) This sounds like a possible flaw in how things work, even if they don't work quite like that.

Very good point - I hadn't considered this aspect in my original question. Cash donations may not ONLY cause a behavior that wasn't previously motivated by the recipient; they may also act via a mechanism of letting the recipient follow their preferences (which they've shown to be aligned with your altruistic desires) with fewer outside constraints.

Donating money to a food bank may not (only) motivate employees to distribute food any more than they otherwise prefer; it may make them more able to focus on their preferred amount of food-distribution.

Put another way, some of it may be more like UBI for people performing activities you support than direct motivation of those activities.

Comment by dagon on A point of clarification on infohazard terminology · 2020-02-02T21:03:24.552Z · score: 4 (2 votes) · LW · GW

I'd recommend just using https://wiki.lesswrong.com/wiki/Information_hazard as a base ontology. The knowledge of Swedish Fish availability would be a temptation or distraction hazard.

I'd reserve "memetic hazard" for information hazards related to beliefs passed via a memetic route (as a metaphor from genetic information). These may be true or false (or may be models or belief-systems that are neither true nor false), but are "catchy" in terms of propagating the ideas in humans. To my mind, it's about the transmission and encoding of the information, not the effect on the receiver. There can be memetic temptation hazards and memetic biasing hazards, for instance.

Comment by dagon on Money isn't real. When you donate money to a charity, how does it actually help? · 2020-02-02T20:51:06.257Z · score: 7 (4 votes) · LW · GW

I may have taken too much background as common knowledge. I appreciate the "money is as real as baseball" reminder, especially as nobody recommends donating baseball to improve the lives of others. I absolutely favor the model of money as a fungible favor system, and it shares some characteristics of that, but it isn't that either. Personal favors have lasting impact on the relationship, and cannot be accounted perfectly, so there's always some residual debt (IMO, this is a feature). Money is necessarily transactional in nature. Once both sides of the exchange are complete, there is no remaining effect (except perhaps the chance of future money).

Altruism is (generally) seeking deeper value and behavioral change. This seems like a very deep contradiction, but I suspect I misunderstand altruism more than I do money.

Comment by dagon on Existing work on creating terminology & names? · 2020-02-02T17:30:16.772Z · score: 4 (2 votes) · LW · GW

A kind-of-famous programming quote (Martin Fowler credits Phil Karlton for it, but it's likely much older):

There are only two hard things in Computer Science: cache invalidation and naming things.

Really, naming is idea compression in hard mode. You need to find very short strings that encompass your current and future desire for this category, in a semi-adversarial environment where other people will misinterpret your meaning by taking it too literally or not literally enough.

Their clustering of ideas is slightly different from yours, so the label will "naturally" not align across any two people, and may really hit different clusters for some. It's going to take tens of thousands of words to debate which is the "true" meaning, and the original namer isn't really in control of which ideas win.

I very much recommend Habryka's and Petter's idea: don't start with naming. First think about idea organization and transmission. Some things you should probably NOT name, in order to avoid compression artifacts.

Comment by dagon on Create a Full Alternative Stack · 2020-02-01T22:00:24.756Z · score: 2 (1 votes) · LW · GW

Goodhart applies to any use of an alignment indicator, not just funding.

Comment by dagon on purpleposeidon's Shortform · 2020-02-01T21:58:12.896Z · score: 2 (1 votes) · LW · GW

I think you'd need to define "perfect", "good", and "silly" before I really know what you're claiming. By a naive interpretation of the words, all three are simply wrong. Almost all humans are born helpless and screaming, causing great physical pain to their mothers. Many things please while causing large amounts of future harm. Many things that sound silly at first turn out to be useful or necessary. Besides, I prefer silly.

Comment by dagon on Matthew Barnett's Shortform · 2020-02-01T21:54:09.104Z · score: 2 (1 votes) · LW · GW

You don't need anywhere near as stark a contrast as this. In fact, it's even harder if the agent (like many actual humans) has previously considered suicide, and has experienced joy that they didn't do so, followed by periods of reconsideration. Intertemporal preference inconsistency is one effect of the fact that we're not actually rational agents. Your question boils down to "when an agent has inconsistent preferences, how do we choose which to support?"

My answer is "support the versions that seem to make my future universe better". If someone wants to die, and I think the rest of us would be better off if that someone lives, I'll oppose their death, regardless of what they "really" want. I'll likely frame it as convincing them they don't really want to die, and use the fact that they didn't want that in the past as "evidence", but really it's mostly me imposing my preferences.

There are some with whom I can have the altruistic conversation: future-you AND future-me both prefer you stick around. Do it for us? Even then, you can't support any real person's actual preferences, because they don't exist. You can only support your current vision of their preferred-by-you preferences.

Comment by dagon on Create a Full Alternative Stack · 2020-01-31T20:19:50.545Z · score: 5 (2 votes) · LW · GW

Hmm. This seems to ignore the underlying principal-agent problem that is a partial cause of mazes: most (perhaps all, perhaps including you, certainly including me) people aren't willing to truly dedicate their entire life to a thing. There just aren't enough people who are willing/able/whatever to ignore all interpersonal competition for some of the slack (whether that be money or time or other non-shared-goal-directed value).

Comment by dagon on High-precision claims may be refuted without being replaced with other high-precision claims · 2020-01-31T00:40:09.821Z · score: 5 (3 votes) · LW · GW

Note that degree and type of refutation matters a whole lot. Many theories CAN still be applicable to a more restricted set of predictions, or can be valuable in making less precise predictions. "It's the best we have" isn't sufficient, but "it's the best we have AND it's good enough for X" could be.

There are TONS of models that have been proven wrong, but still allowed excellent progress, and still have validity in a subset of cases (and are usually easier to use than the more complete/precise models).

I suspect the phrase "high-precision" is doing a lot of work in your post that I haven't fully understood. Almost all of your examples don't require universality or exception-free application (what I take your "high-precision" requirement to mean), only preponderance of utility in many commonly-encountered cases.

For some of them, a very minor caveat "this may be wrong; here are signs that you might be misapplying it" would redeem the theory while changing hardly any behavior.

Comment by dagon on Mod Notice about Election Discussion · 2020-01-31T00:35:55.686Z · score: 2 (1 votes) · LW · GW

Can we have a visible thread to watch that gets a lightweight pointer whenever a post is hidden this way? I'm intensely curious to find out if we successfully defended against a swarm, or if the swarm is just mostly elsewhere.

Comment by dagon on Raemon's Scratchpad · 2020-01-30T21:40:23.272Z · score: 2 (1 votes) · LW · GW

I'd get rid of strong upvotes as well, or perhaps make voting nonlinear, such that a weak/strong vote changes in value based on how many voters expressed an opinion (as it kind of does over time - strong votes only matter a small bit when there are 20+ votes cast, but if they're one of the first or only few to vote, they're HUGE). Or perhaps only display the ordinal value of posts and comments (relative to others shown on the page), with the actual vote values hidden in the same way we do number of voters.
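One possible shape for that nonlinearity, purely as illustration (this is not LessWrong's actual karma formula, and the damping constant is arbitrary):

```python
# Illustrative only: damp each marginal vote as more voters weigh in, so
# early votes are HUGE and a strong vote among 20+ existing votes moves
# the score only a little.
def marginal_vote_weight(base_weight: float, votes_so_far: int) -> float:
    return base_weight / (1.0 + votes_so_far / 5.0)

print(marginal_vote_weight(8.0, 0))   # first strong vote: 8.0
print(marginal_vote_weight(8.0, 25))  # same vote after 25 voters: ~1.3
```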

The vast majority of my comments get 5 or fewer voters. This is data in itself, of course, but it means that I react similarly to Wei when I see an outsized change.

Comment by dagon on Dagon's Shortform · 2020-01-30T18:12:58.652Z · score: 2 (1 votes) · LW · GW

Expand on that a bit - is this an employment/social maze for lawyers and legislators, or a bureaucratic maze for regular citizens, or something else? I'll definitely add policing (a maze for legal enforcers in that getting ahead is almost unrelated to stopping crime).

Comment by dagon on Dagon's Shortform · 2020-01-30T16:57:46.938Z · score: 6 (3 votes) · LW · GW

Types and Degrees of Maze-like situations

Zvi has been writing about topics inspired by the book Moral Mazes, focused on a subset of large corporations where politics rather than productivity are seen as the primary competition dimension and life focus for participants. It's not clear how universal this is, nor what size and competitive thresholds might trigger it. This is just a list of other Maze-like situations that may share some attributes with those, and may have some shared causes, and I hope solutions.

  • Consumerism and fashion.
  • Social status signaling and competition.
  • Dating, to the extent you are thinking of eligibility pools and quality of partner, rather than individual high-dimensional compatibility.
  • Actual politics (elected and appointed-by-elected office).
  • Academic publishing, tenure, chairs.
  • Law enforcement. Getting ahead in a police force, prison system, or judiciary (including prosecutors) is almost unrelated to actually reducing crime or improving the world.
  • Bureaucracies are a related, but different, kind of maze. There, it's not about getting ahead; it's about ... what? They share the feeling of not being primarily focused on positive world impact, but maybe not other aspects of the pathology.

Comment by dagon on Potential Ways to Fight Mazes · 2020-01-30T01:19:03.480Z · score: 7 (3 votes) · LW · GW

I'm not sure we've identified enough examples and counterexamples to assign relative weights to these remedies, but it's a pretty good list.

I will note that we have to understand whether and why a government isn't itself a maze before we propose it as a solution to other mazes. Healthcare, as an example, from a consumer standpoint in the US and UK (the only two I've had experience with), feels more maze-like than the average consumer-facing megacorporation-dominated industry.

Comment by dagon on The Skewed and the Screwed: When Mating Meets Politics · 2020-01-30T01:16:15.357Z · score: -2 (4 votes) · LW · GW

Obvious solution: bisexuality. Or bipartisanship. Or anything more epistemically humble than "I know the truth on all topics and I'm lonely because everyone else is wrong on at least one dimension".

Comment by dagon on Rationalist prepper thread · 2020-01-30T00:00:29.628Z · score: 10 (6 votes) · LW · GW
Firstly, we should act in the way, which should be good if everybody will act in the same way.

Really? I'd generally prefer to act in the ways that will be most successful given my predictions of other people's actual behavior. The question isn't "how should a population of Hofstadterian superrationalists, who will independently figure out the best way to cooperatively act, prepare?" It's "how should we prepare?", and I infer you really mean "how do you intend to act and how do you recommend I act?".

I don't currently see a need to prepare any differently than normal life: I keep about a week's very comfortable supplies (which is ~3 weeks rationed non-water supplies) for my family, and a somewhat extensive first-aid kit. I recommend the same to you (and anyone else who asks). If you're particularly cautious or altruistic, you can multiply by the number of neighbors you want to care for in a disaster. I also recommend a "go bag" with some portable supplies and money so you can evacuate very rapidly if necessary (and pre-commit to "in an emergency evacuation, we're not taking anything we can't carry - we set any pets free outside, we lock the doors and GO"). It's a very personal decision whether you want weapons available and the required training to be safe and effective with them.

In most big cities, any more preparation than that is dominated by significant lifestyle change and moving somewhere less populated. A cabin or vacation home you can pre-evacuate to if it looks MUCH worse than today would be ideal. Any extended collapse (say, no services for 4+ weeks) is just going to kill most people who didn't get out.

For my current understanding of this outbreak, the normal prevention mechanisms (wash hands a lot, don't touch your face, etc.) are sufficient as well. If it gets noticeably worse, I'll add gloves and drive to work rather than using public transit for a while.

Comment by dagon on Moral public goods · 2020-01-27T23:40:48.121Z · score: 8 (3 votes) · LW · GW

The public goods idea _does_ help explain things if we think there's a threshold issue (not valuable unless a certain amount is redistributed) _AND_ a coordination problem such that many people would like to donate, but only if they know the total is over the threshold.

It may also explain things if the motivation for an altruist not donating more (and thereby not getting full utility) is some form of punishment for free riders (those who don't donate, but still get value).

I agree that the more likely explanation is that poverty altruism isn't linear (in utility with money donated) for any individual, and most people are, in fact, giving at the level they want to give. They would like to get some "free" utility by encouraging/forcing others to give more.

This isn't at odds with a public goods model - there are lots of public goods that go unprovided because they're not worth it to enough people to provide them privately or mandate them publicly. "This is a public good; therefore government must do it" is not a valid argument.