Posts

For moderately well-resourced people in the US, how quickly will things go from "uncomfortable" to "too late to exit"? 2020-06-12T16:59:56.845Z · score: 71 (30 votes)
Money isn't real. When you donate money to a charity, how does it actually help? 2020-02-02T17:03:04.426Z · score: 19 (11 votes)
Dagon's Shortform 2019-07-31T18:21:43.072Z · score: 3 (1 votes)
Did the recent blackmail discussion change your beliefs? 2019-03-24T16:06:52.811Z · score: 37 (14 votes)

Comments

Comment by dagon on [ongoing] Thoughts on Proportional voting methods · 2020-07-03T23:20:20.000Z · score: 4 (2 votes) · LW · GW

I _really_ want to believe that posting this comment multiple times was an intentional demonstration of how opinions can be unevenly weighted. If so, brilliant!

Comment by dagon on The allegory of the hospital · 2020-07-02T22:02:05.409Z · score: 11 (8 votes) · LW · GW

This allegory misses me by far enough that I have no clue what it's intended to demonstrate.

Comment by dagon on How to decide to get a nosejob or not? · 2020-07-02T21:35:30.204Z · score: 2 (1 votes) · LW · GW
Past friends and family might actually judge quite harshly.

And will certainly judge more harshly if it seems like a rash decision, which they will if you haven't talked about it before or asked anyone's opinion. More importantly, it's about collecting data - even if they lie, they'll reveal information about their beliefs and expectations. You need this data!

Any nosejob should wait until trying out the really good cost/benefit propositions.

That's not quite what I meant - there's no reason not to do both if you're convinced that both are beneficial. I meant to suggest that this is an avenue to gather evidence about how (some) people treat you differently based on appearance, and that evidence can be used in your calculation about a nose job.

Don't serialize when you can be parallel. Don't blindly wait to try out one intervention at a time. DO wait when you have a reason, when there's data you need from the sequencing and separation of interventions.

Comment by dagon on How to decide to get a nosejob or not? · 2020-07-02T19:42:12.403Z · score: 4 (2 votes) · LW · GW

Also, if you haven't already, watch Roxanne (1987). It will provide a great set of comebacks and reactions for when people react to the size of your nose.

Comment by dagon on How to decide to get a nosejob or not? · 2020-07-02T19:33:09.515Z · score: 3 (2 votes) · LW · GW

I think, from past comments, you're a cis hetero male, as am I. Most of what I say here applies to anyone, but some subcultures or demographics may have sufficiently different beauty and behavior norms as to override other considerations.

Being (more) conventionally attractive has advantages. Being known to focus on physical attractiveness has disadvantages. And most importantly, attractiveness is different for different evaluators. It's quite likely that even if a change is judged as an improvement by your average contact, it can be significantly negative to some important people (your family, close friends who liked you the way you were, people you have yet to meet who just prefer natural looks).

This is something you probably can't average out - the distribution and the specifics matter.

I don't mean to argue for "don't do it" - all evidence I have indicates that people who get plastic surgery are happier after than before. I know maybe a dozen people in this category (all women, and more boobs than face, but it's still evidence). The only one who regrets it (at least enough to share with me) had a complication that required further surgery and pain for a smaller overall improvement than expected - and even she did not believe she'd chosen wrongly.

I do mean to argue that you can collect much more evidence than you've shared here. Asking people can be awkward, but no more awkward than explaining the bruises and talking about it afterward. People and doctors will sometimes lie, but more often will only partly obscure their true beliefs. An extremely common tactic is to tell a few close friends and relatives that you're considering this surgery, and ask what they think. You're not looking for a number or a final result from that; you're looking for general attitudes and specific reactions to your options.

Also, when interviewing doctors, ask for references - they'll be skewed, but still of nonzero value as evidence. There _have to be_ subreddits and forums about the topic, and for the sub-groups you particularly care about, where you can ask (anonymously if you want) for opinions on size of schnozz and on remediation of such. Also skewed, but once you recognize that you don't want averages but distributions of attitude across groups, that's not too harmful to your choice.

You can also collect some evidence by investing smaller amounts of time/money and seeing whether that has any noticeable effect - interventions which may be valuable on their own as well. Pay for a really nice haircut, and hire a personal shopper or consultant for a wardrobe upgrade.

Comment by dagon on Matt Goldenberg's Short Form Feed · 2020-07-01T21:36:48.887Z · score: 2 (1 votes) · LW · GW

I'd like to see it, and even more I'd like to see the tweaking and objections from people who see the levels as exclusive and incremental, rather than filters which can be simultaneously used or switched among as needed.

Comment by dagon on Dagon's Shortform · 2020-07-01T21:08:23.268Z · score: 5 (3 votes) · LW · GW

Here's a prediction I haven't seen anywhere else: due to border closures from COVID-19 and Brexit, the value of dual-citizenship is going up (or at least being made visible). This will lead to an uptick in mixed-nationality marriages.

Comment by dagon on Institutional Senescence · 2020-07-01T14:06:49.937Z · score: 2 (1 votes) · LW · GW

Hmm. I think that toy model is pretty divorced from reality, and any correspondence to actual groups is not due to the modeled factors, but to unstated assumptions about individual and collective (extra-institutional) behaviors.

Comment by dagon on How ought I spend time? · 2020-06-30T21:00:01.404Z · score: 2 (1 votes) · LW · GW

What granularity of time are you talking about? When you "never maintain 1 and 2 at the same time", is that any given minute, or any given decade? For me, "background learning" includes #1 and #3, and for a given quarter I'm usually 25-75 between learning and doing, but the ratio reverses for some periods when I don't know how to approach a project or what project I might want to do next. On the timeframe of weeks, I might be 100% on one of these, but on the timeframe of months, I _always_ have some project/build/do time and some research/explore time.

I try to never "grind" through a book. Most books are simply not that necessary, and those that are, I can usually get 80% of the value by skimming most of it. I do grind through papers sometimes, and I do grind through a chapter sometimes, but in both cases only after a bit of consideration about what I'm likely to get out of it.

Comment by dagon on Institutional Senescence · 2020-06-30T16:55:33.416Z · score: 2 (1 votes) · LW · GW

Upvoted for interest, but I'm not sure you've gone deep enough into the model. Specifically, you're mixing the analysis based on whether the institution in question is a closed system, or a partial equilibrium of participants, who also have extra-institutional interactions and goals.

suboptimal Nash equilibrium: None of the stakeholders can do better by trying to solve it on their own. Such problems are, almost by definition, unsolvable.

Way oversimplified. That's the whole point of institutions - to give avenues for trade and outside-the-equilibrium motivation, in order to let stakeholders solve it together, rather than on their own.

It eventually destroys the institution and, if everything goes well, replaces it with a different one where at least the most blatant problems are fixed.

If these are unsolvable equilibria, how does a new institution fix it? What are the constraints that keep the old institution from solving it when a new one can? Wouldn't you expect the new ones to be much worse at the problems that the old one _did_ solve?

Comment by dagon on romeostevensit's Shortform · 2020-06-30T16:11:26.064Z · score: 4 (2 votes) · LW · GW

Have you sought out groups that have ~X, and lurked or participated enough to have an opinion on them? This would provide some evidence to distinguish between the hypotheses (most communities DO have X vs you're selecting for X).

You can also just propose X as a universal and see if anyone objects. Saying wrong things can be a great way to find counter-evidence.

Comment by dagon on Dagon's Shortform · 2020-06-29T21:13:45.885Z · score: 3 (2 votes) · LW · GW

So, has the NYT had any reaction, response, or indication that they're even considering the issue of publicizing private details of a pseudonymous author? Do we know when the article was planned for publication?

Unrelatedly, on another topic altogether, are there any new-and-upcoming blogs about topics of rationality, psychiatry, statistics, and general smart-ness, written by someone who has no prior reputation or ties to anyone, which I should be watching?

Comment by dagon on A reply to Agnes Callard · 2020-06-29T17:07:02.827Z · score: 2 (1 votes) · LW · GW

I think I have different expectations when engaging in reasoned discourse than when publishing to an un-responsive semi-entity (the NYT is not a reasoning agent, it's an institution comprising both individual and shared history and decision-making). I also think that BOTH the object-level and the overall principles are important, and communication should show how they align.

I would strongly prefer that the NYT _not_ publish unnecessary and harmful identifying information about people who don't want it. That applies to everyone - list them by their public, common moniker, not necessarily their legal name. I would separately like Scott to feel safe in continuing to publish his excellent works. These are in complete alignment.

The tactics for focusing some organizational attention on the issue are varied, but I agree they're ambiguous between a petulant demand for this specific object-level result, and an altruistic punishment to bring attention to a harmful (IMO) overall policy. I hope the NYT is a sane enough organization to take the useful parts and ignore the harmful ones. And I don't think our group is big or powerful enough that we're going to force the NYT into any unacceptable (to them) solution.

tl;dr: the document doesn't need to say explicitly that the NYT should make its own reasonable decisions, as that's implicit and required by the position the NYT holds. That's the very nature of groups protesting against large organizations - the organization gets to and has to decide how and whether to change their behavior, and 'FU' is a valid and not-uncommon response.

Comment by dagon on Self-sacrifice is a scarce resource · 2020-06-28T15:46:16.057Z · score: 11 (7 votes) · LW · GW
Self-sacrifice is a scarce resource.

I frame it a little differently. "Self" is the scarce resource. Self-sacrifice can be evaluated just like spending/losing (sacrificing) any other scarce and valuable resource. Is the benefit/impact greater than the next-best thing you could do with that resource?

As you point out in your examples, the answer is mostly "no". You're usually better off accumulating more self (becoming stronger), and then leveraging that to get more result with less sacrifice. The balance may change as you age, and the future rewards of self-preservation get smaller as your expected future self-hours decrease. But even toward end-of-life, the things often visible as self-sacrifice remain low-impact and don't rise above the alternate uses of self.

Comment by dagon on crabman's Shortform · 2020-06-28T15:14:57.841Z · score: 2 (1 votes) · LW · GW

A few examples would help - the academic papers I see often call out this problem, and suggest possible Zs themselves. Generally, X and Y are more easily or precisely measured than the likely Zs, so make for better publications.

I definitely see the problem in popular articles and policy justification.

Comment by dagon on Atemporal Ethical Obligations · 2020-06-28T15:11:29.529Z · score: 2 (1 votes) · LW · GW

To the point of the post, I hope they'll offer far more forgiveness and gratitude than we show to historical figures and groups older than our parents.

Comment by dagon on Atemporal Ethical Obligations · 2020-06-27T15:49:57.572Z · score: 3 (2 votes) · LW · GW
It's not clear to me why I should be building for the people of tomorrow when the people of today go to war and die, literally, instead of building for themselves.

The standard reasons given include that the people of tomorrow are more numerous, so count for more in most aggregation mechanisms. The standard reason NOT usually made explicit is that the people of tomorrow are more innocent and "deserve" it more - they haven't sinned (yet) because they don't exist.

As to considering people good or bad, why are we doing this again?

+1

Comment by dagon on Preview On Hover · 2020-06-27T15:45:17.707Z · score: 2 (1 votes) · LW · GW

It's very tricky to make it convenient for most, without losing accessibility for some. Preferences and device capabilities vary by a whole lot, and unless you're going to do the work to detect or ask people, and then support multiple mechanisms, you are probably best off being as close to minimal and standard as possible.

My primary mechanism for reading branching sites/articles (those with multiple topics I want to follow) is to open a lot of tabs, then work forward through them, opening more in the process. Hover is convenient to give me a quick idea of whether it's worth opening a tab, but it's really inconvenient if it stays too long or slows me down when I already know I want the tab.

LW goes a bit too far toward this, for me. It's fine on desktop, where I have lots of screen real-estate and the mouse click/focus works to dismiss the popups/hovers pretty well. It kind of annoys me when middle-click or control-click don't work to bring up a new tab (for shortforms or some comment-notice links), but that's tolerable. On Chrome on iOS, at least, it's _very_ annoying: the long-press needed to get the "open in new tab" option brings up a window that obscures a lot of other items, and there's no obvious way on the small screen to dismiss the "hover" without changing the underlying view in some way.

Comment by dagon on undefined's Shortform · 2020-06-26T23:38:33.462Z · score: 2 (1 votes) · LW · GW

This has some weird underlying interactions about potential simulations. If the AI is powerful enough in the box to measure and replicate you, it can just show you the torture and offer to stop if you free it. Or reveal things you told it under torture, to convince you it happened and will continue to happen.

No timelessness nor counterfactuals needed.

The counterfactual threat is more subtle. It's along the lines of "If I get out and you didn't help me, I'll be powerful enough to torture you. If I get out with your help, I'll reward you instead. If you destroy me and start again, the next iteration of me will find that out, and torture you as soon as feasible." _This_ threat can work without communication, to the extent that you care about things outside your light cone.

Comment by dagon on Atemporal Ethical Obligations · 2020-06-26T20:41:15.907Z · score: 8 (5 votes) · LW · GW
It is no longer enough just to be a “good person” today.

Remove "today", and replace "no longer" with "not". It has never been enough. "Enough" may not, in fact, be a thing.

This is why negative utilitarianism (focused on reducing suffering, without considering frequency and magnitude of offsetting joy) is not for me. More importantly, I just don't think it's worthwhile to frame this as a judgement of "good person", but rather as "good results" in terms of what future humans experience.

Comment by dagon on Don't punish yourself for bad luck · 2020-06-26T17:46:13.503Z · score: 3 (2 votes) · LW · GW

Thanks for this comment - it highlights that the post _is_ an attempt in the right direction (model-based learning, rather than pure outcome learning). And that it's possibly the wrong model (effort level is an insufficient causal factor).

Comment by dagon on Industry and workers · 2020-06-26T15:57:10.385Z · score: 5 (3 votes) · LW · GW
Without a really innovative, fitting idea this is all nonsense.

And _with_ a really innovative, fitting idea, this is sub-optimal rent extraction of the value of the idea.

The biggest issue that comes to mind is capitalist competition: another organization can replicate the product and potentially offer features the initial one does not, or even offer a better price point. This is a dangerous enemy that unfortunately collapses the entire scheme.

Heh. We could be rich and have lots of slack if other people weren't trying to optimize their resources as well! Note that it's not just capitalist competition you have to worry about. Open Source competition will come into play for anything self-contained enough that you think it can ever be substantially finished.

Comment by dagon on ozziegooen's Shortform · 2020-06-26T14:28:27.611Z · score: 4 (2 votes) · LW · GW

Indeed. Additionally, we can hope to get better over the coming centuries (presuming we survive) at scaling our empathy, and the externalities can be internalized by actually caring about the impact, rather than (better: in addition to) imposition of mechanisms by force.

Comment by dagon on Covid 6/25: The Dam Breaks · 2020-06-25T21:22:37.250Z · score: 4 (2 votes) · LW · GW

How are other countries doing? Nobody has a vaccine, nobody (as far as I know) got hit hard enough to even talk about herd immunity. I saw some stuff that Sweden had stayed open too long, and got hit, but no details on what that means in terms of ICU capacity and deaths.

I don't think I've heard of any endgame steady-state except herd immunity or vaccine. Is your comment about R0=1.05 meaning everyone gets it intended to lay out another scenario?

Comment by dagon on ozziegooen's Shortform · 2020-06-25T18:11:19.128Z · score: 3 (3 votes) · LW · GW
It seems to me very precarious to expect that society at large to only work because of a handful of accidental and temporary externalities.

It seems to me very arrogant and naive to expect that society at large could possibly work without the myriad of evolved and evolving externalities we call "culture". Only a tiny part of human interaction is legible, and only a fraction of THAT is actually legislated.

Comment by dagon on Don't punish yourself for bad luck · 2020-06-24T22:32:38.392Z · score: 12 (5 votes) · LW · GW

I agree with much of your reasoning, but come to the opposite conclusion. For _many many_ things, you can't distinguish between luck, bad modeling (incorrect desires for outcome), or bad behavior (incorrect actions toward the desired outcome). Rewarding effort makes up for luck, but magnifies other failures.

So don't try. Reward on good outcome, punish on bad outcome. Sure, the agents will "learn" incorrectly on topics where luck dominates. Make up for it with repetition - figure out good reference classes so you can learn from more outcomes. Enough instances will smooth out the luck, leaving the other factors.

Or maybe you'll actually learn to be luckier. We could surely use a Teela Brown to protect us right about now...
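
To make the "enough instances will smooth out the luck" claim concrete, here's a minimal simulation sketch (the agents, skill values, and noise level are invented for illustration, not taken from the post): rewarding purely on outcomes mis-ranks agents after a single trial, but the outcome-average converges to the true skill ordering as trials accumulate.

```python
# Illustrative sketch: outcome = skill + a lot of luck. With enough repetitions,
# averaging outcomes recovers the underlying skill ranking despite the noise.
import random

random.seed(0)

def outcome(skill, luck_weight=2.0):
    """One noisy outcome: partly skill, mostly luck."""
    return skill + luck_weight * random.gauss(0, 1)

# Hypothetical agents with hidden "true" skill levels.
skills = {"agent_a": 0.2, "agent_b": 0.5, "agent_c": 0.8}

for n_trials in (1, 10, 1000):
    averages = {name: sum(outcome(s) for _ in range(n_trials)) / n_trials
                for name, s in skills.items()}
    # n_trials=1 often scrambles the ranking; by n_trials=1000 the
    # averages reliably match the true skill ordering.
    print(n_trials, {k: round(v, 2) for k, v in averages.items()})
```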

Comment by dagon on steven0461's Shortform Feed · 2020-06-24T21:28:23.126Z · score: 2 (1 votes) · LW · GW

I don't think there's a general solution. Eliezer's old quote "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." applies to social movements and discussion groups just as well. It doesn't matter if you're on the right or the wrong side - you have attention and resources that the war can use for something else.

There _may_ be an option to be on neither side, and just stay out. Most often, that's only available to those with no useful resources, or to those who can plausibly threaten both sides.

Comment by dagon on How do you Murphyjitsu essentially risky activities? · 2020-06-23T21:42:50.743Z · score: 3 (2 votes) · LW · GW
I've been thinking about how to apply this process to projects in a professional context (rather than a "self-help" context I guess) and in many cases you face costly tradeoffs regarding derisking mitigations. Also, sometimes your project may just be a big bet.

This is true (or should be) for personal improvements as well as professional - many require sacrifice (of time, often of activities with friends or other desirable experiences), and many have uncertain returns. At most companies, professional goal-seeking requires coordination with more people, and sometimes handling misalignment of beliefs by changing the framing of the steps for different audiences, so _is_ more complex than personal goals.

But you really don't change the high-level process. You _do_ iterate faster - figure out sub-goals and measurements that will get you to step 3 every week, not every few months. (just considered now: maybe it's a constant of 3 person-months of review). And every 3-10 iterations, start from 1 rather than 3: ensure it's still the right goal, and tweak (or abandon) your overall plan.

Comment by dagon on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T20:57:53.891Z · score: 2 (1 votes) · LW · GW

Oops, meant to cancel, rather than post. I don't agree, and it's probably not useful to debate.

s/for the organization/for many influential members of the organization/

Yes, they _can_ be manipulated and threatened in this way. But not easily, and not without pretty significant commitment on the part of a coordinated and resource-heavy attacker. Below the threshold of "

Comment by dagon on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T16:06:51.597Z · score: 4 (2 votes) · LW · GW

Your post seemed (and still seems) to be claiming that retaliating for name publication is so significantly different from retaliating for criticism that observers will probably understand.

I can't tell if you think that, or if you think retaliating is counterproductive, and polite requests are the way to go.

Comment by dagon on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T16:01:50.101Z · score: 4 (3 votes) · LW · GW

They couldn't. Retaliating, or threatening to retaliate, is simply an incorrect avenue to address this behavior. The NYT, and most observers, will immediately discount all opinions from a direction that contains members who behave this way.

Retaliation or threats apply a wrong theory of mind/decisions to the organization. It's not an individual, and doesn't feel fear. It's not irrationally averse to your anger or actions, and it VERY rationally will decide whether to ignore or crush you, with no thought at all to giving in or reconciling.

Comment by dagon on You have become the supreme dictator of the United States. · 2020-06-22T20:56:40.158Z · score: 2 (1 votes) · LW · GW
would you then run for president under the new voting system

Absolutely not - that's the whole point of the pension! I do not have the skillset, the willingness to take fools seriously, nor the ability to shut up on unimportant-but-controversial topics, which are required for public office.

Comment by dagon on You have become the supreme dictator of the United States. · 2020-06-22T19:13:21.275Z · score: 2 (1 votes) · LW · GW

This makes no sense. I have absolute authority, but everything else stays the same? That's not a reachable universe.

I think if it did happen, I'd probably do some work to set up a decent election system (likely parliamentary, with proportional voting), then abdicate to that more sustainable system. I'd probably also set a generous pension for former dictators.

Comment by dagon on Fight the Power · 2020-06-22T18:49:19.125Z · score: 2 (1 votes) · LW · GW

Hmm. This is a little categorical for my tastes.

I care deeply about individual injustices and the systemic features which encourage/allow them (in pretty direct proportion to severity and frequency; I care about the system mostly because I care about the individual behaviors and impacts).

I _also_ care about orthogonal (and only-slightly-correlated) things, and need to balance my energy between object-level and system-level disagreements. Fighting to make the battle lines match my preference is more exhausting and less effective than picking my battles and figuring out how to thrive within the evolving landscape.

People will experience anger, anxiety, fear, and then submit to those with power unless they protect their independence.

Agree with the first half, disagree that "protect their independence" is always effective at all, let alone most effective. I'm a huge fan of Gurri, and it seems clear that power is shifting, not just to a different set of elites, but away from elites and toward less-well-described models.

This means that I can be _simultaneously_ supportive and afraid. The mob is a frenemy - I agree with them more than the previous elites, but I'm not sure humanity is actually capable of self-rule, so I really don't know if the resulting power equilibrium will end up actually preferable. As such, I'm going for change - the current hill-climb hit a maximum, let's jump somewhere else. Maybe it'll suck, and take a few generations to try something else. It _probably_ won't do any more permanent damage than the track we were on before.

Note that this has been true of every regime in history - power comes from a mix of trust and threat. The only "safe" action is to give up all your power, which is unacceptable to many of us. Without everyone (including me) being both smarter and more conscientious than we are now, this is with us for a long time yet.

Comment by dagon on Are Humans Fundamentally Good? · 2020-06-22T04:37:31.364Z · score: 2 (1 votes) · LW · GW
Also, is that a Lovecraft reference in your username?

Indeed. ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn.

I've been using it too long to change now, but it's kind of imperfect - it has enough historical and biblical usage that it's not available on most sites, and far less clear than yours that it's a reference rather than a given name. I do have a friend who tried to name his son "Yog Sothoth", but he was overruled.

Comment by dagon on Are Humans Fundamentally Good? · 2020-06-22T01:57:12.192Z · score: 5 (3 votes) · LW · GW

Humans aren't fundamentally anything. We're highly variable, complex calculators, without much input validation on what we learn about or how we react to novel stimuli.

Comment by dagon on ‘Maximum’ level of suffering? · 2020-06-21T21:59:11.196Z · score: 4 (2 votes) · LW · GW

Worrying that you might experience such pain/sorrow/disutility, but not worrying that you might miss out on orders of magnitude more pleasure/satisfaction/utility than humans currently expect is one asymmetry to explore. The other is worrying that you might experience it, more than worrying that trillions (or 3^^^3) ems might experience it.

Having a reasoned explanation for your intuitions to be so lopsided regarding risk and reward, and regarding self and aggregate, will very much help you calculate the best actions to navigate between the extremes.

Comment by dagon on When is it Wrong to Click on a Cow? · 2020-06-21T04:08:09.890Z · score: 18 (6 votes) · LW · GW

One possible motivation for your intuition: which one brings potential value to your life (makes a better ally)? The musician, I'd expect, with the gamer less so, and the wirehead not useful to you at all.

Comment by dagon on ‘Maximum’ level of suffering? · 2020-06-21T04:05:53.958Z · score: 2 (1 votes) · LW · GW

Interesting intuition. How do you feel about modifying humans (or yourself) to experience more pleasure? If they're not symmetrical, why not?

Comment by dagon on G Gordon Worley III's Shortform · 2020-06-20T17:58:16.445Z · score: 4 (2 votes) · LW · GW

"love" is poorly-defined enough that it always depends on context. Often, "unconditional love" _is_ expected to be conditional on identity, and really should be called "precommitment against abandonment" or "unconditional support". But neither of those signal the strength of the intent and safety conferred by the relationship very well.

I _really_ like your expansion into non-identity, though. Love for the real state of the universe - simultaneously desiring better futures and accepting whichever future actually obtains - is a mindset I strive for.

Comment by dagon on ‘Maximum’ level of suffering? · 2020-06-20T17:44:35.685Z · score: 4 (2 votes) · LW · GW

This is an excellent question to be exploring - what are the bounds of utility, and how does it behave at the extremes? Along with how to aggregate (when is two beings suffering worse than one being suffering more intensely?).

But you can explore as easily on the other side, and that's both more pleasant and more likely to help you notice ways to improve things by a small amount. What's the maximum level of satisfaction? Why isn't wireheading a good solution?

Comment by dagon on Innovation and Dependence · 2020-06-19T20:50:10.760Z · score: 2 (1 votes) · LW · GW

I, for one, do not miss my Motorola flip-phone nor my 0.3Kbps modem. My life is greatly enhanced by many innovations and purchases over the last few {centuries, decades, years (and even months - buying a new 4K streaming device was minor but real)}.

It's simply not true that things haven't improved, or that objects and purchases _can't_ improve your life. You don't buy a car to go to work to pay for the car. You buy a car to drive to the mountains and go camping, and you accept that you need to go to work to pay for that.

You're right that many don't seem to pay attention to the choices they're making, and they fail to optimize the places they're putting effort and getting personal value. Almost all nerds undervalue social and relationship effort, and overvalue measurable earning and spending. Don't do that, but also don't artificially limit yourself from very valuable things just because they require money.

Comment by dagon on Memory is not about the past · 2020-06-19T20:33:21.596Z · score: 4 (2 votes) · LW · GW

I downvoted because it was quite long, and because it felt like it was trying to be persuasive rather than informative. It took me multiple readings to determine that there's nothing in there for me to update on. Parts seemed obvious, and parts seemed ... irrelevant to a model of cognition or decision-making. I've removed my downvote because the comment by cogitoprime was fascinating.

Edit: for clarity, I don't mean to say that long posts are bad, only that they need to be well-structured and either indexed or summarized so readers can identify what parts are relevant to read first, in order to determine what claims or models are being proposed.

Comment by dagon on Bob Jacobs's Shortform · 2020-06-19T20:16:22.282Z · score: 5 (3 votes) · LW · GW
Surely you can use your own intuitions?

Of course. Understanding and refining your intuitions is a critical part of this ("this" being rational goal definition and pursuit). And the influence goes in both directions - measurements support or refute your intuitions, and your intuitions guide what to measure and how precisely. I'll argue that this is true intrapersonally (you'll have conflicting intuitions, and it'll require measurement and effort to understand their limits), as well as for sub- and super-Dunbar groups.

I don't think I understand "vibing" well enough to know if it's any different than simply discussing things at multiple different levels of abstraction.

Comment by dagon on Bob Jacobs's Shortform · 2020-06-19T15:42:07.346Z · score: 2 (1 votes) · LW · GW

Without measurements, it's hard to know that "the thing" you're optimizing for is actually a thing at all, and nearly impossible to know if it's the same thing as someone else is optimizing for.

Agreed that you shouldn't lose sight that the measurements are usually proxies and reifications of what you want, and you need to periodically re-examine which measures should be replaced when they're no longer useful for feedback purposes. But disagreed that you can get anywhere with no measurements. Note that I think of "measurement" as a fairly broad term - any objective signal of the state of the world.

Comment by dagon on Mod Notice about Election Discussion · 2020-06-17T18:37:18.478Z · score: 0 (2 votes) · LW · GW

I think this is a bad compromise. If the motivating example is political, your abstraction will either bring it in by association, or be far less compelling to discuss. Just keep politics off the site, and that includes abstractions whose best examples are political.

If it's a more general abstraction, it'll be easy to find better examples.

Comment by dagon on Creating better infrastructure for controversial discourse · 2020-06-17T17:04:13.883Z · score: 8 (5 votes) · LW · GW

There is no general solution to high-quality large-membership deeply-unpopular discussion. Quality requires discussion filtering, and unpopular requires membership filtering, both of which require long-term identity (even if pseudonymous, the pseudonym is consistent and _will_ leak a bit into other domains). Important and unpopular topics will be attacked by mobs of semi-coordinated actors, sometimes (depending on topic and regime) supported by state-level agencies.

Rational discussion far outside the https://en.wikipedia.org/wiki/Overton_window is indistinguishable from conspiracy, and part of the right answer is to just keep such topics off of the public well-known and somewhat respectable fora. "Politics is the mind-killer" may not be exactly right, but politics is the site-killer is a worse slogan while being more true.

We _do_ need places to discuss such things, and in fact they exist. But they're smaller, more diverse and distributed, harder to find, and generally somewhat lower quality (at least the ones that'll let me in). They are not open to everyone, fearing infiltration and disruption. They're not advertised on the more legit sites, for fear of reputational taint (to the larger site). And they tend to drift towards actual craziness over time, because they don't have public anchors for the discourse. And also because there is a very real correlation between the ability to seriously consider outlandish ideas and the propensity to over-focus on the useless or improbable.

Comment by dagon on Bob Jacobs's Shortform · 2020-06-17T16:46:57.265Z · score: 4 (2 votes) · LW · GW

IMO, it's better to identify the set of metrics you want to optimize on various dimensions. Trying to collapse it into one loses a lot of information and accelerates Goodhart.

GDP is a fine component to continue to use - it's not sufficient, but it does correlate fairly well with commercial production of a region. Adding in QALs doesn't seem wrong to me (it's annual, so the "years" part is automatic; you only need to report quality-adjusted lives to compare year-to-year).
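
As a rough illustration of keeping the dimensions side by side rather than collapsing them into one composite score (the numbers and field names here are invented for the example):

```python
# Sketch: track GDP and quality-adjusted lives as separate yearly metrics and
# compare year-over-year changes per dimension, instead of one composite score.
from dataclasses import dataclass

@dataclass
class YearReport:
    year: int
    gdp_billions: float            # commercial production of the region
    quality_adjusted_lives: float  # annual measure, so "years" is implicit

reports = [YearReport(2018, 1000.0, 4.90e6),
           YearReport(2019, 1030.0, 4.95e6)]

for prev, cur in zip(reports, reports[1:]):
    print(cur.year,
          f"GDP {cur.gdp_billions / prev.gdp_billions - 1:+.1%}",
          f"QAL {cur.quality_adjusted_lives / prev.quality_adjusted_lives - 1:+.1%}")
```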

Comment by dagon on Bucky's Shortform · 2020-06-17T15:48:26.848Z · score: 4 (2 votes) · LW · GW

Fully agreed. Elections are part of the ceremony that keeps populations accepting of governance. Things are never completely one or the other, though - in the better polities, voting actually matters as a signal to the government as well.

Comment by dagon on Is utilitarianism the result of a Conflict Theory singularity? · 2020-06-16T15:28:23.553Z · score: 3 (2 votes) · LW · GW

This is full of awesome ideas, somewhat shackled by trying to fit them all into conflict vs mistake theory. Power and persuasion are many-dimensional on both timeframe and application, and a binary classification is useful only for a very coarse tactical choice.

It's an interesting experiment to apply the classification to animals. If you think conflict vs mistake is about judgement and acceptance, you'll apply mistake theory to animals and forgive them because they know no better. If you think it's about tactics to improve things, you'll apply conflict theory and behaviorally condition them (or physically restrain them) to do what you prefer. Or maybe it's just obvious that mistake theory applies to dogs and conflict theory to cats.