Dagon's Shortform

post by Dagon · 2019-07-31T18:21:43.072Z · score: 3 (1 votes) · LW · GW · 21 comments

Comments sorted by top scores.

comment by Dagon · 2020-04-17T17:17:23.808Z · score: 9 (3 votes) · LW(p) · GW(p)

Reminder to self: when posting on utilitarianism-related topics, include the disclaimer that I am a consequentialist but not a utilitarian. I don't believe there is an objective, or even outside-the-individual, perspective on valuation or population aggregation.

Value is relative, and the evaluation of a universe-state can and will be different for different agents. There is no non-indexical utility, and each agent models the value of other agents' preferences idiosyncratically.

Strong anti-realism here. And yet it's fun to play with math and devise systems that mostly match my personal intuitions, so I can't stay away from those topics.

comment by MakoYass · 2020-04-19T00:16:33.002Z · score: 3 (2 votes) · LW(p) · GW(p)

I think I agree that there's no objectively true universal value aggregation process, but if you can't find (a method for finding) a very broad value aggregation system, then you can't have peace. You have peace insofar as factions accept the rulings of the system. Simply giving up on the utilitarian project is not really an option.

comment by Dagon · 2020-04-19T01:03:15.691Z · score: 3 (2 votes) · LW(p) · GW(p)

That's an interesting reason to seek Utilitarianism that I hadn't considered. Not "this is true" or "this makes correct recommendations", but "if it works, it'll be a better world". I see some elements of Pascal's Wager in there, but it's a novel enough (to me) approach that I need to think more on it.

I do have to point out that perhaps "you can't have peace" is the actual true result of individual experience. You can still have long periods of semi-peace, when a coalition is strong enough to convince others to follow their regime of law (Pax Romana, or many of today's nations). But there's still a lot of individual disagreement and competition for resources, and some individuals use power within the system to allocate resources in ways that other individuals disprefer.

I'm not sure if that is "peace" within your definition. If so, Utilitarian aggregation isn't necessary - we have peace today without a working Utility calculation. If not, there's no evidence that it's possible. It may still be a worthwhile goal to find out.

comment by Bob Jacobs · 2020-05-21T10:27:48.775Z · score: 3 (2 votes) · LW(p) · GW(p)

Shameless self-promotion but I think meta-preference utilitarianism [LW · GW] is a way to aggregate value fairly without giving up anti-realism. Also, be sure to check out the comment [LW(p) · GW(p)] by Lukas_Gloor as he goes into more depth about the implications of the theory.

comment by Dagon · 2020-05-21T15:45:36.560Z · score: 3 (2 votes) · LW(p) · GW(p)

Well, that's one way to lean into the "this isn't justified by fundamentals or truth, but if people DID agree to it, things would be nicer than today (and it doesn't violate preferences that I think should be common)". But I'm not sure I buy that approach. Nor do I know how I'd decide between that and any other religion as a desirable coordination mechanism.

comment by Bob Jacobs · 2020-05-21T16:08:00.307Z · score: 3 (2 votes) · LW(p) · GW(p)

I'm not sure what flavor of moral anti-realism you subscribe to so I can't really help you there, but I am currently writing a post which aims to destroy a particular subsection of anti-realism. My hope is that if I can destroy enough of meta-ethics, we can hopefully eventually make some sort of justified ethical system. But meta-ethics is hard and even worse, it's boring. So I'll probably fail.

comment by Dagon · 2020-05-21T18:07:06.502Z · score: 3 (2 votes) · LW(p) · GW(p)

Good luck! I look forward to seeing it, and I applaud any efforts in that direction, even as I agree that it's likely to fail :)

That's a pretty good summary of my relationship to Utilitarianism: I don't believe it, but I do applaud it, and I prefer most of its recommendations to those of a purer, more nihilistic theory.

comment by Bob Jacobs · 2020-05-21T20:02:29.765Z · score: 1 (1 votes) · LW(p) · GW(p)

I don't believe it, but I do applaud it and I prefer most of its recommendations to a more nihilistic and purer theory.

Wow, what a coincidence, moral nihilism is exactly the subject of the post I was talking about. Here it is btw. [LW · GW]

comment by Dagon · 2020-01-30T16:57:46.938Z · score: 6 (3 votes) · LW(p) · GW(p)

Types and Degrees of Maze-like situations

Zvi has been writing about topics inspired by the book Moral Mazes, focused on a subset of large corporations where politics rather than productivity is seen as the primary competitive dimension and life focus for participants. It's not clear how universal this is, nor what size and competitive thresholds might trigger it. This is just a list of other maze-like situations that may share some attributes with those, may have some shared causes, and, I hope, solutions.

  • Consumerism and fashion.
  • Social status signaling and competition.
  • Dating, to the extent you are thinking of eligibility pools and quality of partner, rather than individual high-dimensional compatibility.
  • Actual politics (elected and appointed-by-elected office).
  • Academic publishing, tenure, chairs.
  • Law enforcement. Getting ahead in a police force, prison system, or judiciary (including prosecutors) is almost unrelated to actually reducing crime or improving the world.
  • Bureaucracies are a related but different kind of maze. There, it's not about getting ahead, it's about ... what? It shares the feeling that it's not primarily focused on positive world impact, but maybe not other aspects of the pathology.

comment by jmh · 2020-01-30T17:28:51.197Z · score: 1 (1 votes) · LW(p) · GW(p)

We should also add law and legislation to that list.

comment by Dagon · 2020-01-30T18:12:58.652Z · score: 2 (1 votes) · LW(p) · GW(p)

Expand on that a bit - is this an employment/social maze for lawyers and legislators, or a bureaucratic maze for regular citizens, or something else? I'll definitely add policing (a maze for legal enforcers in that getting ahead is almost unrelated to stopping crime).

comment by jmh · 2020-01-31T14:41:41.940Z · score: 1 (1 votes) · LW(p) · GW(p)

An employment and social maze for both lawyers and legislators, and sometimes judges as well as police (as you mention). For lawyers, perhaps there is some asymmetry between the DA side and the defense in criminal cases; not sure there. Examples for DAs and police are the incentives to get convictions, and so to project an image of making things safe and producing justice, leading to some pretty bad behavior: framing an innocent person in some cases, or getting a conviction by illegal means (slippery slope situations), which then undermines the entire legal structure/culture of a rule-of-law society.

Similar for legislators -- election results tend to drive behavior, with lots of wiggle room for not actually working on delivering campaign promises, or for never actually having intended to push the advertised agenda. The incentives of being a tenured politician, particularly at the federal and state level, seem poorly aligned with serving the electorate (representing constituents' values rather than maximizing the politician's income, status, and the like).

There might be some dynamic related to rent-seeking results too. If rent-seeking is purely a structural reality (standard organizational costs facing special versus general interests), it should stabilize until the underlying cost structures change. But I'm not entirely sure that is the case; if we can make the case that rent-seeking is growing, then perhaps a moral maze would provide a better explanation of that growth, where no shift in underlying costs is able to explain it.

For the regular citizen, the bureaucracy certainly creates problems both for engaging with government and for engaging with politics and elections -- though I'm less sure this is a real maze problem.

comment by Dagon · 2019-08-05T17:03:54.662Z · score: 6 (3 votes) · LW(p) · GW(p)

My daily karma tracker showed that 3 comments got downvoted. Honestly, probably justified - they were pretty low-value. No worries, but it got me thinking:

Can I find out how I'm voting (how much positive and negative karma I've given) over time? Can I find that out for others? I'd love to be able to distinguish a downvote from someone who rarely downvotes from one cast by a serial downvoter.

I've been downvoting less recently, having realized how little signal is there, and how discouraging even small amounts can be. Silence or a comment is the better response for anything better than pure drivel.

comment by Raemon · 2019-08-05T19:49:10.473Z · score: 5 (2 votes) · LW(p) · GW(p)

Worth noting that by default the karma notifier doesn't show downvotes, for basically this reason.

I think it only makes sense to opt into seeing all downvotes if you're confident you can take this as information without it harming your motivation.

Meanwhile, there are benefits to actually downvoting things that are wrong, or getting undue attention, or subtly making the site worse. It matters how things get sorted for attention allocation, and relative karma scores in a discussion are important, among other things, so that people don't see bad arguments appearing highly endorsed and then feel motivated to argue against them (when they wouldn't ordinarily consider this a good use of their time).

So I think people should actually just downvote more, and not use the "show me all my downvotes" feature, which most people aren't using and you probably shouldn't worry about overmuch.

comment by Dagon · 2019-08-05T22:36:50.675Z · score: 3 (2 votes) · LW(p) · GW(p)

I opted in to seeing downvotes, and I think my curiosity (both about personal interactions on the site and about the general topic of mechanism design) will compel me to continue. I'm not worried about them - as I say, they were low-value comments, though not much different from others that get upvoted more (and IMO shouldn't be). Mostly it shows that low-number-of-voters is mostly noise, and that's fine.

My main point was wondering how I can monitor my own voting behavior (ok, and a little complaining that variable-point voting is confusing). I think I upvote pretty liberally and downvote very rarely, but I really wonder if that's true, and I wonder things like how many votes I give (on multiple comments) on the same post.

comment by Raemon · 2019-08-05T22:43:36.169Z · score: 4 (2 votes) · LW(p) · GW(p)

I do agree that being able to see metrics on your own voting is pretty reasonable, although not something I expect to make it to the top of our priority queue for a while.

comment by Dagon · 2019-08-07T16:43:47.705Z · score: 5 (3 votes) · LW(p) · GW(p)

"One of the big lessons of market design is that participants have big strategy sets, so that many kinds of rules can be bent without being broken. That is, there are lots of unanticipated behaviors that may be undesirable, but it's hard to write rules that cover all of them. (It's not only contracts that are incomplete...) " -- Al Roth

I think this summarizes my concerns with some of the recent discussions of rules and norm enforcement. People are complicated and the decision space is much larger than usually envisioned when talking about any specific rule, or with rules in general. Almost all interactions have some amount of adversarial (or at least not-fully-aligned) beliefs or goals, which is why we need the rules in the first place.

comment by Dagon · 2019-08-02T16:24:38.754Z · score: 4 (2 votes) · LW(p) · GW(p)

Al Roth interview discussing the historical academic path from simplistic game theory to market design, which covers interesting mixes of games with multiple dimensions of payoff. https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.33.3.118

comment by Dagon · 2019-07-31T18:21:43.389Z · score: 3 (2 votes) · LW(p) · GW(p)

Incentives vs agency - is this an attribution fallacy (and if so, in what direction)?

Most of the time, when I see people discussing incentives about LW participation (karma, voting, comment quality and tone), we're discussing average or other-person incentives, not our own. When we talk about our own reasons for participation, it's usually more nuanced and tied to truth-seeking and cruxing, rather than point-scoring.

I don't think you can create alignment or creative cooperation with incentives. You may be able to encourage it, and you can definitely encourage surface-cooperation, which is not valueless, but isn't what you actually want. Cf. variants of Goodhart's law - incentive design is _always_ misguided due to this, as visible incentives are _always_ a bad proxy for what you really want (deep and illegible cooperation).
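The Goodhart point can be made concrete with a toy simulation (an editorial sketch, not from the original comment; all numbers are illustrative): rank candidate behaviors by a noisy visible proxy for their true value, and the proxy-maximizing pick reliably captures less true value than the true optimum.

```python
import random

random.seed(0)

def trial(n=200, noise=2.0):
    """One round of selection: n candidate behaviors, each with a true
    value and a visible proxy that is only noisily correlated with it."""
    true_vals = [random.gauss(0, 1) for _ in range(n)]
    proxies = [t + random.gauss(0, noise) for t in true_vals]
    picked_by_proxy = max(range(n), key=proxies.__getitem__)
    picked_by_truth = max(range(n), key=true_vals.__getitem__)
    return true_vals[picked_by_proxy], true_vals[picked_by_truth]

# Average over many rounds: selecting on the visible proxy systematically
# captures less true value than selecting on the thing you actually want.
results = [trial() for _ in range(200)]
avg_proxy = sum(r[0] for r in results) / len(results)
avg_truth = sum(r[1] for r in results) / len(results)
print(f"true value when optimizing proxy: {avg_proxy:.2f}")
print(f"true value when optimizing truth: {avg_truth:.2f}")
```

The gap widens as the proxy gets noisier, which is the sense in which a visible incentive is a bad stand-in for deep, illegible cooperation.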

comment by Pattern · 2019-08-01T18:22:09.823Z · score: 1 (1 votes) · LW(p) · GW(p)

There are two sides to discussing incentives w.r.t. X:

  • Incentivize X/Make tools that make it easier for people to do X [1].
  • Get rid of incentives that push people to not do X[2] /Remove obstacles to people doing X.

Even if alignment can't be created with incentives, it can be made easier. I'm also curious about how the current incentives on LW are a bad proxy right now.

[1] There's a moderation log somewhere (whatever that's for?), GW is great for formatting things like bulleted lists, and we can make Sequences if we want.

[2] For example, someone made a post about "drive by criticism" a while back. I saw this post, and others, as being about "How can we make participating (on LW) easier (for people it's hard for right now)?"

comment by Dagon · 2020-05-20T21:20:18.911Z · score: 2 (1 votes) · LW(p) · GW(p)

Funny thought - I wonder if people were created (simulated/evolved/whatever) with a reward/utility function that prefers not to know their reward/utility function.

Is the common ugh field around quantifying our motivation (whether anti-economic sentiment, or just punishing those who explain the reasoning behind unpleasant tradeoffs) a mechanism to keep us from Goodharting ourselves?