Posts

Laurie Anderson talks 2021-12-18T20:47:01.484Z
For moderately well-resourced people in the US, how quickly will things go from "uncomfortable" to "too late to exit"? 2020-06-12T16:59:56.845Z
Money isn't real. When you donate money to a charity, how does it actually help? 2020-02-02T17:03:04.426Z
Dagon's Shortform 2019-07-31T18:21:43.072Z
Did the recent blackmail discussion change your beliefs? 2019-03-24T16:06:52.811Z

Comments

Comment by Dagon on Newcomb's Lottery Problem · 2022-01-27T20:04:10.121Z · LW · GW

[ epistemic status: commenting for fun, not seriously objecting.  I like these posts, even if I don't see how they further our understanding of decisions ]

The range is a thousand numbers btw, it includes 1 and 1000
...
larger than 1 and smaller than or equal to 1000.

We're both wrong.  It includes 1000 but not 1.  Agreed with the "whatever" :)

I don't see how precommitting to one thing and then doing the other, thereby fooling Omega is possible

That's the problem with underspecified thought experiments.  I don't see how Omega's prediction is possible.  The reasons for the 99% accuracy matter a lot.  If she just kills people who are about to challenge her prediction, then one-boxing in 1 and two-boxing in 2 is right.  If she's only tried it on idiots who think their precommitment is binding, and yours isn't, then tricking her is right in 1, and publicly two-boxing is still right in 2.

BTW, I think you typo'd your description of one- and two-boxing.  Traditionally, it's "take box B or take both", but you write "take box A or take both".  

Comment by Dagon on Newcomb's Lottery Problem · 2022-01-27T18:03:14.147Z · LW · GW

I took the original "ultimate" post as mostly a joke - there didn't seem to be any interesting theoretical implications beyond the standard Newcomb's problem interactions between causality and decision theory.  This doesn't seem to make the joke any funnier, nor demonstrate any confusions not already identified by simpler thought experiments.

What am I missing?  (edit: this comment came out way more negative than I intended, sorry!  This question is legitimate, and I'd like someone to ELI5 what new conundrum this adds to decision-theory or modeling of decision causality).

Boring analysis:
Before you play the game, but after you learn that you will play the game: the EV of making Omega predict you'll one-box is $1000 (or $1,001,000 if you can make Omega mis-predict), because you can never win the lottery.  Making Omega predict you'll two-box is worth $1000 + $4M * 168/998 (there are 168 primes in the range 2..999) ≈ $674,346.  In Problem 2, that's $1,347,693.  So Problem 2 is simple: just two-box and let everyone know it.
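A quick Python sketch to check the prime count and the two-box EV (a sanity check only, assuming the payoffs exactly as described above):

```python
# Sanity check of the EV arithmetic: $1000 from the boxes plus a $4M lottery
# that pays out when the hidden number (uniform over 2..999) is prime.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

prime_count = sum(is_prime(n) for n in range(2, 1000))  # 168 primes in 2..999
ev_two_box = 1000 + 4_000_000 * prime_count / 998       # about $674,346.69
print(prime_count, ev_two_box)
```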

Problem 1 hinges, like most Newcomb problems, on whether Omega is WRONG in your specific case.  Precommitting to one-box, then actually 2-boxing is optimal, and perhaps possible in the world where Omega broadcasts her prediction in advance.  It'll depend on the specifics of why she has a 99% success rate.

In the situation given, where you already see the composite X and Y, two-box if you can.  

Comment by Dagon on Excessive Nuance and Derailing Conversations · 2022-01-24T00:21:58.918Z · LW · GW

I'm not sure how to use this advice/observation.  I think the purposes for discussion, style and knowledge of participants, and social expectations of that particular setting vary pretty widely, and other than "make the implicit explicit" in terms of getting what you want from the interaction, there's too much existing nuance to generalize advice like this.

There are specific conversations where allowing the conversation to meander and allowing the conversational stack to grow and blend are OK.

I think this exemplifies my point.  Except I think it's the majority of conversations, and in my experience, the default.  The exceptions are conversations explicitly trying to establish agreement on some topic or some other agreed purpose.

Comment by Dagon on There is a line in the sand, just not where you think it is · 2022-01-22T16:57:13.698Z · LW · GW

I think there are two mistakes in your friend's model.  The first is simple over-correction - seeing one instance and believing that's universal.   The second is over-simplification, which is what you're pointing at with this post.  People are complex, and most social decisions are heavily context-dependent.  Some people get away with things that others don't.  The very concept of "norm" is named for "normal", and is about the median/center of a set of behaviors.  Forgetting that people are actually on many distributions, which can have pretty long tails, is the error.

Comment by Dagon on Guidelines for cold messaging people · 2022-01-21T20:17:57.439Z · LW · GW

Thanks.  My intent was to dissuade people from taking the post as "these are conditions you should cold-contact people on LW" (which is how I interpreted it), by pointing out that I'd prefer not to be contacted at all, even with the recommended information.  

Comment by Dagon on MackGopherSena's Shortform · 2022-01-20T22:00:01.419Z · LW · GW

The concept of cost requires alternatives.   What do you cost, compared to the same universe with someone else in your place?  Very little.  What do you cost, compared to no universe at all?  You cost the universe.

Comment by Dagon on What's Up With Confusingly Pervasive Consequentialism? · 2022-01-20T21:25:42.528Z · LW · GW

I wonder if the confusion isn't about implications of consequentialism, but about the implications of independent agents.  Related to the (often mentioned, but never really addressed) problem that humans don't have a CEV, and we have competition built-in to our (inconsistent) utility functions.

I have yet to see a model of multiple agents WRT "alignment".  The ONLY reason that power/resources/self-preservation is instrumental is if there are unaligned agents in competition.  If multiple agents agree on the best outcomes and the best way to achieve them, then it doesn't matter which agent does what, or even which agent(s) exist.  

Fully-aligned agents are really just multiple processing cores of one agent.  

It's when we talk about partial-alignment that we go off the rails.  In this case, we should address competition and tradeoffs as actual things the agent(s) have to consider.

Comment by Dagon on Choosing battles (on the Internet) · 2022-01-20T18:03:36.952Z · LW · GW

obXKCD already linked, so I don't need to do that, good.  I like that you're coming to the same conclusion from a different direction: you don't want to improve their models or "fix" the wrongness on behalf of someone else, you just want to learn and improve your own model (ok, probably all of the above, but focusing on internal knowledge).

> my arguing is a limited resource.

This generalizes.  Your thinking is a limited resource.  Some discussions on the internet (or in person, for that matter) are more valuable than the next-best thing you could do.  Many are not.  

Of course, there's a search cost, too - the debate in front of you may "waste" more time than reading a good book or finding a better forum/topic to join, or building a new toy on your rPi or whatever else you could be doing.  But it doesn't "waste" time in figuring out what to do, or where/what the better topics are.  I don't have a good solution for that problem, other than to notice when your current activity is not satisfying and consider the alternatives.

Comment by Dagon on Guidelines for cold messaging people · 2022-01-20T17:54:43.001Z · LW · GW

I'm surprised that this is a controversial comment - 8 votes for a net of 0!  

Comment by Dagon on Guidelines for cold messaging people · 2022-01-18T15:01:39.566Z · LW · GW

Ah, that's important context.  Putting your contact info on your public website is an invitation to be contacted.  It's probably best to specify there (perhaps on a "contact me" page, which has your info AFTER this) under what conditions you'd like to connect.

Comment by Dagon on How to Build New Countries à la 1729? · 2022-01-18T08:12:37.555Z · LW · GW

Only skimmed, because it didn't seem worth a lot of time at first glance.  I didn't see any acknowledgement or plan for the base purpose of a country: to enforce some level of cooperation within the borders and to keep other nations, criminal groups, or just individuals from taking members' stuff or lives.

In other words, where does the force come from that defines and preserves property and bodily rights?  

Comment by Dagon on Is there a good way to read deep into LW comment histories on mobile? · 2022-01-17T22:40:23.629Z · LW · GW

The experience on mobile is bad enough (clicking brings up windows that can't be easily dismissed, weird bugs where expanding a comment marks others as read, etc.) that this is a site I just don't read except on desktop.

Comment by Dagon on Guidelines for cold messaging people · 2022-01-17T22:34:02.891Z · LW · GW

Any generalizable rules you can think of about whom better not to cold message at all?

Yes.  Contact people you see posting on sites with a norm for individual contact on random topics (I don't know what those are, but I don't think it's LW).  Contact people whose profile description asks you to contact them.  Contact people if they post or comment that they'd like to be contacted.

Judgement call to contact people you have a comment exchange with that you want to explore further (I'd argue this isn't "cold").

Otherwise, leave them alone.

You can, of course, solicit contacts by setting up your profile and posting or shortform-ing that you'd like to be contacted.  That's way better than reaching out yourself to people you don't have any reason to believe want that.

 

Really, e-mail or DM on a site is ALMOST NEVER the right way to initiate "cold" contact.  That's what posts are for.

Comment by Dagon on Being the Hero is hard with the void · 2022-01-17T20:32:11.128Z · LW · GW

A hero is someone who suffers for a sympathetic cause.  The suffering can be abstracted to 'takes risk' or 'sacrifices something', but the sympathy is mandatory.  If the audience doesn't think the cause is "good", that's not a hero, it's a villain.  

It doesn't require success, or even reasonable hope of success, only the suffering and the sympathetic cause.  

Don't be a hero.  Instead, do what gives you the best world.

Comment by Dagon on Guidelines for cold messaging people · 2022-01-17T20:24:29.929Z · LW · GW

For me, I'd add 0: Don't.  A public note or post that something's available for me to opt into is fine (in related forums), but otherwise leave me alone unless I've explicitly asked to be contacted.

Comment by Dagon on Quinn's Shortform · 2022-01-17T19:58:03.859Z · LW · GW

Why are you specifying 100 or 0 value, and using fuzzy language like "acceptably small" for disvalue?

Is this based on "value" and "disvalue" being different dimensions, and thus incomparable?  Wouldn't you just include both in your prediction, and run it through your (best guess of) utility function and pick highest expectation, weighted by your probability estimate of which universe you'll find yourself in?   

Comment by Dagon on Reflections on Connect Developers · 2022-01-16T21:59:05.749Z · LW · GW

I'm not actually sure where you would go if you were looking for the latter type of conversation. Anything come to mind for you?

Nope, but then I'm not looking for this, and I can't quite identify WHY someone would look for this (with no geographic or in-person possibilities).   

I was thinking that you can ballpark it and assume that something like 50% of the matches end up having a video call, and 1% of those end up being friends

This is the principal thing in an early-stage business plan / sketch.  Write down your assumptions and unknowns, and figure out how to validate your beliefs.  For most of these kinds of ideas, you should be optimizing for 5-10 attempts, and getting as much information from each.  "Fail fast" is how this is often stated, but that's not complete.  "Fail legibly" would be better advice.  Generate hypotheses about why people aren't loving it, and find ways to test those ideas.

The app feels extremely side-project-y, so I wouldn't expect people to be worried about data harvesting.

You don't just need to assert that you're not misusing data; you need to overcome the expectations that have been set by all the crappy chat/discussion/etc. sites in the world.

My prior is pretty high that SOMEONE is harvesting any data I give out.  Whether it's you, or the stalkers who've created an account and filled out every possible profile on your survey, or just the fact that your site attracts people and I don't like most people, I would only try it with burner info.  And most people won't bother with that.

Comment by Dagon on Reflections on Connect Developers · 2022-01-16T16:58:35.289Z · LW · GW

Upvote for the post-mortem, and great happiness and congratulations for "did a thing"!

I think the "write a business plan already" is absolutely key here.  And really, you often only need a business sketch, not a plan.  What customers/developers have this need to connect, and why is this method any better than the hundreds of other community and discussion sites that exist?  

What IS success for this?  Number of surveys taken?  Number of initial messages sent?  Actual calls made?  Long-term friendships formed that still stay in contact after 10 years?  It's easy to tell if it's not successful if you don't get many page views or initial uses.  It's not easy to tell if part of it is successful and needs expansion, or if it just doesn't have as big an audience as you think.

Some people are really averse to talking to strangers on the internet. Other people are very eager to do so. Most are somewhere in between. But since there are so many developers out there, I only need that light blue group

That light blue group may be a VERY thin tail.  I think "chatting with strangers over the internet" is probably NOT attractive to the vast majority of people, and software developers even less likely to want that.  You also have the problem that this thing is poisoned by a few bad actors, and that happens SO frequently in other domains that it's a fair assumption that if I give any contact information to a stranger, I'll regret it.  

I may be wrong, and there's a significant market for this.  But I'd expect you have to solve the problem of initial anonymous/blockable contact and reputation before your target market will even try it.    

Comment by Dagon on School Daze · 2022-01-14T19:05:12.793Z · LW · GW

I think the biggest cause of societal decay is the fact that we’ve lost the ability to play to win on any game that can be criticized easily.

Comment by Dagon on davidad's Shortform · 2022-01-14T18:43:18.620Z · LW · GW

Why is that not Bayesian? The decision to bet is going to include a term for your counterparty’s willingness to bet, which is some evidence.

One way to overcome this in thought experiments is to frame it as a line with no spread, and no neutral option. At, say, 150k:1, which side would you bet? Then adjust to where you'd SET a line where you had to accept a wager on either side.
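To spell out the arithmetic behind that framing (a rough illustration): at 150,000:1, taking the "against" side is only favorable if your credence in the event is below about 1/150,001, roughly 0.0007%, and the "for" side only if it's above that.  So a line you'd have to accept on either side pins your stated probability to essentially that number, rather than just bounding it.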

Comment by Dagon on Tips for productively using short (<30min) blocks of time · 2022-01-10T20:15:25.343Z · LW · GW

A few things that can help (which I do sometimes, but sometimes do just "waste" the interstitial periods).

  • Keep multiple task lists by granularity, or keep entries on your task list that can be done (or worked on) in short time periods with low cost to switch in or out of.
  • ABR: Always. Be. Reading/Researching.  15 minutes is enough to remove 1-5 browser "read later" bookmarks.   Or enough to read a few more pages of my current novel or lightweight non-fiction.

Comment by Dagon on adamzerner's Shortform · 2022-01-10T20:09:38.999Z · LW · GW

I remember this confusion from Jr. High, many decades ago.  I was lucky enough to have an approachable teacher who pointed me to books with more complete explanations, including the strong nuclear force and some details about why inverse-square doesn't apply to it, letting it overcome EM at very small distances, where you'd think EM would be strongest.

Comment by Dagon on Uncontroversially good legislation · 2022-01-10T04:15:27.842Z · LW · GW

I very much like this line of thinking, but I'm curious what you think are the reasons that these "uncontroversial" good laws haven't been passed yet.  Laws are somewhat similar to markets, in that they're the visible result of competing hidden desires, and to some extent the Efficient Market Hypothesis applies to politics.  If it's easy and universally beneficial, it's already done.  (note: the objections to EMH still apply, too: it can take a long time, and there's a LOT of irrationality and inefficient friction that opposes it).

Most of the proposals have some group who benefits from the "bad" current equilibrium.  And they probably care about it more than you or the politician, so their donations on their specific topic outweigh your preference on that topic.

Comment by Dagon on AMA: I was unschooled for most of my childhood, then voluntarily chose to leave to go to a large public high school. · 2022-01-07T17:23:08.737Z · LW · GW

An important factor that often goes unconsidered in these discussions is how much one size does not fit all.  Unschooling seems great for very smart kids with stable and conscientious parents.  How would you describe your background and capabilities, and do you think any aspect of your situation was more critical than others to your success?

At what point did you become involved in the decision to remain independent vs. changing to a more traditional schooling?  Especially around high-school age, what did you do to decide that continuing unschooling was better than other options, specifically for college prep?

Comment by Dagon on Signaling isn't about signaling, it's about Goodhart · 2022-01-07T17:05:23.332Z · LW · GW

Ah, that's a very helpful clarification, and a distinction that I missed on first reading.  I absolutely agree that a focus on the underlying behaviors and true good intents yields better results (both better signals and better outcomes, and most importantly, for many, is personally more satisfying) than trying to consciously work out the best signals.

I'm not sure it's feasible to totally forget the signaling portion of your interactions - knowing about it MAY be helpful in choosing some marginal actions or statements, and it's certainly valuable in interpreting others' statements and behaviors. But I'm with you that the vast majority of your life should be driven by actual good intents.

I kind of wonder how much of this (and other Goodhart-related topics) is a question of complexity vs legibility, seen vs unseen.  When you introspect and consider signaling, there's a pretty limited set of factors you can model and consider.  When you just try to do something, there's a lot of unspecified consideration that goes into it.

Comment by Dagon on Signaling isn't about signaling, it's about Goodhart · 2022-01-06T20:53:14.189Z · LW · GW

I don't think you're "dropping all effort" to signal, you're rather getting good at signaling, by actually being truthful and information-focused.   The useful signals are those which are difficult/expensive to fake and cheap(er) to display truthfully.  

When you say to Bob "Let me know what you need here to make a good decision. I'll see what I can do", THAT is a great signal, in that it's a request for Bob to tell you what further signals he wants, and an indication that you intend to provide them, even if they'd be difficult to fake.
 

I really like the insight that signaling is related to Goodhart - both are problematic due to the necessity of using proxies for the actual desired outcomes.  I don't think we can go so far as to say they're equivalent, just that signaling is yet another domain subject to Goodhart's law.

Comment by Dagon on How reasonable are concerns about asking for patents to be lifted for COVID-19 vaccines? · 2022-01-06T20:40:03.050Z · LW · GW

My impression is similar to yours - those who are advocating to break the patents are doing so because they don't like patents and this is a high-profile topic to use for that agenda.  I have seen no reasoned, logical path whereby the patents are the main concern.

Comment by Dagon on We need a theory of anthropic measure binding · 2022-01-04T16:58:54.090Z · LW · GW

I don't get a strong impression that you read the post. It was pretty clear about what rents the beliefs are paying.

I think I did, and I just read it again, and still don't see it.  What anticipated experiences are contingent on this?  What is the (potential) future evidence which will let you update your probability, and/or resolve whatever bets you're making?

Comment by Dagon on The Map-Territory Distinction Creates Confusion · 2022-01-04T16:52:31.750Z · LW · GW

Thanks for this.  I sometimes forget that "predicted experience" is not what everyone means by "map", and "actual experience" not what they mean when they say "territory".

Comment by Dagon on How an alien theory of mind might be unlearnable · 2022-01-04T16:39:38.884Z · LW · GW

I think the way around that issue is to bite the bullet - those things belong in a proper theory of mind.  Most people want to be conformist (or at least to maintain a pleasant-to-them self-image) more than they want to be rich.  That seems like a truth (lowercase t - it's culture-sensitive, not necessarily universal) that should be modeled more than a trap to be avoided.

Comment by Dagon on Quinn's Shortform · 2022-01-03T19:02:49.804Z · LW · GW

One thing to be careful about in such decisions - you don't know your own utility function very precisely, and your modeling of both future interactions and your value from such are EXTREMELY lossy.

The best argument for deontological approaches is that you're running on very corrupt hardware, and rules that have evolved and been tested over a long period of time are far more trustworthy than your ad-hoc analysis which privileges obvious visible artifacts over more subtle (but often more important) considerations.

Comment by Dagon on How an alien theory of mind might be unlearnable · 2022-01-03T18:25:00.973Z · LW · GW

I think "unlearnable" is a level removed from equally-important questions (like un-modelable, or sufficiently-precise model is not computable by Alice's means, even if she could learn/find it).  And that this treatment gives FAR too binary a focus on what a "theory of mind" actually does, and how precise and correct it is.

I think we have to start with "how good is Stuart Armstrong's theory of mind for humans (both for their best friend and for a randomly-selected person from a different culture and generation, and for a population making collective decisions)", as an upper bound for how good Alice's theories of mind can be at all.   Identify specific predictions we want from the theory, and figure out how we'll evaluate it for humans and for aliens.

I'd expect that it's not very good to start with, but still better than random and we can do some useful modeling.  For aliens, you're correct that our assumptions matter a lot (well, what matters is the aliens' specifics, but our assumptions control our modeling of our ability to model).  For aliens much more complex than ourselves, our theory of mind will be less accurate, as there's less of their computations that we can simplify to fit in our heads/computers. 

Comment by Dagon on Each reference class has its own end · 2022-01-03T05:35:19.028Z · LW · GW

Each reference class has its own end.

I initially read this as "you infer or choose reference classes based on what you want to predict", taking "end" to mean the purpose of the reference class.  But you're using "end" more literally, for reference classes that have a duration.  I think that's a little more suspect.

The reference class problem seems to apply equally to SIA and SSA, and in fact to non-anthropic probability as well.  Categories are simply not natural things, and there is no "correct" reference class.

Comment by Dagon on Can each event in the world be attributed conditions under which it occurs? · 2022-01-01T22:53:37.171Z · LW · GW

Well, the partitioning of time and space (or of experience, if you prefer) into "events" is already a modeling choice.   The underlying reality seems not to care - it's just a configuration of elementary particles which changes according to simple rules (but complicated state - there's really quite a lot of it).

So, yes, a modeler can attribute whatever they like to whatever they like.  Depending on the fidelity of the model, they may even be able to predict future abstractions over configurations (or localized configurations) with a limited precision.

Whether you call this "causality" or just "consistency" or "correlation" is up to you.

Comment by Dagon on What are sane reasons that Covid data is treated as reliable? · 2022-01-01T22:46:38.593Z · LW · GW

I figure my extended circle (including 2nd and 3rd degree connections who I've met or heard some detailed story about) is on the order of 10K people, spanning ages from young kids (mostly children or grandchildren of friends) to quite old (parents and grandparents of acquaintances).  I've heard plenty of reports of unpleasant vaccine reactions (including DAYS of downtime), and one or two where the reaction was bad enough that their doctor told them not to have the second shot.   ZERO that I'd call "serious medical problems".  

I'm aware of my bubble - this group is very strongly biased to educated wealthy(ish) Americans.  It does include people with chronic health problems, diagnosed and un-.  But I'd be shocked if there's a cluster where "many people" that someone knows were significantly harmed by the vaccines.  

I'm NOT shocked that reporting of such things is suspect - there are incentives (in both directions) to report, and it's really hard to be sure even in an individual case.  

Comment by Dagon on Taboo Truth · 2022-01-01T22:18:04.183Z · LW · GW

In addition to the risk that you'll feel bad about yourself for causing someone else to suffer for your truth, there's a significant risk that they'll do a much worse job than you, and make it easier for the truth to be denied.

Comment by Dagon on Hedging the Possibility of Russia invading Ukraine · 2021-12-31T23:55:38.566Z · LW · GW

https://twitter.com/RALee85/status/1476642007426777092

is a reasonable summary of what Russian military leaders might be thinking.  I'd say invasion with long-term troops is still unlikely, but some form of hot conflict seems to be brewing.

Comment by Dagon on We need a theory of anthropic measure binding · 2021-12-31T21:54:14.419Z · LW · GW

Wow.  Someone really didn't like this.  Any reason for the strong downvotes?

Comment by Dagon on We need a theory of anthropic measure binding · 2021-12-30T20:04:42.835Z · LW · GW

What is your probability that you're the heavier brain?

Undefined.  It matters a lot what rent the belief is paying.  The specifics of how you'll resolve your probability (or at least what differential evidence would let you update) will help you pick the reference class(es) which matter, and inform your choice of prior (in this case, the amalgam of experience and models, and unidentified previous evidence you've accumulated).

Comment by Dagon on DanielFilan's Shortform Feed · 2021-12-30T05:30:19.280Z · LW · GW

First, suppose that you think of all psychological descendants of your current self as 'you', but you don't think of descendants of your past self as 'you'.

I'm having trouble supposing this.  Aren't ALL descendants of my past selves "me", including the me who is writing this comment?  I'm good with differing degrees of "me-ness", based on some edit-distance measure that hasn't been formalized, but that's not based on path, it's based on similarity.  My intuition is that it's symmetrical.

Comment by Dagon on Internet Literacy Atrophy · 2021-12-27T23:16:25.615Z · LW · GW

Transcripts and playback at 1.5-2.5x speed (depending on the content) definitely help a lot, as does a ToC with timestamps.  You're right that it's higher bandwidth (in terms of information per second of participation), but I think my objection is that not all of that information is equally valuable, and I often prefer lower-bandwidth, more-heavily-curated information.

Hmm, I wonder if I can generalize this to "communication bandwidth is a cost, not a benefit".  Spending lots more attention-effort to get a small amount more useful information isn't a tradeoff I'll make most of the time.

Comment by Dagon on Why did computer science get so galaxy-brained? · 2021-12-27T16:49:13.789Z · LW · GW

Note: I'm not sure what "galaxy-brained" means, so I'm not sure what aspect of software eating the world (can't find a good free link; the phrase is from a 2011 WSJ op-ed by Marc Andreessen) surprises you.

I think it's mostly because we live in a mechanistic universe, and being able to calculate/predict things with a fair amount of precision is incredibly valuable for almost all endeavors.  I doubt it's path-dependent (doesn't matter who invented it or which came first), more that software is simply a superset of other things.

BTW, this ship has sailed, but it still bugs me when people mix up "computer science" with "software development and usage".  They're not the same at all.  I suspect you're conflating the science behind nuclear power with the actual industry of power generation in the same way, which also makes no sense.

Academic science and math research remains a tiny part of the knowledge-based workforce.  Industrial use and application of that science is where almost all of the action is.  THAT distinction has good reason - there are many many more people who can understand and use a new formulation of knowledge than who can discover and formalize one.

Comment by Dagon on Gerald Monroe's Shortform · 2021-12-27T16:29:05.118Z · LW · GW

Hmm.  Either I'm misunderstanding, or you just described a completely amoral optimizer, which will kill billions as long as it can't be held financially liable.  Maybe just take over the governments (or at least currency control), so it can't be financially penalized for anything, ever.

Also, you're adding paperclip-differential to money, so the result won't be pure money.  That's probably good, because otherwise this beast stops making paperclips and optimizes for negative realized costs on one of the other dimensions.

Comment by Dagon on Internet Literacy Atrophy · 2021-12-27T16:23:15.345Z · LW · GW

I despise videos when text and photos would do - I'm far too often in a noisy (or shared quiet) space, and I read so much faster than people talk.  I'm even more annoyed at videos that pad their runtime to hit ad minima or something - I can't take a quick scroll to the end to see if it's worthwhile, then go back and absorb what I need at my own pace.

I recognize that videos take less of the creator's time, and pay better.  So that's the way of the world, but I don't have to like it.  I mention this mostly as an explanation that I know I'm in the "old man yells at cloud" phase of my life, and a reason that I'm OK with some aspects of it.

Comment by Dagon on The Debtor's Revolt · 2021-12-27T01:57:58.087Z · LW · GW

I don't agree that the debt/capital distinction has changed all that much.  Personal debt (for a mortgage on a house you're occupying, or for student loans, or for other non-income-stream purposes) isn't much of a driver of economies or decisions at scale.  Corporate debt, as compared with share ownership, is still an important claim on future income/assets.  

I guess I'm saying the standard microeconomic model dominates - profits are good, and debt represents a reduction of future profits.  Investments (non-consumption lending or spending-with-expectation-of-future-returns) pretty much behave as you say.  Consumption spending never has.

It's not clear why you'd expect debt ties borrower and lender together more than share-based investment does.  Usually the opposite is claimed, and that matches my intuition as well.  

Or maybe I'm missing the "compared to what" in this claim.  Debt and shares are both "access to capital", and they have different result curves which lead to preferring one or the other for different risk profiles (and tax policy messes with this a lot).  But they're roughly the same in terms of how effectively the money can be deployed as capital.

edit: retracting this - I read the link and realized that the post's summary had very little context that tied it to the social justice roots of the concept.  I don't have a lot to say about the distribution of wealth, disconnected from use of capital to actually make stuff.  

Comment by Dagon on Gerald Monroe's Shortform · 2021-12-26T01:47:15.531Z · LW · GW

Put units on your equation.  I don't think H will end up being what you think it is.  Or, the coefficients R1-R4 are exactly as complex as the problem you started with, and you've accomplished no simplification with this.

Heck, even the first term, (quota - paperclips made) hand-waves where the quota comes from, and any non-linearity in making slightly more for next year being better than slightly fewer than needed this year.

Comment by Dagon on Gunnar_Zarncke's Shortform · 2021-12-25T02:56:39.416Z · LW · GW

Humans don't seem to have identifiable near-mode utility functions - they sometimes espouse words which might map to a far-mode value function, but it's hard to take them seriously.

What does stay the same

THAT is the primary question for a model of individuality, and I have yet to hear a compelling set of answers.   How different is a 5-year old from the "same" person 20 and 80 years later, and is that more or less different than from their twin at the same age?  Extend to any population - why does identity-over-time matter in ethical terms?

Comment by Dagon on Tough Choices and Disappointment · 2021-12-25T00:05:57.703Z · LW · GW

I'm not sure I subscribe to this conception of choice, nor disappointment.  To me, disappointment comes from a failure to believe that the world was as it is, and a sense of loss that something you'd hoped/believed is not true.  And that's not really connected to choices, which are generally about how to prioritize a multi-dimensional (perceived value over time, at least) future preference.

I think the "tough" choices are those where the net values of the options are similar (that is, it's non-obvious), but there's a large difference in the timing and/or certainty of the outcomes, so there's a perceived importance of the choice (that is, it's going to seem obvious in retrospect).    To that extent, we're agreed: these are decisions where you risk feeling like you've made the wrong one later.

I do completely agree with the overall advice - accept the world first, then make a decision un-influenced by your surprise or disappointment in that situation.  

Comment by Dagon on Gunnar_Zarncke's Shortform · 2021-12-24T17:08:21.279Z · LW · GW

For a VNM-agent (one which makes consistent rational decisions), the utility function is a precise description, not an abstraction.  There may be summaries or aggregations of many utility functions which are more abstract.

When an agent changes, and has a different utility function, can you be sure it's really the "same" agent?  Perhaps easier to model it being replaced by a different one.

Comment by Dagon on Dagon's Shortform · 2021-12-24T16:58:26.553Z · LW · GW

https://marginalrevolution.com/marginalrevolution/2021/12/hunting-smaller-animals-is-this-also-a-theory-of-early-economic-growth.html?utm_source=rss&utm_medium=rss&utm_campaign=hunting-smaller-animals-is-this-also-a-theory-of-early-economic-growth may explain a shift from stag-hunting to rabbits.  It's not a loss of cooperation; we killed all the stags.