Posts

What causes a decision theory to be used? 2023-09-25T16:33:36.161Z
Adversarial (SEO) GPT training data? 2023-03-21T18:55:01.330Z
{M|Im|Am}oral Mazes - any large-scale counterexamples? 2023-01-03T16:43:37.682Z
Does a LLM have a utility function? 2022-12-09T17:19:45.936Z
Is there a worked example of Georgian taxes? 2022-06-16T14:07:27.795Z
Believable near-term AI disaster 2022-04-07T18:20:16.843Z
Laurie Anderson talks 2021-12-18T20:47:01.484Z
For moderately well-resourced people in the US, how quickly will things go from "uncomfortable" to "too late to exit"? 2020-06-12T16:59:56.845Z
Money isn't real. When you donate money to a charity, how does it actually help? 2020-02-02T17:03:04.426Z
Dagon's Shortform 2019-07-31T18:21:43.072Z
Did the recent blackmail discussion change your beliefs? 2019-03-24T16:06:52.811Z

Comments

Comment by Dagon on Losing Faith In Contrarianism · 2024-04-25T21:25:53.822Z · LW · GW

I tend to read most of the high-profile contrarians with a charitable (or perhaps condescending) presumption that they're exaggerating for effect.  They may say something in a forceful tone and imply that it's completely obvious and irrefutable, but that's rhetoric rather than truth.  

In fact, if they're saying "the mainstream and common belief should move some amount toward this idea", I tend to agree with a lot of it (not all - there's a large streak of "contrarian success on some topics causes very strong pressure toward more contrarianism" involved).

Comment by Dagon on keltan's Shortform · 2024-04-25T20:33:58.373Z · LW · GW

Hmm.  I don't doubt that targeted voice-mimicking scams exist (or will soon).  I don't think memorable, reused passwords are likely to work well enough to foil them.  Between forgetting (on the sender or receiver end), claimed ignorance ("Mom,  I'm in jail and really need money, and I'm freaking out!  No, I don't remember what we said the password would be"), and general social hurdles ("that's a weird thing to want"), I don't think it'll catch on.

Instead, I'd look to context-dependent auth (looking for more confidence when the ask is scammer-adjacent), challenge-response (remember our summer in Fiji?), 2FA (let me call the court to provide the bail), or just much more context (5 minutes of casual conversation with a friend or relative is likely hard to really fake, even if the voice is close).

But really, I recommend security mindset and understanding of authorization levels, even if authentication isn't the main worry.  Most friends, even close ones, shouldn't be allowed to ask you to mail $500 in gift cards to a random address, even if they prove they are really themselves.

Comment by Dagon on Magic by forgetting · 2024-04-25T16:48:49.236Z · LW · GW

In deep meditation people become disconnected from reality

Only metaphorically, not really disconnected.  In truth, in deep meditation, the conscious attention is not focused on physical perceptions, but that mind is still contained in and part of the same reality.

This may be the primary crux of my disagreement with the post.  People are part of reality, not just connected to it.  Dualism is false, there is no non-physical part of being.  The thing that has experiences, thoughts, and qualia is a bounded segment of the universe, not a thing separate or separable from it.

Comment by Dagon on Magic by forgetting · 2024-04-24T21:10:50.575Z · LW · GW

Is your mind causally disconnected from the actual universe?  That's the only way I can understand the merging of minds that share some similarities (but are absolutely not identical across universes that aren't themselves identical).  Your forgetting may make two possible minds superficially the same, but they're simply not identical.

I don't know why you think path-based configuration of brain state would be false.  That may not be "identity" for all purposes - there may be purposes for which it doesn't suffice or is too restrictive, but it's probably good for this case.

Comment by Dagon on (When) Should you work through the night when inspiration strikes you? · 2024-04-23T22:15:25.921Z · LW · GW

I expect what the right call is to be very different from person to person and, for some people, from situation to situation.

Definitely.  And the balance changes as one ages as well.  For me, there are some kinds of work where it's very hard to get into the zone, and the cost of an interruption is very high.  However, I just get less effective over long sessions, and this has gotten much worse in the last few decades.   So the point of indifference between "I may not be able to recover this mind-state tomorrow" and "I may not be that useful tonight, and may not be good for ANYTHING tomorrow" has shifted.

I would recommend trying it at least a few times each year, in both directions.  Don't ever make one or the other the only option for yourself - it's always a choice.

Comment by Dagon on Subjective Questions Require Subjective information · 2024-04-23T19:21:27.420Z · LW · GW

If you have the memories of every single human up to that point, then you don't know which of them you are.

This depends on the mechanism of attaining all these memories.  In that world, it COULD be that you still know which memories are privileged, or at least which ones include meeting God and being in position to be asked the question. 

I mean, I'm with you fundamentally: it's not obvious that ANYTHING is truly objective - other people can report experiences, but that's mediated by your perceptions as well. In most cases, one can avoid the confusion by specifying a prediction of WHAT experiences will happen to WHICH observer.

Comment by Dagon on tailcalled's Shortform · 2024-04-22T15:12:14.357Z · LW · GW

My recommended way to resolve (aka disambiguate) definitional questions is "use more words".  Common understandings can be short, but unusual contexts require more signals to communicate.

Comment by Dagon on "You're the most beautiful girl in the world" and Wittgensteinian Language Games · 2024-04-22T15:09:00.421Z · LW · GW

I actually upvoted, mostly because it was a hook for comedy - it's so common a trope (the surprise value of taking something literally).  If it weren't for that, I'd probably have just passed rather than downvoting, but I find it pretty low-value overall.

Some mix of "obvious parts are obvious, non-obvious parts are some mix of pretentious and suspect."  I'd actually enjoy a (somewhat) deeper exploration of your agreement or disagreement with the Wittgenstein framing of this phrase, and the value of invoking cultural tropes.  Personally, this isn't one I'm confident enough to use, but there are other hyperbolic ideas I use for emphasis or humor, and I generally agree that communication is multimodal and contextual, much more than objective semantic content.

Comment by Dagon on Johannes C. Mayer's Shortform · 2024-04-22T14:57:51.807Z · LW · GW

Where do you even put the 10^100 objects you're iterating through?  What made you pick 10^100 as the scale of difficulty?  I mean, even if you account for parallelism and the sheer number of processing pipelines available to simultaneously handle things, that only buys a dozen orders of magnitude, not 100.  Exponents go up fast.

So, to answer your title, "no, I cannot".  Fortunately, I CAN abstract and model that many objects, if they're similar in most of the ways that matter to me.  The earth, for instance, has about 10^50 atoms (note: that's not half the size of your example, it's 1/10^50 the size).  And I can make a fair number of predictions about it.  And there's a LOT of behavior I can't predict.
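For a sense of scale, here's a rough back-of-the-envelope sketch (my own generous hardware numbers, not anything from the original shortform) of why even massive parallelism doesn't close a 10^100 gap:

```python
# Assume 10^12 parallel processors, each handling 10^9 objects per second -
# wildly generous, and still hopeless against 10^100 objects.
objects = 1e100
throughput = 1e12 * 1e9           # objects handled per second, all processors combined
seconds_per_year = 3.15e7

years = objects / (throughput * seconds_per_year)
print(f"{years:.1e} years")       # ~3.2e71 years
```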

Comment by Dagon on Upcoming unambiguously good tech possibilities? (Like eg indoor plumbing) · 2024-04-21T02:25:04.627Z · LW · GW

[epistemic status: just what I've read in popular-ish press, no actual knowledge nor expertise]

Two main mechanisms that I know of:

- Some cancers are caused (or enabled, or activated, or something) by viruses, and there's been immense progress in tailoring vaccines for specific viruses.

- Some cancers seem to be susceptible to targeted immune response (tailored antibodies).  Vaccines for these cancers enable one's body to reduce or eliminate spread of the cancer.

Comment by Dagon on How I Think, Part Four: Money is Weird · 2024-04-20T19:47:07.540Z · LW · GW

Note that everything is relative and marginal ("compared to what, for what increment?").  I don't think "favor" is the right word for surplus from trade, as it goes in both directions, and is unmeasurable.  If you buy a car for $66K, the dealer makes $11k profit, but also has effort and employment costs, so that's not net.  And you're getting more than $66k of value in owning the car (or you wouldn't have bought it - you're not intending to do a favor, just making a trade that benefits you and happens to benefit them).  So they're doing you a favor as much as you doing them one.  

Which is to say that the "favor" framing isn't very helpful, except in motivational terms - you may purposefully take a worse trade than you otherwise could, in order to benefit some specific person (or even a group, if you're weirdly altruistic enough).  But most economic analysis assumes this is a very small part of trade and work choices.

The key insight in figuring out the work and purchase decisions is that most things have different values to different people.  A given hour of effort in an endeavor you're relatively skilled at ("work") is worth some amount to you, and some amount to an employer.  It's worth more to the employer than to you, and your pay for that hour will be between those values.  For reasons of simplification, measurement difficulty, and preference for stability, it's usually traded in bundles - an agreement to work 40+ hours per week for multiple weeks.  That doesn't change the underlying difference in valuation as the main transactional motivation.
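A toy illustration of that last point (the numbers are mine, purely for concreteness): the same hour is valued differently by the two parties, and the wage is a split of the gap.

```python
# Toy numbers (mine, not the commenter's): any wage between the two valuations
# leaves both sides better off; bargaining decides how the surplus is split.
worker_value = 30     # your reservation value for the hour, in dollars
employer_value = 80   # the employer's value for that hour, in dollars

surplus = employer_value - worker_value
print(f"Viable wages: ${worker_value}-${employer_value}; surplus to split: ${surplus}")
```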

Comment by Dagon on "You're the most beautiful girl in the world" and Wittgensteinian Language Games · 2024-04-20T15:58:56.569Z · LW · GW

Comment by Dagon on How to know whether you are an idealist or a physicalist/materialist · 2024-04-20T15:37:48.922Z · LW · GW

You probably need to be a bit more explicit in tying your title to your text.  I'd guess you're just pointing out that these labels ("materialist" and "idealist") are both ridiculous when taken to the extreme, and that all sane people use different models for different decisions.  Oh, and that all cognition is about models and abstractions, which are always wrong (but often useful).

If I'm wrong in that, please use more words :)

As to your questions about the moon, I don't think "observable" has ever meant only and exactly "directly viewable by the person doing the writing".  It means "inferable from observations and experiences that are causally linked in simple/justifiable ways".

Comment by Dagon on A New Response To Newcomb's Paradox · 2024-04-20T15:17:30.875Z · LW · GW

It only makes sense to two-box if you believe that your decision is causally isolated from history in every way that Omega can discern.

Right.  That's why CDT is broken.  I suspect from the "disagree" score that people didn't realize that I do, in fact, assert that causality is upstream of agent decisions (including Omega, for that matter) and that "free will" is an illusion.

Comment by Dagon on Experiment on repeating choices · 2024-04-19T23:10:54.289Z · LW · GW

For me, these topics seem extremely contextual and variable with the situation and specifics of the tradeoff in the moment.  For many of them, I do somewhat frequently explore consciously what it might feel like (and for cheap ones, try out) to make a different tradeoff, but those experiments don't generalize well.

I suspect that for the impactful ones (heavily repeated or large), your first two bullet points don't apply - feedback is delayed from the decision, and if harmful, it will be significant.

Still, it's VERY GOOD to be reminded that these decisions are mostly made by type-1 thinking, out of habit or instinct (aka deep/early learning) that deserves reconsideration from time to time.

Comment by Dagon on What is the best way to talk about probabilities you expect to change with evidence/experiments? · 2024-04-19T21:06:48.995Z · LW · GW

If you're giving one number, that IS your all-inclusive probability.  You can't predict the direction that new evidence will change your probability (per https://www.lesswrong.com/tag/conservation-of-expected-evidence), but you CAN predict that evidence will arrive, with the probability-weighted shifts in each direction balancing out to zero.

An example is if you're flipping a coin twice.  Before any flips, you give 0.25 to each of HH, HT, TH, and TT.  But you strongly expect to get evidence (observing the flips) that will first change two of them to 0.5 and two to 0, then another update which will change one of the 0.5 to 1 and the other to 0.  

Likewise, p(doom) before 2035 - you strongly believe your probability will be 1 or 0 in 2036.  You currently believe 6%.  You may be able to identify intermediate updates, and specify a balance of probability * update that sums to zero now but will resolve to something specific once the evidence is obtained.

I don't know any shorthand for that - it's implied by the probability given.  If you want to specify your distribution of probable future probability assignments, you can certainly do so, as long as the mean remains 6%.  "There's a 25% chance I'll update to 15% and a 75% chance of updating to 3% over the next 5 years" is a consistent prediction.
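A quick sanity check of that last example (my arithmetic, not part of the original comment), showing the probability-weighted future estimates averaging back to the current 6%:

```python
# Sketch: a distribution over future probability estimates is consistent with
# the current figure when its probability-weighted mean equals that figure
# (conservation of expected evidence).
future_estimates = {0.15: 0.25, 0.03: 0.75}  # future estimate -> chance of ending up there

expected = sum(estimate * chance for estimate, chance in future_estimates.items())
print(round(expected, 4))  # 0.25 * 0.15 + 0.75 * 0.03 = 0.06, matching the current 6%
```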

Comment by Dagon on If digital goods in virtual worlds increase GDP, do we actually become richer? · 2024-04-19T19:00:18.593Z · LW · GW

Yes!  No!  What does "richer" actually mean to you?  For that matter, what does "we" mean to you (since the existing set of humans is changing hour to hour as people are born, come of age, and die, and even in a given set there's an extremely wide variance in what they have and in what's considered rich)?

To the extent that GDP is your measure of a nation's richness, then it's tautological that increasing GDP makes the nation richer.  The weaker argument that it (often) correlates (not necessarily causes) with well-being (in some averages and aggregates) is more defensible, but makes it unsuitable for answering your question.

I think my intuition is that GDP is the wrong tool for measuring how "rich" or "overall satisfied" people are, and a simple sum or average is probably the wrong aggregation function.  So I fall back on more personal and individual measures of "well-being".  For most people I know, and as far as I can tell the majority of neurotypical people, this is about lack of worry for near- and medium-term future, access to pleasurable experiences, and social acceptance among accessible sub-groups (family, friends, neighbors, online communities small enough to care about, etc.).

For that kind of  "general current human wants", a usable and cheap shared-but-excludable VR space seems to improve things for a lot of people, regardless of what happens to GDP.  In fact, if consumption of difficult-to-manufacture-and-deliver luxuries gets partially replaced by consumption of patterns of bits, that likely reduces GDP while increasing satisfaction.  

There will always be needs for non-virtual goods and experiences - it's not currently possible to virtualize food's nutrition OR pleasure, and this is true for many things.  Which means a mixed economy for a long long time.  I don't think anyone can tell you whether this makes those things cheaper or more expensive, relative to an hour spent working online or in the real world.

Comment by Dagon on Cooperation is optimal, with weaker agents too  -  tldr · 2024-04-19T14:09:44.120Z · LW · GW

Thanks for the conversation and exploration!  I have to admit that this doesn't match my observations and understanding of power and negotiation in the human agents I've been able to study, and I can't see why one would expect non-humans, even (perhaps especially) rational ones, to commit to alliances in this manner.

I can't tell if you're describing what you hope will happen, or what you think automatically happens, or what you want readers to strive for, but I'm not convinced.  This will likely be my last comment for a while - feel free to rebut or respond, I'll read it and consider it, but likely not post.

Comment by Dagon on Blessed information, garbage information, cursed information · 2024-04-18T20:16:24.096Z · LW · GW

These are probably useful categories in many cases, but I really don't like the labels.  Garbage is mildly annoying, as it implies that there's no useful signal, not just difficult-to-identify signal.  It's also putting the attribute on the wrong thing - it's not garbage data, it's data that's useful for other purposes than the one at hand.  "verbose" or "unfiltered" data, or just "irrelevant" data might be better.  

Blessed and cursed are much worse as descriptors.  In most cases there's nobody doing the blessing or cursing, and it focuses the mind on the perception/sanctity of the data, not the use of it.  "How do I bless this data" is a question that shows a misunderstanding of what is needed.  I'd call this "useful" or "relevant" data, and "misleading" or "wrongly-applied" data.

To repeat, though, the categories are useful - actively thinking about what you know, and what you could know, about data in a dataset, and how you could extract value for understanding the system, is a VERY important skill and habit.

Comment by Dagon on How to coordinate despite our biases? - tldr · 2024-04-18T19:29:43.975Z · LW · GW

I've seen links to that video before (even before your previous post today).  Is there a text or short argument that justifies "Non-naive cooperation is provably optimal between rational decision makers" ALONG WITH "All or any humans are rational enough for this to apply"? 

I'm not sure who the "we" is in your thesis.  If something requires full agreement and goodwill, it cannot happen, as there will always be bad actors and incompatibly-aligned agents.  

Comment by Dagon on Cooperation is optimal, with weaker agents too  -  tldr · 2024-04-18T16:06:55.234Z · LW · GW

What does "stronger" mean in this context?  In casual conversation, it often means "able to threaten or demand concessions".  In game theory, it often means "able to see further ahead or predict other's behavior better".  Either of these definitions imply that weaker agents have less bargaining power, and will get fewer resources than stronger, whether it's framed as "cooperative" or "adversarial".

In other words, what enforcement mechanisms do you see for contracts (causal OR acausal) between agents or groups of wildly differing power and incompatible preferences?

Relatedly, is there a minimum computational power for the stronger or the weaker agents to engage in this?  Would you say humans are trading with mosquitoes or buffalo in a reliable way?

Another way to frame my objection/misunderstanding is to ask: what keeps an alliance together?  An alliance by definition contains members who are not fully in agreement on all things (otherwise it's not an alliance, but a single individual, even if separable into units).  So, in the real universe of limited (in time and scope), shifting, and breakable alliances, how does this argument hold up?

Comment by Dagon on Housing Supply (new discussion format) · 2024-04-18T14:34:35.798Z · LW · GW

What's the desired outcome of this debate?  Are you looking for cruxes (axioms or modeling choices that lead to the disagreement, separate from resolvable empirical measurements that you don't disagree on)?  Are you hoping to update your own beliefs, or to convince your partner (or readers) to update theirs?

I do not necessarily endorse my comments in this piece.

That's likely to need some explanation about why it's valuable to put such comments on LessWrong.  It's fine to put non-endorsed views here, but they should be labeled as to why they're worth mentioning.  Putting misleading or known-suspect arguments-as-soldiers on LW, especially mixed in with things you DO support, is a mistake.

Comment by Dagon on Discomfort Stacking · 2024-04-18T13:48:31.143Z · LW · GW

I think that insisting on comparing unmeasurable and different things is an error.  If forced to do so, you can make up whatever numbers you like, and nobody can prove you wrong.  If you make up numbers that don't fully contradict common intuitions based on much-smaller-range and much-more-complicated choices, you can probably convince yourself of almost anything.

Note that on smaller, more complicated, specific decisions, there are many that seem to be inconsistent with this comparison: some people accept painful or risky surgery over chronic annoyances, some don't.  There are extremely common examples of failing to mitigate pretty serious harm for distant strangers, in favor of mild comfort for oneself and closer friends/family (as well as some examples of the reverse).  There are orders of magnitude in variance, enough to overwhelm whatever calculation you think is universal.

Comment by Dagon on Discomfort Stacking · 2024-04-18T02:09:34.279Z · LW · GW

it’s very possible that it could become a practical problem at some point in the future.

I kind of doubt it.  Practical problems will have complexity and details that overwhelm this simple model, making it near-irrelevant.  Alternately, it may be worth trying to frame a practical decision that an individual or small group (so as not to have to abstract away crowd and public choice issues) could make where this is important.

Do you think a logarithmic scale makes more sense than a linear scale?

Yes, but it probably doesn't fix the underlying problem that quantifications are unstable and highly variable across agents.

Comment by Dagon on shortplav · 2024-04-17T17:20:08.694Z · LW · GW

From their side.  Your explanation and arguments against that seem reasonable to me.

Comment by Dagon on Discomfort Stacking · 2024-04-17T17:19:12.338Z · LW · GW

A number of us (probably a minority around here) don't think "stacking" or any simple, legible, aggregation function is justified, not within an individual over time and certainly not across individuals.  There is a ton of nonlinearity and relativity in how we perceive and value changes in world-state.  

Comment by Dagon on Should we maximize the Geometric Expectation of Utility? · 2024-04-17T16:43:34.072Z · LW · GW

I think this misses out on the fact that utility is always indirect - there is a function from world-state to utility that each rational agent has, so there can never be a lottery that directly awards utility.  Meaning you can model the valuation of utility linearly, while the mapping from resources to utility has logarithmically declining marginal utility.
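A minimal sketch of that distinction (the toy numbers and the log utility function are my assumptions, not from the post): the lottery pays out resources, expected utility is the ordinary linear expectation over outcomes, and the log only enters in the resources-to-utility mapping.

```python
import math

def utility(resources: float) -> float:
    # Assumed utility function with logarithmically declining marginal utility.
    return math.log(resources)

# The lottery is over resources (dollars), never over "utility" directly.
lottery = {100: 0.5, 10_000: 0.5}  # resource outcome -> probability

expected_utility = sum(p * utility(r) for r, p in lottery.items())
certainty_equivalent = math.exp(expected_utility)

print(round(expected_utility, 3))      # ~6.908
print(round(certainty_equivalent, 1))  # ~1000.0, well below the 5050 expected dollars
```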

Comment by Dagon on shortplav · 2024-04-17T16:33:07.169Z · LW · GW

That seems an odd motte-and-bailey style explanation (and likely belief - as you say, misgeneralized).

I will agree that humans can execute TINY arbitrary Turing calculations, and slightly less tiny (but still very small) ones with some external storage.  And quite a bit larger with external storage and computation.  At what point the brain stops doing the computation is perhaps an important crux in that claim, as is whether the ability to emulate a Turing machine in the conscious/intentional layer is the same as being Turing-complete in the meatware substrate.

And the bailey of "if we can expand storage and speed up computation, then it would be truly general" is kind of tautological, and kind of unjustified without figuring out HOW to expand storage and computation while remaining human.

Comment by Dagon on shortplav · 2024-04-16T16:22:24.459Z · LW · GW

Can you point to one or two of the claims that the human brain is a general-purpose Turing machine that can run any program?  I don't think I'd seen that, and it seems trivially disprovable by a single example.  Most humans cannot perform even fairly simple arithmetic in their heads, let alone computations that would require a longer (but still finite) tape.

Of course, using tools, humans can construct Turing-machine-equivalent mechanisms of large (but not infinite) size, but that seems like a much weaker claim than humans BEING such machines.

Comment by Dagon on shortplav · 2024-04-16T15:08:41.913Z · LW · GW

It's not a topic I've heard debated for quite some time, but I generally saw it going the other direction.  Not "humans are a general Turing-complete processing system" - that's clearly false, and kind of irrelevant.  But rather "humans are fully implementable on a Turing machine", or "humans are logically identical to a specific tape on a Turing machine".

This was really just another way of asserting that consciousness is computation.

Comment by Dagon on A New Response To Newcomb's Paradox · 2024-04-16T02:07:23.986Z · LW · GW

I'm not sure I follow why Aumann's agreement theorem is relevant here - the survey does not include any rational agents, agents with mutual knowledge of their rationality, or agents with the same priors.  It makes sense to one-box ONLY if you calculate EV in a way that assigns a significant probability to causality violation (your decision somehow affecting the previously-committed Omega behavior).

Comment by Dagon on Taking into account preferences of past selves · 2024-04-15T16:59:09.674Z · LW · GW

if I were perfectly rational agent ...

Yeah, "perfectly rational" implies consistency over time, which is the whole question you're struggling with.

if you keep adding strict prohibitions onto your future self, you are limiting all your future options

Right, that's kind of the point, isn't it?  You don't trust your future self to be consistent with your present beliefs, so you constrain it.  Note that there are current-self tactics based on the same principle: you might not trust parts of your decision apparatus to make food choices, for instance, that other parts of you prefer, and therefore don't keep junk food at hand.

Most older people I know (including myself), recognize that their younger selves were jerks, or at least confused about a lot of things.  They often regret commitments made previously (and often agree with them, but in those cases the commitment isn't binding, as they like it anyway).

I'd recommend not framing this as a negotiation or trade (acausal trade is close, but is pretty suspect in itself).  Your past self(ves) DO NOT EXIST anymore, and can't judge you. Your current self will be dead when your future self is making choices.  Instead, frame it as love, respect, and understanding. You want your future self to be happy and satisfied, and your current choices impact that.  You want your current choices to honor those parts of your past self(ves) you remember fondly.  This can be extended to the expectation that your future self will want to act in accordance with a mostly-consistent self-image that aligns in big ways with its past (your current) self.

This framing is consistent with most of your concrete suggestions - those are reinforcing the importance (to you currently) of these values, and the memory of this exploration, documentation, and thinking about why you currently care about future actions will inform your future self's values.  It's not a contract, it's persuasion.

Comment by Dagon on Work ethic after 2020? · 2024-04-13T19:00:58.787Z · LW · GW

Fully on-board with the annoyance at this equilibrium.  I don't see a better way, unfortunately, with the information and motivation asymmetry between software employers and employees, both of which have large variances in quality.

I've focused on the technical and social/team aspects of software development as a (very) senior IC, rather than as a manager in title.  Even so, I've been deeply involved in hiring, organizing, motivating, and aligning teams for a number of large projects.  I've found a very strong correlation between the signaling of "work ethic" in terms of energy and hours and the actual performance and impact of an employee.  Like all heuristics, it's nowhere near 100%, and it's sad that there's no easy way to identify the exceptions.  Sad as it is, it's true - as an employer of software engineers, I would prefer not to hire part-time.  

Which means the expected-productivity curve for employers is nonlinear, so there's no reasonable way to make the pay/effort ratio constant.

100% (or more - this justifies hyperbole) on applying "work ethic" to other aspects of life.  This difficult tradeoff of motivated effort on behalf of others applies to housekeeping, care for partner/children, some parts of hobbies, and a lot of other things.  It's not work-for-money, it's work-for-others-preferences.  

Comment by Dagon on jacquesthibs's Shortform · 2024-04-12T16:57:13.614Z · LW · GW

I love this idea!  I don't actually like videos, preferring searchable, excerptable text, but I may not be typical and there's room for all. At first glance, I agree with your guess that the overview/intro is more value per effort (for you and for consumers, IMO) than a deep-dive into the code. There IS probably a section of code or core modeling idea for each where it would be worth going half-deep (algorithm and usage, not necessarily line-by-line).

Note that this list is itself incredibly valuable, and you might start with an intro video (and associated text) that spends 1 minute on each and why you're planning to do it, and what you currently think will be the most important intro concept(s) for each.

Comment by Dagon on Work ethic after 2020? · 2024-04-12T16:45:22.690Z · LW · GW

The point is that wanting a part-time job without having a really good excuse is a bad signal of one's work ethic. What else could be a good excuse?

One part of the declining societal work ethic is that it's quite a bit more common to decide to work less, without needing much of an excuse.  I know a LOT of people under normal retirement age who describe themselves as "semi-retired" or "working enough to keep my head in the game", rather than optimizing for career and future earnings potential.  Note that this is mostly long-term software devs who've amassed quite a lot of savings, and I ALSO know many who are near or past normal retirement age and need to keep working for the money.  A lot of motivation isn't about work ethic, it's about transactional optimization.

The tradeoff of "pretty good compensation" for "more time/energy than I'd like to spend" is pretty rampant.  That part isn't about sexism or having a good excuse, it's just the bundling that seems to work for most employers.

Comment by Dagon on Work ethic after 2020? · 2024-04-12T15:00:42.088Z · LW · GW

Big feels on the "This sucks, it's gotten worse, but objectively I'm so much better off than the VAST majority of past and present humans that it feels petty and unjustified to complain" front. Your framing of "in theory it shouldn't be so hard, but in practice, it is" is an excellent summary.

I suspect it comes down to the fact that all perception (of the physical and of the social world) is comparative - you notice things and evaluate them only and exactly in comparison to expectations.  And since those expectations are mutable and imaginary, we are noticing how shitty things are compared to our rosy memories and what media shows us.  Comparing to a 12th century peasant farmer would make us happier, but that's far less available.

(the sexism comment doesn't match my experience in software dev in England and in the US - there are almost no good part-time jobs for women, only adjunct or career-limited positions, which could go to men if they wanted, but they generally don't.  Very flexible schedules for childcare or anything else are pretty available to both men and women, regardless of who's primary caretaker.  This likely varies widely, but I'd be shocked if you had to provide proof of divorce to get accommodation).

Comment by Dagon on Upcoming unambiguously good tech possibilities? (Like eg indoor plumbing) · 2024-04-12T13:48:10.890Z · LW · GW

Cancer (and other) vaccines seem pretty cool. 

Comment by Dagon on Work ethic after 2020? · 2024-04-11T18:43:15.838Z · LW · GW

[ speculative; I see some indications of this in my personal experiences and those I talk with, but I have no idea how prevalent it is. ]

I've long struggled with motivation and akrasia - since before the Internet was a thing.  It's gotten slightly better and slightly worse a few times, more based on my age and situation than (I think) on external factors.  I AM more open about it in the last decade or so than previously, mostly because others have started talking about it, and I'm (slightly) less afraid that I'll be held accountable for my failings more forcefully when I admit it than when I hide it.  It may be that it's harder for me to overcome it when I'm not trying to show the world it doesn't affect me.

I think this social acceptability and visibility plays a MUCH larger part in people's behavior than is often admitted.  The tension between "accept and love people unconditionally" and "encourage cooperative/conformist behavior, even if that's not someone's natural default" may have no great equilibrium point.  It's definitely shifted AWAY from "conforming enables cooperation", which seemed to be the common viewpoint in my youth and young adulthood, TOWARD "be yourself, even if it inconveniences others.  Anyone who sees a downside is a bigot (even if they are only pointing out direct risks and suggesting mitigations that don't seem to them to deny someone's identity)."

It's massively exacerbated by the changes in media and scalable individual visibility - what we see as "normal" is the tail of the distribution that is shown to us over and over, and there are way fewer role models for conformist success in the mix. 

This applies to a whole lot of topics, but "work ethic" is probably one that seems most impactful.  I really do look forward to seeing if anything remains of a world where a large majority is not willing to be drones supporting the general equilibrium (including vast wealth for the oligarchs).  

In my own mind, there's a very large tension between "nobody should have to do that" and "boy, it's going to suck if nobody actually does that".  This applies to picking berries, office work, doing laundry, raising children (the MASSIVELY time-consuming and unpleasant parts, especially), and almost everything else.  I don't have a good answer, and I don't have much expectation that "AI will save us" will happen at all, let alone happening before societal meltdown.

Comment by Dagon on Is Consciousness Simulated? · 2024-04-10T18:23:13.760Z · LW · GW

I think this would benefit from a crisp definition of "consciousness" and of "simulation".  THEN you can clarify your question to one of:

  • MUST consciousness be simulated, because it can't exist in a base-level reality.
  • DOES my consciousness happen to be simulated, even though it's feasible to exist in a base-level reality.
  • CAN consciousness exist in a simulation, or is it only conscious in the base-level reality, with the simulation being some sort of interference layer.

I haven't seen good enough definitions of either thing for these questions to make sense.  Most conceptions of 'simulation' are complete enough that it's impossible to determine from inside whether or not it's a simulation, so that would lead to "with that conception of simulation, with the consciousness that I'm experiencing, it is untestable and unimportant whether it's in a simulation".

Comment by Dagon on Non-ultimatum game problem · 2024-04-09T00:23:33.695Z · LW · GW

It's not actually iterated, it just allows communication before the final agreement (or not).  Simply assign values for each player for s and t, and you can calculate various equilibrium outcomes.  For fun, it's not guaranteed that either party knows the other's utility function over (s,t), so you can make each player assign a distribution of utilities for the other, and figure out the optimum for each.  Then negotiation can reveal some of it (or can fail and lead to sub-optimal outcomes).

Comment by Dagon on How does the ever-increasing use of AI in the military for the direct purpose of murdering people affect your p(doom)? · 2024-04-08T14:42:41.633Z · LW · GW

I can only speak for myself, but I downvoted for leaning very heavily on a current political conflict, because it's notoriously difficult to reason about generalities due to the mindkilling effect of taking sides.  The fact that I seem to be on a different side than you (though there ain't no side that's fully in the right - the whole idea of ethnic and religious hatred is really intractable) is only secondary.

I regret engaging on that level.  I should have stuck with my main reaction that "individual human conflict is no more likely to lead to AI doom than nuclear doom".  It didn't change the overall probability IMO.

Comment by Dagon on The Poker Theory of Poker Night · 2024-04-07T15:30:07.586Z · LW · GW

I've played a lot of poker, both socially and in cardrooms.  I would probably not join a game that needs legible commitment mechanisms.  If it's not sufficiently high-value to succeed on its own, I probably have better things to do most of the time.

Mostly my recommendation is to find/create games that people like enough and feel safe enough to tell each other their expected attendance. Also, get good at short-handed play - it's useful if a few people drop out, but even more useful to be able to play while some people are still en-route.

Comment by Dagon on Dagon's Shortform · 2024-04-07T15:17:23.693Z · LW · GW

Not on my profile page, as far as I've been able to tell.  Not important enough to dig around GW.  I think I'll start just saving local copies of some of my comments, and decide whether to turn them into a shortform if I notice they're deleted.  

Would it be simple for the system to send me a PM with the contents when a comment is deleted by a post author (either directly or by deleting the post that contains it)?

Comment by Dagon on How does the ever-increasing use of AI in the military for the direct purpose of murdering people affect your p(doom)? · 2024-04-06T18:55:12.194Z · LW · GW

Ehn.  Kind of irrelevant to p(doom).  War and violent conflict is disturbing, but not all that much more so with tool-level AI.  

Especially in conflicts where the "victims" aren't particularly peaceful themselves, it's hard to see AI as anything but targeting assistance, which may reduce indiscriminate/large-scale killing.

Comment by Dagon on Open Thread Spring 2024 · 2024-04-06T16:05:17.241Z · LW · GW

I haven't noticed it (literally at all - I don't think I've seen it, though I'm perhaps wrong).  Based on this comment, I just looked at https://www.lesswrong.com/users/review-bot?from=search_autocomplete and it seems a good idea (and it points me to posts I may have missed - I tend to not look at the homepage, just focusing on recent posts and new comments on posts on https://www.lesswrong.com/allPosts).

I think putting a comment there is a good mechanism to track, and probably easier and less intrusive than a built-in site feature.  I have no clue if you're actually getting enough participation in the markets to be useful - it doesn't look like it at first glance, but perhaps I'm wrong. 

It does seem a little weird (and cool, but mostly in the "experiment that may fail, or may work so well we use it elsewhere" way) to have yet another voting mechanism for posts.  I kind of like the explicitness of "make a prediction about the future value of this post" compared to "loosely-defined up or down".  

Comment by Dagon on Open Thread Spring 2024 · 2024-04-06T15:43:56.675Z · LW · GW

I liked it, but probably don't want it there all the time.  I wonder if it's feasible (WRT your priority list) to repeat some of the site feature options from account settings on a "quick feature menu", to make it easy to turn on and off.

Comment by Dagon on Dagon's Shortform · 2024-04-06T15:05:50.800Z · LW · GW

If I make a comment, then the author deletes the post, is my comment lost as well?  I'm pretty sure I don't actually need a record of things I've said, and that my comments aren't valuable enough to justify extra effort, but it kind of bugs me.

Has anyone written the scraper or script to save a copy of everything you write on LW, so it's available even if it gets deleted later?  If it archived the post and comment thread you're responding to, it would be all right.
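I don't know of an existing one, but here's a minimal sketch of the idea. The GraphQL endpoint at https://www.lesswrong.com/graphql is real; the query shape, view name, and field names below are my assumptions and would need to be checked against the live schema.

```python
import json
import requests

GRAPHQL_URL = "https://www.lesswrong.com/graphql"

# Hypothetical query - the view and field names are assumptions to verify
# against the live schema before relying on this.
QUERY = """
query CommentsByUser($userId: String, $limit: Int) {
  comments(input: {terms: {view: "profileComments", userId: $userId, limit: $limit}}) {
    results {
      _id
      postId
      postedAt
      contents { markdown }
    }
  }
}
"""

def backup_comments(user_id: str, limit: int = 500, path: str = "lw_comments.json") -> None:
    """Fetch a user's recent comments and save a local JSON copy."""
    response = requests.post(
        GRAPHQL_URL,
        json={"query": QUERY, "variables": {"userId": user_id, "limit": limit}},
        timeout=30,
    )
    response.raise_for_status()
    with open(path, "w") as f:
        json.dump(response.json(), f, indent=2)
```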

Comment by Dagon on [deleted post] 2024-04-05T19:55:12.078Z

My suspicion is that the legibility and exactness of financial system won't be repeated for other kinds of "social credit".  The keys that make the financial system work are:

  • Fungibility - a quantity of credit from A or B is equally valuable.  This applies to money or "simple goods", but does not apply to trust, liking, respecting, or fearing.
  • Fixed quantity - financial values are neither created nor destroyed (by normal transactions - there are underlying system actions that do change totals).  Trust, liking, fear, and respect are not conserved in transactions.
  • Transferability - the previous two properties make it pretty straightforward to exchange at scale, leading to large benefits from specialization and trade.  It's not clear how status, power, or friendship can be made to scale in those ways.

Comment by Dagon on How Often Does ¬Correlation ⇏ ¬Causation? · 2024-04-04T23:50:41.241Z · LW · GW

Really interesting exploration, I’m glad for it. I would warn about applying it to non-randomly-selected pairs - most claims of (non-)causality are not taken equally from a distribution of functions.

Comment by Dagon on StartAtTheEnd's Shortform · 2024-04-04T23:32:27.717Z · LW · GW

Butlerian Jihad? Sign me up!