Posts

Should LW suggest standard metaprompts? 2024-08-21T16:41:07.757Z
What causes a decision theory to be used? 2023-09-25T16:33:36.161Z
Adversarial (SEO) GPT training data? 2023-03-21T18:55:01.330Z
{M|Im|Am}oral Mazes - any large-scale counterexamples? 2023-01-03T16:43:37.682Z
Does a LLM have a utility function? 2022-12-09T17:19:45.936Z
Is there a worked example of Georgian taxes? 2022-06-16T14:07:27.795Z
Believable near-term AI disaster 2022-04-07T18:20:16.843Z
Laurie Anderson talks 2021-12-18T20:47:01.484Z
For moderately well-resourced people in the US, how quickly will things go from "uncomfortable" to "too late to exit"? 2020-06-12T16:59:56.845Z
Money isn't real. When you donate money to a charity, how does it actually help? 2020-02-02T17:03:04.426Z
Dagon's Shortform 2019-07-31T18:21:43.072Z
Did the recent blackmail discussion change your beliefs? 2019-03-24T16:06:52.811Z

Comments

Comment by Dagon on Evolution's selection target depends on your weighting · 2024-11-19T22:20:09.511Z · LW · GW

Wish I could upvote and disagree.  Evolution is a mechanism without a target.  It's the result of selection processes, not the cause of those choices.

Comment by Dagon on AtillaYasar's Shortform · 2024-11-19T16:45:31.695Z · LW · GW

There have been a number of debates (which I can't easily search on, which is sad) about whether speech is an action (intended to bring about a consequence) or a truth-communication or truth-seeking (both imperfect, of course) mechanism.  It's both, at different times to different degrees, and often not explicit about what the goals are.

The practical outcome seems spot-on.  With some people you can have the meta-conversation about what they want from an interaction, with most you can't, and you have to make your best guess, which you can refine or change based on their reactions.

Out of curiosity, when chatting with an LLM, do you wonder what its purpose is in the responses it gives?  I'm pretty sure it's "predict a plausible next token", but I don't know how I'll know to change my belief.

Comment by Dagon on Anvil Problems · 2024-11-19T16:38:18.249Z · LW · GW

Gah!  I missed my chance to give one of my favorite Carl Sagan quotes, a recipe for Apple Pie, which demonstrates the universality and depth of this problem:

If you wish to make an apple pie from scratch you must first invent the universe.

Comment by Dagon on Ethical Implications of the Quantum Multiverse · 2024-11-19T16:34:17.904Z · LW · GW

Note that the argument over whether MWI changes anything is very different from the argument about what matters and why.  I think it doesn't change anything, independently of which in-universe things matter and how much.

Separately, I tend to think "mattering is local".  I don't argue as strongly for this, because it's (recursively) a more personal intuition, less supported by type-2 thinking.  

Comment by Dagon on Ethical Implications of the Quantum Multiverse · 2024-11-18T23:20:21.570Z · LW · GW

I think all the same arguments that it doesn't change decisions also apply to why it doesn't change virtue evaluations.  It still all adds up to normality.  It's still unimaginably big.  Our actions as well as our beliefs and evaluations are irrelevant at most scales of measurement.

Comment by Dagon on A Theory of Equilibrium in the Offense-Defense Balance · 2024-11-15T18:21:41.212Z · LW · GW

I think this is the right way to think of most anti-inductive (planner-adversarial or competitive exploitation) situations.  Where there are multiple dimensions of asymmetric capabilities, any change is likely to shift the equilibrium, but not necessarily by as much as the shift in that component.

That said, tipping points are real, and sometimes a component shift can have a BIGGER effect, because it shifts the search to a new local minimum.  In most cases, this is not actually entirely due to that component change, but the discovery and reconfiguration is triggered by it.  The rise of mass shootings in the US is an example - there are a lot of causes, but the shift happened quite quickly.

Offense-defense is further confused as an example, because there are at least two different equilibria involved.  When you say

The offense-defense balance is a concept that compares how easy it is to protect vs conquer or destroy resources.

Conquer control vs retain control is a different thing than destroy vs preserve.  Frank Herbert claimed (via fiction) that "The people who can destroy a thing, they control it." but it's actually true in very few cases.  The equilibrium of who gets what share of the value from something can shift very separately from the equilibrium of how much total value that thing provides.

Comment by Dagon on nikola's Shortform · 2024-11-14T22:25:08.839Z · LW · GW

Hmm. I think there are two dimensions to the advice (what is a reasonable distribution of timelines to have, vs what should I actually do).  It's perfectly fine to have some humility about one while still giving opinions on the other.  "If you believe Y, then it's reasonable to do X" can be a useful piece of advice.  I'd normally mention that I don't believe Y, but for a lot of conversations, we've already had that conversation, and it's not helpful to repeat it.

 

Comment by Dagon on Why Bayesians should two-box in a one-shot · 2024-11-14T21:21:24.214Z · LW · GW

note: this was 7 years ago and I've refined my understanding of CDT and the Newcomb problem since.

My current understanding of CDT is that it does effectively assign a confidence of 1 to the decision not being causally upstream of Omega's action, and that is the whole of the problem.  It's "solved" by just moving Omega's action downstream (by cheating and doing a rapid switch).  It's ... illustrated? ... by the transparent version, where a CDT agent just sees the second box as empty before it even realizes it's decided.  It's also "solved" by acausal decision theories, because they move the decision earlier in time to get the jump on Omega.

For non-rigorous DTs (like human intuition, and what I personally would want to do), there's a lot of evidence in the setup that Omega is going to turn out to be correct, and one-boxing is an easy call.  If the setup is somewhat different (say, neither Omega nor anyone else makes any claims about predictions, just says "sometimes both boxes have money, sometimes only one"), then it's a pretty straightforward EV calculation based on kind of informal probability assignments.
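
For the original version, that EV calculation looks something like the sketch below (standard Newcomb payoffs, collapsing Omega's track record into an accuracy p - my framing, since no numbers are given above):

```python
# EV of each choice given predictor accuracy p (standard payoffs assumed:
# $1M in the opaque box iff one-boxing was predicted, $1k in the clear box).
def ev_one_box(p: float) -> float:
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
# one-boxing has higher EV whenever p > 0.5005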

But it does require not using strict CDT, which rejects the idea that the choice has backward-causality.

Comment by Dagon on Anvil Problems · 2024-11-14T00:44:03.871Z · LW · GW

Thanks for this - it's important to keep in mind that a LOT of systems are easier to sustain or expand than to begin.  Perhaps most systems face this.

In a lot of domains, this is known as the "bootstrap" problem, based on the concept of "lift yourself up by your bootstraps", which doesn't actually work well as a metaphor.  See the Wikipedia article on Bootstrapping.

In CS, for instance, compilers are pieces of software that turn source code into machine code.  Since they're software, they need a compiler to build them.  GCC (and some other from-scratch compilers; many others just depend on GCC) includes a "bootstrap C compiler" - originally hand-coded, nowadays compiled as well - which can compile a minimal "stage 2" compiler, which then compiles the main compiler; the main compiler is then used to build itself again, with all optimizations available.
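
A toy sketch of that staged structure (the function names are made up for illustration; real bootstraps execute actual binaries rather than string stand-ins):

```python
# Toy sketch of staged compiler bootstrapping (illustrative only).

def stage0_compile(source: str) -> str:
    """Minimal hand-written (or pre-built) compiler: supports just enough
    of the language to compile the real compiler's source."""
    return f"machine-code({source})"

def compile_with(compiler_binary: str, source: str) -> str:
    """Stand-in for running an already-built compiler on some source."""
    return f"machine-code({source})"

def bootstrap(compiler_source: str) -> str:
    stage1 = stage0_compile(compiler_source)        # minimal build
    stage2 = compile_with(stage1, compiler_source)  # full-featured build
    stage3 = compile_with(stage2, compiler_source)  # rebuild with itself
    # Real bootstraps compare the last two stages: reaching a fixed point
    # is evidence the compiler compiles itself correctly.
    assert stage2 == stage3, "bootstrap did not converge"
    return stage3

print(bootstrap("compiler-source"))
```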

In fact, you've probably heard the term "booting up" or "rebooting" your computer.  This is a shortening of the word "bootstrap", and refers to powering on without any software, loading a small amount of code from ROM or Flash (or other mostly-static store), and using that code to load further stages of the operating system.

Comment by Dagon on papetoast's Shortforms · 2024-11-11T20:31:33.529Z · LW · GW

Allocation of blame/causality is difficult, but I think you have it wrong.

ex. 1 ... He would also waste Tim's $100 which counterfactually could have been used to buy something else for Bob. So Bob is stuck with using the $100 headphone and spending the $300 somewhere else instead.

No.  TIM wasted $100 on a headset that Bob did not want (because he planned to buy a better one).  Bob can choose to hide this waste (keeping his $300 but accepting a worse listening experience, with the "benefit" of misleading Tim about his misplaced altruism), or to discard the gift and buy the headphones as he'd already planned (being $300 poorer but enjoying better sound, at the cost of making Tim feel bad - though Tim perhaps learns to ask before wasting money).

ex. 2 The world is now stuck with Chris' poor translation on book X with Andy and Bob never touching it again because they have other books to work on.

Umm, here I just disagree.  The world is no worse off for having a bad translation than for having no translation.  If the bad translation is good enough that the incremental value of a good translation doesn't justify doing one, then that is your answer.  If it isn't - if a good translation is still worth the marginal effort - then Andy or Bob should re-translate it.  Either way, Chris has improved the value of books, or has had no effect except wasting his own time.

Comment by Dagon on The Modern Problems with Conformity · 2024-11-11T18:42:39.222Z · LW · GW

You need to be careful to define "us" in these discussions.  The people for whom it worked in the past are not the people making behavioral choices now.  They are the ancestors of today's people.  You also have to be more specific about what "worked" means - they were able to reproduce and create the current people.  That is very different from what most people mean by "it works" when evaluating how to behave today.

It's also impossible to distinguish what parts of historical behavior "worked" in this way.  Perhaps it was conformity per se, perhaps it was the specific conformist behaviors that previous eras preferred, perhaps it was other parts of the environment that made it work, which no longer does.

Comment by Dagon on Viliam's Shortform · 2024-11-10T17:39:30.025Z · LW · GW

It gets very complicated when you add in incentives and recognize that science and scientists are also businesses.  There's a LOT of the world that scientists haven't (or haven't in the last century or so) really tried to prove, replicate, and come to consensus on.

Comment by Dagon on Sleeping Beauty – the Death Hypothesis · 2024-11-10T17:32:04.358Z · LW · GW

Yes for the first half, no for the second.  I would reply 1/2, but not JUST because of conventional probability theory.  It's also because the unstated parts of "what will resolve the prediction", in my estimation and modeling, match the setup of conventional probability theory.  It's generally assumed there's no double-counting or other experience-affecting tomfoolery.

Comment by Dagon on Tapatakt's Shortform · 2024-11-09T19:28:07.565Z · LW · GW

I'm very much not sure discouraging HFT is a bad thing.

It's not just the "bad" HFT.  It's any very-low-margin activity.

But normal taxes have the same effect, don't they?

Nope, normal taxes scale with profit, not with transaction size.  

Comment by Dagon on Tapatakt's Shortform · 2024-11-09T17:57:48.491Z · LW · GW

It's too much for some transactions, and too little for others.  For high-frequency (or mid-frequency) trading, 1% of the transaction is 3 or 4 times the expected value from the trade.  For high-margin sales (yachts or software), 1% doesn't bring in enough revenue to be worth bothering (this probably doesn't matter unless the transaction tax REPLACES other taxes rather than being in addition to them).
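
To make the trading point concrete, a toy calculation (the numbers are illustrative assumptions, not from anywhere):

```python
# Toy numbers: a mid-frequency trade might expect to earn ~0.25% of
# notional, so a 1% transaction tax is ~4x the expected edge and turns
# the trade strictly negative-EV.
notional = 100_000
expected_edge = 0.0025 * notional   # ~$250 expected gross profit
tax = 0.01 * notional               # $1,000 transaction tax
print(expected_edge - tax)          # -750.0: the trade is no longer worth making
```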

It also interferes with business organization - it encourages companies to do things in-house rather than outsourcing or partnering, since inside-company "transactions" aren't real money and aren't taxed.

It's not a bad idea per se, it just needs as many adjustments and carveouts as any other tax, so it ends up as politically complicated as any other tax and doesn't actually help with anything.

Comment by Dagon on Quantum Immortality: A Perspective if AI Doomers are Probably Right · 2024-11-07T22:53:07.660Z · LW · GW

I suspect we don't agree on what it means for something to matter.  If something is outside the causal/observable cone (add dimensions to cover MWI if you like), its difference or similarity is by definition not observable.

And the distinction between "imaginary" and "real, but fully causally disconnected" is itself imaginary.

There is no identity substance, and only experience-reachable things matter.  All agency and observation is embedded, there is no viewpoint from outside.

Comment by Dagon on Quantum Immortality: A Perspective if AI Doomers are Probably Right · 2024-11-07T22:00:19.240Z · LW · GW

I'm not sure why 

  • Universe is finite (or only countably infinite), and MWI is irrelevant (makes it larger, but doesn't change the cardinality).  When you die, you die.  There may or may not exist near-but-not-exact duplicates outside of current-you's lightcone.

is not one of your considerations.  This seems most likely to me.

Comment by Dagon on Quantum Immortality: A Perspective if AI Doomers are Probably Right · 2024-11-07T19:36:35.008Z · LW · GW

If quantum immortality is true

This is a big if.  It may be true (though it also implies that events as unlikely as Boltzmann Brains occur as well), but it's not true in a way that has causal impact on my current predicted experiences.  If it is true, then the VAST VAST MAJORITY of universes don't contain me in the first place, and the also-extreme majority of those that do will have me die.

Assume quantum uncertainty affects how the coins land. I survive the night only if I correctly guess the 10th digit of π and/or all seven coins land heads, otherwise I will be killed in my sleep.

In a literal experiment, where a human researcher kills you based on their observations of coins and calculation of pi, I don't think you should be confident of surviving the night.  If you DO survive, you don't learn much about uncorrelated probabilities - there's a near-infinite number of worlds, and fewer and fewer of them will contain you.

I guess this is a variant of option (1) - Deny that QI is meaningful.  You don't give up on probability - you can estimate a (1/2)^7 * 1/10 = 0.00078 chance of surviving.   
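
A quick check of that arithmetic.  The comment reads the quoted setup as a conjunction (digit guessed right AND all heads); under a disjunctive reading of the quoted "and/or" the number is much larger, so both are shown:

```python
# Survival probability under both readings of the quoted "and/or".
p_coins = 0.5 ** 7                 # seven fair coins all land heads: 1/128
p_digit = 1 / 10                   # 10th digit of pi guessed correctly
p_and = p_coins * p_digit          # ~0.00078, the figure in the comment
p_or = p_coins + p_digit - p_and   # ~0.1070 if either condition suffices
print(f"{p_and:.5f} {p_or:.4f}")
```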

Comment by Dagon on The Case Against Moral Realism · 2024-11-07T19:21:40.658Z · LW · GW

I think there's a much simpler case against it: show me the instrument readings, or at least tell me the unit of measure.

Comment by Dagon on LDT (and everything else) can be irrational · 2024-11-06T17:04:59.692Z · LW · GW

This is an important theorem.  There is no perfect decision theory, especially against equal-or-better opponents.  I tend to frame it as "the better predictor wins".  Almost all such adversarial/fixed-sum cases are about power, not fairness or static strategy/mechanism.  

We (humans, including very smart theorists) REALLY want to frame it as clever ways to get outcomes that fit our intuitions.  But it's still all about "who goes first (in the logical/credible-commitment sense)".

 

Comment by Dagon on Saul Munn's Shortform · 2024-11-06T16:53:10.245Z · LW · GW

I've sometimes used "crux weight" for a related but different concept - how important that crux is to a decision.  I'd propose "crux belief strength" for your topic - that part of it fits very well into a Bayesian framework for evidence.

Most decisions (for me, as far as I can tell) are overdetermined - there are multiple cruxes, with different weights and credences, which add up to more than 51% "best".  They're inter-correlated, but not perfectly, so it's REALLY tricky to be very explicit or legible in advance what would actually change my mind.

Comment by Dagon on Daniel Kokotajlo's Shortform · 2024-11-06T16:41:12.318Z · LW · GW

With modern mobility (air, drones, fast armor, etc.), it's not clear that "your line having an exploitable hole" is preventable in cities and built-up areas, even without significant tunneling.  For "normal" tunnels (such that there could be one every 10 meters along a kilometer, as in your example), as distinct from "big" tunnels like multilane highways and underground plazas, it doesn't take much to fortify or seal off an exit once it's known.  It's not clear which side will decide a given tunnel is more danger than help, but one of them will seal it off (or collapse it, or whatever).  Surprise ad-hoc tunnels are problematic, but technology is still pretty limited in making and disguising them.

Note that small, unmechanized, and unsupported surprise attacks are only really effective with a supply of suicide-ready expert soldiers.  That is, logically, a rare combination.

That said, I don't have much of a handle on modern state-level or guerilla warfare doctrines.  So I'd be happy to learn that there are reasons this is more important than it feels to me.

Edit to add: I get the sense that MUCH of modern warfare planning is about cost/benefit.  When is it cheaper to just fill the tunnels (or just some of them - once it's known that it's common, there won't be many volunteers to attack that way) with nerve gas or just blow them up, than to defend against them or use them yourselves?  

Comment by Dagon on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-04T21:28:34.412Z · LW · GW

Age and popularity of an idea or practice have some predictive power as to how useful it has been.  Old and surviving is some evidence.  Popular is some evidence.  Old and NOT popular is conflicting evidence - it's useful (or at least not very harmful) to some, perhaps limited by context or covariant factors that don't apply elsewhere.  

Whether your interpretation of a practice will get benefits for you should probably be determined by more specific analysis than "it worked for a small set of people in a very different environment, and never caught on universally".

Comment by Dagon on Set Theory Multiverse vs Mathematical Truth - Philosophical Discussion · 2024-11-01T21:13:54.418Z · LW · GW

[note: I dabble, at best.  This is likely wrong in some ways, so I look forward to corrections. ]

I find myself appealing to basic logical principles like the law of non-contradiction. Even if we can't currently prove certain axioms, doesn't this just reflect our epistemological limitations 

It's REALLY hard to distinguish between "unprovable" and "unknown truth value".  In fact, this is recursively hard - there are lots of things that are not proven, but it's not known if they're provable.  And so on.

Mathematical truth is very much about provability from axioms.  

rather than implying all axioms are equally "true"?

"true" is hard to apply to axioms.  There's the common-sense version of "can't find a counterexample, and have REALLY tried", which is unsatisfying but pretty effective for practical use.  The formal version is just not to use "true", but "chosen" for axioms.  Some are more USEFUL than others.  Some are more easily justified than others.  It's not clear how to know which (if any) are true, but that doesn't make them equally true.

Comment by Dagon on Prediction markets and Taxes · 2024-11-01T19:38:32.819Z · LW · GW

Tax (and other frictions) is pretty well-known as a market distortion.  There is some symmetry to the taxation - both sides of the market have the same friction (tax if win, no tax if loss), so this will make it less noticeable.  And in robust markets, there will be enough professional involvement (who get taxed on net winnings, not on wins ignoring losses), that the effect is probably small.
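
A toy illustration of that asymmetry for a casual bettor (tax rate and edge are my assumptions; real tax treatment varies by jurisdiction):

```python
# Friction sketch: winnings taxed, losses not deductible.
tax_rate = 0.30
p_win = 0.55                      # bettor's edge on a $1 even-odds contract
ev_pre_tax = p_win * 1 - (1 - p_win) * 1                     # +0.10
ev_after_tax = p_win * 1 * (1 - tax_rate) - (1 - p_win) * 1  # -0.065
print(ev_pre_tax, ev_after_tax)   # a profitable bet becomes a losing one
```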

Still, thanks for pointing it out - the effects are real, and important.

Comment by Dagon on Dentistry, Oral Surgeons, and the Inefficiency of Small Markets · 2024-11-01T19:26:34.945Z · LW · GW

[The Simpsons] "I'm Not Made of Stone!", says Krusty the Clown

Comment by Dagon on Dentistry, Oral Surgeons, and the Inefficiency of Small Markets · 2024-11-01T17:47:59.233Z · LW · GW

"there was a model that worked ok, and there weren't enough businesses savvy people who understood enough of the details to really scale the DSO model."

This applies to a lot of the enshittification of the world.  There used to be tons of small/family businesses, where "successful" for the owner was defined as "make a decent living, by working harder than average".  There was tons of value left on the table (or rather, lots of unmeasured surplus went to consumers).  When things started getting moneyballed - optimized financially and reframed in terms of capital and returns - that surplus got squeezed out.

Comment by Dagon on cubefox's Shortform · 2024-10-28T18:55:02.721Z · LW · GW

"coordination" is a very unspecific term, and one concrete form of coordination is being able to vote for cooperation

Ah.  I'd say that "voting" is pretty non-specific as well.  It's the enforcement mechanisms that bind behaviors after the votes are counted that are the actual coordination mechanisms.  Voting is the easy, un-impactful part; enforcing the result (socially as well as legally/violently) is what has impact.

Voting is well-known and OFTEN used as a mechanism for determining the most agreeable (or least likely to result in riots) result.  It's a key prerequisite to many coordination mechanisms.  But it isn't a complete mechanism on its own.  

It's often said that controlling the ballot is more important than controlling the vote.  The pre-voting process to figure out how to coordinate the options to choose among (and the pre-pre-voting decisions for preliminary votes) matter a whole lot.  
 

Comment by Dagon on cubefox's Shortform · 2024-10-28T18:12:11.446Z · LW · GW

Note that the game-theoretic "true" prisoner's dilemma is formulated so that coordination (both communication and outside-of-game considerations like reputation, side-payments, self-image, etc.) is ignored.  All of the non-Nash "solutions" work by introducing factors into the game that change it very significantly.
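
For concreteness, a minimal sketch with conventional payoff numbers (my choice; any numbers with the same ordering work), showing why defection is the only equilibrium of the bare game:

```python
# Standard one-shot PD payoffs for the row player: T=5 > R=3 > P=1 > S=0.
# Defect strictly dominates cooperate, so (D, D) is the unique Nash
# equilibrium absent the extra factors listed above.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

for other in ("C", "D"):
    best = max(("C", "D"), key=lambda me: payoff[(me, other)])
    print(f"best response to {other}: {best}")   # always D
```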

Voting (especially when defectors aren't penalized, just forced to cooperate) is a pretty big variation, and needs to be explicitly modeled in order to determine what equilibria are rational.

This is almost unrelated to real-world voting, which has SO MANY complicating and interfering factors that simple models just don't tell us much.

Comment by Dagon on A Logical Proof for the Emergence and Substrate Independence of Sentience · 2024-10-26T15:35:39.738Z · LW · GW

There are no sentient details going on that you wouldn't perceive.

I think we're spinning on an undefined term.  I'd bet there are LOTS of details that affect my perception in subtle and aggregate ways which I don't consciously identify.  But I have no clue which perceived or unperceived details add up to my conception of sentience, and even less do I understand yours.

Comment by Dagon on Against Job Boards: Human Capital and the Legibility Trap · 2024-10-25T17:34:10.401Z · LW · GW

This is pretty well-trodden ground.  The scalable hiring/matching paths are effective enough and cheap for the bulk of the bell curves (of positions and of candidates).  If you're pretty normal, or seeking a pretty normal candidate, that's fine.  

If you're exceptional, or if you're a startup looking for exceptional employees, then it's not going to work very well.  Tyler Cowen and Daniel Gross's book Talent: How to Identify Energizers, Creatives, and Winners Around the World makes this point very clearly.

It's kind of sad that it's so difficult to fire people nowadays, because it makes it very risky to hire someone that you're not pretty confident won't be terrible.  It changes the dynamic from "take risks, hire high-potential" to "avoid risks, hire the safe".  And this really sucks in a lot of ways.

Comment by Dagon on A Logical Proof for the Emergence and Substrate Independence of Sentience · 2024-10-25T17:23:38.269Z · LW · GW

It cannot be perpetual coincidence that our subjective experience always lines up with what our brain and body are doing.

It doesn't line up, for me at least. What it feels like is not clearly the same thing as others understand my communication of it to be.  Nor the reverse - it's unclear that how I interpret their reports tracks very well with their actual perception of their experiences.  And there are orders of magnitude more detail going on in my body (and even just in my brain) than I perceive, let alone that I communicate.

Until you operationally define "sentience", as in how do you detect and measure it, in the face of potential errors and lies of reported experiences, you should probably taboo it.  Circular arguments that "something is discussed, therefore that thing exists" are pretty weak, and don't show anything important about that something.

Comment by Dagon on Derivative AT a discontinuity · 2024-10-24T17:40:57.555Z · LW · GW

What’s its derivative?

The graph is nonstandard and misleading.  It should not have a vertical segment at 0; it should have an open circle at (0, 0) and a closed circle at (0, 1), showing that the lower line does not contain the point at x = 0 and the upper line does.

This makes the intuition pump a little easier.  The derivative at all nonzero x is 0.  The derivative AT ZERO is 0 to the right (as x increases), and undefined to the left (as x decreases).  There is no connection between 0 and 0 - epsilon, and therefore no slope.

You CAN use more complicated models to describe some features of it (hyperreals, or just limits), but those are modeling tools to answer different questions than the intuitive use of derivative (slope of a continuous curve).  It's probably not right to say that any of them are "true", without some caveats.
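
Spelled out with one-sided difference quotients (my formalization of the step function described above, with the closed dot at (0, 1)):

```latex
f(x) = \begin{cases} 0, & x < 0 \\ 1, & x \ge 0 \end{cases}
\qquad
\lim_{h \to 0^{+}} \frac{f(h) - f(0)}{h} = \lim_{h \to 0^{+}} \frac{1 - 1}{h} = 0,
\qquad
\lim_{h \to 0^{-}} \frac{f(h) - f(0)}{h} = \lim_{h \to 0^{-}} \frac{0 - 1}{h} = +\infty
```

The right-hand limit exists (and is 0); the left-hand limit diverges, so there is no two-sided derivative at 0.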

 

Comment by Dagon on Gorges of gender on a terrain of traits · 2024-10-22T16:36:54.746Z · LW · GW

I follow the idea of correlations and hidden (or just aggregate/synthetic) parameters.  I don't understand the movement of perception balls, nor the metaphor of gravity and friction.  I'd first assumed the W graph was an upside-down bimodal distribution (frequency) chart, but that doesn't track with how you're using it.

Can you clarify?

Comment by Dagon on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-21T22:04:58.208Z · LW · GW

PLEASE tell me there's a version that asks "is the answer 1/2", and that it currently has a price of 33%!

Comment by Dagon on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-21T18:10:17.057Z · LW · GW

[ bowing out after this - I'll read responses and perhaps update on them, but probably won't respond (until next time) ]
 

To see that the probability of Heads is not "either 1/2 or 1/3, depending on what reference class you choose
 

I disagree.  Very specifically, it's 1/2 if your reference class is "fair coin flips" and 1/3 if your reference class is "temporary, to-be-erased experience of victims with adversarial memory problems".  

If your reference class is "wakenings who are predicting what day it is", as in the muffin variant, then 1/3 is a bit easier to work with (though you'd need to specify payoffs to explain why she'd EVER eat the muffin, and then 1/2 becomes pretty easy too).  This is roughly equivalent to the non-memory-wiping wager: I'll flip a fair coin, you predict heads or tails; if it's heads the wager is $1, if it's tails the wager is $2.  The probability of tails is not 2/3, but you'd pay up to $0.50 to play, right?
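
A quick EV check of that wager, under my reading that you predict tails, win the stake when right, and lose it when wrong (the comment leaves the exact payoffs implicit):

```python
# EV sketch for the non-memory-wiped wager: predict tails, win the $2
# stake on tails, lose the $1 stake on heads.
p_tails = 0.5
ev = p_tails * 2 - (1 - p_tails) * 1
print(ev)   # 0.5: worth paying up to $0.50 to play, though P(tails) = 1/2
```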

Comment by Dagon on If far-UV is so great, why isn't it everywhere? · 2024-10-21T17:58:53.183Z · LW · GW

Ah, OK.  So the claim is that the isolated effect (one building, even an office or home with significant time-spent) is small, but the cumulative effect is nonlinear in some way (either threshold effect or higher-order-than-linear).  That IS a lot harder to measure, because it's distributed long-term statistical impact, rather than individually measurable impact.  I'd think that we have enough epidemiology knowledge to model the threshold or effect, but I've been disappointed on this front so many times that I'm certainly wrong.  

It, unfortunately, shares this difficulty with other large-scale interventions.  If it's very expensive, personally annoying (rationally or not), and impossible to show an overwhelming benefit, it's probably not going to happen.  And IMO the feasibility of its benefits is probably overstated.

Comment by Dagon on If far-UV is so great, why isn't it everywhere? · 2024-10-19T21:36:01.345Z · LW · GW

My very naive baseline for questions like this is "large effects are easily measured", and the contrapositive "if it's hard to measure, the effect is small".  Can you explain why this isn't the case for far-UV? Also, what are the reasons there doesn't seem to be much ground-up interest?  Why aren't companies and homeowners installing it in such numbers that it becomes standard?

Comment by Dagon on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-19T21:27:28.625Z · LW · GW

probabilities do not depend on the purpose. 

I think this is a restatement of the crux.  OF COURSE the model chosen depends on the purpose of the model.  For probabilities, the choice of reference class for a given prediction/measurement is key.  For Sleeping Beauty specifically, the choice of whether an experientially-irrelevant wakening (which is immediately erased and has no impact) is distinct from another is a modeling choice.

Either choice of probability model can answer either wagering question, simply by applying the weights to the payoffs if they're not already part of the probability.

Comment by Dagon on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-19T15:33:52.827Z · LW · GW

So how do you actually use probability to make decisions?

I think about what model fits the needs, roughly multiply payouts by probability estimates, then do whatever feels right in the moment.

I’m not sure that resolves any of these questions, since choice of model for different purposes is the main crux.

Comment by Dagon on Is there a known method to find others who came across the same potential infohazard without spoiling it to the public? · 2024-10-17T22:35:13.647Z · LW · GW

How can these isolated, silent individuals find the others without going public about it?

They can't.  Literally any change in behavior is (a bit of) Bayesian evidence for their beliefs.  More importantly, if they believe it's that dangerous and should be left unknown for as long as possible, they probably should not try to find each other.  Just burn your work and make it impossible to disclose (deceptively, by acquiring syphilis or something).

In more sane belief-sets, there are probably no discoveries that are both foreseeably dangerous and possible to delay by more than a few years.  Someone in this situation should focus on making trusted people aware earlier than untrusted ones will discover it independently.

Comment by Dagon on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-16T17:47:26.974Z · LW · GW

I have read and participated in many of these debates, and it continually frustrates me that people use the word "probability" AS IF it were objective and a property of the territory, when your Bayesian tenet, "Probability is a property of the map (agent's beliefs), not the territory (environment)", is binding in every case I can think of.  I'm actually agnostic on whether some aspects of the universe are truly unknowable by any agent in the universe, and even more so on whether that means "randomness is inherent" or "randomness is a modeling tool".  Yes, this means I'm agnostic on MWI vs Copenhagen, as I can't define "true" on that level (though I generally use MWI for reasoning, as I find it easier - that framing helps me remember that it's a modeling choice, not a fact about the universe(s)).

In practice, probability is a modeling and prediction tool, and works pretty much the same for all kinds of uncertainty: contingent (which logically-allowed way does this universe behave), indexical (which set of possible experiences in this universe am I having) and logical (things that must be so but I don't know which way).  There are probably edge cases where the difference between these matter, but I don't know of any that I expect to be resolved by foreseeable humans or our creations.

My pretty strong belief is that 1/2 is easier to explain and work with - the coin is fair and Beauty has no new information.  And that 1/3 is justified if you are predicting the "weight" of experience - the fact that tails will be experienced twice as often.  But mostly I'm rather sure that anyone who believes that their preference is the right model is in the wrong (on that part of the question).
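
A Monte-Carlo sketch of the two reference classes (standard Sleeping Beauty counting: one awakening on heads, two on tails):

```python
# Per coin flip, heads comes up half the time; per awakening, only a
# third of awakenings follow heads.
import random

n = 100_000
heads_flips = heads_awakenings = total_awakenings = 0
for _ in range(n):
    heads = random.random() < 0.5
    heads_flips += heads
    total_awakenings += 1 if heads else 2   # heads: 1 awakening; tails: 2
    heads_awakenings += heads               # the single heads awakening
print(heads_flips / n)                      # ~0.5:   reference class "flips"
print(heads_awakenings / total_awakenings)  # ~0.333: reference class "awakenings"
```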

They're "doing epistemology wrong" no more than you.  Thinking either choice is best is justified.  Thinking the other choice is wrong is itself wrong.

Comment by Dagon on LW resources on childhood experiences? · 2024-10-14T22:51:35.970Z · LW · GW

Upvoted, but I worry that it's not a good fit for LessWrong.  Much of Social Dark Matter has pretty significant reasons for being dark, and LW is a public forum without much prior restraint on speech.  For many of the same reasons we avoid politics (see "Politics is the Mind-Killer", summarized as "some otherwise-rational people go funny in the head on certain topics"), we should be very careful about making personal trauma, and reactions to it, a very visible topic here.

I very much hope you get a few good pointers, and that there aren't many posts on LW on the topic.

Comment by Dagon on Prices are Bounties · 2024-10-13T02:43:17.159Z · LW · GW

Short-term gasoline availability after a natural disaster is a good example.  Very high prices do not actually lead to more supply, because transportation constraints are binding, and the road crews don't get any of the increase in prices.  In a model world it works: the gasoline vendors pay some of their earnings to the road crews to prioritize their (very valuable) goods.  In the real world, the supply chain is too long and interconnected for price signals to change much in terms of availability.

Over longer timeframes, and for more normal goods and situations, the model is very powerful, and price fixing is extremely harmful.  I'm not supporting rent control in any fashion.  In fact, I'm not exactly supporting gasoline price limits in an emergency, just pointing out that the usual arguments for price flexibility may not apply.

For even more clarity: I don't think the line of argument (high prices motivate supply) is valid for this particular case, but I do NOT mean to say that price-fixing is actually justified.  It also doesn't increase supply, and it imposes arbitrary government involvement in private affairs, in a way that usually outlasts the emergency.  Slippery-slope arguments are ALSO somewhat weak, of course - I don't know of an obvious killer argument in either direction.  And that's mostly my point with this comment.


Comment by Dagon on Prices are Bounties · 2024-10-12T17:53:03.724Z · LW · GW

[epistemic status: somewhere between a steelman of the complaints and a recognition that the theory is correct, but there are a lot of details that matter and are ignored in this piece.]

This is true in theory, but for MANY goods and services, supply is artificially limited by regulations and restrictions on who and how something can be provided, or by emergency conditions that block travel and transport.  When these restrictions are controlling (the main reason that marginal supply is slow/difficult), price signals can't actually get used, and don't change behavior.  This is especially true for short-term shocks, when the price isn't expected to last long enough to pay for investments.  

It's perfectly reasonable to be angry at the sheriff who spends public funds on a bounty that doesn't change anything (say, the guy is already in custody, and the bounty is just a giveaway to the lucky person who has him).  

Comment by Dagon on Most arguments for AI Doom are either bad or weak · 2024-10-12T17:05:55.535Z · LW · GW

Over what timeframe?  2-20% seems a reasonable range to me, and I would not call it "very low".  I'm not sure there is a true consensus, even around the LW frequent posters, but maybe I'm wrong and it is very low in some circles, though it's not in the group I watch most.   It seems plenty high to motivate behaviors or actions you see as influencing it.

Comment by Dagon on An argument that consequentialism is incomplete · 2024-10-07T21:49:11.365Z · LW · GW

Quite possibly, but without SOME framework of evaluating wishes, it's hard to know which wishes (even of oneself) to support and which to fight/deprioritize.

Humans (or at least this one) often have desires or ideas that aren't, when considered, actually good ideas.  Also, humans (again, at least this one) have conflicting desires, only a subset of which CAN be pursued.  

It's not perfect, and it doesn't work when extended too far into the tails (because nothing does), but consequentialism is one of the better options for judging one's desires and picking which to pursue.

Comment by Dagon on sarahconstantin's Shortform · 2024-10-07T19:42:50.608Z · LW · GW

Thank you, this is interesting and important.  I worry that it overstates similarity of different points on a spectrum, though.

in a certain sense, you are doing the exact same thing as the more overtly irrational person, just hiding it better!

In a certain sense, yes.  In other, critical senses, no.  This is a case where quantitative differences are big enough to be qualitative.  When someone is clinically delusional, there are a few things which distinguish it from the more common wrong ideas.  Among them, the inability to shut up about it when it's not relevant, and the large negative impact on relationships and daily life.  For many many purposes, "hiding it better" is the distinction that matters.

I fully agree that "He's not wrong but he's still crazy" is valid (though I'd usually use less-direct phrasing).  It's pretty rare that "this sounds like a classic crazy-person thought, but I still separately have to check whether it's true" happens to me, but it's definitely not never.

Comment by Dagon on An argument that consequentialism is incomplete · 2024-10-07T18:14:18.232Z · LW · GW

This may be a complaint about legibilism, not specifically consequentialism.  Gödel was pretty clear - a formal system (strong enough to do arithmetic) is either incomplete or inconsistent.  Any moral or decision system that demands that everything important about a decision be clear and well-understood is going to have similar problems.  Your TRUE reasons for a lot of things are not accessible, so you will look for legible reasons to do what you want, and you will find yourself a rationalizing agent, rather than a rational one.

That said, consequentialism is still a useful framework for evaluating how closely your analytic self matches with your acting self.  It's not going to be perfect, but you can choose to get closer, and you can get better at understanding which consequences actually matter to you.

Climbing a mountain has a lot of consequences that you didn't mention, but probably should consider.  It connects you to people in new ways.  It gives you interesting stories to tell at parties.  It's a framework for improving your body in various ways.  If you die, it lets you serve as a warning to others.  It changes your self-image (honestly, this one may be the most important impact).  

Comment by Dagon on ektimo's Shortform · 2024-10-05T16:00:08.584Z · LW · GW

We can be virtually certain that 2+2=4 based on priors.

I don't understand this model.  For me, 2+2=4 is an abstract analytic concept that is outside of Bayesian probability.  For others, it may be "just" a probability, about which they might be virtually certain, but it won't be on priors - it'll be on mountains of evidence and literally zero counterevidence (presumably because every experience that contradicts it gets re-framed as having a different cause).

There's no way to update on evidence outside of your light cone, let alone on theoretical other universes or containing universes.  Because there's no way to GET evidence from them.