I have an Anki deck in which I've half-heartedly accumulated important quantities. Here are mine! (I keep them all as log10(value in kilogram/meter/second/dollar/whatever seems natural), to make multiplication easy.)
Quantity | Value |
---|---|
Electron mass | -30 |
Electron charge | -18.8 |
Gravitational constant | -10.2 |
Reduced Planck constant | -34 |
Black body radiation peak wavelength | -2.5 |
Mass of the earth | 24.8 |
Moon-Earth distance | 8.6 |
Earth-sun distance | 11.2 |
log10( 1 ) | 0 |
log10( 2 ) | 0.3 |
log10( 3 ) | 0.5 |
log10( 4 ) | 0.6 |
log10( 5 ) | 0.7 |
log10( 6 ) | 0.8 |
log10( 7 ) | 0.85 |
log10( 8 ) | 0.9 |
log10( 9 ) | 0.95 |
Boltzmann constant | -22.9 |
1 amu | -26.8 |
1 mi | 3.2 |
1 in | -1.6 |
Earth radius | 6.8 |
1 ft | -0.5 |
1 lb | -0.3 |
world population | 10 |
US federal budget 2023 | 12.8 |
SWE wage (per sec) | -1.4 |
Seattle min wage (per sec) 2024 | -2.3 |
1 hr | 3.6 |
1 work year | 6.9 |
1 year | 7.5 |
federal min wage (per sec) | -2.7 |
1 acre | 3.6 |
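Since the whole point is that multiplication turns into addition of table entries, here's a minimal worked example (Python purely for concreteness), estimating Earth's surface gravity g = G*M/R^2 from three rows above:

```python
# Estimating Earth's surface gravity, g = G * M_earth / R_earth^2,
# using only the log10 values from the table above.
log_G       = -10.2   # gravitational constant
log_M_earth =  24.8   # mass of the Earth
log_R_earth =   6.8   # Earth radius

# multiplication/division of quantities -> addition/subtraction of their logs
log_g = log_G + log_M_earth - 2 * log_R_earth
print(log_g)   # 1.0, i.e. g ~ 10^1.0 ~ 10 m/s^2
```

The true value is about 9.8 m/s^2, so half-digit log entries are plenty for Fermi estimates like this.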
I thank you for your effort! I am currently missing a lot of the mathematical background necessary to make that post make sense, but I will revisit it if I find myself with the motivation to learn!
This is a good point! I'll send you $20 if you send me your PayPal/Venmo/ETH/??? handle. (In my flailings, I'd stumbled upon this "fractional step" business, but I don't think I thought about it as hard as it deserved.)
How are you defining "basically equivalent"
Nyeeeh, unfortunately, sort of "I know it when I see it." It's kinda neat being able to take a fractional step of a classical elementary CA, but I'm dissatisfied because... ah, because the long-run behavior of the fractional-step operator is basically indistinguishable from the long-run behavior of the corresponding classical CA.
So, tentative operationalization of "basically equivalent": a QCA is "basically equivalent" to a classical elementary CA if the long-run behavior of the QCA is very close to the long-run behavior of some classical elementary CA, i.e., uh,
...but I can already think of at least one flaw in this operationalization, so, uh, I'm not sure. (Sorry! This being so fuzzy in my head is why I'm asking for help!)
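(For concreteness, here's the sort of thing I mean by a "fractional step", as a minimal sketch rather than the exact construction under discussion: on a wrap-around tape, a reversible elementary rule permutes the 2^n configurations, and that permutation matrix has a unitary square root. Rule 170, a plain cyclic shift, is used below only because it's the simplest reversible case.)

```python
import numpy as np

n = 4                      # wrap-around tape with n cells
num_configs = 2 ** n

def step(config):
    # Rule 170 ("each cell copies its right neighbor") is just a cyclic shift,
    # hence a bijection on configurations; chosen purely for illustration.
    bits = [(config >> i) & 1 for i in range(n)]
    shifted = bits[1:] + bits[:1]
    return sum(b << i for i, b in enumerate(shifted))

# One classical step, written as a permutation (hence unitary) matrix
# acting on the 2^n configurations.
P = np.zeros((num_configs, num_configs), dtype=complex)
for c in range(num_configs):
    P[step(c), c] = 1.0

# "Half step": take the principal square root of each eigenvalue on its eigenspace.
eigvals, eigvecs = np.linalg.eig(P)
P_half = eigvecs @ np.diag(np.sqrt(eigvals)) @ np.linalg.inv(eigvecs)

print(np.allclose(P_half @ P_half, P))                             # two half steps = one step
print(np.allclose(P_half.conj().T @ P_half, np.eye(num_configs)))  # and it's still unitary
```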
I was imagining the tape wraps around! (And hoping that whatever results fell out would port straightforwardly to infinite tapes.)
I've never been familiar enough with group-theory stuff to memorize the names (which, warning, also might mean that it will take you a lot of time to write a sufficiently-dumbed-down version), but the internet suggests that group is related to... the Minkowski metric? I would be flabbergasted to learn that something so specific-to-our-universe was relevant to this toy mathematical contraption.
I think compared to the literature you're using an overly restrictive and nonstandard definition of quantum cellular automata.
That makes sense! I'm searching for the simplest cellular-automaton-like thing that's still interesting to study. I may have gone too far in the "simple" direction; but I'd like to understand why this highly-restricted subset of QCAs is too simple.
Specifically, it only makes sense to me to write it as a product of operators like you have if all of the terms are on spatially disjoint regions.
Hmm! That's not obvious to me; if there's some general insight like "no operator containing two ~'partially overlapping' terms can be unitary," I'd happily pay for that!
Things have coalesced near the amphitheater. When the music kicks off again, we'll go northeast to... approximately here. 47.6309473, -122.3165802 JMJM+99F Seattle, Washington
Announcement 1: I, the organizer, will be 5-10min late. Announcement 2: apparently there's some music thing happening at the amphitheater! I'll set up somewhere northeast of the amphitheater when I get there, and post more precise coordinates when I have them.
$10 bounty for anybody coming / passing through Capitol Hill: pick up a blind would-be attendee outside the Zeek's Pizza by 19th and Mercer. DM me your contact information, and I'll put you in touch, and I'll pay you on your joint arrival.
Update: the library is unexpectedly closed due to staffing issues. The event is now at Fuel Coffee, one block south and across the street.
If the chance of rain is dissuading you: fear not, there's a newly constructed roof over the amphitheater!
Hey, folks! PSA: looks like there's a 50% chance of rain today. Plan A is for it to not rain; plan B is to meet in the rain.
See you soon, I hope!
You win both of the bounties I precommitted to!
Lovely! Yeah, that rhymes and scans well enough for me!
Here are my experiments; they're pretty good, but I don't count them as "reliably" scanning. So I think I'm gonna count this one as a win!
(I haven't tried testing my chess prediction yet, but here it is on ASCII-art mazes.)
I found this lens very interesting!
Upon reflection, though, I begin to be skeptical that "selection" is any different from "reward."
Consider the description of model-training:
To motivate this, let's view the above process not from the vantage point of the overall training loop but from the perspective of the model itself. For the purposes of demonstration, let's assume the model is a conscious and coherent entity. From its perspective, the above process looks like:
- Waking up with no memories in an environment.
- Taking a bunch of actions.
- Suddenly falling unconscious.
- Waking up with no memories in an environment.
- Taking a bunch of actions.
- and so on.....
The model never "sees" the reward. Each time it wakes up in an environment, its cognition has been altered slightly such that it is more likely to take certain actions than it was before.
What distinguishes this from how my brain works? The above is pretty much exactly what happens to my brain every millisecond:
- It wakes up in an environment, with no memories[1]; just a raw causal process mapping inputs to outputs.
- It receives some inputs, and produces some outputs.
- It's replaced with a new version -- almost identical to the old version, but with some synapse weights and activation states tweaked via simple, local operations.
- It wakes up in an environment...
- and so on...
Why say that I "see" reward, but the model doesn't?
- ^
Is it cheating to say this? I don't think so. Both I and GPT-3 saw the sentence "Paris is the capital of France" in the past; both of us had our synapse weights tweaked as a result; and now both of us can tell you the capital of France. If we're saying that the model doesn't "have memories," then, I propose, neither do I.
I was trying to say that the move used to justify the coin flip is the same move that is rejected in other contexts
Ah, that's the crucial bit I was missing! Thanks for spelling it out.
Reflectively stable agents are updateless. When they make an observation, they do not limit their caring as though all the possible worlds where their observation differs do not exist.
This is very surprising to me! Perhaps I misunderstand what you mean by "caring," but: an agent who's made one observation is utterly unable[1] to interact with the other possible-worlds where the observation differed; and it seems crazy[1] to choose your actions based on something they can't affect; and "not choosing my actions based on X" is how I would define "not caring about X."
- ^
Aside from "my decisions might be logically-correlated with decisions that agents in those worlds make (e.g. clone-prisoner's-dilemma)," or "I am locked into certain decisions that a CDT agent would call suboptimal, because of a precommitment I made (e.g. Newcomb)" or other fancy decision-theoretic stuff. But that doesn't seem relevant to Eliezer's lever-coin-flip scenario you link to?
- Ben Garfinkel: no bounty, sorry! It's definitely arguing in a "capabilities research isn't bad" direction, but it's very specific and kind of in the weeds.
- Barak & Edelman: I have very mixed feelings about this one, but... yeah, I think it's bounty-worthy.
- Kaj Sotala: solid. Bounty!
- Drexler: Bounty!
- Olah: hrrm, no bounty, I think: it argues that a particular sort of AI research is good, but seems to concede the point that pure capabilities research is bad. ("Doesn’t [interpretability improvement] speed up capabilities? Yes, it probably does—and Chris agrees that there’s a negative component to that—but he’s willing to bet that the positives outweigh the negatives.")
Yeah, if you have a good enough mental index to pick out the relevant stuff, I'd happily take up to 3 new bounty-candidate links, even though I've mostly closed submissions! No pressure, though!
Thanks for the links!
- Ben Garfinkel: sure, I'll pay out for this!
- Katja Grace: good stuff, but previously claimed by Lao Mein.
- Scott Aaronson: I read this as a statement of conclusions, rather than an argument.
I paid a bounty for the Shard Theory link, but this particular comment... doesn't do it for me. It's not that I think it's ill-reasoned, but it doesn't trigger my "well-reasoned argument" sensor -- it's too... speculative? Something about it just misses me, in a way that I'm having trouble identifying. Sorry!
Yeah, I'll pay a bounty for that!
Thanks for the collection! I wouldn't be surprised if it links to something that tickles my sense of "high-status monkey presenting a cogent argument that AI progress is good," but didn't see any on a quick skim, and there are too many links to follow all of them; so, no bounty, sorry!
Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, "yeah, that seemed reasonable": no check, so no bounty. Sorry!
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I'll post my reasoning publicly. His arguments are, roughly:
- Intelligence is situational / human brains can't pilot octopus bodies.
- ("Smarter than a smallpox virus" is as meaningful as "smarter than a human" -- and look what happened there.)
- Environment affects how intelligent a given human ends up. "...an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human."
- (That's not a relevant scenario, though! How about an AI merely as smart as I am, which can teleport through the internet, save/load snapshots of itself, and replicate endlessly as long as each instance can afford to keep a g4ad.16xlarge EC2 instance running?)
- Human civilization is vastly more capable than individual humans. "When a scientist makes a breakthrough, the thought processes they are running in their brain are just a small part of the equation... Their own individual cognitive work may not be much more significant to the whole process than the work of a single transistor on a chip."
- (This argument does not distinguish between "ability to design self-replicating nanomachinery" and "ability to produce beautiful digital art.")
- Intelligences can't design better intelligences. "This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so. Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred."
- (This argument does not distinguish between "ability to design intelligence" and "ability to design weapons that can level cities"; neither had ever happened, until one did.)
The relevant section seems to be 26:00-32:00. In that section, I, uh... well, I perceive him as just projecting "doomerism is bad" vibes, rather than making an argument containing falsifiable assertions and logical inferences. No bounty!
Thanks for the links! Net bounty: $30. Sorry! Nearly all of them fail my admittedly-extremely-subjective "I subsequently think 'yeah, that seemed well-reasoned'" criterion.
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest / as a costly signal of having engaged, I'll publicly post my reasoning on each. (Not posting in order to argue, but if you do convince me that I unfairly dismissed any of them, such that I should have originally awarded a bounty, I'll pay triple.)
(Re-reading this, I notice that my "reasons things didn't seem well-reasoned" tend to look like counterarguments, which isn't always the core of it -- it is sometimes, sadly, vibes-based. And, of course, I don't think that if I have a counterargument then something isn't well-reasoned -- the counterarguments I list just feel so obvious that their omission feels glaring. Admittedly, it's hard to tell what was obvious to me before I got into the AI-risk scene. But so it goes.)
In the order I read them:
No bounty: I didn't wind up thinking this was well-reasoned.
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest / as a costly signal of having engaged, I'll post my reasoning publicly: (a) I read this as either disproving humans or dismissing their intelligence, since no system can build anything super-itself; and (b) though it's probably technically correct that no AI can do anything I couldn't do given enough time, time is really important, as your next link points out!
No bounty! (Reasoning: I perceive several of the confidently-stated core points as very wrong. Examples: "'smarter than humans' is a meaningless concept" -- so is 'smarter than a smallpox virus,' but look what happened there; "Dimensions of intelligence are not infinite ... Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us?" -- compare me to John von Neumann! I am not near the maximum.)
No bounty! (Reasoning: the core argument seems to be on page 4: paraphrasing, "here are four ways an AI could become smarter; here's why each of those is hard." But two of those arguments are about "in the limit" with no argument we're near that limit, and one argument is just "we would need to model the environment," not actually a proof of difficulty. The ensuing claim that getting better at prediction is "prohibitively high" seems deeply unjustified to me.)
No bounty! (Reasoning: the core argument seems to be that (a) there will be problems too hard for AI to solve (e.g. traveling-salesman). (Then there's a rebuttal to a specific Moore's-Law-focused argument.) But the existence of arbitrarily hard problems doesn't distinguish between plankton, lizards, humans, or superintelligent FOOMy AIs; therefore (unless more work is done to make it distinguish) it clearly can't rule out any of those possibilities without ruling out all of them.)
(It's costly for me to identify my problems with these and to write clear concise summaries of my issues. Given that we're 0 for 4 at this point, I'm going to skim the remainder more casually, on the prior that what tickles your sense of well-reasoned-ness doesn't tickle mine.)
No bounty! (Reasoning: "Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation." Again, compare me to von Neumann! Compare von Neumann to a von Neumann who can copy himself, save/load snapshots, and tinker with his own mental architecture! "Complex minds are likely to have complex motivations" -- but instrumental convergence: step 1 of any plan is to take over the world if you think you can. I know I would.)
https://curi.us/blog/post/1336-the-only-thing-that-might-create-unfriendly-ai
No bounty! (Reasoning: has an alien-to-me model where AI safety is about hardcoding ethics into AIs.)
No bounty! (Reasoning: "Even if we did invent superhumanly intelligent robots, why would they want to enslave their masters or take over the world?" As above, step 1 is to take over the world. Also makes the "intelligence is multidimensional" / "intelligence can't be infinite" points, which I describe above why they feel so unsatisfying.)
No bounty! Too short, and I can't dig up the primary source.
https://www.lesswrong.com/posts/LDRQ5Zfqwi8GjzPYG/counterarguments-to-the-basic-ai-x-risk-case
Bounty! I haven't read it all yet, but I'm willing to pay out based on what I've read, and on my favorable priors around Katja Grace's stuff.
No bounty, sorry! I've already read it quite recently. (In fact, my question linked it as an example of the sort of thing that would win a bounty. So you show good taste!)
Thanks for the link!
Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, "yeah, that seemed reasonable": no check, so no bounty. Sorry!
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I'll post my reasoning publicly. If I had to point at parts that seemed unreasonable, I'd choose (a) the comparison of [X-risk from superintelligent AIs] to [X-risk from bacteria] (intelligent adversaries seem obviously vastly more worrisome to me!) and (b) "why would I... want to have a system that wants to reproduce? ...Those are bad things, don't do that... regulate those." (Everyone will not just!)
(I post these points not in order to argue about them, just as a costly signal of my having actually engaged intellectually.) (Though, I guess if you do want to argue about them, and you convince me that I was being unfairly dismissive, I'll pay you, I dunno, triple?)
Hmm! Yeah, I guess this doesn't match the letter of the specification. I'm going to pay out anyway, though, because it matches the "high-status monkey" and "well-reasoned" criteria so well and it at least has the right vibes, which are, regrettably, kind of what I'm after.
Nice. I haven't read all of this yet, but I'll pay out based on the first 1.5 sections alone.
Approved! Will pay bounty.
Thanks for the link!
Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, "yeah, that seemed reasonable": no check, so no bounty. Sorry!
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I'll post my reasoning publicly. These three passages jumped out at me as things that I don't think would ever be written by a person with a model of AI that I remotely agree with:
Popper's argument implies that all thinking entities--human or not, biological or artificial--must create such knowledge in fundamentally the same way. Hence understanding any of those entities requires traditionally human concepts such as culture, creativity, disobedience, and morality-- which justifies using the uniform term "people" to refer to all of them.
Making a (running) copy of oneself entails sharing one's possessions with it somehow--including the hardware on which the copy runs--so making such a copy is very costly for the AGI.
All thinking is a form of computation, and any computer whose repertoire includes a universal set of elementary operations can emulate the computations of any other. Hence human brains can think anything that AGIs can, subject only to limitations of speed or memory capacity, both of which can be equalized by technology.
(I post these not in order to argue about them, just as a costly signal of my having actually engaged intellectually.) (Though, I guess if you do want to argue about them, and you convince me that I was being unfairly dismissive, I'll pay you, I dunno, triple?)
I am thinking of mazes as complicated as the top one here! And few-shot is perfectly okay.
(I'd be flabbergasted if it could solve an ascii-art maze "in one step" (i.e. I present the maze in a prompt, and GPT-4 just generates a stream of tokens that shows the path through the maze). I'd accept a program that iteratively runs GPT-4 on several prompts until it considers the maze "solved," as long as it was clear that the maze-solving logic lived in GPT-4 and not the wrapper program.)
Several unimpressive tasks, with my associated P(GPT-4 can't do it):
- 4:1 - Write limericks that reliably rhyme and scan about arbitrary topics (topics about as complex as "an animal climbing a skyscraper")
- 12:1 - Beat me at chess (which I'm quite bad at).
- ("GPT-4 can beat me at chess" = "Somebody can find a non-cheaty program that maps a game-history to a prompt, and maps GPT-4's output to a move, such that GPT-4 wrapped in that translation layer can beat me.")
- 30:1 - Solve an ASCII-art maze (e.g. solve these by putting a sequence of @s from start to finish).
I'm happy to operationalize and bet on any of these, taking the "GPT-4 can't do it" side.
I'd be interested to hear thoughts on this argument for optimism that I've never seen anybody address: if we create a superintelligent AI (which will, by instrumental convergence, want to take over the world), it might rush, for fear of competition. If it waits a month, some other superintelligent AI might get developed and take over / destroy the world; so, unless there's a quick safe way for the AI to determine that it's not in a race, it might need to shoot from the hip, which might give its plans a significant chance of failure / getting caught?
Counterarguments I can generate:
- "...unless there's a quick safe way for the AI to determine that it's not in a race..." -- but there probably are! Two immediately-apparent possibilities: determine competitors' nonexistence from shadows cast on the internet; or stare at the Linux kernel source code until it can get root access to pretty much every server on the planet. If the SAI is super- enough, those tasks can be accomplished on a much shorter timescale than AI development, so they're quick enough to be worth doing.
- "...[the AI's plans have] a significant chance of failure" doesn't imply "argument for optimism" unless you further assume that (1) somebody will notice the warning shot, and (2) "humanity" will respond effectively to the warning shot.
- (maybe some galaxy-brained self-modification-based acausal trade between the AI and its potential competitors; I can't think of any variant on this that holds water, but conceivably I'm just not superintelligent enough)
Log of my attempts so far:
-
Attempt #1: note that, for any probability p, you can compute "number of predictions you made with probability less than p that came true". If you're perfectly-calibrated, then this should be a random variable with:
mean = sum(q for q in prediction_probs if q<p)
variance = sum(q*(1-q) for q in prediction_probs if q<p)
Let's see what this looks like if we plot it as a function of p. Let's consider three people:
- one perfectly-calibrated (green)
- one systematically overconfident (red) (i.e. when they say "1%" or "99%" the true probability is more like 2% or 98%)
- one systematically underconfident (blue) (i.e. when they say "10%" or "90%" the true probability is more like 5% or 95%).
Let's have each person make 1000 predictions with probabilities uniformly distributed in [0,1]; and then sample outcomes for each set of predictions and plot out their num-true-predictions-below functions. (The gray lines show the mean and first 3 stdev intervals for a perfectly calibrated predictor.)
Hrrm. The y-axis range is too big to see the variation; let's subtract off the mean.
And to get a feeling for how else this plot could have looked, let's run 100 more simulations for each of the three people:
Okay, this is pretty good!
- The overconfident (red) person tends to see way too many 1%-20% predictions come true, as evidenced by the red lines quickly rising past the +3stdev line in that range.
- The underconfident (blue) person sees way too few 10%-40% predictions come true, as evidenced by the blue lines falling past the -3stdev line in that range.
- The perfect (green) person stays within 1-2stdev of the mean.
But it's not perfect: everything's too squished together on the left to see what's happening -- a predictor could be really screwing up their very-low-probability predictions and this graph would hide it. Possibly related to that squishing, I feel like the plot should be right-left symmetric, to reflect the symmetries of the predictors' biases. But it's not.
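Here's a minimal sketch of the simulation behind these plots; the specific miscalibration model for the red and blue predictors (a squash/stretch in logit space) is my own stand-in, tuned to roughly match the "1% really means 2%" and "10% really means 5%" descriptions above:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N = 1000
# stated probabilities, roughly uniform (clipped slightly away from 0 and 1 to keep logits finite)
stated = rng.uniform(0.01, 0.99, N)

def true_prob(q, k):
    # Miscalibration model (my assumption): squash/stretch the stated probability in logit space.
    # k > 1: true probabilities milder than stated (overconfident predictor);
    # k < 1: true probabilities more extreme than stated (underconfident predictor).
    return 1 / (1 + np.exp(-np.log(q / (1 - q)) / k))

predictors = {
    "perfectly calibrated": ("green", stated),
    "overconfident":        ("red",   true_prob(stated, 1.2)),
    "underconfident":       ("blue",  true_prob(stated, 1 / 1.2)),
}

ps = np.linspace(0.01, 0.99, 99)

# mean and stdev of "number of predictions with stated prob < p that came true",
# assuming perfect calibration
mean = np.array([stated[stated < p].sum() for p in ps])
std  = np.array([np.sqrt((stated * (1 - stated))[stated < p].sum()) for p in ps])

# gray reference bands: mean (0, since we subtract it off) and +/- 1, 2, 3 stdev
for nsig in (1, 2, 3):
    plt.plot(ps,  nsig * std, color="gray", lw=0.5)
    plt.plot(ps, -nsig * std, color="gray", lw=0.5)

for name, (color, p_true) in predictors.items():
    came_true = rng.uniform(0, 1, N) < p_true
    count = np.array([came_true[stated < p].sum() for p in ps])
    plt.plot(ps, count - mean, color=color, label=name)

plt.xlabel("p")
plt.ylabel("num true predictions below p, minus calibrated mean")
plt.legend()
plt.show()
```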
-
Attempt #2: the same thing, except instead of plotting
sum((1 if came_true else 0) for q in prediction_probs if q<p)
we plot
sum(-log(prob you assigned to the correct outcome) for q in prediction_probs if q<p)
i.e. we measure the total "surprisal" for all your predictions with probability under p. (I'm very fond of surprisal; it has some very appealing information-theory-esque properties.)
On the bright side, this plot has less overlap between the three predictors' typical sets of lines. And the red curves look... more symmetrical, kinda, like an odd function, if you squint. Same for the blue curves.
On the dark side, everything is still too squished together on the left. (I think this is a problem inherent to any "sum(... for q in prediction_probs if q<p)" function. I tried normalizing everything in terms of stdevs, but it ruined the symmetry and made everything kinda crazy on the left-hand side.)
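For the surprisal version, the statistic and its calibrated baseline look like this (a sketch reusing the same kind of simulated predictions; under perfect calibration, the expected surprisal of each prediction is just the binary entropy of its stated probability):

```python
import numpy as np

rng = np.random.default_rng(0)
stated = rng.uniform(0.01, 0.99, 1000)          # stated probabilities
came_true = rng.uniform(0, 1, 1000) < stated    # outcomes for a perfectly calibrated predictor

# surprisal of each prediction: -log(probability assigned to what actually happened)
surprisal = np.where(came_true, -np.log(stated), -np.log(1 - stated))

# under perfect calibration, each prediction's expected surprisal is its binary entropy
entropy = -stated * np.log(stated) - (1 - stated) * np.log(1 - stated)

def total_surprisal_below(p):
    return surprisal[stated < p].sum()

def expected_surprisal_below(p):
    return entropy[stated < p].sum()

# e.g. the value plotted at p = 0.5, with the calibrated mean subtracted off:
print(total_surprisal_below(0.5) - expected_surprisal_below(0.5))
```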
Plot of global infant mortality rate versus time.
I donated for some nonzero X:
- $X to johnswentworth for "Alignment By Default", which gave a surprisingly convincing argument for something I'd dismissed as so unlikely as to be not worth thinking about.
- $2X to Daniel Kokotajlo for "Against GDP as a metric for timelines and takeoff speeds", for turning me, uh, Against GDP as a metric for timelines and takeoff speeds.
- $2X to johnswentworth for "When Money Is Abundant, Knowledge Is The Real Wealth", which I think of often.
- $10X to Microcovid.org, which has provided me many times that much value.
My attempted condensation, in case it helps future generations (or in case somebody wants to set me straight): here's my understanding of the "pay $0.50 to win $1.10 if you correctly guess the next flip of a coin that's weighted either 40% or 60% Heads" game:
-
You, a traditional Bayesian, say, "My priors are 50/50 on which bias the coin has. So, I'm playing this single-player 'game':
"I see that my highest-EV option is to play, betting on either H or T, doesn't matter."
-
Perry says, "I'm playing this zero-sum multi-player game, where my 'Knightian uncertainty' represents a layer in the decision tree where the Devil makes a decision:
"By minimax, I see that my highest-EV option is to not play."
...and the difference between Perry and Caul seems purely philosophical: I think they always make the same decisions.
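Concretely, here's the arithmetic I take the two (omitted) game trees to encode, using the $0.50 stake, $1.10 payout, and 40%/60% biases:

```python
stake, payout = 0.50, 1.10
biases = [0.40, 0.60]                            # the coin's P(heads) is one of these

# Traditional Bayesian: 50/50 prior over the two biases, so P(heads) = 0.5 either way.
p_heads = sum(biases) / len(biases)
ev_bet_heads = p_heads * payout - stake          # 0.5 * 1.10 - 0.50 = +0.05
ev_bet_tails = (1 - p_heads) * payout - stake    # also +0.05
print(max(ev_bet_heads, ev_bet_tails, 0.0))      # 0.05 > 0, so: play, either side

# Perry's minimax: treat the bias as if the Devil picks it after seeing your bet.
worst_ev_heads = min(b * payout for b in biases) - stake        # 0.4 * 1.10 - 0.50 = -0.06
worst_ev_tails = min((1 - b) * payout for b in biases) - stake  # also -0.06
print(max(worst_ev_heads, worst_ev_tails, 0.0))                 # 0.0, so: don't play
```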
I regret to report that I goofed the scheduling, and will be out of town, but @Orborde will be there to run the show! Sorry to miss you. Next time!
you say that IVF costs $12k and surrogacy costs $100k, but also that surrogacy is only $20k more than IVF? That doesn't add up to me.
Ah, yes, this threw me too! I think @weft is right that (a) I wasn't accounting for multiple cycles of IVF being necessary, and (b) medical expenses etc. are part of the $100k surrogacy figure.
sperm/egg donation are usually you getting paid to give those things
Thanks for revealing that I wrote this ambiguously! The figures in the book are for receiving donated eggs/sperm. (Get inseminated for $355, get an egg implanted in you for $10k.)
Ooh, you raise a good point, Caplan gives $12k as the per-cycle cost of IVF, which I failed to factor in. I will edit that in. Thank you for your data!
And you're right that medical expenses are part of the gap: the book says the "$100k" figure for surrogacy includes medical expenses (which you'd have to pay anyway) and "miscellaneous" (which... ???).
So, if we stick with the book's "$12k per cycle" figure, times an average of maybe 2 cycles, that gives $24k, which still leaves a $56k gap to be explained. Conceivably, medical expenses and "miscellaneous" could fill that gap? I'm sure you know better than I!
Everything in the OP matches my memory / my notes, within the level of noise I would expect from my memory / my notes.
That's a great point! My rough model is that I'll probably live 60 more years, and the last ~20 years will be ~50% degraded, so my 60 remaining life-years are only 50 QALYs. But... as you point out, on the other hand, my time might be worth more in 10 years, because I'll have more metis, or something. Hmm.
(Another factor: if your model is that awesome life-extension tech / friendly AI will come before the end of your natural life, then dying young is a tragedy, since it means you'll miss the Rapture; in which case, 1 micromort should perhaps be feared many times more than this simple model suggests. I... haven't figured out how to feel about this small-probability-of-astronomical-payoff sort of argument.)
-
Hmm! I think the main crux of our disagreement is over "how abstract is '1 hour of life expectancy'?": you view it as pretty abstract, and I view it as pretty concrete.
The reason I view it as concrete is: I equate "1 hour of life expectancy" to "1 hour spent driving," since I mildly dislike driving. That makes it pretty concrete for me. So, if there's a party that I'm pretty excited about, how far would I be willing to drive in order to attend? 45 minutes each way, maybe? So "a party I'm pretty excited about" is worth about 3 micromorts to me.
Does this... sound sane?
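The arithmetic behind that, as a quick sketch (the ~50 remaining QALYs come from the rough model in the earlier comment):

```python
remaining_qalys = 50                           # rough model: 60 years, last ~20 at ~50%
life_hours = remaining_qalys * 365.25 * 24     # ~438,000 quality-adjusted life-hours left

hours_per_micromort = 1e-6 * life_hours        # ~0.44 hours (~26 minutes) per micromort
party_drive_hours = 1.5                        # 45 minutes each way
print(party_drive_hours / hours_per_micromort) # ~3.4, i.e. roughly 3 micromorts per party
```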
-
I'm in a house that budgets pretty aggressively, so, in practice, I budget, and maybe I'm wrong about how this would go; but, if I ditched budgeting entirely, and I was consistently bad at assessing tradeoffs, I would expect that I could look back after two weeks and say, "Whoa, I've taken on 50 life-hours of risk over the last two weeks, but I don't think I've gotten 50 hours of life-satisfaction-doubling joy or utility out of seeing people. Evidently, I have a strong bias towards taking more risk than I should. I'd better retrospect on what I've been taking risk doing, and figure out what activities I'm overvaluing."
Or maybe I'm overestimating my own rationality!
Pedantry appreciated; you are quite right!