Posts

Collection of Scientific and Other Classifications 2024-02-15T12:58:50.626Z
Conversation Visualizer 2023-12-31T01:18:01.424Z
Please Bet On My Quantified Self Decision Markets 2023-12-01T20:07:38.284Z
Self-Blinded L-Theanine RCT 2023-10-31T15:24:57.717Z
Examples of Low Status Fun 2023-10-10T23:19:26.212Z
Precision of Sets of Forecasts 2023-09-19T18:19:18.053Z
Have Attention Spans Been Declining? 2023-09-08T14:11:55.224Z
The Evidence for Question Decomposition is Weak 2023-08-28T15:46:31.529Z
If I Was An Eccentric Trillionaire 2023-08-09T07:56:46.259Z
Self-Blinded Caffeine RCT 2023-06-27T12:38:55.354Z
Properties of Good Textbooks 2023-05-07T08:38:05.243Z
Iqisa: A Library For Handling Forecasting Datasets 2023-04-14T15:16:55.726Z
Subscripts for Probabilities 2023-04-13T18:32:17.267Z
High Status Eschews Quantification of Performance 2023-03-19T22:14:16.523Z
Open & Welcome Thread — March 2023 2023-03-01T09:30:09.639Z
Open & Welcome Thread - December 2022 2022-12-04T15:06:00.579Z
Open & Welcome Thread - Oct 2022 2022-10-02T11:04:06.762Z
Turning Some Inconsistent Preferences into Consistent Ones 2022-07-18T18:40:02.243Z
What is Going On With CFAR? 2022-05-28T15:21:51.397Z
Range and Forecasting Accuracy 2022-05-27T18:47:44.315Z
scipy.optimize.curve_fit Is Awesome 2022-05-07T10:57:28.278Z
Brain-Computer Interfaces and AI Alignment 2021-08-28T19:48:52.614Z
Open & Welcome Thread – April 2021 2021-04-04T19:25:09.049Z
An Exploratory Toy AI Takeoff Model 2021-01-13T18:13:14.237Z
Cryonics Cost-Benefit Analysis 2020-08-03T17:30:42.307Z
"Do Nothing" utility function, 3½ years later? 2020-07-20T11:09:36.946Z
niplav's Shortform 2020-06-20T21:15:06.105Z

Comments

Comment by niplav on Cryonics p(success) estimates are only weakly associated with interest in pursuing cryonics in the LW 2023 Survey · 2024-03-27T10:44:25.264Z · LW · GW

The arguments about net-negative futures are in this section, especially here. I find the whole "revived as an emulation & forced to work" example pretty unconvincing.

Comment by niplav on Barefoot FAQ · 2024-03-27T08:21:44.076Z · LW · GW

I would worry about that if sharp objects, pointing upward, were often found on the ground. In my experience, such things are rare. I should and do, for safety, look around a bit at where I walk.

When I tried walking barefoot for a summer, the visible glass shards weren't a very big problem (I never stepped on any of them, though having to be on the lookout to avoid them was annoying), but I twice stepped on tiny glass shards (a few millimeters in length), which were also more difficult to remove afterwards. They didn't hurt as much, though.

Another annoying thing was that I always had to wash my feet when I came home.

Comment by niplav on Why don't singularitarians bet on the creation of AGI by buying stocks? · 2024-03-26T18:10:55.026Z · LW · GW

I'm curious, how did this turn out?

Comment by niplav on Habryka's Shortform Feed · 2024-03-24T23:59:42.215Z · LW · GW

One small idea: Have the ability to re-publish posts to allPosts or the front page after editing. This worked in the past, but no longer does (as I noticed recently when updating this post).

Comment by niplav on Malentropic Gizmo's Shortform · 2024-03-24T18:36:30.406Z · LW · GW

The titular short story from Luminous (Greg Egan, 1995).

Comment by niplav on niplav's Shortform · 2024-03-24T18:35:12.497Z · LW · GW

A minor bottleneck I've recently solved:

I found it pretty annoying to have to search for links (especially to Wikipedia) when trying to cite stuff. So I wrote a plugin for the editor I use to easily insert lines from a file (in this case, files with one link per line).

I don't think many other people are using the same editor, but I wanted to report that this has made writing with a lot of links much easier.

Comment by niplav on Cryonics Cost-Benefit Analysis · 2024-03-23T16:34:14.053Z · LW · GW

After improving the cost-benefit analysis in multiple small ways (e.g. adding more fleshed out x-risk concerns), it is actually optimal to sign up at age 50 if one has complete self-trust. If one doesn't have complete self-trust, signing up immediately is still optimal.

Comment by niplav on Why The Insects Scream · 2024-03-23T09:03:10.089Z · LW · GW

Since this article is getting quite a negative reaction, I want to register that I agree that insect suffering (if it exists) is horrendous and a crucial consideration which would change a lot of prioritization, and it seems plausible to me that insects do indeed matter.

About the Black Soldier Fly 1.3% figure: I think it'd be better to have a probability distribution over the quality of a typical second of a Black Soldier Fly's life. (But the Rethink Priorities welfare range investigation is still really cool, and I like it.)

Comment by niplav on Vernor Vinge, who coined the term "Technological Singularity", dies at 79 · 2024-03-21T23:53:59.518Z · LW · GW

And judging from the Alcor and Cryonics Institute logs, he was not cryopreserved :-/

Comment by niplav on Slim overview of work one could do to make AI go better (and a grab-bag of other career considerations) · 2024-03-21T08:20:12.613Z · LW · GW

Prevent sign flip and other near misses

The problem that we have with one proposed solution (adding a dummy utility function that highly disvalues a specific non-suffering thing) is that the resulting utility function is not reflectively stable.

So a theory of value formation, and especially of how to achieve vNM coherence (or coherence under whatever framework for rational preferences turns out to be the "correct" one), would be useful here. Then, during the process of value formation, humans can supervise decision points (i.e., in which direction to resolve a preference).

Comment by niplav on Open Thread Spring 2024 · 2024-03-20T16:03:41.329Z · LW · GW

Thanks! I know about Investopedia & often use GPT-4 for this kind of thing, but I prefer books because I can also read them offline.

Comment by niplav on Open Thread Spring 2024 · 2024-03-19T20:14:16.260Z · LW · GW

Does anyone recommend a good book on finance? I'd like to understand most of the terms used in this post, and then some, but the old repository doesn't have any suggestions.

Comment by niplav on niplav's Shortform · 2024-03-19T19:46:29.868Z · LW · GW

Paul Christiano: If the world continues to go well … If all that happens is that we build AI, and it just works the way that it would work in an efficient-market world, there’s no crazy turbulence, then the main change is, you shift from having … Currently two-thirds of GDP gets paid out roughly as income. I think if you have a transition to human labor being obsolete then you fall to roughly zero percent of GDP being paid as income, and all of it is paid out as returns on capital. From the perspective of a normal person, you either want to be benefiting from capital indirectly, like living in a state that uses capital to fund redistribution, or you just wanna have some savings. There’s a question of how you’d wanna … The market is not really anticipating AI being a huge thing over 10 or 20 years, so you might wanna further hedge and say … If you thought this was pretty likely, then you may want to take a bet against the market and say invest in stuff that’s gonna be valuable in those cases.

I think that, mostly, the very naive guess is not a crazy guess for how to do that. Investing more in tech companies. I am pretty optimistic about investing in semiconductor companies. Chip companies seem reasonable. A bunch of stuff that’s complementary to AI is going to become valuable, so natural resources get bid up. In an efficient-market world, the price of natural resources is one of the main things that benefits. As you make human labor really cheap, you just become limited on resources a lot. People who own Amazon presumably benefit a huge amount. People who run logistics, people who run manufacturing, etc. I think that generally just owning capital seems pretty good. Unfortunately, right now is not a great time to be investing, but still, I think that’s not a dominant consideration when determining how much you should save.

From an 80,000 Hours interview with Paul Christiano, 2018.

Comment by niplav on FUTARCHY NOW BABY · 2024-03-19T19:33:27.000Z · LW · GW

Maybe it's worth hiring some people from Dickgirls research (now renamed to clearmind?) to do investing, especially in risk-neutral contexts like philanthropic investing. @sapphire.

Comment by niplav on [deleted post] 2024-03-19T07:40:53.945Z

Well, you wrote that "concept origin: planecrash". Which of the concepts you mentioned originated from planecrash?

Measuring charisma has been done by psychologists using the Conger-Kanungo scale, and the original paper analyses the metric using factor analysis.

Comment by niplav on Polyphasic Sleep Seed Study: Reprise · 2024-03-18T23:23:42.138Z · LW · GW

I didn't expect a response! It's probably too late now, which is a shame—there's not that much information on polyphasic sleep. But thanks for answering anyway!

Comment by niplav on Polyphasic Sleep Seed Study: Reprise · 2024-03-17T21:26:11.443Z · LW · GW

I'd appreciate a sentence or two about the outcome of this.

Comment by niplav on CronoDAS's Shortform · 2024-03-17T09:18:49.770Z · LW · GW

Portia spiders.

Comment by niplav on Shortform · 2024-03-15T08:25:50.611Z · LW · GW

Man, I keep preventing myself from saying "parts of e/acc are at least somewhat fascist", because that's not a very useful thing to say in discourse, and it prevents Thermodynamic-God-ism (or e/acc in general) from developing into a more sophisticated & fleshed-out system of thought. I think this kind of post works if it's phrased in a way that encourages more cooperation in the future (with the risk of this turning into a "cooperate with defection-rock" situation), but it only works if it actually encourages such cooperation.

I prefer the term "fascist" to "national socialist" because Nazism was a really specific thing, bound to German conceptions of race at the time. Although "fascism" is also strongly associated with Italy at that specific time.

Comment by niplav on Claude vs GPT · 2024-03-14T17:32:40.550Z · LW · GW

The biggest advantage Claude has over GPT is a more concise writing style. Claude gets to the point quickly and it usually gets it right. Even though writing isn’t the main task I use LLMs for, this advantage is massive. All of GPT’s extra fluff sentences add up to a frustrating experience even when you’re just coding.

 

Adding the custom prompt "Please keep your response brief unless asked to elaborate." reduces GPT-4's responses to an acceptable length.

Comment by niplav on 'Empiricism!' as Anti-Epistemology · 2024-03-14T08:34:43.281Z · LW · GW

Ah, but there is some non-empirical cognitive work done here that is really relevant, namely the choice of what equivalence class to put Bernie Bankman into when trying to forecast. In the dialogue, the empiricists use the equivalence class of Bankman in the past, while you propose using the equivalence class of all people that have offered apparently-very-lucrative deals.

And this choice is in general non-trivial, and requires abstractions and/or theory. (And the dismissal of this choice as trivial is my biggest gripe with folk-frequentism—what counts as a sample, and what doesn't?)

Comment by niplav on How do you improve the quality of your drinking water? · 2024-03-13T08:41:18.950Z · LW · GW

This suggests a (self-)experiment: drink only filtered/distilled water with selectively added minerals vs. tap water. The problem is that if the effects are really small or really long-term, we'd need samples that are several weeks long.
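For intuition on why small effects force long samples, here's a standard back-of-the-envelope power calculation (a sketch: it assumes a two-sample comparison with roughly independent daily measurements, and the effect size is a made-up placeholder):

    # Per-group sample size for detecting a standardized effect d with a
    # two-sample test (normal approximation): n ≈ 2 * (z_{1-α/2} + z_{1-β})² / d²
    z_alpha = 1.96       # two-sided α = 0.05
    z_beta  = 0.84       # power = 0.80
    d = 0.2              # placeholder: a "small" effect
    n_per_group = ceil(Int, 2 * (z_alpha + z_beta)^2 / d^2)
    # ≈ 392 measurements per condition — hence weeks- or months-long samples for small effects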

Is tap water expected to contain fewer microplastics than bottled water? In this case there's surely some natural experiment[1] that could shed light on whether microplastics are harmful.


  1. Although things like natural disasters which limit tap water access, surges in bottled water prices or tap water contaminations don't count. Maybe the plastic supply chain experienced a disruption due to some natural disaster at some point, similar to GPU prices soaring when there was a flood in Thailand? There was a (small) sudden decline in price for HDPE/LDPE/LLDPE in early 2018, maybe that works. Although it seems like PET didn't have a similar price drop. ↩︎

Comment by niplav on [deleted post] 2024-03-13T08:27:04.910Z

I was trying to figure out what the human score on SWE-Bench is, but the paper doesn't mention it, as far as I can tell.

Comment by niplav on Daniel Kokotajlo's Shortform · 2024-03-07T15:14:43.275Z · LW · GW

Bet (1) resolved in Connor's favor, right?

Comment by niplav on If you weren't such an idiot... · 2024-03-06T21:37:06.481Z · LW · GW

Thank you, that's really cool! I'll look at your post when I have time.

Comment by niplav on niplav's Shortform · 2024-03-05T16:27:38.180Z · LW · GW

Thanks for the pushback :-)

Property rights are ALREADY eroding, at least in big cities - there's a whole lot more brazen theft and destruction than ever, almost all without consequence to the perpetrators.

I assume you live in the US? Statista & FBI charts seem to speak against this on a nation-wide level (it could be that there's less crime but it's punished even less often).

But, in general, I decided to do this analysis in equilibrium as a simplifying assumption—if I'd tried to do it out of equilibrium, I'd simply not have been able to calculate anything.

Maybe my number is a soft upper bound: most out-of-equilibrium factors seem to point in the direction of more redistribution rather than less.

If most humans are trading something else than money (time, attention, whatever), then money won't be used much.

There will presumably be some entity that produces the basic goods humans need to survive. If property rights are still a thing, this entity will require payment for those goods. Humans will additionally care about non-basic goods, and eventually about time, attention and status. The basic-goods-producing entity will not accept human attention, time or status as payment—so capital is the next best thing that pre-singularity humans own.

Comment by niplav on niplav's Shortform · 2024-03-05T07:28:14.759Z · LW · GW

Oh, yeah, I completely forgot inflation. Oops.

If I make another version I'll add it.

Comment by niplav on If you weren't such an idiot... · 2024-03-04T18:19:45.158Z · LW · GW

If you're a student, you sometimes have to hand in papers on a deadline (or are on a tight schedule), in which case another charger might be useful. (This is less relevant today, when many laptops have identical charger plugs).

Comment by niplav on niplav's Shortform · 2024-03-04T18:09:55.401Z · LW · GW

Next up: Monte-Carlo model with sensitivity analysis.
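A minimal sketch of what that could look like, reusing the growth-ramp model from the comment below. Instead of the brute-force search over starting capitals it uses the equivalent closed form (the required capital is the discounted, present-value sum of the spending stream), and the sampling ranges are placeholders I made up:

    function required_capital(initial_doubling_time, final_doubling_time, monthly_spending)
        initial_growth_rate = 2^(1/(initial_doubling_time*12))
        final_growth_rate = 2^(1/(final_doubling_time*12))
        ramp_months = 12*ceil(Int, 10+5+2.5+1.25+final_doubling_time)
        growth = vcat(collect(range(initial_growth_rate, final_growth_rate, length=ramp_months)),
                      fill(final_growth_rate, 60*12 - ramp_months))
        # starting capital that exactly runs out after 60 years = discounted sum of spending
        return monthly_spending * sum(1 ./ cumprod(growth))
    end

    # sample uncertain inputs: final doubling time between 0.5 and 2 years,
    # monthly spending between $500 and $2000 (placeholder ranges)
    samples = [required_capital(20, 0.5 + 1.5*rand(), 500 + 1500*rand()) for _ in 1:10_000]
    sort(samples)[[500, 5000, 9500]]   # rough 5th/50th/95th percentiles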

Comment by niplav on niplav's Shortform · 2024-03-04T17:58:19.386Z · LW · GW

Consider the problem of being automated away in a period of human history with explosive growth, and having to subsist on one's capital. Property rights are respected, but there is no financial assistance by governments or AGI corporations.

How much wealth does one need to have to survive, ideally indefinitely?

Finding: If you lose your job at the start of the singularity, with monthly spending of $1k, you need ~$71k of capital in total. This number doesn't look very sensitive to losing one's job slightly later.

At the moment, the world economy is growing at a pace that leads to GWP doubling every 20 years, and it has done so steadily since ~1960. Explosive growth might instead be hyperbolic (continuing the trend we've seen through human history so far), with the economy first doubling in 20 years, then in 10, then in 5, then in 2.5, then in 1.25 (i.e. 15 months), and so on. I'll assume that the shortest doubling time is 1 year.

    initial_doubling_time=20 # years
    final_doubling_time=1
    # monthly growth factor corresponding to a doubling time given in years
    initial_growth_rate=2^(1/(initial_doubling_time*12))
    final_growth_rate=2^(1/(final_doubling_time*12))

    # linearly ramp the monthly growth factor from initial to final over `months` months
    function generate_growth_rate_array(months::Int)
        growth_rate_array = zeros(Float64, months)
        growth_rate_step = (final_growth_rate - initial_growth_rate) / (months - 1)

        current_growth_rate = initial_growth_rate

        for i in 1:months
            growth_rate_array[i] = current_growth_rate
            current_growth_rate += growth_rate_step
        end

        return growth_rate_array
    end

We can then construct the sequence of monthly growth rates, padded out to 60 years with the final growth rate:

    months=12*ceil(Int, 10+5+2.5+1.25+final_doubling_time)
    economic_growth_rate = generate_growth_rate_array(months)
    economic_growth_rate=cat(economic_growth_rate, repeat([final_growth_rate], 60*12-size(economic_growth_rate)[1]), dims=1)

And we can then write a very simple model of monthly spending to figure out how our capital develops.

    # one entry per candidate starting capital: $1, $2, …, $250,000
    capital=collect(1:250000)
    monthly_spending=1000 # if we really tighten our belts

    for growth_rate in economic_growth_rate
        capital=capital.*growth_rate
        capital=capital.-monthly_spending
    end

capital now contains the capital we end up with after 60 years, one entry per starting amount. To find the minimum amount of capital we need to start out with so as not to run out, we find the index of the entry closest to zero:

    julia> findmin(abs.(capital))
    (1.1776066747029436e13, 70789)

So, under these requirements, starting out with more than $71k should be fine.

But maybe we'll only lose our job somewhat into the singularity! We can simulate that by assuming we lose the job once the initial doubling time has already fallen to 15 years:

    initial_doubling_time=15
    initial_growth_rate=2^(1/(initial_doubling_time*12))
    months=12*ceil(Int, 10+5+2.5+1.25+final_doubling_time)
    economic_growth_rate = generate_growth_rate_array(months)
    economic_growth_rate=cat(economic_growth_rate, repeat([final_growth_rate], 60*12-size(economic_growth_rate)[1]), dims=1)

    capital=collect(1:250000)
    monthly_spending=1000 # if we really tighten our belts

    for growth_rate in economic_growth_rate
            capital=capital.*growth_rate
            capital=capital.-monthly_spending
    end

The amount of initially required capital doesn't change by that much:

    julia> findmin(abs.(capital))
    (9.75603002635271e13, 68109)

Comment by niplav on If you weren't such an idiot... · 2024-03-04T17:56:37.599Z · LW · GW

Ah, agree on the cigarettes thing (edited it in).

Comment by niplav on If you weren't such an idiot... · 2024-03-04T17:55:47.169Z · LW · GW

I got the caffeine recommendation from here, Nicotine here. I don't have great sources for this, and the ranges were mostly gut estimates.

Comment by niplav on If you weren't such an idiot... · 2024-03-04T12:56:04.497Z · LW · GW

Maybe I can be clearer: I use 2mg nicotine rarely (15 times since December 2022), so on average about once a month, which I've edited into the comment above. It's been really useful when used surgically. Why do you advise against it? The strongest counterarguments I've seen were in here.

Comment by niplav on mattmacdermott's Shortform · 2024-03-03T22:42:40.958Z · LW · GW

Interpretability might then refer to creating architectures/activation functions that are easier to interpret.

Comment by niplav on The Best Software For Every Need · 2024-03-03T12:04:13.036Z · LW · GW

Software: ffmpeg

Need: Converting video/audio files between arbitrary formats

Other programs I've tried: Various free conversion websites

Not tried: Any Adobe products

Advantage: ffmpeg doesn't have ads, or spam, or annoying pop-ups

Software: pandoc

Need: Converting documents between arbitrary formats

Other programs I've tried: In-built conversion for office-like products

Advantage: pandoc supports a lot of formats, and generally deals with them really well—loss of information through conversion is pretty rare

Comment by niplav on Arguments for optimism on AI Alignment (I don't endorse this version, will reupload a new version soon.) · 2024-03-03T03:26:19.701Z · LW · GW

I'm confused about how POC||GTFO fits together with cryptographers already starting to worry about post-quantum cryptography in 2006, when the proof of concept was "we have factored 15 into 3×5 using Shor's algorithm". (They were running a whole conference on it!)

Comment by niplav on The World in 2029 · 2024-03-03T00:06:42.031Z · LW · GW

I think it looks nice to use subscripts for probabilities.

Comment by niplav on If you weren't such an idiot... · 2024-03-02T02:10:34.040Z · LW · GW

Laptop chargers are also an object of which it's trivial to own several, at low cost and with a high (potential) upside.

Other examples: Take caffeine once a week (and nicotine (not cigarettes!) once a month) instead of never or daily. Leave social situations when they're not fun or useful anymore. Do small cost-benefit analyses when they make sense[1].

See also: Solved Problems Repository, Boring Advice Repository.


  1. I've done two already this year: one to decide whether to leave a bootcamp, and another to decide which gym to choose. (The second one misfired: I made a mistake in my calculation, counting only the way to the gym as a cost and not the way back to public transport, which led me to choose the wrong one, by <100€ of cost over the time I go there.) I should've done the math (done ✓), then burned the math and gone with my gut (not done ✗). ↩︎

Comment by niplav on Can we get an AI to do our alignment homework for us? · 2024-02-27T19:42:06.275Z · LW · GW

(I have not read your recent work on AI control, so feel free to refer me to any material that answers my questions there.)

At least with C, in my experience these kinds of mistakes are not easily caught by testing, syntax highlighting or basic compiler linting. (There's a reason why valgrind exists!) Looking over the winners for 2008, I have no idea what is going on, and I think it'd take me quite a while to figure out what the hell is happening, and whether it's sketchy.

I'd enjoy reading about experiments where people have to figure out whether a piece of C code is underhanded.

Another example that comes to mind, though I'm not sure about the exact lesson, is the NSA modifying the S-boxes of DES to make them more resistant to differential cryptanalysis:

  • The NSA hid a defense against a specific attack in the S-boxes
  • People figured this out only when the specific attack was found
  • It is unknown whether they hid anything that makes offense easier

Is it possible that they hid anything that makes offense easier? I don't know.

Edit: After some searching, I came upon the pRNG Dual_EC_DRBG, which did have a bunch of NSA involvement, and where they were pretty much caught biasing the numbers in a specific direction. So attacks here are definitely possible, though in this case it took more than five years to get caught.

As for the rest of your comment, I think we have different models: you analogize negatively reinforcing AI systems to firing, which would be more applicable if we were training several systems. I'm pretty sure you've already written an answer to "negatively reinforcing bad behavior can reinforce both good behavior and better hiding", so I'll leave it at that.

Comment by niplav on Can we get an AI to do our alignment homework for us? · 2024-02-27T17:49:54.192Z · LW · GW

It seems pretty relevant to note that we haven't found an easy way of writing software[1] or making hardware that, once written, can easily be evaluated to be bug-free and to work as intended (see the underhanded C contest or cve-rs). The artifacts that AI systems produce will be of similar or higher complexity.

I think this puts a serious dent into "evaluation is easier than generation"—sure, easier, but how much easier? In practice we can also solve a lot of pretty big SAT instances.


  1. Unless we get AI systems to write formal verifications for the code they've written, in which case a lot of complexity gets pushed into the specification of formal properties, which presumably would also be written by AI systems. ↩︎

Comment by niplav on Opinions survey (with rationalism score at the end) · 2024-02-17T02:43:35.661Z · LW · GW

Also 12—what's going on?

Comment by niplav on Believing In · 2024-02-08T22:53:46.307Z · LW · GW

Several disjointed thoughts, all exploratory.

I have the intuition that "believing in"s are what allocating Steam feels like from the inside, or that they are the same thing.

C. “Believing in”s should often be public, and/or be part of a person’s visible identity.

This makes sense if "believing in"s are useful for intra- and inter-agent coordination, which is the thing people accumulate to go on stag hunts together. Coordinating with your future self, in this framework, requires the same resource as coordinating with other agents similar to you along the relevant axes right now (or across time).

Steam might be thought of as a scalar quantity assigned to some action or plan, which changes depending on whether the action is being executed or not. Steam is necessarily distinct from probability or utility, because if you start making predictions about your own future actions, your belief-estimation process (assuming it has some influence on your actions) has a fixed point in predicting that the action will not be carried out, and then intervening to prevent the action from being carried out. There is also another fixed point in which the agent is maximally confident it will do something, and then just does it, but can't be persuaded not to do it.[1]

As stated in the original post, steam helps solve the procrastination paradox. I have the intuition that one can relate the changes in steam, utility and probability to each other: assuming utility is high,

  1. If actions/plans are performed, their steam increases
  2. If actions/plans are not performed, steam decreases
  3. If steam decreases slowly and actions/plans are executed, increase steam
  4. If steam decreases quickly and actions/plans are not executed, decrease steam even more quickly(?)
  5. If actions/plans are completed, reduce steam

If utility decreases a lot, steam only decreases a bit (hence things like sunk costs). Differential equations look particularly useful for talking more rigorously about this kind of thing; a toy discrete-time sketch is below.
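As a toy illustration, here is one way rules 1–5 could be rendered as a discrete-time update. This is my own formalization rather than anything from the post, and all constants are arbitrary:

    # steam, dsteam: current steam for a plan and its recent change;
    # executed/completed: what happened this timestep; dutility: change in utility
    function update_steam(steam, dsteam, executed::Bool, completed::Bool, dutility)
        delta = executed ? 0.1 : -0.1                # rules 1 & 2
        if dsteam > -0.01 && executed                # rule 3: slow decay + execution
            delta += 0.05
        elseif dsteam < -0.05 && !executed           # rule 4: fast decay + inaction
            delta -= 0.1
        end
        completed && (delta -= 0.5)                  # rule 5: completion releases steam
        delta += 0.1 * min(dutility, 0.0)            # utility drops only dent steam a bit
        return clamp(steam + delta, 0.0, 1.0)
    end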

Steam might also be related to how cognition on a particular topic gets started; to avoid the infinite regress problem of deciding what to think about, and deciding how to think about what to think about, and so on.

For each "category" of thought we have some steam, which is adjusted as we observe our own previous thoughts, our beliefs changing, and our values being expressed. So we don't just think the thoughts that are highest expected utility to think; we think the thoughts that are highest in steam, where steam is allocated depending on the changes in probability and utility.

Steam or "believing in" seems to be bound up with abstraction à la teleosemantics: when thinking or acting, steam decides where thoughts and actions are directed to create higher clarity on symbolic constructs or plans. I'm especially thinking of the email-writing example: there is a vague notion of "I will write the email", into which cognitive effort needs to be invested to crystallize the purpose, and then further effort needs to be brought up to actually flesh out all the details.


  1. This is not quite true, a better model would be that the agent discontinuously switches if the badness of the prediction being wrong is outweighed by the badness of not doing the thing. ↩︎

Comment by niplav on Believing In · 2024-02-08T13:14:13.880Z · LW · GW

This seems very similar to the distinction between Steam and probability.

Comment by niplav on Choosing a book on causality · 2024-02-08T11:56:44.024Z · LW · GW

(Commenting rather than answering because I haven't read the books I talk about.)

People have recommended Causation, Prediction, and Search to me, but I haven't read it (yet), mostly because it doesn't have exercises. Probabilistic Graphical Models talks about causality in chapter 21 (pp. 1009-1059).

Comment by niplav on Choosing a book on causality · 2024-02-08T11:50:30.091Z · LW · GW

Comment by niplav on Conditional prediction markets are evidential, not causal · 2024-02-07T22:34:28.495Z · LW · GW

See also Futarchy implements evidential decision theory (Caspar Oesterheld, 2017).

Comment by niplav on On the Debate Between Jezos and Leahy · 2024-02-07T09:41:14.549Z · LW · GW

In the spirit of pointing out local invalidities, the first paragraph (while perhaps true) is a display of learning from fictional evidence, which is usually regarded as fallacious.

Therefore I have weak-downvoted your comment. It would be much improved by trying to learn from real-world cases (which might well support the position).

Comment by niplav on Attitudes about Applied Rationality · 2024-02-03T17:59:59.000Z · LW · GW

I like this post! It doesn't feel like a fully fleshed-out partitioning, but it's great that it was done at all.

Object Level and Basics Theory seem entangled with Feedback Loop Theory.

A subtype of Reading Theory: Math Theory, i.e. if you understand the relevant mathematics deeply enough, that constitutes rationality.

Comment by niplav on niplav's Shortform · 2024-01-31T08:52:05.444Z · LW · GW

Good point, I meant extinction risk. Will clarify.

(I disagree with the other statements you made, but didn't downvote).

Comment by niplav on niplav's Shortform · 2024-01-30T15:24:06.369Z · LW · GW

Update to the cryonics cost-benefit analysis:

After taking existential risk into account more accurately (16.5% risk before 2100, then an annual risk of 2×10^-5; a rough sketch of how this discount enters follows the list below):

  • The expected value of signing up if you're currently 20 has dropped to ~$850k
    • It's ~$3 million if you're 30 right now
  • Cryocrastination is rational given complete self-trust
    • Waiting is still very much not worth it if you're even slightly wary of motivation/discipline drift
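As a rough illustration of how the x-risk discount enters the expected value (a sketch, not the actual Guesstimate model; the revival year is a placeholder assumption):

    p_xrisk_before_2100 = 0.165
    annual_xrisk_after_2100 = 2e-5
    revival_year = 2150                  # placeholder assumption
    # probability that no existential catastrophe happens before revival,
    # which multiplies the expected value of being preserved
    p_survive = (1 - p_xrisk_before_2100) * (1 - annual_xrisk_after_2100)^(revival_year - 2100)
    # ≈ 0.834, i.e. x-risk alone shaves roughly 17% off the expected value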

No motivation drift, current age 20:

[Figure: Value of signing up for cryonics in n years at age 20, standard parameters.]

Motivation drift, current age 20:

[Figure: Value of signing up for cryonics in n years at age 20, no motivation drift.]

Problems this analysis still has: the Guesstimate model is not updated with the new x-risk numbers, and it still uses the old Alcor membership fees (which were slightly lower, especially for new members).