Posts

Range and Forecasting Accuracy 2020-11-16T13:06:45.184Z
Considerations on Cryonics 2020-08-03T17:30:42.307Z
"Do Nothing" utility function, 3½ years later? 2020-07-20T11:09:36.946Z
niplav's Shortform 2020-06-20T21:15:06.105Z

Comments

Comment by niplav on The LessWrong 2019 Review · 2020-12-02T18:05:00.156Z · LW · GW

The URL for the 2018 review results gives a 404. This makes sense, since it has been reserved for the 2019 review. However, I'd like to finish my read-through of the 2018 results. Where (except in the new book series) can I do that?

Comment by niplav on My Fear Heuristic · 2020-12-01T13:13:50.865Z · LW · GW

This post would be strongly improved by 3 examples of decisions you made differently due to this heuristic.

Comment by niplav on Does a site exist for keeping track of casual wagers with others? · 2020-11-28T16:30:17.945Z · LW · GW

A workaround could be to use PredictionBook, with the amount wagered stated in the comments. It supports odds & resolution dates; resolution criteria & the amount wagered would just be comments, and the same goes for comments on wagers. Integration with crypto is off the table. I don't know whether it supports mailing you about resolution times, but it does remind you on the site if questions you predicted on are past their resolution date. It does show edits of the question title in full.

You can also invite users to Metaculus private questions.

Comment by niplav on Is this a good way to bet on short timelines? · 2020-11-28T14:51:49.009Z · LW · GW

I might get back to you on your offer once I've put more effort into figuring out timelines, and then decided what I should/could do about them :-)

Comment by niplav on Is this a good way to bet on short timelines? · 2020-11-28T13:27:20.324Z · LW · GW

Under the condition that influence is still useful after a singularity-ish scenario, betting percentages of one's net worth (in influence, spacetime regions, matter-energy, plain old money etc.) at the resolution time seems to account for some of the change in situation (e.g. "If X, you give me 1% of your net worth, otherwise I give you 2% of mine."). The scenarios where those things don't matter at all after important events seem pretty unlikely to me.
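
To make the implied odds concrete (a quick worked example with my own notation, assuming linear utility): with net worths $W_{you}$ and $W_{me}$, my expected value from the 1%/2% bet is zero when

$$p \cdot 0.01\,W_{you} - (1-p) \cdot 0.02\,W_{me} = 0,$$

so with equal net worths the break-even point is $p = 2/3$: the bet is favorable for me exactly if I assign more than 2/3 probability to X.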

(Posting here because I expect more attention, tell me if I should move this to the previous question or elsewhere)

Comment by niplav on Evading Mind Control · 2020-11-25T16:55:53.387Z · LW · GW

I agree.

Another advantage of reading is to keep open the option of discovering unknown unknowns, shifting your worldview, finding mental tools and maybe even better philosophies in unexpected places (for example, I have been (mentally) referencing cryptonormativity quite a lot recently, and Nerst pulled it from reading Habermas – not quite rationalist canon). The idea of the intelligence explosion sat in a text by I.J. Good for around 35 years before people seriously thought about what implications it might have, and what could & should be done about it.

This ties in nicely with reading books from before the 21st century (and perhaps even before the 20th century!). Also, one should consider reading books that no one from one's main intellectual group has read.

Comment by niplav on Range and Forecasting Accuracy · 2020-11-22T10:22:13.757Z · LW · GW

I'll correct the typos.

As for Klong, I agree that it's not an optimal choice. I started this as a hobby project, and for odd reasons, it seemed to be the best tool at my disposal. I'll add an explanatory sentence at the point where I start using it, and will maybe try to replicate the analysis in a language that can be understood by more than a couple hundred people on the planet.

Comment by niplav on Range and Forecasting Accuracy · 2020-11-22T10:19:23.108Z · LW · GW

On Metaculus: I assume that these are forecasts on questions that resolved retroactively. Examples:

For PredictionBook: The datetime of resolution seems to be the datetime of the first attempted resolution, not the last. Example: "Total deaths due to coronavirus in the Netherlands will go over >5000 by the end of April."

I think I might change the PredictionBook data fetching script to output the datetime of the last resolution.

Comment by niplav on Range and Forecasting Accuracy · 2020-11-18T09:23:34.934Z · LW · GW

Nice to hear! I plan on working on this a little bit more (editing, a limitations section etc.).

Do you have any critical feedback?

Comment by niplav on Insufficiently Awesome · 2020-11-17T13:49:44.165Z · LW · GW

Personal additions:

  • Learn different kinds of meditation really well (loving-kindness, concentration, insight). This can also be practiced at any time.
  • Learn some poems until you can recite them fluidly (this can of course be done with spaced repetition). If you keep learning long enough, maybe you can memorize parts of an epic. If you want to be really impressive, learn it in the original language (however, try to get the pronunciation right!)
  • Make predictions until you know you're well calibrated
  • Try to be mindful of your posture – how straight is your back, where are your shoulders? Maybe set up a random timer that reminds you to do this
  • Learn the basics of dressing well, then refactor your wardrobe (starting point for men, practical information for women seems to need no resource)
  • Learn the basics of investing, and actually put some money into it

Comment by niplav on My default frame: some fundamentals and background beliefs · 2020-11-10T11:02:21.523Z · LW · GW

Nice post.

I mostly agree, but this bit stood out to me:

  1. Every modern intellectual citizen ought to become familiar with at least some of the major ideas in the rationalist canon. This includes R:AZ, The Codex, Superforecasting, How to Measure Anything, Inadequate Equilibria, and Good and Real.

I am not sure what exactly you mean by "modern intellectual citizen". At its broadest, it could encompass all adults; at its narrowest, it would be limited to college professors & public intellectuals.

I also doubt that this is a productive method of raising the Sanity Waterline. We're here in a place where many people have had their minds pretty strongly changed by these texts, but reading e.g. the reviews of R:AZ on Amazon & Goodreads, I observe that many people read it, say "meh" and go on with their lives – a pretty disappointing result for reading ~2000 pages!

Furthermore, aren't sufficiently intellectual people already familiar with some of the ideas in the "rationalist canon", just by exposure to the scientific method? I think yes, and I also think that the most valuable aspect of these texts is not the ideas in and of themselves, but rather the type/structure of thinking they demand (e.g. scout vs. soldier mindset).

Comment by niplav on Gifts Which Money Cannot Buy · 2020-11-04T23:31:15.680Z · LW · GW

My strategy for gifts is to give cash, along with a hand-written letter explaining why and a suggestion for something to buy with that money. This strikes a useful compromise between

  • signaling effort spent on the present
  • sentimental value
  • self-determination for the present-receiver
  • their use of my expertise

So far, this has worked well (n=4).

Comment by niplav on What posts do you want written? · 2020-11-03T07:11:15.179Z · LW · GW

An overview of past attempts at wireheading humans/animals, what the effects were & how we could do better.

Comment by niplav on What posts do you want written? · 2020-11-03T07:10:25.519Z · LW · GW

A formal statement of the problem of Pascal's mugging (or a discussion of several ways to formally state it), and a summary/review of different people's approaches to solving/dissolving it.

Comment by niplav on Launching Forecast, a community for crowdsourced predictions from Facebook · 2020-10-20T10:38:39.185Z · LW · GW

Generally, I'm super in favour of forecasting & prediction market type platforms.

I'd like to hear from you why you believe that another such platform is a good idea, as opposed to additional work going into existing platforms: on the hobbyist side of things alone, I know of at least 4 projects (Foretold, PredictionBook, Good Judgment Open and Metaculus) and two major prediction markets (PredictIt and Augur).

Comment by niplav on What posts do you want written? · 2020-10-19T20:13:23.970Z · LW · GW

Have you by any chance seen this? (It's not published yet, but I read it a year ago and thought it was quite good, as far as I can judge such things).

Comment by niplav on Considerations on Cryonics · 2020-10-18T20:37:12.700Z · LW · GW

If you have any questions, don't hesitate to shoot me a message.

Comment by niplav on niplav's Shortform · 2020-10-16T10:33:32.491Z · LW · GW

Two-by-two for possibly important aspects of reality and related end-states:

|                   | Coordination is hard                                | Coordination is easy                                        |
|-------------------|-----------------------------------------------------|-------------------------------------------------------------|
| Defense is easier | Universe fractured into many parties, mostly stasis | Singleton controlling everything                            |
| Attack is easier  | Pure Replicator Hell                                | Few warring factions? Or descent into Pure Replicator Hell? |

Comment by niplav on Considerations on Cryonics · 2020-10-15T22:27:42.345Z · LW · GW

Hello again, I put your consideration into this section. Basically, if you trust yourself completely & are younger than 26, wait until you're 26 (though the benefit is tiny); otherwise, it's still optimal to sign up now.

Comment by niplav on niplav's Shortform · 2020-10-15T22:22:45.996Z · LW · GW

I just updated my cryonics cost-benefit analysis with

along with some small fixes and additions.

The basic result has not changed, though. It's still worth it.

Comment by niplav on niplav's Shortform · 2020-10-06T07:40:43.362Z · LW · GW

Thoughts on Fillers Neglect Framers in the context of LW.

  • LW seems to have more framers as opposed to fillers
  • This is because
    • framing probably has slightly higher status (framing posts get referenced more often, since they create useful concept handles)
    • framing requires intelligence, whereas filling requires meticulous research, and because most LWers are hobbyists, they don't really have a lot of time for research
    • there is no institutional outside pressure to create more fillers (except high status fillers such as Gwern)
  • The ratio of fillers to framers seems to have increased since LW's inception (I am least confident in this claim)
    • Perhaps this is a result of EY leaving behind a relatively accepted frame
  • I am probably in the 25th percentile-ish of intelligence among LWers, and enjoy researching and writing filling posts. Therefore, my comparative advantage is probably writing filling-type posts as opposed to framing ones.

Comment by niplav on Rationality and Climate Change · 2020-10-06T07:16:45.392Z · LW · GW

Ugh, I should pay more attention.

Comment by niplav on Rationality and Climate Change · 2020-10-05T23:28:39.158Z · LW · GW

An angle that's interesting (though only tangentially connected with climate change) is how civilizations deal with waste heat.

Comment by niplav on Rationality and Climate Change · 2020-10-05T22:18:02.146Z · LW · GW

I would really like to see estimates of the cost of climate change, along with their probabilities. A paper I found is this one, but it is not quite up to the standard I have in mind. It states that 1 billion humans will die counterfactually due to climate change.

Also, the probabilities given for human extinction from climate change are quite low in comparison to other risks (8% for a 10% human population decline from a climate disaster, conditional on such a decline occurring; 1% (probably even less) for a 95% decline by 2100).

Current belief (scenarios fitting very loosely onto reality, probabilities are intuition after calibration training):

  • Something (nuclear war, biological catastrophe, unaligned AI, something else entirely) will get humanity before climate change does (27%)
  • Humanity gets through and grows insanely quickly (Roodman 2020) (33%)
  • Neither happens: basically the status quo, with more droughts, famines, poverty, small-scale wars etc. due to climate change, which cause several hundred million/few billion deaths over the next centuries, but don't let humanity go extinct (17%)
  • Something else entirely (23%)

Comment by niplav on Rationality and Climate Change · 2020-10-05T22:05:43.930Z · LW · GW

  My personal opinion: climate change (and more directly, conflict caused or exacerbated by it) is the single biggest risk to human-like intelligence flourishing in the galaxy - very likely that it's a large component of the Great Filter.

I don't think that the idea of the Great Filter fits very well here. The Great Filter would be something so universal that it eliminates ~100% of all civilizations. Climate change seems to be conditional on so many factors specific to Earth (e.g. carbon-based life, greenhouse-gas effects, an interdependent civilization) that it doesn't really work well as a factor that eliminates nearly all civilizations at a specific level of development.

Comment by niplav on niplav's Shortform · 2020-09-26T07:28:03.019Z · LW · GW

Happy Petrov Day everyone :-)

Comment by niplav on niplav's Shortform · 2020-09-25T17:31:55.345Z · LW · GW

Adblockers have positive externalities: they remove much of the incentive to make websites addictive.

Comment by niplav on niplav's Shortform · 2020-09-18T20:25:32.152Z · LW · GW

If we don't program philosophical reasoning into AI systems, they won't be able to reason philosophically.

Comment by niplav on What happens if you drink acetone? · 2020-09-14T17:19:32.382Z · LW · GW

Thank you, I found this slightly amusing.

Comment by niplav on niplav's Shortform · 2020-09-05T14:10:53.603Z · LW · GW

Actually not true. If "ethics" is a complete & transitive preference over universe trajectories, then yes, otherwise not necessarily.
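
For concreteness, the two conditions formalized in the standard way (notation mine, not from the original shortform), for a preference relation $\preceq$ over the set $T$ of universe trajectories:

$$\text{completeness:} \quad \forall x, y \in T: \; x \preceq y \,\lor\, y \preceq x$$

$$\text{transitivity:} \quad \forall x, y, z \in T: \; (x \preceq y \,\land\, y \preceq z) \Rightarrow x \preceq z$$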

Comment by niplav on niplav's Shortform · 2020-09-05T13:46:04.068Z · LW · GW

Politics supervenes on ethics and epistemology (maybe you also need metaphysics, not sure about that).

There are no degrees of freedom left for political opinions.

Comment by niplav on niplav's Shortform · 2020-09-02T19:50:38.270Z · LW · GW

Mistake theory/conflict theory seem more like biases (often unconscious, hard to correct in the moment of action) or heuristics (which should be easily overruled by object-level considerations).

Comment by niplav on niplav's Shortform · 2020-08-12T23:20:32.625Z · LW · GW

I feel like this meme is related to the troll bridge problem, but I can't explain how exactly.

Comment by niplav on AllAmericanBreakfast's Shortform · 2020-08-09T21:06:36.993Z · LW · GW

Math is interesting in this regard because it is both very precise and there's no clear-cut way of checking your solution except running it by another person (or becoming so good at math that you know whether your proof is bullshit).

Programming, OTOH, gives you clear feedback loops.

Comment by niplav on Considerations on Cryonics · 2020-08-05T01:01:20.230Z · LW · GW

I have been putting this off because my medical knowledge is severely lacking, and I would have to estimate how the leading causes of death influence the chances of getting cryopreserved mainly by subjectively evaluating them. That said, I'll look up some numbers, update the post and notify you about it (other people have been requesting this as well).

Comment by niplav on "Do Nothing" utility function, 3½ years later? · 2020-07-20T21:15:07.994Z · LW · GW

Thanks for the links! I'll check them out.

Comment by niplav on niplav's Shortform · 2020-07-19T18:27:26.359Z · LW · GW

Idea for an approach to gauge how close GPT-3 is to "real intelligence": generalist forecasting!

Give it the prompt: "Answer with a probability between 0 and 1. Will the UK's Intelligence and Security committee publish the report into Russian interference by the end of July?", repeat for a bunch of questions, grade it at resolution.

Similar things could be done for range questions: "Give a 50% confidence interval on values between -35 and 5. What will the US Q2 2020 GDP growth rate be, according to the US Bureau of Economic Analysis Advance Estimate?".

Perhaps include the text of the question to allow more priming.

Upsides: It seems to me that making predictions is a huge part of intelligence, and it is relatively easy to check and compare with humans.

Downsides: Resolution will not be available for quite some time, and when the results are in, everybody will already be interested in the next AI project. Results only arrive "after the fact".
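
As a sketch of what "grade it at resolution" could look like (the data here is made up for illustration; the scoring rule is the standard Brier score):

```python
# Grade a batch of probability forecasts with the Brier score.
# Each pair is (stated probability, resolution), where resolution is
# 1 if the event happened and 0 if it didn't. Hypothetical data.
forecasts = [
    (0.7, 1),  # e.g. "Will the UK ISC publish the report by end of July?"
    (0.2, 0),
    (0.9, 1),
]

brier = sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # 0 is perfect; 0.25 = always guessing 50%
```

The same score would let you compare GPT-3's answers against human forecasters on the same questions.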

Comment by niplav on How to Get Off the Hedonic Treadmill? · 2020-07-05T10:56:01.858Z · LW · GW

[epistemic status: weak to very weak]

I see two different questions here:

One relating to the lack of "meaning" in your life, and one related to the hedonic treadmill.

As for the first, I don't know much about crises in meaning, I've never had any. People have been saying good things about Man's Search for Meaning by Viktor Frankl (LW review), but I haven't read it yet, so I personally can't vouch for it.

If you're simply looking to become happier, How to Be Happy is pretty much the go-to resource here on the site. This won't grant you an exit from the hedonic treadmill, but it may shift the baseline upwards.

As for the hedonic treadmill: It seems extraordinarily difficult to exit it. Some possible approaches could include Wireheading (also, also, also), very high amounts of meditation, chronic pain and death. These are either very speculative, dangerous or undesirable (in the case of exiting the hedonic treadmill at the bottom).

Disclaimer: I'm not sure whether higher attainments in meditation could be classified as actually "exiting the hedonic treadmill". I think @romeostevensit has some higher attainments, maybe he or somebody else who is very experienced in meditation wants to chip in.

Comment by niplav on What's the most easy, fast, efficient way to create and maintain a personal Blog? · 2020-07-01T20:20:50.475Z · LW · GW

As far as I know, many people just create a wordpress account, which enables you to start quickly. I don't like wordpress, so here's my experience:

[epistemic status: personal experience]

My own website is dead simple: using CSS from this site, converting Markdown into HTML, hosted on github.io.

So, concrete steps I took:

  • Make a github profile
  • Learn Markdown
  • Download this CSS file
  • Start a git repository named "$YOURPAGE.github.io"
  • Write Markdown content
  • Convert content to HTML (via a build script using one of these, maybe google "Markdown to HTML $MYPLATFORM"; a minimal sketch follows below)
  • Git push
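
For concreteness, here's a minimal sketch of what such a build script could look like (assuming Python with the third-party markdown package; the file layout and HTML template are illustrative, not my actual setup):

```python
# build.py -- convert every Markdown file in the current directory to HTML
# Requires: pip install markdown
import pathlib

import markdown

TEMPLATE = """<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><link rel="stylesheet" href="style.css"></head>
<body>{body}</body>
</html>"""

for src in pathlib.Path(".").glob("*.md"):
    body = markdown.markdown(src.read_text(encoding="utf-8"))
    dst = src.with_suffix(".html")
    dst.write_text(TEMPLATE.format(body=body), encoding="utf-8")
    print(f"built {dst}")
```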

If you want more inspiration, you could try to understand the build system of my site, but it's perhaps a bit cluttered.

Comment by niplav on Raemon's Shortform · 2020-06-24T22:51:07.627Z · LW · GW

Judge it as "right". PB automatically converts your 10% predictions into 90%-not predictions for the calibration graph, but under the hood everything stays with the probabilities you provided. Hope this cleared things up.

Comment by niplav on [META] Building a rationalist communication system to avoid censorship · 2020-06-23T16:25:20.338Z · LW · GW

This might become difficult with values of , though…

Comment by niplav on niplav's Shortform · 2020-06-23T11:53:12.645Z · LW · GW

Hot shower.

Comment by niplav on niplav's Shortform · 2020-06-20T21:15:06.685Z · LW · GW

[epistemic status: tried it once] Possible life improvement: Drinking cold orange juice in the shower. Just did it, it felt amazing.

Comment by niplav on Are rationalists least suited to guard AI? · 2020-06-20T21:10:43.959Z · LW · GW

If you haven't read it yet, you might be very interested in Reason as memetic immune disorder.

Comment by niplav on Your best future self · 2020-06-07T11:48:14.931Z · LW · GW

I had a very similar thought a while back, but was thinking more of best possible current versions of myself.

You said this:

  I think your coherent, extrapolated self may know things you don't know, and may have learned that some of your goals were misguided. Because your ability to communicate with them is bottlenecked on your current skills and beliefs, I can't vouch for the advice they might give.

I called it "Coherent Extrapolated Niplav", where I was sort-of having a conversation with CEN, and since it was CEN, it was also sympathetic to me (after all, my best guess is that if I was smarter, thought longer etc., I'd be sympathetic to other people's problems!).

Comment by niplav on Kids and Moral Agency · 2020-05-08T16:15:08.666Z · LW · GW

Purely pragmatically, something having moral agency seems to me to be just another way of saying "Will this thing learn to behave better if I praise/blame it?" (Praise and Blame are Instrumental).

But this is, of course, just a definitional debate.

Comment by niplav on What physical trigger-action plans did you implement? · 2020-04-27T20:48:51.333Z · LW · GW

Trigger: I crack my knuckles

Action: I adjust my posture

It's stupid, but it works.

Comment by niplav on What are examples of Rationalist posters or Rationalist poster ideas? · 2020-03-22T10:59:53.627Z · LW · GW

There are these posters for the different virtues of rationality.

I'd personally like to see posters for "Tsuyoku Naritai!", the Litany of Tarski, the Litany of Gendlin, and (maybe) the Litany against Fear (although not strictly Less Wrong, it's sort of part of the culture). Maybe I'll make one or the other myself.

Comment by niplav on What are you reading? · 2019-12-29T14:58:42.954Z · LW · GW

Interesting! Sounds quite similar to the contents on the blog.

Comment by niplav on What are you reading? · 2019-12-27T20:25:19.417Z · LW · GW

What is your verdict?

I'm currently reading through his blog Metamoderna and feel like there are some similarities to rationalist thoughts on there (e.g. this post on what he calls "game change" and this post on what he calls proto-synthesis).