Posts

Considerations on Cryonics 2020-08-03T17:30:42.307Z · score: 34 (13 votes)
"Do Nothing" utility function, 3½ years later? 2020-07-20T11:09:36.946Z · score: 5 (3 votes)
niplav's Shortform 2020-06-20T21:15:06.105Z · score: 2 (1 votes)

Comments

Comment by niplav on niplav's Shortform · 2020-09-18T20:25:32.152Z · score: 0 (2 votes) · LW · GW

If we don't program philosophical reasoning into AI systems, they won't be able to reason philosophically.

Comment by niplav on What happens if you drink acetone? · 2020-09-14T17:19:32.382Z · score: 13 (11 votes) · LW · GW

Thank you, I found this slightly amusing.

Comment by niplav on niplav's Shortform · 2020-09-05T14:10:53.603Z · score: 1 (1 votes) · LW · GW

Actually not true. If "ethics" is a complete & transitive preference over universe trajectories, then yes, otherwise not necessarily.
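
By "complete & transitive" I mean the standard conditions on a preference relation ≽ over universe trajectories:

```latex
% Completeness: any two trajectories are comparable.
\forall x, y\colon \quad x \succeq y \;\lor\; y \succeq x
% Transitivity: pairwise preferences chain together.
\forall x, y, z\colon \quad (x \succeq y \;\land\; y \succeq z) \implies x \succeq z
```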

Comment by niplav on niplav's Shortform · 2020-09-05T13:46:04.068Z · score: 1 (1 votes) · LW · GW

Politics supervenes on ethics and epistemology (maybe you also need metaphysics, not sure about that).

There are no degrees of freedom left for political opinions.

Comment by niplav on niplav's Shortform · 2020-09-02T19:50:38.270Z · score: 3 (2 votes) · LW · GW

Mistake theory/conflict theory seem more like biases (often unconscious, hard to correct in the moment of action) or heuristics (should be easily overruled by object-level considerations).

Comment by niplav on niplav's Shortform · 2020-08-12T23:20:32.625Z · score: 1 (1 votes) · LW · GW

I feel like this meme is related to the troll bridge problem, but I can't explain how exactly.

Comment by niplav on AllAmericanBreakfast's Shortform · 2020-08-09T21:06:36.993Z · score: 6 (4 votes) · LW · GW

Math is interesting in this regard because it is very precise, yet there's no clear-cut way of checking your solution except running it by another person (or becoming so good at math that you can tell whether your proof is bullshit).

Programming, OTOH, gives you clear feedback loops.

Comment by niplav on Considerations on Cryonics · 2020-08-05T01:01:20.230Z · score: 3 (2 votes) · LW · GW

I have been putting this off because my medical knowledge is severely lacking, and I would have to estimate how the leading causes of death affect the chances of getting cryopreserved mainly by evaluating them subjectively. That said, I'll look up some numbers, update the post, and notify you about it (other people have been requesting this as well).

Comment by niplav on "Do Nothing" utility function, 3½ years later? · 2020-07-20T21:15:07.994Z · score: 1 (1 votes) · LW · GW

Thanks for the links! I'll check them out.

Comment by niplav on niplav's Shortform · 2020-07-19T18:27:26.359Z · score: 2 (2 votes) · LW · GW

Idea for an approach to gauge how close GPT-3 is to "real intelligence": generalist forecasting!

Give it the prompt: "Answer with a probability between 0 and 1. Will the UK's Intelligence and Security committee publish the report into Russian interference by the end of July?", repeat this for a bunch of questions, and grade the answers at resolution.

Similar things could be done for range questions: "Give a 50% confidence interval on values between -35 and 5. What will the US Q2 2020 GDP growth rate be, according to the US Bureau of Economic Analysis Advance Estimate?".

Perhaps include the text of the question to allow more priming.

Upsides: It seems to me that making predictions is a huge part of intelligence, and it is relatively easy to check and to compare against humans.

Downsides: Resolution will not be available for quite some time, and when the results are in, everybody will already be interested in the next AI project. Results only arrive "after the fact".
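
To make the grading step concrete, here's a minimal sketch of how one could score the answers once the questions resolve (the probabilities and resolutions below are made up for illustration):

```python
# Minimal sketch of the grading step: score GPT-3's probability answers
# against the actual resolutions with a Brier score (lower is better).
# All numbers here are made-up placeholders.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 resolutions."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

gpt3_probabilities = [0.7, 0.2, 0.55]  # parsed from the model's answers
resolutions = [1, 0, 0]                # 1 = question resolved "yes"

print(brier_score(gpt3_probabilities, resolutions))        # the model's score
print(brier_score([0.5] * len(resolutions), resolutions))  # ignorant baseline
```

Comparing against the 0.5 baseline (and against human forecasters on the same questions) would give a rough sense of how well it does.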

Comment by niplav on How to Get Off the Hedonic Treadmill? · 2020-07-05T10:56:01.858Z · score: 0 (2 votes) · LW · GW

[epistemic status: weak to very weak]

I see two different questions here:

One relating to the lack of "meaning" in your life, and one related to the hedonic treadmill.

As for the first, I don't know much about crises of meaning; I've never had one. People have been saying good things about Man's Search for Meaning by Viktor Frankl (LW review), but I haven't read it yet, so I personally can't vouch for it.

If you're simply looking to become happier, How to Be Happy is pretty much the go-to resource here on the site. This won't grant you an exit from the hedonic treadmill, but it may shift the baseline upwards.

As for the hedonic treadmill: it seems extraordinarily difficult to exit. Some possible approaches include Wireheading (also, also, also), very high amounts of meditation, chronic pain, and death. These are either very speculative, dangerous, or undesirable (in the case of exiting the hedonic treadmill at the bottom).

Disclaimer: I'm not sure whether higher attainments in meditation could be classified as actually "exiting the hedonic treadmill". I think @romeostevensit has some higher attainments, maybe he or somebody else who is very experienced in meditation wants to chip in.

Comment by niplav on What's the most easy, fast, efficient way to create and maintain a personal Blog? · 2020-07-01T20:20:50.475Z · score: 8 (4 votes) · LW · GW

As far as I know, many people just create a WordPress account, which lets you start quickly. I don't like WordPress, so here's my experience:

[epistemic status: personal experience]

My own website is dead simple: using CSS from this site, converting Markdown into HTML, hosted on github.io.

So, concrete steps I took:

  • Make a github profile
  • Learn Markdown
  • Download this CSS file
  • Start a git repository named "$YOURPAGE.github.io"
  • Write Markdown content
  • Convert content to HTML (via a build script using one of these, maybe google "Markdown to HTML $MYPLATFORM"; a minimal sketch is at the end of this comment)
  • Git push

If you want more inspiration, you could try to understand the build system of my site, but it's perhaps a bit cluttered.
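
For illustration, here's a minimal sketch of such a build script (not my actual build system; it assumes the Python markdown package and a stylesheet named style.css next to the generated pages):

```python
# Minimal Markdown-to-HTML build script sketch.
# Assumes the "markdown" package (pip install markdown); adapt names as needed.
import pathlib

import markdown

TEMPLATE = """<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><link rel="stylesheet" href="style.css"></head>
<body>{body}</body>
</html>"""

for md_file in pathlib.Path(".").glob("*.md"):
    # Convert each Markdown file and wrap it in the HTML template.
    html_body = markdown.markdown(md_file.read_text(encoding="utf-8"))
    out_file = md_file.with_suffix(".html")
    out_file.write_text(TEMPLATE.format(body=html_body), encoding="utf-8")
    print(f"built {out_file}")
```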

Comment by niplav on Raemon's Shortform · 2020-06-24T22:51:07.627Z · score: 5 (4 votes) · LW · GW

Judge it as "right". PB automatically converts your 10% predictions into 90%-not predictions for the calibration graph, but under the hood everything stays with the probabilities you provided. Hope this cleared things up.
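
Roughly, the folding I mean works like this (a sketch of my understanding, not PB's actual code):

```python
# Sketch of the folding: a 10% "it happens" prediction is displayed as a
# 90% "it doesn't happen" prediction on the calibration graph, while the
# stored judgement keeps the probability you actually entered.

def fold_for_calibration(probability, happened):
    """Return (displayed_confidence, displayed_correctness), confidence >= 0.5."""
    if probability < 0.5:
        return 1 - probability, not happened
    return probability, happened

print(fold_for_calibration(0.10, False))  # (0.9, True): correct 90% prediction
print(fold_for_calibration(0.10, True))   # (0.9, False): incorrect 90% prediction
```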

Comment by niplav on [META] Building a rationalist communication system to avoid censorship · 2020-06-23T16:25:20.338Z · score: 4 (3 votes) · LW · GW

This might become difficult with values of , though…

Comment by niplav on niplav's Shortform · 2020-06-23T11:53:12.645Z · score: 1 (1 votes) · LW · GW

Hot shower.

Comment by niplav on niplav's Shortform · 2020-06-20T21:15:06.685Z · score: 3 (2 votes) · LW · GW

[epistemic status: tried it once] Possible life improvement: Drinking cold orange juice in the shower. Just did it, it felt amazing.

Comment by niplav on Are rationalists least suited to guard AI? · 2020-06-20T21:10:43.959Z · score: 4 (3 votes) · LW · GW

If you haven't read it yet, you might be very interested in Reason as memetic immune disorder.

Comment by niplav on Your best future self · 2020-06-07T11:48:14.931Z · score: 1 (1 votes) · LW · GW

I had a very similar thought a while back, but was thinking more of best possible current versions of myself.

You said this:

I think your coherent, extrapolated self may know things you don't know, and may have learned that some of your goals were misguided. Because your ability to communicate with them is bottlenecked on your current skills and beliefs, I can't vouch for the advice they might give.

I called it "Coherent Extrapolated Niplav", where I was sort of having a conversation with CEN, and since it was CEN, it was also sympathetic to me (after all, my best guess is that if I were smarter, thought longer etc., I'd be sympathetic to other people's problems!).

Comment by niplav on Kids and Moral Agency · 2020-05-08T16:15:08.666Z · score: 9 (6 votes) · LW · GW

Purely pragmatically, something having moral agency seems to me to be just another way of saying "Will this thing learn to behave better if I praise/blame it?" (Praise and Blame are Instrumental).

But this is, of course, just a definitional debate.

Comment by niplav on What physical trigger-action plans did you implement? · 2020-04-27T20:48:51.333Z · score: 3 (3 votes) · LW · GW

Trigger: I crack my knuckles

Action: I adjust my posture

It's stupid, but it works.

Comment by niplav on What are examples of Rationalist posters or Rationalist poster ideas? · 2020-03-22T10:59:53.627Z · score: 5 (4 votes) · LW · GW

There are these posters for the different virtues of rationality.

I'd personally like to see posters for "Tsuyoku Naritai!", the Litany of Tarski, the Litany of Gendlin, and (maybe) the Litany against Fear (although not strictly Less Wrong, it's sort of part of the culture). Maybe I'll make one or the other myself.

Comment by niplav on What are you reading? · 2019-12-29T14:58:42.954Z · score: 1 (1 votes) · LW · GW

Interesting! Sounds quite similar to the contents on the blog.

Comment by niplav on What are you reading? · 2019-12-27T20:25:19.417Z · score: 1 (1 votes) · LW · GW

What is your verdict?

I'm currently reading through his blog Metamoderna and feel like there are some similarities to rationalist thought on there (e.g. this post on what he calls "game change" and this post on what he calls proto-synthesis).