Comments

Comment by servy on How would you build Dath Ilan on earth? · 2022-05-30T09:34:03.066Z · LW · GW

The cool thing about prediction markets is that if you disagree with them, you can just bet against them and win money. Put up or shut up.

Suppose there is a prediction market for a question like:

"If Alice is elected president, will GDP grow more than 20% by the end of the next 4 years?"

Current bets are 10 to 1 against Alice succeeding if elected. I strongly disagree, so I would like to bet $5000 on Alice and win a lot of money. Alice does not end up being elected, with the prediction market probably being largely responsible for that outcome. So the market goes unresolved, and I'm angry and frustrated: I was not able to make money, my preferred politician did not get elected, and I paid an opportunity cost because that $5000 was committed to the bet for some time. So next time I vote against prediction markets, since they are obviously an evil plot to keep Alice from becoming president and fixing things.
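To make those resolution mechanics concrete, here is a minimal sketch of how a conditional market like this could settle. The function name, the parimutuel-style payout rule, and the assumption of one bet per bettor are my own illustrative choices, not any real market's API; the point is just that when the condition never occurs, stakes are returned after having been tied up.

```python
# Sketch of conditional-market resolution (illustrative assumptions, not a real API):
# if the condition (Alice elected) never happens, bets are refunded rather than paid out.

def resolve_conditional_market(bets, condition_met, outcome=None):
    """bets: list of (bettor, side, stake), one bet per bettor; side is 'yes' or 'no'.
    Returns a dict of payouts per bettor."""
    if not condition_met:
        # Condition failed: the market is void and every bettor simply gets
        # their stake back, after it was locked up for the market's duration.
        return {bettor: stake for bettor, _, stake in bets}

    winning_side = 'yes' if outcome else 'no'
    pool = sum(stake for _, _, stake in bets)
    winning_stake = sum(stake for _, side, stake in bets if side == winning_side)

    payouts = {}
    for bettor, side, stake in bets:
        # Winners split the whole pool in proportion to their stake (parimutuel style).
        payouts[bettor] = pool * stake / winning_stake if side == winning_side else 0.0
    return payouts


# The scenario from the comment: roughly 10-to-1 against, I put $5000 on "yes",
# but Alice is never elected, so all I get is my $5000 back.
bets = [("me", "yes", 5000), ("crowd", "no", 50000)]
print(resolve_conditional_market(bets, condition_met=False))  # {'me': 5000, 'crowd': 50000}
```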

Maybe this could somehow be amended if I could instead bet on the effects of some policy that Alice endorses and I find compelling, provided it is possible to test that policy on a smaller scale, we could allocate some money from this presumably large betting pool to actually testing it, and we could somehow guarantee that those who bet a lot of money won't be able to influence the test much. There are a lot of "ifs" here, and this seems to require much more than just legalized prediction markets. I don't immediately see how legalizing prediction markets necessarily creates the incentives to set up, and cleanly resolve, something like that.

Comment by servy on Open Thread August 2018 · 2018-08-03T14:10:32.267Z · LW · GW

As a non-native English speaker, it was a surprise to learn that "self-conscious" normally means "shy", "embarrassed", "uncomfortable", ... I blame LessWrong for giving me the wrong idea of what this word means.

Comment by servy on Open Thread August 2018 · 2018-08-03T14:09:41.993Z · LW · GW

If there are side effects that someone can observe, then the virtual machine is potentially escapable.

An unfriendly AI might not have the goal of getting out. A psychopath who would prefer a dead person to a live one, and who would rather stay in a locked room than get out, is not particularly friendly.

Since you would eventually let out an AI that has not halted after some finite amount of time, I see no reason why an unfriendly AI would halt instead of just waiting until you believe it is friendly.

Comment by servy on Hammertime Day 8: Sunk Cost Faith · 2018-02-06T11:11:55.609Z · LW · GW

I'm curious what the "ejector seats" are that you mention in this post and in the Day 1 post, and how they can help with time sinks and planning. While the other concepts seem familiar, I don't think I had heard about ejector seats before. My guess is that they are something like TAPs with the action of "abandoning the current project/activity". Looking forward to your Day 10 post on planning, which will hopefully have an in-depth explanation and best practices for building them.

Thanks for a sequence that focuses on instrumental, everyday rationality.