Comment by ike on Is there a way to hire academics hourly? · 2019-02-16T17:37:13.180Z · score: 11 (8 votes) · LW · GW

Stop by your local college, locate the relevant department, and ask around.

Comment by ike on Individual profit-sharing? · 2019-02-13T21:12:08.176Z · score: 4 (3 votes) · LW · GW

Poker players do this sometimes, see e.g. https://lasvegassun.com/news/2016/jun/26/buying-in-whats-in-it-for-pokers-big-money-backers/

Comment by ike on What is a reasonable outside view for the fate of social movements? · 2019-01-04T03:20:36.796Z · score: 1 (1 votes) · LW · GW

The Luddites, and communist movements in countries that never adopted communism, come to mind.

Comment by ike on On Doing the Improbable · 2018-10-28T23:11:10.723Z · score: 20 (17 votes) · LW · GW

Looking at my own experience, what has motivated me to do things likely to fail is the expectation of getting other benefits even if I failed. One such benefit is "experience", but it could also be "it'll be fun" or "attempting will give you status even if you fail" or any number of other things.

Or, if there is feedback after relatively little effort (you find out after the first few chapters if people like it).

There's just something about "work hard for an extended period of time with no feedback until you find out if you won, which is a binary event with low odds" that turns people off, I guess.

Comment by ike on Quantum theory cannot consistently describe the use of itself · 2018-09-20T23:51:38.496Z · score: 1 (1 votes) · LW · GW

I identified one paper, and it cites another that also claims the result is flawed. I don't see a reason to believe the original paper over those.

Comment by ike on Quantum theory cannot consistently describe the use of itself · 2018-09-20T23:35:14.148Z · score: 3 (3 votes) · LW · GW

https://link.springer.com/article/10.1007/s10701-017-0082-7

Here's a paper claiming to identify the error. That's enough for me; I'm convinced the original paper is just mistaken.

Comment by ike on Quantum theory cannot consistently describe the use of itself · 2018-09-20T23:27:10.598Z · score: 2 (2 votes) · LW · GW

https://motls.blogspot.com/2018/09/frauchiger-renner-qm-is-inconsistent.html calls BS; now we just need Scott A to do the same and I'll be convinced.

Comment by ike on Quantum theory cannot consistently describe the use of itself · 2018-09-20T23:04:49.432Z · score: 1 (1 votes) · LW · GW

Feels like there has to be something wrong with the paper. I don't have the knowledge to analyze it myself, but I read through it up to the methods section, and they don't discuss much beyond the math. It's unclear to me how they arrive at a conclusion where different things happened from different perspectives, and in particular what percent of the time that would happen.

If someone familiar with the math could explain what the probability of each step is, I think it would be a lot simpler to follow.

Comment by ike on Why Bayesians should two-box in a one-shot · 2017-12-18T00:33:37.386Z · score: 0 (0 votes) · LW · GW

It's not just the one post, it's the whole sequence of related posts.

It's hard for me to summarize it all and do it justice, but it disagrees with the way you're framing this. I would suggest you read some of that sequence and/or some of the decision theory papers for a defense of using "should" notions even while believing in a deterministic world, which is what you reject. I don't really want to argue the whole thing from scratch, but that is where our disagreement would lie.

Comment by ike on Why Bayesians should two-box in a one-shot · 2017-12-16T22:00:40.554Z · score: 0 (0 votes) · LW · GW

Have you read http://lesswrong.com/lw/rb/possibility_and_couldness/ and the related posts and have some disagreement with them?

Comment by ike on Why Bayesians should two-box in a one-shot · 2017-12-15T19:37:17.559Z · score: 0 (0 votes) · LW · GW

If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless.

This was argued against in the Sequences and, in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory; all the functional decision theory work assumes a deterministic decision process, I think.

Re QM: sometimes it's stipulated that the world in which the scenario happens is deterministic. It's entirely possible that the amount of noise generated by QM isn't enough to affect your choice (aside from a very unlikely "your brain has a couple of bits changed randomly in exactly the right way to change your choice", but that should be too many orders of magnitude too unlikely to matter in any expected utility calculation).

Comment by ike on Why Bayesians should two-box in a one-shot · 2017-12-15T19:04:37.139Z · score: 0 (0 votes) · LW · GW

What part of physics implies someone cannot scan your brain and simulate inputs so as to perfectly predict your actions?

Comment by ike on What conservatives and environmentalists agree on · 2017-04-08T15:35:57.505Z · score: 0 (0 votes) · LW · GW

An evolved system is complex and dynamic, and can lose its stability. A created system is presumed to be static and always stable, so Christians don't consider LUC to be an issue with respect to the environment.

The distinction here would be that a created system's complexity is designed to remain stable under change, not that it isn't complex and dynamic.

Comment by ike on A Problem for the Simulation Hypothesis · 2017-03-16T19:53:11.264Z · score: 0 (0 votes) · LW · GW

The "directly relevant information" is the information you know, and not any information you don't know.

If you want to construct a bet, do it among all possibly existing people that, as far as they know, could be each other. So any information that one person knows at the time of the bet, everyone else also knows.

If you don't know the time, then the bet is among all similarly situated people who also don't know the time, which may be people in the future.

Comment by ike on A Problem for the Simulation Hypothesis · 2017-03-16T19:52:03.810Z · score: 2 (2 votes) · LW · GW

If you don't know the current time, you obviously can't reason as if you did. If we were in a simulation, we wouldn't know the time in the outside world.

Reasoning of the sort "X people exist in state A at time t, and Y people exist in state B at time t, therefore I have X:Y odds of being in state A rather than state B" only works if you know you're at time t.

If you carefully spell out what information each person being asked to make a decision has, I'm pretty sure your argument falls apart. You definitely aren't being explicit enough now about whether the people in your toy scenario know which timeslice they're in.

Comment by ike on A semi-technical question about prediction markets and private info · 2017-02-20T03:49:32.033Z · score: 0 (0 votes) · LW · GW

Has anyone rolled the die more than once? If not, it's hard to see how the market could converge on that outcome unless everyone betting saw a 3 (even a single person who saw otherwise should drive the price downward). So it depends on how many people saw rolls, and you should update as if you've seen as many 3s as there are other people betting.

You should bet on six if your probability is still higher than 10%.

If the prediction market caused others to update previously, it's more complicated. Probably you should assume it reflects all available information, and therefore that exactly one 3 was seen. Ultimately there's no good answer, because there's Knightian uncertainty in markets.
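As a toy sketch of the kind of update I have in mind (the Dirichlet prior and the one-private-roll-per-bettor setup are my assumptions, not from the original post):

```python
# Toy model: a die with unknown bias, symmetric Dirichlet(1,...,1) prior
# over the six faces. Each bettor is assumed to have privately seen one
# roll come up 3; we bet on six only if P(next roll = 6) > 10%.

def p_six_after_threes(k_threes, prior_counts=1.0, faces=6):
    """Posterior mean of P(six) after observing k rolls that all showed 3."""
    total = faces * prior_counts + k_threes
    return prior_counts / total  # the six face received no observations

for k in range(6):
    p = p_six_after_threes(k)
    decision = "bet on six" if p > 0.10 else "pass"
    print(k, round(p, 4), decision)
```

Under these assumptions, three observed 3s still leave P(six) at 1/9, just above the 10% threshold, while five push it to 1/11 and you should pass.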

Attacking machine learning with adversarial examples

2017-02-17T00:28:09.908Z · score: 3 (4 votes)

Gates 2017 Annual letter

2017-02-15T02:39:12.352Z · score: 4 (5 votes)

Raymond Smullyan has died

2017-02-12T14:20:57.626Z · score: 3 (4 votes)
Comment by ike on Are we running out of new music/movies/art from a metaphysical perspective? · 2017-02-05T20:17:04.490Z · score: 2 (2 votes) · LW · GW

But is this because of a fault of the Hollywood system, or is it because there are few significant movie story ideas left that have not been done?

Neither: consumers' revealed preferences favor reboots, so that's what gets made. That's only a "fault" if your preferences differ from those of most consumers.

(Although I've heard someone argue that piracy made independent films less viable: to the extent that consumers would be willing to pay were no pirated option available, but the lack of such payments causes fewer films to be made, that would be a market-failure argument. I don't really have enough knowledge to judge that as an explanation.)

Comment by ike on Expected Error, or how wrong you expect to be · 2016-12-24T23:35:44.910Z · score: 1 (1 votes) · LW · GW

The other tells you that it will be between $50 and $70 million, with an average of $10 million.

Typo?

Comment by ike on Reframing Average Utilitarianism · 2016-12-09T20:02:13.641Z · score: 1 (1 votes) · LW · GW

It's the difference between SIA and SSA. Under SIA, you're randomly chosen from all possible beings, and so you're less likely to exist in world B.
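A toy illustration of the contrast (the numbers and two-world setup are mine, not from the original post):

```python
# Two worlds with equal prior probability, differing only in how many
# observers exist. SIA reweights the prior by observer count; SSA keeps
# the bare prior over worlds.

def sia_posterior(prior_a, n_a, n_b):
    """P(world A | I exist) under SIA: prior weighted by observer count."""
    w_a, w_b = prior_a * n_a, (1 - prior_a) * n_b
    return w_a / (w_a + w_b)

print(sia_posterior(0.5, 10, 1))  # SIA favors the populous world: 10/11
print(0.5)                        # SSA keeps the prior: 1/2
```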

A Few Billionaires Are Turning Medical Philanthropy on Its Head

2016-12-04T15:08:22.933Z · score: 0 (1 votes)
Comment by ike on Double Crux — A Strategy for Resolving Disagreement · 2016-11-29T22:47:16.385Z · score: 1 (1 votes) · LW · GW

2 points:

  1. I've used something similar when evaluating articles: I ask, "What statement of fact would have to be true for the main (vague) conclusion of this article to be correct?" Then I try to figure out whether that fact holds.

2.

For instance: (wildly asserts) "I bet if everyone wore uniforms there would be a fifty percent reduction in bullying." (pauses, listens to inner doubts) "Actually, scratch that—that doesn't seem true, now that I say it out loud, but there is something in the vein of reducing overt bullying, maybe?"

A problem with doing that is that saying something out loud may "anchor" you into giving the wrong confidence level. You might be underconfident because you're doing this without data, or you might just expect yourself not to believe it on a second look.

Comment by ike on Voting is like donating hundreds of thousands to charity · 2016-11-02T23:58:22.541Z · score: 4 (4 votes) · LW · GW

You're implicitly assuming that the way one candidate spends the money is completely valueless and the way the other spends it is maximally efficient. Also, that 75%-influence-over-the-budget figure is way off, come on.

Comment by ike on Risk Contracts: A Crackpot Idea to Save the World · 2016-09-30T16:58:51.005Z · score: 1 (1 votes) · LW · GW

On a re-read, I understand what you mean. But the issue is that it's hard to measure what level of risk a given activity carries (citation needed). I had assumed you were saying something along the lines of "let the market decide", but apparently not. So how do you plan on measuring risk?

If we had an iron-clad process for determining how risky things are, this would be a lot simpler.

Comment by ike on Risk Contracts: A Crackpot Idea to Save the World · 2016-09-30T14:50:30.137Z · score: 0 (0 votes) · LW · GW

I will offer you a bet at any odds you want that humanity will still be around in 10 years.

See http://lesswrong.com/lw/ie/the_apocalypse_bet/

Comment by ike on Linkposts now live! · 2016-09-28T15:54:41.129Z · score: 4 (4 votes) · LW · GW

In Feedly, I need to click once to get to the post and a second time to get to the link. Can you include the link within the body of the RSS item so I can click through directly?

Comment by ike on No negative press agreement · 2016-09-01T12:14:38.996Z · score: 4 (4 votes) · LW · GW

Name and shame media entities that fail to comply with no negative press, or fail to consider a policy.

Ironically, this suggestion is precisely the kind of "negative press" you ostensibly want to eradicate.

You haven't done nearly enough to explain why so-called negative press is bad, or what exactly it is. Many good things have resulted from negative exposés published by the media.

Comment by ike on The call of the void · 2016-08-28T15:48:06.191Z · score: 2 (2 votes) · LW · GW

https://link.springer.com/article/10.1023/A:1016636214258 but I don't see the raw data on a quick look.

From the study (free from the link above)

Impulsive Experiences Scale. The Impulsive Experiences Scale is a 10- item questionnaire designed for this study that asks subjects to rate the urges they have had to do inappropriate and harmful behaviors on a six item likert scale. Subjects were asked to rate the presence and strength of five different impulses/urges; the urge to shout something inappropriate while surrounded by other people, the urge to jump off of a high place, the urge to steal or take something, the urge to jump in front of a train, subway, or car, and the urge to strike or slap someone.

Comment by ike on New Pascal's Mugging idea for potential solution · 2016-08-19T10:47:14.758Z · score: 0 (0 votes) · LW · GW

I'd love to understand what you said about re-arranging terms, but I don't. Can you explain in more detail how you get from the first set of hypotheses/choices (which I understand) to the second?

I just moved the right-hand side down by two places. The sum stays the same, but the relative inequality flips.

As I said earlier, my solution is an argument that in every case there will be an action that strictly dominates all the others.

Why would you think that? I don't really see where you argued for it; could you point me at the part of your comments that says that?

Comment by ike on New Pascal's Mugging idea for potential solution · 2016-08-10T19:14:11.187Z · score: 0 (0 votes) · LW · GW

The problem there, and the problem with Pascal's Mugging in general, is that outcomes with tiny probability dominate the decision. A could be massively worse than B 99.99999% of the time, and naive utility maximization still says to pick B.

One way to fix it is to bound utility. But that has its own problems.
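A minimal numeric sketch of both points, with toy numbers of my own choosing rather than anything from the thread:

```python
# B loses to A 99.99999% of the time, yet naive expected-utility
# maximization picks B because of the tiny-probability jackpot.

p_jackpot = 1e-7          # tiny probability of a huge payoff
jackpot = 1e9

ev_a = 1.0                                               # A: 1 util, guaranteed
ev_b = p_jackpot * jackpot + (1 - p_jackpot) * (-1.0)    # B: almost always -1

# Bounding utility (the proposed fix) reverses the ranking in this example:
ev_b_bounded = p_jackpot * min(jackpot, 50.0) + (1 - p_jackpot) * (-1.0)

print(ev_a, ev_b, ev_b_bounded)
```

Here ev_b comes out near 99 despite B being worse almost always, while capping utility at 50 drives B's expectation below A's.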

The problem with your solution is that it's not complete in the formal sense: you can say one thing is better than another only if it strictly dominates it, but if neither strictly dominates the other, you can't say anything.

I would also claim that your solution fails framing invariants that any decision theory should arguably satisfy. For example, what about changing the order of the terms? Let's reframe utilities as already multiplied by their probabilities, so we can move them around without changing the numbers: if I say utility 5 at p:.01, that really means you get utility 500 in that scenario, so it adds 5 to the total in expectation. Now consider the following utilities:

1<2 p:.5

2<3 p:.5^2

3<4 p:.5^3

n<n+1 p:.5^n

...

etc. So if you're faced with choosing between something that gives you the left side and something that gives you the right side, choose the right side.

But clearly rearranging terms doesn't change the expected utility, since that's just the sum of all the terms. So the above is equivalent to:

1>0 p:.5

2>0 p:.5^2

3>2 p:.5^3

4>3 p:.5^4

n>n-1 p:.5^n

So your solution is inconsistent if it satisfies the invariant "moving expected utility between outcomes doesn't change the best choice".
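The rearrangement above can be checked mechanically. This sketch uses the expected-contribution convention from the comment (each row's stated number is already utility times probability) on a finite prefix of the sequence:

```python
# Row n pits left = n against right = n + 1 (expected contributions).
N = 20
left = [n for n in range(1, N + 1)]
right = [n + 1 for n in range(1, N + 1)]

# Before shifting: the right side wins every pairwise comparison.
assert all(l < r for l, r in zip(left, right))

# Shift the right column down two rows; rows 1 and 2 now hold 0.
shifted = [0, 0] + right[:-2]

# Same terms, just delayed: shifted row n holds the old right row n - 2 ...
assert shifted[2:] == right[:-2]

# ... yet now the left side wins every pairwise comparison.
assert all(l > r for l, r in zip(left, shifted))
```

The flip is possible because both sides sum to a divergent series, so "the sum stays the same" only in the sense that the shifted column contains exactly the same terms.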

Comment by ike on Now is the time to eliminate mosquitoes · 2016-08-07T04:43:17.915Z · score: 1 (1 votes) · LW · GW

By the way, looking at https://www.reddit.com/r/Futurology/duplicates/4vhqoc/should_we_wipe_mosquitoes_off_the_face_of_the/ , there are another couple of submissions in other subs right after mine, presumably inspired by my post (the article is from February); one in TIL also hit the front page.

Comment by ike on Now is the time to eliminate mosquitoes · 2016-08-07T02:45:33.301Z · score: 9 (9 votes) · LW · GW

I just submitted it and got lucky. It's the kind of thing that sub likes. I've had around three posts hit the front page out of probably thousands since I started; there's definitely a large luck factor involved.

We're consequentialists here, so I get all the credit for it even if it wasn't much effort, right?

Comment by ike on Now is the time to eliminate mosquitoes · 2016-08-06T22:51:28.039Z · score: 15 (15 votes) · LW · GW

I got https://www.reddit.com/r/Futurology/comments/4vhqoc/should_we_wipe_mosquitoes_off_the_face_of_the/ to the front page of Reddit, which probably got somewhere on the order of 50,000 people to read it, or at least think about the idea, which can only help in terms of moving it into the Overton window.

I know at one point it was number 6 for logged out users.

Comment by ike on New Pascal's Mugging idea for potential solution · 2016-08-06T22:47:01.036Z · score: 1 (1 votes) · LW · GW

In your example, how much should you spend to choose A over B? Would you give up an unbounded amount of utility to do so?

Comment by ike on New Pascal's Mugging idea for potential solution · 2016-08-05T00:24:21.294Z · score: 1 (1 votes) · LW · GW

See https://arxiv.org/abs/0712.4318 , you need to formally reply to that.

Comment by ike on Is this dark arts and if it, is it justified? · 2016-07-26T13:06:04.980Z · score: 0 (0 votes) · LW · GW

I fixed the link, the period at the end was messing it up.

Comment by ike on A rational unfalsifyable believe · 2016-07-25T02:42:10.268Z · score: 1 (1 votes) · LW · GW

Can believing an unfalsifyable believe be rational?

Sure, see http://lesswrong.com/lw/ss/no_logical_positivist_i/

Comment by ike on Two forms of procrastination · 2016-07-16T22:39:53.210Z · score: 0 (0 votes) · LW · GW

For interesting long articles, you can use Pocket or a similar app to save them for when you have more time.

Comment by ike on Zombies Redacted · 2016-07-03T02:23:29.113Z · score: 8 (8 votes) · LW · GW

Are you planning on doing this for more of the sequences? I think that would be great.

Comment by ike on Market Failure: Sugar-free Tums · 2016-06-30T12:51:55.484Z · score: 6 (6 votes) · LW · GW

Seeing prices go up doesn't mean there's demand. If demand is low, then this isn't a market failure: it can make perfect sense that low-demand products don't attract large producers, so their prices don't reflect economies of scale.

So let's look at the actual sales. I've sold a bit on Amazon and know some tools that can give good estimates of how many sales an item has had.

https://www.amazon.com/gp/product/B00REF5PM2/, the generic currently selling for ~$30, is currently ranked 246,691 in Health & Personal Care (archived: https://archive.is/5RRNF); this number fluctuates, so it might differ when you look. According to http://junglescout.com/estimator , such a rank sells fewer than 5 units a month. Other tools I've checked give similar results: under 5 a month.

https://www.amazon.com/dp/B000GCECRO/, the Tums brand, is ranked 33,992 in Health & Personal Care (https://archive.is/cP0k4). Jungle Scout estimates 122 sales a month; another source I checked says 91, so 100 a month is probably close. Now, maybe it would sell more if the price were lower? The sales rank when the price was lower seems to have been in the 10,000-20,000 range, or 200-300 sales a month.

Let's say there are 2,000 sales a year: if you offer it at $5 and make $2 profit on each, you're making $4,000 a year. That doesn't seem enough for a large company to bother with. (You should also account for sales at other locations. But you pointed to Amazon as proof of demand, when it can just as easily be proof of lack of supply plus lackluster demand.)
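The arithmetic, spelled out with the comment's own figures (the 2,000-unit annual total is my rounding across channels):

```python
# Back-of-the-envelope: ~100 Amazon sales a month at the lower price,
# call it 2,000 units a year across all channels, at $2 profit per unit.

annual_units = 2000       # assumed total across channels
profit_per_unit = 2.0     # dollars, on a $5 sale price

annual_profit = annual_units * profit_per_unit
print(annual_profit)  # 4000.0, plausibly below a large company's threshold
```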

I don't actually know what thresholds companies tend to have for keeping products alive. If you have better information on that it would be helpful.

Comment by ike on Morality of Doing Simulations Is Not Coherent [SOLVED, INVALID] · 2016-06-14T05:41:27.335Z · score: 0 (0 votes) · LW · GW

I never intended to claim otherwise, or even, the whole original point doesn't make sense without this.

I'm not sure how the original post makes sense if you agree. I understood the original point as:

  1. Through some tricks with physics we can "skip" the middle states when simulating
  2. So we can evaluate actions without instantiating those middle states

This seems to imply that our evaluations don't need to take the middle states into account. But value is definitely not linear in world-states, so you can't just subtract the trick states out.

This is a problem even if your skip turns out to be possible.

Comment by ike on Morality of Doing Simulations Is Not Coherent [SOLVED, INVALID] · 2016-06-14T01:49:30.328Z · score: 0 (0 votes) · LW · GW

"final state" - state of what?

The world. You're assuming that the value of the world depends only on its state at the end of your simulation. But it doesn't: events that happen between now and then also matter. So if you want to check how bad the world will be if you do action X, you can't just use your trick to find out how the world looks after doing X; you also need to know what happened in between.

If you don't agree that states in between matter, consider whether torturing someone and then erasing their memory is morally problematic.

Comment by ike on Morality of Doing Simulations Is Not Coherent [SOLVED, INVALID] · 2016-06-10T15:10:08.716Z · score: 0 (0 votes) · LW · GW

Even if your other assumptions work, I dispute the claim that value depends only on the final state. If you reach the same outcome via two different paths, but one involved torture and the other didn't, they aren't valued equally.

Therefore, if you didn't simulate the torture, you can't get a value for how bad it is.

Comment by ike on Rationality Quotes June 2016 · 2016-06-03T18:52:13.052Z · score: 6 (8 votes) · LW · GW

If you wish to diagnose an illness, design a computer, or discover a new scientific law, you do not do it by picking a dozen people at random, forming them into a committee, and demanding that they give you an answer.

David Friedman

Comment by ike on How do you learn Solomonoff Induction? · 2016-05-17T18:39:39.067Z · score: 3 (3 votes) · LW · GW

https://wiki.lesswrong.com/wiki/Solomonoff_induction and http://lesswrong.com/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/ should get you started.

Comment by ike on Newcomb versus dust specks · 2016-05-16T20:18:36.680Z · score: 0 (0 votes) · LW · GW

You don't get to specify a universe without the kind of causality that the kind of CDT we use in our universe depends on, and then claim that this says something significant about decision theory.

What kind of causality is this, given that you assert the correct thing to do in the smoking lesion problem is to refrain from smoking, when the smoking lesion is one of the standard cases where CDT says to smoke?

"A causes B, therefore B causes A" is a fallacy no matter what arguments you put forward.

In terms of CDT, we can say that smoking causes the gene

CDT asserts the opposite, and so if you claim this then you disagree with CDT.

You don't understand what counterfactuals are.

Comment by ike on Newcomb versus dust specks · 2016-05-16T18:41:31.055Z · score: 0 (0 votes) · LW · GW

No it doesn't. It assumes a "perfect predictor" is what it is. I don't give a damn about evidence - we're specifying properties of a universe here.

You said "you shouldn't smoke", which is a decision-theoretic claim, not a specification. It's consistent with EDT, but not with CDT.

You don't get to say "Everybody who smokes has this gene" as a property of the universe, and then pretend to be an exception to a property of the universe because you have a bizarre and magical agency that gets to bypass properties of the universe.

In other words, you're denying the exact thing that CDT asserts.

There is a contradiction there

Which is what a counterfactual is.

Whatever your theory is, it is denying core claims that CDT makes, so you're denying CDT (and implicitly assuming EDT as the method for making decisions, your arguments literally map directly onto EDT arguments).

Comment by ike on Newcomb versus dust specks · 2016-05-16T17:09:27.500Z · score: 0 (0 votes) · LW · GW

you will smoke if and only if you have the gene, and you will have the gene if and only if you smoke, and in which case you shouldn't smoke

This implicitly assumes EDT.

At the point at which the gene is a perfect predictor, if you have a genetic test and you don't have the gene, and then smoke

But that's not how CDT counterfactuals work. You cut off the upstream nodes. Since the choice to smoke doesn't causally affect the gene, smoking doesn't counterfactually contradict the prediction. If you would actually smoke, then yes; but counterfactuals don't imply there's any chance of that happening in reality.

Comment by ike on Newcomb versus dust specks · 2016-05-15T17:40:00.926Z · score: 0 (0 votes) · LW · GW

I'm not equating them. TDT is CDT with some additional claims about causality for logical uncertainties.

You deny those claims, but causality doesn't matter to you anyway, because you deny CDT.

Comment by ike on Newcomb versus dust specks · 2016-05-15T17:30:51.176Z · score: 0 (0 votes) · LW · GW

So all you're doing is denying CDT and asserting EDT is the only reasonable theory, like I thought.

Comment by ike on Newcomb versus dust specks · 2016-05-15T16:43:13.519Z · score: 0 (0 votes) · LW · GW

Do you think that if a lesion has a 100% chance to cause you to decide to smoke, and you do not decide to smoke, you might have the lesion anyway?

No. But the counterfactual probability of having the lesion given that you smoke is identical to the counterfactual probability given that you don't. This follows directly from the meaning of a counterfactual, and you claimed to know what counterfactuals are. Are you just arguing against counterfactual probability playing any role in decisions?

Comment by ike on Newcomb versus dust specks · 2016-05-15T15:28:50.338Z · score: 0 (0 votes) · LW · GW

If you choose one-boxing / not smoking, it turns out that you get the million and didn't have the lesion. If you choose two-boxing / smoking, it turns out that you don't get the million, and you had the lesion.

Well as I said above, this ignores causality. Of course if you ignore causality, you'll get the EDT answers.

And if you define the right answer as the EDT answer, then whenever it differs from another decision theory you'll think the other theory gets the wrong answer.

None of this is particularly interesting, and I already made these points above.

Newcomb versus dust specks

2016-05-12T03:02:29.720Z · score: -1 (6 votes)

The guardian article on longevity research [link]

2015-01-11T19:02:52.830Z · score: 8 (9 votes)

Discussion of AI control over at worldbuilding.stackexchange [LINK]

2014-12-14T02:59:47.239Z · score: 6 (7 votes)

Rodney Brooks talks about Evil AI and mentions MIRI [LINK]

2014-11-12T04:50:23.828Z · score: 3 (6 votes)