Posts

Tales from Prediction Markets 2021-04-03T23:38:22.728Z
Links for Feb 2021 2021-03-01T05:13:08.562Z
Links for January 2021 2021-02-01T23:54:02.103Z
Links for Dec 2020 2021-01-05T19:53:04.672Z
Philosophical Justifications for Torture 2020-12-03T22:14:29.020Z
Links for Nov 2020 2020-12-01T01:31:51.886Z
The Exploitability/Explainability Frontier 2020-11-26T00:59:18.105Z
Solomonoff Induction and Sleeping Beauty 2020-11-17T02:28:59.793Z
The Short Case for Verificationism 2020-09-11T18:48:00.372Z
This Territory Does Not Exist 2020-08-13T00:30:25.700Z
ike's Shortform 2019-09-01T18:48:35.461Z
Attacking machine learning with adversarial examples 2017-02-17T00:28:09.908Z
Gates 2017 Annual letter 2017-02-15T02:39:12.352Z
Raymond Smullyan has died 2017-02-12T14:20:57.626Z
A Few Billionaires Are Turning Medical Philanthropy on Its Head 2016-12-04T15:08:22.933Z
Newcomb versus dust specks 2016-05-12T03:02:29.720Z
The guardian article on longevity research [link] 2015-01-11T19:02:52.830Z
Discussion of AI control over at worldbuilding.stackexchange [LINK] 2014-12-14T02:59:47.239Z
Rodney Brooks talks about Evil AI and mentions MIRI [LINK] 2014-11-12T04:50:23.828Z

Comments

Comment by ike on What weird beliefs do you have? · 2021-04-16T22:22:51.348Z · LW · GW

I granted your supposition of such things existing. I myself don't believe any objective external reality exists, as I don't think those are meaningful concepts.

Comment by ike on What weird beliefs do you have? · 2021-04-16T21:35:03.494Z · LW · GW

Perhaps. It's not clear to me how such facts could exist, or what claims about them mean.

If you've got self-locating uncertainty, though, you can't have objective facts about what atoms near you are doing.

Comment by ike on What weird beliefs do you have? · 2021-04-16T16:42:33.395Z · LW · GW

>If they didn't write the sentence, then they are not identical to me and don't have to accept that they are me.

Sure, some of those people are not identical to some other people. But how do you know which subset you belong to? A version of you that deluded themselves into thinking they wrote the sentence is subjectively indistinguishable from any other member of the set. You can only get probabilistic knowledge again, i.e. "most of the people in my position are not deluding themselves", which lets you make probabilistic predictions. But saying "X is true" and grounding that as "X is probable" doesn't seem to work. What does "X is true" mean here, when there's a chance it's not true for you?

Comment by ike on Tales from Prediction Markets · 2021-04-15T21:33:34.874Z · LW · GW

This post got linked from https://www.coindesk.com/why-crypto-whales-love-this-prediction-market

Comment by ike on What weird beliefs do you have? · 2021-04-15T02:17:26.922Z · LW · GW

I'm tentatively ok with claims of the sort that a multiverse exists, although I suspect that too can be dissolved.

Note that in your example, the relevant subset of the multiverse is all the people who are deluding themselves into thinking they typed that sentence. If there's no meaningful sense in which you're self-located as someone else vs. that subset, then there's no meaningful sense in which you "actually" typed it.

Comment by ike on What weird beliefs do you have? · 2021-04-15T01:22:48.171Z · LW · GW

What form of realism is consistent with my statement about level 4?

Comment by ike on What weird beliefs do you have? · 2021-04-14T13:14:12.162Z · LW · GW

External reality is not a meaningful concept, and some form of verificationism is valid. I've argued for it in various ways previously on LW; one plausible way to get there is through a multiverse argument.

Verificationism w.r.t. the level 3 multiverse - "there's no fact of the matter where the electron is before it's observed; it's in both places and you have self-locating uncertainty."

Verificationism w.r.t. the level 4 multiverse - "there's no fact of the matter as to anything; as long as it's true in some subsets of the multiverse and false in others, you just have self-locating uncertainty."

Lots of people seem to accept the first but not the second.

Comment by ike on What weird beliefs do you have? · 2021-04-14T13:07:49.791Z · LW · GW

How is that different from, say, the CIA taking ESP seriously (MKULTRA, etc.)?

Comment by ike on Tales from Prediction Markets · 2021-04-04T23:00:42.443Z · LW · GW

From what I can tell, most of the people who lost significant sums on the CO2 markets were generally profitable and +EV. Although I guess I'm mostly seeing input from the people who hang out on the discord all day, which is a skewed sample.

Comment by ike on Tales from Prediction Markets · 2021-04-04T13:59:26.359Z · LW · GW

Prediction markets are tiny compared to real world markets. Something like $100 million total volume on Polymarket since inception. There just aren't as many people making sure they're efficient.

Comment by ike on Tales from Prediction Markets · 2021-04-04T04:43:16.924Z · LW · GW

It's actually a bit worse - there's a 2% fee paid to liquidity providers, so if you only bet and don't provide liquidity then you lose money on average. Of course you can lose money providing liquidity too if the market moves against you. Anyone can provide liquidity and get a share of that 2%.
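For concreteness, a toy sketch of that accounting (how exactly the 2% gets charged is a simplifying assumption here, not a description of Polymarket's actual contract): a binary market priced at the true probability is zero-EV before fees, so whatever fee is skimmed from pure bettors is their expected loss and the liquidity providers' income.

```python
# Toy illustration: fair binary market plus a 2% fee charged on the stake (assumed mechanics).
true_prob = 0.5        # assume the market price equals the true probability
price_per_share = 0.5  # cost of one YES share
stake = 100.0
fee_rate = 0.02        # the 2% paid to liquidity providers

shares = stake / price_per_share
expected_payout = true_prob * shares * 1.0   # each winning share redeems for $1
total_cost = stake * (1 + fee_rate)          # assume the fee is charged on the stake

print(expected_payout - total_cost)          # -2.0: the pure bettor's expected loss
```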

Comment by ike on What is the semantics of assigning probabilities to future events? · 2021-04-01T14:14:04.604Z · LW · GW

Probability is in the mind. It's relative to the information you have.

In practical terms, you typically don't have good enough resolution to get individual percentage point precision, unless it's in a quantitative field with well understood processes.

Comment by ike on Speculations Concerning the First Free-ish Prediction Market · 2021-03-31T13:31:41.265Z · LW · GW

USDC is a very different thing from Tether.

Do you have most of your net worth tied up in ETH, or something other than USD at any rate? If not, I don't see how the volatility point could apply.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-14T17:16:55.356Z · LW · GW

Harvest automatically does this, so your only exposure is to FARM, which seems likely to hold its value as long as money is locked up there.

Comment by ike on Strong Evidence is Common · 2021-03-14T02:13:53.304Z · LW · GW

How much evidence is breaking into the top 50 on Metaculus in ~6 months?

I stayed out of finance years ago because I thought I didn't want to compete with Actually Smart People.

Then I jumped in when the prediction markets were clearly not dominated by the Actually Smart.

But I still don't feel confident enough to try the financial markets.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T20:16:36.678Z · LW · GW

20% annually on USDC vault at harvest.finance.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T19:57:42.432Z · LW · GW

Yes, you need to deposit USD. If you don't have USD, you should convert using a non-crypto service, and you'll probably get lower costs, although I don't have experience with that.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T19:04:34.959Z · LW · GW

Yeah, if you do it through Poly instead of matic it's more expensive.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T17:37:57.746Z · LW · GW

Costs around $50 to withdraw depending on gas and eth fees at the time. It's cheaper if you use matic directly.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T15:20:34.134Z · LW · GW

I'm currently interested in the 100M vaccine market, please PM me if you want to spend some time modelling it with me. I spent a lot of time last week collecting relevant data and I have a pretty substantial position currently.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T15:19:26.202Z · LW · GW

I used Gemini to purchase GUSD and used Curve to convert to USDC, and also bought a bunch of USDC on Coinbase directly.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T15:17:55.831Z · LW · GW

This is a decent guide, but some points are wrong - for instance, you can convert to USDC on regular Coinbase just fine with no fees.

I successfully made around 90k on the various Trump and Biden markets on Polymarket and have been meaning to write up something about it. The free money is mostly gone, and I haven't bothered to get the 4% returns over two months because I can get better returns for the same level of risk in DeFi (which I also want to write an article about; 20% close-to-risk-free returns or 40% slightly risky returns are both very high).

Comment by ike on Sleep math: red clay blue clay · 2021-03-08T04:46:30.464Z · LW · GW

Lbh pna trg O neovgenevyl pybfr gb 100.

Cebbs ol vaqhpgvba. Fhccbfr fbzr cebprqher pna trg O gb 100-K, naq N gb K. Gura gurer vf n cebprqher gung jvyy trg N gb K-(K/20)^2. Ercrng rabhtu gvzrf gb trg O neovgenevyl pybfr gb 100.

Gur cebprqher: gnxr unys bs O naq unys bs N naq qb gur bevtvany cebprqher. Gura gnxr guvf unys bs N naq pbzovar jvgu gur bgure unys bs O, gura gnxr gur bgure unys bs N (ng 100) naq pbzovar jvgu gur 2aq unys bs O ol hfvat gur bevtvany cebprqher, gura zvk gur Nf naq Of gbtrgure frcnengryl.

Erfhygf ng rnpu fgrc:

  1. Unys bs O: 100-K. Unys bs N: K
  2. 2aq unys bs O: K/2. 1fg unys bs N: K/2
  3. 2aq unys bs O: K/2 + (100-K)*(100-K/2)/100 2aq unys bs N: K/2 + K(100-K/2)/100
  4. Nirentr bs Nf: K/2 + K(100-K/2)/200 K-(K/20)^2

Jr pna gura cyht va K=50 naq frr jurer jr trg: K=43.75 K=38.96 K=35.17 K=32.08

Guvf shapgvba znl gnxr n ybat gvzr gb trg gb mreb ohg vs lbh vgrengr ybat rabhtu vg unf gb trg gurer, gurer'f ab bgure cbvagf nybat gur jnl vg pbhyq fgbc ng.

Haven't fully verified this.

Comment by ike on Promoting Prediction Markets With Meaningless Internet-Point Badges · 2021-02-09T20:06:12.403Z · LW · GW

Yes, you should definitely milk your PhD for as much status as possible, Dr. Manheim. 

Comment by ike on Promoting Prediction Markets With Meaningless Internet-Point Badges · 2021-02-09T17:42:08.780Z · LW · GW

https://www.metaculus.com/accounts/profile/114222/

My current all-time Brier score is .1, vs .129 for the Metaculus prediction and .124 for the community prediction on the same questions.

I'm also in the top 20 in points per question on https://metaculusextras.com/points_per_question

Both of those metrics heavily depend on question selection, so it's difficult to compare people directly. But neither has to do with volume of questions.
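For readers unfamiliar with the metric, a minimal sketch of how a Brier score like the ones above is computed (the forecasts and outcomes here are made up for illustration): it's the mean squared error between probability forecasts and binary outcomes, so lower is better, and always guessing 50% scores 0.25.

```python
# Brier score = mean squared error of probabilistic forecasts against 0/1 outcomes.
forecasts = [0.9, 0.2, 0.7, 0.4]   # hypothetical probabilities assigned to "yes"
outcomes  = [1,   0,   1,   1]     # what actually happened

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(brier)   # 0.125 for these made-up numbers
```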

Comment by ike on Promoting Prediction Markets With Meaningless Internet-Point Badges · 2021-02-08T20:40:30.317Z · LW · GW

As a top-50 metaculuser, I endorse all proposals that give me more status.

Comment by ike on Short, Extreme, Forgotten Torture vs Death · 2021-02-07T16:08:55.998Z · LW · GW

I'm skeptical that physical pain scales beyond 2 or so orders of magnitude in a given span of time. I'm also skeptical of the coherence of death as an ontological possibility.

Being forced to choose between two things I believe are incoherent, I'd pick the torture. I'm more worried that there's a coherent notion of death being referenced than that some entity will experience a level of pain that seems impossible. There are multiple problems with the concept of pain here: it's not clear the entity experiencing it would be conscious during that time frame (especially if they have no memory, as memory is tied to consciousness), it's not clear that entity would be identifiable as me, it's not clear that upping some pain number actually corresponds to that level of utility, as utility is plausibly bounded over short intervals, etc.

Comment by ike on Does anyone else sometimes "run out of gas" when trying to think? · 2021-02-06T03:58:41.225Z · LW · GW

I've found taking a long bath is quite useful if I want to think about a specific topic in depth without distractions. At least one of my LW posts was prompted by this.

Comment by ike on Poll: Which variables are most strategically relevant? · 2021-01-22T22:37:27.326Z · LW · GW

How important will scaling relatively simple algorithms be, compared to innovation on the algorithms?

Comment by ike on Why do stocks go up? · 2021-01-18T05:46:07.805Z · LW · GW

Did you see my initial reply at https://www.lesswrong.com/posts/4vcTYhA2X99aGaGHG/why-do-stocks-go-up?commentId=wBEnBKqqB7TRXya8N which was left before you replied to me at all? I thought that added sufficient caveats. 

>"While it is expected that stocks will go up, and go up more than bonds, it is yet to be explained why they have gone up so much more than bonds." 

Yeah, I'd emphasize slightly more in expectation. 

Comment by ike on Why do stocks go up? · 2021-01-18T05:23:30.962Z · LW · GW

The vast majority of the equity premium is unexplained. When people say "just buy stocks and hold for a long period and you'll make 10% a year", they're asserting that the unexplained equity premium will persist, and I have a problem with that assumption.

I tried to clarify this in my first reply. You should interpret it as saying that stocks were massively undervalued and shouldn't have gone up significantly more than bonds. I was trying to explain and didn't want to include too many caveats, instead leaving them for the replies.

It's interesting to note that several other replies gave the simplistic risk response without the caveat that risk can only explain a small minority of the premium.

Comment by ike on Why do stocks go up? · 2021-01-18T03:30:54.731Z · LW · GW

Start with https://en.wikipedia.org/wiki/Equity_premium_puzzle. There's plenty of academic sources there. 

People have grown accustomed to there being an equity premium to the extent that there's a default assumption that it'll just continue forever despite nobody knowing why it existed in the past. 

>Isn't there more real wealth today than during the days of the East India Company? If a stock represents a piece of a businesses, and those businesses now have more real wealth today than 300 years ago, why shouldn't stock returns be quite positive?

I simplified a bit above. What's unexplained is the excess return of stocks over risk-free bonds. When there's more real wealth in the future, the risk-free rate is higher. Stock returns would end up slightly above the risk-free rate because they're riskier. The puzzle is that stock returns are way, way higher than the risk-free rate and this isn't plausibly explained by their riskiness.

Comment by ike on Why do stocks go up? · 2021-01-17T22:15:43.555Z · LW · GW

Well, there's some probability of it paying out before then.

If the magic value is a martingale and the payout timing is given by a Poisson process, then the stock price should remain at a constant discount to the magic value. You will gain on average by holding the stock until the payout, but won't gain in expectation by buying and selling the stock.
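A minimal Monte Carlo sketch of that claim, with toy parameters assumed for illustration (a per-step payout probability p as the discrete stand-in for the Poisson rate, a required per-step return r, and mean-1 multiplicative value shocks): because the magic value is a martingale and the payout time is independent of it, the fair price works out to a constant fraction p/(p+r) of the current magic value.

```python
import numpy as np

rng = np.random.default_rng(0)
p, r, V0 = 0.02, 0.05, 100.0   # assumed toy values, not from the comment
n_paths = 20_000

prices = np.empty(n_paths)
for i in range(n_paths):
    tau = rng.geometric(p)                    # steps until the Poisson-style payout fires
    shocks = rng.normal(1.0, 0.02, size=tau)  # mean-1 multiplicative steps: a martingale
    V_tau = V0 * shocks.prod()                # magic value at the payout time
    prices[i] = V_tau / (1 + r) ** tau        # this path's discounted payout

print(prices.mean())       # Monte Carlo estimate of the fair price today
print(V0 * p / (p + r))    # constant-discount formula: ~28.6, a fixed multiple of V0
```

Since that multiple doesn't change over time, buying and later selling before the payout has no expected edge; the expected gain only shows up when the payout actually fires.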

Comment by ike on Why do stocks go up? · 2021-01-17T22:00:57.347Z · LW · GW

>It seems obvious to me I shouldn't expect this company's price to go up faster than the risk free rate, yet the volatility argument seems to apply to it.

You should, because the company's current value will be lower than $10 million due to the risk. Your total return over time will be positive, while the return for a similar company that never varies will be 0 (or the interest rate if nonzero).

Comment by ike on Why do stocks go up? · 2021-01-17T21:53:57.940Z · LW · GW

The classic answer is risk. Stocks are riskier than bonds, so they should be underpriced relative to bonds (and therefore have higher returns).

But we know how risky stocks have been, historically. We can calculate how much higher a return that level of risk should lead to, under plausible risk tolerances. The equity premium puzzle is that the observed returns on stocks are significantly higher than this.

Read through the Wikipedia page on the equity premium puzzle. It's good.
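A back-of-the-envelope version of that calculation, with illustrative inputs (these specific figures are assumptions roughly in the ballpark of the standard Mehra-Prescott-style numbers, not taken from this comment): under CRRA utility the equity premium is approximately gamma times the covariance of consumption growth with market returns, so you can back out the risk-aversion coefficient gamma needed to justify the observed premium.

```python
# Implied risk aversion from the observed equity premium (illustrative inputs assumed, see above).
observed_premium = 0.06    # ~6%/yr historical excess return of stocks over T-bills
sigma_consumption = 0.036  # volatility of annual consumption growth
sigma_market = 0.17        # volatility of annual stock returns
correlation = 0.4          # assumed correlation between the two

cov = correlation * sigma_consumption * sigma_market
implied_gamma = observed_premium / cov
print(round(implied_gamma, 1))  # ~24.5: far above the single-digit values usually thought plausible
```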

Comment by ike on Why do stocks go up? · 2021-01-17T21:50:43.991Z · LW · GW

The equity premium puzzle is still unsolved. The answer to your question is that nobody knows the answer. Stocks shouldn't have gone up as much as they did historically; none of our current theories are capable of explaining why they did. Equivalently, stocks were massively underpriced over the last century or so and nobody knows why.

If you don't know why something was mispriced in the past, you should be very careful about asserting that it will or won't continue to be mispriced in the future.

Comment by ike on ike's Shortform · 2020-12-31T17:35:39.751Z · LW · GW

The other day a piece fell off of one side of my glasses (the part that touches the nose).

The glasses stay on, but I've noticed a weird feeling of imbalance at times. I could be imagining it; I'm able to function apparently regularly. But I was thinking that the obvious analogy is to cinematography: directors consciously adjust camera angles and framings in order to induce certain emotions or reactions to a scene. It's plausible that even a very slight asymmetry in your vision can affect you.

If this is true, might there be other low hanging fruit for adjusting your perception to increase focus?

Comment by ike on Cultural accumulation · 2020-12-06T13:48:07.352Z · LW · GW

If you had our entire society, you'd have enough people who know what they're trying to do that they should be able to figure out how to get there from 1200-era artifacts. It might take several decades to set up the mining/factories etc, and it might take several decades or more to get the politics to a place where you'd be able to try.

Comment by ike on The Hard Problem of Magic · 2020-12-05T01:34:53.413Z · LW · GW

I thought it was extremely clear that magic is meant to mean consciousness, from the title alone as well as the examples, and that the post is criticizing / satirizing those who make the corresponding arguments about consciousness/qualia.

Comment by ike on Links for Nov 2020 · 2020-12-01T05:14:45.214Z · LW · GW

My favorite link this time around was the baseball antitrust one, although the qntm series has been really good.

Comment by ike on The Exploitability/Explainability Frontier · 2020-11-27T15:25:15.886Z · LW · GW

Assume you're at the frontier of being able to do research in that area and have similar abilities to others in that reference class. The total amount of effort most of those people will put in is the same, but it will be split across these two factors differently. The system being unexploitable corresponds to the sum here being constant.

There can be examples where both sides are difficult; those lie outside the frontier.

Re politics, there are some issues that are difficult, some issues that are value judgments, and some that are fairly simple in the sense that spending a week seriously researching is enough to be pretty confident of the direction policy should be moved in.

Comment by ike on The Exploitability/Explainability Frontier · 2020-11-27T14:48:45.608Z · LW · GW

My point is that it's rare and therefore difficult to discover.

The kinds that are less rare are easier to discover but harder to convince others of, or at least harder to convince people that they matter.

I was drawing on this example, by the way: https://econjwatch.org/articles/recalculating-gravity-a-correction-of-bergstrands-1985-frictionless-case

A 35-year-old model had a simple typo in it that got repeated in papers that built on it. Very easy to convince people that this is the case, but very difficult to discover such errors - most such papers don't have those errors, so you need to replicate a lot of correct papers to find the one that's wrong.

If it's difficult to show that the typo actually matters, that's part of the difficulty of discovering it. My point is you should expect the sum of the difficulty in explaining and the difficulty in discovery to be roughly constant.

Comment by ike on Why is there a "clogged drainpipe" effect in idea generation? · 2020-11-20T21:14:28.218Z · LW · GW

Your mind tracks the idea so as not to forget it. This reduces the effective working memory space, which makes it harder to think.

Comment by ike on The Presumptuous Philosopher, self-locating information, and Solomonoff induction · 2020-11-17T02:34:00.273Z · LW · GW

I've written a post that argues that Solomonoff Induction actually is a thirder, not a halfer, and sketches an explanation. 

https://www.lesswrong.com/posts/Jqwb7vEqEFyC6sLLG/solomonoff-induction-and-sleeping-beauty 

Comment by ike on Down with Solomonoff Induction, up with the Presumptuous Philosopher · 2020-11-17T02:33:38.561Z · LW · GW

I've written a post that argues that Solomonoff Induction actually is a thirder, not a halfer, and sketches an explanation. 

https://www.lesswrong.com/posts/Jqwb7vEqEFyC6sLLG/solomonoff-induction-and-sleeping-beauty 

Comment by ike on Reading/listening list for the US failing or other significant shifts? · 2020-11-13T19:59:38.570Z · LW · GW

https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9477.2011.00265.x

Recommend this paper, which suggests that wealthy democracies never fall.

Comment by ike on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-13T14:37:58.340Z · LW · GW

I've been trying to understand, but your model appears underspecified and I haven't been able to get clarification. I'll try again. 

>treat perspectives as fundamental axioms

Have you laid out the axioms anywhere? None of the posts I've seen go into enough detail for me to be able to independently apply your model. 

>like saying I assumed Beauty knows she’s not the clone while I clearly stated the opposite

This is not clear at all. In this comment you wrote:

>the first-person perspective is primitively given simply means you instinctively know which person you are, because you are experiencing everything from its viewpoint.

In the earlier comment:

>from the first-person perspective it is primevally clear the other copy is not me.

I don't know how these should be interpreted other than implying that you know you're not a clone (if you're not). If there's another interpretation, please clarify. It also seems obviously false, because "I don't know which person I am among several subjectively indistinguishable persons" is basically tautological. 

>If MWI does not require perspective-independent reality. Then what is the universal wave function describing?

It's a model that's useful for prediction. As I said in that post, this is my formulation of MWI; I prefer formulations that don't postulate reality, because I find the concept incoherent. 

>But when I followed-up your statement that some CI can be considered a special version of MWI and explained why I think that is not possible, I get no feedback from you...

That was a separate thread, where I was responding to someone who apparently had a broader conception of CI. They never explained what assumptions go into that version, I was merely responding to their point that CI doesn't say much. If you disagree with their conception of CI then my comment doesn't apply. 

>Your position that SIA is the “natural choice” and paradox free is a very strong claim.

It seems natural to me, and none of the paradoxes I've seen are convincing. 

>what is the framework

Start with a standard universal prior, plus the assumption that if an entity "exists" in both worlds A and B and world A "exists" with probability P(A) and P(B) for world B, then the relative probability of me "being" that entity inside world A, compared to world B, is P(A)/P(B). I can then condition on all facts I know about me, which collapses this to only entities that I "can" be given this knowledge. 

Per my metaphysics, the words in quotes are not ontological claims but just a description of how the universal prior works - in the end, it spits out probabilities and that's what gets used. 
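For concreteness, a minimal sketch of that weighting rule applied to Sleeping Beauty (the scenario and numbers are an illustration of the rule, not taken from this exchange): each candidate entity I might be is weighted by the probability of the world it lives in, conditioning on what I know restricts the candidate set, and then everything is renormalized.

```python
# SIA-style weighting: candidates weighted by the probability of their world, then normalized.
candidates = {
    # (world, entity): probability of that world under the prior
    ("heads", "Monday awakening"): 0.5,
    ("tails", "Monday awakening"): 0.5,
    ("tails", "Tuesday awakening"): 0.5,
}

total = sum(candidates.values())
posterior = {key: weight / total for key, weight in candidates.items()}

p_heads = sum(pr for (world, _), pr in posterior.items() if world == "heads")
print(p_heads)  # 1/3 -- the thirder answer this weighting gives for Sleeping Beauty
```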

>If you don’t know what my theory would predict, then give me some scenarios or thought experiments and make me answer them.

I would like to understand in what scenarios your theory refuses to assign probabilities. My framework will assign a probability to any observation, but you've acknowledged that there are some questions your theory will refuse to answer, even though there's a simple observation that can be done to answer the question. This is highly counter-intuitive to me.

Comment by ike on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-12T21:29:46.472Z · LW · GW

I'm trying to understand your critiques, but I haven't seen any that present an issue for my model of SIA, MWI, or anything else. Either you're critiquing something other than what I mean by SIA etc., or you're explaining them badly, or I'm not understanding the critiques correctly. I don't think it should take ten posts to explain your issues with them, but even so I've read through your posts and couldn't figure it out.

It might help if you explained what you take SIA and MWI to mean. When you gave a list of assumptions you believed to be entailed by MWI, I said I didn't agree with that. Something similar may be going on with SIA. A fully worked out example showing what SIA and what your proposed alternative say for various scenarios would also help. What statements does PBR say are meaningful? When is a probability meaningful? 

Comment by ike on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-12T21:24:45.132Z · LW · GW

From my point of view, you keep making new posts building on your theory/critique of standard anthropic thinking without really responding to the issues. I've tried to get clarifications and failed. 

>In the above post, I explained the problem of SSA and SIA: they assume a specific imaginary selection process, and then base their answers on that, whereas the first-person perspective is primitive given.

I have no idea what this means. 

Re paradoxes, you appear to not understand how SIA would apply to those cases using the framework I laid out. I asked you why those paradoxes apply and you didn't answer. If there are particular SIA advocates that believe the paradoxes apply, you haven't pointed at any of them. 

>In another post, I argued that the MWI requires the basic assumption of a perspective-independent objective reality. Your entire response is “I deny that MWI requires that. In fact, all three of your postulates are incoherent, and I believe in a form of MWI that doesn't require any of them.” No explanations.

You gave no explanation for why MWI would imply those statements, why am I expected to spend more time proving a negative than you spent arguing for the positive? You asserted MWI implies those postulates, I asserted otherwise. I've written two posts here arguing for a form of verificationism in which those postulates end up incoherent. 

 

Instead of adding more and more posts to your theory, I think you should zero in on one or two points of disagreement and defend them. Your scenarios and your perspective-based theory are poorly defined, and I can't tell what the theory says in any given case.

Comment by ike on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-11T19:30:21.270Z · LW · GW

>I am arguing they are both wrong.

You keep saying that you're arguing, but as far as I can tell you just say that everyone's wrong and don't really argue for it. I've pointed out issues with all of your posts and you haven't been responding substantively. 

Here, you're assuming that Beauty knows she's not the clone. In that scenario, even thirders would agree the probability of heads is 1/2. This assumption is core to your claims - if not, we don't get "there is no principle of indifference among copies", among other statements above.