Posts

Mega Links Post for Mar-Apr-May 2021 2021-05-31T03:47:47.290Z
Tales from Prediction Markets 2021-04-03T23:38:22.728Z
Links for Feb 2021 2021-03-01T05:13:08.562Z
Links for January 2021 2021-02-01T23:54:02.103Z
Links for Dec 2020 2021-01-05T19:53:04.672Z
Philosophical Justifications for Torture 2020-12-03T22:14:29.020Z
Links for Nov 2020 2020-12-01T01:31:51.886Z
The Exploitability/Explainability Frontier 2020-11-26T00:59:18.105Z
Solomonoff Induction and Sleeping Beauty 2020-11-17T02:28:59.793Z
The Short Case for Verificationism 2020-09-11T18:48:00.372Z
This Territory Does Not Exist 2020-08-13T00:30:25.700Z
ike's Shortform 2019-09-01T18:48:35.461Z
Attacking machine learning with adversarial examples 2017-02-17T00:28:09.908Z
Gates 2017 Annual letter 2017-02-15T02:39:12.352Z
Raymond Smullyan has died 2017-02-12T14:20:57.626Z
A Few Billionaires Are Turning Medical Philanthropy on Its Head 2016-12-04T15:08:22.933Z
Newcomb versus dust specks 2016-05-12T03:02:29.720Z
The Guardian article on longevity research [link] 2015-01-11T19:02:52.830Z
Discussion of AI control over at worldbuilding.stackexchange [LINK] 2014-12-14T02:59:47.239Z
Rodney Brooks talks about Evil AI and mentions MIRI [LINK] 2014-11-12T04:50:23.828Z

Comments

Comment by ike on Outlawing Anthropics: Dissolving the Dilemma · 2021-09-19T13:04:35.349Z · LW · GW

You can start with Bostrom's book on anthropic bias. https://www.anthropic-principle.com/q=book/table_of_contents/

The bet is just that each agent is independently offered a 1:3 deal. There's no dependence as in EY's post.

Comment by ike on Outlawing Anthropics: Dissolving the Dilemma · 2021-09-18T22:57:16.314Z · LW · GW

You're just rejecting one of the premises here, and not coming close to dissolving the strong intuitions / arguments many people have for SIA. If you insist the probability is 50/50, you run into paradoxes anyway (if each agent is offered a 1:3 odds bet, they would reject it if they believe the probability is 50%, but in advance you would want agents who see green to take the bet).
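
Concretely, a minimal sketch of that bet, assuming the toy setup from EY's post (a fair coin decides whether 18 of 20 or 2 of 20 copies wake in green rooms) and reading "1:3 odds" as win 1 if heads, lose 3 if tails - treat the numbers as illustrative:

```python
# Toy version of the green-room betting argument (illustrative numbers).
# A fair coin decides whether 18/20 or 2/20 agents wake in green rooms.
# Each green-seeing agent is offered: win 1 if heads, lose 3 if tails.

p_heads = 0.5
n_green_heads, n_green_tails, n_total = 18, 2, 20

# SIA-style update: P(heads | I see green) = 0.9
p_heads_given_green = (p_heads * n_green_heads / n_total) / (
    p_heads * n_green_heads / n_total + (1 - p_heads) * n_green_tails / n_total
)

# Per-agent EV of taking the bet under each credence
ev_sia = p_heads_given_green * 1 - (1 - p_heads_given_green) * 3  # +0.6: take it
ev_fifty = 0.5 * 1 - 0.5 * 3                                      # -1.0: reject it

# Ex-ante EV summed over all green-seeing agents, if they all take the bet
ev_advance = p_heads * n_green_heads * 1 - (1 - p_heads) * n_green_tails * 3  # +6.0

print(ev_sia, ev_fifty, ev_advance)
```

So the 50/50 believer rejects a bet that, in advance, you'd want every green-seeing agent to take.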

Comment by ike on The Validity of Self-Locating Probabilities · 2021-08-21T23:54:40.525Z · LW · GW

Yes, rejecting probability and refusing to make predictions about the future is just wrong here, no matter how many fancy primitives you put together.

I disagree that standard LW rejects that, though.

Comment by ike on Will the US have more than 100,000 new daily COVID-19 cases before January 1, 2022? · 2021-07-10T16:28:46.950Z · LW · GW

Variance only increases the chance of Yes here. If cases spike and we're averaging over 100k, reporting errors won't matter. If we're averaging 75k, a state dumping extra cases could plausibly push it over 100k.
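
A quick illustrative simulation of that asymmetry (all numbers made up): when the mean sits below the threshold, extra variance raises the probability of crossing it.

```python
import random

# Illustrative only: daily-average cases modeled as normal around 75k;
# the question resolves Yes if the reported number exceeds 100k.
def p_yes(mean, sd, threshold=100_000, trials=100_000):
    hits = sum(random.gauss(mean, sd) > threshold for _ in range(trials))
    return hits / trials

print(p_yes(75_000, 5_000))   # low variance: essentially 0 chance of Yes
print(p_yes(75_000, 15_000))  # higher variance: ~5% chance of Yes
```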

Comment by ike on Which rationalists faced significant side-effects from COVID-19 vaccination? · 2021-06-14T12:52:31.338Z · LW · GW

Two Moderna doses here with no significant side effects

Comment by ike on What's your probability that the concept of probability makes sense? · 2021-05-23T00:42:13.151Z · LW · GW

No

Comment by ike on What is the strongest argument you know for antirealism? · 2021-05-12T21:23:21.046Z · LW · GW

I know what successful communication looks like. 

What does successful representation look like? 

Comment by ike on What is the strongest argument you know for antirealism? · 2021-05-12T21:03:10.891Z · LW · GW

Yes, it appears meaningless; I and others have tried hard to figure out a possible account of it.

I haven't tried to get a fully general account of communication, but I'm aware there's been plenty of philosophical work, and I can see partial accounts that work well enough.

Comment by ike on What is the strongest argument you know for antirealism? · 2021-05-12T20:13:00.454Z · LW · GW

I'm communicating - something I don't have a fully general account of, but which I can do and which has relatively predictable effects on my experiences.

Comment by ike on What is the strongest argument you know for antirealism? · 2021-05-12T18:46:20.276Z · LW · GW

Not at all, to the extent that my head is a territory.

Comment by ike on What is the strongest argument you know for antirealism? · 2021-05-12T17:21:00.906Z · LW · GW

What does it mean for a model to "represent" a territory?

Comment by ike on What is the strongest argument you know for antirealism? · 2021-05-12T17:13:32.530Z · LW · GW

>On the other hand, when I observe that other nervous systems are similar to my own nervous system, I infer that other people have subjective experiences similar to mine.

That's just part of my model. To the extent that empathy of this nature is useful for predicting what other people will do, that's a useful thing to have in a model. But to then say "other people have subjective experiences somewhere 'out there' in external reality" seems meaningless - you're just asserting your model is "real", which is a category error in my view. 

Comment by ike on What is the strongest argument you know for antirealism? · 2021-05-12T12:22:37.704Z · LW · GW

My own argument, see https://www.lesswrong.com/posts/zm3Wgqfyf6E4tTkcG/the-short-case-for-verificationism and the post it links back to.

It seems that if external reality is meaningless, then it's difficult to ground any form of morality that says actions are good or bad insofar as they have particular effects on external reality.

Comment by ike on What weird beliefs do you have? · 2021-05-06T19:45:59.158Z · LW · GW

>But, provided you speak about this notion, why would verificationism lead to external world anti-realism?

Anti-realism is not quite correct here; it's more that claims about external reality are meaningless, as opposed to false.

>One could argue that synthetic statements aren't really about external reality: What we really mean is "If I were to check, my experiences would be as if there were a tree in what would seem to be my garden". Then our ordinary language wouldn't be meaningless. But this would be a highly revisionary proposal. We arguably don't mean to say something like the above. We plausibly simply mean to assert the existence of a real tree in a real garden.

I'm not making a claim about what people actually mean by the words they say. I'm saying that some interpretations of what people say happen to lack meaning. I agree that many people fervently believe in some form of external reality; I simply think that belief is meaningless, in the same way that a belief about where the electron "truly is" is meaningless.

Comment by ike on What weird beliefs do you have? · 2021-04-16T22:22:51.348Z · LW · GW

I granted your supposition of such things existing. I myself don't believe any objective external reality exists, as I don't think those are meaningful concepts.

Comment by ike on What weird beliefs do you have? · 2021-04-16T21:35:03.494Z · LW · GW

Perhaps. It's not clear to me how such facts could exist, or what claims about them mean.

If you've got self-locating uncertainty, though, you can't have objective facts about what atoms near you are doing.

Comment by ike on What weird beliefs do you have? · 2021-04-16T16:42:33.395Z · LW · GW

>If they didn't write the sentence, then they are not identical to me and don't have to accept that they are me.

Sure, some of those people are not identical to some other people. But how do you know which subset you belong to? A version of you that deluded themselves into thinking they wrote the sentence is subjectively indistinguishable from any other member of the set. You can only get probabilistic knowledge, i.e. "most of the people in my position are not deluding themselves", which lets you make probabilistic predictions. But saying "X is true" and grounding that as "X is probable" doesn't seem to work. What does "X is true" mean here, when there's a chance it's not true for you?

Comment by ike on Tales from Prediction Markets · 2021-04-15T21:33:34.874Z · LW · GW

This post got linked from https://www.coindesk.com/why-crypto-whales-love-this-prediction-market

Comment by ike on What weird beliefs do you have? · 2021-04-15T02:17:26.922Z · LW · GW

I'm tentatively ok with claims of the sort that a multiverse exists, although I suspect that too can be dissolved.

Note that in your example, the relevant subset of the multiverse is all the people who are deluding themselves into thinking they typed that sentence. If there's no meaningful sense in which you're self-located as someone else vs. that subset, then there's no meaningful sense in which you "actually" typed it.

Comment by ike on What weird beliefs do you have? · 2021-04-15T01:22:48.171Z · LW · GW

What form of realism is consistent with my statement about level 4?

Comment by ike on What weird beliefs do you have? · 2021-04-14T13:14:12.162Z · LW · GW

External reality is not a meaningful concept; some form of verificationism is valid. I argued for this in various ways previously on LW; one plausible way to get there is through a multiverse argument.

Verificationism w.r.t. the level 3 multiverse - "there's no fact of the matter as to where the electron is before it's observed; it's in both places, and you have self-locating uncertainty."

Verificationism w.r.t. the level 4 multiverse - "there's no fact of the matter as to anything, so long as it's true in some subsets of the multiverse and false in others; you just have self-locating uncertainty."

Lots of people seem to accept the first but not the second.

Comment by ike on What weird beliefs do you have? · 2021-04-14T13:07:49.791Z · LW · GW

How is that different from, say, the CIA taking ESP seriously, MKULTRA, etc.?

Comment by ike on Tales from Prediction Markets · 2021-04-04T23:00:42.443Z · LW · GW

From what I can tell, most of the people who lost significant sums on the CO2 markets were generally profitable and +EV, although I guess I'm mostly seeing input from the people who hang out on the Discord all day, which is a skewed sample.

Comment by ike on Tales from Prediction Markets · 2021-04-04T13:59:26.359Z · LW · GW

Prediction markets are tiny compared to real world markets. Something like $100 million total volume on Polymarket since inception. There just aren't as many people making sure they're efficient.

Comment by ike on Tales from Prediction Markets · 2021-04-04T04:43:16.924Z · LW · GW

It's actually a bit worse - there's a 2% fee paid to liquidity providers, so if you only bet and don't provide liquidity then you lose money on average. Of course you can lose money providing liquidity too if the market moves against you. Anyone can provide liquidity and get a share of that 2%.

Comment by ike on What is the semantics of assigning probabilities to future events? · 2021-04-01T14:14:04.604Z · LW · GW

Probability is in the mind. It's relative to the information you have.

In practical terms, you typically don't have good enough resolution to get individual percentage point precision, unless it's in a quantitative field with well understood processes.

Comment by ike on Speculations Concerning the First Free-ish Prediction Market · 2021-03-31T13:31:41.265Z · LW · GW

USDC is a very different thing from Tether.

Do you have most of your net worth tied up in ETH, or at any rate in something other than USD? If not, I don't see how the volatility point could apply.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-14T17:16:55.356Z · LW · GW

Harvest automatically does this, so your only exposure is to FARM, which seems likely to hold its value as long as money is locked up there.

Comment by ike on Strong Evidence is Common · 2021-03-14T02:13:53.304Z · LW · GW

How much evidence is breaking into the top 50 on Metaculus in ~6 months?

I stayed out of finance years ago because I thought I didn't want to compete with Actually Smart People.

Then I jumped in when the prediction markets were clearly not dominated by the Actually Smart.

But I still don't feel confident to try in the financial markets.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T20:16:36.678Z · LW · GW

20% annually on the USDC vault at harvest.finance.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T19:57:42.432Z · LW · GW

Yes, you need to deposit USD. If you don't have USD, you should convert using a non-crypto service, and you'll probably get lower costs, although I don't have experience with that.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T19:04:34.959Z · LW · GW

Yeah, if you do it through Poly instead of Matic it's more expensive.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T17:37:57.746Z · LW · GW

Costs around $50 to withdraw, depending on gas and ETH fees at the time. It's cheaper if you use Matic directly.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T15:20:34.134Z · LW · GW

I'm currently interested in the 100M vaccine market, please PM me if you want to spend some time modelling it with me. I spent a lot of time last week collecting relevant data and I have a pretty substantial position currently.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T15:19:26.202Z · LW · GW

I used Gemini to purchase GUSD and used Curve to convert to USDC, and also bought a bunch of USDC on Coinbase directly.

Comment by ike on Exploiting Crypto Prediction Markets for Fun and Profit · 2021-03-13T15:17:55.831Z · LW · GW

This is a decent guide, but some points are wrong - for instance, you can convert to USDC on regular Coinbase just fine with no fees.

I made around $90k on the various Trump and Biden markets on Polymarket and have been meaning to write up something about it. The free money is mostly gone, and I haven't bothered to get the 4% returns over two months because I can get better returns for the same level of risk in DeFi (which I also want to write an article about; 20% close-to-risk-free returns or 40% slightly risky returns are both very high).

Comment by ike on Sleep math: red clay blue clay · 2021-03-08T04:46:30.464Z · LW · GW

Lbh pna trg O neovgenevyl pybfr gb 100.

Cebbs ol vaqhpgvba. Fhccbfr fbzr cebprqher pna trg O gb 100-K, naq N gb K. Gura gurer vf n cebprqher gung jvyy trg N gb K-(K/20)^2. Ercrng rabhtu gvzrf gb trg O neovgenevyl pybfr gb 100.

Gur cebprqher: gnxr unys bs O naq unys bs N naq qb gur bevtvany cebprqher. Gura gnxr guvf unys bs N naq pbzovar jvgu gur bgure unys bs O, gura gnxr gur bgure unys bs N (ng 100) naq pbzovar jvgu gur 2aq unys bs O ol hfvat gur bevtvany cebprqher, gura zvk gur Nf naq Of gbtrgure frcnengryl.

Erfhygf ng rnpu fgrc:

  1. Unys bs O: 100-K. Unys bs N: K
  2. 2aq unys bs O: K/2. 1fg unys bs N: K/2
  3. 2aq unys bs O: K/2 + (100-K)*(100-K/2)/100 2aq unys bs N: K/2 + K(100-K/2)/100
  4. Nirentr bs Nf: K/2 + K(100-K/2)/200 K-(K/20)^2

Jr pna gura cyht va K=50 naq frr jurer jr trg: K=43.75 K=38.96 K=35.17 K=32.08

Guvf shapgvba znl gnxr n ybat gvzr gb trg gb mreb ohg vs lbh vgrengr ybat rabhtu vg unf gb trg gurer, gurer'f ab bgure cbvagf nybat gur jnl vg pbhyq fgbc ng.

Haven't fully verified this.

Comment by ike on Promoting Prediction Markets With Meaningless Internet-Point Badges · 2021-02-09T20:06:12.403Z · LW · GW

Yes, you should definitely milk your PhD for as much status as possible, Dr. Manheim. 

Comment by ike on Promoting Prediction Markets With Meaningless Internet-Point Badges · 2021-02-09T17:42:08.780Z · LW · GW

https://www.metaculus.com/accounts/profile/114222/

My current all-time Brier score is .1, vs .129 for the Metaculus prediction and .124 for the community prediction on the same questions.

I'm also in the top 20 in points per question on https://metaculusextras.com/points_per_question 

Both of those metrics heavily depend on question selection, so it's difficult to compare people directly. But neither has to do with volume of questions.
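
For reference, a minimal sketch of how a Brier score is computed - it's just the mean squared error of your probability forecasts (the forecasts and outcomes below are made up):

```python
# Brier score: mean squared error between forecast probabilities and outcomes.
# Lower is better; always forecasting 50% scores 0.25.
forecasts = [0.9, 0.2, 0.7, 0.4]  # predicted probability for each event
outcomes  = [1,   0,   1,   1]    # 1 if the event happened, 0 if not

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(brier)  # 0.125 on these made-up numbers
```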

Comment by ike on Promoting Prediction Markets With Meaningless Internet-Point Badges · 2021-02-08T20:40:30.317Z · LW · GW

As a top-50 metaculuser, I endorse all proposals that give me more status.

Comment by ike on Short, Extreme, Forgotten Torture vs Death · 2021-02-07T16:08:55.998Z · LW · GW

I'm skeptical that physical pain scales beyond 2 or so orders of magnitude in a given span of time. I'm also skeptical of the coherence of death as an ontological possibility.

Being forced to choose between two things I believe are incoherent, I'd pick the torture. I'm more worried that there's a coherent notion of death being referenced than that some entity will experience a level of pain that seems impossible. There are multiple problems with the concept of pain here: it's not clear the entity experiencing it would be conscious during that time frame (especially if they have no memory, as memory is tied to consciousness); it's not clear that entity would be identifiable as me; it's not clear that upping some pain number actually corresponds to that level of utility, as utility is plausibly bounded over short intervals; etc.

Comment by ike on Does anyone else sometimes "run out of gas" when trying to think? · 2021-02-06T03:58:41.225Z · LW · GW

I've found taking a long bath is quite useful if I want to think about a specific topic in depth without distractions. At least one of my LW posts was prompted by this.

Comment by ike on Poll: Which variables are most strategically relevant? · 2021-01-22T22:37:27.326Z · LW · GW

How important will scaling relatively simple algorithms be, compared to innovation on the algorithms?

Comment by ike on Why do stocks go up? · 2021-01-18T05:46:07.805Z · LW · GW

Did you see my initial reply at https://www.lesswrong.com/posts/4vcTYhA2X99aGaGHG/why-do-stocks-go-up?commentId=wBEnBKqqB7TRXya8N which was left before you replied to me at all? I thought that added sufficient caveats. 

>"While it is expected that stocks will go up, and go up more than bonds, it is yet to be explained why they have gone up so much more than bonds." 

Yeah, though I'd put slightly more emphasis on the "in expectation" part.

Comment by ike on Why do stocks go up? · 2021-01-18T05:23:30.962Z · LW · GW

The vast majority of the equity premium is unexplained. When people say "just buy stocks and hold for a long period and you'll make 10% a year", they're asserting that the unexplained equity premium will persist, and I have a problem with that assumption.

I tried to clarify this in my first reply. You should interpret it as saying that stocks were massively undervalued and shouldn't have gone up significantly more than bonds. I was trying to explain and didn't want to include too many caveats, instead leaving them for the replies.

It's interesting to note that several other replies gave the simplistic risk response without the caveat that risk can only explain a small minority of the premium.

Comment by ike on Why do stocks go up? · 2021-01-18T03:30:54.731Z · LW · GW

Start with https://en.wikipedia.org/wiki/Equity_premium_puzzle. There are plenty of academic sources there.

People have grown accustomed to there being an equity premium to the extent that there's a default assumption that it'll just continue forever despite nobody knowing why it existed in the past. 

>Isn't there more real wealth today than during the days of the East India Company? If a stock represents a piece of a businesses, and those businesses now have more real wealth today than 300 years ago, why shouldn't stock returns be quite positive?

I simplified a bit above. What's unexplained is the excess return of stocks over risk-free bonds. When there's more real wealth in the future, the risk-free rate is higher. Stock returns would end up slightly above the risk-free rate because they're riskier. The puzzle is that stock returns are way, way higher than the risk-free rate, and this isn't plausibly explained by their riskiness.

Comment by ike on Why do stocks go up? · 2021-01-17T22:15:43.555Z · LW · GW

Well, there's some probability of it paying out before then.

If the magic value is a martingale and the payout timing is given by a Poisson process, then the stock price should remain at a constant discount off the magic value. You will gain on average by holding the stock until the payout, but you won't gain in expectation by buying and selling the stock.
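
Here's a rough simulation of that claim, with made-up parameters (p is the per-step payout probability, c the assumed constant discount):

```python
import random

# Toy model; all numbers are made up. The "magic value" V follows a
# martingale; each step the company pays out V with probability p
# (geometric timing, the discrete analogue of Poisson); until then the
# stock trades at an assumed constant discount c * V.
p, c, V0 = 0.01, 0.8, 100.0

def step(v):
    # Martingale step: up or down 5% with equal probability (mean factor = 1).
    return v * random.choice([0.95, 1.05])

def hold_until_payout():
    # Buy at c*V0, hold until the payout occurs, receive V.
    v = V0
    while random.random() > p:
        v = step(v)
    return v - c * V0  # expectation: (1-c)*V0 = 20

def trade_price_only(days=30):
    # Buy at c*V0, sell at c*V after `days` steps; payout timing is
    # independent of V, so conditioning on "no payout yet" changes nothing.
    v = V0
    for _ in range(days):
        v = step(v)
    return c * v - c * V0  # expectation: 0

n = 50_000
print(sum(hold_until_payout() for _ in range(n)) / n)  # ~ +20
print(sum(trade_price_only() for _ in range(n)) / n)   # ~ 0
```

Holding to the payout captures the (1-c) discount; trading the price alone is a fair game.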

Comment by ike on Why do stocks go up? · 2021-01-17T22:00:57.347Z · LW · GW

>It seems obvious to me I shouldn't expect this company's price to go up faster than the risk free rate, yet the volatility argument seems to apply to it.

You should, because the company's current price will be lower than $10 million due to the risk. Your total return over time will be positive, while the return for a similar company whose value never varies will be 0 (or the interest rate, if nonzero).

Comment by ike on Why do stocks go up? · 2021-01-17T21:53:57.940Z · LW · GW

The classic answer is risk. Stocks are riskier than bonds, so they should be underpriced relative to bonds (and therefore have higher returns).

But we know how risky stocks have been, historically. We can calculate how much higher a return that level of risk should lead to, under plausible risk tolerances. The equity premium puzzle is that the observed returns on stocks are significantly higher than this.

Read through the Wikipedia page on the equity premium puzzle. It's good.
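
As a back-of-envelope version of that calculation (stylized numbers in the spirit of the Mehra-Prescott literature; under CRRA utility the predicted premium is roughly risk aversion times the covariance of consumption growth with market returns):

```python
# Consumption-CAPM approximation: premium ≈ gamma * corr * sigma_c * sigma_m.
# All numbers are stylized ballpark figures.
gamma   = 3     # "plausible" coefficient of relative risk aversion
sigma_c = 0.02  # stdev of annual consumption growth (~2%)
sigma_m = 0.16  # stdev of annual stock returns (~16%)
corr    = 1.0   # correlation, set to its maximum to be generous

predicted = gamma * corr * sigma_c * sigma_m  # ~0.01, i.e. ~1% per year
observed  = 0.06                              # ~6% per year historically

print(predicted, observed)  # the ~5-point gap is the puzzle
```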

Comment by ike on Why do stocks go up? · 2021-01-17T21:50:43.991Z · LW · GW

The equity premium puzzle is still unsolved. The answer to your question is that nobody knows. Stocks shouldn't have gone up historically; none of our current theories can explain why they did. Equivalently, stocks were massively underpriced over the last century or so, and nobody knows why.

If you don't know why something was mispriced in the past, you should be very careful about asserting that it will or won't continue to be mispriced in the future.