Posts

Tips/tricks/notes on optimizing investments 2020-05-06T23:21:53.153Z · score: 76 (28 votes)
Have epistemic conditions always been this bad? 2020-01-25T04:42:52.190Z · score: 163 (57 votes)
Against Premature Abstraction of Political Issues 2019-12-18T20:19:53.909Z · score: 64 (23 votes)
What determines the balance between intelligence signaling and virtue signaling? 2019-12-09T00:11:37.662Z · score: 71 (29 votes)
Ways that China is surpassing the US 2019-11-04T09:45:53.881Z · score: 57 (24 votes)
List of resolved confusions about IDA 2019-09-30T20:03:10.506Z · score: 100 (35 votes)
Don't depend on others to ask for explanations 2019-09-18T19:12:56.145Z · score: 78 (25 votes)
Counterfactual Oracles = online supervised learning with random selection of training episodes 2019-09-10T08:29:08.143Z · score: 45 (12 votes)
AI Safety "Success Stories" 2019-09-07T02:54:15.003Z · score: 105 (32 votes)
Six AI Risk/Strategy Ideas 2019-08-27T00:40:38.672Z · score: 63 (32 votes)
Problems in AI Alignment that philosophers could potentially contribute to 2019-08-17T17:38:31.757Z · score: 84 (35 votes)
Forum participation as a research strategy 2019-07-30T18:09:48.524Z · score: 113 (38 votes)
On the purposes of decision theory research 2019-07-25T07:18:06.552Z · score: 66 (22 votes)
AGI will drastically increase economies of scale 2019-06-07T23:17:38.694Z · score: 42 (16 votes)
How to find a lost phone with dead battery, using Google Location History Takeout 2019-05-30T04:56:28.666Z · score: 56 (29 votes)
Where are people thinking and talking about global coordination for AI safety? 2019-05-22T06:24:02.425Z · score: 100 (36 votes)
"UDT2" and "against UD+ASSA" 2019-05-12T04:18:37.158Z · score: 49 (16 votes)
Disincentives for participating on LW/AF 2019-05-10T19:46:36.010Z · score: 80 (35 votes)
Strategic implications of AIs' ability to coordinate at low cost, for example by merging 2019-04-25T05:08:21.736Z · score: 57 (23 votes)
Please use real names, especially for Alignment Forum? 2019-03-29T02:54:20.812Z · score: 40 (13 votes)
The Main Sources of AI Risk? 2019-03-21T18:28:33.068Z · score: 78 (33 votes)
What's wrong with these analogies for understanding Informed Oversight and IDA? 2019-03-20T09:11:33.613Z · score: 39 (9 votes)
Three ways that "Sufficiently optimized agents appear coherent" can be false 2019-03-05T21:52:35.462Z · score: 69 (18 votes)
Why didn't Agoric Computing become popular? 2019-02-16T06:19:56.121Z · score: 54 (16 votes)
Some disjunctive reasons for urgency on AI risk 2019-02-15T20:43:17.340Z · score: 38 (11 votes)
Some Thoughts on Metaphilosophy 2019-02-10T00:28:29.482Z · score: 57 (16 votes)
The Argument from Philosophical Difficulty 2019-02-10T00:28:07.472Z · score: 49 (15 votes)
Why is so much discussion happening in private Google Docs? 2019-01-12T02:19:19.332Z · score: 87 (26 votes)
Two More Decision Theory Problems for Humans 2019-01-04T09:00:33.436Z · score: 59 (20 votes)
Two Neglected Problems in Human-AI Safety 2018-12-16T22:13:29.196Z · score: 82 (29 votes)
Three AI Safety Related Ideas 2018-12-13T21:32:25.415Z · score: 64 (26 votes)
Counterintuitive Comparative Advantage 2018-11-28T20:33:30.023Z · score: 79 (31 votes)
A general model of safety-oriented AI development 2018-06-11T21:00:02.670Z · score: 71 (24 votes)
Beyond Astronomical Waste 2018-06-07T21:04:44.630Z · score: 100 (46 votes)
Can corrigibility be learned safely? 2018-04-01T23:07:46.625Z · score: 75 (26 votes)
Multiplicity of "enlightenment" states and contemplative practices 2018-03-12T08:15:48.709Z · score: 102 (27 votes)
Online discussion is better than pre-publication peer review 2017-09-05T13:25:15.331Z · score: 18 (15 votes)
Examples of Superintelligence Risk (by Jeff Kaufman) 2017-07-15T16:03:58.336Z · score: 5 (5 votes)
Combining Prediction Technologies to Help Moderate Discussions 2016-12-08T00:19:35.854Z · score: 13 (14 votes)
[link] Baidu cheats in an AI contest in order to gain a 0.24% advantage 2015-06-06T06:39:44.990Z · score: 14 (13 votes)
Is the potential astronomical waste in our universe too small to care about? 2014-10-21T08:44:12.897Z · score: 28 (30 votes)
What is the difference between rationality and intelligence? 2014-08-13T11:19:53.062Z · score: 13 (13 votes)
Six Plausible Meta-Ethical Alternatives 2014-08-06T00:04:14.485Z · score: 52 (53 votes)
Look for the Next Tech Gold Rush? 2014-07-19T10:08:53.127Z · score: 47 (42 votes)
Outside View(s) and MIRI's FAI Endgame 2013-08-28T23:27:23.372Z · score: 16 (19 votes)
Three Approaches to "Friendliness" 2013-07-17T07:46:07.504Z · score: 20 (23 votes)
Normativity and Meta-Philosophy 2013-04-23T20:35:16.319Z · score: 12 (14 votes)
Outline of Possible Sources of Values 2013-01-18T00:14:49.866Z · score: 14 (16 votes)
How to signal curiosity? 2013-01-11T22:47:23.698Z · score: 21 (22 votes)
Morality Isn't Logical 2012-12-26T23:08:09.419Z · score: 19 (35 votes)

Comments

Comment by wei_dai on High Stock Prices Make Sense Right Now · 2020-07-07T05:58:26.740Z · score: 4 (2 votes) · LW · GW

In case people want to know more about this stuff, most of my understanding comes from Perry Mehrling’s coursera course (which I recommend)

Thanks! I've been hoping to come across something like this, to learn about the details of the modern banking system.

Comment by wei_dai on Cryonics without freezers: resurrection possibilities in a Big World · 2020-07-06T18:55:20.882Z · score: 4 (2 votes) · LW · GW

I agree recent events don't justify a huge update by themselves if one started with a reasonable prior. It's more that I had somehow failed to consider the possibility of that scenario; recent events made me consider it, and that's why they triggered a big update for me.

Comment by wei_dai on High Stock Prices Make Sense Right Now · 2020-07-05T06:17:19.446Z · score: 2 (1 votes) · LW · GW

The institutions which own Treasuries (e.g. banks) do so with massive amounts of cheap leverage, and those are the only assets they’re allowed to hold with that much leverage.

I'm curious about this. What source of leverage do banks have access to that costs less than the interest on Treasuries? (I know there are retail deposit accounts that pay almost no interest, but I think those are actually pretty expensive for the banks to obtain, because they have to maintain a physical presence to get those customers. I doubt those banks can make a profit if they just put those deposits into Treasuries. You must be talking about something else?)

Comment by wei_dai on Cryonics without freezers: resurrection possibilities in a Big World · 2020-07-01T09:46:26.878Z · score: 4 (3 votes) · LW · GW

My subjective anticipation is mollified by the thought that I’ll probably either never experience dying or wake up to find that I’ve been in an ancestral simulation, which leaves the part of me that wants to prevent all the empty galaxies from going to waste to work in peace. :)

Update: Recent events have made me think that the fraction of advanced civilizations in the multiverse that are sane may be quite low. (It looks like our civilization will probably build a superintelligence while suffering from serious epistemic pathologies, and this may be typical for civilizations throughout the multiverse.) So now I'm pretty worried about "waking up" in some kind of dystopia (powered or controlled by a superintelligence with twisted beliefs or values), either in my own future lightcone or in another universe.

Actually, I probably shouldn't have been so optimistic even before the recent events...

Comment by wei_dai on Self-sacrifice is a scarce resource · 2020-06-28T22:49:34.881Z · score: 5 (3 votes) · LW · GW

If you find yourself doing too much self-sacrifice, injecting a dose of normative and meta-normative uncertainty might help. (I've never had this problem, and I attribute it to my own normative/meta-normative uncertainty. :) Not sure which arguments you heard that made you extremely self-sacrificial, but try Shut Up and Divide? if it was "Shut Up and Multiply", or Is the potential astronomical waste in our universe too small to care about? if it was "Astronomical Waste".

Comment by wei_dai on Atemporal Ethical Obligations · 2020-06-27T00:18:46.991Z · score: 24 (10 votes) · LW · GW

Thus, in order to be truly good people, we must take an active role, predict the future of moral progress, and live by tomorrow’s rules, today.

Suppose you think X is what is actually moral (or is a distribution representing your moral uncertainty after doing your best to try to figure out what is actually moral) and Y is what you expect most people will recognize as moral in the future (or is a distribution representing your uncertainty about that). Are you proposing to follow Y instead of X? (It sounds that way but I want to make sure I'm not misunderstanding.)

Assuming the answer is yes, is that because you think that trying to predict what most people will recognize as moral is more likely to lead to what is actually moral than directly trying to figure it out yourself? Or is it because you want to be recognized by future people as being moral and following Y is more likely to lead to that result?

Comment by wei_dai on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-25T11:19:10.589Z · score: 15 (9 votes) · LW · GW

But if you claim to be charitable and openminded, except when confronted by a test that affects your own community, then you’re using those words as performative weapons, deliberately or not.

I guess "charitable" here is referring to the principle of charity, but I think that is supposed to apply within a debate or discussion, to make it more productive and less likely to go off the rails. But in this case there is no debate, as far as I can tell. The NYT reporter or others representing NYT have not given a reason for doxxing Scott (AFAIK, except to cite a "policy" for doing so, but that seems false because there have been plenty of times when they've respected their subjects' wishes to remain pseudonymous), so what are people supposed to be charitable about?

If instead the intended meaning of "charitable and openminded" is something like "let's remain uncertain about NYT's motives for doxxing Scott until we know more", it seems like the absence of any "principled reasons" provided so far is already pretty strong evidence for ruling out certain motives, leaving mostly "dumb mistake" and "evil or selfish" as the remaining possibilities. Given that, I'm not sure what people are doing that Richard thinks is failing the test of being "charitable and openminded", especially given that NYT has shown no willingness to engage in a discussion so far and that the situation is time-sensitive.

Comment by wei_dai on Open & Welcome Thread - February 2020 · 2020-06-25T05:54:58.600Z · score: 12 (6 votes) · LW · GW

Another reason for attributing part of the gains (from betting on the coronavirus market crash) to luck, from Rob Henderson's newsletter, which BTW I highly recommend:

The geneticist Razib Khan has said that the reason the U.S. took so long to respond to the virus is that Americans do not consider China to be a real place. For people in the U.S., “Wuhan is a different planet, mentally.” From my view, it didn’t seem “real” to Americans (or Brits) until Italy happened.

Not only have I lived in China, my father was born in Wuhan and I've visited there multiple times.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-06-19T06:14:26.842Z · score: 4 (2 votes) · LW · GW

Thanks for the feedback. I guess I was partly expecting people to learn about portfolio margin and box spread options for other reasons (so the additional work to pull equity out into CDs isn't that much), and partly forgot how difficult it might be for someone to learn about these things. Maybe there's an opportunity for someone to start a business to do this for their customers...

BTW you'll have to pass a multiple-choice test to be approved for PM at TDA, which can be tough. Let me know if you need any help with that. Also I've been getting 0.5%-0.55% interest rate from box spreads recently, and CDs are currently 1.25%-1.3%. CDs were around 1.5% when I first wrote this, so it was significantly more attractive then. I would say it's still worth it because once you learn these things you can get the extra return every year without that much additional work, and over several decades it can add up to a lot.
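
As a rough back-of-the-envelope sketch of the carry being described (the rates are the ones quoted in this comment and will drift over time; the principal is hypothetical):

```python
# Rough sketch of the box-spread-to-CD carry described above.
# Rates are the illustrative figures from this comment; the principal is hypothetical.
borrow_rate = 0.0055   # cost of cash raised via box spread financing (~0.5%-0.55%)
cd_rate = 0.0125       # brokered CD yield (~1.25%-1.3%)
principal = 100_000    # hypothetical amount of equity pulled out on portfolio margin

annual_spread = principal * (cd_rate - borrow_rate)
print(f"Extra return: ${annual_spread:,.2f}/year ({cd_rate - borrow_rate:.2%})")
# -> Extra return: $700.00/year (0.70%)
```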

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-19T04:51:17.599Z · score: 13 (7 votes) · LW · GW

Personal update: Over the last few months, I've become much less worried that I have a tendency to be too pessimistic (because I frequently seem to be the most pessimistic person in a discussion). Things I was worried about more than others (coronavirus pandemic, epistemic conditions getting significantly worse) have come true, and when I was wrong in a pessimistic direction, I updated quickly after coming across a good argument (so I think I was wrong just because I didn't think of that argument, rather than due to a tendency to be pessimistic).

Feedback welcome, in case I've updated too much about this.

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-17T10:04:32.620Z · score: 6 (3 votes) · LW · GW

I should also address this part:

For example, if the threat model is that they just adopt the dominant ideology around them (which happens to be false on many points), then that results in them having false beliefs (#1), but may not cause any harm to come to them from it (#3) (and may even be to their benefit, in some ways).

Many Communist true believers in China met terrible ends as waves of "political movements" swept through the country after the CCP takeover, and pitted one group against another, all vying to be the most "revolutionary". (One of my great-grandparents could have escaped but stayed in China because he was friends with a number of high-level Communists and believed in their cause. He ended up committing suicide when his friends lost power to other factions and the government turned on him.)

More generally, ideology can change so quickly that it's very difficult to follow it closely enough to stay safe, and even if you did follow the dominant ideology perfectly you're still vulnerable to the next "vanguard" who pushes the ideology in a new direction in order to take power. I think that even if "adopt the dominant ideology" is sensible as a defensive strategy for living in some society, you'd still really want to avoid getting indoctrinated into being a true believer, so you can apply rational analysis to the political struggles that will inevitably follow.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-06-17T07:06:25.752Z · score: 2 (1 votes) · LW · GW

I came here to say that I'm surprised this advice isn't at the top of every list of personal investment advice. Almost 1% risk-free extra return per year, on top of whatever else you're getting from your investments. Isn't it crazy that this is possible, when 10-year Treasuries are yielding only ~0.7%? How is every financial columnist not shouting this from their rooftops?

Then I noticed that it's on the bottom of my own advice list, due to not having received a single up-vote. What gives, LW?

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-17T05:41:06.067Z · score: 5 (3 votes) · LW · GW

I guess I'm worried about

  1. They will "waste their life", for both the real opportunity cost and the potential regret they might feel if they realize the error later in life.
  2. My own regret in knowing that they've been indoctrinated into believing wrong things (or into having unreasonable certainty about potentially wrong things), when I probably could have done something to prevent that.
  3. Their views making family life difficult. (E.g., if they were to secretly record family conversations and post them on social media as examples of wrongthink, like some kids have done.)

Can't really think of any mitigations for these aside from trying not to let them get indoctrinated in the first place...

Comment by wei_dai on Mod Notice about Election Discussion · 2020-06-17T03:18:49.517Z · score: 3 (2 votes) · LW · GW

You mean tag people so they get notified, like on FB? I don't think you can. Just send them a PM with the link, I guess.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-06-13T01:40:34.860Z · score: 3 (2 votes) · LW · GW

Yeah, I've become suspicious of it myself, which is why I retracted the comment. (It should show as struck out?)

Comment by wei_dai on Covid-19 6/11: Bracing For a Second Wave · 2020-06-12T09:05:24.323Z · score: 16 (6 votes) · LW · GW

Thanks for writing these.

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-11T06:20:22.643Z · score: 6 (2 votes) · LW · GW

I was initially pretty excited about the idea of getting another passport, but on second thought I'm not sure it's worth the substantial costs involved. Today people aren't losing their passports or having their movements restricted for (them or their family members) having expressed "wrong" ideas, but just(!) losing their jobs, being publicly humiliated, etc. This is more the kind of risk I want to hedge against (with regard to AI), especially for my family. If the political situation deteriorates even further to where the US government puts official sanctions on people like me, humanity is probably just totally screwed as a whole and having another passport isn't going to help me that much.

Comment by wei_dai on ESRogs's Shortform · 2020-06-10T06:24:30.152Z · score: 6 (3 votes) · LW · GW

sell a long-dated $5 call

This page explains why the call option would probably get exercised early and ruin your strategy:

ITM calls get assigned in a hard to borrow stock all the time

The second most common form of assignment is in a hard to borrow stock. Since the ability to short the stock is reduced, selling an ITM call option is the next best thing. A liquidity provider might have to pay a negative cost of carry just to hold a short stock position. Since the market on balance wants to short the stock, the value of the ITM call gets reduced relative to the underlying stock price. Moreover, a liquidity provider might have to exercise all their long calls to come into compliance with REG SHO. That means the short call seller gets assigned.

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-08T06:28:13.829Z · score: 9 (3 votes) · LW · GW

Do you think that having your kids consume rationalist and effective altruist content and/or doing homeschooling/unschooling are insufficient for protecting your kids against mind viruses?

Homeschooling takes up too much of my time and I don't think I'm very good at being a teacher (having been forced to try it during the current school closure). Unschooling seems too risky. (Maybe it would produce great results, but my wife would kill me if it doesn't. :) "Consume rationalist and effective altruist content" makes sense but some more specific advice would be helpful, like what material to introduce, when, and how to encourage their interest if they're not immediately interested. Have any parents done this and can share their experience?

and not talking to other kids (I didn’t have any friends from US public school during grades 4 to 11)

Yeah that might have been a contributing factor for myself as well, but my kids seem a lot more social than me.

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-08T05:00:19.556Z · score: 30 (13 votes) · LW · GW

Please share ideas/articles/resources for immunizing ones' kids against mind viruses.

I think I was lucky myself in that I was partially indoctrinated in Communist China, then moved to the US before middle school, which made it hard for me to strongly believe any particular religion or ideology. Plus the US schools I went to didn't seem to emphasize ideological indoctrination as much as schools currently do. Plus there was no social media pushing students to express the same beliefs as their classmates.

What can I do to help prepare my kids? (If you have specific ideas or advice, please mention what age or grade they are appropriate for.)

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-05T07:05:59.431Z · score: 13 (9 votes) · LW · GW

Or you can use Bypass Paywalls with Firefox or Chrome.

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-05T04:31:25.688Z · score: 11 (5 votes) · LW · GW

Get your exit plan ready to execute on very short notice, and understand that it’ll be costly if you do it.

What would be a good exit plan? If you've thought about this, can you share your plan and/or discuss (privately) my specific situation?

Do what you can to keep your local environment sane, so you don’t have to run, and so the world gets back onto a positive trend.

How? I've tried to do this a bit, but it takes a huge amount of time, effort, and personal risk, and whatever gains I manage to eke out seem to be highly ephemeral at best. It doesn't seem like a very good use of my time when I can spend it on something like AI safety instead. Have you been doing this yourself, and if so what has been your experience?

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-04T08:55:49.188Z · score: 25 (14 votes) · LW · GW

You'll have to infer it from the fact that I didn't explain more and am not giving a straight answer now. Maybe I'm being overly cautious, but my parents and other relatives lived through (and suffered in) the Cultural Revolution and other "political movements", and wouldn't it be silly if I failed to "expect the Spanish Inquisition" despite that?

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-03T22:52:46.908Z · score: 32 (16 votes) · LW · GW
  1. People I followed on Twitter for their credible takes on COVID-19 now sound insane. Sigh...

  2. I feel like I should do something to prep (e.g., hedge risk to me and my family) in advance of AI risk being politicized, but I'm not sure what. Obvious idea is to stop writing under my real name, but cost/benefit doesn't seem worth it.

Comment by wei_dai on Inaccessible information · 2020-06-03T08:23:13.381Z · score: 7 (4 votes) · LW · GW

or we need to figure out some way to access the inaccessible information that “A* leads to lots of human flourishing.”

To help check my understanding, your previously described proposal to access this "inaccessible" information involves building corrigible AI via iterated amplification, then using that AI to capture "flexible influence over the future", right? Have you become more pessimistic about this proposal, or are you just explaining some existing doubts? Can you explain in more detail why you think it may fail?

(I'll try to guess.) Is it that corrigibility is about short-term preferences-on-reflection and short-term preferences-on-reflection may themselves be inaccessible information?

I can pay inaccessible costs for an accessible gain — for example leaking critical information, or alienating an important ally, or going into debt, or making short-sighted tradeoffs. Moreover, if there are other actors in the world, they can try to get me to make bad tradeoffs by hiding real costs.

This seems similar to what I wrote in an earlier thread: "What if the user fails to realize that a certain kind of resource is valuable? (By “resources” we’re talking about things that include more than just physical resources, like control of strategic locations, useful technologies that might require long lead times to develop, reputations, etc., right?)" At the time I thought you proposed to solve this problem by using the user's "preferences-on-reflection", which presumably would correctly value all resources/costs. So again is it just that "preferences-on-reflection" may itself be inaccessible?

Overall I don’t think it’s very plausible that amplification or debate can be a scalable AI alignment solution on their own, mostly for the kinds of reasons discussed in this post — we will eventually run into some inaccessible knowledge that is never produced by amplification, and so never winds up in your distilled agents.

Besides the above, can you give some more examples of (what you think may be) "inaccessible knowledge that is never produced by amplification"?

(I guess an overall feedback is that in most of the post you discuss inaccessible information without talking about amplification, and then quickly talk about amplification in the last section, but it's not easy to see how the two ideas relate without more explanations and examples.)

Comment by wei_dai on Possible takeaways from the coronavirus pandemic for slow AI takeoff · 2020-06-02T06:20:00.659Z · score: 40 (20 votes) · LW · GW

Thanks for writing this. I've been thinking along similar lines since the pandemic started. Another takeaway for me: Under our current political system, AI risk will become politicized. It will be very easy for unaligned or otherwise dangerous AI to find human "allies" who will help to prevent effective social response. Given this, "more competent institutions" has to include large-scale and highly effective reforms to our democratic political structures, but political dysfunction is such a well-known problem (i.e., not particularly neglected) that if there were easy fixes, they would have been found and applied already.

So whereas you're careful to condition your pessimism on "unless our institutions improve", I'm just pessimistic. (To clarify, I was already pessimistic before COVID-19, so it just provided more details about how coordination/institutions are likely to fail, which I didn't have a clear picture of. I'm curious if COVID-19 was an update for you as far as your overall assessment of AI risk. That wasn't totally clear from the post.)

On a related note, I recall Paul said the risk from failure of AI alignment (I think he said or meant "intent alignment") is 10%; Toby Ord gave a similar number for AI risk in his recent book; 80,000 Hours, based on interviews with multiple AI risk researchers, said "We estimate that the risk of a serious catastrophe caused by machine intelligence within the next 100 years is between 1 and 10%." Until now 1-10% seems to have been the consensus view among the most prominent AI risk researchers. I wonder if that has changed due to recent events.

Comment by wei_dai on The EMH Aten't Dead · 2020-05-28T11:19:09.384Z · score: 13 (4 votes) · LW · GW

I’d also flag that going all-in on EMH and modern financial theory still leads to fairly unusual investing behavior for a retail investor, moreso than I had thought before delving into it.

Seconding this. It turns out that investing under the current academic version of EMH (with time-varying risk premia and multifactor models) is a lot more complicated than putting one's money into an index fund. I'm still learning, but one thing even Carl didn't mention is that modern EMH is compatible with (even demands) certain forms of market timing, if your financial situation (mainly risk exposure through your non-investment income) differs from that of the average investor. This paper gives advice based on academic research but was apparently written in 1999, so it may already be outdated.

Comment by wei_dai on The EMH Aten't Dead · 2020-05-28T11:00:23.655Z · score: 4 (2 votes) · LW · GW

I found out a couple of days ago that there's a debate within academia over EMH. Here's how John Cochrane described it in an interview:

When house prices are high relative to rents, when stock prices are high relative to earnings—that seems to signal a period of low returns. When prices are high relative to earnings, it’s not going to be a great time to invest over the next seven to ten years. That’s a fact. It took us ten years to figure it out, but that’s what (Robert) Shiller’s volatility stuff was about; it is what Gene (Fama)’s regressions in the nineteen-eighties were about. That was a stunning new fact. Before, we would have guessed that prices high relative to earnings means we are going to see great growth in earnings. It turned out to be the opposite. We all agree on the fact. If prices are high relative to earnings that means this is going to be a bad ten years for stocks. It doesn’t reliably predict a crash, just a period of low returns, which sometimes includes a crash, but sometimes not.

Ok, this is the one and only fact in this debate. So what do we say about that? Well, one side says that people were irrationally optimistic. The other side says, wait a minute, the times when prices are high are good economic times, and the times when prices are low are times when the average investor is worried about his job and his business. Look at last December (2008). Lots of people saw this was the biggest buying opportunity of all time, but said, “Sorry, I’m about to lose my job, I’m about to lose my business, I can’t afford to take more risk right now.” So we would say, “Aha, the risk premium is higher!”

So that’s now where this debate is. We’re chewing out: Is it a risk premium that varies over time, or is it psychological variation? So your question is right, but it is not as obvious as: “Stocks crashed. We must all be irrational.”

Perhaps more directly relevant to this post is this quote from Robert Shiller's 2013 Nobel lecture:

There is another important argument widely used for efficient markets, the argument that a model like (4) with an intermediate φ cannot represent a stable equilibrium because the smart money would get richer and richer and eventually take over the market, and φ would go to zero. In fact this will not generally happen, for there is a natural recycling of investor abilities, the smart money people usually do not start out with a lot of money, and it takes them many years to acquire enough wealth to influence the market. Meanwhile they get old and retire, or they rationally lose interest in doing the work to pursue their advantage after they have acquired sufficient wealth to live on. The market will be efficient enough that advantages to beating the market are sufficiently small and uncertain and slow to repay one’s efforts that most smart people will devote their time to more personally meaningful things, like managing a company, getting a PhD in finance, or some other more enjoyable activity, leaving the market substantially to ordinary investors. Genuinely smart money investors cannot in their normal life cycle amass enough success experience to prove to ordinary investors that they can manage their money effectively: it takes too many years and there is too much fundamental uncertainty for them to be able to do that assuredly, and by the time they prove themselves they may have lost the will or ability to continue (Shiller 1984; Shleifer and Vishny 1997).

See also Shiller on Does Covid-19 Prove the Stock Market is Inefficient?.

Comment by wei_dai on Predicted Land Value Tax: a better tax than an unimproved land value tax · 2020-05-28T06:09:57.922Z · score: 4 (3 votes) · LW · GW

Upvoted for an interesting idea, but I'm not sure who would want to make bids on houses through this system. Bidders are at an information disadvantage versus owners, so to be safe (i.e., to avoid making a bid that the owner knows is too high because they have more information) they can only bid substantially below what they expect the market value of the house to be. But then the owner would almost never want to sell through the bidding system, unless they want to move for reasons like changing jobs, in which case they'd "list" their house and invite people in for tours, etc., similar to selling a house today. This means that at any given time only a few houses (the ones actively being sold) have accurate bids, with the rest having no bids or very low bids. Can your system handle this? Am I misunderstanding anything?

Comment by wei_dai on Predicted Land Value Tax: a better tax than an unimproved land value tax · 2020-05-28T05:56:41.586Z · score: 4 (3 votes) · LW · GW

Would marketable securities (e.g., exchange-traded stocks) be a good candidate for this kind of tax? I guess not, because that would introduce an incentive to own non-marketable securities instead, which would distort the economy and make it less efficient. So do we also need that the taxed asset class has no untaxed substitutes?

Comment by wei_dai on AGIs as populations · 2020-05-28T05:16:36.889Z · score: 4 (2 votes) · LW · GW

Having said this, I’m open to trying it for one of your arguments. So perhaps you can point me to one that you particularly want engagement on?

Perhaps you could read all three of these posts (they're pretty short :) and then either write a quick response to each one and then I'll decide which one to dive into, or pick one yourself (that you find particularly interesting, or you have something to say about).

Also, let me know if you prefer to do this here, via email, or text/audio/video chat. (Also, apologies ahead of time for any issues/delays as my kid is home all the time now, and looking after my investments is a much bigger distraction / time-sink than usual, after I updated away from "just put everything into an index fund".)

Comment by wei_dai on AGIs as populations · 2020-05-27T09:01:13.358Z · score: 10 (3 votes) · LW · GW

This seems about right. In general when someone proposes a mechanism by which the world might end, I think the burden of proof is on them. You’re not just claiming “dangerous”, you’re claiming something like “more dangerous than anything else has ever been, even if it’s intent-aligned”. This is an incredibly bold claim and requires correspondingly thorough support.

  1. "More dangerous than anything else has ever been" does not seem incredibly bold to me, given that superhuman AI will be more powerful than anything else the world has seen. Historically the risk of civilization doing damage to itself seems to grow with the power that it has access to (e.g., the two world wars, substantial risks of nuclear war and man-made pandemic that continue to accumulate each year, climate change) so I think I'm just extrapolating a clear trend. (Past risks like these could not have been eliminated by solving a single straightforward, self-contained, technical problem analogous to "intent alignment" so why expect that now?)

To risk being uncharitable, your position seems analogous to someone saying, before the start of the nuclear era, "I think we should have a low prior that developing any particular kind of nuclear weapon will greatly increase the risk of global devastation in the future, because (1) that would be unprecedentedly dangerous and (2) nobody wants global devastation so everyone will work to prevent it. The only argument that has been developed well enough to overcome this low prior is that some types of nuclear weapons could potentially ignite the atmosphere, so to be safe we'll just make sure to only build bombs that definitely can't do that." (What would be a charitable historical analogy to your position if this one is not?)

  2. "The world might end" is not the only or even the main thing I'm worried about, especially because there are more people who can be expected to worry about "the world might end" and try to do something about it. My focus is more on the possibility that humanity survives but the values of people like me (or human values, or objective morality, depending on what the correct metaethics turn out to be) end up controlling only a small fraction of the universe, so we end up with astronomical waste or Beyond Astronomical Waste as a result. (Or our values become corrupted and the universe ends up being optimized for completely alien or wrong values.) There is plenty of precedent for the world becoming quite suboptimal according to some group's values, and there is no apparent reason to think the universe has to evolve according to objective morality (if such a thing exists), so my claim also doesn't seem very extraordinary from this perspective.

First because quite a few countries are handling it well. Secondly because I wasn’t even sure that lockdowns were a tool in the arsenal of democracies, and it seemed pretty wild to shut the economy down for so long.

If you think societal response to a risk like a pandemic (and presumably AI) is substantially suboptimal by default (and it clearly is, given that large swaths of humanity are incurring a lot of needless deaths), doesn't that imply significant residual risks, and plenty of room for people like us to try to improve the response? To a first approximation, the default suboptimal social response reduces all risks by some constant amount, so if some particular x-risk is important to work on without considering the default social response, it's probably still important to work on after considering "whatever efforts people will make when the problem starts becoming more apparent". Do you disagree with this argument? Did you have some other reason for saying that, which I'm not getting?

Comment by wei_dai on AGIs as populations · 2020-05-26T23:26:37.565Z · score: 10 (3 votes) · LW · GW

It looks like someone strong downvoted a couple of my comments in this thread (the parent and this one). (The parent comment was at 5 points with 3 votes, now it's 0 points with 4 votes.) This is surprising to me as I can't think of what I have written that could cause someone to want to do that. Does the person who downvoted want to explain, or anyone else want to take a guess?

Comment by wei_dai on AGIs as populations · 2020-05-26T21:13:41.035Z · score: 8 (6 votes) · LW · GW

To try to encourage you to engage with my arguments more (as far as pointing out where you're not convinced), I think I'm pretty good at being skeptical of my own ideas and have a good track record in terms of not spewing off a lot of random ideas that turn out to be far off the mark. But I am too lazy / have too many interests / am too easily distracted to write long papers/posts where I lay out every step of my reasoning and address every possible counterargument in detail.

So what I'd like to do is to just amend my posts to address the main objections that many people actually have, enough for more readers like you to "assign moderate probability that the argument is true". In order to do that, I need to have a better idea what objections people actually have or what counterarguments they currently find convincing. Does this make sense to you?

Comment by wei_dai on AGIs as populations · 2020-05-26T10:20:45.027Z · score: 10 (3 votes) · LW · GW

but when we’re trying to make claims that a given effect will be pivotal for the entire future of humanity despite whatever efforts people will make when the problem starts becoming more apparent, we need higher standards to get to the part of the logistic curve with non-negligible gradient.

I guess a lot of this comes down to priors and burden of proof. (I guess I have a high prior that making something smarter than human is dangerous unless we know exactly what we're doing including the social/political aspects, and you don't, so you think the burden of proof is on me?) But (1) I did write a bunch of blog posts which are linked to in the second post (maybe you didn't click on that one?) and it would help if you could point out more where you're not convinced, and (2) does the current COVID-19 disaster not make you more pessimistic about "whatever efforts people will make when the problem starts becoming more apparent"?

When you think about the arguments made in your disjunctive post, how hard do you try to imagine each one conditional on the knowledge that the other arguments are false? Are they actually compelling in a world where Eliezer is wrong about intelligence explosions and Paul is wrong about influence-seeking agents?

I think I did? Eliezer being wrong about intelligence explosions just means we live in a world without intelligence explosions, and Paul being wrong about influence-seeking agents just means he (or someone) succeeds in building intent-aligned AGI, right? Many of my "disjunctive" arguments were written specifically with that scenario in mind.

Comment by wei_dai on AGIs as populations · 2020-05-26T08:59:36.257Z · score: 7 (8 votes) · LW · GW

For now my epistemic state is: extreme agency is an important component of the main argument for risk, so all else equal reducing it should reduce risk.

I appreciate the explanation, but this is pretty far from my own epistemic state, which is that arguments for AI risk are highly disjunctive; most types of AGI (not just highly agentic ones) are probably unsafe (i.e., likely to lead us away from rather than towards a success story); and at best probably only a few very specific AGI designs (which may well be agentic if combined with other properties) are both feasible and safe (i.e., can count as success stories). So it doesn't make sense to say that an AGI is "safer" just because it's less agentic.

Having said that, I also believe that most safety work will be done by AGIs, and so I want to remain open-minded to success stories that are beyond my capability to predict.

Getting to an AGI that can safely do human or superhuman level safety work would be a success story in itself, which I labeled "Research Assistant" in my post.

Comment by wei_dai on AGIs as populations · 2020-05-22T23:47:30.676Z · score: 16 (6 votes) · LW · GW

I don’t think such work should depend on being related to any specific success story.

The reason I asked was that you talk about "safer" and "less safe" and I wasn't sure if "safer" here should be interpreted as "more likely to let us eventually achieve some success story", or "less likely to cause immediate catastrophe" (or something like that). Sounds like it's the latter?

Maybe I should just ask directly, what you tend to mean when you say "safer"?

Comment by wei_dai on AGIs as populations · 2020-05-22T23:06:26.572Z · score: 3 (2 votes) · LW · GW

What success story (or stories) did you have in mind when writing this?

Comment by wei_dai on The EMH Aten't Dead · 2020-05-15T22:07:02.544Z · score: 4 (2 votes) · LW · GW

See also the bottom of this comment for a more complete record of my significant (non-EMH) investments.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-05-15T21:43:56.760Z · score: 1 (2 votes) · LW · GW

Pay your monthly bills with margin loans

Instead of maintaining a positive balance in a bank checking account that pays virtually no interest and having to worry about overdrafts, switch your bill payment to a brokerage account that offers low margin rates, and pay your bills "on margin". (Interactive Brokers currently offers 1.55% (for loans <$100k), or negotiate with your current broker (I got 0.75% starting at the first dollar)). Once in a while, sell some securities, move money back from a high yield savings account or CD, or get cash from box spread financing, to zero out the margin balance.
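
As a rough sketch of what the float actually costs (the bill amount is hypothetical; the rate is the negotiated one mentioned above, and the sketch pessimistically assumes the balance sits for a full month before being zeroed out):

```python
# Hypothetical cost of floating a month of bills on a margin loan.
monthly_bills = 5_000      # example amount paid "on margin" each month
margin_rate = 0.0075       # negotiated annual margin rate (IB's posted rate was 1.55%)
days_outstanding = 30      # balance zeroed out roughly once a month

interest = monthly_bills * margin_rate * days_outstanding / 365
print(f"Interest per month: ~${interest:.2f}")
# -> Interest per month: ~$3.08, in exchange for not keeping an idle cash buffer
```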

Comment by wei_dai on What are Michael Vassar's beliefs? · 2020-05-15T19:10:44.068Z · score: 14 (5 votes) · LW · GW

There was a previous thread about this.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-05-14T06:21:59.059Z · score: 10 (5 votes) · LW · GW

I kind of miss the days when I believed in the EMH... Denial of EMH, along with realizing that 100% and 0% are not practical upper and lower bounds for exposure to the market (i.e., there are very cheap ways to short and leverage the market), is making me a lot more anxious (and potentially regretful) about not making the best investment choices. Would be interested in coping tips/strategies from people who have been in this position longer.

(It seems that in general, fewer constraints means more room for regret. See https://www.wsj.com/articles/bill-gates-coronavirus-vaccine-covid-19-11589207803 for example.)

Comment by wei_dai on Zoom Technologies, Inc. vs. the Efficient Markets Hypothesis · 2020-05-12T02:07:45.101Z · score: 17 (7 votes) · LW · GW

You don’t have to be smarter than them to exploit them, since they’re optimizing a different goal: keep their customers happy, instead of making maximum money for them.

What trades does this suggest?

Comment by wei_dai on Zoom Technologies, Inc. vs. the Efficient Markets Hypothesis · 2020-05-12T01:16:50.585Z · score: 9 (5 votes) · LW · GW

such high variance looks much more obviously like ‘gambling’ or ‘taking on an enormous amount of risk’ than ‘it’s fun and easy to seek out alpha and beat the market’

I know someone else who made the opposite mistake as me and sold their coronavirus puts too early. If you only saw their record, there would be no "high variance". They just made less money than they could have. It seems to me that the correct lesson from both outcomes is that it's possible to beat the market (without putting in so much effort as to make it not worthwhile to try), but we haven't figured out how to time the exits at exactly or very close to the best times.

Comment by wei_dai on Zoom Technologies, Inc. vs. the Efficient Markets Hypothesis · 2020-05-11T23:14:40.464Z · score: 44 (16 votes) · LW · GW

Followup is important; I notice that Eliezer has not exactly gone around trumpeting Wei Dai's followup comment where he mentions losing almost all of his coronavirus profits as evidence that maybe the EMH is right after all

When Eliezer posted about my bet, I had only gained 7x my initial bet (and that's what he posted about), and although I ended up losing 80% of my paper profits (which were 50x at one point) I still gained 10x my initial bet. So him not posting further followup seems fine? And at least from my personal perspective (i.e., where selection bias isn't an issue) the final outcome still seems to be strong evidence against EMH.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-05-11T22:12:13.872Z · score: 2 (1 votes) · LW · GW

Can you go into more detail about this, or link to an article?

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-05-11T07:47:49.517Z · score: 2 (1 votes) · LW · GW

Yes, you can open accounts at multiple banks and use other tricks to get more insurance. See https://smartasset.com/checking-account/how-much-is-fdic-insurance

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-05-11T06:38:15.800Z · score: 3 (2 votes) · LW · GW

Both of these references fall under the "mean-variance" paradigm, but according to Strategic Asset Allocation: Determining the Optimal Portfolio with Ten Asset Classes:

Both academics and practitioners agree that the mean-variance analysis is extremely sensitive to small changes and errors in the assumptions. We therefore take another approach to the asset allocation problem, in which we estimate the weights of the asset classes in the market portfolio. The composition of the observed market portfolio embodies the aggregate return, risk, and correlation expectations of all market participants and is by definition the optimal portfolio.

I think "by definition" is wrong here, but there are strong theoretical reasons to think that the global market portfolio is the optimal portfolio for all investors (investors with different risk tolerance should differ only in how much exposure or leverage they use). However, that theory makes some unrealistic assumptions, and here's a fuller picture, from Q&A: Seeking the Optimal Country Weighting Scheme:

Financial theory suggests that a global value-weight market portfolio is the logical default position for an equity investor seeking the optimal allocation scheme across countries.

What are the implications for this approach if we take structural factors into account that encourage a home bias? Australia, for example, offers tax incentives applicable only to local investors, so their citizens earn higher returns than foreign investors do holding the same stocks. Brazil accomplishes the same thing by imposing additional taxes on foreign investors.

[...]

One can argue that asset pricing theory always makes unrealistic assumptions, which are irrelevant if a model does a good job describing observed average returns. True, if one is only concerned with describing average returns. But this argument does not imply that a simplified model can be used as a prescription for optimal portfolios. Here one must face the implications of real world frictions in international investment.

One might argue that the effects of all frictions are captured by the aggregate portfolio of local and foreign assets held by the investors of a country. This is, in a limited sense, true, and it is reasonable to argue that this portfolio is a starting point for investment decisions (perhaps the best we can do). But there are caveats. Within a country, there are taxable and non-taxable investors, and if the data are available, it makes more sense to start with separate aggregate portfolios for the two groups, which also seems a reasonable starting point. There are real problems, however, because not all taxable investors face the same tax rates. Etc. Etc.

So it seems like the default/optimal portfolio should be the average portfolio of investors like me, and not the global market portfolio. If that information is not directly available, we can try to infer it from other data, e.g., if some investors unlike me are known to overweight some asset class (e.g., treasury bonds) then I should underweight that asset class.
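
Here's a hedged sketch of the accounting behind that last inference (the weights and wealth share below are made up for illustration): since the market portfolio is the wealth-weighted average of all investors' portfolios, knowing what "investors unlike me" hold pins down what the remaining investors hold on average.

```python
# Illustrative only: back out the average portfolio of "investors like me"
# from the market portfolio and the holdings of everyone else.
market = {"stocks": 0.45, "bonds": 0.40, "other": 0.15}     # hypothetical market weights
unlike_share = 0.30                                         # hypothetical wealth share of "unlike me"
unlike = {"stocks": 0.30, "bonds": 0.60, "other": 0.10}     # they overweight bonds

like_me = {k: (market[k] - unlike_share * unlike[k]) / (1 - unlike_share) for k in market}
print(like_me)
# -> stocks ~0.51, bonds ~0.31, other ~0.17: underweight bonds relative to the market
```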

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-05-07T01:33:46.022Z · score: 6 (4 votes) · LW · GW

Negotiate with your brokers

A lot of brokerages will pay you cash bonuses to transfer your assets to them (and typically keep them there for a year) and this is another source of extra risk-free return. These public offers are usually capped at $2500 bonus for $1M of assets, but some places will give you $2500 per $1M of assets, plus deep discounts on futures/options commissions and margin rates. (You can PM or email me to get details and contact info of the brokerage representatives I've talked with.)
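
For a sense of scale, a back-of-the-envelope sketch (the transfer size is hypothetical, and bonuses vary by broker):

```python
# One-time transfer bonus expressed as a return on the assets moved,
# using the $2,500-per-$1M figure quoted above.
bonus_per_million = 2_500
assets_moved = 2_000_000   # hypothetical transfer

bonus = bonus_per_million * assets_moved / 1_000_000
print(f"Bonus: ${bonus:,.0f} ({bonus / assets_moved:.2%} of assets, one-time)")
# -> Bonus: $5,000 (0.25% of assets, one-time)
```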

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-05-07T01:22:27.232Z · score: 5 (3 votes) · LW · GW

Bankruptcy risk for a leveraged portfolio

From Brian Tomasik's Should Altruists Leverage Investments?

Also note that this continuous-time model doesn't allow margin accounts to go bankrupt. Because a continuous-time margin account maintains constant leverage, if its assets fall, it rebalances immediately by selling some securities. In the real world, margin accounts can go bankrupt. This could, with low probability, even happen if the account rebalances daily. For instance, a 5X-leveraged margin account that rebalanced once per day might have been wiped out by 1987's Black Monday. By not allowing for bankruptcy (and by ignoring black swans in general), continuous-time equations like those above may slightly overstate the expected value of leverage. In the extreme case, taking t = ∞, a margin investor who doesn't rebalance continuously would go bankrupt with probability 1 (since eventually there would be a huge, near-instantaneous market downturn that destroys the account), while the leveraged mean equation concludes that the margin investor ends up with infinite expected wealth.

One might think that rebalancing more frequently than daily would help (perhaps with the help of an algorithm), but you can't rebalance when markets are closed, e.g., during weekends. I haven't figured out the best way to mitigate this risk yet (which isn't really so much about bankruptcy as being over-leveraged when asset values fall too much before you're able to rebalance), but two ideas are (1) keep some put options in one's portfolio, and (2) have some assets that are protected during bankruptcy (e.g., retirement accounts, spendthrift trusts).
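
To make the over-leverage risk concrete, here's a small sketch (my own illustration, not from Tomasik's essay) of how far asset values can fall between rebalances before a leveraged account is wiped out:

```python
# A portfolio with leverage L has its equity wiped out if the underlying assets
# fall by 1/L (or more) before the next rebalance.
def wipeout_drawdown(leverage: float) -> float:
    """Fractional drop in asset value that zeroes out the account's equity."""
    return 1.0 / leverage

for lev in (2, 3, 5):
    print(f"{lev}x leverage: wiped out by a {wipeout_drawdown(lev):.0%} drop")
# 2x -> 50%, 3x -> 33%, 5x -> 20% (roughly Black Monday's single-day fall)
```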