Posts

Tips/tricks/notes on optimizing investments 2020-05-06T23:21:53.153Z · score: 85 (29 votes)
Have epistemic conditions always been this bad? 2020-01-25T04:42:52.190Z · score: 161 (58 votes)
Against Premature Abstraction of Political Issues 2019-12-18T20:19:53.909Z · score: 64 (23 votes)
What determines the balance between intelligence signaling and virtue signaling? 2019-12-09T00:11:37.662Z · score: 71 (29 votes)
Ways that China is surpassing the US 2019-11-04T09:45:53.881Z · score: 58 (25 votes)
List of resolved confusions about IDA 2019-09-30T20:03:10.506Z · score: 100 (35 votes)
Don't depend on others to ask for explanations 2019-09-18T19:12:56.145Z · score: 78 (25 votes)
Counterfactual Oracles = online supervised learning with random selection of training episodes 2019-09-10T08:29:08.143Z · score: 47 (13 votes)
AI Safety "Success Stories" 2019-09-07T02:54:15.003Z · score: 105 (32 votes)
Six AI Risk/Strategy Ideas 2019-08-27T00:40:38.672Z · score: 63 (32 votes)
Problems in AI Alignment that philosophers could potentially contribute to 2019-08-17T17:38:31.757Z · score: 84 (35 votes)
Forum participation as a research strategy 2019-07-30T18:09:48.524Z · score: 117 (42 votes)
On the purposes of decision theory research 2019-07-25T07:18:06.552Z · score: 66 (22 votes)
AGI will drastically increase economies of scale 2019-06-07T23:17:38.694Z · score: 42 (16 votes)
How to find a lost phone with dead battery, using Google Location History Takeout 2019-05-30T04:56:28.666Z · score: 57 (31 votes)
Where are people thinking and talking about global coordination for AI safety? 2019-05-22T06:24:02.425Z · score: 100 (36 votes)
"UDT2" and "against UD+ASSA" 2019-05-12T04:18:37.158Z · score: 49 (16 votes)
Disincentives for participating on LW/AF 2019-05-10T19:46:36.010Z · score: 81 (36 votes)
Strategic implications of AIs' ability to coordinate at low cost, for example by merging 2019-04-25T05:08:21.736Z · score: 57 (23 votes)
Please use real names, especially for Alignment Forum? 2019-03-29T02:54:20.812Z · score: 40 (13 votes)
The Main Sources of AI Risk? 2019-03-21T18:28:33.068Z · score: 78 (33 votes)
What's wrong with these analogies for understanding Informed Oversight and IDA? 2019-03-20T09:11:33.613Z · score: 39 (9 votes)
Three ways that "Sufficiently optimized agents appear coherent" can be false 2019-03-05T21:52:35.462Z · score: 69 (18 votes)
Why didn't Agoric Computing become popular? 2019-02-16T06:19:56.121Z · score: 54 (16 votes)
Some disjunctive reasons for urgency on AI risk 2019-02-15T20:43:17.340Z · score: 38 (11 votes)
Some Thoughts on Metaphilosophy 2019-02-10T00:28:29.482Z · score: 57 (16 votes)
The Argument from Philosophical Difficulty 2019-02-10T00:28:07.472Z · score: 49 (15 votes)
Why is so much discussion happening in private Google Docs? 2019-01-12T02:19:19.332Z · score: 87 (26 votes)
Two More Decision Theory Problems for Humans 2019-01-04T09:00:33.436Z · score: 59 (20 votes)
Two Neglected Problems in Human-AI Safety 2018-12-16T22:13:29.196Z · score: 82 (29 votes)
Three AI Safety Related Ideas 2018-12-13T21:32:25.415Z · score: 66 (27 votes)
Counterintuitive Comparative Advantage 2018-11-28T20:33:30.023Z · score: 80 (32 votes)
A general model of safety-oriented AI development 2018-06-11T21:00:02.670Z · score: 71 (24 votes)
Beyond Astronomical Waste 2018-06-07T21:04:44.630Z · score: 106 (48 votes)
Can corrigibility be learned safely? 2018-04-01T23:07:46.625Z · score: 75 (26 votes)
Multiplicity of "enlightenment" states and contemplative practices 2018-03-12T08:15:48.709Z · score: 105 (30 votes)
Online discussion is better than pre-publication peer review 2017-09-05T13:25:15.331Z · score: 18 (15 votes)
Examples of Superintelligence Risk (by Jeff Kaufman) 2017-07-15T16:03:58.336Z · score: 5 (5 votes)
Combining Prediction Technologies to Help Moderate Discussions 2016-12-08T00:19:35.854Z · score: 13 (14 votes)
[link] Baidu cheats in an AI contest in order to gain a 0.24% advantage 2015-06-06T06:39:44.990Z · score: 14 (13 votes)
Is the potential astronomical waste in our universe too small to care about? 2014-10-21T08:44:12.897Z · score: 30 (31 votes)
What is the difference between rationality and intelligence? 2014-08-13T11:19:53.062Z · score: 13 (13 votes)
Six Plausible Meta-Ethical Alternatives 2014-08-06T00:04:14.485Z · score: 53 (54 votes)
Look for the Next Tech Gold Rush? 2014-07-19T10:08:53.127Z · score: 47 (42 votes)
Outside View(s) and MIRI's FAI Endgame 2013-08-28T23:27:23.372Z · score: 16 (19 votes)
Three Approaches to "Friendliness" 2013-07-17T07:46:07.504Z · score: 20 (23 votes)
Normativity and Meta-Philosophy 2013-04-23T20:35:16.319Z · score: 12 (14 votes)
Outline of Possible Sources of Values 2013-01-18T00:14:49.866Z · score: 14 (16 votes)
How to signal curiosity? 2013-01-11T22:47:23.698Z · score: 21 (22 votes)
Morality Isn't Logical 2012-12-26T23:08:09.419Z · score: 19 (35 votes)

Comments

Comment by wei_dai on What Does "Signalling" Mean? · 2020-09-17T02:39:56.119Z · score: 6 (3 votes) · LW · GW

eg, birds warning each other that there is a snake in the grass

Wait, that's not the example on the Wikipedia page, which actually reads: "When an alert bird deliberately gives a warning call to a stalking predator and the predator gives up the hunt, the sound is a signal."

I found this page, which gives a good definition of signalling:

Signalling theory (ST) tackles a fundamental problem of communication: how can an agent, the receiver, establish whether another agent, the signaller, is telling or otherwise conveying the truth about a state of affairs or event which the signaller might have an interest to misrepresent? And, conversely, how can the signaller persuade the receiver that he is telling the truth, whether he is telling it or not? This two-pronged question potentially arises every time the interests between signallers and receivers diverge or collide and there is asymmetric information, namely the signaller is in a better position to know the truth than the receiver is. ST, which is only a little more than 30 years old, has now become a branch of game theory. In economics it was introduced by Michael Spence in 1973. In biology it took off not so much when Amotz Zahavi first introduced the idea in 1975, but since, in 1990, Alan Grafen proved formally that ‘honest’ signals can be an evolutionarily stable strategy.

Typical situations that signalling theory covers have two key features:

  • there is some action the receiver can do which benefits a signaller, whether or not he has the quality k, for instance marry him, but
  • this action benefits the receiver if and only if the signaller truly has k, and otherwise hurts her — for instance, marry an unfaithful man.

So in the alarm example, the quality k is whether the bird has really detected the predator, and the "action" is for the predator to give up the hunt. Later in the Wikipedia article, it says "For example, if foraging birds are safer when they give a warning call, cheats could give false alarms at random, just in case a predator is nearby."
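
To make those two conditions concrete, here's a minimal sketch, with made-up payoff numbers, of when honest signalling is stable, i.e., when only signallers who really have the quality k find it worth signalling:

```python
# Minimal sketch of the costly-signalling logic above; the payoff
# numbers are illustrative assumptions, not from the quoted article.

def separating_equilibrium(benefit, cost_if_k, cost_if_not_k):
    """Honest signalling is stable when signalling pays for a signaller
    who truly has quality k, but not for one who would be faking it."""
    honest_signals = benefit > cost_if_k     # the honest type gains by signalling
    cheat_fakes = benefit > cost_if_not_k    # a faker would also gain
    return honest_signals and not cheat_fakes

# Signalling is cheap for a bird that has really spotted the predator,
# expensive for a bluffer (e.g., false alarms waste foraging time):
print(separating_equilibrium(10, 2, 15))  # True: honest signalling is stable
print(separating_equilibrium(10, 2, 5))   # False: cheats would give false alarms
```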

Comment by wei_dai on Open & Welcome Thread - September 2020 · 2020-09-14T17:54:17.015Z · score: 1 (2 votes) · LW · GW

Did it make you or your classmates doubt your own morality a bit? If not, maybe it needs to be taught along with the outside view and/or the teacher needs to explicitly talk about how the lesson from history is that we shouldn't be so certain about our morality...

Comment by wei_dai on Open & Welcome Thread - September 2020 · 2020-09-13T21:30:52.324Z · score: 8 (5 votes) · LW · GW

I wonder if anyone has ever written a manifesto for moral uncertainty, maybe something along the lines of:

We hold these truths to be self-evident, that we are very confused about morality. That these confusions should be properly reflected as high degrees of uncertainty in our moral epistemic states. That our moral uncertainties should inform our individual and collective actions, plans, and policies. ... That we are also very confused about normativity and meta-ethics and don't really know what we mean by "should", including in this document...

Yeah, I realize this would be a hard sell in today's environment, but what if building Friendly AI requires a civilization sane enough to consider this common sense? I mean, for example, how can it be a good idea to gift a super-powerful "corrigible" or "obedient" AI to a civilization full of people with crazy amounts of moral certainty?

Comment by wei_dai on Open & Welcome Thread - September 2020 · 2020-09-13T08:12:56.765Z · score: 20 (11 votes) · LW · GW

I don't recall learning in school that most of "the bad guys" from history (e.g., Communists, Nazis) thought of themselves as "the good guys" fighting for important moral reasons. It seems like teaching that fact, and instilling moral uncertainty in general into children, would prevent a lot of serious man-made problems (including problems we're seeing play out today). So why hasn't civilization figured that out already? Or is not teaching moral uncertainty some kind of Chesterton's Fence, and teaching it widely would make the world even worse off in expectation?

Comment by wei_dai on "The Holy Grail" of portfolio management · 2020-09-12T16:58:18.091Z · score: 6 (3 votes) · LW · GW

I have changed my mind about shorting stocks, and especially call options. The problem is that sometimes a stock I shorted rises sharply on significant or insignificant news (which I didn't notice myself until the price had already shot up a lot), and I get very worried that maybe it's the next Tesla and will keep rising and wipe out all or a significant fraction of my net worth, so I panic-buy the stock/options to close out the short position. Then a few days later people realize that the news wasn't that significant and the stock falls again. Other than in really exceptional circumstances like the recent Kodak situation, it's perhaps best to leave shorting to professionals who can follow the news constantly and have a large enough equity cushion to ride out any short-term spikes in the stock price. I think my short portfolio is still showing an overall profit, but it's just not worth the psychological stress and the constant attention it demands.

Comment by wei_dai on What should we do once infected with COVID-19? · 2020-08-31T05:52:12.165Z · score: 4 (2 votes) · LW · GW

I haven't been following developments around hydroxychloroquine very closely. My impression from incidental sources is that it's probably worth taking along with zinc, at least early in the course of a COVID-19 infection. I'll probably do a lot more research if and when I actually need to make a decision.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T18:52:16.305Z · score: 2 (1 votes) · LW · GW

With a little patience and a limit order, you can usually get the midpoint between bid and ask, or close to it.

How do you do this when the market is moving constantly, so that you have to keep updating your limit price to track the midpoint? I've been doing this manually, and unless the market just isn't moving for some reason, I often end up chasing the market with my limit price, and then quickly get a fill (probably not that close to the midpoint, although it's hard to tell) when the market turns around and moves into my limit order.
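
Concretely, my manual process looks something like this (the `broker` object below is a hypothetical stand-in, not any real brokerage API):

```python
import time

# Hypothetical sketch of manually "chasing the midpoint" with a limit
# order; `broker`, `get_quote`, `place_limit`, and `modify_limit` are
# made-up stand-ins, not a real API.
def chase_midpoint(broker, symbol, qty, repeg_every=1.0):
    order = None
    while order is None or not order.filled:
        quote = broker.get_quote(symbol)
        mid = (quote.bid + quote.ask) / 2
        if order is None:
            order = broker.place_limit(symbol, qty, price=mid)
        else:
            broker.modify_limit(order, price=mid)  # chase the moving market
        time.sleep(repeg_every)
    return order  # often fills only when the market turns into the order
```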

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T18:30:21.453Z · score: 4 (3 votes) · LW · GW

Good points.

And in a margin account, a broker can typically sell any of your positions (because they’re collateral) to protect its interests, even part of a spread, which can again expose you to delta risk if they don’t close your whole box at once.

I guess technically it's "expose you to gamma risk" rather than delta risk, because the broker would only close one of your positions if doing so reduced margin requirements / increased buying power, and assuming you're overall long the broad market, that can only happen if doing so decreases overall delta risk. Another way to think about it: as far as delta risk goes, it's the same whether they sell one of your options that is long SPX exposure or sell one of your index ETFs. Hopefully they'll be smart enough to sell your index ETFs, since those are much more liquid?

The above is purely theoretical though. Has this actually happened to you, or do you know a case of it actually happening?

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T18:14:55.233Z · score: 3 (2 votes) · LW · GW

Another way to get leverage in a retirement account is with leveraged ETFs.

Yeah, and another way I realized after I wrote my comment is that you can also buy stock index futures contracts in IRA accounts. I forget the exact number, but I think you can get around 5x maximum leverage that way. Compared to leveraged ETFs this should incur lower expenses and let you choose your own rebalancing schedule for a better tradeoff between risk and trading costs (at the cost of having to do your own rebalancing).
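
As a rough sketch of the leverage arithmetic (the index level and margin figure below are illustrative assumptions; actual requirements vary by broker and over time):

```python
# E-mini S&P 500 futures have a $50-per-point multiplier, so one
# contract controls a large notional position relative to the cash
# set aside for it. All numbers here are illustrative assumptions.
index_level = 3400                   # assumed S&P 500 level
multiplier = 50                      # $ per index point (E-mini contract)
notional = index_level * multiplier  # $170,000 of market exposure

margin_per_contract = 35_000         # assumed broker requirement in an IRA
print(f"{notional / margin_per_contract:.1f}x leverage")  # ~4.9x
```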

Also after writing my comment, I realized that leveraged CEFs may carry a risk of deleveraging quickly on the way down (because they're forced by law or regulation not to exceed some maximum leverage) and then releveraging slowly on the way up (because they're afraid of being forced to deleverage again), which means they could systematically capture more downside than upside. I should probably research this more before putting a lot of money into leveraged CEFs.

I’m still interested in these CEFs for diversification though, how do you find these?

SeekingAlpha.com has a CEF section if you want to look for other people's recommendations. CEFAnalyzer.com and CEFConnect.com have screeners you can use to find what you want on your own.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T17:07:41.186Z · score: 5 (3 votes) · LW · GW
  1. Look for sectors that crash more than they should in a market downturn, due to correlated forced deleveraging, and load up on them when that happens. The energy midstream/MLP sector is a good recent example: a lot of those stocks were held in closed-end funds partly for tax reasons, and those funds all tend to use leverage. Because they have a maximum leverage ratio that they're not allowed to exceed, they were forced to deleverage during the March crash, which caused further price drops and more deleveraging, and so on.
Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T16:47:23.326Z · score: 7 (4 votes) · LW · GW

What are some reputable activist short-sellers?

I'm reluctant to give out specific names because I'm still doing "due diligence" on them myself. But generally, try to find activist short-sellers who have a good track record in the past, and read/listen to some of their interviews/reports/articles to see how much sense they make.

Where do you go to identify Robinhood bubbles?

I was using Robintrack.net, but it seems that Robinhood has stopped providing the underlying data. So now I've set up a stock screener to look for big recent gains; I then check whether the stock has any recent news to justify the rally, and check places like SeekingAlpha, Reddit, and StockTwits to see what people are saying about it. Also, just follow general market news, since really extreme cases like Hertz will be reported.

I guess this question is really a general question about where you go for information about the market, in a general sense.

Podcasts seem to be a good source, especially ones that interview a variety of guests so I can get diverse perspectives without seeking them out myself. I currently follow "Real Vision Daily", "Macro Voices", and "What Goes Up".

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T16:19:25.691Z · score: 2 (1 votes) · LW · GW

Note on 5: Before you try this, make sure you understand what you're getting into and the risks involved. (There are rarely completely riskless arbitrage opportunities, and this isn't one of them.)

  1. Stock borrowing cost might be the biggest open secret that few investors know about. Before buying or shorting any individual stock, check its borrowing cost and "utilization ratio" (how much of the stock available to borrow has already been lent out to short sellers) using Interactive Brokers' Trader Workstation. If the borrowing cost is high and the utilization ratio isn't very low (not sure why that combination happens sometimes), that means some people are willing to pay a high cost per day to hold a short position in the stock, which means it will very likely tank in the near future. But if the utilization ratio is very high, near 100%, then no new short selling can take place, so the stock can easily zoom up further due to the lack of short-selling pressure and the potential for a short squeeze, before finally tanking.

If you do decide you want to bet against the short sellers and buy the stock anyway, at least hold the position at a broker that offers a Fully Paid Lending Program, so you can capture part of the borrowing cost that short sellers pay.
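
To put these borrowing costs in dollar terms, here's a quick illustrative calculation (the rate and day-count convention are assumptions; they vary by broker):

```python
# What a high borrow rate costs a short seller per day. Borrow fees
# are quoted as an annualized rate on the value of the shorted stock;
# the numbers below are illustrative.
position_value = 10_000    # $ of stock sold short
borrow_rate = 0.50         # 50%/yr, not unusual for hard-to-borrow names

daily_fee = position_value * borrow_rate / 360  # many brokers use a 360-day year
print(f"${daily_fee:.2f}/day to keep the short open")  # ~$13.89/day
```

A Fully Paid Lending Program passes part of that daily fee back to whoever is long the shares.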

Comment by wei_dai on What posts on finance would your find helpful or interesting? · 2020-08-22T09:04:06.749Z · score: 16 (10 votes) · LW · GW

Technical analysis, momentum, trend following, and the like, from an EMH-informed perspective.

I've been dismissive of anything that looks at past price information. But markets are clearly sometimes inefficient because short selling is constrained by the availability and cost of borrowing stock (which lets prices stay too high, which in turn can cause short squeezes), and this can "infect" the market with inefficiency at other times as well (because potential short sellers are afraid of being short squeezed). So there's no longer an (obvious) theoretical reason to dismiss technical analysis and the like.

Comment by wei_dai on "The Holy Grail" of portfolio management · 2020-08-22T08:25:44.158Z · score: 10 (6 votes) · LW · GW

Recently I started thinking that it's a good idea to add short positions (on individual stocks or call options) to one's portfolio. Then you can win if either the short thesis turns out to be correct (e.g., the company really is faking its profits), or the market tanks as a whole and the short positions act as a hedge. I wrote about some ways to find short ideas in a recent comment.

Question for the audience: do you know of a good way to measure the worst case correlation?

Not sure if this is the best way, but I've just been looking at the drawdown percentage from the Feb top to the March bottom of each asset.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T07:55:13.678Z · score: 9 (6 votes) · LW · GW

Possible places to look for alpha:

  1. Articles on https://seekingalpha.com/. Many authors there give free ideas/tips as advertisement for their paid subscription services. The comments section of articles often have useful discussions.
  2. Follow the quarterly reports of small actively managed funds (or the portfolio/holdings reports on Morningstar, which show fund portfolio changes) to get stock ideas.
  3. Follow reputable activist short-sellers on Twitter. (They find companies that commit fraud, like Luckin Coffee or Wirecard, and report on them after shorting their stock.)
  4. Look for Robinhood bubble stocks (famous examples being Nikola, Hertz and Kodak) and short them as they start to burst. (But watch out for Hard To Borrow fees, and early assignment risk if you're shorting call options.)
  5. Arbitrage between warrants and call options for the same stock. Robinhood users can't buy warrants but can buy call options, so call options can be way overpriced relative to warrants. (I'm not sure why hedge funds haven't arbitraged away the mispricings already, but maybe it's because options markets are small/illiquid enough that it's hard to make enough money to be worthwhile for them.)
Comment by wei_dai on The Wrong Side of Risk · 2020-08-16T09:13:20.612Z · score: 9 (5 votes) · LW · GW

Recently I had the epiphany that an investor's real budget constraint isn't how much money they have (with portfolio margin you can get 6x or even 12x leverage) but how much risk-taking capacity they have. So another way of making what I think is your main point is that the market pays you to take (certain kinds of) risks, so don't waste your risk-taking capacity by taking too little risk. But one should be smart and try to figure out where the market is paying the most per unit of risk.
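
(As a toy illustration, one crude way to compare where the market pays the most per unit of risk is expected excess return divided by volatility; all the numbers below are made up:)

```python
# Crude comparison of how much each position "pays per unit of risk":
# expected excess return divided by volatility (a Sharpe-style ratio).
# All figures are made-up assumptions for illustration only.
candidates = {
    "total market portfolio": (0.06, 0.16),  # (expected excess return, volatility)
    "selling index puts":     (0.08, 0.20),
    "long volatility hedge":  (-0.02, 0.25),
}
for name, (excess, vol) in candidates.items():
    print(f"{name}: {excess / vol:+.2f} per unit of risk")
```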

Standard finance theory says the market should pay you the most for taking "market risk", i.e., holding the total market portfolio. But the total market portfolio includes no options, because short and long options cancel each other out, summing to zero. So the only way it makes sense for someone to hold an options position is if they differ from the average investor in some way, and figuring out how they differ should be the starting point for deciding what kind of options positions to hold, right?

In this case, it seems you're saying that the average investor manages someone else's money, which makes them want to buy puts. They have to pay extra for this because most assets are managed by investors like this, so there's a lot of demand for and little supply of puts. If you're not like this, you can therefore earn above-market risk-adjusted returns by selling puts to meet this demand. (I'm not totally sure this is true empirically, but I wanted to spell out more of the reasoning I think you're using.)

Comment by wei_dai on Alignment By Default · 2020-08-15T19:31:02.816Z · score: 6 (3 votes) · LW · GW

So similarly, a human could try to understand Alice's values in two ways. The first, equivalent to what you describe here for AI, is to just apply whatever learning algorithm their brain uses when observing Alice, and form an intuitive notion of "Alice's values". And the second is to apply explicit philosophical reasoning to this problem. So sure, you can possibly go a long way towards understanding Alice's values by just doing the former, but is that enough to avoid disaster? (See Two Neglected Problems in Human-AI Safety for the kind of disaster I have in mind here.)

(I keep bringing up metaphilosophy, but I'm pretty much resigned to living in a part of the multiverse where civilization will just throw the dice and bet on AI safety not depending on solving it. What hope is there for our civilization to do what I think is the prudent thing, when no professional philosophers, even ones in EA who are concerned about AI safety, ever talk about it?)

Comment by wei_dai on Alignment By Default · 2020-08-13T03:28:25.951Z · score: 12 (6 votes) · LW · GW

To help me check my understanding of what you're saying, we train an AI on a bunch of videos/media about Alice's life, in the hope that it learns an internal concept of "Alice's values". Then we use SL/RL to train the AI, e.g., give it a positive reward whenever it does something that the supervisor thinks benefits Alice's values. The hope here is that the AI learns to optimize the world according to its internal concept of "Alice's values" that it learned in the previous step. And we hope that its concept of "Alice's values" includes the idea that Alice wants AIs, including any future AIs, to keep improving their understanding of Alice's values and to serve those values, and that this solves alignment in the long run.

Assuming the above is basically correct, this (in part) depends on the AI learning a good enough understanding of "improving understanding of Alice's values" in step 1. This in turn (assuming "improving understanding of Alice's values" involves "using philosophical reasoning to resolve various confusions related to understanding Alice's values, including Alice's own confusions") depends on the AI being able to learn a correct or good enough concept of "philosophical reasoning" from unsupervised training. Correct?

If AI can learn "philosophical reasoning" from unsupervised training, GPT-N should be able to do philosophy (e.g., solve open philosophical problems), right?

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-13T02:04:14.609Z · score: 4 (2 votes) · LW · GW

I don't have a detailed analysis to back it up, but my guess is that CEFs are probably superior, because call options don't pay dividends, so you're not getting as much tax benefit as with CEFs. It's also somewhat tricky to get good pricing on options (the bid-ask spread tends to be much wider than on regular securities, so you get a terrible deal if you just place market orders).

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-13T01:33:44.614Z · score: 5 (3 votes) · LW · GW

For people in the US, the best asset class to put in a tax-free or tax-deferred account seems to be closed-end funds (CEFs) that invest in REITs. REITs because they pay high dividends, which would usually be taxed as non-qualified dividends, and CEFs (instead of ETFs or open-end mutual funds) because these funds can use leverage (up to 50%), and it's otherwise hard or impossible to obtain leverage in a tax-free/deferred account (such accounts usually don't allow margin). (The leverage helps maximize the value of the tax-freeness or deferral, but if you don't like the added risk, you can compensate by using less leverage or investing in less risky assets in your taxable accounts.)

As an additional bonus, CEFs usually trade at a premium or discount to their net asset value (NAV), and those premiums/discounts show an (EMH-violating) tendency to revert to the mean, so you can obtain alpha by buying CEFs at wider-than-historical-average discounts and waiting for the mean reversion. There's a downside in that CEFs tend to charge active management fees, but the leverage, discount, and mean reversion should more than make up for that.
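
Here's a minimal sketch of that screen, assuming you have the fund's daily premium/discount history as a pandas Series:

```python
import pandas as pd

def discount_zscore(history: pd.Series, lookback: int = 252) -> float:
    """How unusual is today's premium/discount relative to the fund's
    own recent history? `history` holds daily premium/discount values,
    e.g. -0.08 for an 8% discount. A strongly negative z-score flags an
    unusually wide discount, i.e., a mean-reversion buy candidate."""
    window = history.tail(lookback)
    return (history.iloc[-1] - window.mean()) / window.std()
```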

Comment by wei_dai on Property as Coordination Minimization · 2020-08-06T08:20:59.037Z · score: 2 (1 votes) · LW · GW

Many different landlords can make many different decisions, whereas one Housing Bureau will either make one decision for everyone, or make unequal decisions in a corrupt way.

In our economy we have all three of:

  1. individual landlords making decisions about property that they directly own
  2. groups of people pooling capital to buy property, then hiring professional managers to make decisions on behalf of the group (cf. REITs)
  3. property (e.g., public housing projects, parks) that is owned by various government departments/agencies, and managed by bureaucrats

The point is that 2 and 3 aren't that different in terms of "corruption". In both cases, we (at least in theory) made a deliberate trade-off, accepting greater principal-agent costs ("corruption") for some expected benefit the arrangement brings, e.g., greater diversification / spreading of risk in the case of 2. Why isn't the same true for letting the government own everything, or a lot more things? (Not sure who you're arguing against, but presumably there's a steelman version of them that argues we should accept the "corruption" in that case too because the benefits are greater.)

the people who would rent out the additional floors I add to the house generally don’t comment at the public meeting, whereas the retiree who would have to deal with more cars on the road or a blocked view of the Bay does.

This isn't as bad as it sounds, because one of these is a priced externality, and the other one is an unpriced externality. That is, since you would get rent from the renter, you already have an incentive to speak on their behalf at the meeting. The alternative to such meetings is either you just ignore the unpriced externality (the retiree's blocked view) when you make your decision or the externality has to be handled some other way, like the retiree paying you for a "no additional floor" covenant, or suing you through the court system, both of which also involve coordination costs (that can add up quickly when there are many externalities). Again it's not that clear, at least from this post, that the current system (where everyone who may be affected speaks at the meeting and then some bureaucrat makes a decision that at least supposedly takes all of them into account) isn't actually optimal given the constraints we face.

ETA: Suppose there is in fact a bunch of negative externalities that together outweigh the benefits of building another floor. Without this meeting, how would all those affected people realistically coordinate (supposing none of them individually has enough incentive) to stop you?

Comment by wei_dai on Predictions for GPT-N · 2020-07-30T03:06:56.151Z · score: 22 (8 votes) · LW · GW

Anyone want to predict when we'll reach the same level of translation and other language capability as GPT-3 via iterated amplification or another "aligned" approach? (How far behind is alignment work compared to capability work?)

Comment by wei_dai on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-13T04:58:48.234Z · score: 14 (9 votes) · LW · GW

Another big update for me is that according to modern EMH, big stock market movements mostly reflect changes in risk premium, rather than changes in predicted future cash flows. (The recent COVID-19 crash however was perhaps driven even more by liquidity needs.)

Comment by wei_dai on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-13T04:51:14.151Z · score: 15 (9 votes) · LW · GW

My understanding of banking and monetary policy was pretty wrong until very recently. Apparently the textbook I read in the 90s was explaining how banking and central banking worked in the 50s. John Wentworth pointed me to a Coursera course by Perry Mehrling, and here are the same lectures without having to register for the course.

Comment by wei_dai on Open & Welcome Thread - July 2020 · 2020-07-12T23:39:07.131Z · score: 32 (15 votes) · LW · GW

Many AI issues will likely become politicized. (For example, how much should we prioritize safety versus economic growth and military competitiveness? Should AIs be politically neutral, or be explicitly taught "social justice activism" before they're allowed to be deployed or used by the public?) This seems to be coming up very quickly, and we are so not prepared, both as a society and as an online community. For example, I want to talk about some of these issues here, but we haven't built up the infrastructure to do so safely.

Comment by wei_dai on High Stock Prices Make Sense Right Now · 2020-07-07T05:58:26.740Z · score: 4 (2 votes) · LW · GW

In case people want to know more about this stuff, most of my understanding comes from Perry Mehrling’s coursera course (which I recommend)

Thanks! I've been hoping to come across something like this, to learn about the details of the modern banking system.

Comment by wei_dai on Cryonics without freezers: resurrection possibilities in a Big World · 2020-07-06T18:55:20.882Z · score: 4 (2 votes) · LW · GW

I agree recent events don't justify a huge update by themselves if one started with a reasonable prior. It's more that I somehow failed to consider the possibility of that scenario, the recent events made me consider it, and that's why it triggered a big update for me.

Comment by wei_dai on High Stock Prices Make Sense Right Now · 2020-07-05T06:17:19.446Z · score: 5 (3 votes) · LW · GW

The institutions which own Treasuries (e.g. banks) do so with massive amounts of cheap leverage, and those are the only assets they’re allowed to hold with that much leverage.

I'm curious about this. What source of leverage do banks have access to that costs less than the interest on Treasuries? (I know there are retail deposit accounts that pay almost no interest, but I think those are actually pretty expensive for banks to obtain, because they have to maintain a physical presence to attract those customers. I doubt banks could make a profit if they just put those deposits into Treasuries. You must be talking about something else?)

Comment by wei_dai on Cryonics without freezers: resurrection possibilities in a Big World · 2020-07-01T09:46:26.878Z · score: 7 (5 votes) · LW · GW

My subjective anticipation is mollified by the thought that I’ll probably either never experience dying or wake up to find that I’ve been in an ancestral simulation, which leaves the part of me that wants to prevent all the empty galaxies from going to waste to work in peace. :)

Update: Recent events have made me think that the fraction of advanced civilizations in the multiverse that are sane may be quite low. (It looks like our civilization will probably build a superintelligence while suffering from serious epistemic pathologies, and this may be typical for civilizations throughout the multiverse.) So now I'm pretty worried about "waking up" in some kind of dystopia (powered or controlled by a superintelligence with twisted beliefs or values), either in my own future lightcone or in another universe.

Actually, I probably shouldn't have been so optimistic even before the recent events...

Comment by wei_dai on Self-sacrifice is a scarce resource · 2020-06-28T22:49:34.881Z · score: 9 (6 votes) · LW · GW

If you find yourself doing too much self-sacrifice, injecting a dose of normative and meta-normative uncertainty might help. (I've never had this problem, and I attribute it to my own normative/meta-normative uncertainty. :) Not sure which arguments you heard that made you extremely self-sacrificial, but try Shut Up and Divide? if it was "Shut Up and Multiply", or Is the potential astronomical waste in our universe too small to care about? if it was "Astronomical Waste".

Comment by wei_dai on Atemporal Ethical Obligations · 2020-06-27T00:18:46.991Z · score: 24 (10 votes) · LW · GW

Thus, in order to be truly good people, we must take an active role, predict the future of moral progress, and live by tomorrow’s rules, today.

Suppose you think X is what is actually moral (or is a distribution representing your moral uncertainty after doing your best to try to figure out what is actually moral) and Y is what you expect most people will recognize as moral in the future (or is a distribution representing your uncertainty about that). Are you proposing to follow Y instead of X? (It sounds that way but I want to make sure I'm not misunderstanding.)

Assuming the answer is yes, is that because you think that trying to predict what most people will recognize as moral is more likely to lead to what is actually moral than directly trying to figure it out yourself? Or is it because you want to be recognized by future people as being moral and following Y is more likely to lead to that result?

Comment by wei_dai on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-25T11:19:10.589Z · score: 16 (10 votes) · LW · GW

But if you claim to be charitable and openminded, except when confronted by a test that affects your own community, then you’re using those words as performative weapons, deliberately or not.

I guess "charitable" here is referring to the principle of charity, but I think that is supposed to apply in a debate or discussion, to make them more productive and less likely to go off the rails. But in this case there is no debate, as far as I can tell. The NYT reporter or others representing NYT have not given a reason for doxxing Scott (AFAIK, except to cite a "policy" for doing so, but that seems false because there have been plenty of times when they've respected their subjects' wishes to remain pseudonymous), so what are people supposed to be charitable about?

If instead the intended meaning of "charitable and openminded" is something like "let's remain uncertain about NYT's motives for doxxing Scott until we know more", then the absence of any "principled reasons" provided so far already seems like pretty strong evidence for ruling out certain motives, leaving mostly "dumb mistake" and "evil or selfish" as the remaining possibilities. Given that, I'm not sure what people are doing that Richard thinks fails the test of being "charitable and openminded", especially given that NYT has shown no willingness to engage in discussion so far, and the time-sensitive nature of the situation.

Comment by wei_dai on Open & Welcome Thread - February 2020 · 2020-06-25T05:54:58.600Z · score: 13 (7 votes) · LW · GW

Another reason to attribute part of the gains (from betting on the coronavirus market crash) to luck, from Rob Henderson's newsletter (which, BTW, I highly recommend):

The geneticist Razib Khan has said that the reason the U.S. took so long to respond to the virus is that Americans do not consider China to be a real place. For people in the U.S., “Wuhan is a different planet, mentally.” From my view, it didn’t seem “real” to Americans (or Brits) until Italy happened.

Not only have I lived in China, my father was born in Wuhan and I've visited there multiple times.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-06-19T06:14:26.842Z · score: 4 (2 votes) · LW · GW

Thanks for the feedback. I guess I was partly expecting people to learn about portfolio margin and box spread options for other reasons (so the additional work to pull equity out into CDs isn't that much), and partly forgot how difficult it might be to learn about these things. Maybe there's an opportunity for someone to start a business doing this for their customers...

BTW, you'll have to pass a multiple-choice test to be approved for PM at TDA, which can be tough. Let me know if you need any help with that. Also, I've been getting a 0.5%-0.55% interest rate from box spreads recently, while CDs are currently at 1.25%-1.3%. CDs were around 1.5% when I first wrote this, so it was significantly more attractive then. I'd say it's still worth it, because once you learn these things you can capture the extra return every year without much additional work, and over several decades it can add up to a lot.
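
A minimal sketch of the arithmetic, using the rates above (per $100k of borrowed cash):

```python
# Carry pickup from borrowing via a box spread and parking the cash in
# brokered CDs, using the rates mentioned in the comment above.
borrowed = 100_000
box_rate = 0.0055   # ~0.55%/yr implied borrowing cost of the box spread
cd_rate = 0.0125    # ~1.25%/yr CD yield

print(f"${borrowed * (cd_rate - box_rate):,.0f}/yr")  # ~$700/yr per $100k
```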

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-19T04:51:17.599Z · score: 16 (9 votes) · LW · GW

Personal update: Over the last few months, I've become much less worried that I have a tendency to be too pessimistic (because I frequently seem to be the most pessimistic person in a discussion). Things I was worried about more than others (coronavirus pandemic, epistemic conditions getting significantly worse) have come true, and when I was wrong in a pessimistic direction, I updated quickly after coming across a good argument (so I think I was wrong just because I didn't think of that argument, rather than due to a tendency to be pessimistic).

Feedback welcome, in case I've updated too much about this.

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-17T10:04:32.620Z · score: 7 (4 votes) · LW · GW

I should also address this part:

For example, if the threat model is that they just adopt the dominant ideology around them (which happens to be false on many points), then that results in them having false beliefs (#1), but may not cause any harm to come to them from it (#3) (and may even be to their benefit, in some ways).

Many Communist true believers in China met terrible ends as waves of "political movements" swept through the country after the CCP takeover, pitting one group against another, all vying to be the most "revolutionary". (One of my great-grandparents could have escaped but stayed in China, because he was friends with a number of high-level Communists and believed in their cause. He ended up committing suicide when his friends lost power to other factions and the government turned on him.)

More generally, ideology can change so quickly that it's very difficult to follow closely enough to stay safe, and even if you followed the dominant ideology perfectly, you'd still be vulnerable to the next "vanguard" who pushes the ideology in a new direction in order to take power. So even if "adopt the dominant ideology" makes sense as a defensive strategy for living in some society, you'd still really want to avoid getting indoctrinated into being a true believer, so that you can apply rational analysis to the political struggles that will inevitably follow.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-06-17T07:06:25.752Z · score: 4 (2 votes) · LW · GW

I came here to say that I'm surprised this advice isn't at the top of every list of personal investment advice. Almost 1% of risk-free extra return per year, on top of whatever else you're getting from your investments. Isn't it crazy that this is possible when 10-year Treasuries are yielding only ~0.7%? How is every financial columnist not shouting this from their rooftops?

Then I noticed that it sits at the bottom of my own advice list, not having received a single upvote. What gives, LW?

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-17T05:41:06.067Z · score: 6 (4 votes) · LW · GW

I guess I'm worried about

  1. They will "waste their life", for both the real opportunity cost and the potential regret they might feel if they realize the error later in life.
  2. My own regret in knowing that they've been indoctrinated into believing wrong things (or into having unreasonable certainty about potentially wrong things), when I probably could have done something to prevent that.
  3. Their views making family life difficult. (E.g., if they were to secretly record family conversations and post them on social media as examples of wrongthink, like some kids have done.)

Can't really think of any mitigations for these aside from trying not to let them get indoctrinated in the first place...

Comment by wei_dai on Mod Notice about Election Discussion · 2020-06-17T03:18:49.517Z · score: 3 (2 votes) · LW · GW

You mean tag people so they get notified, like on FB? I don't think you can. Just send them a PM with the link, I guess.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-06-13T01:40:34.860Z · score: 3 (2 votes) · LW · GW

Yeah, I've become suspicious of it myself, which is why I retracted the comment. (It should show as struck out?)

Comment by wei_dai on Covid-19 6/11: Bracing For a Second Wave · 2020-06-12T09:05:24.323Z · score: 16 (6 votes) · LW · GW

Thanks for writing these.

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-11T06:20:22.643Z · score: 6 (2 votes) · LW · GW

I was initially pretty excited about the idea of getting another passport, but on second thought I'm not sure it's worth the substantial costs involved. Today people aren't losing their passports or having their movements restricted because they (or their family members) expressed "wrong" ideas; they're just(!) losing their jobs, being publicly humiliated, etc. That is more the kind of risk I want to hedge against (with regard to AI), especially for my family. If the political situation deteriorates even further, to where the US government puts official sanctions on people like me, humanity is probably just totally screwed as a whole, and having another passport isn't going to help me that much.

Comment by wei_dai on ESRogs's Shortform · 2020-06-10T06:24:30.152Z · score: 6 (3 votes) · LW · GW

sell a long-dated $5 call

This page explains why the call option would probably get exercised early and ruin your strategy:

ITM calls get assigned in a hard to borrow stock all the time

The second most common form of assignment is in a hard to borrow stock. Since the ability to short the stock is reduced, selling an ITM call option is the next best thing. A liquidity provider might have to pay a negative cost of carry just to hold a short stock position. Since the market on balance wants to short the stock, the value of the ITM call gets reduced relative to the underlying stock price. Moreover, a liquidity provider might have to exercise all their long calls to come into compliance with REG SHO. That means the short call seller gets assigned.
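
In other words (my rough understanding, with illustrative numbers): a liquidity provider who is long the ITM call and short the stock is paying borrow fees on the short; exercising the call closes that short, so it becomes worth doing as soon as the borrow fees saved exceed the time value given up:

```python
# Rough early-exercise logic for a deep ITM call on a hard-to-borrow
# stock (illustrative numbers). Exercising forfeits the option's
# remaining time value but stops the borrow-fee bleed on the hedge.
stock_price = 5.00
time_value = 0.10        # $ of extrinsic value left in the call
borrow_rate = 1.00       # 100%/yr borrow fee
days_to_expiry = 30

borrow_saved = stock_price * borrow_rate * days_to_expiry / 360
print(borrow_saved > time_value)  # True -> expect early assignment
```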

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-08T06:28:13.829Z · score: 9 (3 votes) · LW · GW

Do you think that having your kids consume rationalist and effective altruist content and/or doing homeschooling/unschooling are insufficient for protecting your kids against mind viruses?

Homeschooling takes up too much of my time, and I don't think I'm very good at being a teacher (having been forced to try it during the current school closure). Unschooling seems too risky. (Maybe it would produce great results, but my wife would kill me if it didn't. :) "Consume rationalist and effective altruist content" makes sense, but some more specific advice would be helpful, like what material to introduce, when, and how to encourage their interest if they're not immediately interested. Have any parents done this and can share their experience?

and not talking to other kids (I didn’t have any friends from US public school during grades 4 to 11)

Yeah, that might have been a contributing factor for me as well, but my kids seem a lot more social than I was.

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-08T05:00:19.556Z · score: 32 (15 votes) · LW · GW

Please share ideas/articles/resources for immunizing one's kids against mind viruses.

I think I was lucky myself in that I was partially indoctrinated in Communist China, then moved to the US before middle school, which made it hard for me to strongly believe any particular religion or ideology. Plus the US schools I went to didn't seem to emphasize ideological indoctrination as much as schools currently do. Plus there was no social media pushing students to express the same beliefs as their classmates.

What can I do to help prepare my kids? (If you have specific ideas or advice, please mention what age or grade they are appropriate for.)

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-05T07:05:59.431Z · score: 13 (9 votes) · LW · GW

Or you can use Bypass Paywalls with Firefox or Chrome.

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-05T04:31:25.688Z · score: 11 (5 votes) · LW · GW

Get your exit plan ready to execute on very short notice, and understand that it’ll be costly if you do it.

What would be a good exit plan? If you've thought about this, can you share your plan and/or discuss (privately) my specific situation?

Do what you can to keep your local environment sane, so you don’t have to run, and so the world gets back onto a positive trend.

How? I've tried to do this a bit, but it takes a huge amount of time, effort, and personal risk, and whatever gains I manage to eke out seem highly ephemeral at best. It doesn't seem like a very good use of my time when I could spend it on something like AI safety instead. Have you been doing this yourself, and if so, what has been your experience?

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-04T08:55:49.188Z · score: 25 (14 votes) · LW · GW

You'll have to infer it from the fact that I didn't explain more and am not giving a straight answer now. Maybe I'm being overly cautious, but my parents and other relatives lived through (and suffered in) the Cultural Revolution and other "political movements", and wouldn't it be silly if I failed to "expect the Spanish Inquisition" despite that?

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-03T22:52:46.908Z · score: 33 (17 votes) · LW · GW
  1. People I followed on Twitter for their credible takes on COVID-19 now sound insane. Sigh...

  2. I feel like I should do something to prepare (e.g., hedge risks to me and my family) in advance of AI risk being politicized, but I'm not sure what. The obvious idea is to stop writing under my real name, but the cost/benefit doesn't seem worth it.

Comment by wei_dai on Inaccessible information · 2020-06-03T08:23:13.381Z · score: 7 (4 votes) · LW · GW

or we need to figure out some way to access the inaccessible information that “A* leads to lots of human flourishing.”

To help check my understanding, your previously described proposal to access this "inaccessible" information involves building corrigible AI via iterated amplification, then using that AI to capture "flexible influence over the future", right? Have you become more pessimistic about this proposal, or are you just explaining some existing doubts? Can you explain in more detail why you think it may fail?

(I'll try to guess.) Is it that corrigibility is about short-term preferences-on-reflection and short-term preferences-on-reflection may themselves be inaccessible information?

I can pay inaccessible costs for an accessible gain — for example leaking critical information, or alienating an important ally, or going into debt, or making short-sighted tradeoffs. Moreover, if there are other actors in the world, they can try to get me to make bad tradeoffs by hiding real costs.

This seems similar to what I wrote in an earlier thread: "What if the user fails to realize that a certain kind of resource is valuable? (By “resources” we’re talking about things that include more than just physical resources, like control of strategic locations, useful technologies that might require long lead times to develop, reputations, etc., right?)" At the time, I thought you proposed to solve this problem by using the user's "preferences-on-reflection", which presumably would correctly value all resources/costs. So again, is it just that "preferences-on-reflection" may itself be inaccessible?

Overall I don’t think it’s very plausible that amplification or debate can be a scalable AI alignment solution on their own, mostly for the kinds of reasons discussed in this post — we will eventually run into some inaccessible knowledge that is never produced by amplification, and so never winds up in your distilled agents.

Besides the above, can you give some more examples of (what you think may be) "inaccessible knowledge that is never produced by amplification"?

(As overall feedback: most of the post discusses inaccessible information without talking about amplification, and then the last section brings amplification in very quickly; it's not easy to see how the two ideas relate without more explanation and examples.)