Posts

A tale from Communist China 2020-10-18T17:37:42.228Z
Everything I Know About Elite America I Learned From ‘Fresh Prince’ and ‘West Wing’ 2020-10-11T18:07:52.623Z
Tips/tricks/notes on optimizing investments 2020-05-06T23:21:53.153Z
Have epistemic conditions always been this bad? 2020-01-25T04:42:52.190Z
Against Premature Abstraction of Political Issues 2019-12-18T20:19:53.909Z
What determines the balance between intelligence signaling and virtue signaling? 2019-12-09T00:11:37.662Z
Ways that China is surpassing the US 2019-11-04T09:45:53.881Z
List of resolved confusions about IDA 2019-09-30T20:03:10.506Z
Don't depend on others to ask for explanations 2019-09-18T19:12:56.145Z
Counterfactual Oracles = online supervised learning with random selection of training episodes 2019-09-10T08:29:08.143Z
AI Safety "Success Stories" 2019-09-07T02:54:15.003Z
Six AI Risk/Strategy Ideas 2019-08-27T00:40:38.672Z
Problems in AI Alignment that philosophers could potentially contribute to 2019-08-17T17:38:31.757Z
Forum participation as a research strategy 2019-07-30T18:09:48.524Z
On the purposes of decision theory research 2019-07-25T07:18:06.552Z
AGI will drastically increase economies of scale 2019-06-07T23:17:38.694Z
How to find a lost phone with dead battery, using Google Location History Takeout 2019-05-30T04:56:28.666Z
Where are people thinking and talking about global coordination for AI safety? 2019-05-22T06:24:02.425Z
"UDT2" and "against UD+ASSA" 2019-05-12T04:18:37.158Z
Disincentives for participating on LW/AF 2019-05-10T19:46:36.010Z
Strategic implications of AIs' ability to coordinate at low cost, for example by merging 2019-04-25T05:08:21.736Z
Please use real names, especially for Alignment Forum? 2019-03-29T02:54:20.812Z
The Main Sources of AI Risk? 2019-03-21T18:28:33.068Z
What's wrong with these analogies for understanding Informed Oversight and IDA? 2019-03-20T09:11:33.613Z
Three ways that "Sufficiently optimized agents appear coherent" can be false 2019-03-05T21:52:35.462Z
Why didn't Agoric Computing become popular? 2019-02-16T06:19:56.121Z
Some disjunctive reasons for urgency on AI risk 2019-02-15T20:43:17.340Z
Some Thoughts on Metaphilosophy 2019-02-10T00:28:29.482Z
The Argument from Philosophical Difficulty 2019-02-10T00:28:07.472Z
Why is so much discussion happening in private Google Docs? 2019-01-12T02:19:19.332Z
Two More Decision Theory Problems for Humans 2019-01-04T09:00:33.436Z
Two Neglected Problems in Human-AI Safety 2018-12-16T22:13:29.196Z
Three AI Safety Related Ideas 2018-12-13T21:32:25.415Z
Counterintuitive Comparative Advantage 2018-11-28T20:33:30.023Z
A general model of safety-oriented AI development 2018-06-11T21:00:02.670Z
Beyond Astronomical Waste 2018-06-07T21:04:44.630Z
Can corrigibility be learned safely? 2018-04-01T23:07:46.625Z
Multiplicity of "enlightenment" states and contemplative practices 2018-03-12T08:15:48.709Z
Online discussion is better than pre-publication peer review 2017-09-05T13:25:15.331Z
Examples of Superintelligence Risk (by Jeff Kaufman) 2017-07-15T16:03:58.336Z
Combining Prediction Technologies to Help Moderate Discussions 2016-12-08T00:19:35.854Z
[link] Baidu cheats in an AI contest in order to gain a 0.24% advantage 2015-06-06T06:39:44.990Z
Is the potential astronomical waste in our universe too small to care about? 2014-10-21T08:44:12.897Z
What is the difference between rationality and intelligence? 2014-08-13T11:19:53.062Z
Six Plausible Meta-Ethical Alternatives 2014-08-06T00:04:14.485Z
Look for the Next Tech Gold Rush? 2014-07-19T10:08:53.127Z
Outside View(s) and MIRI's FAI Endgame 2013-08-28T23:27:23.372Z
Three Approaches to "Friendliness" 2013-07-17T07:46:07.504Z
Normativity and Meta-Philosophy 2013-04-23T20:35:16.319Z
Outline of Possible Sources of Values 2013-01-18T00:14:49.866Z

Comments

Comment by wei_dai on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-21T02:39:11.977Z · LW · GW

You mention "defenses will improve" a few times. Can you go into more detail about this? What kind of defenses do you have in mind? I keep thinking that in the long run, the only defenses are either to solve meta-philosophy so our AIs can distinguish between correct arguments and merely persuasive ones and filter out the latter for us (and for themselves), or go into an info bubble with trusted AIs and humans and block off any communications from the outside. But maybe I'm not being imaginative enough.

Comment by wei_dai on Open & Welcome Thread – October 2020 · 2020-10-31T18:04:39.710Z · LW · GW

By "planting flags" on various potentially important and/or influential ideas (e.g., cryptocurrency, UDT, human safety problems), I seem to have done well for myself in terms of maximizing the chances of gaining a place in the history of ideas. Unfortunately, I've recently come to dread more than welcome the attention of future historians. Be careful what you wish for, I guess.

Comment by wei_dai on Open & Welcome Thread – October 2020 · 2020-10-31T17:32:27.430Z · LW · GW

Free speech norms can only last if "fight hate speech with more speech" is actually an effective way to fight hate speech (and other kinds of harmful speech). Rather than being some kind of human universal constant, that's only true in special circumstances, when certain social and technological conditions come together in a perfect storm. That confluence of conditions has now gone away, due in part to technological change, which is why the most recent free speech era in Western civilization is rapidly drawing to an end. Unfortunately, its social scientists failed to appreciate the precious, rare opportunity for what it was, and didn't use it to make enough progress on important social-scientific questions that will once again become taboo to talk about (or already have).

Comment by wei_dai on A tale from Communist China · 2020-10-20T22:51:24.389Z · LW · GW

This ended up being my highest-karma post, which I wasn't expecting, especially as it hasn't been promoted out of "personal blog" and therefore isn't as visible as many of my other posts. (To be fair, "The Nature of Offense" would probably have higher karma if it were posted today, as each vote was only worth one point back then.) Curious what people liked about it, or upvoted it for.

Comment by wei_dai on Open & Welcome Thread – October 2020 · 2020-10-19T18:44:04.828Z · LW · GW

There's a time-sensitive trading opportunity (probably lasting a few days), i.e., to short HTZ because it's experiencing an irrational spike in prices. See https://seekingalpha.com/article/4379637-over-1-billion-hertz-shares-traded-on-friday-because-of-bankruptcy-court-filings for details. Please only do this if you know what you're doing though, for example you understand that HTZ could spike up even more and the consequences of that if it were to happen and how to hedge against it. Also I'm not an investment advisor and this is not investment advice.

Comment by wei_dai on A tale from Communist China · 2020-10-19T07:12:37.020Z · LW · GW

Lessons I draw from this history:

  1. To predict a political movement, you have to understand its social dynamics and not just trust what people say about their intentions, even if they're totally sincere.
  2. Short term trends can be misleading so don't update too much on them, especially in a positive direction.
  3. Lots of people who thought they were on the right side of history actually weren't.
  4. Becoming true believers in some ideology probably isn't good for you or the society you're hoping to help. It's crucial to maintain empirical and moral uncertainties.
  5. Risk tails are fatter than people think.

Comment by wei_dai on Everything I Know About Elite America I Learned From ‘Fresh Prince’ and ‘West Wing’ · 2020-10-19T05:09:23.827Z · LW · GW

Speaking of parents obsessed with getting their kids into an elite university, here's an amazing exposé about a corner of that world that I had little idea existed: "The Mad, Mad World of Niche Sports Among Ivy League–Obsessed Parents", subtitled "Where the desperation of late-stage meritocracy is so strong, you can smell it".

Comment by wei_dai on A tale from Communist China · 2020-10-18T21:56:47.340Z · LW · GW

Another detail: My grandmother planned to join the Communist Revolution together with two of her classmates, who made it farther than she did. One made it all the way to Communist-controlled territory (Yan'an) and later became a high official in the new government; she ended up going to prison in one of the subsequent political movements. The other almost made it before being stopped by Nationalist authorities, who forced her to write a confession and repentance before releasing her back to her family. That confession ended up being dug up during the Cultural Revolution and got her branded as a traitor to Communism.

Comment by wei_dai on Covid 10/15: Playtime is Over · 2020-10-18T19:32:19.346Z · LW · GW

Upvoted for the important consideration, but your own brain is also a source of errors, and those are hard to decorrelate from. So is it really worse (or worse enough to justify the additional costs of the alternative) to just trust Zvi instead of your own judgment/integration of diverse sources?

ETA: Oh, I do read the comments here so that helps to catch Zvi's errors, if any.

Comment by wei_dai on Open & Welcome Thread – October 2020 · 2020-10-15T16:20:04.107Z · LW · GW

My grandparents on both sides of my family seriously considered leaving China (to the point of making concrete preparations), but didn't because things didn't seem that bad, until it was finally too late.

Comment by wei_dai on Open & Welcome Thread – October 2020 · 2020-10-15T16:04:29.269Z · LW · GW

Writing a detailed post is too costly and risky for me right now. One of my grandparents was confined in a makeshift prison for ten years during the Cultural Revolution and died shortly after, for something he did years earlier that would normally be considered totally innocent. None of them saw that coming, so I'm going to play it safe and try to avoid saying things that could be used to "cancel" me or worse. But there are plenty of articles on the Internet you can find by doing some searches. If none of them convinces you how serious the problem is, PM me and I'll send you some links.

Comment by wei_dai on Everything I Know About Elite America I Learned From ‘Fresh Prince’ and ‘West Wing’ · 2020-10-12T01:34:37.180Z · LW · GW

Here is his newsletter archive and subscribe link if anyone wants to check it out.

Comment by wei_dai on Everything I Know About Elite America I Learned From ‘Fresh Prince’ and ‘West Wing’ · 2020-10-10T16:42:18.561Z · LW · GW

There are a number of ways to interpret my question, and I kind of mean all of them:

  1. If my stated and/or revealed preferences are that I don't value joining the elite class very much, is that wrong in either an instrumental or terminal sense?
  2. For people who do seem to value it a lot, either for themselves or their kids (e.g., parents obsessed with getting their kids into an elite university), is that wrong in either an instrumental or terminal sense?

By "either an instrumental or terminal sense" I mean: is "joining the elite" (or should it be) a terminal value or just an instrumental value? If it's just an instrumental value, is "joining the elite" actually a good way to achieve people's terminal values?

Comment by wei_dai on Open & Welcome Thread – October 2020 · 2020-10-03T16:17:50.435Z · LW · GW

Except it's like, the Blight has already taken over all of the Transcend and almost all of the Beyond, even a part of the ship itself and some of its crew members, and many in the crew are still saying "I'm not very worried." Or "If worst comes to worst, we can always jump ship!"

Comment by wei_dai on Open & Welcome Thread – October 2020 · 2020-10-02T20:14:16.701Z · LW · GW

Watching cancel culture go after rationalists/EA, I feel like one of the commentators on the Known Net watching the Blight chase after Out of Band II. Also, Transcend = academia, Beyond = corporations/journalism/rest of intellectual world, Slow Zone = ...

(For those who are out of the loop on this, see https://www.facebook.com/bshlgrs/posts/10220701880351636 for the latest development.)

Comment by wei_dai on What Does "Signalling" Mean? · 2020-09-17T02:39:56.119Z · LW · GW

eg, birds warning each other that there is a snake in the grass

Wait, this is not the example in the Wikipedia page, which is actually "When an alert bird deliberately gives a warning call to a stalking predator and the predator gives up the hunt, the sound is a signal."

I found this page which gives a good definition of signaling:

Signalling theory (ST) tackles a fundamental problem of communication: how can an agent, the receiver, establish whether another agent, the signaller, is telling or otherwise conveying the truth about a state of affairs or event which the signaller might have an interest to misrepresent? And, conversely, how can the signaller persuade the receiver that he is telling the truth, whether he is telling it or not? This two-pronged question potentially arises every time the interests between signallers and receivers diverge or collide and there is asymmetric information, namely the signaller is in a better position to know the truth than the receiver is. ST, which is only a little more than 30 years old, has now become a branch of game theory. In economics it was introduced by Michael Spence in 1973. In biology it took off not so much when Amotz Zahavi first introduced the idea in 1975, but since, in 1990, Alan Grafen proved formally that ‘honest’ signals can be an evolutionarily stable strategy.

Typical situations that signalling theory covers have two key features:

  • there is some action the receiver can do which benefits a signaller, whether or not he has the quality k, for instance marry him, but
  • this action benefits the receiver if and only if the signaller truly has k, and otherwise hurts her — for instance, marry an unfaithful man.

So in the alarm example, the quality k is whether the bird has really detected the predator, and the "action" is for the predator to give up the hunt. Later in the Wikipedia article, it says "For example, if foraging birds are safer when they give a warning call, cheats could give false alarms at random, just in case a predator is nearby."

Comment by wei_dai on Open & Welcome Thread - September 2020 · 2020-09-14T17:54:17.015Z · LW · GW

Did it make you or your classmates doubt your own morality a bit? If not, maybe it needs to be taught along with the outside view and/or the teacher needs to explicitly talk about how the lesson from history is that we shouldn't be so certain about our morality...

Comment by wei_dai on Open & Welcome Thread - September 2020 · 2020-09-13T21:30:52.324Z · LW · GW

I wonder if anyone has ever written a manifesto for moral uncertainty, maybe something along the lines of:

We hold these truths to be self-evident, that we are very confused about morality. That these confusions should be properly reflected as high degrees of uncertainty in our moral epistemic states. That our moral uncertainties should inform our individual and collective actions, plans, and policies. ... That we are also very confused about normativity and meta-ethics and don't really know what we mean by "should", including in this document...

Yeah, I realize this would be a hard sell in today's environment, but what if building Friendly AI requires a civilization sane enough to consider this common sense? I mean, for example, how can it be a good idea to gift a super-powerful "corrigible" or "obedient" AI to a civilization full of people with crazy amounts of moral certainty?

Comment by wei_dai on Open & Welcome Thread - September 2020 · 2020-09-13T08:12:56.765Z · LW · GW

I don't recall learning in school that most of "the bad guys" from history (e.g., Communists, Nazis) thought of themselves as "the good guys" fighting for important moral reasons. It seems like teaching that fact, and instilling moral uncertainty in general into children, would prevent a lot of serious man-made problems (including problems we're seeing play out today). So why hasn't civilization figured that out already? Or is not teaching moral uncertainty some kind of Chesterton's Fence, and teaching it widely would make the world even worse off on expectation?

Comment by wei_dai on "The Holy Grail" of portfolio management · 2020-09-12T16:58:18.091Z · LW · GW

I have changed my mind about shorting stocks and especially call options. The problem is that sometimes a stock I shorted rises sharply on significant or insignificant news (which I don't notice myself until the price has already shot up a lot). I then get very worried that maybe it's the next Tesla, that it will keep rising and wipe out all or a significant fraction of my net worth, and so I panic-buy the stock/options to close out the short position. Then a few days later people realize that the news wasn't that significant and the stock falls again. Other than really exceptional circumstances like the recent Kodak situation, perhaps it's best to leave shorting to professionals who can follow the news constantly and have a large enough equity cushion to ride out any short-term spikes in the stock price. I think my short portfolio is still showing an overall profit, but it's just not worth the psychological stress and the constant attention it demands.

Comment by wei_dai on What should we do once infected with COVID-19? · 2020-08-31T05:52:12.165Z · LW · GW

I haven't been following developments around hydroxychloroquine very closely. My impression from incidental sources is that it's probably worth taking along with zinc, at least early in the course of a COVID-19 infection. I'll probably do a lot more research if and when I actually need to make a decision.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T18:52:16.305Z · LW · GW

With a little patience and a limit order, you can usually get the midpoint between bid and ask, or close to it.

How do you do this when the market is moving constantly and so you'd have to constantly update your limit price to keep it at the midpoint? I've been doing this manually and unless the market is just not moving for some reason, I often end up chasing the market with my limit price, and then quickly get a fill (probably not that close to the midpoint although it's hard to tell) when the market turns around and moves into my limit order.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T18:30:21.453Z · LW · GW

Good points.

And in a margin account, a broker can typically sell any of your positions (because they’re collateral) to protect its interests, even part of a spread, which can again expose you to delta risk if they don’t close your whole box at once.

I guess technically it's "expose you to gamma risk", because the broker would only close one of your positions if doing so reduced margin requirements / increased buying power, and assuming you're overall long the broad market, that can only happen if it decreases overall delta risk. Another way to think about it: as far as delta risk goes, it's the same whether they sell one of your options positions that is long SPX or sell one of your index ETFs. Hopefully they'll be smart enough to sell your index ETFs, since those are much more liquid?

The above is purely theoretical though. Has this actually happened to you, or do you know a case of it actually happening?

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T18:14:55.233Z · LW · GW

Another way to get leverage in a retirement account is with leveraged ETFs.

Yeah, and another way I realized after I wrote my comment is that you can also buy stock index futures contracts in IRA accounts; I forget exactly, but I think you can get around 5x max leverage that way. Compared to leveraged ETFs this should incur lower expenses and allow you to choose your own rebalancing schedule for a better tradeoff between risk and trading costs (at the cost, of course, of having to do your own rebalancing).

Also after writing my comment, I realized that with leveraged CEFs there may be a risk that they deleverage quickly on the way down (because they're forced by law or regulation to not exceed some maximum leverage) and then releverage slowly on the way up (because they're afraid of being forced to deleverage again) which means they could systematically capture more downside than upside. Should probably research this more before putting a lot of money into leveraged CEFs.
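The worry about asymmetric releveraging can be illustrated with a toy simulation (illustrative numbers only, not calibrated to any actual fund): a fund that is forced back under its leverage cap immediately on the way down, but closes only a fraction of the gap back to full leverage each period on the way up, captures more of the decline than of the recovery.

```python
def fund_equity(returns, cap=1.5, relever_frac=1.0):
    """Equity of a leveraged fund, starting at 1.0.

    Leverage (assets/equity) may never exceed `cap`; a breach forces an
    immediate asset sale. When leverage is below the cap, the fund only
    borrows back `relever_frac` of the gap each period (1.0 = instant
    releveraging). Hypothetical mechanics, for illustration only.
    """
    equity, debt = 1.0, cap - 1.0          # start fully levered at the cap
    for r in returns:
        assets = (equity + debt) * (1 + r)
        equity = assets - debt             # debt is unaffected by the return
        target_debt = (cap - 1.0) * equity
        if debt > target_debt:
            debt = target_debt             # forced sale, right after the drop
        else:
            debt += relever_frac * (target_debt - debt)  # slow to re-lever
    return equity

v_shape = [-0.05] * 5 + [0.055] * 5        # crash, then a roughly full recovery

prompt = fund_equity(v_shape, relever_frac=1.0)   # re-levers instantly
slow = fund_equity(v_shape, relever_frac=0.2)     # re-levers cautiously
# Both funds take the decline at full leverage, but the slow fund misses
# part of the leveraged recovery, so it ends up behind the prompt fund.
```

On this V-shaped path the cautious fund ends below its starting equity even though the underlying roughly round-trips, which is the "capture more downside than upside" effect in miniature.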

I’m still interested in these CEFs for diversification though, how do you find these?

SeekingAlpha.com has a CEF section if you want to look for other people's recommendations. CEFAnalyzer.com and CEFConnect.com have screeners you can use to find what you want on your own.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T17:07:41.186Z · LW · GW

  1. Look for sectors that crash more than they should in a market downturn, due to correlated forced deleveraging, and load up on them when that happens. The energy midstream/MLP sector is a good recent example: a lot of those stocks were held in closed-end funds in part for tax reasons, those funds all tend to use leverage, and because they have a maximum leverage ratio they're not allowed to exceed, they were forced to deleverage during the March crash, which caused further price drops and more deleveraging, and so on.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T16:47:23.326Z · LW · GW

What are some reputable activist short-sellers?

I'm reluctant to give out specific names because I'm still doing "due diligence" on them myself. But generally, try to find activist short-sellers who have a good track record in the past, and read/listen to some of their interviews/reports/articles to see how much sense they make.

Where do you go to identify Robinhood bubbles?

I was using Robintrack.net but it seems that Robinhood has stopped providing the underlying data. So now I've set up a stock screener to look for big recent gains, and then check whether the stock has any recent news to justify the rally, and check places like SeekingAlpha, Reddit, and StockTwits to see what people are saying about it. Also just follow general market news because really extreme cases like Hertz will be reported.

I guess this question is really a general question about where you go for information about the market, in a general sense.

Podcasts seem to be a good source, especially ones that interview a variety of guests so I can get diverse perspectives without seeking them out myself. I currently follow "Real Vision Daily", "Macro Voices", and "What Goes Up".

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T16:19:25.691Z · LW · GW

Note on 5: Before you try this, make sure you understand what you're getting into and the risks involved. (There are rarely completely riskless arbitrage opportunities, and this isn't one of them.)

  1. Stock borrowing cost might be the biggest open secret that few investors know about. Before buying or shorting any individual stock, check its borrowing cost and "utilization ratio" (how much of the stock available to borrow has already been borrowed for short selling) using Interactive Brokers' Trader Workstation. If the borrowing cost is high and the utilization ratio isn't very low (not sure why that happens sometimes), that means some people are willing to pay a high cost per day to hold a short position in the stock, which means it very likely will tank in the near future. But if the utilization ratio is very high, near 100%, no new short selling can take place, so the stock can easily zoom up further due to the lack of short-selling pressure and the potential for a short squeeze, before finally tanking.

If you do decide you want to bet against the short sellers and buy the stock anyway, at least hold the position at a broker that offers a Fully Paid Lending Program, so you can capture part of the borrowing cost that short sellers pay.

Comment by wei_dai on What posts on finance would your find helpful or interesting? · 2020-08-22T09:04:06.749Z · LW · GW

Technical analysis, momentum, trend following, and the like, from an EMH-informed perspective.

I've been dismissive of anything that looks at past price information. But markets are clearly sometimes inefficient, because short selling is constrained by the availability and cost of borrowing stock (which lets prices stay too high, which can cause short squeezes). This can "infect" the market with inefficiency at other times as well (because potential short sellers are afraid of being short-squeezed), which means there's no longer an (obvious) theoretical reason to dismiss technical analysis and the like.

Comment by wei_dai on "The Holy Grail" of portfolio management · 2020-08-22T08:25:44.158Z · LW · GW

Recently I started thinking that it's a good idea to add short positions (on individual stocks or call options) to one's portfolio. Then you can win if either the short thesis turns out to be correct (e.g., the company really is faking its profits), or the market tanks as a whole and the short positions act as a hedge. I wrote about some ways to find short ideas in a recent comment.

Question for the audience: do you know of a good way to measure the worst case correlation?

Not sure if this is the best way, but I've just been looking at the drawdown percentage from the Feb top to the March bottom of each asset.
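For concreteness, the drawdown measurement can be done mechanically; this is a generic peak-to-trough calculation over whatever price window you feed it, nothing specific to the Feb/March episode:

```python
def max_drawdown(prices):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)                    # highest price seen so far
        worst = max(worst, (peak - p) / peak)  # decline from that peak
    return worst

# e.g., an asset that went 100 -> 110 -> 70 -> 90 had a ~36% drawdown
max_drawdown([100, 110, 70, 90])  # -> 0.3636...
```

Comparing each asset's drawdown over the same crash window is then a rough proxy for how correlated they become in the worst case.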

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-22T07:55:13.678Z · LW · GW

Possible places to look for alpha:

  1. Articles on https://seekingalpha.com/. Many authors there give free ideas/tips as advertisement for their paid subscription services. The comments section of articles often have useful discussions.
  2. Follow the quarterly reports of small actively managed funds (or the portfolio/holdings reports on Morningstar, which show fund portfolio changes) to get stock ideas.
  3. Follow reputable activist short-sellers on Twitter. (They find companies that commit fraud, like Luckin Coffee or Wirecard, and report on them after shorting their stock.)
  4. Look for Robinhood bubble stocks (famous examples being Nikola, Hertz and Kodak) and short them as they start to burst. (But watch out for Hard To Borrow fees, and early assignment risk if you're shorting call options.)
  5. Arbitrage between warrants and call options for the same stock. Robinhood users can't buy warrants but can buy call options, so call options can be way overpriced relative to warrants. (I'm not sure why hedge funds haven't arbitraged away the mispricings already, but maybe it's because options markets are small/illiquid enough that it's hard to make enough money to be worthwhile for them.)

Comment by wei_dai on The Wrong Side of Risk · 2020-08-16T09:13:20.612Z · LW · GW

Recently I had the epiphany that an investor's real budget constraint isn't how much money they have (with portfolio margin you can get 6x or even 12x leverage) but how much risk-taking capacity they have. So another way of making what I think is your main point is that the market pays you to take (certain kinds of) risks, so don't waste your risk-taking capacity by taking too little risk. But one should be smart and try to figure out where the market is paying the most per unit of risk.

Standard finance theory says the market should pay you the most for taking "market risk", i.e., holding the total market portfolio. But the total market portfolio includes no options, because short and long options cancel each other out giving a sum of 0. So the only way that it makes sense for someone to hold an options position is if they differ from the average investor in some way, and figuring out how they differ should be the starting point for deciding what kind of options positions to hold, right?

In this case, it seems you're saying the average investor manages someone else's money, which makes them want to buy puts. They have to pay extra for this because most assets are managed by investors like this, so there's a lot of demand for puts and little supply. If you're not like this, you can therefore earn above-market risk-adjusted returns by selling puts to meet this demand. (I'm not totally sure this is true empirically, but I wanted to spell out more of the reasoning I think you're using.)

Comment by wei_dai on Alignment By Default · 2020-08-15T19:31:02.816Z · LW · GW

So similarly, a human could try to understand Alice's values in two ways. The first, equivalent to what you describe here for AI, is to just apply whatever learning algorithm their brain uses when observing Alice, and form an intuitive notion of "Alice's values". And the second is to apply explicit philosophical reasoning to this problem. So sure, you can possibly go a long way towards understanding Alice's values by just doing the former, but is that enough to avoid disaster? (See Two Neglected Problems in Human-AI Safety for the kind of disaster I have in mind here.)

(I keep bringing up metaphilosophy but I'm pretty much resigned to be living in a part of the multiverse where civilization will just throw the dice and bet on AI safety not depending on solving it. What hope is there for our civilization to do what I think is the prudent thing, when no professional philosophers, even ones in EA who are concerned about AI safety, ever talk about it?)

Comment by wei_dai on Alignment By Default · 2020-08-13T03:28:25.951Z · LW · GW

To help me check my understanding of what you're saying, we train an AI on a bunch of videos/media about Alice's life, in the hope that it learns an internal concept of "Alice's values". Then we use SL/RL to train the AI, e.g., give it a positive reward whenever it does something that the supervisor thinks benefits Alice's values. The hope here is that the AI learns to optimize the world according to its internal concept of "Alice's values" that it learned in the previous step. And we hope that its concept of "Alice's values" includes the idea that Alice wants AIs, including any future AIs, to keep improving their understanding of Alice's values and to serve those values, and that this solves alignment in the long run.

Assuming the above is basically correct, this (in part) depends on the AI learning a good enough understanding of "improving understanding of Alice's values" in step 1. This in turn (assuming "improving understanding of Alice's values" involves "using philosophical reasoning to solve various confusions related to understanding Alice's values, including Alice's own confusions") depends on that the AI can learn a correct or good enough concept of "philosophical reasoning" from unsupervised training. Correct?

If AI can learn "philosophical reasoning" from unsupervised training, GPT-N should be able to do philosophy (e.g., solve open philosophical problems), right?

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-13T02:04:14.609Z · LW · GW

I don't have a detailed analysis to back it up, but my guess is that CEFs are probably superior because call options don't pay dividends so you're not getting as much tax benefit as holding CEFs. It's also somewhat tricky to obtain good pricing on options (the bid-ask spread tends to be much higher than on regular securities so you get a terrible deal if you just do market orders).

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-08-13T01:33:44.614Z · LW · GW

For people in the US, the best asset class to put in a tax-free or tax-deferred account seems to be closed-end funds (CEF) that invest in REITs. REITs because they pay high dividends, which would usually be taxed as non-qualified dividends, and CEF (instead of ETF or open-end mutual funds) because these funds can use leverage (up to 50%), and it's otherwise hard or impossible to obtain leverage in a tax-free/deferred account (because they usually don't allow margin). (The leverage helps maximize the value of tax-freeness or deferral, but if you don't like the added risk you can compensate by using less leverage or invest in less risky assets in your taxable accounts.)

As an additional bonus, CEFs usually trade at a premium or discount to their net asset value (NAV) and those premiums/discounts show a (EMH-violating) tendency to revert to the mean, so you can obtain alpha by buying CEFs that have higher than historical average discounts and waiting for the mean reversion. There's a downside in that CEFs also tend to have active management fees, but the leverage, discount, and mean reversion should more than make up for that.
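The discount screen described above can be sketched in a few lines of Python. The fund names, discount histories, and the z-score threshold are all hypothetical; a real implementation would use much longer histories and account for fees and distribution changes:

```python
# Toy sketch of the CEF discount mean-reversion idea: flag funds whose
# current discount to NAV is unusually wide versus their own history.
# Fund names and numbers are hypothetical.

def discount_zscore(history, current):
    """Z-score of the current discount vs. the fund's historical discounts.

    Discounts are negative numbers, e.g. -0.08 means an 8% discount to NAV.
    """
    mean = sum(history) / len(history)
    var = sum((d - mean) ** 2 for d in history) / len(history)
    std = var ** 0.5
    return (current - mean) / std if std else 0.0

funds = {
    "FUND_A": ([-0.05, -0.06, -0.04, -0.05, -0.06], -0.12),  # unusually wide
    "FUND_B": ([-0.10, -0.09, -0.11, -0.10, -0.10], -0.10),  # at its average
}

for name, (history, current) in funds.items():
    z = discount_zscore(history, current)
    if z < -2:  # discount much wider than usual -> mean-reversion candidate
        print(f"{name}: discount z-score {z:.1f}, candidate buy")
```

Here FUND_A trades at a 12% discount against a ~5% historical average, so it gets flagged, while FUND_B is at its own average and doesn't.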

Comment by wei_dai on Property as Coordination Minimization · 2020-08-06T08:20:59.037Z · LW · GW

Many different landlords can make many different decisions, whereas one Housing Bureau will either make one decision for everyone, or make unequal decisions in a corrupt way.

In our economy we have all three of:

  1. individual landlords making decisions about property that they directly own
  2. groups of people pooling capital to buy property, then hiring professional managers to make decisions on behalf of the group (cf. REITs)
  3. property (e.g., public housing projects, parks) that is owned by various government departments/agencies, and managed by bureaucrats

The point is that 2 and 3 aren't that different in terms of "corruption". In both cases, we (at least in theory) made a deliberate trade-off to accept greater principal-agent costs ("corruption") for some expected benefit the arrangement brings, e.g., greater diversification / spreading of risk in the case of 2. Why isn't the same true for letting the government own everything or a lot more things? (Not sure who you're arguing against, but presumably there's a steelman-version of them that argues that we should accept the "corruption" in that case too because the benefits are greater.)

the people who would rent out the additional floors I add to the house generally don’t comment at the public meeting, whereas the retiree who would have to deal with more cars on the road or a blocked view of the Bay does.

This isn't as bad as it sounds, because one of these is a priced externality, and the other one is an unpriced externality. That is, since you would get rent from the renter, you already have an incentive to speak on their behalf at the meeting. The alternative to such meetings is either you just ignore the unpriced externality (the retiree's blocked view) when you make your decision or the externality has to be handled some other way, like the retiree paying you for a "no additional floor" covenant, or suing you through the court system, both of which also involve coordination costs (that can add up quickly when there are many externalities). Again it's not that clear, at least from this post, that the current system (where everyone who may be affected speaks at the meeting and then some bureaucrat makes a decision that at least supposedly takes all of them into account) isn't actually optimal given the constraints we face.

ETA: Consider the case where there are in fact a bunch of negative externalities that together outweigh the benefits of building another floor. Without this meeting, how would all those affected people realistically coordinate (supposing none of them individually has enough incentive) to stop you?

Comment by wei_dai on Predictions for GPT-N · 2020-07-30T03:06:56.151Z · LW · GW

Anyone want to predict when we'll reach the same level of translation and other language capability as GPT-3 via iterated amplification or another "aligned" approach? (How far behind is alignment work compared to capability work?)

Comment by wei_dai on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-13T04:58:48.234Z · LW · GW

Another big update for me is that according to modern EMH, big stock market movements mostly reflect changes in risk premium, rather than changes in predicted future cash flows. (The recent COVID-19 crash however was perhaps driven even more by liquidity needs.)

Comment by wei_dai on Six economics misconceptions of mine which I've resolved over the last few years · 2020-07-13T04:51:14.151Z · LW · GW

My understanding of banking and monetary policy was pretty wrong until very recently. Apparently the textbook I read in the 90s was explaining how banking and central banking worked in the 50s. John Wentworth pointed me to a Coursera course by Perry Mehrling and here are the same lectures without having to register for the course.

Comment by wei_dai on Open & Welcome Thread - July 2020 · 2020-07-12T23:39:07.131Z · LW · GW

Many AI issues will likely become politicized. (For example, how much to prioritize safety versus economic growth and military competitiveness? Should AIs be politically neutral, or be explicitly taught "social justice activism" before they're allowed to be deployed or used by the public?) This seems to be coming up very quickly and we are so not prepared, both as a society and as an online community. For example I want to talk about some of these issues here, but we haven't built up the infrastructure to do so safely.

Comment by wei_dai on High Stock Prices Make Sense Right Now · 2020-07-07T05:58:26.740Z · LW · GW

In case people want to know more about this stuff, most of my understanding comes from Perry Mehrling’s coursera course (which I recommend)

Thanks! I've been hoping to come across something like this, to learn about the details of the modern banking system.

Comment by wei_dai on Cryonics without freezers: resurrection possibilities in a Big World · 2020-07-06T18:55:20.882Z · LW · GW

I agree recent events don't justify a huge update by themselves if one started with a reasonable prior. It's more that I somehow failed to consider the possibility of that scenario, the recent events made me consider it, and that's why it triggered a big update for me.

Comment by wei_dai on High Stock Prices Make Sense Right Now · 2020-07-05T06:17:19.446Z · LW · GW

The institutions which own Treasuries (e.g. banks) do so with massive amounts of cheap leverage, and those are the only assets they’re allowed to hold with that much leverage.

I'm curious about this. What source of leverage do banks have access to, that cost less than interest on Treasuries? (I know there are retail deposit accounts that pay almost no interest, but I think those are actually pretty expensive for the banks to obtain, because they have to maintain a physical presence to get those customers. I doubt those banks can make a profit if they just put those deposits into Treasuries. You must be talking about something else?)

Comment by wei_dai on Cryonics without freezers: resurrection possibilities in a Big World · 2020-07-01T09:46:26.878Z · LW · GW

My subjective anticipation is mollified by the thought that I’ll probably either never experience dying or wake up to find that I’ve been in an ancestral simulation, which leaves the part of me that wants to prevent all the empty galaxies from going to waste to work in peace. :)

Update: Recent events have made me think that the fraction of advanced civilizations in the multiverse that are sane may be quite low. (It looks like our civilization will probably build a superintelligence while suffering from serious epistemic pathologies, and this may be typical for civilizations throughout the multiverse.) So now I'm pretty worried about "waking up" in some kind of dystopia (powered or controlled by a superintelligence with twisted beliefs or values), either in my own future lightcone or in another universe.

Actually, I probably shouldn't have been so optimistic even before the recent events...

Comment by wei_dai on Self-sacrifice is a scarce resource · 2020-06-28T22:49:34.881Z · LW · GW

If you find yourself doing too much self-sacrifice, injecting a dose of normative and meta-normative uncertainty might help. (I've never had this problem, and I attribute it to my own normative/meta-normative uncertainty. :) Not sure which arguments you heard that made you extremely self-sacrificial, but try Shut Up and Divide? if it was "Shut Up and Multiply", or Is the potential astronomical waste in our universe too small to care about? if it was "Astronomical Waste".

Comment by wei_dai on Atemporal Ethical Obligations · 2020-06-27T00:18:46.991Z · LW · GW

Thus, in order to be truly good people, we must take an active role, predict the future of moral progress, and live by tomorrow’s rules, today.

Suppose you think X is what is actually moral (or is a distribution representing your moral uncertainty after doing your best to try to figure out what is actually moral) and Y is what you expect most people will recognize as moral in the future (or is a distribution representing your uncertainty about that). Are you proposing to follow Y instead of X? (It sounds that way but I want to make sure I'm not misunderstanding.)

Assuming the answer is yes, is that because you think that trying to predict what most people will recognize as moral is more likely to lead to what is actually moral than directly trying to figure it out yourself? Or is it because you want to be recognized by future people as being moral and following Y is more likely to lead to that result?

Comment by wei_dai on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-25T11:19:10.589Z · LW · GW

But if you claim to be charitable and openminded, except when confronted by a test that affects your own community, then you’re using those words as performative weapons, deliberately or not.

I guess "charitable" here is referring to the principle of charity, but I think that is supposed to apply in a debate or discussion, to make them more productive and less likely to go off the rails. But in this case there is no debate, as far as I can tell. The NYT reporter or others representing NYT have not given a reason for doxxing Scott (AFAIK, except to cite a "policy" for doing so, but that seems false because there have been plenty of times when they've respected their subjects' wishes to remain pseudonymous), so what are people supposed to be charitable about?

If instead the intended meaning of "charitable and openminded" is something like "let's remain uncertain about NYT's motives for doxxing Scott until we know more", it seems like the absence of any "principled reasons" provided so far is already pretty strong evidence for ruling out certain motives, leaving mostly "dumb mistake" and "evil or selfish" as the remaining possibilities. Given that, I'm not sure what people are doing that Richard thinks is failing the test to be "charitable and openminded", especially given that NYT has not shown a willingness to engage in a discussion so far and the time-sensitive nature of the situation.

Comment by wei_dai on Open & Welcome Thread - February 2020 · 2020-06-25T05:54:58.600Z · LW · GW

Another reason for attributing part of the gains (from betting on the coronavirus market crash) to luck, from Rob Henderson's newsletter which BTW I highly recommend:

The geneticist Razib Khan has said that the reason the U.S. took so long to respond to the virus is that Americans do not consider China to be a real place. For people in the U.S., “Wuhan is a different planet, mentally.” From my view, it didn’t seem “real” to Americans (or Brits) until Italy happened.

Not only have I lived in China, my father was born in Wuhan and I've visited there multiple times.

Comment by wei_dai on Tips/tricks/notes on optimizing investments · 2020-06-19T06:14:26.842Z · LW · GW

Thanks for the feedback. I guess I was partly expecting people to learn about portfolio margin and box spread options for other reasons (so the additional work to pull equity out into CDs isn't that much), and partly forgot how difficult it might be for someone to learn about these things. Maybe there's an opportunity for someone to start a business to do this for their customers...

BTW you'll have to pass a multiple-choice test to be approved for PM at TDA, which can be tough. Let me know if you need any help with that. Also I've been getting 0.5%-0.55% interest rate from box spreads recently, and CDs are currently 1.25%-1.3%. CDs were around 1.5% when I first wrote this, so it was significantly more attractive then. I would say it's still worth it because once you learn these things you can get the extra return every year without that much additional work, and over several decades it can add up to a lot.

Comment by wei_dai on Open & Welcome Thread - June 2020 · 2020-06-19T04:51:17.599Z · LW · GW

Personal update: Over the last few months, I've become much less worried that I have a tendency to be too pessimistic (because I frequently seem to be the most pessimistic person in a discussion). Things I was worried about more than others (coronavirus pandemic, epistemic conditions getting significantly worse) have come true, and when I was wrong in a pessimistic direction, I updated quickly after coming across a good argument (so I think I was wrong just because I didn't think of that argument, rather than due to a tendency to be pessimistic).

Feedback welcome, in case I've updated too much about this.