Open & Welcome Thread - February 2020
post by ryan_b · 2020-02-04T20:49:54.924Z · LW · GW · 114 comments
If it’s worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post)
And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.
If you want to explore the community more, I recommend reading the Library [? · GW], checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [LW · GW] section of the LessWrong FAQ [LW · GW].
The Open Thread sequence is here [? · GW].
114 comments
Comments sorted by top scores.
comment by Wei Dai (Wei_Dai) · 2020-02-17T14:17:01.061Z · LW(p) · GW(p)
Statements of (purported) empirical fact are often strong Bayesian evidence about the speaker's morality and politics (i.e., what values one holds, what political coalition one supports), and this is a root cause of most group-level bad epistemics [LW · GW]. For example, someone who thinks eugenics is immoral is less likely (than someone who doesn't think that) to find it instrumentally useful to make a statement like (paraphrasing) "eugenics may be immoral, but is likely to work in the sense that selective breeding works for animals". So when someone says that, it is evidence that they do not think eugenics is immoral, and therefore that they do not belong to a political coalition that holds "eugenics is immoral" as part of its ideology.
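[Editor's illustration] A minimal sketch of the "strong Bayesian evidence" claim in odds form; both conditional probabilities below are made-up numbers for illustration, not measurements:

```python
# A minimal sketch in odds form; the conditional probabilities are assumptions.
prior_odds = 1.0             # even prior odds the speaker holds "eugenics is immoral"
p_say_given_holds = 0.02     # assumed: such speakers rarely find the statement useful
p_say_given_not = 0.20       # assumed: other speakers say it ten times as often

likelihood_ratio = p_say_given_holds / p_say_given_not   # 0.1
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior P(speaker holds the value) = {posterior_prob:.2f}")  # ~0.09
```

With a 10:1 likelihood ratio, a single statement moves an even prior to roughly 9%, which is why hearers treat such statements as coalition markers.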
I think for many people this has been trained into intuition/emotion (you automatically think that someone is bad/evil or hate them if they express a statement of fact that your political enemies are much more likely to make than your political allies) or even reflexive policy (automatically attack anyone who makes such statements).
This seems pretty obvious to me, but some people do not seem to be aware of it (e.g., Richard Dawkins seemed surprised by people's reactions to his tweet linked above) and I haven't been able to find any discussion on LW about this.
(Given the above, perhaps it shouldn't be surprising that bad social epistemics abound, and what needs explaining is how good epistemic norms like "free inquiry" and "civil discourse" ever became a thing.)
↑ comment by Wei Dai (Wei_Dai) · 2020-02-19T08:16:56.998Z · LW(p) · GW(p)
what needs explaining is how good epistemic norms like “free inquiry” and “civil discourse” ever became a thing.
An idea for explaining this: some group happened to adopt such norms due to historical accident, and there happened to be enough low-hanging epistemic fruit available to a group operating under such norms that the group became successful enough for the norms to spread by conquest and emulation. This also suggests that one reason for the decay of these norms is that we are running out of low-hanging fruit.
↑ comment by Zack_M_Davis · 2020-02-29T06:35:30.125Z · LW(p) · GW(p)
I haven't been able to find any discussion on LW about this.
I discuss this in "Heads I Win, Tails?—Never Heard of Her" [LW · GW] ("Reality itself isn't on anyone's side, but any particular fact, argument, sign, or portent might just so happen to be more easily construed as "supporting" the Blues or the Greens [...]").
Richard Dawkins seemed surprised
I suspect Dawkins was motivatedly playing dumb, or "living in the should-universe" [LW · GW]. Indignation (e.g., at people motivatedly refusing to follow a simple logical argument because of their political incentives) often manifests itself as expression of incomprehension, but is distinguishable from literal incomprehension (e.g., by asking Dawkins to bet beforehand on what he thinks is going to happen after he Tweets that).
↑ comment by Wei Dai (Wei_Dai) · 2020-02-19T08:16:37.864Z · LW(p) · GW(p)
Robin Hanson also doesn't seem to be aware of what I wrote in the parent comment:
"Other people are low-decouplers, who see ideas as inextricable from their contexts. For them =… You say 'By X, I don’t mean Y,' but when you say X, they will still hear Y." More precisely, they CHOOSE to hear Y, knowing enough associates also choose that
The rest of us can choose the opposite, & we should not accept their claim that they can decide what words mean & how language works.
But why are some people low-decouplers? I think it's because "Statements of (purported) empirical fact are often strong Bayesian evidence of the speaker’s morality and politics" so we can't simply "choose the opposite" without understanding this and its implications.
To put it another way, a lot of the time when someone says "By X, I don't mean Y" they actually do secretly believe Y, so when another person "CHOOSE[s] to hear Y", that's not a completely unreasonable heuristic, and we can't just "not accept their claim that they can decide what words mean & how language works" without acknowledging this.
↑ comment by Wei Dai (Wei_Dai) · 2020-02-22T08:52:48.691Z · LW(p) · GW(p)
Copy-pasting a followup to this with Robin Hanson via DM (with permission).
Robin: You can of course suspect people of many things using many weak clues. But you should hold higher standards of evidence when making public accusations that you say orgs should use to fire people, cancel speeches, etc.
Me: My instinct is to support/agree with this, but (1) it's not an obvious interpretation of what you tweeted and (2) I think we need to understand why the standards of evidence for making public accusations and for actual firing/canceling have fallen so low (which my own comment didn't address either) and what the leverage points are for changing that, otherwise we might just be tilting at windmills when we exhort people to raise those standards (or worse, making suicide charges, if we get lumped with "public enemies").
comment by Wei Dai (Wei_Dai) · 2020-02-23T21:31:25.884Z · LW(p) · GW(p)
Anyone else look at the coronavirus outbreak and think this is how a future existential accident will play out, with policy always one step behind what's necessary to control it, because governments don't want to take the risk of "overreacting" and triggering large or very large political and economic costs "for no good reason"? So they wait until absolutely clear evidence emerges, by which time it is too late.
Why will governments be more afraid of overreacting than underreacting? (Parallel: We don't seem to see governments doing anything in this outbreak that could be interpreted as overreacting.) Well, every alarm up to that point will have been a false alarm. (Parallel: SARS, MERS, Ebola, every flu pandemic in recent history.)
↑ comment by Matthew Barnett (matthew-barnett) · 2020-02-25T23:54:27.877Z · LW(p) · GW(p)
I share this reaction [LW(p) · GW(p)]. I think a lot of people are under-reacting out of fear of being perceived as overreacting, a desire to signal wisdom, and vague outside-view considerations. I can tell because so far everyone who has told me to "stop panicking" won't give me any solid evidence for why my fears are unfounded.
It now seems plausible that, unless prominent epidemiologists are just making stuff up and the death rate is also much smaller than its most commonly estimated value, between 60 and 160 million people will die from it within about a year. Yet when I tell people this they just brush it off! [ETA: Please see comments below. This estimate does not imply I think there is a greater than 50% chance of this happening.]
↑ comment by Rafael Harth (sil-ver) · 2020-02-29T14:06:14.988Z · LW(p) · GW(p)
signaling wisdom
I see this problem all the time with regard to things that can be classified as "childish". Besides pandemics, the most striking examples in my mind are risk of nuclear war and risk of AI [LW · GW], but I expect there are lots of others. I don't exactly think of it as signaling wisdom, but as signaling being a serious-person-who-understands-that-unserious-problems-are-low-status (the difference being that it doesn't necessitate thinking of yourself as particularly "smart" or "wise").
↑ comment by Shmi (shminux) · 2020-02-29T08:33:31.429Z · LW(p) · GW(p)
between 60 and 160 million people will die from it within about a year.
That seems high. If you assume that it's as contagious as the regular flu, given that every year about 5-15% of the world's population gets infected (https://en.wikipedia.org/wiki/Influenza#Epidemic_and_pandemic_spread), that makes roughly 700 million infected, and given an expected mortality rate in the low single digits (currently 7% of all closed cases and dropping, estimated 1% in general), we arrive at an estimate of roughly 7-10 million deaths without any containment measures in place. Given the containment measures, the number of infections and deaths is likely to be a fraction of that, likely under a million dead.
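[Editor's illustration] A minimal sketch of the back-of-envelope arithmetic above; the attack rate and fatality rate are the comment's assumptions, not data:

```python
# A minimal sketch of the estimate above; inputs are assumptions, not data.
world_population = 7.7e9   # approximate 2020 world population
attack_rate = 0.09         # ~5-15% infected per flu season; take roughly 9%
fatality_rate = 0.01       # assumed ~1% case fatality rate

infected = world_population * attack_rate
deaths = infected * fatality_rate
print(f"infected ~ {infected / 1e6:.0f} million")  # ~700 million
print(f"deaths   ~ {deaths / 1e6:.0f} million")    # ~7 million, before containment
```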
↑ comment by Matthew Barnett (matthew-barnett) · 2020-02-29T09:18:57.778Z · LW(p) · GW(p)
From this thread [EA · GW],
- The coronavirus spreads a little faster than the flu.
- You have some natural immunity to flu even though each season the strain is different. You probably have no immunity against this coronavirus.
- We have a reliable vaccine against seasonal flu. We will not have a vaccine or effective treatment for coronavirus for some time.
- Seasonal flu is very well characterized and understood. This virus is still under intensive study, and all the numbers I give have uncertainty, which means that it may be worse than our best guess. Long-term effects of catching the virus are unknown.
Also, my estimates from a few days ago were out of date; I did more research in the intervening time and found that the case fatality rate was probably lower than I had previously been led to believe (I did research back in January and then stopped for a while, since it was draining to keep researching it).
My current estimate, which you can quote me on, is that there is a 10% chance of the virus killing more than 50 million people. [ETA: Update, I did more research; 5% is now probably my current estimate.] My language above did not reflect my probability estimates: I used the word "plausible", but not in a sense that implied "probable".
↑ comment by John_Maxwell (John_Maxwell_IV) · 2020-03-01T06:05:25.759Z · LW(p) · GW(p)
Can't remember where, but I remember reading that for people in their 20s and 30s, the death rate is only 0.1%.
↑ comment by Lukas_Gloor · 2020-02-24T12:09:10.143Z · LW(p) · GW(p)
One would think the incentives for an international body like the WHO would be different, but the way they handled it sadly suggests otherwise. (That said, I don't actually know whether a stronger early reaction by the WHO would have changed anything, because it seems like most of the necessary stuff happens on national levels anyway.)
↑ comment by Wei Dai (Wei_Dai) · 2020-02-25T10:17:35.103Z · LW(p) · GW(p)
See these news stories about the WHO being blamed for being too aggressive about swine flu, which probably caused it to learn a wrong lesson:
- https://web.archive.org/web/20100420235803/http://www.msnbc.msn.com:80/id/36421914
- https://web.archive.org/web/20100531094130/http://www.timesonline.co.uk/tol/news/world/article7104253.ece
↑ comment by brianwang712 · 2020-03-12T10:55:55.368Z · LW(p) · GW(p)
You might also be interested in the 1976 mass vaccination program in the US for swine flu, which was a case of perceived overreaction (given the anticipated pandemic never materialized) and also hurt the reputation of public health generally: https://www.discovermagazine.com/health/the-public-health-legacy-of-the-1976-swine-flu-outbreak
Or in "The Cutter Incident" in 1955, where a rush to get a polio vaccine out in advance of the next polio season resulted in some batches containing live polio virus, with several children receiving the vaccine actually getting polio instead: https://en.wikipedia.org/wiki/Cutter_Laboratories#The_Cutter_incident
There's definitely a history of incidents in public health where perceived overreaction was followed by public backlash, which may well be weighing on public health officials' minds today. I don't know if becoming more conservative and slower to take action is necessarily the wrong lesson, though. Even if you think, just on the numbers, that taking preventative measures in each of these incidents was correct ex ante given the stakes involved, reputational risks are real and have to be taken into account. As much as "take action to prepare for low-probability, high-consequence scenarios when the expected cost < expected benefit" applies to personal preparation, it doesn't translate easily to governmental action, at least not when "expected cost" doesn't factor in "everyone will yell at you and trust you less in the future if the low-probability scenario doesn't pan out, because people don't do probabilities well."
This does put us in a bit of a bind, since ideally you'd want to have public health authorities be able to take well-calibrated actions against <10%-likely scenarios. But they are, unfortunately, constrained by public perception to some extent.
comment by Wei Dai (Wei_Dai) · 2020-02-24T13:13:26.340Z · LW(p) · GW(p)
Global equity markets may have underestimated the economic effects of a potential COVID-19 pandemic because the only historical parallel to it is the 1918 flu pandemic (which is likely worse than COVID-19 due to a higher fatality rate) and stock markets didn't drop that much. But maybe traders haven't taken into account (and I just realized this) that there was war-time censorship in effect which strongly downplayed the pandemic and kept workers going to factories, which is a big disanalogy between the two cases, so markets could drop a lot more this time around. The upshot is that maybe it's not too late to short the markets.
↑ comment by ChristianKl · 2020-02-24T19:55:34.377Z · LW(p) · GW(p)
How can a private individual with a few thousand dollars to invest effectively trade on the idea that equity markets underestimate this? Might this be a way to make money for rationalists?
↑ comment by Wei Dai (Wei_Dai) · 2020-02-24T21:19:24.298Z · LW(p) · GW(p)
I bought some S&P 500 put options (SPXW Apr 17 2020 3000 Put (PM) to be specific) a couple of weeks ago. They were 40% underwater at some point because the market kept going up (which confused me a lot), but is up 125% as of today. (Note that it's very easy to lose 100% of your investment when trading options. In my case, all I'd have to do is not sell the options until April 17 and S&P 500 hasn't dropped to 3000 by then.) I had to open a brokerage account (most have no minimum I think) then apply to trade options then wait a day to be approved. You can also sell stocks short. You can also bet against foreign markets and specific stocks this way.
The above is for information purposes only. It is not intended to be investment advice. Seek a duly licensed professional for investment advice.
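[Editor's illustration] A minimal sketch of the payoff mechanics behind that warning, with made-up numbers rather than the actual position, showing how a long put can go to zero:

```python
# A minimal sketch of a long put's P&L at expiration; numbers are illustrative.
def put_pnl_at_expiry(strike, spot_at_expiry, premium_paid, contracts=1):
    """P&L of a long put held to expiration (100 shares per contract)."""
    intrinsic = max(strike - spot_at_expiry, 0.0)
    return (intrinsic - premium_paid) * 100 * contracts

# If the index never drops below the 3000 strike, the put expires worthless:
print(put_pnl_at_expiry(strike=3000, spot_at_expiry=3100, premium_paid=40.0))
# -> -4000.0 (the entire premium: a 100% loss)
print(put_pnl_at_expiry(strike=3000, spot_at_expiry=2700, premium_paid=40.0))
# -> 26000.0 (intrinsic value of 300 minus the 40 premium, times 100)
```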
↑ comment by Wei Dai (Wei_Dai) · 2020-02-27T22:08:26.096Z · LW(p) · GW(p)
The options I bought are up 700% since I bought them, implying that as of 2/10/2020 the market thought there was less than a 1/8 chance things would be as bad as they are today. At least for me this puts a final nail in the coffin of the EMH.
Added on Mar 24: Just in case this thread goes viral at some point, to prevent a potential backlash against me or LW (due to being perceived as caring more about making money than saving lives), let me note that on Feb 8 I thought of and collected a number of ideas for preventing or mitigating the pandemic that I foresaw and subsequently sent them to several people working in pandemic preparedness, and followed up with several other ideas as I came across them.
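[Editor's illustration] A minimal sketch of the "less than 1/8" inference, with illustrative prices rather than the actual fills:

```python
# If an option's price roughly equals its probability-weighted payoff, the
# implied chance of today's scenario is at most (price paid) / (value today).
price_paid = 10.0    # hypothetical put price on 2/10/2020
value_today = 80.0   # up 700%, i.e. 8x the purchase price

implied_probability = price_paid / value_today
print(f"market-implied probability of this outcome <= {implied_probability}")  # 0.125
```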
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2020-02-27T23:57:44.405Z · LW(p) · GW(p)
Thank you for sharing this info. My faith is now shaken.
↑ comment by Wei Dai (Wei_Dai) · 2020-02-28T03:32:08.640Z · LW(p) · GW(p)
From someone replying to you on Twitter:
Someone made a profitable trade ergo markets aren’t efficient?
This is why I said "at least for me". You'd be right to discount the evidence and he would be right to discount the evidence even more, because of more room for selection bias.
ETA: Hmm, intuitively this makes sense, but I'm not sure how it squares with Aumann agreement. Maybe someone can try to work out the actual math?
↑ comment by Mason Bially (mason-bially) · 2020-06-17T02:03:53.057Z · LW(p) · GW(p)
I always thought the EMH was obviously invalid due to its connection with the P=NP problem (which is to say, the EMH holds iff P=NP).
↑ comment by Wei Dai (Wei_Dai) · 2020-02-28T19:20:28.611Z · LW(p) · GW(p)
The position is now up 1000%. I find myself unsure what to do at this point. (Aside from taking some profit out) should I close out the position, and if so put the money into what?
Also, I find myself vexed with thoughts like "if only I had made this other trade, I could have made even more profits" or "if only I had put even more money into the bet...". How do professional or amateur traders deal with this?
↑ comment by Wei Dai (Wei_Dai) · 2020-03-11T20:59:56.691Z · LW(p) · GW(p)
An update on this trade in case anyone is interested. The position is now up 1500%. I also have another position which is up 2300% (it's a deeper out-of-the-money put, which I realized would be an even better idea after seeing a Facebook post by Danielle Fong). For proper calibration I should mention that a significant part of these returns is due to chance rather than skill:
- VIX (a measure of stock market volatility priced into options) was unreasonably low when I bought the puts (apparently because traders got used to central banks rescuing the stock market on every downturn), meaning the put options were underpriced in part due to that, but I didn't know this.
- Russia decided not to cooperate with Saudi Arabia in lowering oil production, in order to hurt the US shale oil industry. This is not something I could have reasonably predicted.
- I also didn't predict that the CDC would bungle their testing kits, and the FDA would delay independent testing by others so much, thus making containment nearly impossible in the US.
↑ comment by Wei Dai (Wei_Dai) · 2020-06-25T05:54:58.600Z · LW(p) · GW(p)
Another reason for attributing part of the gains (from betting on the coronavirus market crash) to luck, from Rob Henderson's newsletter, which BTW I highly recommend:
The geneticist Razib Khan has said that the reason the U.S. took so long to respond to the virus is that Americans do not consider China to be a real place. For people in the U.S., “Wuhan is a different planet, mentally.” From my view, it didn’t seem “real” to Americans (or Brits) until Italy happened.
Not only have I lived in China, my father was born in Wuhan and I've visited there multiple times.
↑ comment by ryan_b · 2020-06-25T15:21:25.393Z · LW(p) · GW(p)
It feels like your background should be attributed differently than things like the Saudi-Russian spat, or the artificially deflated VIX. In Zvi's terminology this is an Unknown Known [LW · GW]; it isn't as though you weren't updating based on it. It was merely an unarticulated component of the prior.
↑ comment by Matthew Barnett (matthew-barnett) · 2020-03-12T16:48:10.228Z · LW(p) · GW(p)
After today's crash, what are you at now?
↑ comment by Wei Dai (Wei_Dai) · 2020-03-12T16:50:41.790Z · LW(p) · GW(p)
Up 2600% and 5200%. ETA: Now back down to 2300% and 4200%.
↑ comment by yangshuo1015 · 2020-03-26T20:13:07.818Z · LW(p) · GW(p)
Have you sold those put options by now? It looks like the Fed and Treasury's $6 trillion stimulus package boosted the market a lot. I had a similar put position which dropped significantly during the past 2 days of market rally. Do you think it is still good to hold the put options?
↑ comment by Wei Dai (Wei_Dai) · 2020-03-27T02:10:31.389Z · LW(p) · GW(p)
I did sell some of the puts, but not enough of them and not near enough to the bottom to not leave regrets. I definitely underestimated how fast and strong the monetary and fiscal responses were, and paid too much attention to epidemiological discussions relative to developments on those policy fronts. (The general lesson here seems to be that governments can learn to react fast on something they have direct experience with, e.g., Asian countries with SARS, the US with the 2008 financial crisis.) I sold 1/3 of remaining puts this morning at a big loss (relative to paper profits at the market bottom) and am holding the rest since it seems like the market has priced in the policy response but is being too optimistic about the epidemiology. The main reason I sold this morning is that the Fed might just "print" as much money as needed to keep the market at its current level, no matter how bad the real economy gets.
↑ comment by homsit · 2020-03-12T18:50:48.429Z · LW(p) · GW(p)
Why are deeper out-of-the-money puts better here? Have been scratching my head at this one for a while, but haven't been able to figure it out.
↑ comment by Wei Dai (Wei_Dai) · 2020-03-12T16:37:19.052Z · LW(p) · GW(p)
One explanation is that the deeper out-of-the-money put (which remains out-of-the-money) benefits from both a fall in the underlying security and an increase in VIX. The shallower out-of-the-money put (which became in-the-money as a result of the market drop) benefits from the former, but not so much from the latter. Maybe another way to explain it is that the deeper out-of-the-money put was more mispriced to begin with.
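[Editor's illustration] A toy Black-Scholes calculation may make this concrete. This is not from the thread: the spot, strikes, vols, and dates are made up, and it assumes no volatility skew, so it overstates the effect:

```python
# A minimal sketch (toy inputs, r = 0, no dividends, no volatility skew) of
# the effect described above: when the index falls and implied volatility
# spikes, the put that stays out-of-the-money gains more in percentage terms.
from math import log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def bs_put(spot, strike, vol, t_years):
    d1 = (log(spot / strike) + (vol**2 / 2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    return strike * N(-d2) - spot * N(-d1)

# Before: index at 3350, implied vol 15%; after: index down ~10%, vol doubled.
for strike in (3100, 2900):  # becomes in-the-money vs. stays out-of-the-money
    before = bs_put(3350, strike, vol=0.15, t_years=60 / 365)
    after = bs_put(3000, strike, vol=0.30, t_years=45 / 365)
    print(f"strike {strike}: {before:.2f} -> {after:.2f} ({after / before:.0f}x)")
```

With these toy numbers the put that stays out-of-the-money multiplies far more, though real-world skew makes the gap smaller.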
↑ comment by gilch · 2020-03-01T04:21:19.388Z · LW(p) · GW(p)
Epistemic status: I am not a financial advisor. Please double-check anything I say before taking me seriously. But I do have a little experience trading options. I am also not telling you what to do, just suggesting some (heh) options to consider.
Your "system 1" does not know how to trade (unless you are very experienced, and maybe not even then). Traders who know what they are doing make contingency plans in advance to avoid dangerous irrational/emotional trading. They have a trading system with rules to get them in and out. Whatever you do, don't decide it on a whim. But doing nothing is also a choice.
Options are derivatives, which makes their pricing more complex than the underlying stock. Options have intrinsic value, which is what they're worth if exercised immediately, and the rest is extrinsic value, which is their perceived potential to have more intrinsic value before they expire. Options with no intrinsic value are called out of the money. Extrinsic value is affected by time remaining and the implied volatility (IV), or the market-estimated future variance of the underlying. When the market has a big selloff like this, IV increases, which inflates the extrinsic value of options. And indeed, IV is elevated well above normal now. High IV conditions like this do not tend to last long (perhaps a month). When IV reverts to the mean, the option's extrinsic value will be deflated. You should not be trading options with no awareness of IV conditions.
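[Editor's illustration] A minimal sketch of the intrinsic/extrinsic split described above, with made-up quote numbers ("mid" stands in for the option's market price):

```python
# A minimal sketch of intrinsic vs. extrinsic value for a put; numbers are toy.
def put_value_breakdown(strike, spot, mid):
    intrinsic = max(strike - spot, 0.0)  # value if exercised immediately
    extrinsic = mid - intrinsic          # time value plus implied-volatility premium
    return intrinsic, extrinsic

# An out-of-the-money put is all extrinsic value, which deflates as
# expiration nears or as IV reverts to the mean:
print(put_value_breakdown(strike=2900, spot=3000, mid=80.0))   # (0.0, 80.0)
# An in-the-money put is part intrinsic, part extrinsic:
print(put_value_breakdown(strike=3100, spot=3000, mid=150.0))  # (100.0, 50.0)
```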
If you are no longer confident in your forecast, it may be prudent to take some money off the table. You can sell your option at a profit and then put the money in a different position that you like better. Perhaps a different strike or expiration date, or something else entirely.
A "safe haven" investment is one that traders tend to buy when the stock market is falling. For example, TLT (a long-term treasury bond ETF), has shot up due to the current market crisis, but it is also a suitable investment vehicle in its own right, with buy-and-hold seeing positive returns in the long term, so it can hold value even after the market turns around. But being a bond fund with lower volatility, its returns are likewise lower.
On the other hand, if you are more confident in your forecast and want to double down, you could close one of your puts and use some of the profits from your put to buy two puts at a lower strike. (Maybe out of the money for their Gamma*). If your forecast is correct, and the market continues to fall rapidly, you'll gain profit even faster, but if you're wrong and the market turns around, they may expire worthless. Keep in mind that these puts are more expensive than normal due to high IV, even considering the current underlying price. If the market regains confidence, they'll deflate in value, even before the market turns around. Options with less extrinsic value are less affected by IV. (IV sensitivity is known as Vega.)
If you have a margin account, you could take advantage of the high IV conditions by selling call spreads. You would sell the call with a Delta* of ~.3 and simultaneously buy another call one strike higher up to cap your losses if you're wrong (this also reduces the margin required). This will be for a net credit. If the market continues to fall, you can let the whole spread expire worthless and keep the credit, or buy it back early for less than the credit (maybe for half) and then reposition. If you're not terribly wrong and the market goes sideways or even slightly up, you can still buy these back for less than you paid for them due to deflating extrinsic as expiration nears and IV falls (due to market stabilization). If you are wrong and the spread goes under, your max loss is limited to your original margin (the difference between strikes, less the initial credit).
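[Editor's illustration] A worked example of the max-loss arithmetic for such a spread, with toy strikes and credit rather than real quotes:

```python
# A minimal sketch of the short call spread described above; numbers are toy.
def call_spread_pnl(short_strike, long_strike, credit, spot_at_expiry):
    """P&L per spread at expiration, 100 shares per contract."""
    short_leg = -max(spot_at_expiry - short_strike, 0.0)  # sold call
    long_leg = max(spot_at_expiry - long_strike, 0.0)     # bought call (the cap)
    return (short_leg + long_leg + credit) * 100

# Sell the 3100 call, buy the 3110 call, for a net credit of $3.00:
for spot in (3000, 3105, 3200):
    print(spot, call_spread_pnl(3100, 3110, 3.00, spot))
# 3000 -> +300 (keep the full credit)
# 3105 -> -200 (short leg in the money by 5, minus the 3 credit)
# 3200 -> -700 (max loss: the 10-wide spread minus the 3 credit)
```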
[*Delta is a measure of sensitivity to the price of the underlying. It's also a rough estimate of the probability that the option will have any intrinsic value at expiration. Gamma is the rate of change of Delta. Together with Theta (time sensitivity) and Vega, these are known as The Greeks, and should be available from your broker along with the option quotes.]
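[Editor's illustration] For readers who want to see the Greeks concretely, a minimal Black-Scholes sketch with r = 0 and toy inputs; brokers compute these for you:

```python
# A minimal sketch of the Black-Scholes Greeks for a long put (r = 0).
from math import log, sqrt
from statistics import NormalDist

nd = NormalDist()

def put_greeks(spot, strike, vol, t_years):
    d1 = (log(spot / strike) + (vol**2 / 2) * t_years) / (vol * sqrt(t_years))
    delta = nd.cdf(d1) - 1                                       # negative for long puts
    gamma = nd.pdf(d1) / (spot * vol * sqrt(t_years))            # rate of change of Delta
    theta = -spot * nd.pdf(d1) * vol / (2 * sqrt(t_years)) / 365 # per calendar day
    vega = spot * nd.pdf(d1) * sqrt(t_years) / 100               # per 1 vol point
    return delta, gamma, theta, vega

print(put_greeks(spot=3000, strike=2900, vol=0.30, t_years=45 / 365))
```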
↑ comment by Wei Dai (Wei_Dai) · 2020-03-01T05:34:40.380Z · LW(p) · GW(p)
Thanks, this is a really helpful intro to options. One thing you didn't address, which makes me hesitant to do any more options trading, is the bid-ask spread, which can easily be 10% or more for some of the options I'm looking at. I don't know how to tell when the bid-ask spread makes strategies such as "sell 1 put and buy 2 puts at a lower strike" not worth doing (because the potential profit is eaten up by transaction costs).
Also, picking the strike price and expiration date is a mystery to me. I did it by intuition and it seems to have worked out well enough, but it was probably far from optimal.
They have a trading system with rules to get them in and out.
I don't see how a trading system can incorporate new complex and often ambiguous evidence in real time. I definitely take your point about emotional trading being dangerous though.
Maybe out of the money for their Gamma
What does this mean?
Is there a book you can recommend for a more complete education about options? Maybe I can quickly flip through it to help me figure out what to do.
↑ comment by gilch · 2020-03-01T06:21:32.616Z · LW(p) · GW(p)
the bid-ask spread, which can easily be 10% or more
Options are much less liquid than the underlying, since the market is divided among so many strikes and dates. If the spread is less than 10% of the ask price, that's actually considered pretty good for an option. You can also look at open interest (the number of open contracts) and volume (the number traded today) for each contract to help judge liquidity (this information should also be available from the broker.) Typically strike prices closer to the underlying price are more liquid. Also, the monthly (third-Friday) contracts tend to be more liquid than the Weeklys. (Weeklys weren't available before, so monthly contracts are something of a Schelling point. They also open sooner.)
Do not trade options with market orders. Use limit orders and make an offer at about the midpoint between bid and ask. The market maker will usually need some time to get around to your order. You'll usually get a fill within 15 minutes. If not, you may have to adjust your price a little before someone is willing to take the other side of the deal. A little patience can save a lot of money.
↑ comment by gilch · 2020-03-01T07:37:33.479Z · LW(p) · GW(p)
I don't know how to tell when the bid-ask spread makes strategies such as "sell 1 put and buy 2 puts at a lower strike" not worth doing (because the potential profit is eaten up by transaction costs).
I meant close one of the profitable puts you already own, and then use the money to buy two more. (Incidentally, the spread you are describing is called a backspread, which is also worth considering when you expect a big move, as the short option can offset some of the problematic Greeks of the long ones.) Maybe you can vary the ratios. It depends on what's available. I don't know how many puts you have, but how aggressive you should be depends on your forecast, account size, and risk tolerance.
I don't know your transaction costs either, but commissions have gotten pretty reasonable these days. This can vary widely among brokers. TD Ameritrade, for example, charges only $0.65 per contract and lets you close contracts under $0.05 for free. Tastyworks charges $1.00 per contract, but always lets you close for free. They also cap their commissions at $10 per leg (which can add up if you trade at high enough volume). Firstrade charges $0. That is not a typo. (There are still some regulatory fees that add up to less than a cent.) If your commissions are much higher than these, maybe you need a new broker.
↑ comment by gilch · 2020-03-01T07:08:25.201Z · LW(p) · GW(p)
I don't see how a trading system can incorporate new complex and often ambiguous evidence in real time. I definitely take your point about emotional trading being dangerous though.
Systematic trading is not the same thing as algorithmic trading. They're related, but algorithmic trading is taken to the extreme where a computer can do all the work. Normal systematic trading can have a human element, and you can provide the "forecast" component (instead of technical signals or something), and the rules tell you what to do based on your current forecast.
You need to have an exit already planned when you get in. Not just how to deal with a win, but also how to handle a loss, or you may be tempted to take profits too early, or be in denial and ride a loss too far. The adage is "cut your losses short and let your profits run". Emotional trading tends to do the opposite and lose money. (BTW, the rule is the opposite for short option spreads.)
Carver's Systematic Trading is a good introduction to the concept. This one I have read.
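[Editor's illustration] One hypothetical sketch of what pre-committed exit rules can look like; the thresholds are arbitrary, not a recommendation:

```python
# A minimal sketch of pre-committed exit rules for a long option position:
# cut losses at a fixed drawdown from entry, and trail profits from the high.
def exit_decision(entry_price, current_price, high_water_mark=None,
                  stop_loss_pct=0.50, trail_pct=0.30):
    """Return 'hold' or 'exit' based on rules decided before entry."""
    hwm = max(high_water_mark or entry_price, current_price)
    if current_price <= entry_price * (1 - stop_loss_pct):
        return "exit"  # cut losses short: down 50% from entry
    if current_price <= hwm * (1 - trail_pct):
        return "exit"  # let profits run, but trail the high-water mark
    return "hold"

print(exit_decision(entry_price=40.0, current_price=18.0))                           # exit
print(exit_decision(entry_price=40.0, current_price=280.0, high_water_mark=420.0))   # exit
print(exit_decision(entry_price=40.0, current_price=400.0, high_water_mark=420.0))   # hold
```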
↑ comment by gilch · 2020-03-01T06:57:32.408Z · LW(p) · GW(p)
Maybe out of the money for their Gamma
What does this mean?
Gamma is the rate of change of Delta. It's how curved your P&L graph is. Gamma opposes Theta. If you want more Deltas (like owning shares) and you expect a big move, Gammas are a way to get them cheaply, because they turn into Deltas. (Of course, Deltas are negative for long puts.)
Is there a book you can recommend for a more complete education about options? Maybe I can quickly flip through it to help me figure out what to do.
Options are complex, but maybe not that complex. Option pricing models do use differential equations, but everybody uses computers for that. Trading options is not beyond the reach of anyone who has passed a calculus class, but I'm still not sure if you can pick it up that quickly.
I did not learn all of this from a textbook. I know there are books that cover this. Hull's Options, Futures and Other Derivatives is the introductory textbook I hear recommended, but I have not read it myself (you might want to skip the futures chapters.) There may be shorter introductions. I think Tastyworks was supposed to have a good intro somewhere.
↑ comment by gilch · 2020-03-01T06:38:01.637Z · LW(p) · GW(p)
Also, picking the strike price and expiration date is a mystery to me.
Use the Greeks! Watch them and adjust them to your needs. They trade off against each other, but a spread can have the sum or difference of them. Keep in mind that extrinsic value is perceived potential, and the Greeks make a lot more sense. The strikes nearest the underlying price have the most extrinsic value and liquidity. Those deeper in the money have more Delta; each Delta is like owning a share (puts have negative Deltas). Those further out of the money have more Gamma for the price. These relationships are nonlinear, because the underlying price variance is assumed to have a normal distribution (which is close enough to true most of the time).
Theta is not constant. It gets stronger the closer you get to expiration. Think about future variance as a bell curve spreading out from the current price like <. There's much less time to vary left near the tip of the curve. For this reason, when holding a long option position, you probably want 60-90 days so you're not exposed to too much Theta. But that also means more Vega, due to the higher extrinsic value.
↑ comment by Matthew Barnett (matthew-barnett) · 2020-02-28T22:18:57.871Z · LW(p) · GW(p)
I find myself unsure what to do at this point.
On one hand, people in the mainstream still seem too optimistic to me. Like, apparently,
Cases of the new coronavirus disease are rising quickly outside China, and the odds of the outbreak turning into a pandemic have now doubled — from 20% to 40%, according to a report from Moody’s Analytics.
This seems super optimistic to me. I don't see why people are still forecasting majority probability that it will be contained. On the other hand, I've been convinced to be more optimistic than the 15-20% prediction of disaster I had the other day.
I did a more detailed dive into the evidence for a case fatality rate in the 2-3% range and I now think that it's very likely lower. Still, at 0.5%-1% it would be much more severe than an average flu season, and the market might take it seriously simply due to the shock factor. There is also the potential for an effective antiviral being developed by the end of the year, which makes me a bit more hopeful.
I am not well calibrated about whether the ~12% market drop is appropriate given the evidence above.
↑ comment by ioannes (ioannes_shade) · 2020-03-01T17:42:15.559Z · LW(p) · GW(p)
I find myself unsure what to do at this point. (Aside from taking some profit out) should I close out the position, and if so put the money into what?
I bought some $APT (US-based mask manufacturer) in mid-January.
Sold off most of it this week as it 8x'd. I put most of the earnings into other coronavirus-relevant names: $AIM, $INO, $COCP, $MRNA, $TRIB. Also considering $GILD but haven't bought any yet ($MRNA and $GILD aren't really corona pure-plays because they're large-ish biotech companies with multiple product lines).
I'll revisit these allocations when the market opens on Monday. I don't have a good sense of how smart this is... there's a lot of hype in this sector and I haven't carefully diligenced any of these picks; they're just names that seem to be doing something real and haven't had a crazy run-up yet.
I also pulled back a lot of my portfolio into cash.
↑ comment by Three-Monkey Mind · 2020-02-28T20:22:21.401Z · LW(p) · GW(p)
Also, I find myself vexed with thoughts […] How do professional or amatuer traders deal with this?
Habituation, meditation, and/or alcohol.
↑ comment by Matthew Barnett (matthew-barnett) · 2020-02-27T23:01:17.334Z · LW(p) · GW(p)
One way of framing the EMH is to say that in normal circumstances, it's hard to beat the market. But we are in a highly abnormal circumstance -- same with Bitcoin. One could imagine that even if the EMH is false in its strong form, you have to wait years before seeing each new opportunity. This makes the market nearly unexploitable.
↑ comment by drethelin · 2020-02-28T00:36:25.767Z · LW(p) · GW(p)
The absolutely important part that people with a basic 101 understanding of the EMH seem to miss is that "hard" in no way means "impossible".
People do hard things all the time! It takes work and time and IQ and learning from experience but they do it.
↑ comment by Wei Dai (Wei_Dai) · 2020-02-28T07:39:01.051Z · LW(p) · GW(p)
One could imagine that even if the EMH is false in its strong form, you have to wait years before seeing each new opportunity. This makes the market nearly unexploitable.
I'm not sure I understand your point. Investing in an index fund lets you double your money every 5 to 10 years. If every 10 years there's an opportunity to quickly 5x your money or more (on top of the normal market growth), how does it make sense to call that "nearly unexploitable"?
↑ comment by Matthew Barnett (matthew-barnett) · 2020-02-28T08:04:23.875Z · LW(p) · GW(p)
Hmm, true, but if you took that argument to its logical extreme, the existence of a single grand opportunity would imply the market is exploitable. I mean, technically, yeah, but when I talk about the EMH I mostly mean that $20 bills don't show up every week.
↑ comment by Phil (philtable) · 2020-02-28T02:23:08.068Z · LW(p) · GW(p)
That's a tautology: Anytime I can beat the market is a highly abnormal time. You can only beat the market in a highly abnormal time.
↑ comment by Matthew Barnett (matthew-barnett) · 2020-02-28T00:55:01.681Z · LW(p) · GW(p)
Eh, I'm not so sure. If I noticed that every Wednesday the S&P went up 1%, and then fell 1% the next day, that would allow me to regularly beat it, no? Unless we are defining "abnormal" in a way that makes reference to the market.
↑ comment by steven0461 · 2020-02-28T00:37:34.958Z · LW(p) · GW(p)
If the market is genuinely this beatable, it seems important for the rationalist/EA/forecaster cluster to take advantage of future such opportunities in an organized way, even if it just means someone setting up a Facebook group or something.
(edit: I think the evidence, while impressive, is a little weaker than it seems on first glance, because my impression from Metaculus is the probability of the virus becoming widespread has gotten higher in recent days for reasons that look unrelated to your point about what the economic implications of a widespread virus would be.)
↑ comment by Matthew Barnett (matthew-barnett) · 2020-02-28T00:58:18.893Z · LW(p) · GW(p)
my impression from Metaculus is the probability of the virus becoming widespread has gotten higher in recent days for reasons that look unrelated to your point about what the economic implications of a widespread virus would be.
Do you care to share those reasons? I've also been following Metaculus and my impression has been a slow progression of updates as the outbreak has gotten bigger, rather than a big update. However, the stock market looks like it did a big update.
↑ comment by steven0461 · 2020-02-28T01:57:28.737Z · LW(p) · GW(p)
I don't know what the reasons are off the top of my head. I'm not saying the probability rise caused most of the stock market fall, just that it has to be taken into account as a nonzero part of why Wei won his 1 in 8 bet.
↑ comment by Gurkenglas · 2020-03-03T15:05:39.012Z · LW(p) · GW(p)
The easy way is for Wei_Dai to take your money, invest it as he would his, and take 10% of the increase.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2020-03-01T06:26:01.807Z · LW(p) · GW(p)
It seems like your opinion has changed a lot since our conversation 7 months ago, when you wrote [LW(p) · GW(p)]:
(I personally bought some individual stocks when I was younger for reasons similar to ones you list, but they mostly underperformed the market so I stopped.)
↑ comment by MTGandP · 2020-04-18T21:37:24.396Z · LW(p) · GW(p)
Now that April 17 has passed, how much did you end up making on this bet?
↑ comment by Wei Dai (Wei_Dai) · 2020-04-20T21:18:42.703Z · LW(p) · GW(p)
I rolled a lot of the puts into later expirations, which have become almost worthless. I did cash some of them out or convert them into long positions, and made about 10x my initial bet as a result. (In other words, I lost about 80% of my paper profits.) It seems like I have a tendency to get out of my bets too late (the same thing happened with Bitcoin), which I'll have to keep in mind in the future. BTW, I wrote about some of my other investments/bets recently at https://ea.greaterwrong.com/posts/g4oGNGwAoDwyMAJSB/how-much-leverage-should-altruists-use/comment/RBXqgYshRhCJsCvWG, in case you're interested.
↑ comment by ioannes (ioannes_shade) · 2020-03-02T17:01:25.450Z · LW(p) · GW(p)
Do you know of a way to buy puts with <$1000?
I don't understand options well and would like to make some small options trades to drive my learning, but from what I see on my broker (Fidelity), all puts for something like $SPX cost several thousand at minimum.
↑ comment by gilch · 2020-03-02T20:20:02.171Z · LW(p) · GW(p)
I feel that I should also point out that long options are a risky play. They do eventually expire, and may expire worthless. You have to get the timing right as well as the direction, and deflating volatility could mean they lose most of their value sooner than you expect. You could lose your entire investment. If you want to experiment, either do a "paper" trade (simulate and track it, but don't actually do it), or make sure it's money you can afford to lose on a very small percentage of your account. 5% of the account is considered big for a single trade, even for experienced option traders who know what they are doing, and I basically never go that high on a long position. I'd recommend you keep it to 1% or less.
↑ comment by gilch · 2020-03-02T20:10:42.777Z · LW(p) · GW(p)
You can try puts on SPY instead. It's an ETF that tracks the same index, the S&P 500, but the share price is 1/10th, so the options are proportionally cheaper as well. There's also the XSP mini options, but I think SPY still has better liquidity.
Also, if you have the right kind of account, you can try spreads, buying one option and selling another to help pay for it.
You could also consider a call option on an inverse index ETF, like SH, which is designed to rise when SPX falls. Its share price is even lower than SPY's, currently about 1/100th of SPX or under $30/share. Most options on this will cost hundreds or less per contract, not thousands.
↑ comment by ioannes (ioannes_shade) · 2020-03-02T21:18:39.659Z · LW(p) · GW(p)
Thank you – super helpful.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2020-03-01T06:14:38.618Z · LW(p) · GW(p)
You can buy stock in companies like Zoom and Slack that enable remote work. I did this about a month ago and their stocks have gone up about 30% since then.
↑ comment by gilch · 2020-03-01T05:52:07.567Z · LW(p) · GW(p)
You could buy an inverse ETF, like SH, for a short-term bearish forecast. An advantage of inverse ETFs over options is that they do not require you to apply for a margin account or option trading privileges.
[Epistemic status: I am not a financial advisor! Double check anything I say. For educational purposes only. This is information to consider, not a recommendation to buy anything in particular. I have no idea where the market bottom is. Maybe we're already there.]
SH closely tracks the daily -1x performance of the S&P 500, but may not be aligned that well over long periods. There are a number of other inverse ETFs you might consider, including -2x (SDS) and -3x (SPXU) leveraged ones (which have even worse alignment over long periods, especially during high-volatility periods, such as right now), as well as ETFs tracking the inverse of other indexes. For longer periods consider "safe haven" investments like TLT.
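[Editor's illustration] A minimal sketch of why daily-rebalanced inverse ETFs "may not be aligned that well over long periods", using made-up daily returns:

```python
# An ETF targeting -1x the index's DAILY return drifts from -1x the index's
# return over a volatile multi-day period; the returns below are assumed.
index_daily_returns = [-0.05, +0.04, -0.06, +0.05]

index_level, inverse_etf = 1.0, 1.0
for r in index_daily_returns:
    index_level *= 1 + r
    inverse_etf *= 1 - r  # -1x each day, not -1x over the whole period

print(f"index total return: {index_level - 1:+.2%}")   # about -2.5%
print(f"inverse ETF return: {inverse_etf - 1:+.2%}")   # about +1.5%, not +2.5%
```

The gap grows with volatility and with leverage, which is why the -2x and -3x funds track even worse over long periods.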
comment by Maxime Riché (maxime-riche) · 2020-02-21T16:30:01.308Z · LW(p) · GW(p)
Offering 100-300h of technical work on an AI Safety project
I am a deep learning engineer (2 years' experience). I currently develop vision models to be used on satellite images, and I also do some software engineering around that (LinkedIn profile: https://www.linkedin.com/in/maxime-riche-73696182/). In my spare time, I am organizing an EA local group in Toulouse (France), learning RL, doing a research project on RL for computer vision (only expecting indirect utility from this), and developing an EAA (Effective Animal Advocacy) tool. I have been in the French EA community for 4 years. In 2020, I chose to work part time so as to dedicate 2 to 3 days of work per week to EA-aligned projects. Thus, for the next 8 months, I have ~10h/week that I want to dedicate to assisting an AI safety project. I am not looking for funds, nor to publish a paper or a blog post myself. To me the ideal project would be:
- a relevant technical AI safety project (research or not). I am looking for advice on the "relevant" part.
- where I would be able to help the project achieve better-quality results than it would without my contribution (e.g., through writing better code, doing more experiments, testing other designs)
- where I can learn more about technical AI safety
- where my contribution would include writing code. If it is a research proposal, then implement experiments. If there is no experimental part currently in the project, I could take charge of creating one.
comment by [deleted] · 2020-02-06T01:49:59.780Z · LW(p) · GW(p)
I think I figured out a stunningly simple way to modulate interstellar radio signals so that they contain 3-D spatial information on the point of origin of an arbitrarily short non-repeating signal. I applied this method to well-known one-off galactic radio transients and got sane results. I would love to write this up for arXiv.
Does anybody have a background in astronomy who can help make sure I write this up for arXiv using the proper terminology and software?
↑ comment by StefanHex (Stefan42) · 2020-03-03T22:21:30.126Z · LW(p) · GW(p)
That is a very broad description - are you talking about locating Fast Radio Bursts? I would be very surprised if that was easily possible.
Background: Astronomy/Cosmology PhD student
comment by Wei Dai (Wei_Dai) · 2020-02-04T21:39:08.142Z · LW(p) · GW(p)
A descriptive model of moral change, virtue signaling, and cause allocation that I thought of in part as a response to Paul Christiano's Moral public goods. (It was previously posted [LW(p) · GW(p)] deep in a subthread and I'm reposting it here to get more attention and feedback before possibly writing it up as a top-level post.)
- People are socially rewarded for exhibiting above average virtue/morality (for certain kinds of virtue/morality that are highly regarded in their local community) and punished for exhibiting below average virtue/morality.
- As a result, we evolved two internal mechanisms: preference alteration (my own phrase) and preference falsification. Preference alteration is where someone's preferences actually change according to the social reward gradient, and preference falsification is acting in public according to the reward gradient but not doing so in private. The amounts of preference alteration and preference falsification can vary between individuals. (We have preference alteration because preference falsification is cognitively costly, and we have preference falsification because preference alteration is costly in terms of physical resources.)
- Preference alteration changes one's internal "moral factions" to better match what is being rewarded. That is, the factions representing virtues/morals being socially rewarded get a higher budget.
- For example there is a faction representing "family values" (being altruistic to one's family), one for local altruism, one for national altruism, one for global altruism, one for longtermism, etc., and #3 is mainly how one ends up allocating resources between these factions, including how much to donate to various causes and how to weigh considerations for voting.
- In particular, public goods and other economic considerations do not really apply (at least not directly) to relative spending across different moral factions, because resources are allocated through budgets rather than weights in a utility function. For example, if global anti-poverty suddenly becomes much more cost effective, one doesn't vote or donate to spend more on global poverty, because the budget allocated to that faction hasn't changed (see the toy sketch after this list). Similarly, larger countries do not have higher ODA as the public goods model predicts. Instead the allocation is mostly determined by how much global altruism is rewarded by one's local community, which differs across communities due to historical contingencies, what kinds of people make up the community, etc.
- On top of this, preference falsification makes one act in public (through, e.g., public advocacy, publicly visible donations, conspicuously punishing those who fall short of the local norms) more like someone who fully subscribes to the virtues/morals being socially rewarded, even if one's preference alteration falls short of that. [Added: This is probably responsible for purity spirals / runaway virtue signaling. E.g., people overcompensate for private deviations from moral norms by putting lots of effort into public signaling including punishing norm violators and non-punishers, causing even more preference alteration and falsification by others.]
- This system probably evolved to "solve" local problems like local public goods and fairness within the local community, but has been co-opted by larger-scale moral memeplexes.
- "Rhetoric about doing your part" is how communities communicate what the local norms are, in order to trigger preference alteration. "Feelings of guilt" is what preference alteration feels like from the inside. [Added: This is referring to some things Paul said earlier in that subthread.]
↑ comment by paulfchristiano · 2020-02-05T17:35:38.882Z · LW(p) · GW(p)
tl;dr: seems like you need some story for what values a group highly regards / rewards. If those are just the values that serve the group, this doesn't sound very distinct from "groups try to enforce norms which benefit the group, e.g. public goods provision" + "those norms are partially successful, though people additionally misrepresent the extent to which they e.g. contribute to public goods."
Similarly, larger countries do not have higher ODA as the public goods model predicts
Calling this the "public goods model" still seems backwards. "Larger countries have higher ODA" is a prediction of "the point of ODA is to satisfy the donor's consequentialist altruistic preferences."
The "public goods model" is an attempt to model the kind of moral norms / rhetoric / pressures / etc. that seem non-consequentialist. It suggests that such norms function in part to coordinate the provision of public goods, rather than as a direct expression of individual altruistic preferences. (Individual altruistic preferences will sometimes be why something is a public good.)
This system probably evolved to "solve" local problems like local public goods and fairness within the local community, but has been co-opted by larger-scale moral memeplexes.
I agree that there are likely to be failures of this system (viewed teleologically as a mechanism for public goods provision or conflict resolution) and that "moral norms are reliably oriented towards provide public goods" is less good than "moral norms are vaguely oriented towards providing public goods." Overall the situation seems similar to a teleological view of humans.
For example if global anti-poverty suddenly becomes much more cost effective, one doesn't vote or donate to spend more on global poverty, because the budget allocated to that faction hasn't changed.
I agree with this, but it seems orthogonal to the "public goods model," this is just about how people or groups aggregate across different values. I think it's pretty obvious in the case of imperfectly-coordinated groups (who can't make commitments to have their resource shares change as beliefs about relative efficacy change), and I think it also seems right in the case of imperfectly-internally-coordinated people.
(We have preference alteration because preference falsification is cognitively costly, and we have preference falsification because preference alteration is costly in terms of physical resources.)
Relevant links: if we can't lie to others, we will lie to ourselves, the monkey and the machine.
E.g., people overcompensate for private deviations from moral norms by putting lots of effort into public signaling including punishing norm violators and non-punishers, causing even more preference alteration and falsification by others.
I don't immediately see why this would be "compensation," it seems like public signaling of virtue would always be a good idea regardless of your private behavior. Indeed, it probably becomes a better idea as your private behavior is more virtuous (in economics you'd only call the behavior "signaling" to the extent that this is true).
As a general point, I think calling this "signaling" is kind of misleading. For example, when I follow the law, in part I'm "signaling" that I'm law-abiding, but to a significant extent I'm also just responding to incentives to follow the law which are imposed because other people want me to follow the law. That kind of thing is not normally called signaling. I think many of the places you are currently saying "virtue signaling" have significant non-signaling components.
↑ comment by Wei Dai (Wei_Dai) · 2020-02-10T10:50:14.786Z · LW(p) · GW(p)
I don’t immediately see why this would be “compensation,” it seems like public signaling of virtue would always be a good idea regardless of your private behavior.
I didn't have a clear model in mind when I wrote that, and just wrote down "overcompensate" by intuition. Thinking more about it, I think a model that makes sense here is to assume that your private actions can be audited by others at some cost (think of Red Guards going into people's homes to look for books, diaries, assets, etc., to root out "counter-revolutionaries"), so if you have something to hide you'd want to avoid getting audited by avoiding suspicion, and one way to do that is to put extra effort into public displays of virtue. People whose private actions are virtuous would not have this extra incentive.
As a general point, I think calling this “signaling” is kind of misleading.
I guess I've been using "virtue signaling" because it's an established term that seems to be referring to the same kind of behavior that I'm talking about. But I acknowledge that the way I'm modeling it doesn't really match the concept of "signaling" from economics, and I'm open to suggestions for a better term. (I'll also just think about how to reword my text to avoid this confusion.)
↑ comment by Wei Dai (Wei_Dai) · 2020-02-05T22:35:58.831Z · LW(p) · GW(p)
If those are just the values that serve the group, this doesn’t sound very distinct from “groups try to enforce norms which benefit the group, e.g. public goods provision” + “those norms are partially successful, though people additionally misrepresent the extent to which they e.g. contribute to public goods.”
It's entirely possible that I misunderstood or missed some of the points of your Moral public goods [LW · GW] post and then reinvented the same ideas you were trying to convey. By "public goods model" I meant something like "where we see low levels of redistribution and not much coordination over redistribution, that is best explained by people preferring a world with higher level of redistribution but failing to coordinate, instead of by people just not caring about others." I was getting this by generalizing from your opening example:
The nobles are altruistic enough that they prefer it if everyone gives to the peasants, but it’s still not worth it for any given noble to contribute anything to the collective project.
Your sections 1 and 2 also seemed to be talking about this. So this is what my "alternative model" was in reaction to. The "alternative model" says that where we see low levels of redistribution (to some target class), it's because people don't care much about the target class of redistribution and assign the relevant internal moral faction a small budget, and this is mostly because caring about the target class is not socially rewarded.
Your section 3 may be saying something similar to what I'm saying, but I have to admit I don't really understand it (perhaps I should have tried to get clarification earlier but I thought I understood what the rest of the post was saying and could just respond to that). Do you think you were trying to make any points that have not been reinvented/incorporated into my model? If so please explain what they were, or perhaps do a more detailed breakdown of your preferred model, in a way that would be easier to compare with my "alternative model"?
seems like you need some story for what values a group highly regards / rewards
I think it depends on a lot of things so it's hard to give a full story, but if we consider for example the question of "why is concern about 'social justice' across identity groups currently so much more highly regarded/rewarded than concern about 'social justice' across social classes" the answer seems to be that a certain moral memeplex happened to be popular in some part of academia and then spread from there due to being "at the right place at the right time" to take over from other decaying moral memeplexes like religion, communism, and liberalism [LW(p) · GW(p)]. (ETA: This isn't necessarily the right explanation, my point is just that it seems necessary to give an explanation that is highly historically contingent.)
(I'll probably respond to the rest of your comment after I get clarification on the above.)
Replies from: ChristianKl↑ comment by ChristianKl · 2020-02-18T13:25:57.232Z · LW(p) · GW(p)
I don't think that it's just social justice across identity groups being at the right place at the right time. As a meme it has the advantage that it allows people who are already powerful enough to affect social structures to argue why they should have more power. That's a lot harder for social justice across social classes.
↑ comment by Vaniver · 2020-02-04T23:26:46.539Z · LW(p) · GW(p)
We have preference alteration because preference falsification is cognitively costly
This seems incomplete; if I hold money in different currencies, it seems right for me to adopt 'market rates' for conversion between them, which seems like preference alteration. But the root cause isn't that it'd be cognitively costly for me to keep a private ledger of how I want to exchange between pounds and yen and dollars and a separate public ledger, it's that I was only ever using pounds and yen and dollars as an investment vehicle.
It seems quite possible that similar things are true for preferences / time use / whatever; someone who follows TV shows so that they have something to talk about with their coworkers is going to just follow whatever shows their coworkers are interested in, because they're just using it as an investment vehicle instead of something to be pursued in its own right.
Preference alteration changes one's internal "moral factions" to better match what is being rewarded. That is, the factions representing virtues/morals being socially rewarded get a higher budget.
It also seems like the factions changing directions is quite important here; you might not change the total budget spent on global altruism at all while taking totally different actions (i.e. donating to different charities).
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2020-02-08T10:05:07.569Z · LW(p) · GW(p)
Sorry for the delayed reply, but I was confused by your comment and have been trying to figure out how to respond. Still not sure I understand but I'm going to take a shot.
someone who follows TV shows so that they have something to talk about with their coworkers is going to just follow whatever shows their coworkers are interested in, because they’re just using it as an investment vehicle instead of something to be pursued in its own right.
Watching a TV show in order to talk about it with coworkers is an instance of instrumental preferences (which I didn't talk about specifically in my model but was implicitly assuming as a background concept). When I wrote "preference alteration" I was referring to terminal preferences/values. So if you switch what show you watch in order to match your coworkers' interests (and would stop as soon as that instrumental value went away), that's not covered by either "preference alteration" or "preference falsification", but just standard instrumental preferences. However if you're also claiming to like the show when you don't, in order to fit in, then that would be covered under "preference falsification".
Does this indicate a correct understanding of your comment, and does it address your point? If so, it doesn't seem like the model is missing anything ("incomplete"), except I could perhaps add an explicit explanation of instrumental preferences and clarify that "preference alteration" is talking about terminal preferences. Do you agree?
It also seems like the factions changing directions is quite important here; you might not change the total budget spent on global altruism at all while taking totally different actions (i.e. donating to different charities).
Sure, this is totally compatible with my model and I didn't intend to suggest otherwise.
Replies from: Vaniver↑ comment by Vaniver · 2020-02-09T21:53:46.704Z · LW(p) · GW(p)
Does this indicate a correct understanding of your comment, and does it address your point?
I think the core thing going on with my comment is that I think for humans most mentally accessible preferences are instrumental, and the right analogy for them is something like 'value functions' instead of 'reward' (as in RL).
Under this view, preference alteration is part of normal operation, and so should probably be cast as a special case of the general thing, instead of existing only in this context. When someone who initially dislikes the smell of coffee grows to like it, I don't think it's directly because it's cognitively costly to keep two books, and instead it's because they have some anticipation-generating machinery that goes from anticipating bad things about coffee to anticipating good things about coffee.
[It is indirectly about cognitive costs, in that if it were free you might store all your judgments ever, but from a functional perspective downweighting obsolete beliefs isn't that different from forgetting them.]
And so it seems like there are three cases worth considering: given a norm that people should root for the sports team where they grew up, I can either 1) privately prefer Other team while publicly rooting for Local team, 2) publicly prefer Local team in order to not have to lie to myself, or 3) publicly prefer Local team for some other reason. (Maybe I trust the thing that generated the norm is wiser than I am, or whatever.)
Maybe another way to think about this is how the agent relates to the social reward gradient; if it's just a fact of the environment, then it makes sense to learn about it the way you would learn about coffee, whereas if it's another agent influencing you as you influence it, then it makes sense to keep separate books, and only stop doing so when the expected costs outweigh the expected rewards.
Replies from: TurnTrout↑ comment by TurnTrout · 2020-02-10T15:27:20.494Z · LW(p) · GW(p)
I think for humans most mentally accessible preferences are instrumental, and the right analogy for them is something like 'value functions' instead of 'reward' (as in RL).
I agree. As far as I can tell, people seem to be predicting their on-policy Q function when considering different choices. See also attainable utility theory [? · GW] and the gears of impact [? · GW].
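(A minimal sketch of the reward-vs-value-function distinction being drawn here, using a made-up two-state MDP; the states, actions, rewards, and discount factor are all invented for illustration. The point: the on-policy Q-value of an action bakes in everything that follows it, not just its immediate reward.)

```python
# Policy evaluation on a toy MDP: going to the cafe has negative immediate
# reward (effort), but the cafe state yields reward afterwards, so the
# on-policy value of going ends up higher than staying home.

GAMMA = 0.9

REWARD = {("home", "stay"): 0.0, ("home", "go_to_cafe"): -0.5, ("cafe", "enjoy"): 1.0}
NEXT = {("home", "stay"): "home", ("home", "go_to_cafe"): "cafe", ("cafe", "enjoy"): "cafe"}
POLICY = {"home": "go_to_cafe", "cafe": "enjoy"}  # the policy being evaluated

# Iterative policy evaluation: V(s) <- r(s, pi(s)) + gamma * V(next state)
V = {"home": 0.0, "cafe": 0.0}
for _ in range(200):
    V = {s: REWARD[(s, POLICY[s])] + GAMMA * V[NEXT[(s, POLICY[s])]] for s in V}

def q(state, action):
    """On-policy Q-value: immediate reward plus discounted value of where you land."""
    return REWARD[(state, action)] + GAMMA * V[NEXT[(state, action)]]

print(q("home", "stay"), q("home", "go_to_cafe"))  # ~7.65 vs 8.5
# Q(home, go_to_cafe) > Q(home, stay) even though its immediate reward is
# negative: the "learned taste" for coffee lives in the value function.
```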
↑ comment by Eli Tyre (elityre) · 2020-02-29T21:05:45.235Z · LW(p) · GW(p)
[The following is a musing that might or might not be adding anything.]
As a result, we evolved two internal mechanisms: preference alteration (my own phrase) and preference falsification. Preference alteration is where someone's preferences actually change according to the social reward gradient, and preference falsification is acting in public according to the reward gradient but not doing so in private. The amounts of preference alteration and preference falsification can vary between individuals. (We have preference alteration because preference falsification is cognitively costly, and we have preference falsification because preference alteration is costly in terms of physical resources.)
One thing that comes to mind here is framing myself as a mesa-optimizer in a (social) training process. Insofar as the training process worked, and I was successfully aligned, my values are the values of the social gradient. Or I might be an unaligned optimizer intending to execute a treacherous turn (though in this context, the "treacherous turn" is not a discrete moment when I change my actions, but rather a continual back-and-forth between serving selfish interests and serving the social morality, depending on the circumstances).
"Feelings of guilt" is what preference alteration feels like from the inside.
I'm not sure that that is always what it feels like. I can feel pride at my moral execution.
comment by Steven Byrnes (steve2152) · 2020-03-03T16:03:35.483Z · LW(p) · GW(p)
After the hospitals fill up, the COVID-19 death rate is going to get a lot higher. How much higher? What's the fatality rate from untreated COVID-19?
This article may be an answer: it lumps together ICU, mechanical ventilation, and death into a "primary composite end point". That seems like an OK proxy for "death without treatment", right?
If so, Table 1 suggests a fatality rate of 6% overall, 0% for ages 0-14, 2% for ages 15-49, 7% for ages 50-64, and 21% for ages 65+. There's more in the table about pre-existing conditions and so on.
(ETA one more: 2.5% for people of all ages with no preexisting condition.)
Thoughts? Any other data?
(ETA: This should be viewed as a lower bound on fatality rate, see comments.)
Replies from: cousin_it, Wei_Dai↑ comment by cousin_it · 2020-03-03T19:40:20.003Z · LW(p) · GW(p)
Look at Table 3: most people in the study received some kind of treatment; in particular, 40% received oxygen. You can't figure out the untreated fatality rate from this.
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2020-03-03T20:10:38.299Z · LW(p) · GW(p)
Missed that! Thanks! I agree. It's a lower bound.
Interesting list of treatments. I'm a bit confused why a majority needed antibiotics, for example. I guess the virus opens the door for bacterial infections...?
Replies from: romeostevensit, leggi↑ comment by romeostevensit · 2020-03-06T08:52:38.973Z · LW(p) · GW(p)
Pneumonia is comorbid in a huge number of cases.
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2020-03-06T10:29:46.463Z · LW(p) · GW(p)
I thought pneumonia was a condition / symptom / cluster of symptoms, not a disease. You can have pneumonia caused by COVID-19, or pneumonia caused by a bacterial infection, or pneumonia caused by some other viral infection, etc. It's confusing because there's a so-called "pneumonia vaccine". It's really a "vaccine against a particular bacterial infection that often causes pneumonia". You can correct me if I'm wrong :)
Replies from: romeostevensit↑ comment by romeostevensit · 2020-03-06T19:53:44.399Z · LW(p) · GW(p)
Having a respiratory infection makes you much more vulnerable to bacterial pneumonia secondary infection which is what is being seen in a lot of the deadly cases.
Replies from: steve2152, steve2152↑ comment by Steven Byrnes (steve2152) · 2020-03-09T14:16:39.020Z · LW(p) · GW(p)
This not-particularly-reliable source says "So far, there have been very few concurrent or subsequent bacterial infections, unlike Influenza where secondary bacterial infections are common and a large source of additional morbidity and mortality". So ... I guess the doctors were giving antibiotics as a preventive measure that turned out to be unnecessary? Maybe??
↑ comment by Steven Byrnes (steve2152) · 2020-03-06T21:18:31.261Z · LW(p) · GW(p)
Thanks for explaining!!
↑ comment by leggi · 2020-03-06T13:06:39.635Z · LW(p) · GW(p)
For cases receiving antibiotics I would want to distinguish between prophylactic and therapeutic prescribing.
Are they being given "just in case" or are they being used to treat a bacterial infection (confirmed by testing)?
The general health/disease history and current medications of the patients most affected should also be considered when looking at the stats.
↑ comment by Wei Dai (Wei_Dai) · 2020-03-03T19:57:12.804Z · LW(p) · GW(p)
It seems to be a good paper to consider, which I hadn't seen before.
This article may be an answer: it lumps together ICU, ventilation, and death into a “primary composite end point”. That seems like an OK proxy for “death without treatment”, right?
The number of people reaching the "primary composite end point" would also probably increase without treatment, though, so it can only serve as a lower bound. The same table gives 15.7% as "severe cases", so 6-16% seems a reasonable range, which is not too different from the 5-20% I estimated earlier [LW(p) · GW(p)].
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2020-03-03T20:07:58.112Z · LW(p) · GW(p)
Good point. Thanks!
comment by Ben Pace (Benito) · 2020-03-01T00:30:50.432Z · LW(p) · GW(p)
Is there something you think we can all do on LessWrong to help with the coronavirus?
We have a justified practical advice thread [LW · GW] and some solid posts about quarantine preparations [LW · GW], not acting out of social fears [LW · GW], and a draft model of risks from using delivery services [LW · GW].
We also have a few other questions:
- What should my triggers be for quarantining? [LW · GW]
- Is this a good or bad time for extended travel? [LW · GW]
- What will be the big-picture implications if this affects 10%+ of the world? [LW · GW]
- Will the coronavirus increase use of blockchain? [LW · GW]
Finally, here's the advice that my house and some friends put together.
I'm interested if people have ideas for better ways we could organise info on LessWrong or something.
comment by gjm · 2020-02-06T10:03:24.985Z · LW(p) · GW(p)
Steven Pinker is running a general-education course on rationality at Harvard University. There are some interesting people booked as guest lecturers. Details on Pinker's website, including links that will get you to video of all the lectures (there have been three so far).
I've watched only the first, which suggests unsurprisingly that a lot of the material will be familiar to LW regulars.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2020-02-10T15:03:30.971Z · LW(p) · GW(p)
The syllabus also includes (either as required or optional reading) https://www.lesswrong.com/posts/ujTE9FLWveYz9WTxZ/what-cost-for-irrationality [LW · GW], https://www.lesswrong.com/posts/XTXWPQSEgoMkAupKt/an-intuitive-explanation-of-bayes-s-theorem [LW · GW], https://www.lesswrong.com/posts/QxZs5Za4qXBegXCgu/introduction-to-game-theory-sequence-guide, https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/, https://80000hours.org/key-ideas/, and https://arbital.com/p/bayes_rule/?l=1zq; its "other resources" sections also include the following mentions:
LessWrong.com is a forum for the “Rationality community,” an informal network of bloggers who seek to call attention to biases and fallacies and apply reason more rigorously (sometimes to what may seem like extreme lengths).
Slate Star Codex https://slatestarcodex.com/ is an anagram of “Scott Alexander,” the author of the tutorial recommended above and a prominent member of the “rationality community.” This deep and witty blog covers diverse topics in social science, medicine, events, and everyday life.
80,000 Hours, https://80000hours.org/, an allusion to the number of hours in your career, is a non-profit that provides research and advice on how you can best make a difference through your career.
Replies from: gjm
↑ comment by gjm · 2020-02-10T17:09:14.624Z · LW(p) · GW(p)
Nice to see that Steven Pinker has the same N-blindness as Scott himself :-).
Replies from: raj-thimmiah↑ comment by Raj Thimmiah (raj-thimmiah) · 2020-07-09T07:55:52.356Z · LW(p) · GW(p)
What do you mean by n-blindness?
Replies from: gjm↑ comment by gjm · 2020-07-09T13:00:44.676Z · LW(p) · GW(p)
SLATE STAR CODEX is almost an anagram of SCOTT ALEXANDER. I think I remember Scott saying it was meant to actually be an anagram and he goofed. Pinker says it's an anagram.
(But I misremembered: it has an extra S as well as missing an N, so it's pole-blindness rather than just N-blindness. Also, perhaps I'm also misremembering about the origins of the name; maybe Scott didn't actually goof at all, but just decided to make do with an imperfect anagram.)
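(The near-anagram claim is easy to check mechanically. A quick sketch using Python's collections.Counter:)

```python
from collections import Counter

a = Counter("SLATESTARCODEX")
b = Counter("SCOTTALEXANDER")

print(a - b)  # Counter({'S': 1}) -- the extra S
print(b - a)  # Counter({'N': 1}) -- the missing N
```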
comment by Wei Dai (Wei_Dai) · 2020-02-22T17:31:47.299Z · LW(p) · GW(p)
An observation on natural language being illogical: I've noticed that at least some native Chinese speakers use 不一定 (literally "not certain") to mean "I disagree", including when I say "I think there's 50% chance that X." At first I was really annoyed with the person doing that ("I never said I was certain!") but then I noticed another person doing it so now I think it's just a standard figure of speech at this point, and I'm just generally annoyed at ... cultural evolution, I guess.
comment by gjm · 2020-02-04T20:59:44.232Z · LW(p) · GW(p)
Google's AI folks have made a new chatbot using a transformer-based architecture (but a network substantially bigger than full-size GPT2). Blog post; paper on arXiv. They claim it does much better than the state of the art (though I think everyone would agree that the state of the art is rather unimpressive) according to a human-evaluated metric they made up called "sensibleness and specificity average", which means pretty much what you think it does, and apparently correlates with perplexity in the right sort of way.
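(For reference, perplexity, the automatic metric the SSA score is said to correlate with, is just the exponential of the model's average per-token negative log-likelihood. A minimal sketch with invented probabilities, not anything from the paper:)

```python
import math

# Perplexity = exp of the average negative log-likelihood the model assigns
# to the actual next tokens. The probabilities below are made up.
token_probs = [0.2, 0.05, 0.6, 0.1]  # model's probability for each true token
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
print(math.exp(nll))  # lower perplexity = the model is less "surprised"
```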
comment by Jeffrey Ladish (jeff-ladish) · 2020-02-28T03:53:45.232Z · LW(p) · GW(p)
I'd be curious how people relate to this Open Thread compared to their personal ShortForm posts. I'm trying to get more into LessWrong posting and don't really understand the differences between these.
This has probably already been discussed, and if so please link me to that discussion if it's easy.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2020-02-28T05:47:39.675Z · LW(p) · GW(p)
I haven't seen any previous discussion about this. I think the main relevant difference for me is that it's much easier to find someone's ShortForm posts than their Open Thread posts (which are hidden among all of their other comments). I don't really see the point of ShortForm posts compared to regular posts, so I basically post in Open Thread if I don't want something to be part of my "permanent record" (e.g., posting a rough idea or a draft for feedback), and make a regular post otherwise.
comment by Zian · 2020-02-29T20:49:51.348Z · LW(p) · GW(p)
I noticed that even though I may not be as optimized in the matter of investments as others (hi, Wei Dai!), the basic rationality principles still help a lot. This morning, when I went to invest my usual chunk of my paycheck, I reflected on my actions and realized that the following principles were helping me (and had helped me in the past) buy stuff that was likely undervalued:
- pre-commitment (to a certain fund allocation)
- think about it for more than 5 minutes (putting in the up-front legwork and reading to determine my investing approach)
- use the try harder, Luke (repeatedly following the investment plan week after week even in bad times; overcoming changes in brokerages and entire financial firms going away)
It's nice to have dramatic examples of "rationalists should win" from other people. But for me, nothing beats personal experience with even small examples of "winning".
comment by Wei Dai (Wei_Dai) · 2020-02-24T11:18:22.853Z · LW(p) · GW(p)
Further alarming evidence of humanity's inability to coordinate (especially in an emergency), and also relevant to recent discussions around terminology: ‘A bit chaotic.’ Christening of new coronavirus and its disease name create confusion
Replies from: Lukas_Gloor↑ comment by Lukas_Gloor · 2020-02-24T21:33:00.752Z · LW(p) · GW(p)
This news article has a bizarre story about this:
Following reports of a Japanese national who tested positive for COVID-19 having visited Indonesia days before his diagnosis, an official from Indonesia’s Health Ministry today said that the patient was infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which he claims is different from COVID-19.
“The disease we’re facing in this current epidemic is COVID-19. Meanwhile, some experts have said that there is about 70 percent difference between COVID-19 and SARS-CoV-2 virus,” Achmad Yurianto, secretary of the Health Ministry’s Disease Prevention and Control Directorate, told Kompas.
According to reports, the ministry has been in touch with Japanese health authorities. Achmad also noted how the ministry is still working to find out the correlation between the two viruses.
comment by Ricardo Meneghin (ricardo-meneghin-filho) · 2020-03-02T06:50:44.700Z · LW(p) · GW(p)
Hi! I have been reading lesswrong for some years but have never posted, and I'm looking for advice about the best path towards moving permanently to the US to work as a software engineer.
I'm 24, single, currently living in Brazil and making 13k a year as a full stack developer in a tiny company. This probably sounds miserable to a US citizen but it's actually a decent salary here. However, I feel completely disconnected from the people around me; the rationalist community is almost nonexistent in Brazil, especially in a small town like the one I live in. In larger cities there's a lot of crime, poverty and pollution, which makes moving and finding a job in a larger company unattractive to me. Add that to the fact that I could make 10x what I make today at an entry level position in the US and it becomes easy to see why I want to move.
I don't have formal education. I was approved at University of São Paulo (Brazil's top university) when I was 15 but I couldn't legally enroll, so I had to wait until I was approved again at 17. I always excelled at tests, but hated attending classes, and thought classes were progressing too slowly for me. So I dropped out the following year (2014). Since then, I taught myself how to program in several languages and ended up in my current position.
The reason I'm asking for help is that I think it would save me a lot of time if someone gave me the right pointers as to where to look for a job, which companies to apply to, or if there's some shortcut I could take to make that a reality. Ideally I'd work in the Bay Area, but I'd be willing to move anywhere in the US really, at any living salary (yeah I'm desperate to leave my current situation). I'm currently applying to anything I can find on Glassdoor that has visa sponsorship.
Because I'm working in a private company I don't have a lot to show to easily prove I'm skilled (there's only the company apps/website but it's hard to put that in a resume), but I could spend the next few months doing open source contributions or something that I could use to show off. The only open source contribution I currently have is a fix to the Kotlin compiler.
Does anyone have any advice as to how to proceed or has done something similar? Is it even feasible, will anyone hire me without a degree? Should I just give up and try something else? I have also considered travelling to the US with a tourism visa and looking for a job while I'm there, could that work (I'm not sure if it's possible to get work visa when already in the US)?
Replies from: gilch↑ comment by gilch · 2020-03-02T06:36:16.114Z · LW(p) · GW(p)
I work as a software developer for an American company, but my perspective is mostly limited to my own experience. I have also been involved in some hiring decisions and interviews. You can sometimes get hired without a degree, if you can prove you have the skills. LinkedIn is helpful for finding work if you can connect with recruiters. It may be easier to find a job when you already have one, as that proves you can currently do work. Open-source work was helpful for me. The quality matters more than the quantity. It can show that you know how to use version control, and, depending on the project, that you can coordinate work with a team.
Replies from: ricardo-meneghin-filho↑ comment by Ricardo Meneghin (ricardo-meneghin-filho) · 2020-03-03T00:34:15.639Z · LW(p) · GW(p)
Thanks for giving your perspective! Good to know some hire without requiring a degree. Guess I'll start building a portfolio that can demonstrate I have the necessary skills, and keep applying.
comment by Wei Dai (Wei_Dai) · 2020-02-27T02:03:37.940Z · LW(p) · GW(p)
ETA: It's out of stock again just a couple of hours later, but you can sign up to be notified when it's back in stock.
Possible source of medicine for COVID-19. Someone in the FB group Viral Exploration suggested inhousepharmacy.vu as an online pharmacy to buy medicine without a prescription. I don't know them personally but they seem trustworthy enough. (ETA: Another seemingly trustworthy person has also vouched for it.) Hydroxychloroquine had been out of stock until a few minutes ago. I bought some myself in case the medical system gets overwhelmed. Relevant links:
- https://www.ncbi.nlm.nih.gov/pubmed/32075365
- https://www.who.int/blueprint/priority-diseases/key-action/Table_of_therapeutics_Appendix_17022020.pdf
- https://www.inhousepharmacy.vu/p-1106-plaquenil-tablets-200mg.aspx
↑ comment by jmh · 2020-02-27T02:29:17.968Z · LW(p) · GW(p)
It sounded like that was for treatment, which suggests you need to know you have COVID-19 before starting to take the med. Is the thinking that you might not be able to get treatment after the diagnosis?
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2020-02-27T02:35:27.011Z · LW(p) · GW(p)
I imagine that hospitals will be overwhelmed if 40-70% of the population eventually get COVID-19, so I'm buying mostly in case I or my family get symptoms and it's impossible to obtain medical attention at that point. I'm also considering (or will consider later) taking hydroxychloroquine for prophylaxis (during the local peak of the infection in my area), since it's used for that purpose for malaria.
comment by aleph_four · 2020-02-06T04:01:23.609Z · LW(p) · GW(p)
As of right now, I think that if business-as-usual continues in AI/ML, most unskilled labor in the transportation/warehousing of goods will be automatable by 2040.
Scott Anderson, Amazon’s director of Robotics puts it at over 10 years. https://www.theverge.com/2019/5/1/18526092/amazon-warehouse-robotics-automation-ai-10-years-away.
I don’t think it requires any fundamental new insights to happen by 2040, only engineering effort and currently available techniques.
I believe the economic incentives will align with this automation once it becomes achievable.
Transportation and warehousing currently accounts for ~10% of US employment.
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2020-02-18T23:46:52.768Z · LW(p) · GW(p)
Scott Anderson
Can I infer via nominative determinism that Scott Anderson is a friend of the rationalist community?
Replies from: jeff-ladish, aleph_four↑ comment by Jeffrey Ladish (jeff-ladish) · 2020-02-28T03:54:34.562Z · LW(p) · GW(p)
He is indeed.
↑ comment by aleph_four · 2020-02-28T01:02:24.009Z · LW(p) · GW(p)
Let’s add another Scott to our coffers.
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2020-02-28T21:53:51.320Z · LW(p) · GW(p)
The other other Scott A