## Posts

Notes on a recent wave of spam · 2018-06-14T15:39:51.090Z

Comment by rossry on Coordination Schemes Are Capital Investments · 2021-09-10T14:51:05.967Z · LW · GW

Came here to say this. It doesn't even depend on knowing the other player's value with certainty -- if you shift your submitted price by $1 in your favor, you might give up a trade worth <$0.5 (if the other player's price was between your true value and the new number), and you might improve your price by $0.5 (if a trade happens). Even if you don't know anything for sure, it seems much more likely that a trade happens than that the other player's price falls in exactly that dollar, so it's good for you to do the price shift.

Comment by rossry on In Most Markets, Lower Risk Means Higher Reward · 2021-09-10T02:44:15.081Z · LW · GW

Reasonable beliefs! I feel like we're mostly at a point where our perspectives are mainly separated by mood, and I don't know how to make forward progress from here without more data-crunching than I'm up for at this time. Thanks for discussing!

Comment by rossry on Open & Welcome Thread - August 2020 · 2021-08-31T01:42:21.657Z · LW · GW

The actual algorithm I followed was remembering that habryka posts them and going to his page to find the one he posted most recently. Not sure what the most principled way to find it is, though...

Comment by rossry on Open & Welcome Thread - August 2020 · 2021-08-29T16:37:05.541Z · LW · GW

Welcome; glad to have you here! Just so you know, this is the August 2020 thread, and the August 2021 thread is at https://www.lesswrong.com/posts/QqnQJYYW6zhT62F6Z/open-and-welcome-thread-august-2021 -- alternatively, you could wait three days for habryka to post the September 2021 thread, which might see more traffic in the early month than the old thread does at the end of the month.

Comment by rossry on In Most Markets, Lower Risk Means Higher Reward · 2021-08-29T16:20:46.505Z · LW · GW

I think of the Fama-French thesis as having two mostly-separate claims: (1) correlated factors create under-investment + excess return, and (2) the "right" factors to care about are these three -- oops, five -- fundamentally-derived ones.
Like you, I'm pretty skeptical of the way (2) is done by F-F, and I think the practice of hunting for factors could (should) be put on much more principled ground. It's worth keeping in mind, though, that (1) is not just "these features predict excess returns", but "these features have correlation, and that correlation drives excess returns". So it's not the same as saying there's a single excess-return factor, because the model has excess return being driven specifically by correlation and portfolio under-investment. Example: In hypothetical 2031, it feels valid to me to say "oh, the new 'crypto minus fiat' factor explains a bunch of correlated variance, and I predict it will be accompanied by excess returns". The fact that the factor is new doesn't mean its correlation should do anything different (to portfolio weightings, and thus returns) than other correlated factors do. I also don't think the binary of "the risk-return paradox exists" vs "the market is efficient in a weak-form sense" is a helpful way to divide hypothesis-space. If there's a given observed amount of persistent excess return, F-F ideas might explain some of it but leave the rest looking like inefficiency. The fact that some inefficiency remains doesn't mean that we should ignore the part that is explainable, though.

Comment by rossry on Open and Welcome Thread – August 2021 · 2021-08-24T17:45:55.232Z · LW · GW

What do you mean by "demonstrate vaccine effectiveness"? My instinct is that it's going to be ~impossible to prove a causal result in a principled way just from this data. (This is different from how hard it will be to extract Bayesian evidence from the data.) For intuition, consider the hypothesis that countries can (at some point after February 2020) unlock Blue Science, which decreases cases and deaths by a lot.
If the time to develop and deploy Blue Science is sufficiently correlated with the time to develop and deploy vaccines (and the common component can't be measured well), it won't be possible to distinguish causal effectiveness of vaccines from causal effectiveness of Blue Science. (A Bayesian would draw some update even from an uncontrolled correlation, so if you want the Bayesian answer, the real question is "how much of an update do you want to demonstrate (and assuming what prior)?")

Comment by rossry on In Most Markets, Lower Risk Means Higher Reward · 2021-08-22T15:41:49.420Z · LW · GW

> Originally the Fama-French model only had 3 fundamental risk factors. If things don't quite work out after the first 3, it seems awfully ad-hoc to just find 2 more and then add them to the back. There also seems to be a belief in academia that getting higher risk adjusted returns through analysis of company fundamentals is more possible than getting them through historical price data.

I'm a bit confused here -- the core Fama-French insight is that if a given segment of the market has a large common correlation, then it'll be under-invested in by investors constrained by a portfolio risk budget. In this framework, I think it's perfectly valid to identify new factors as the research progresses. (1) As a toy example, say that we discover all the stocks that start with 'A' are secretly perfectly correlated with each other. So, from a financial perspective, they're one huge potential investment with a massive opportunity to deploy many trillions of dollars of capital. However, every diversified portfolio manager in the world has developed the uncontrollable shakes -- they thought they had a well-diversified portfolio of 2600 companies, but actually they have 100 units of general-A-company and 2500 well-diversified holdings. Assuming that each stock had the same volatility, that general-A position quintuples their portfolio variance!
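As a sanity check on the "quintuples" arithmetic, here's a minimal sketch of the toy example (equal weights and unit per-stock variance assumed, as above):

```python
# Toy check of the example above: 2600 equal-weight stocks with equal
# variance. Baseline: all independent. Alternative: the 100 "A" stocks
# are perfectly correlated (so they act as one big asset); the rest
# stay independent.
n, n_a, sigma2 = 2600, 100, 1.0
w = 1.0 / n  # equal portfolio weight per stock

var_independent = n * w**2 * sigma2
# The 100 A-stocks collapse into one asset with weight 100/2600:
var_with_a_block = (n_a * w) ** 2 * sigma2 + (n - n_a) * w**2 * sigma2

print(var_with_a_block / var_independent)  # ~4.81 -- roughly quintupled
```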
The stock-only managers start thinking about rotating As into Bs through Zs, and both the leveraged managers and the stocks-plus-bonds managers think about how much they'll have to trim stock leverage, and how much of that should be As vs the rest... Ultimately, when it all shakes out, many people have cut their general-A investments significantly, and most have increased their other investments modestly. A's price has fallen a bit. Because A's opportunities to generate returns are still strong, A now has some persistent excess return. Some funds are all-in on A, but they're hugely outweighed by funds that take 3x leveraged bets on B-Z, and so the relative underinvestment and outperformance persist. (2) In this case, the correlation between the A stocks is analogous to an extreme Fama-French factor (in the sense the original authors mean the term). It "predicts higher risk-adjusted returns", but not in a practically exploitable way, because the returns go along with a "factor"-wide correlation that limits just how much of it you can take on, as an investor with a risk budget. If you could pick only one stock in this world, you would make it an A. Sure. But any sophisticated portfolio already has as much A as it wants, and so there's no way for them to trade A to eliminate the excess return. (3) And in this universe, would it be valid for Fama and French to write their initial model, notice this extra correlation (and that it explains higher risk-adjusted returns for A stocks), and tack it on to the other factors of the model? I think that's perfectly valid.

Comment by rossry on The Case for Extreme Vaccine Effectiveness · 2021-05-24T13:34:05.787Z · LW · GW

1. cold-chain requirements had more margin for error than any of us had thought 2.
the process that produced excess margin for error in this case likely produced excess margin for error in other relevant areas (tentative)

Should this make less plausible a line of reasoning that goes through "Except sometimes vaccines are left at high temperature for too long, the delicate proteins are damaged, and people receiving them are effectively not vaccinated..."? I'm not sure yet, and I don't know how central this particular line is to the overall argument.

Comment by rossry on Covid 2/11: As Expected · 2021-02-15T08:48:38.505Z · LW · GW

> You can also get a fair number of points for just predicting the community prediction — but you won't get that many because as a question's point value increases (which it does with the number of predictions), more and more of the score is relative rather than absolute.

I think this is actually backwards (the value goes up as the question's point value increases), because the relative score is the component responsible for the "positive regardless of resolution" payoffs. Explanation and worked example here: https://blog.rossry.net/metaculus/

Comment by rossry on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2020-11-26T06:27:45.719Z · LW · GW

Clever, but it hasn't been tried for a good reason. If, say, the next five years of markets are all untethered from reality (but consistent with each other), there's no way to get paid for bringing them into line with expected reality except by putting on the trades and holding them for five years. (The natural one-year trade will just resolve to the unfair market price of the next-year market, and there's nothing to do about it except wait for longer.) The chained markets end up being no more fair than if they all settled to the final expiry directly.

Comment by rossry on The US Already Has A Wealth Tax · 2020-08-20T12:21:14.878Z · LW · GW

> One can avoid a wealth tax by living in another country.

I don't understand why this is necessarily true.
What would stop the US from levying a wealth tax on US persons living abroad?

Comment by rossry on Delegate a Forecast · 2020-07-28T22:59:31.284Z · LW · GW

When will air travel from New York to Hong Kong no longer require arriving passengers to self-quarantine?

Comment by rossry on ESRogs's Shortform · 2020-06-10T14:50:15.966Z · LW · GW

It's worse than that. If there weren't any shares available at your broker for you to short-sell in the market, you should consider it likely that instead of paying 0.4%/day, you are just told you have to buy shares to cover your short from assignment. This is an absolutely normal thing that happens sometimes when it's hard to find additional people to lend stock (which is happening now). (Disclaimer: I am a financial professional, but I'm not a financial advisor, much less yours.)

Comment by rossry on What are objects that have made your life better? · 2020-05-25T14:50:48.555Z · LW · GW

Strictly speaking, they're not both laptop chargers, but laptop/phone/USB-C chargers. So two of them are useful on the road for charging laptop and phone simultaneously.

Comment by rossry on What are objects that have made your life better? · 2020-05-24T18:02:16.637Z · LW · GW

Not a physical object, but the Cloud9 IDE (now absorbed into Amazon's AWS suite) for programming work. If your work fits into a terminal plus text editor (which it probably does), then making the actual hardware be a cloud server instead of a laptop that can run out of charge is a big win, and being able to access your "real" machine from different interface machines is sometimes useful. For the interface laptop itself, I've been very, very happy with a Google Pixelbook (which I got after many, many satisfied years with the original Chromebook Pixel), but that depends on whether you have tasks outside the browser, terminal, or text editor.

Comment by rossry on What are objects that have made your life better?
· 2020-05-24T17:41:04.997Z · LW · GW

Separate travel toiletries from at-home toiletries. (The biggest win is not having to unpack exactly when you get home tired.) Similarly, separate travel phone/laptop chargers from at-home chargers, for the same reason. I haven't yet gone all the way to a separate set of travel clothes, but would like to, one of these years. The 80/20 version is making sure to lay out 1-2 full sets of clothes before going on a long trip. (For reference, I spent maybe five weeks of 2019 traveling, though naturally 2020 has been much less than that.)

Comment by rossry on What are objects that have made your life better? · 2020-05-24T17:24:46.493Z · LW · GW

Ahem: a fifth laptop/USB-C charger. (One each for my couch, desk, and bedroom; two stay packed in my travel luggage.) h/t to Zvi for making this suggestion in Dual Wielding, under the general heading of More Dakka.

Comment by rossry on English Bread Regulations · 2020-05-19T12:28:34.991Z · LW · GW

The first two regulations have reference prices for wheat that differ by 50%. How far apart in time were they issued?

Comment by rossry on Tips/tricks/notes on optimizing investments · 2020-05-12T13:43:08.111Z · LW · GW

I'm being unnecessarily oblique in the above comment, for which I'm sorry. What I mean is, in a taxable account, you have the option to donate winners and harvest capital losses on losers. In a post-donation investment vehicle like a DAF, you don't have that optionality. (Compared to a taxable account, your treatment on winners also comes out to no capital gains tax, but your treatment on losers is worse, with no harvesting losses.)
(not tax or investment advice)

Comment by rossry on Tips/tricks/notes on optimizing investments · 2020-05-11T10:40:45.022Z · LW · GW

It's worth mentioning that this is generally a bad idea in the US tax regime (despite being trivially easy), because the options for handling capital gains and losses differently mean you can sometimes do better with pre-donation investments than with post-donation investments. (I'm a finance professional, but no one's tax or investment advisor, much less your tax or investment advisor.)

Comment by rossry on Could city design impact spread of infections? · 2020-04-22T15:23:15.271Z · LW · GW

Having lived in New York (but only having visited LA), the difference in city design that is immediately salient to me is the presence/absence of the Subway. According to MIT health economist Jeffrey Harris, the subways seeded the massive coronavirus epidemic in New York City:

> New York City’s multitentacled subway system was a major disseminator – if not the principal transmission vehicle – of coronavirus infection during the initial takeoff of the massive epidemic that became evident throughout the city during March 2020. The near shutoff of subway ridership in Manhattan – down by over 90 percent at the end of March – correlates strongly with the substantial increase in the doubling time of new cases in this borough. Maps of subway station turnstile entries, superimposed upon zip code-level maps of reported coronavirus incidence, are strongly consistent with subway-facilitated disease propagation. Local train lines appear to have a higher propensity to transmit infection than express lines. Reciprocal seeding of infection appears to be the best explanation for the emergence of a single hotspot in Midtown West in Manhattan. Bus hubs may have served as secondary transmission routes out to the periphery of the city.

Comment by rossry on Why don't singularitarians bet on the creation of AGI by buying stocks?
· 2020-03-11T22:44:48.806Z · LW · GW

Higher variance is worth avoiding (under standard assumptions), but I for one was surprised by how little additional variance one takes on by allocating, say, 10% of one's portfolio to a single arbitrary bet. In this comment I ballparked it at maybe an extra 0.5% variance. That said, allocating one's entire portfolio this way basically requires a rejection of the standard risk-budget assumptions. (Disclaimer: I'm a financial professional, but I'm not anyone's investment advisor, much less yours.)

Comment by rossry on Some quick notes on hand hygiene · 2020-02-09T04:58:16.930Z · LW · GW

I'm not sil ver, but as a casual coronavirus watcher (in part because I live significantly closer to affected areas than most), my instinctive doubts are mostly 1 and 3. What numbers are you using for those to base the claim "ankifying this is probably among the most valuable things you could ever use it for"?

Comment by rossry on [deleted post] 2019-12-29T02:38:34.987Z

In what sense are you using the word "trilemma"? I'm either not familiar with the usage or missing a big message of the post. (The common definition of "trilemma" I'm most familiar with presents three desiderata, of which it's possible to achieve at most two.)

Comment by rossry on Meditation Retreat: Immoral Mazes Sequence Introduction · 2019-12-28T02:13:17.705Z · LW · GW

I, too, am excited.

Comment by rossry on Bayesian examination · 2019-12-11T13:51:22.544Z · LW · GW

> However, the post assumes that 1) there is (or should be) one correct answer, 2) which is of the form: (1, 0, 0, 0) or a permutation thereof, and 3) the material is independent of the system (does not include probability, for example).

These are assumed for the sake of explanation, but none are necessary; in fact, the scoring rule and analysis go through verbatim if you have questions with multiple answers in the form of arbitrary vectors of numbers, even if they have randomness.
The correct choice is still to guess, for each potential answer, your expectation of that answer's realized result.

Comment by rossry on Open & Welcome Thread - November 2019 · 2019-11-23T06:03:41.424Z · LW · GW

> just because "I don't want to see more of this" doesn't mean it's up to me to influence whether anyone else can see it.

I feel like this proves more than you want. For example, is it up to you to influence whether someone sees more of something, just because you want to see more of it? Similarly, it's also helpful to get a reason for up votes, but enforcing that a reason be given can reduce the amount of information-aggregation that will occur, on some margins. What justifies an asymmetry between how we aggregate positive information and how we aggregate negative information? Or would you also argue that up votes should come with reasons?

Comment by rossry on Open & Welcome Thread - November 2019 · 2019-11-13T14:10:28.300Z · LW · GW

I mean a weighted sum where weights add to unity.

Comment by rossry on An optimal stopping paradox · 2019-11-12T13:53:54.318Z · LW · GW

You need an exponentially increasing reward for your argument to go through. In particular, this doesn't prove enough:

> Since at each moment in time, you face the exact same problem (linearly increasing reward, α-exponentially decaying survival rate)

The problem isn't exactly the same, because the ratio of (linear) growth rate to current value is decreasing over time. At some point, the value equals (is the right expression, I think?), and your marginal value of waiting is 0 (and decreasing), and you sell. If the ratio of growth rate to current value is constant over time, then you're in the same position at each step, but then it's either the St. Petersburg paradox or worthless.

Comment by rossry on Open & Welcome Thread - November 2019 · 2019-11-12T13:28:14.919Z · LW · GW

Sorry, I'm writing pretty informally here.
I'm pretty sure that there are senses in which these arguments can be made formal, though I'm not really interested in going through that here, mostly because I don't think formality wins us anything interesting here. Some notes, though: (still in a fairly informal mode)

My intuition that the only way to combine the two estimates without introducing a bias or assumed prior is by a mixture comes from treating each estimate (treated as a random variable) as a true estimate plus some idiosyncratic noise. Then any function of them yields an expression in terms of the true estimate, each respective estimator's noise, and maybe other constants. But "unbiased" implies that setting the noise terms to 0 should set the expression equal to the true estimate (in expectation). Without making assumptions about the actual distribution of true values, this needs to just be 1 times the true estimate (plus maybe some other noise you don't want, which I think you can get rid of). And the only way you get there from the noisy estimates is a mixture.

By "assembly", I'm proposing to treat each estimate as a larger number of estimates with the same mean and larger variance, such that they form equivalent evidence. Intuitively, this works out if the count goes as the square of the variance ratio. Then I claim that the natural thing to do with many estimates each of the same variance is to take a straight average.

> But they're distributions, not observations.

Sure, formally each observer's posterior is a distribution. But if you treat "observer 1's posterior is Normally distributed, with mean and standard deviation " as an observation you make as a Bayesian (who trusts observer 1's estimation and calibration), it gets you there.

Comment by rossry on Open & Welcome Thread - November 2019 · 2019-11-11T14:01:58.581Z · LW · GW

Ah, okay. In that case, here are a few attempts to ground the idea philosophically:

1. It's the "prior-free" estimate with the least error.
See that unbiased "prior-free" estimates must be mixtures of the (unbiased) estimates, and that biased estimates are dominated by being scaled to fit. So the best you can do is to pick the mixture that minimizes variance, which this is.
2. It actually is the point that maximizes the product of likelihoods (equivalently, the joint likelihood, since the estimate errors are assumed to be independent). You can see this by remembering that the Normal pdf is the inverse exponential quadratic, so you maximize the product of likelihoods by maximizing the sum of log-likelihoods, which happens where the log-likelihood slopes are each the negative of the other, which happens when distances are inversely proportional to the x^2 coefficients (or the weights are inversely proportional to the variances).
3. There's a pseudo-frequentist(?) version of this, where you treat each estimate as an assembly of (higher-variance) estimates at the same point, notice that the count is inversely proportional to the variance, and take the total population mean as your estimator. (You might like the mean for its L2-minimizing properties.)
4. A Bayesian interpretation is that, given the improper prior uniformly distributed over numbers and treating the two as independent pieces of evidence, the given formula gives the mode of the posterior (and, since the posterior is Normal, gives its mean and median as well).

Are any of those compelling?

Comment by rossry on Open & Welcome Thread - November 2019 · 2019-11-09T13:58:25.383Z · LW · GW

Are you asking for a justification for averaging independent estimates to achieve an estimate with lower errors? "Blended estimate" isn't a specific term of art, but the general idea here is so common that I'm not sure _what_ the most common term for it is.
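For concreteness, the inverse-variance mixture described in points 1 and 2 above can be sketched in a few lines of Python (the function name and example numbers are mine, purely for illustration):

```python
def blend(x1, v1, x2, v2):
    """Combine two unbiased estimates with known error variances,
    weighting each inversely to its variance."""
    w1, w2 = 1.0 / v1, 1.0 / v2
    return (w1 * x1 + w2 * x2) / (w1 + w2)

# The blend sits closer to the lower-variance estimate:
print(blend(10.0, 1.0, 16.0, 2.0))  # 12.0

# And no other fixed mixture weight gives lower combined variance:
def mix_var(w, v1, v2):
    return w**2 * v1 + (1 - w) ** 2 * v2

best = min(range(101), key=lambda i: mix_var(i / 100, 1.0, 2.0)) / 100
print(best)  # 0.67 -- i.e. w1 = v2/(v1+v2) = 2/3
```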
And the theoretical justification -- under assumptions of independent and Normal errors -- is in the post, where the author demonstrates that there's a lower error from the weighted average (and that their choice of weights minimizes the error). Am I missing something here?

Comment by rossry on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-03T13:45:07.688Z · LW · GW

Arimaa is the(?) classic example of a chess-like board game that was designed to be hard for AI (albeit from an age before "AI" mostly meant ML). From David Wu's paper on the bot that finally beat top humans in 2015: Why is Arimaa computer-resistant? We can identify two major obstacles. The first is that in Arimaa, the per-turn branching factor is extremely large due to the combinatorial possibilities produced by having four steps per turn. Even after identifying equivalent permutations of steps as the same move, on average there are about 17000 legal moves per turn (Haskin, 2006). This is a serious impediment to search. Obviously, a high branching factor alone doesn’t imply computer-resistance, particularly if the standard of comparison is with human play: high branching factors affect humans as well. However, Arimaa has a property common to many computer-resistant games: that “per amount of branching” the board changes slowly. Indeed, pieces move only one orthogonal step at a time. This makes it possible to effectively plan ahead, cache evaluations of local positions, and visualize patterns of good moves, all things that usually favor human players. The second obstacle is that Arimaa is frequently quite positional or strategic, as opposed to tactical. Capturing or trading pieces is somewhat more difficult in Arimaa than in, for example, Chess. Moreover, since the elephant cannot be pushed or pulled and can defend any trap, deadlocks between defending elephants are common, giving rise to positions sparse in easy tactical landmarks.
Progress in such positions requires good long-term judgement and strategic understanding to guide the gradual maneuvering of pieces, posing a challenge for positional evaluation.

Comment by rossry on When is pair-programming superior to regular programming? · 2019-10-10T13:09:51.406Z · LW · GW

It's easy to play armchair statistician and contribute little, but I want to point out that the empirics cited here are effectively just anecdotes. The paper studies 13 pairs and 13 individuals in three assignments in one class at UUtah. Its estimate of relative time costs is only significant to ~ because development time has a variance of (if I backsolved correctly) 65%, which...seems about right. Still, it seems like borderline abuse of frequentist statistics to argue that a two-tailed p<0.05 should be required to reject the hypothesis that pairs finish projects in half the wall-clock time of individuals (which is the null the analysis assumes). That said, the author correctly identifies that quality matters significantly more than speed. The quality metric, however, is "assignment tests passed" in throwaway academic projects, eliding the questions of what quality failures would or wouldn't be caught by the review / CI workflows that an industrial project would be going through anyway. So, finger to the wind, this study feels like it suggests that a pair spends 15% more person-hours (once they get used to each other) before turning their schoolwork in, and does 15% more of the work of the assignment than a student working alone. Consistent with the higher reported work-enjoyment numbers! Definitely a stronger showing than I would have guessed! But definitely not well-abstracted by "no significant result for time; significant improvement for quality". What am I missing here?
Comment by rossry on Eukryt Wrts Blg · 2019-09-29T01:59:11.495Z · LW · GW (continued, to address a different point) B and C seem like arguments against "simple" (i.e., even-odds) bets as well as weird (e.g., "70% probability") bets, except for C's "like bets where I'm surer...about what's going on", which is addressed by A (sibling comment). Your point about differences in wealth causing different people to have different thresholds for meaningfulness is valid, though I've found that it matters much less than you'd expect in practice. It turns out that people making upwards of$100k/yr still do not feel good about opening up their wallet you give you $3. In fact, it feels so bad that if you do it more than a few times in a row, you really feel the need to examine your own calibration, which is exactly the success condition. I've found that the small ritual of exchanging pieces of paper just carries significantly more weight than would be implied by their relation to my total savings. (For this, it's surprisingly important to exchange actual pieces of paper; electronic payments make the whole thing less real, ruining the whole point.) Finally, it's hard to argue with someone's utility function, but I think that some rationalists get this one badly wrong by failing to actually multiply real numbers. For example, if you make a$10 bet (as defined in my sibling comment) every day for a year at the true probabilities, the standard decision of your profit/loss on the year is <$200, or$200/365 per day, which seems like a very small annual cost to practice being better calibrated and evaluate just how well-calibrated you are.

Comment by rossry on Eukryt Wrts Blg · 2019-09-29T01:38:57.423Z · LW · GW

Hi! I've done a fair amount of betting on beliefs for fun and calibration over the years; I think most of these issues are solvable.

A is a solved problem. The formulation that I (and my local social group) prefer goes like "The buyer pays $X*P% to the seller. The seller pays$X to the buyer if the event comes true."

The precise payoffs aren't the important part, so long as they correspond to quoted probabilities in the correct way (and agreed sizes in a reasonable way). So this convention makes the probability you're discussing an explicit part of the bet terms, so people can discuss probabilities instead of confusing themselves with payoffs (and gives a clear upper bound for possible losses). Then you can work out exact payoffs later, after the bet resolves.
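As a sketch of that convention in code (the function name is mine; the numbers match the worked example below):

```python
def bet_payoffs(x, p):
    """Under the convention above: the buyer pays x*p up front, and the
    seller pays x back if the event happens. Returns the seller's net
    P/L in the two outcomes (event false, event true)."""
    premium = x * p
    return premium, premium - x

win_if_false, lose_if_true = bet_payoffs(20.0, 0.70)
print(win_if_false, lose_if_true)  # 14.0 -6.0
```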

(As a worked example, if you thought a probability was less than 70% and wanted to bet about $20 with me, if you "sold $20 at 70%" in the above convention, you'd either win $20×70% = $14 or lose $20 − ($20×70%) = $6. But it's even easier to see that selling a liability of $20×p(happens) for $20×70% is good for you if you think p(happens) < 70%.)

You're right that odds are a terrible convention for betting on probabilities unless you're trying to hide the actual numbers from your counterparties (which is the norm in retail sports betting).

Comment by rossry on Long-term Donation Bunching? · 2019-09-29T01:01:03.867Z · LW · GW

I also think that if the "sixth friend" donates $10k in line with each other friend's values and beliefs (as a result of social expectation, not contract), then there's no particular benefit to being the one who has to handle the money, and you don't need to trust in multi-year commitments.

Comment by rossry on Long-term Donation Bunching? · 2019-09-29T00:53:06.260Z · LW · GW

Your suggestion is correct, though it seemed too messy (and nonessential) to explain for the sake of an off-the-cuff proposal. I added a footnote to clarify this above, though.

Comment by rossry on Long-term Donation Bunching? · 2019-09-28T00:41:56.340Z · LW · GW

Proposal: Five friends in this situation write $10k checks[1] to a sixth. They all have a long chat about their altruist values and beliefs. The sixth donates$60k to a variety of EA causes.

Question: Just how likely / unpleasant would the ensuing IRS audit be?

(There's also a micro-donor-lottery version of this, except the individual contributions are personal gifts and the full $60k is a charitable donation.)

[1] Actually, you want this to be something like $7k, since the tax deduction from donating is worth [your marginal income tax rate] on the amount, roughly 30%. Formally, $10k less the tax benefits from donating $10k.
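For concreteness, the footnote's arithmetic as a one-liner (the ~30% marginal rate is the footnote's rough figure, not anyone's tax advice):

```python
def check_size(donation=10_000, marginal_rate=0.30):
    # Each friend's check: the donation amount, less the tax benefit
    # the donor gets from deducting that donation.
    return donation - donation * marginal_rate

print(check_size())  # 7000.0
```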

Comment by rossry on Free Money at PredictIt? · 2019-09-27T07:42:56.215Z · LW · GW

That's exactly correct. It's a standard taxation-begets-misallocation scenario.

For reference, PI's current rules put this effect at roughly 0-3% per contract, potentially adding across multiple contracts in a bundle. Prices closer to 50% are worse (though prices further away have their own biases, as Zvi explains).

Comment by rossry on Free Money at PredictIt? · 2019-09-27T07:32:02.184Z · LW · GW

Yeah, Zvi is (unsurprisingly) right; the change in margining rules (after I wrote that post) makes it much better to sell the low-value contracts, and the withdrawal fees amortize if you're in for the longer term.

Under the new rules, and on the back of my envelope, Zvi's 12% "arbitrage" is something like a few percent good: maybe it covers withdrawal fees on its own, and likely will do so after a few rounds. The opportunity cost of capital is a whole 'nother issue...

I also strongly endorse the punchline that trading (even on the margins of trading costs) is some of the best rationalist training you can find.

Comment by rossry on Free Money at PredictIt? · 2019-09-26T23:18:07.859Z · LW · GW

Huh, I hadn't noticed that they didn't tie up the potential fees on your winnings. Hypotheses:

• bug introduced when they moved from gross margining to net margining years ago and didn't reconsider fee withholding
• doesn't actually matter; they don't give up ~anything by letting some people carrying small balances make free trades
• it's really hard to abuse this into free trades repeatedly
• the withholding here is too complicated and feel-bad to explain
• other
Comment by rossry on Taxing investment income is complicated · 2019-09-23T12:00:36.673Z · LW · GW

Ah, that makes sense.

Separately, I'm not entirely convinced by that second bullet point -- it seems like a non-omniscient state planner in a non-stationary environment would benefit from being able to determine the desired level of redistribution after the wealthy have accrued their income as wealth, rather than needing to get it right as they earned it.

(I'm assuming away the confiscatory impulse here, naturally; in practice, the political economy of confiscation causes serious issues for deferred decisions about distribution like this.)

Comment by rossry on Taxing investment income is complicated · 2019-09-22T23:40:55.720Z · LW · GW

Can you explain more why the tax rate on the risk-free-rate portion of investment income should be 0? A positive rate here implements a proxy wealth tax (without raising the reporting problems of a direct wealth tax), and a nonzero wealth tax might be part of an optimal tax policy (e.g., for the lots-of-small-taxes argument, if no other reason).

(I'm not sure that this is right, and am mostly asking this question from a stance of exploratory uncertainty.)

Comment by rossry on Taxing investment income is complicated · 2019-09-22T23:35:36.191Z · LW · GW

To be clear, this is the low-order approximation around 0; as explained in Paul's link (sibling to this), the effect away from zero involves the shape of the supply and demand curves through the relevant region of prices (and the stated claim holds when they're linear).

Comment by rossry on Spiracular's Shortform Feed · 2019-09-12T22:53:59.965Z · LW · GW

My guess is that one gets a reasonable start by framing more tasks as self-delegation (adding the middle step of "decide to do" / "describe task on to-do list" / "do task"), then periodically reviewing the tasks-completed list and pondering the benefits and feasibility of outsourcing a chunk of the "do task"s.

Creating a record of task-intentions has a few benefits in making self-reflection possible; reflecting on delegation opportunities is a special case.

Comment by rossry on Negative "eeny meeny miny moe" · 2019-08-21T14:55:24.967Z · LW · GW

Oh, you're right. The net incentive to catch cheaters is actually... 1/(k(k-1)^2), then? The relative incentive story is worse, though still better in total than the positive version, and better still if you assume a constant-size disincentive to be caught cheating.

Comment by rossry on Negative "eeny meeny miny moe" · 2019-08-20T10:49:46.809Z · LW · GW

Another (related?) advantage is that the incentives to manipulate and catch manipulation are much better balanced with the negative ("you're out") version. Consider:

• Perfectly cheating in the positive version improves your chances of winning by (n-1)/n, and stopping you from doing so improves each other person's chances by 1/n.
• Perfectly cheating in a round of the negative version improves your chances of winning by 1/(k(k-1)), where k is the number of people still in at the start of the round. Stopping you from doing so improves each other person's chances by the same amount.
• The total (summed) incentive to manipulate in the negative version is (n-1)/n, the same as in the positive case.
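The negative-version claim can be checked by simulation. This is a sketch under my own modeling assumptions (one uniformly random elimination per round, last player standing wins; the `win_prob` helper is hypothetical, not from the thread):

```python
import random

def win_prob(n, cheat_first_round, trials=200_000, seed=0):
    """Monte Carlo win probability for player 0 in the negative
    ("you're out") game: each round eliminates one remaining player
    uniformly at random; the last player standing wins. If
    cheat_first_round, player 0 cannot be eliminated in round 1."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        players = list(range(n))
        first = True
        while len(players) > 1:
            candidates = players
            if first and cheat_first_round:
                candidates = players[1:]  # player 0 is safe this round
            players.remove(rng.choice(candidates))
            first = False
        wins += players[0] == 0
    return wins / trials

# Cheating in the first round (k = n = 4) should gain 1/(k(k-1)) = 1/12.
gain = win_prob(4, True) - win_prob(4, False)
```

With n = 4 the fair win probability is 1/4, and surviving the first round for free raises it to 1/3, matching the 1/(k(k-1)) formula.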
Comment by rossry on Open & Welcome Thread - August 2019 · 2019-08-13T13:17:23.744Z · LW · GW

(Disclaimer: I'm a financial professional, but I'm not anyone's investment advisor, much less yours.)

You mention diversity as an advantage because it reduces your risk, but this framing is missing the crucial point that you can transmute a portfolio that's 0.5x as risky as your baseline to one that returns twice as much as your baseline. (Wei_Dai mentions this, but obliquely.) The trick is to use leverage, which is not as hard to get (or as expensive, or as complicated) as you think.

To be explicit about it: if you increase the per-dollar riskiness of your portfolio without increasing the per-dollar expected value, then after you leverage it down until it's at the optimal level of risk for your utility function (which you were going to do, right?), you will have lower expected returns.

The relevant question is "how much lower?" (which is precisely to say "how much does it increase your per-dollar risk?"), which I answer in my response to Wei_Dai (nephew to this). The answer turns out to be "very little", but in order to get there, you have to be asking the right question first.
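The leverage point can be illustrated with made-up numbers (a sketch assuming both portfolios have the same expected excess return per dollar; all figures here are illustrative, not from the comment):

```python
# Illustrative, made-up numbers: same per-dollar expected excess return,
# but the diversified portfolio carries half the volatility.
risky_vol, risky_excess = 0.40, 0.08        # concentrated baseline
divers_vol, divers_excess = 0.20, 0.08      # 0.5x as risky, same edge

leverage = risky_vol / divers_vol           # 2.0x
levered_vol = leverage * divers_vol         # 0.40 -- back to baseline risk
levered_excess = leverage * divers_excess   # 0.16 -- twice the baseline excess return
```

Levering the half-risk portfolio back up to the baseline's volatility doubles its excess return, which is the "transmutation" described above.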

Comment by rossry on Open & Welcome Thread - August 2019 · 2019-08-13T13:13:58.042Z · LW · GW

> I don't have a good intuition about how costly this actually is in practice, if you only play with 10% of your portfolio.

tl;dr extremely little.

Here's some numbers I made up:

• Let the market's single common factor explain 90% of the variance of each stock.
• Let the remaining 10%s be idiosyncratic and independent.
• Let stocks have equal volatility (and let all risk be described by volatility).

Now compare a portfolio that's $100 of each of a hundred stocks with one that's $90 of each of a hundred plus $1k of another stock. (I'll model each stock as 0.75 times the market factor plus 0.25 times a same-variance idiosyncratic factor.) Compared to a $10k single-stock portfolio...

• the equal-weighted portfolio has like 94.92% as much volatility
• the shot-caller's portfolio has like 94.96% as much volatility

...for an increase in volatility of 4.5 basis points. So, pretty negligible.

Even if the market's single factor explains only half of the variance of each stock, the increased risk of the shot-caller's portfolio is just 40 basis points (~71.1% vs ~71.3% of the single-stock volatility). In the extreme case where stocks are uncorrelated, the increased risk is +34.5%, though I think that that's unrealistically generous to the diversification strategy.

Since an increase in volatility-per-dollar of x basis points means that you give up x basis points of your expected returns, I'm going to say that this effect is negligible in the "10% of portfolio" setting.
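The back-of-envelope numbers above can be reproduced under the stated one-factor model. This is my reconstruction of the setup (the `portfolio_vol` helper and the beta/idio coefficients are assumptions chosen to match the "90% of variance explained" premise):

```python
import numpy as np

def portfolio_vol(dollars, beta, idio):
    """Dollar volatility of a portfolio of stocks s_i = beta*M + idio*eps_i,
    where M and the eps_i are independent and unit-variance."""
    dollars = np.asarray(dollars, dtype=float)
    market = beta * dollars.sum()              # common-factor exposure
    idio_var = ((idio * dollars) ** 2).sum()   # independent variances add
    return np.sqrt(market**2 + idio_var)

# beta=0.75, idio=0.25 makes the market factor 0.75^2/(0.75^2+0.25^2) = 90%
# of each stock's variance, matching the assumption above.
equal = portfolio_vol([100.0] * 100, 0.75, 0.25)           # $100 x 100 stocks
shot = portfolio_vol([90.0] * 100 + [1000.0], 0.75, 0.25)  # $90 x 100 + $1k
extra_bp = (shot / equal - 1) * 1e4                        # ~4.5 basis points

# 50%-explained case: equal beta and idio weights.
equal50 = portfolio_vol([100.0] * 100, 0.5, 0.5)
shot50 = portfolio_vol([90.0] * 100 + [1000.0], 0.5, 0.5)
extra_bp_50 = (shot50 / equal50 - 1) * 1e4                 # ~40 basis points
```

Both the 4.5 bp figure and the 40 bp figure fall out of this model directly.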