Comment by rossry on Open Thread April 2019 · 2019-04-09T23:40:27.223Z · score: 1 (1 votes) · LW · GW

I was trying to construct a proof along similar lines, so thank you for beating me to it!

Note that 2 is actually a case of 1, since you can think of the "walls" of the simplex as being bets that the universe offers you (at zero odds).

Comment by rossry on Open Thread April 2019 · 2019-04-09T11:34:57.791Z · score: 1 (1 votes) · LW · GW

(This comment isn't an answer to your question.)

If I'm understanding properly, you're trying to use the set of bets offered as evidence to infer the common beliefs of the market that's offering them. Yet from a Bayesian perspective, it seems like you're assigning P( X offers bet B | bet B has positive expectation ) = 0. While that's literally the statement of the Efficient Markets Hypothesis, presumably you -- as a Bayesian -- don't actually believe the probability to be literally 0.

Getting this right and generalizing a bit (presumably you think that P( X offers B | B has expectation +BIG_E ) < P( X offers B | B has expectation +epsilon )) should make the market evidence more informative (and cases of arbitrage less divide-by-zero, break-your-math confusing).
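
To make that concrete, here's a toy version of the update I have in mind, with entirely made-up hypothesis labels and numbers:

# Toy Bayesian update on "the market offers bet B" -- illustrative numbers only.
# Hypotheses about B's true expectation (from your point of view of the world):
priors = {"-eps": 0.50, "+eps": 0.40, "+BIG_E": 0.10}

# P(market offers B | B's true expectation) -- small but nonzero for positive-EV bets,
# rather than the literal 0 that a hard EMH assumption would give.
p_offer = {"-eps": 0.60, "+eps": 0.10, "+BIG_E": 0.01}

# Posterior over B's expectation, given that we observe the offer:
joint = {h: priors[h] * p_offer[h] for h in priors}
total = sum(joint.values())
posterior = {h: joint[h] / total for h in joint}
print(posterior)
# The observed offer is evidence against "+BIG_E", but the math never divides by zero,
# so seeing an apparently-great bet updates you rather than breaking the model.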

Comment by rossry on Open Thread April 2019 · 2019-04-09T09:46:50.312Z · score: 1 (1 votes) · LW · GW

I'm confused what the word "fairly" means in this sentence.

Do you mean that they make a zero-expected-value bet, e.g., 1:1 odds for a fair coin? (Then "fairly" is too strong; non-degenerate odds (i.e., not zero on either side) are the actual required condition.)

Do you mean that they bet without fraud, such that one will get a positive payout in one outcome and the other will in the other? (Then I think "fairly" is redundant, because I would say they haven't actually bet on the outcome of the coin if the payouts don't correspond to coin outcomes.)

Comment by rossry on How can we respond to info-cascades? [Info-cascade series] · 2019-03-17T01:53:12.102Z · score: 1 (1 votes) · LW · GW

A related idea in non-punishment of "wrong" reports that have insufficient support (again in the common-prior/private-info setting) comes from this paper [pdf] (presented at the same conference), which suggests collecting reports from all agents and assigning rewards/punishments by assuming that agents' reports represent their private signal, computing their posterior, and scoring this assumed posterior. Under the model assumptions, this makes it an optimal strategy for agents to truly reveal their private signal to the mechanism, while allowing the mechanism to collect non-cascaded base data to make a decision.
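
To give a flavor of that mechanism's shape, here's my own toy rendering of the "assume the report is the signal, compute the implied posterior, score it" pipeline under a binary-state, binary-signal, common-prior model; the numbers and names are invented for illustration, this isn't the paper's actual construction, and the paper's incentive-compatibility argument depends on details this toy omits:

# Toy version: binary state, conditionally independent binary signals, common prior.
# The mechanism takes each agent's *report*, pretends it is their private signal,
# computes the implied posterior, and scores that posterior against the realized state.
prior_high = 0.5            # common prior P(state = HIGH)
p_signal_given_high = 0.7   # P(signal = "high" | state = HIGH)
p_signal_given_low = 0.3    # P(signal = "high" | state = LOW)

def implied_posterior(report_high: bool) -> float:
    """Posterior P(state = HIGH) if the report were the agent's true private signal."""
    like_high = p_signal_given_high if report_high else 1 - p_signal_given_high
    like_low = p_signal_given_low if report_high else 1 - p_signal_given_low
    num = prior_high * like_high
    return num / (num + (1 - prior_high) * like_low)

def reward(report_high: bool, state_is_high: bool) -> float:
    """Proper (Brier-style) score of the implied posterior against the realized state."""
    q = implied_posterior(report_high)
    outcome = 1.0 if state_is_high else 0.0
    return 1.0 - (q - outcome) ** 2

# This only shows the assume-report-is-signal / score-the-implied-posterior pipeline;
# the truthfulness argument in the paper needs more machinery than this toy contains.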

In general, I feel like the academic literature on market design / mechanism design has a lot to say about questions of this flavor.

Comment by rossry on How can we respond to info-cascades? [Info-cascade series] · 2019-03-17T01:34:53.337Z · score: 4 (2 votes) · LW · GW

Abstract: Considering information cascades (both upwards and downwards) as a problem of incentives, better incentive design holds some promise. This academic paper suggests a model in which making truth-finding rewards contingent on reaching a certain number of votes prevents down-cascades, and where an informed (self-interested) choice of payout odds and threshold can also prevent up-cascades in the limit of a large population of predictors.

1) cf. avturchin from the question about distribution across fields, pointing out that up-cascades and down-cascades are both relevant concerns, in many contexts.

2) Consider information cascades as related to a problem of incentives -- in the comments of the Johnichols post referenced in the formalization question, multiple commentators point out that the model fails if agents seek to express their marginal opinion, rather than their true (posterior) belief. But incentives to be right do need to be built into a system that you're trying to pump energy into, so the question remains of whether a different incentive structure could do better, while still encouraging truth-finding.

3) Up-Cascaded Wisdom of the Crowd (Cong and Xiao, working paper) considers the information-aggregation problem in terms of incentives, specifically the incentives at play in an all-or-nothing crowdfunding model like venture capital or Kickstarter (assuming that a 'no' vote is as irrevocable as a 'yes' vote): 'yes' voters win if there is a critical mass of other 'yes' voters and the proposition resolves to 'yes'; they lose if there is a critical mass and the proposition resolves to 'no'; they have zero loss/gain if 'yes' doesn't reach a critical mass; and 'no' voters are merely abstaining from voting 'yes'.
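
To make that payoff structure concrete, here's a minimal sketch of the scheme as I've just described it (my own code and variable names, not Cong and Xiao's):

def yes_voter_payoff(num_yes_votes: int, threshold: int,
                     resolved_yes: bool, win: float, loss: float) -> float:
    """Payoff to a single 'yes' voter under the all-or-nothing scheme.

    - Below the threshold, the vote never 'funds' and the voter is flat.
    - At or above the threshold, the voter wins `win` if the proposition
      resolves 'yes' and loses `loss` if it resolves 'no'.
    A 'no' voter simply abstains and always gets 0.
    """
    if num_yes_votes < threshold:
        return 0.0
    return win if resolved_yes else -loss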

Their main result is that if the payment of incentives is conditioned on the proposition gaining a fixed number of 'yes' votes, a population of symmetric, common-prior/private-info agents will avoid down-cascades, as a single 'yes' vote that breaks a down-cascade will not be penalized for being wrong unless some later agent intentionally votes 'yes' to put the vote over the 'yes' threshold. (An agent i with negative private info still should vote no, because if a later agent i' puts the vote over the 'yes' threshold based in part on i's false vote, then i expects to lose on the truth-evaluation, since they've backed 'yes' but believe 'no'.)

A further result from the same paper is that if the actor posing the proposition can set the payout odds and the threshold in response to the common prior and known info-distribution, then a proposition-poser attempting to minimize down-cascades (perhaps because they will cast the first 'yes' vote, and so can only hope to win if the vote resolves to 'yes') will be incentivized to set odds and a threshold that coincidentally minimize the chance of up-cascades. In the large-population limit, the number of cascades under such an incentive design goes to 0.

4) I suspect (but will not here prove) that augmenting Cong and Xiao's all-or-nothing "crowdfunding for 'yes'" design with a parallel "crowdfunding for 'no'" design -- i.e., 'no' voters win (resp. lose) iff there is a critical mass of 'no' voters and the proposition resolves 'no' (resp. 'yes') -- can further strengthen the defenses against up-cascades (by making it possible to cast a more informed 'no' vote conditioned on a later, more-informed agent deciding to put 'no' over the threshold).

Comment by rossry on How can we respond to info-cascades? [Info-cascade series] · 2019-03-17T01:30:21.056Z · score: 0 (0 votes) · LW · GW

[this answer was duplicated when I mistakenly copied my comment into an answer and then moved the comment to an answer.]

Comment by rossry on Distribution of info-cascades across fields? [Info-cascade series] · 2019-03-17T00:14:15.158Z · score: 1 (1 votes) · LW · GW

Why stop at 2? Belief-space is large, and many issues admit more than one (+/-) bit of information to cascade.

Comment by rossry on Open Thread February 2019 · 2019-03-02T02:03:26.197Z · score: 1 (1 votes) · LW · GW

But how to make small changes in the trajectory of a star? One idea is to impact the star with large comets. It is not difficult, as remote Oort cloud objects (or wandering small planets, as they are not part of the already established orbital movement of the star) need only small perturbations to start falling down on the central star, which could be done via nuclear explosions or smaller impacts by smaller asteroids.

I don't think this works; conservation of momentum means that the impact is almost fully counteracted by the gravitational pull that accelerated the comet to such speed (so that, in the end, the delta-v imparted to the star is precisely what you imparted with your nuclear explosions or smaller asteroids).
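
A back-of-the-envelope version of the momentum argument, with made-up illustrative numbers:

# Only the *external* impulse (the nuclear explosions / small impacts) changes the
# momentum of the star+comet system. Illustrative numbers, not a real ephemeris.
M_star = 2.0e30      # kg, roughly a solar mass
m_comet = 1.0e15     # kg, a largish comet nucleus
dv_nudge = 10.0      # m/s imparted to the comet by the explosions

# External impulse delivered to the system:
impulse = m_comet * dv_nudge  # kg*m/s

# All the speed the comet later gains falling toward the star is matched by an equal
# and opposite momentum change of the star (their mutual gravity is internal to the
# system), so after the comet is absorbed the star's velocity change is just:
dv_star = impulse / (M_star + m_comet)
print(f"{dv_star:.2e} m/s")   # ~5e-15 m/s -- essentially what the nudge alone buys you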

Maybe a Shkadov thruster is what you want? (It's slow going, though; this article suggests 60ly/200My, accounting for acceleration.)

Comment by rossry on Blackmail · 2019-02-23T10:35:22.371Z · score: 1 (1 votes) · LW · GW

Winner's Curse doesn't seem like the right effect to me -- it seems more like an orthogonality/Goodhart effect, where optimizing for outrageousness decreases the fitness w/r/t social welfare (on the margin). It's always in the blackmailer's interest to make the outrageousness greater, so they're not (selfishly) sad when they overshoot.

Comment by rossry on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-19T02:11:42.887Z · score: 8 (4 votes) · LW · GW

I'm still thinking about what quantitative estimates I'd stand behind. I think I'd believe that a prize-based competitive prediction system with all eval deferred until and conditioned on AGI is <4% to add more than $1mln of value to [just pay some smart participants for their best-efforts opinions].

(If I thought harder about corner-cases, I think I could come up with a stronger statement.)

Comment by rossry on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T23:39:13.713Z · score: 3 (3 votes) · LW · GW

If the reason your questions won't resolve is that you are dead or that none of your money at all will be useful, I think things are a bit different.

I'm confused; to restate the above, I think that a p% chance that your predictions don't matter (for any reason: game rained out, you're dead, your money isn't useful) is (to first order) equivalent to a p% tax on investment in making better predictions. What do you think is different?

one major ask is that the forecasters believe the AGI will happen in between, which seems to me like an even bigger issue

Sure, that's an issue, but I think that requiring participants to all assume short AGI timelines is tractable in a way that the delayed/improbable resolution issues are not.

I can imagine that a market without resolution issues, whose participants all believe in short AGI timelines, could support 12 semi-professional traders subsidized by interested stakeholders. I don't believe that a market with the resolution issues above can elicit serious investment in getting its answers right from half that many. (I recognize that I'm eliding my definition of "serious investment" here.)

Comment by rossry on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T14:39:01.178Z · score: 3 (3 votes) · LW · GW

I understand Zvi's points as being relatively universal to systems where you want to use rewards to incentivize participants to work hard to get good answers.

No matter how the payouts work, a p% chance that your questions don't resolve is (to first order) equivalent to a p% tax on investment in making better predictions, and a years-long tie-up kills iterative growth and selection/amplification cycles as well as limiting the return-on-investment-in-general-prediction-skill to a one-shot game. I don't think these issues go away if you reward predictions differently, since they're general features of the relation between the up-front investment in making better predictions and the potential reward that comes later for doing so well.
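
A toy version of the "p% tax" point, with made-up numbers:

# Suppose spending `cost` on research improves your expected payout from a question
# by `edge` *conditional on the question resolving*.
def net_expected_value(edge: float, cost: float, p_no_resolution: float) -> float:
    return (1.0 - p_no_resolution) * edge - cost

# With a 20% chance the question never resolves, an edge has to be 25% larger (1 / 0.8)
# to justify the same up-front investment -- the same first-order effect as taxing the
# returns to prediction effort at 20%.
print(net_expected_value(edge=100.0, cost=80.0, p_no_resolution=0.0))   # +20
print(net_expected_value(edge=100.0, cost=80.0, p_no_resolution=0.2))   # 0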

(A counterpoint I'll entertain is Zvi's caveat to "quick resolution" -- which also caveats "probable resolution" -- that sufficient liquidity can substitute for resolution. But bootstrapping that liquidity itself seems like a Hard Problem, so I'd need to further be convinced that it's tractable here.)

Comment by rossry on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T13:15:17.761Z · score: 2 (4 votes) · LW · GW

Assuming that your AI timelines are well-approximated by "likely more than three years", Zvi's post on prediction market desiderata suggests that post-AGI evaluation is pretty dead-on-arrival for creating liquid prediction markets. Even laying aside the conditional-on-AGI dimension, the failures of "quick resolution" (years) and "probable resolution" (~20%, by your numbers) are crippling for the prospect of professionals or experts investing serious resources in making profitable predictions.

Comment by rossry on Probability space has 2 metrics · 2019-02-11T06:23:16.236Z · score: 7 (3 votes) · LW · GW

The speculative proposition that humans might only be using one metric rings true and is compellingly presented.

However, I feel a bit clickbaited by the title, which (to me) implies that probability-space has only two metrics (which isn't true, as the later proposition depends on). Maybe consider changing it to "Probability space has multiple metrics", to avoid confusion?

Comment by rossry on Open Thread February 2019 · 2019-02-08T13:23:33.873Z · score: 9 (5 votes) · LW · GW

It varies a lot between papers (in my experience) and between fields (I imagine), but several hours for a deep reading doesn't seem out of line. To take an anecdote, I was recently re-reading a paper (of comparable length, though in economics) that I'm planning to present as a guest of a mathematics reading group, and I probably spent 4 hours on the re-read, before starting on my slides and presumably re-re-reading a bunch more.

Grazing over several days (and/or multiple separate readings) is also my usual practice for a close read, fwiw.

Comment by rossry on X-risks are a tragedies of the commons · 2019-02-07T09:29:36.884Z · score: 2 (2 votes) · LW · GW

I acknowledge that there's a distinction, but I fail to see how it's important. When you (shut up and) multiply out the probabilities, the expected personal reward for putting in disproportionate resources is negative, and the personal-welfare-optimizing level of effort is lower than the social-welfare-optimizing level.

Why is it important that that negative expected reward is made up of a positive term plus a large negative term (X-risk defense), instead of a negative term plus a different negative term (overgrazing)?
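
A toy version of the multiplication, with made-up numbers:

# One actor considers an extra unit of x-risk-reduction effort. Illustrative numbers only.
cost_to_actor = 1.0          # personal cost of the marginal unit of effort
risk_reduction = 1.0e-4      # reduction in P(catastrophe) it buys
value_per_actor = 1_000.0    # value each of N actors places on avoiding the catastrophe
num_actors = 1_000

personal_return = risk_reduction * value_per_actor - cost_to_actor             # -0.9
social_return = risk_reduction * value_per_actor * num_actors - cost_to_actor  # +99
print(personal_return, social_return)
# The sign flip is the whole point: the actor's expected reward for the marginal effort
# is negative even though the social value is strongly positive, regardless of whether
# the downside shows up as "defense costs" or as "overgrazing".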

Comment by rossry on Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem) · 2019-01-30T14:24:58.273Z · score: 1 (1 votes) · LW · GW

Huh. I don't think anyone ever called this series hard sci-fi where I could hear them; the most common recommendation was related to its Chineseness, which, as Zvi claims, definitely delivers.

And I'm not sure I'd take Niven as the archetype of truly hard sci-fi; have you ever tried Egan? Diaspora says sensible things about philosophy of mind for emulated, branching AIs with a plot arc where the power laws of a 5+1-dimensional universe become relevant, and Clockwork Rocket invents alternate laws of special relativity incidentally to a story involving truly creative alt-biology...

Comment by rossry on How could shares in a megaproject return value to shareholders? · 2019-01-21T10:54:23.484Z · score: 3 (2 votes) · LW · GW

Investors would prefer to invest in moonshot megaprojects over, like, infrastructure megaprojects.

I don't think this is true; as in corporate equity markets, the preference should be responsive to price, and equity shares should trade over naive book value, but at a price where the marginal investor is indifferent to buying more.

In fact, insofar as the Modigliani-Miller assumptions hold, any capitalization of the project in terms of equity and debt should trade at the same total price (and so raise the same funding), just with a different mix of the total coming from the debt and the equity.
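
A toy illustration of that irrelevance result (made-up numbers; real projects violate the assumptions to some degree):

# Under the Modigliani-Miller assumptions (no taxes, no bankruptcy costs, no information
# asymmetries), total funding raised doesn't depend on the debt/equity split.
project_value = 1_000.0   # present value of the project's expected cash flows

for debt in (0.0, 300.0, 700.0):
    equity_value = project_value - debt   # what the equity claim is worth
    total_raised = debt + equity_value
    print(f"debt={debt:7.1f}  equity={equity_value:7.1f}  total raised={total_raised:7.1f}")
# Every capitalization raises the same 1000.0; only the mix of debt vs equity changes.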

should they be incentivized to abort?

Clearly that would be ideal, but this scheme doesn't make this issue worse than the status quo (and provides the advantage that at least there's some market-based metric to tell you that the project is going off the rails).

Comment by rossry on How could shares in a megaproject return value to shareholders? · 2019-01-20T04:57:22.658Z · score: 3 (2 votes) · LW · GW

Proves too much; this is similarly true of shareholders in any company with a debt+equity capitalization.

It's true that shareholders' incentives are not perfectly aligned with creditors'. In private corporations, this is handled with governance practices; for public megaprojects it seems like even less of an issue.

Comment by rossry on Does anti-malaria charity destroy the local anti-malaria industry? · 2019-01-06T12:31:49.338Z · score: 1 (1 votes) · LW · GW

Does the "typical argument for protectionism" you cite claim that protectionist policy increases the total amount of local production (creating a steady-state trade surplus)? Or does it merely shift local production from comparatively-advantaged-to-produce-locally goods to comparatively-advantaged-to-import ones (hopefully ones with greater externalities to local production)?

There's a relevance to the anti-aid argument: The first-order effect of steady-state aid is to create the effect of a steady-state trade deficit on production without an effect on the current account. If the total externalities to local production are larger than the consumer value of goods, then this effects a welfare transfer from the aid recipient to the aid sender.

But the general-equilibrium effect is a shift of the recipient's local production away from the received goods towards the marginally-efficient goods. If the marginally-efficient goods have sufficiently greater externalities to local production than the received goods, then this might be a net win. (Where "sufficiently" here depends on elasticities of production and consumption.)
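
To make the accounting explicit, here's a toy version with entirely made-up magnitudes (the real sign depends on the elasticities mentioned above):

# Three terms from the argument above, with illustrative numbers only.
consumer_value_of_aid = 100.0           # value to recipients of the aided good itself
externality_lost_on_aided_good = 120.0  # production externalities lost where aid displaces local output
externality_gained_elsewhere = 30.0     # externalities on whatever local production shifts toward

first_order = consumer_value_of_aid - externality_lost_on_aided_good   # -20: looks like a net loss
with_reallocation = first_order + externality_gained_elsewhere          # +10: the sign can flip
print(first_order, with_reallocation)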

Comment by rossry on State Machines and the Strange Case of Mutating API · 2018-12-24T12:28:14.792Z · score: 3 (2 votes) · LW · GW

I'm not a network programmer or language designer by trade, so I expect to be missing something here, but I'll give it a go to learn where I'm wrong.

If you're using distinct interfaces for distinct states (as it seems you are in your latter examples) and your compiler is going to enforce them, then shadowing variables (as a language feature) lets you unassign them as you go along. In a language I'm more familiar with, which uses polymorphic types rather than interfaces:

let socket = socks5_socket () in
let socket = connect_unauthenticated socket ~proxy in
let socket = connect_tcp socket address in
enjoy socket ();

with signatures:

val socks5_socket : unit -> [> `Closed of Socket.t]
val connect_unauthenticated : [< `Closed of Socket.t] -> proxy:Address.t -> [> `Authenticated of Socket.t]
val connect_tcp : [< `Authenticated of Socket.t] -> Address.t -> [> `Tcp_established of Socket.t]
val enjoy : [< `Tcp_established of Socket.t] -> unit -> unit

so that if you forget the connect_unauthenticated line (big danger of shadowing as you mutate), your compiler will correct you with a friendly but stern:

This expression has type [> `Closed of Socket.t] but an expression was expected of type [< `Authenticated of Socket.t].

Of course, shadowing without type-safety sounds like a nightmare, and I'm not claiming that any language you want to use actually supports it as syntax. But I occasionally appreciate it (given, of course, that I've got a type inspector readily keybound, to query what state my socket is in at this particular point).

Comment by rossry on Criticism Scheduling and Privacy · 2018-10-02T14:34:14.574Z · score: 1 (1 votes) · LW · GW

I've been enjoying this sequence on privacy, in no small part because I disagree with some of its fundamental premises so strongly. (Hopefully, someday soon I'll make the time to pull together a sequence laying out the grounds of my disagreement.)

But without backing up that teaser (sorry), I'll say that the brief mention of privacy as a guard against falling into inferential-distance chasms seems very straightforwardly true in hindsight (even to one skeptical of the guard-against-coercive-control angle), though I hadn't been able to put it nearly so cleanly in my own thoughts. If you've got even a short post's worth of content on related ideas in deploying privacy in deeply cooperative/aligned contexts, I'd be especially interested to hear more thoughts in that direction.

Comment by rossry on Defining by opposites · 2018-09-19T11:03:05.960Z · score: 2 (2 votes) · LW · GW

I like this framing, especially as it gracefully handles the way that communication isn't like Guess Who -- you have priors that don't look like "uniform over the following N possibilities", your payoffs for actually finding the answer might depend non-constantly on what the answer is, some resource limitation might make the maxi-p(win) strategy different from the optimal discriminator -- but once you start thinking about how you'd win a game with those rules, strategies for smarter search suggest themselves.

Comment by rossry on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-15T09:12:51.421Z · score: 1 (1 votes) · LW · GW
If I'm not the first, was this posted before?

No, I'm referencing an in-person conversation. (Incidentally, the fact that ialdabaoth fielded that suggestion and still wrote this post with 'dragon' makes me worry that they've got at least an instinct that it's the right word in some way I'm missing.)

And I think I see the worry that you're pointing at here. I think it's a valid one, though not one that I expect can be resolved entirely through theory; I'd like to see some people work with the ontology for a bit to see which words work in useful ways.

Comment by rossry on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-12T12:19:40.732Z · score: 2 (2 votes) · LW · GW

You're not the first to suggest s/Dragon/Hydra/g here, and I'd be tempted to agree, if not for the fact that dragon-slaying is significantly more poetic than hydra-slaying. OTOH, "Hydra" serves as a mnemonic that attacking symptoms is a Known Bad Strategy.

(Do note that the existence of a dragon can cause a series of not-obviously-related symptoms -- this stuff is on fire, and this stuff is smashed up, and these people got eaten...)

Comment by rossry on Preliminary thoughts on moral weight · 2018-08-15T12:32:28.123Z · score: 2 (2 votes) · LW · GW

Right, so if we're using a uniform distribution over 2^30000, there should be exactly zero ants sharing observer-moments, so in order to argue that ants' overlap in observer-moments should discount their total weight, we're going to need to squeeze that space a lot harder than that.
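
For concreteness, the birthday-problem arithmetic behind "exactly zero ants sharing" (my guess at the ant population is only order-of-magnitude):

import math

# With n ants drawing uniformly from 2**k possible moments, the chance of *any* collision
# is at most n*(n-1)/2 / 2**k, roughly n**2 / (2 * 2**k).
n_ants = 10 ** 16          # a generous guess at the world ant population
k_bits = 30_000            # the 2**30000-sized space discussed above

log2_collision_bound = math.log2(n_ants) * 2 - 1 - k_bits
print(log2_collision_bound)   # ~ -29895, i.e. the bound is about 2**-29895: it just doesn't happen
# Even squeezing the space down to a few thousand bits leaves collisions absurdly unlikely;
# only once the effective space is down to ~100 bits (about n**2 possibilities for n ants)
# do collisions become likely.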

I've also spent some time recently staring at ~randomly generated grids of color for an unrelated project, and I think there's basically no way that the human visual system is getting so much as 5000 bits of entropy (i.e., a 50x50 grid of four-color choices) out of the observer-experience of the visual field. So I think using 2^#receptors is just the wrong starting point. Similarly, assuming that neurons operate independently is going to give you a number in entirely the wrong realm. (Wikipedia says an ant has ~250,000 neurons.)

I think that if you want to get to the belief that two ants might ever actually share an experience, you're going to need to work in a significantly smaller domain, like your suggestion of output actions, though applying the domain of "typical reactions of a human to new objects" is going to grossly undercount the number of possible human observer-experiences, so now I'm back to being stuck wondering how to do that at all.

Comment by rossry on Preliminary thoughts on moral weight · 2018-08-15T05:55:41.474Z · score: 2 (2 votes) · LW · GW

How many distinct possible ant!observer-moments are there? What is the entropy of their distribution in the status quo?

How many distinct possible human!observer-moments are there? What is the entropy of their distribution in the status quo?

(Confidence intervals okay; I just have no intuition about these quantities, and you seem to have considered them, so I'm curious what estimates you're working with.)

Comment by rossry on Open Thread August 2018 · 2018-08-11T15:53:02.375Z · score: 2 (2 votes) · LW · GW

My experience has been that in practice it almost always suffices to express second-order knowledge qualitatively rather than quantitatively. Granted, it requires some common context and social trust to be adequately calibrated on "50%, to make up a number" < "50%, just to say a number" < "let's say 50%" < "something in the ballpark of 50%" < "plausibly 50%" < "probably 50%" < "roughly 50%" < "actually just 50%" < "precisely 50%" (to pick syntax that I'm used to using with people I work with), but you probably don't actually have good (third-order!) calibration of your second-order knowledge, so why bother with the extra precision?

The only other thing I've seen work when you absolutely need to pin down levels of second-order knowledge is just talking about where your uncertainty is coming from, what the gears of your epistemic model are, or sometimes how much time of concerted effort it might take you to resolve X percentage points of uncertainty in expectation.

Comment by rossry on Probabilistic decision-making as an anxiety-reduction technique · 2018-07-17T10:44:22.157Z · score: 14 (5 votes) · LW · GW

I'm confused; why don't you just pick black with probability 100%? (Assuming that your utility is 1 if you make the on-further-reflection-correct choice, 0 else.)

This isn't the same thing as knowing what you're going to think at the end of your fretting -- you still don't know -- but the correct response to uncertainty is not half speed, and just picking the 40%-to-be-right option all of the time is the expectancy-maximizing response to uncertainty.
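
A toy version of the maximizing-vs-probability-matching arithmetic, with made-up numbers:

# Three options you'd endorse on reflection with these probabilities.
p = {"black": 0.40, "grey": 0.35, "white": 0.25}

always_pick_best = max(p.values())                      # 0.40
probability_matching = sum(q * q for q in p.values())   # ~0.345
print(always_pick_best, probability_matching)
# Expected chance of matching your on-reflection choice: 40% vs ~34.5%.
# Randomizing only helps if you value something besides being right (e.g., exploration).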

Comment by rossry on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-13T14:36:54.508Z · score: 2 (2 votes) · LW · GW

Pretty confident they meant it that way:

I am not thinking about physics-time, I am thinking about logical-time. If something is in your past, but has no effect on what algorithm you are running on what observations you get, then it might as well be considered as space-like separated from you. If you compute how everything in the universe evaluates, the space-like separated things are the things that can be evaluated either before or after you, since their output does not change yours or vice-versa. If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you.

Comment by rossry on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-13T14:32:59.138Z · score: 14 (5 votes) · LW · GW

It seems pretty easy for such mechanisms to be adapted for maximizing reproduction in some ancestral environment but maladapted for maximizing your preferences in the modern environment.

I think I agree that your point is generally under-considered, especially by the sort of people who compulsively tear down Chesterton's fences.

Comment by rossry on Book review: Pearl's Book of Why · 2018-07-08T03:36:15.391Z · score: 3 (2 votes) · LW · GW

RCTs (and p-values) don't seem to be popular in physics or geology.

What evidence makes you say p-values aren't popular in physics? My passing (and mainly secondhand) understanding of cosmology is that it uses ~no RCTs but is very wedded to p-values (generally around the 5 sigma level).
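
For reference, the standard normal-tail arithmetic relating sigmas to p-values (not specific to any particular experiment):

import math

# One-sided tail probability of a standard normal at z sigma: p = erfc(z / sqrt(2)) / 2.
for z in (2, 3, 5):
    p = math.erfc(z / math.sqrt(2)) / 2
    print(f"{z} sigma ~ p = {p:.2e}")
# 5 sigma corresponds to p ~ 2.9e-7 (one-sided) -- the same kind of object as p < 0.05,
# just a far stricter threshold.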

Comment by rossry on Debt is an Anti-investment · 2018-07-07T23:16:33.120Z · score: 1 (1 votes) · LW · GW

A single house in a single market levered up 5x isn't the good kind of diversification

Hm, I can see your point. Are you saying that because you've worked through a back-of-the-envelope calculation, or are you just guessing? (Me, I was just guessing.)

and will make your expected portfolio performance worse.

Do you mean "will make the expected dollar-returns of your portfolio worse", or are you making a claim about the expected utility-adjusted returns?

Comment by rossry on Debt is an Anti-investment · 2018-07-07T04:31:18.325Z · score: 1 (1 votes) · LW · GW

There is no compelling reason to think such outperformance will continue into the future.

The market disagrees with you.

What market instruments express opinions on the future returns of US equity investments? Everything I can think of serves to express an opinion on present prices of equity investments or on things like the future risk-free rate of return, not the future return on, e.g., the S&P500 index.

Comment by rossry on Debt is an Anti-investment · 2018-07-07T04:23:47.979Z · score: 1 (1 votes) · LW · GW

What text from the post suggests accounting for the emotional disutility of indebtedness?

Here's a quote that to me seems to argue against it:

The question becomes: what risk-free rate of return is equivalent to your best available investment? If you pay a higher interest on your debt than that, you should pay it off. If the rate you’re paying is lower, you should invest.

Following this algorithm suggests investing in a risk-free 3.01% return before paying off debts costing a net 3% in interest.

There's a bit of section 5 that accepts emotional disutility of volatility, but that's not the same thing.

As for ChristianKl, they comment that to them it doesn't make sense to value indebtedness at zero, agreeing with you that the disutility should be accounted for.

Comment by rossry on Debt is an Anti-investment · 2018-07-06T23:52:03.928Z · score: 1 (1 votes) · LW · GW

I think you are in agreement with ChristianKl (i.e., you both think the OP overlooks the emotional disutility of knowing that you have debt), but your tone seems to me to indicate disagreement.

Comment by rossry on Debt is an Anti-investment · 2018-07-06T23:48:22.241Z · score: 1 (1 votes) · LW · GW

Don’t hold any debt at above 4%.

It depends. If debt is tax deductible then it can make sense to hold it. Or based on the size of the opportunity.

Note that this is in the section of the OP's advice to themself, and presumably accounts for them already checking for tax-deductible opportunities and opportunity costs.

Comment by rossry on Debt is an Anti-investment · 2018-07-06T23:44:44.292Z · score: 1 (1 votes) · LW · GW

An anecdote for an anecdote: I made up and plugged in some numbers that roughly corresponded to buying my last (rented) apartment, and the calculator said it'd be cheaper than my rent by ~10%.

Tl;dr if buying vs renting was an obvious win the market would arbitrage that.

As I understand it, lots of the win comes from limit-one-per-person tax advantages, so "the market" can't straightforwardly arbitrage it away.

In any case, along the lines of the OP's section 4 above, the typical person should probably, ceteris paribus, prefer some investment in some real estate over additional investment in whatever market portfolio they already have, because not-perfectly-correlated assets add expectations but less-than-add volatilities. I think the calculator you linked is getting this wrong.
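
The two-asset version of that "add expectations, less-than-add volatilities" claim, with made-up numbers:

import math

# Illustrative two-asset portfolio.
w1, w2 = 0.5, 0.5         # portfolio weights
mu1, mu2 = 0.05, 0.04     # expected returns
s1, s2 = 0.15, 0.12       # volatilities
rho = 0.3                 # correlation between the two assets

expected_return = w1 * mu1 + w2 * mu2                        # adds linearly: 0.045
volatility = math.sqrt((w1 * s1) ** 2 + (w2 * s2) ** 2
                       + 2 * w1 * w2 * rho * s1 * s2)        # ~0.109 < 0.135
print(expected_return, volatility, w1 * s1 + w2 * s2)
# The portfolio volatility is below the weighted average of the two volatilities whenever
# rho < 1, which is the diversification argument for holding *some* real estate.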

Comment by rossry on Problem Solving with Mazes and Crayon · 2018-06-24T14:57:55.049Z · score: 4 (3 votes) · LW · GW

Also not sure if it's a standard concept in path search, but searching from both ends seems fruitful, even in its most naive implementations:

  • if the graph is sufficiently random, I think it should take the expected fraction of the maze you have to walk down from ~N/2 to ~sqrt(N)
  • you can apply it to either BFS or DFS (a minimal sketch of the BFS version is below), though I suspect that in practice DFS works better (and even better still if you DFS by switching your first decision, rather than the OP's algorithm of switching your last)
  • naturally, there's plenty of room for layering heuristics on top, like seeking towards the opposing cursor, or even the nearest point of the opposing explored-so-far surface...
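
Here's the minimal bidirectional-BFS sketch promised above (my own toy code, not the OP's):

def bidirectional_bfs(graph, start, goal):
    """Shortest-path length via BFS from both ends; `graph` maps node -> iterable of neighbors."""
    if start == goal:
        return 0
    dist_s, dist_g = {start: 0}, {goal: 0}
    frontier_s, frontier_g = [start], [goal]
    while frontier_s and frontier_g:
        # Expand the smaller frontier by one full level.
        if len(frontier_s) <= len(frontier_g):
            dist, frontier = dist_s, frontier_s
        else:
            dist, frontier = dist_g, frontier_g
        next_frontier = []
        for node in frontier:
            for nbr in graph[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    next_frontier.append(nbr)
        frontier[:] = next_frontier
        # After each completed level, check whether the two searches have met.
        meet = set(dist_s) & set(dist_g)
        if meet:
            return min(dist_s[v] + dist_g[v] for v in meet)
    return None  # no path

# Tiny example: a 3x3 grid graph; corner to opposite corner is 4 steps.
grid = {(x, y): [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < 3 and 0 <= y + dy < 3]
        for x in range(3) for y in range(3)}
print(bidirectional_bfs(grid, (0, 0), (2, 2)))  # 4
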
Comment by rossry on Problem Solving with Mazes and Crayon · 2018-06-24T14:46:00.115Z · score: 3 (2 votes) · LW · GW

Answering my own musing: it's implementable as a 2-state cellular automaton, rather than requiring the ~5 states that distributed DFS does. So there's that.

Comment by rossry on Problem Solving with Mazes and Crayon · 2018-06-24T14:36:56.115Z · score: 2 (1 votes) · LW · GW

(Side question: there’s at least one more-human-intuitive way to apply chunking to mazes. Can you figure it out?)

What comes to my mind is a kind of algorithm for auto-chunking: repeatedly take a dead end and fill it in until the point where it branches off from a hallway or intersection. Eventually you're left with a single hallway from entrance to exit.
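
A minimal sketch of that dead-end-filling idea on a toy grid maze (my own code, not from the OP):

def fill_dead_ends(maze, start, goal):
    """Dead-end filling on a grid maze: 0 = open, 1 = wall.

    Repeatedly walls off any open cell (other than start/goal) with at most one
    open neighbor; what's left is the corridor(s) from entrance to exit.
    Returns a new grid; the input is not modified.
    """
    grid = [row[:] for row in maze]
    rows, cols = len(grid), len(grid[0])

    def open_neighbors(r, c):
        return sum(1 for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= r + dr < rows and 0 <= c + dc < cols
                   and grid[r + dr][c + dc] == 0)

    changed = True
    while changed:
        changed = False
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 0 and (r, c) not in (start, goal) and open_neighbors(r, c) <= 1:
                    grid[r][c] = 1     # fill the dead end back toward its branch point
                    changed = True
    return grid

# Tiny example: a 5x5 maze with one dead-end spur; after filling, only the true path remains.
maze = [
    [0, 0, 0, 1, 1],
    [1, 1, 0, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 1, 1, 0],
]
for row in fill_dead_ends(maze, start=(0, 0), goal=(4, 4)):
    print(row)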

I'm not certain how it's an improvement over DFS, though.

Notes on a recent wave of spam

2018-06-14T15:39:51.090Z · score: 11 (7 votes)
Comment by rossry on Expressive Vocabulary · 2018-05-27T02:08:49.502Z · score: 3 (1 votes) · LW · GW

Oh, I certainly didn't mean to imply that there weren't cases of suitable replacements for slurs (or that it wouldn't be valuable to find such); rather, I only meant to claim that there existed a case where it isn't obvious how to find a suitable replacement (contra jimrandomh above).

Comment by rossry on Containment Inversion · 2018-05-27T02:02:56.906Z · score: 6 (2 votes) · LW · GW

Does anyone else have this experience of inversion? Is there already a name for it?

It sounds to me like something akin to the "figure-ground inversion" that Scott Alexander moots in a review of House of God:

House of God does a weird form of figure-ground inversion.

An example of what I mean, taken from politics: some people think of government as another name for the things we do together, like providing food to the hungry, or ensuring that old people have the health care they need. These people know that some politicians are corrupt, and sometimes the money actually goes to whoever’s best at demanding pork, and the regulations sometimes favor whichever giant corporation has the best lobbyists. But this is viewed as a weird disease of the body politic, something that can be abstracted away as noise in the system.

And then there are other people who think of government as a giant pork-distribution system, where obviously representatives and bureaucrats, incentivized in every way to support the forces that provide them with campaign funding and personal prestige, will take those incentives. Obviously they’ll use the government to crush their enemies. Sometimes this system also involves the hungry getting food and the elderly getting medical care, as an epiphenomenon of its pork-distribution role, but this isn’t particularly important and can be abstracted away as noise.

I think I can go back and forth between these two models when I need to, but it’s a weird switch of perspective, where the parts you view as noise in one model resolve into the essence of the other and vice versa.

Comment by rossry on Expressive Vocabulary · 2018-05-26T15:21:31.470Z · score: 3 (1 votes) · LW · GW

1) My model of people who use slurs as a significant part of their expressive vocabulary is that at least some of them use the slur to mean "member of group [X] I don't like", as explicitly opposed to "member of group [X] I feel indifferent-to-positive about". A neutral group signifier plus optional insult alone fails to encode this distinction, perhaps making it a less-than-suitable replacement.

2) I read:

I think people are within their rights to reject a proposed replacement for not meaning the right thing, sounding ugly, being one syllable longer, being hard to spell, not rhyming in a poem they're trying to write, and vague gut feeling that you're just trying to control them.

...to be pretty clear that a more-verbose construction that requires the speaker to state their personal insult separately can fail to be a suitable replacement.

None of this means that the general principle can't or shouldn't have a carve-out for slurs; my only intended argument is that, as expressed above, it seems plausible to me that finding suitable replacements requires significant effort (and basically is never done in practice by people attempting to remove slurs from others' expressive vocabulary).

Comment by rossry on Affordance Widths · 2018-05-14T07:42:37.611Z · score: 14 (3 votes) · LW · GW

I think I disagree; "tolerance", to me, seems to point more towards the special case where {B} is some external events and {X} and {Y} are internal reactions. To talk about social affordances, as the OP does, you'd have to talk about the tolerances of others for {B} done by different people [A-E] -- and you've made less obvious the fact that the tolerance of [Q] for {B} done by [A] is different than the tolerance of [Q] for {B} done by [E] -- the entire content of the post.

Comment by rossry on Fun With DAGs · 2018-05-13T23:17:21.990Z · score: 3 (1 votes) · LW · GW

However, if the node includes the entire meal, so that there are six nodes (chicken, pepsi), (chicken, coke), (pork, pepsi), (pork, coke), (steak, pepsi), (steak, coke), then the magnitude doesn't matter.

I don't think this is right; you still want to be able to decide between actions which might have probabilistic "outcomes" (given that your action is necessarily being taken under incomplete information about its exact results).

You could define a continuous DAG over probability distributions, but that structure is actually too general; you do want to be able to rely on linear additivity if you're using utilitarianism (rather than some other consequentialism that cares about the whole distribution in some nonlinear way).
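
A toy illustration of why the magnitudes come back once outcomes are probabilistic (made-up utilities):

# Two actions, each a lottery over the six (meal, drink) outcomes. A pure ordering over
# outcomes can't rank them; cardinal utilities with linear (expected-value) aggregation can.
utility = {
    ("chicken", "pepsi"): 1.0, ("chicken", "coke"): 2.0,
    ("pork", "pepsi"): 3.0,    ("pork", "coke"): 4.0,
    ("steak", "pepsi"): 5.0,   ("steak", "coke"): 6.0,
}

action_a = {("steak", "coke"): 0.5, ("chicken", "pepsi"): 0.5}   # risky: best or worst
action_b = {("pork", "coke"): 1.0}                                # safe middle option

def expected_utility(lottery):
    return sum(p * utility[outcome] for outcome, p in lottery.items())

print(expected_utility(action_a), expected_utility(action_b))    # 3.5 vs 4.0
# Rescale the utilities nonlinearly (keeping the same order) and the comparison can flip,
# which is exactly the information a DAG over sure outcomes throws away.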

Of course, once you have your function from worlds to utilities, you can construct the ordering between nodes of {100% to be X | outcomes X}, but that transformation is lossy (and you don't need the full generality of a DAG, since you're just going to end up with a linear ordering).

(For modeling incomplete preferences, DAGs are great! Not so great for utility functions.)

Comment by rossry on Affordance Widths · 2018-05-12T03:10:01.911Z · score: 13 (3 votes) · LW · GW
In most of the situations where this is most salient to me, {B} is a social behavior, and {X} and {Y} are punishments that people mete out to people who do not conform to correct {B}-ness.

Notwithstanding this, I note that the model of affordance widths also seems apt for modeling binds in situations where the constraints are imposed by uncaring parts of the universe, rather than the social web.

Take as an example the task of riding a bike, where potential hazards include {X} riding too slowly and falling over and {Y} riding too quickly and losing control. Here, taking speed as {B}, it seems quite natural that different people might have different affordance widths for speed.

What does this buy us? Well, once again we see that the natural advice on "how to ride a bike better" for [A] might be actively misleading for [C], and the best advice for [D] and [E] might be in a different class entirely. So the concept seems like a useful tool for anyone considering how to give advice to other people on how to do things.

(A more complicated example that I've been thinking about recently is the task of forming predictions under uncertainty, where {B} is something like "trust your intuition"; generating various kinds of {X} and {Y} are left as an exercise.)

Comment by rossry on Frequently Asked Questions for Central Banks Undershooting Their Inflation Target · 2017-11-11T16:54:59.126Z · score: 24 (7 votes) · LW · GW
Eliezer, why on earth are you writing about this? [...] Can’t you just link to some econ-bloggers? There’s plenty of them out there, right? (I seem to recall you even mentioning one, in your previous post…)

It is worth noting that Scott Sumner called this post "probably the best single introduction to the market monetarist way of thinking in the entire blogosphere."