Comment by rossry on Get Rich Real Slowly · 2019-06-12T08:05:54.069Z · score: 3 (2 votes) · LW · GW

What's up with "FDIC insurance up to $1,000,000"? Wikipedia claims that FDIC insurance only covers $0.25mln/bank unless I have a joint account, retirement account, or some kind of legal entity which I'm not.

Comment by rossry on Fractional Reserve Charity · 2019-06-07T12:46:58.638Z · score: 4 (2 votes) · LW · GW

I don't think that there is off-the-shelf insurance for "I lose my job (prospects) or otherwise choose to have lower lifetime earnings, in some way I did not foresee."

It would be confusing to me if most EAs had equity investments they could not afford to bear a crash loss in, especially those involved in an emergency fund scheme. Why add market exposure to the portfolio of donations + scheme membership + liquid cash? (Especially, why add market exposure that you expect other scheme participants to share?)

That said, it is possible to buy insurance against a market crash. Probably not as a centralized service.

Comment by rossry on 0.999...=1: Another Rationality Litmus Test · 2019-05-26T22:41:10.149Z · score: 3 (2 votes) · LW · GW

So shouldn't that inequality apply to 0.AAA... (base eleven) and 0.999... (base ten) as well? (A debatable point maybe).

Not debatable, just false. Formally, the fact that $a_k < b_k$ for all $k$ does not imply that $\lim_{k \to \infty} a_k < \lim_{k \to \infty} b_k$.

If I were to poke a hole in the (proposed) argument that 0.[k 9s]{base 10} < 0.[k As]{base 11} (0.9<0.A; 0.99<0.AA;...), I'd point out that 0.[2*k 9s]{base 10} > 0.[k As]{base 11} (0.99>0.A; 0.9999>0.AA;...), and that this gives the opposite result when you take limits (in the standard sense of those terms). I won't demonstrate it rigorously here, but the faulty link (under the standard meanings of real numbers and infinities) is that carrying the inequality through the limit just doesn't create a necessarily-true statement.
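
To make the two inequalities and the shared limit concrete (a quick worked check of the claims above, nothing beyond geometric series):

\sum_{i=1}^{k} 9 \cdot 10^{-i} = 1 - 10^{-k}, \qquad \sum_{i=1}^{k} 10 \cdot 11^{-i} = 1 - 11^{-k}

Since 10^{-k} > 11^{-k} for every finite k, the base-10 partial sum is always strictly smaller (0.9 < 0.A, 0.99 < 0.AA, ...); and since 10^{-2k} < 11^{-k}, doubling the count of 9s flips the inequality (0.99 > 0.A, 0.9999 > 0.AA, ...). Both sequences converge to 1, so neither strict inequality survives the limit.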

0.111...{binary} is 1, basically for the Dedekind cut reason in the OP, which is not base-dependent (or representation-dependent at all) -- you can define and identify real numbers without using Arabic numerals or place value at all, and if you do that, then 0.999...=1 is as clear as not(not(true))=true.

Comment by rossry on No Really, Why Aren't Rationalists Winning? · 2019-05-22T23:09:55.839Z · score: 6 (2 votes) · LW · GW

I think your comment is unnecessarily hedged -- do you think that you'd find much disagreement among LWers who interact with FHI/GMU-Econ over whether people there sometimes (vs never) fail to do level-one things?

I think I understand the connotation of your statement, but it'd be easier to understand if you strengthened "sometimes" to a stronger statement about academia's inadequacy. Certainly the rationality community also sometimes fails to do even the obvious, level-one intelligent character things to enable them to achieve their goals -- what is the actual claim that distinguishes the communities?

Comment by rossry on No Really, Why Aren't Rationalists Winning? · 2019-05-22T12:30:33.702Z · score: 3 (2 votes) · LW · GW

I'm confused; it seems like evidence against the claim that you can get arbitrary amounts of value out of learning generic rationality skills, but I don't see it as "devastating" to the claim you can get significant value, unless you're claiming that "spent years learning all that stuff, and now do it as a day job; some of them 16 hours a day" should imply only a less-than-significant improvement. Or am I missing something here?

Comment by rossry on Counterspells · 2019-04-29T14:30:12.832Z · score: 3 (2 votes) · LW · GW

cf. my comment cousin to this one; I misunderstood at first what the term was pointing at, though I stand by my complaint that that's a problem with the term.

Comment by rossry on Counterspells · 2019-04-28T14:34:51.922Z · score: 3 (3 votes) · LW · GW

Thanks; I legitimately misunderstood at first read whether "counterspell" was intended to apply to the invocations thrown out by bad arguers or the concise and specific distillations the OP is presenting for use. On re-read, I agree that it's supposed to be a set of useful tools.

I remain convinced of the specific claim that "counterspell" is bad jargon (though I don't think it's good practice to cite my own confusion too strongly; the incentives there aren't great). I agree that MtG's paradigm where more general counterspells are more expensive seems like a good fit for thinking about rhetorical (and perhaps epistemic) tactics, though I reiterate that that's not how they work in many other settings, and that ambiguous baggage is worse than no baggage for this sort of thing. The question of whether identifying counterspells with magic is supposed to be a positive or negative association is additional gratuitous confusion -- I think your claim that the magic metaphor implies they don't work is wrong, but I'm not 85% sure.

Comment by rossry on Counterspells · 2019-04-28T04:40:42.545Z · score: 16 (9 votes) · LW · GW

Strong approval of the overall goal of the post, but here's a semantic criticism:

In accordance with the Rationalist tradition that requires everything to have a nerdy sci-fi or fantasy name

I parse this as an in-joke (and appreciate it as such), but I do think that regularly minting new jargon that's likely to carry substantial, conflicting(!) contextual baggage (not all of it appropriate[1]) is...a bad norm for an epistemic community to have.

I also think that the deeper tradition of jargon-forging (as Eliezer practiced it) involved names that sounded nerdy, but _not_ sci-fi or fantasy -- _cf._ Knowing About Biases Can Hurt People, which uses "Fully General Counterargument" in much the same way(?) as you're using "Counterspell". "Fully General Counterargument" is slightly more unwieldy, but apart from that it is a better piece of jargon -- it's less loaded, and by leaning even harder into the implicit snark that winks at the fact that the purported counter- isn't working at all, it makes it even more clear that no, this is not a useful thing.

[1] To belabor this point, MtG counterspells are fully general (edit: okay, not **fully** general, and see Slider below), and a reasonable fit for the term as you're using it, but D&D (3.5e, at least) counterspells are based on negating a spell by casting a copy of it, which is not what you mean at all. I don't actually know what "counterspells" are in WoD/Mage, but the risk that they're even further afield from your intention should be another strike against using the already-loaded handle.

Comment by rossry on Open Thread April 2019 · 2019-04-28T04:23:18.363Z · score: 1 (1 votes) · LW · GW

preference "my decisions should be mine" - and many people seems to have it

Fair. I'm not sure how to formalize this, though -- to my intuition it seems confused in roughly the same way that the concept of free will is confused. Do you have a way to formalize what this means?

(In the absence of a compelling deconfusion of what this goal means, I'd be wary of hacking epistemics in defense of it.)

There are "friends" who claim to have the same goals as me, but later turns out that they have hidden motives.

Agreed and agreed that there's a benefit to removing their affordance to exploit you. That said, why does this deserve more attention than the inverse case (there are parties you do not trust who later turn out to have benign motives)?

Comment by rossry on Open Thread April 2019 · 2019-04-27T02:57:39.037Z · score: 3 (2 votes) · LW · GW

"give the power over my final decisions to small random events around me" seems like a slightly confused concept if your preferences are truly indifferent. Can you say more about why you see that as a problem?

The potential adversary seems like a more straightforward problem, though one exciting possibility is that lightness of decisions lets a potential cooperator manipulate your decisions in favor of your common interests. And presumably you already have some system for filtering acquaintances into adversaries and cooperators. Is the concern that your filtering is faulty, or something else?

[Commitment] eventually often turn to be winning strategy, compared to the flexible strategy of constant updating expected utility.

Some real-world games are reducible to the game of Chicken. Commitment is often a winning strategy in them. Though I'm not certain that it's a commitment to a particular set of beliefs about utility so much as a more-complex decision theory which sits between utility beliefs and actions.

In summary, if the acquaintances whose info you update on are sufficiently unaligned with you and your decision theory always selects the action to which your posterior assigns the highest utility, then your actions will be "over-updating on the evidence" if your beliefs are properly Bayesian. But I don't think the best response is to bias yourself towards under-updating.

Comment by rossry on Moral Weight Doesn't Accrue Linearly · 2019-04-23T23:22:08.456Z · score: 3 (2 votes) · LW · GW

Do you bite the bullet that this means the set of things you morally value changes discontinuously and predictably as things move out of your light cone? (Or is there some way you value things less as they are less "in" your light cone, in some nonbinary way?)

Comment by rossry on On the Nature of Programming Languages · 2019-04-23T23:15:02.894Z · score: 4 (3 votes) · LW · GW

I think it's from SICP that programs are meant to be read by humans and only incidentally for computers to execute; I've been trying for more than a year now to write a blog post about the fundamental premise that, effort-weighted, we almost never write new programs from scratch, and mostly are engaged in transmuting one working program into another working program. Programs are not only meant to be read by humans, but edited by humans.

I think if you start from the question of how much effort it is to write a new program on a blank page, most languages will come out looking the same, and the differences will look like psychological constructs. If you ask, however, how much effort it is to change an existing piece of a code base to a specific something else, you start to see differences in epistemic structure, where it matters how many of the possible mutations that a human algorithm might try will non-obviously make the resulting program do something unexpected. And that, as you point out, opens the door to at least some notion of universality.

Comment by rossry on On the Nature of Programming Languages · 2019-04-23T22:55:11.028Z · score: 3 (2 votes) · LW · GW

C, the most widespread general-purpose programming language, does things that are extremely difficult or impossible in highly abstract languages like Haskell or LISP

Can you give an example? I'm surprised by this claim, but I only have deep familiarity with C of these three. (My primary functional language includes mutable constructs; I don't know how purely functional languages fare without them.)

Comment by rossry on Open Thread April 2019 · 2019-04-09T23:40:27.223Z · score: 1 (1 votes) · LW · GW

I was trying to construct a proof along similar lines, so thank you for beating me to it!

Note that 2 is actually a case of 1, since you can think of the "walls" of the simplex as being bets that the universe offers you (at zero odds).
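
For what it's worth, here is how I'd make that identification concrete -- my own sketch of the reduction, not necessarily the construction in the parent proof. The wall p_i \ge 0 of the simplex corresponds to the zero-odds bet "pay nothing now, receive 1 if outcome i occurs":

\text{value of the bet} = p_i \cdot 1 + (1 - p_i) \cdot 0 = p_i

An agent whose credence vector has p_i < 0 assigns this free claim negative value and will pay to be rid of it, a guaranteed loss; so refusing to be exploited on the universe's zero-odds bets is exactly what confines you to p_i \ge 0, making case 2 an instance of case 1.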

Comment by rossry on Open Thread April 2019 · 2019-04-09T11:34:57.791Z · score: 1 (1 votes) · LW · GW

(This comment isn't an answer to your question.)

If I'm understanding properly, you're trying to use the set of bets offered as evidence to infer the common beliefs of the market that's offering them. Yet from a Bayesian perspective, it seems like you're assigning P( X offers bet B | bet B has positive expectation ) = 0. While that's literally the statement of the Efficient Markets Hypothesis, presumably you -- as a Bayesian -- don't actually believe the probability to be literally 0.

Getting this right and generalizing a bit (presumably, you think that P( X offers B | B has expectation +BIG_E ) < P( X offers B | B has expectation +epsilon )) should make the market evidence more informative (and cases of arbitrage less divide-by-zero, break-your-math confusing).
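
To spell out the update this enables (a sketch, writing "+EV" for "positive expectation from the taker's side"):

P(B \text{ is } +EV \mid X \text{ offers } B) = \frac{P(X \text{ offers } B \mid +EV) \, P(+EV)}{P(X \text{ offers } B \mid +EV) \, P(+EV) + P(X \text{ offers } B \mid \text{not } +EV) \, P(\text{not } +EV)}

As long as P( X offers B | +EV ) is small but nonzero, an offered bet is strong-but-finite evidence against a large edge, and nothing divides by zero even when an offer looks like arbitrage.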

Comment by rossry on Open Thread April 2019 · 2019-04-09T09:46:50.312Z · score: 1 (1 votes) · LW · GW

I'm confused what the word "fairly" means in this sentence.

Do you mean that they make a zero-expected-value bet, e.g., 1:1 odds for a fair coin? (Then "fairly" is too strong; non-degenerate odds (i.e., not zero on either side) is the actual required condition.)

Do you mean that they bet without fraud, such that one will get a positive payout in one outcome and the other will in the other? (Then I think "fairly" is redundant, because I would say they haven't actually bet on the outcome of the coin if the payouts don't correspond to coin outcomes.)

Comment by rossry on How can we respond to info-cascades? [Info-cascade series] · 2019-03-17T01:53:12.102Z · score: 1 (1 votes) · LW · GW

A related idea in non-punishment of "wrong" reports that have insufficient support (again in the common-prior/private-info setting) comes from this paper [pdf] (presented at the same conference), which suggests collecting reports from all agents and assigning rewards/punishments by assuming that agents' reports represent their private signal, computing their posterior, and scoring this assumed posterior. Under the model assumptions, this makes it an optimal strategy for agents to truly reveal their private signal to the mechanism, while allowing the mechanism to collect non-cascaded base data to make a decision.

In general, I feel like the academic literature on market design / mechanism design has a lot to say about questions of this flavor.

Comment by rossry on How can we respond to info-cascades? [Info-cascade series] · 2019-03-17T01:34:53.337Z · score: 4 (2 votes) · LW · GW

Abstract: Considering information cascades (both upwards and downwards) as a problem of incentives, better incentive design holds some promise. This academic paper suggests a model in which making truth-finding rewards contingent on reaching a certain number of votes prevents down-cascades, and where an informed (self-interested) choice of payout odds and threshold can also prevent up-cascades in the limit of a large population of predictors.

1) cf. avturchin from the question about distribution across fields, pointing out that up-cascades and down-cascades are both relevant concerns, in many contexts.

2) Consider information cascades as related to a problem of incentives -- in the comments of the Johnichols post referenced in the formalization question, multiple commentators point out that the model fails if agents seek to express their marginal opinion, rather than their true (posterior) belief. But incentives to be right do need to be built into a system that you're trying to pump energy into, so the question remains of whether a different incentive structure could do better, while still encouraging truth-finding.

3) Up-Cascaded Wisdom of the Crowd (Cong and Xiao, working paper) considers the information-aggregation problem in terms of incentives, studying the incentives at play in an all-or-nothing crowdfunding model, like venture capital or Kickstarter (assuming that a 'no' vote is irrevocable like a 'yes' vote is) -- 'yes' voters win if there is a critical mass of other 'yes' voters and the proposition resolves to 'yes'; they lose if there is a critical mass and the proposition resolves to 'no'; they have 0 loss/gain if 'yes' doesn't reach a critical mass; 'no' voters are merely abstaining from voting 'yes'. (This payoff structure is sketched more formally after this list.)

Their main result is that if the payment of incentives is conditioned on the proposition gaining a fixed number of 'yes' votes, a population of symmetric, common-prior/private-info agents will avoid down-cascades, as a single 'yes' vote that breaks a down-cascade will not be penalized for being wrong unless some later agent intentionally votes 'yes' to put the vote over the 'yes' threshold. (An agent i with negative private info still should vote no, because if a later agent i' puts the vote over the 'yes' threshold based in part on i's false vote, then i expects to lose on the truth-evaluation, since they've backed 'yes' but believe 'no'.)

A further result from the same paper is that if the actor posing the proposition can set the payout odds and the threshold in response to the common prior and known info-distribution, then a proposition-poser attempting to minimize down-cascades (perhaps because they will cast the first 'yes' vote, and so can only hope to win if the vote resolves to 'yes') will be incentivized to set odds and a threshold that coincidentally minimize the chance of up-cascades. In the large-population limit, the number of cascades under such an incentive design goes to 0.

4) I suspect (but will not here prove) that augmenting Cong and Xiao's all-or-nothing "crowdfunding for 'yes'" design with a parallel "crowdfunding for 'no'" design -- i.e., 'no' voters win (resp. lose) iff there is a critical mass of 'no' voters and the proposition resolves 'no' (resp. 'yes') -- can further strengthen the defenses against up-cascades (by making it possible to cast a more informed 'no' vote conditioned on a later, more-informed agent deciding to put 'no' over the threshold).
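
To sketch the payoff structure in (3) more formally (my notation -- payout a, stake b, threshold T -- chosen to match the verbal description above, not the paper's own notation):

u_{\text{yes}} = \begin{cases} +a & \text{if } \#\text{yes} \ge T \text{ and the proposition resolves 'yes'} \\ -b & \text{if } \#\text{yes} \ge T \text{ and the proposition resolves 'no'} \\ 0 & \text{if } \#\text{yes} < T \end{cases}

with 'no' voters simply keeping the zero payoff. The anti-down-cascade result is then the observation that the -b branch is reached only if some later, better-informed agent chooses to push the count over T.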

Comment by rossry on How can we respond to info-cascades? [Info-cascade series] · 2019-03-17T01:30:21.056Z · score: 0 (0 votes) · LW · GW

[this answer was duplicated when I mistakenly copied my comment into an answer and then moved the comment to an answer.]

Comment by rossry on Distribution of info-cascades across fields? [Info-cascade series] · 2019-03-17T00:14:15.158Z · score: 1 (1 votes) · LW · GW

Why stop at 2? Belief-space is large, and many issues admit more than one (+/-) bit of information to cascade.

Comment by rossry on Open Thread February 2019 · 2019-03-02T02:03:26.197Z · score: 1 (1 votes) · LW · GW

But how make small changes in the trajectory of a star? One idea is to impact the star with large comets. It is not difficult, as remote Oort cloud objects (or wandering small planets, as they are not part of already established orbital movement of the star) need only small perturbations to start falling down on the central star, which could be done via nuclear explosions or smaller impacts by smaller astreoids.

I don't think this works; conservation of momentum means that the impact is almost fully counteracted by the gravitational pull that accelerated the comet to such speed (so that, in the end, the delta-v imparted to the star is precisely what you imparted with your nuclear explosions or smaller asteroids).
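
A back-of-the-envelope way to see this, treating the star and comet as an isolated two-body system (the relevant approximation here): momentum is conserved, so

p_{\text{star}} + p_{\text{comet}} = p_{\text{initial}} + \Delta p_{\text{push}}

where \Delta p_{\text{push}} is the impulse from your explosions or impacts. The enormous infall velocity comes from mutual gravity, which pushes star and comet toward each other with equal and opposite momenta; after the comet is absorbed, the merged body's momentum differs from the initial total only by \Delta p_{\text{push}}, so the star's delta-v is roughly \Delta p_{\text{push}} / M_{\text{star}} (plus the comet's small initial orbital momentum).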

Maybe a Shkadov thruster is what you want? (It's slow going, though; this article suggests 60ly/200My, accounting for acceleration.)

Comment by rossry on Blackmail · 2019-02-23T10:35:22.371Z · score: 1 (1 votes) · LW · GW

Winner's Curse doesn't seem like the right effect to me -- it seems more like an orthogonality/Goodhart effect, where optimizing for outrageousness decreases the fitness w/r/t social welfare (on the margin). It's always in the blackmailer's interest to make the outrageousness greater, so they're not (selfishly) sad when they overshoot.

Comment by rossry on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-19T02:11:42.887Z · score: 8 (4 votes) · LW · GW

I'm still thinking about what quantitative estimates I'd stand behind. I think I'd believe that a prize-based competitive prediction system with all eval deferred until and conditioned on AGI is <4% to add more than $1mln of value to [just pay some smart participants for their best-efforts opinions].

(If I thought harder about corner-cases, I think I could come up with a stronger statement.)

Comment by rossry on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T23:39:13.713Z · score: 3 (3 votes) · LW · GW

If the reason your questions won't resolve is that you are dead or that none of your money at all will be useful, I think things are a bit different.

I'm confused; to restate the above, I think that a p% chance that your predictions don't matter (for any reason: game rained out, you're dead, your money isn't useful) is (to first order) equivalent to a p% tax on investment in making better predictions. What do you think is different?

one major ask is that the forecasters believe the AGI will happen in between, which seems to me like an even bigger issue

Sure, that's an issue, but I think that requiring participants to all assume short AGI timelines is tractable in a way that the delayed/improbable resolution issues are not.

I can imagine that a market without resolution issues, which assumes participants all believe in short AGI timelines, could support 12 semi-professional traders subsidized by interested stakeholders. I don't believe that a market with resolution issues as above can elicit serious investment in getting its answers right from half that many. (I recognize that I'm eliding my definition of "serious investment" here.)

Comment by rossry on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T14:39:01.178Z · score: 3 (3 votes) · LW · GW

I understand Zvi's points as being relatively universal to systems where you want to use rewards to incentivize participants to work hard to get good answers.

No matter how the payouts work, a p% chance that your questions don't resolve is (to first order) equivalent to a p% tax on investment in making better predictions, and a years-long tie-up kills iterative growth and selection/amplification cycles as well as limiting the return-on-investment-in-general-prediction-skill to a one-shot game. I don't think these issues go away if you reward predictions differently, since they're general features of the relation between the up-front investment in making better predictions and the eventual potential reward for doing so well.
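
A minimal toy model of that first-order equivalence (my own framing: a risk-neutral forecaster with effort e, cost c(e), and expected reward R(e) conditional on resolution): the forecaster chooses e to maximize

(1 - p) \, R(e) - c(e) \;\propto\; R(e) - \frac{c(e)}{1 - p}

so every unit of effort effectively costs 1/(1-p) \approx 1 + p times as much -- i.e., to first order, a p% tax on investment in better predictions, whatever the reward schedule R is.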

(A counterpoint I'll entertain is Zvi's caveat to "quick resolution" -- which also caveats "probable resolution" -- that sufficient liquidity can substitute for resolution. But bootstrapping that liquidity itself seems like a Hard Problem, so I'd need to further be convinced that it's tractable here.)

Comment by rossry on Can We Place Trust in Post-AGI Forecasting Evaluations? · 2019-02-18T13:15:17.761Z · score: 2 (4 votes) · LW · GW

Assuming that your AI timelines are well-approximated by "likely more than three years", Zvi's post on prediction market desiderata suggests that post-AGI evaluation is pretty dead-on-arrival for creating liquid prediction markets. Even laying aside the conditional-on-AGI dimension, the failures of "quick resolution" (years) and "probable resolution" (~20%, by your numbers) are crippling for the prospect of professionals or experts investing serious resources in making profitable predictions.

Comment by rossry on Probability space has 2 metrics · 2019-02-11T06:23:16.236Z · score: 8 (4 votes) · LW · GW

The speculative proposition that humans might only be using one metric rings true and is compellingly presented.

However, I feel a bit clickbaited by the title, which (to me) implies that probability-space has only two metrics (which isn't true, as the later proposition depends on). Maybe consider changing it to "Probability space has multiple metrics", to avoid confusion?

Comment by rossry on Open Thread February 2019 · 2019-02-08T13:23:33.873Z · score: 9 (5 votes) · LW · GW

It varies a lot between papers (in my experience) and between fields (I imagine), but several hours for a deep reading doesn't seem out of line. To take an anecdote, I was recently re-reading a paper (of comparable length, though in economics) that I'm planning to present as a guest of a mathematics reading group, and I probably spent 4 hours on the re-read, before starting on my slides and presumably re-re-reading a bunch more.

Grazing over several days (and/or multiple separate readings) is also my usual practice for a close read, fwiw.

Comment by rossry on X-risks are a tragedies of the commons · 2019-02-07T09:29:36.884Z · score: 2 (2 votes) · LW · GW

I acknowledge that there's a distinction, but I fail to see how it's important. When you (shut up and) multiply out the probabilities, the expected personal reward for putting in disproportionate resources is negative, and the personal-welfare-optimizing level of effort is lower than the social-welfare-optimizing level.

Why is it important that that negative expected reward is made up of a positive term plus a large negative term (X-risk defense), instead of a negative term plus a different negative term (overgrazing)?
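
One generic way to cash out the multiplication (a standard public-good sketch, not specific to either framing): with n actors, a shared benefit B(E) of total effort E = \sum_j e_j, and a private cost c(e_i),

\text{private optimum: } B'(E) = c'(e_i), \qquad \text{social optimum: } n \, B'(E) = c'(e_i)

Since n > 1, the privately optimal effort is below the socially optimal effort, and that conclusion doesn't care whether the personal expected reward decomposes as a positive term plus a large negative term or as two negative terms.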

Comment by rossry on Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem) · 2019-01-30T14:24:58.273Z · score: 1 (1 votes) · LW · GW

Huh. I don't think I ever heard someone call this series hard sci-fi where I could hear them; the most common recommendation was related to its Chineseness, which, as Zvi claims, definitely delivers.

And I'm not sure I'd take Niven as the archetype of truly hard sci-fi; have you ever tried Egan? Diaspora says sensible things about philosophy of mind for emulated, branching AIs with a plot arc where the power laws of a 5+1-dimensional universe become relevant, and Clockwork Rocket invents alternate laws of special relativity incidentally to a story involving truly creative alt-biology...

Comment by rossry on How could shares in a megaproject return value to shareholders? · 2019-01-21T10:54:23.484Z · score: 3 (2 votes) · LW · GW

Investors would prefer to invest in moonshot megaprojects over, like, infrastructure megaprojects.

I don't think this is true; as in corporate equity markets, the preference should be responsive to price, and equity shares should trade over naive book value, but at a price where the marginal investor is indifferent to buying more.

In fact, insofar as the Miller-Modigliani assumptions hold, any capitalization of the project in terms of equity and debt should trade at the same total price (and so, raise the same funding), just with a different mix of the total coming from the debt and the equity.
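
A one-line version of the Miller-Modigliani point, under the usual frictionless assumptions (no taxes, bankruptcy costs, or information asymmetries) and writing V[\cdot] for whatever linear pricing operator the market applies to the project's random payoff X: issuing debt with face value F plus residual equity raises

V[\min(X, F)] + V[\max(X - F, 0)] = V[X]

since the two claims sum to X in every state -- so total funding doesn't depend on F, only the split between debt and equity does.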

should they be incentivized to abort?

Clearly that would be ideal, but this scheme doesn't make this issue worse than the status quo (and provides the advantage that at least there's some market-based metric to tell you that the project is going off the rails).

Comment by rossry on How could shares in a megaproject return value to shareholders? · 2019-01-20T04:57:22.658Z · score: 3 (2 votes) · LW · GW

Proves too much; this is similarly true of shareholders in any company with a debt+equity capitalization.

It's true that shareholders' incentives are not perfectly aligned with creditors'. In private corporations, this is handled with governance practices; for public megaprojects it seems like even less of an issue.

Comment by rossry on Does anti-malaria charity destroy the local anti-malaria industry? · 2019-01-06T12:31:49.338Z · score: 1 (1 votes) · LW · GW

Does the "typical argument for protectionism" you cite claim that protectionist policy increases the total amount of local production (creating a steady-state trade surplus)? Or does it merely shift local production from comparatively-advantaged-to-produce-locally goods to comparatively-advantaged-to-import ones (hopefully ones with greater externalities to local production)?

There's a relevance to the anti-aid argument: The first-order effect of steady-state aid is to create the effect of a steady-state trade deficit on production without an effect on the current account. If the total externalities to local production are larger than the consumer value of goods, then this effects a welfare transfer from the aid recipient to the aid sender.

But the general-equilibrium effect is a shift of the recipient's local production away from the received goods towards the marginally-efficient goods. If the marginally-efficient goods have sufficiently greater externalities to local production than the received goods, then this might be a net win. (Where "sufficiently" here depends on elasticities of production and consumption.)

Comment by rossry on State Machines and the Strange Case of Mutating API · 2018-12-24T12:28:14.792Z · score: 3 (2 votes) · LW · GW

I'm not a network programmer or language designer by trade, so I expect to be missing something here, but I'll give it a go to learn where I'm wrong.

If you're using distinct interfaces for distinct states (as it seems you are in your latter examples) and your compiler is going to enforce them, then shadowing variables (as a language feature) lets you unassign them as you go along. In a language I'm more familiar with, which uses polymorphic variant types rather than interfaces:

let socket = socks5_socket () in
let socket = connect_unauthenticated socket ~proxy in
let socket = connect_tcp socket address in
enjoy socket ();

with signatures:

val socks5_socket : unit -> [> `Closed of Socket.t]
val connect_unauthenticated : [< `Closed of Socket.t] -> proxy:Address.t -> [> `Authenticated of Socket.t]
val connect_tcp : [< `Authenticated of Socket.t] -> Address.t -> [> `Tcp_established of Socket.t]
val enjoy : [< `Tcp_established of Socket.t] -> unit -> unit

so that if you forget the connect_unauthenticated line (big danger of shadowing as you mutate), your compiler will correct you with a friendly but stern:

This expression has type [> `Closed of Socket.t] but an expression was expected of type [< `Authenticated of Socket.t].

Of course, shadowing without type-safety sounds like a nightmare, and I'm not claiming that any language you want to use actually supports it as syntax. But I occasionally appreciate it (given, of course, that I've got a type inspector readily keybound, to query what state my socket is in at this particular point).

Comment by rossry on Criticism Scheduling and Privacy · 2018-10-02T14:34:14.574Z · score: 1 (1 votes) · LW · GW

I've been enjoying this sequence on privacy, in no small part because I disagree with some of its fundamental premises so strongly. (Hopefully, someday soon I'll make the time to pull together a sequence laying out the grounds of my disagreement.)

But without backing up that teaser (sorry), I'll say that the brief mention of privacy as a guard against falling into inferential-distance chasms seems very straightforwardly true in hindsight (even to one skeptical of the guard-against-coercive-control angle), though I hadn't been able to put it nearly so cleanly in my own thoughts. If you've got even a short post's worth of content on related ideas in deploying privacy in deeply cooperative/aligned contexts, I'd be especially interested to hear more thoughts in that direction.

Comment by rossry on Defining by opposites · 2018-09-19T11:03:05.960Z · score: 2 (2 votes) · LW · GW

I like this framing, especially as it gracefully handles the ways that communication isn't like Guess Who -- you have priors that don't look like "uniform over the following N possibilities", your payoffs for actually finding the answer might be nonconstant, depending on what the answer is, some resource limitation might make the maxi-p(win) strategy different from the optimal discriminator -- but once you start thinking about how you'd win a game with those rules, strategies for smarter search suggest themselves.

Comment by rossry on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-15T09:12:51.421Z · score: 1 (1 votes) · LW · GW

If I'm not the first, was this posted before?

No, I'm referencing an in-person conversation. (Incidentally, the fact that ialdabaoth fielded that suggestion and still wrote this post with 'dragon' makes me worry that they've got at least an instinct that it's the right word in some way I'm missing.)

And I think I see the worry that you're pointing at here. I think it's a valid one, though not one that I expect can be resolved entirely through theory; I'd like to see some people work with the ontology for a bit to see which words work in useful ways.

Comment by rossry on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-12T12:19:40.732Z · score: 2 (2 votes) · LW · GW

You're not the first to suggest s/Dragon/Hydra/g here, and I'd be tempted to agree, if not for the fact that dragon-slaying is significantly more poetic than hydra-slaying. OTOH, "Hydra" serves as a mnemonic that attacking symptoms is a Known Bad Strategy.

(Do note that the existence of a dragon can cause a series of not-obviously-related symptoms -- this stuff is on fire, and this stuff is smashed up, and these people got eaten...)

Comment by rossry on Preliminary thoughts on moral weight · 2018-08-15T12:32:28.123Z · score: 2 (2 votes) · LW · GW

Right, so if we're using a uniform distribution over 2^30000, there should be exactly zero ants sharing observer-moments, so in order to argue that ants' overlap in observer-moments should discount their total weight, we're going to need to squeeze that space a lot harder than that.

I've also spent some time recently staring at ~randomly generated grids of color for an unrelated project, and I think there's basically no way that the human visual system is getting so much as 5000 bits of entropy (i.e., a 50x50 grid of four-color choices) out of the observer-experience of the visual field. So I think using 2^#receptors is just the wrong starting point. Similarly, assuming that neurons operate independently is going to give you a number in entirely the wrong realm. (Wikipedia says an ant has ~250,000 neurons.)
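
For concreteness on the numbers I'm using (straightforward arithmetic, no new claims): a 50x50 grid with four colors per cell carries 2500 \cdot \log_2 4 = 5000 bits (i.e., 2^{5000} distinct grids) if every cell is independent and uniform, and treating ~250,000 neurons as independent binary units would give 2^{250,000} states. Either count makes shared observer-moments vanishingly unlikely, which is why the interesting question is how far below these independence-based counts the true entropy falls.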

I think that if you want to get to the belief that two ants might ever actually share an experience, you're going to need to work in a significantly smaller domain, like your suggestion of output actions, though applying the domain of "typical reactions of a human to new objects" is going to grossly undercount the number of possible human observer-experiences, so now I'm back to being stuck wondering how to do that at all.

Comment by rossry on Preliminary thoughts on moral weight · 2018-08-15T05:55:41.474Z · score: 2 (2 votes) · LW · GW

How many distinct possible ant!observer-moments are there? What is the entropy of their distribution in the status quo?

How many distinct possible human!observer-moments are there? What is the entropy of their distribution in the status quo?

(Confidence intervals okay; I just have no intuition about these quantities, and you seem to have considered them, so I'm curious what estimates you're working with.)

Comment by rossry on Open Thread August 2018 · 2018-08-11T15:53:02.375Z · score: 2 (2 votes) · LW · GW

My experience has been that in practice it almost always suffices to express second-order knowledge qualitatively rather than quantitatively. Granted, it requires some common context and social trust to be adequately calibrated on "50%, to make up a number" < "50%, just to say a number" < "let's say 50%" < "something in the ballpark of 50%" < "plausibly 50%" < "probably 50%" < "roughly 50%" < "actually just 50%" < "precisely 50%" (to pick syntax that I'm used to using with people I work with), but you probably don't actually have good (third-order!) calibration of your second-order knowledge, so why bother with the extra precision?

The only other thing I've seen work when you absolutely need to pin down levels of second-order knowledge is just talking about where your uncertainty is coming from, what the gears of your epistemic model are, or sometimes how much time of concerted effort it might take you to resolve X percentage points of uncertainty in expectation.

Comment by rossry on Probabilistic decision-making as an anxiety-reduction technique · 2018-07-17T10:44:22.157Z · score: 14 (5 votes) · LW · GW

I'm confused; why don't you just pick black with probability 100%? (Assuming that your utility is 1 if you make the on-further-reflection-correct choice, 0 else.)

This isn't the same thing as knowing what you're going to think at the end of your fretting -- you still don't know -- but the correct response to uncertainty is not half speed, and just picking the 40%-to-be-right option all of the time is the expectancy-maximizing response to uncertainty.
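
A quick worked comparison, with made-up numbers: suppose black is the single most likely on-further-reflection-correct choice at 40%, with two alternatives at 30% each. Always picking black wins with probability 0.4, while sampling choices in proportion to their probabilities wins with probability

0.4^2 + 0.3^2 + 0.3^2 = 0.34

and in general any mixture q over options has win probability \sum_i q_i p_i \le \max_i p_i, so deterministically picking the modal option is the expectancy-maximizing policy.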

Comment by rossry on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-13T14:36:54.508Z · score: 2 (2 votes) · LW · GW

Pretty confident they meant it that way:

I am not thinking about physics-time, I am thinking about logical-time. If something is in your past, but has no effect on what algorithm you are running on what observations you get, then it might as well be considered as space-like separated from you. If you compute how everything in the universe evaluates, the space-like separated things are the things that can be evaluated either before or after you, since their output does not change yours or vice-versa. If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you.

Comment by rossry on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-13T14:32:59.138Z · score: 14 (5 votes) · LW · GW

It seems pretty easy for such mechanisms to be adapted for maximizing reproduction in some ancestral environment but maladapted for maximizing your preferences in the modern environment.

I think I agree that your point is generally under-considered, especially by the sort of people who compulsively tear down Chesterton's fences.

Comment by rossry on Book review: Pearl's Book of Why · 2018-07-08T03:36:15.391Z · score: 3 (2 votes) · LW · GW

RCTs (and p-values) don't seem to be popular in physics or geology.

What evidence makes you say p-values aren't popular in physics? My passing (and mainly secondhand) understanding of cosmology is that it uses ~no RCTs but is very wedded to p-values (generally around the 5 sigma level).

Comment by rossry on Debt is an Anti-investment · 2018-07-07T23:16:33.120Z · score: 1 (1 votes) · LW · GW

A single house in a single market levered up 5x isn't the good kind of diversification

Hm, I can see your point. Are you saying that because you've worked through a back-of-the-envelope calculation, or are you just guessing? (Me, I was just guessing.)

and will make your expected portfolio performance worse.

Do you mean "will make the expected dollar-returns of your portfolio worse", or are you making a claim about the expected utility-adjusted returns?

Comment by rossry on Debt is an Anti-investment · 2018-07-07T04:31:18.325Z · score: 1 (1 votes) · LW · GW

There is no compelling reason to think such outperformance will continue into the future.

The market disagrees with you.

What market instruments express opinions on the future returns of US equity investments? Everything I can think of serves to express an opinion on present prices of equity investments or on things like the future risk-free rate of return, not the future return on, e.g., the S&P500 index.

Comment by rossry on Debt is an Anti-investment · 2018-07-07T04:23:47.979Z · score: 1 (1 votes) · LW · GW

What text from the post suggests accounting for the emotional disutility of indebtedness?

Here's a quote that to me seems to argue against it:

The question becomes: what risk-free rate of return is equivalent to your best available investment? If you pay a higher interest on your debt than that, you should pay it off. If the rate you’re paying is lower, you should invest.

Following this algorithm suggests investing in a risk-free 3.01% return before paying off debts costing a net 3% in interest.

There's a bit of section 5 that accepts emotional disutility of volatility, but that's not the same thing.

As for ChristianKl, they comment that to them it doesn't make sense to value indebtedness at zero, agreeing with you that the disutility should be accounted for.

Comment by rossry on Debt is an Anti-investment · 2018-07-06T23:52:03.928Z · score: 1 (1 votes) · LW · GW

I think you are in agreement with ChristianKl (i.e., you both think the OP overlooks the emotional disutility of knowing that you have debt), but your tone seems to me to indicate disagreement.

Comment by rossry on Debt is an Anti-investment · 2018-07-06T23:48:22.241Z · score: 1 (1 votes) · LW · GW

Don’t hold any debt at above 4%.

It depends. If debt is tax deductible then it can make sense to hold it. Or based on the size of the opportunity.

Note that this is in the section of the OP's advice to themself, and presumably accounts for them already checking for tax-deductible opportunities and opportunity costs.

Notes on a recent wave of spam

2018-06-14T15:39:51.090Z · score: 11 (7 votes)