Comment by vladimir_nesov on Do bond yield curve inversions really indicate there is likely to be a recession? · 2019-07-10T06:26:08.019Z · score: 4 (2 votes) · LW · GW

More generally, market timers lose hard.

Do you mean they make a portfolio that's too conservative, so that lost money becomes lost utility? Or do they lose in some other way? (This sounds superficially similar to claims that there are stock/futures trading strategies that systematically lose money beyond fees and spread, which I think can't happen, because the opposite strategies would then systematically make money.)

Comment by vladimir_nesov on Black hole narratives · 2019-07-09T04:02:48.292Z · score: 2 (1 votes) · LW · GW

From the other side, agreement is often not real. People agree out of politeness, or agree with a distorted (perhaps more salient but less relevant to the discussion) version of a claim, without ensuring it's not a bucket error. There's some use in keeping beliefs unchanged, but not in failing to understand the meaning of the claim under discussion (when it has one). So agreement (especially your own, as it's easier to fix) should be treated with scepticism.

Comment by vladimir_nesov on What are good resources for learning functional programming? · 2019-07-05T05:55:38.906Z · score: 3 (2 votes) · LW · GW

My impression is that there are no books that even jointly give a reasonable selection of what you'd need to master the main ideas in (typed) functional programming. Most of this stuff is in papers.

Comment by vladimir_nesov on Causal Reality vs Social Reality · 2019-07-05T00:53:02.171Z · score: 6 (3 votes) · LW · GW

Upon reflection, my answer is that I do endorse it on many occasions

The salient question is whether it's a good idea to respond to possible attacks in a direct fashion. Situations that can be classified as attacks (especially in a sense that allows the attacker to remain unaware of this fact) are much more common.

Comment by vladimir_nesov on Causal Reality vs Social Reality · 2019-07-04T22:06:03.145Z · score: 33 (7 votes) · LW · GW

But if you don’t endorse this reaction—then deal with it yourself.

I agree with the above two comments (Vaniver's and yours) except for a certain connotation of this point. Rejecting one's own defensiveness does not imply endorsing insensitivity to tone. I was making this error in modeling others until recently, and I currently cringe at many of my "combative" comments and forum policy suggestions from before 2014 or so. In most cases defensiveness is flat wrong, but so is not optimizing towards keeping the conversation comfortable. It's tempting to shirk that responsibility in the name of avoiding the danger of compromising the signal with polite distortions. But there is a lot of room for safe optimization in that direction, and making sure people are aware of this is important. "Deal with it yourself" suggests excluding this pressure. Ten years ago, I would have benefitted from that pressure.

Comment by vladimir_nesov on What does the word "collaborative" mean in the phrase "collaborative truthseeking"? · 2019-06-26T15:26:24.001Z · score: 9 (2 votes) · LW · GW

Here's the link, found from my account by searching for "collaborative truthseeking". There is a "Posts from Anyone/You/Your Friends" radio control on the left of the search page, so it should probably work on your own posts as well.

Comment by vladimir_nesov on Upcoming stability of values · 2019-06-22T21:21:21.722Z · score: 6 (3 votes) · LW · GW

These "meta-values" you mention are just values applied to appraisal of values. So in these terms it's possible to have values about meta-values and to value change in meta-values. Value drift becomes instrumentally undesirable with greater power over the things you value, and this argument against value drift is not particularly sensitive to your values (or "meta-values"). Even if you prefer for your values to change according to their "natural evolution", it's still useful for them not to change. For change in values to be a good decision, you need to see value drift as more terminally valuable than its opportunity cost (decrease in value of the future according to your present values, in the event that your present values undergo value drift).

Comment by vladimir_nesov on Let Values Drift · 2019-06-21T00:42:46.317Z · score: 5 (5 votes) · LW · GW

The strongest argument against value drift (meaning the kind of change in current values that involves change in idealized values) is the instrumental usefulness of future values that pursue idealized present values. This says nothing about the terminal value of value drift, and a priori we should expect people to place some terminal value on the presence of value drift, because there is no reason for haphazard human values to single out the possibility of zero value drift as most valuable. Value drift is just another thing that happens in the world, like kittens. Of course valuable value drift must observe proper form even as it breaks idealized values, since most changes are not improvements.

The instrumental argument is not that strong when your own personal future values don't happen to control the world. So the argument still applies to AIs that have significant influence over what happens in the future, but not to ordinary people, especially not to people whose values are not particularly unusual.

Comment by vladimir_nesov on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-11T23:21:28.503Z · score: 2 (1 votes) · LW · GW

Thanks!

Comment by vladimir_nesov on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-11T23:12:24.623Z · score: 6 (3 votes) · LW · GW

Sure, any resource would do (like copper coins needed to pay for maintenance of equipment), or just generic "units of resources". My concern is that the term "utility" is used incorrectly in an otherwise excellent post that is tangentially related to the topic where its technical meaning matters, potentially propagating this popular misreading.

Comment by vladimir_nesov on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-11T21:33:02.314Z · score: 6 (3 votes) · LW · GW

Utility is not a resource. In the usual expected utility setting, utility functions don't care about positive affine transformations (all decisions remain the same); for example, decreasing all utilities by 1000 doesn't change anything, and there is no significance to utility being positive vs. negative. So a requirement to have at least five units of utility in order to play a round of a game shouldn't make sense.

In this post, there is a resource whose utility could maybe possibly be given by the identity function. But in that case it could also be given by the "times two minus 87" utility function and lead to the same decisions. It's not really clear that the utility of the resource is the identity, since a 50/50 lottery between ending up with zero or ten units of the resource seems much worse than the certainty of obtaining five units. ("If you're Elliott, [zero] is a super scary result to imagine.")
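
To make the invariance concrete, here is a minimal sketch with illustrative numbers; the risk-averse square-root utility is my assumption, chosen to match Elliott's situation, not anything stated in the post:

    import math

    def expected_utility(lottery, u):
        # A lottery is a list of (probability, outcome) pairs.
        return sum(p * u(x) for p, x in lottery)

    safe = [(1.0, 5)]                # five units of the resource for certain
    gamble = [(0.5, 0), (0.5, 10)]   # 50/50 between zero and ten units

    u = lambda x: math.sqrt(x)       # an assumed risk-averse (concave) utility
    v = lambda x: 2 * u(x) - 87      # the "times two minus 87" transformation

    # Both functions rank the safe option higher, even though v is negative
    # everywhere here: the scale and sign of utility carry no meaning.
    assert expected_utility(safe, u) > expected_utility(gamble, u)
    assert expected_utility(safe, v) > expected_utility(gamble, v)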

Comment by vladimir_nesov on Mistakes with Conservation of Expected Evidence · 2019-06-09T14:49:52.572Z · score: 13 (8 votes) · LW · GW

To see why it is wrong in general, consider an extreme case: a universal law, which you mostly already believe to be true.

I feel that this example might create another misconception: that certainty usually begets greater certainty. So here's an opposing example, where high confidence in a belief coexists with high probability that it's going to get less and less certain. You are at a bus stop, and you believe that you'll get to your destination on time, as buses here are usually reliable, though they only arrive once an hour and can deviate from schedule by several minutes. If you see a bus, you'll probably arrive on time, but every minute that you see no bus is evidence of something having gone wrong, so that the bus won't arrive at all in the next hour. You expect your confidence in a highly certain belief (that you'll arrive on time) to decrease (a little bit), which is balanced by the unlikely alternative (for each given minute of waiting) of it moving in the direction of greater certainty (if the bus does arrive within that minute).
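
A sketch of the arithmetic, under assumed numbers (a 0.95 prior that a bus is coming at all, with its arrival minute uniform over the next hour): each minute, the expected posterior equals the prior, as conservation of expected evidence requires, yet the likely outcome is a small decrease in confidence.

    p_coming = 0.95      # assumed prior that a bus is coming at all
    minutes_left = 60    # if it is coming, arrival is uniform over the hour

    for minute in range(1, 6):
        p_bus_now = p_coming / minutes_left   # the bus arrives this very minute
        p_no_bus = 1 - p_bus_now
        # Posterior after another minute with no bus:
        p_if_no_bus = p_coming * (minutes_left - 1) / minutes_left / p_no_bus

        # Conservation of expected evidence: certainty (if the bus arrives) and
        # the slightly lowered belief (if it doesn't) average back to the prior.
        assert abs(p_bus_now * 1.0 + p_no_bus * p_if_no_bus - p_coming) < 1e-12

        p_coming, minutes_left = p_if_no_bus, minutes_left - 1
        print(f"minute {minute}: no bus yet, P(bus is coming) = {p_coming:.4f}")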

Comment by vladimir_nesov on "But It Doesn't Matter" · 2019-06-01T19:11:20.025Z · score: 2 (1 votes) · LW · GW

(See the edit in the grandparent, my initial interpretation of the post was wrong.)

Comment by vladimir_nesov on "But It Doesn't Matter" · 2019-06-01T08:22:31.076Z · score: 8 (6 votes) · LW · GW

With bounded attention, noticing less relevant things more puts you into a worse position to be aware of the worlds you actually live in.

Edit: Steven's comment gives a more central interpretation of the post, namely as a warning against the bailey/motte pair of effectively disbelieving something and defending that state of mind by claiming it's irrelevant. (I think the motte is also invalid, it's fine to be agnostic about things, even relevant ones. If hypothetically irrelevance argues against belief, it argues against disbelief as well.)

Comment by vladimir_nesov on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T07:24:59.777Z · score: 4 (2 votes) · LW · GW

If it assigns equal probability to everything then nothing can be evidence in favor of it

Nope. If it assigns more probability to an observation than another hypothesis does ("It's going to be raining tomorrow! Because AGI!"), then the observation is evidence for it and against the other hypothesis. (Of course, given how the actual world looks, anything that could be called "assigns equal probability to everything", whatever that means, is going to quickly lose to any sensible model of the world.)
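
With illustrative numbers (both likelihoods and prior odds are assumptions): even a hypothesis that spreads probability evenly over {rain, no rain} gains on a weather model that predicted a 10% chance of rain, once rain is observed; it just gains too little, too rarely, to survive against models that concentrate probability on what actually happens.

    p_rain_catchall = 0.5   # "equal probability to everything" over two outcomes
    p_rain_model = 0.1      # a sensible weather model (assumed number)
    prior_odds = 1 / 999    # the catch-all starts out very improbable (assumed)

    # Observing rain multiplies the odds by the likelihood ratio:
    bayes_factor = p_rain_catchall / p_rain_model   # 5.0, evidence *for* it
    posterior_odds = prior_odds * bayes_factor      # ~0.005, still very unlikely

    # On a dry day (the usual case) the factor is 0.5 / 0.9, i.e. against it,
    # so over many observations the concentrated model quickly wins.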

That said, I think being reasoned about instead of simulated really does have "infinite explanatory power", in the sense that you can't locate yourself in the world that does that based on an observation, since all observations are relevant to most situations where you are being reasoned about. So assigning probability to individual (categories of) observations is only (somewhat) possible for the instances of yourself that are simulated or exist natively in physics, not for instances that are reasoned about.

Comment by vladimir_nesov on Comment section from 05/19/2019 · 2019-05-26T06:21:45.603Z · score: 15 (4 votes) · LW · GW

I agree that it's possible for feelings to be relevant (or for factual beliefs to be relevant). But discouragement of discussion shouldn't be enacted through feelings, feelings should just be info that prompts further activity, which might have nothing to do with discouragement of discussion. So there is no issue with Vanessa's first comment and parts of the rest of the discussion that clarified the situation. A lot of the rest of it though wasn't constructive in building any sort of valid argument that rests on the foundation of info about feelings.

Comment by vladimir_nesov on Does the Higgs-boson exist? · 2019-05-25T06:44:32.593Z · score: 6 (3 votes) · LW · GW

"Shut up and calculate" no longer suffices when you want to figure out something about reality that is not about prediction of observations (or if you are interested in unusual kinds of reality, where even prediction of observations looks unlike it does in our world). So this concerns many philosophical questions, in particular decision theory (where you want to figure out what to do and how to think about what to do). The relationship with decision theory is the same as with physics: you want to replace reality with something more specific. But if you haven't found a sufficiently good replacement, forcing a bad replacement is worse than fumbling with the preformal idea of reality.

Comment by vladimir_nesov on Does the Higgs-boson exist? · 2019-05-25T05:31:04.227Z · score: 4 (2 votes) · LW · GW

"Shut up and calculate" serves as an excellent replacement for truth in most situations. It's about hypotheses/theories and observations, not reality, so within its area of applicability it makes the idea of reality irrelevant.

Comment by vladimir_nesov on Separation of Concerns · 2019-05-23T23:19:05.379Z · score: 4 (2 votes) · LW · GW

Ideas, beliefs, feelings, norms, and preferences can be elucidated/reformulated, ascertained/refuted, endorsed/eschewed, reinforced/depreciated, or specialized/coordinated, not just respectively but in many combinations. Typically all these distinctions are mixed together. Naming them with words whose meaning is already common knowledge might be necessary to cheaply draw attention to relevant distinctions. Otherwise it takes too long to frame the thought, at which point most people have already left the conversation.

Comment by vladimir_nesov on Comment section from 05/19/2019 · 2019-05-23T21:26:35.336Z · score: 4 (2 votes) · LW · GW

I don't get the impression Said was describing or channeling a particular culture. Rather, he was making specific observations that are primarily concerned with what they actually refer to. There is a difference between the hypothesis that such claims implicitly suggest a general point about some guiding principles that the author endorses, and the hypothesis that their style pattern-matches such a general point and thus perpetuates norms that you consider undesirable.

In the context where the resulting norms are undesirable, the impact of these options is the same, but arguments that are valid for them are different, which is important for nudging the style of discussion in the direction you hope for. To prevent a norm from taking hold, it's crucial to understand what the norm is, so that you won't unintentionally fuel it.

Comment by vladimir_nesov on Comment section from 05/19/2019 · 2019-05-23T20:43:49.085Z · score: 8 (4 votes) · LW · GW

no, this isn’t the default culture people should be expecting to be consensus on LW

I don't just agree with many of the things Said said in the comments here, but see them as undoubtedly correct, while other points are more debatable, so I second his question about specifics of this nefarious "this".

Comment by vladimir_nesov on A War of Ants and Grasshoppers · 2019-05-23T15:49:59.860Z · score: 4 (2 votes) · LW · GW

The thesis seems to be to caution against automatically blaming those who might've benefited from a disaster, as often enough things don't happen in a goal-directed fashion. ("To fathom a strange plot, one technique is to look at what ended up happening, assume it was the intended result, and ask who benefited.") Not sure this is a widespread enough heuristic to be worth reining in with unconditional advice.

Comment by vladimir_nesov on Schelling Fences versus Marginal Thinking · 2019-05-23T14:13:01.732Z · score: 2 (3 votes) · LW · GW

Cultivate [...] that on the margin you are always going to underestimate the value of long term investment in habits and virtue cultivation

Why though? Shouldn't you recalibrate immediately, to make this no longer predictable? Or is such recalibration the meaning of the quoted sentence? In that case, why phrase it so? It seems to risk overcorrection, or not noticing when the opposite advice becomes relevant, or else it requires undue caution in following your own advice, at which point it becomes a self-fulfilling flaw/advice combo. (Following the advice cautiously ensures that the flaw is not fully removed, and so the advice remains relevant.)

Comment by vladimir_nesov on Schelling Fences versus Marginal Thinking · 2019-05-23T14:02:30.763Z · score: 3 (3 votes) · LW · GW

I agree; it's just not the primary thing that's happening: the discipline's coercion (conflict between values and behavior) is more prominent than the reinforcement of values where the discipline becomes necessary (partially by definition, since where it works well, the discipline is not necessary after all). For this reason, it's misleading to characterise the effect of this policy as reinforcement of current values, though that probably happens as well. Not sure how that's balanced against rebellious urges.

(I disagree with my statements above in the thread in the context where preventing value drift is much more important than preventing suffering from coercion of behavior to unaligned values.)

Comment by vladimir_nesov on Schelling Fences versus Marginal Thinking · 2019-05-23T11:34:27.099Z · score: 4 (3 votes) · LW · GW

I would think that applying schelling fences to reinforce current values reduces the amount of expected drift in the future

It reinforces the position endorsed by current values, not the current values themselves. (I'm not saying this about Schelling fences in general, which have their uses, rather about leveraging of status quo and commitment norms via reliable application of simple rules, chosen to signal current (past, idealized) values.) This hurts people with future changed values without preventing the change in values.

what specifically you think is making the error of making it difficult to re-align with current values

The effect on prevention of change in values is negative only in the sense of opportunity cost and because of the possibility of confusing this activity for something useful, which inhibits seeking something actually useful. It's analogous to the issues caused by homeopathy. (Though I'm skeptical about value drift being harmful for humans.)

Comment by vladimir_nesov on Comment section from 05/19/2019 · 2019-05-23T09:54:48.144Z · score: 5 (3 votes) · LW · GW

Abstractions are a central example of things considered on the object level, so I don't understand them as being in opposition to the object level. They can be in opposition to more concrete ideas, those closer to experience, but not to being considered on object level.

Comment by vladimir_nesov on Comment section from 05/19/2019 · 2019-05-23T07:35:26.508Z · score: 7 (4 votes) · LW · GW

And the object level is what we're all doing this for, or what's the point?

What's the point of concrete ideas, compared to more abstract ideas? The reasons seem similar, just with different levels of grounding in experience, like with a filter bubble that you can only peer beyond with great difficulty. This situation is an argument against emphasis on the concrete, not for it.

(I think there's a mixup between "meta" and "abstract" in this subthread. It's meta that exists for the object level, not abstractions. Abstractions are themselves on object level when you consider them in their own right.)

Comment by vladimir_nesov on Schelling Fences versus Marginal Thinking · 2019-05-23T07:05:44.104Z · score: 2 (3 votes) · LW · GW

This runs the risk of denying that value drift has taken place instead of preventing value drift, creating ammunition for a conflict with your future self or future others instead of ensuring that your current self is in harmony with them. Some of the examples you cite and list seem to actually be making this error.

Comment by vladimir_nesov on Is value drift net-positive, net-negative, or neither? · 2019-05-05T17:11:14.911Z · score: 6 (3 votes) · LW · GW

This question requires distinguishing current values from idealized values, and values (in charge) of the world from values of a person. Idealized values are an unchanging and general way of judging situations (the world), including choices that take place there. Current values are an aspect of an actual agent (person) involved in current decisions; they are more limited in scope and can't accurately judge many things. By idealizing current values, we obtain idealized values that give a way in which the current values should function.

Most changes in current values change their idealization, but some changes that follow the same path as idealization don't, they only improve ability to judge things in the same idealized way. Value drift is a change in current values that changes their idealization. When current values disagree with idealized current values, their development without value drift eventually makes them agree, fixes their error. But value drift can instead change idealized values to better fit current values, calcifying the error.

Values in charge of the world (values of a singleton AI or of an agentic idealization of humanity) in particular direct what happens to people who live there. From the point of view of any idealized values, including idealized values of particular people (who can't significantly affect the world), it's the current values of the world that matter the most, because they determine what actually happens, and idealized values judge what actually happens.

Unless all people have the same idealized values, the values of the world are different from values of individual people, so value drift in values of the world can change what happens both positively and negatively according to idealized values of individual people. On the other hand, values of the world could approve of value drift in individual people (conflict between people, diversity of personal values over time, disruption of reflective equilibrium in people's reasoning), and so could those individual people, since their personal value drift won't disrupt the course of the world, which is what their idealized values judge. Note that idealized personal values approving of value drift doesn't imply that current personal values do. Finally, idealized values of the world disapprove of value drift in values of the world, since that actually would disrupt the course of the world.

Comment by vladimir_nesov on [Meta] Hiding negative karma notifications by default · 2019-05-05T15:33:11.403Z · score: 6 (5 votes) · LW · GW

In theory I agree with this, and it was my position several years ago. My own reaction is balanced in this sense; I perceive downvotes as interesting observations, not punishment. But several people I respect described their experience as remaining negative after all this time, so I suspect they can't easily change that response. You wouldn't occasionally throw spiders at someone with arachnophobia. Maybe you should shower them in spiders though; I heard that helps with desensitisation.

Comment by vladimir_nesov on Dishonest Update Reporting · 2019-05-04T23:50:26.603Z · score: 7 (2 votes) · LW · GW

The value of caring about informal reasoning is in training the same skills that apply for knowably important questions, and in seemingly unimportant details adding up in ways you couldn't plan for. Existence of a credible consensus lets you use a belief without understanding its origin (i.e. without becoming a world-class expert on it), so doesn't interact with those skills.

When correct disagreement of your own beliefs with consensus is useful at scale, it eventually shifts the consensus, or else you have a source of infinite value. So almost any method of deriving significant value from private predictions being better than consensus is a method of contributing knowledge to consensus.

(Not sure what you were pointing at, mostly guessing the topic.)

Comment by vladimir_nesov on Dishonest Update Reporting · 2019-05-04T19:31:17.849Z · score: 3 (2 votes) · LW · GW

By "informal" I meant that the belief is not on a prediction market, so you can influence consensus only by talking, without carefully keeping track of transactions. (I disagree with it being appropriate not to care about results in informal communication, so it's not a distinction I was making.)

Comment by vladimir_nesov on Dishonest Update Reporting · 2019-05-04T18:56:27.976Z · score: 7 (4 votes) · LW · GW

Holding to delivery is already familiar for informal communication. But short-term speculation is a different mode of contributing rare knowledge into consensus that doesn't seem to exist for discussions of beliefs that are not on prediction markets, and breaks many assumptions about how communication should proceed. In particular it puts into question the virtues of owning up to your predictions and of regularly publishing updated beliefs.

Comment by vladimir_nesov on Dishonest Update Reporting · 2019-05-04T16:04:38.882Z · score: 19 (4 votes) · LW · GW

In a prediction market your belief is not shared, but contributes to the consensus (market price of a futures). Many traders become agnostic about a question (close their position) before the underlying fact of the matter is revealed (delivery), perhaps shortly after stating the direction in which they expect the consensus to move (opening the position), to contribute (profit from) their rare knowledge while it remains rare. Requiring traders to own up to a prediction (hold to delivery) interferes with efficient communication of rare information into common knowledge (market price).
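
A stylized sketch with made-up prices, showing how closing a position early still pays for the information it contributed:

    market_price = 0.50   # consensus probability of the event
    my_estimate = 0.70    # the trader's rare knowledge says this is too low
    contracts = 100       # each contract pays 1 unit if the event happens

    cost = contracts * market_price     # opening the position states a direction

    # Other traders incorporate the now-partly-public information; price moves.
    later_price = 0.60

    proceeds = contracts * later_price  # closing the position, long before delivery
    profit = proceeds - cost            # 10.0: rare knowledge is paid for without
                                        # the trader ever owning up to a prediction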

So consider declaring that the consensus is shifting in a particular direction, without explaining your reasoning, and then shortly after bow out of the discussion (taking note of how the consensus shifted in the interim). This seems very strange when compared to common norms, but I think something in this direction could work.

Comment by vladimir_nesov on Habryka's Shortform Feed · 2019-05-04T14:55:03.000Z · score: 10 (2 votes) · LW · GW

That depends on what norm is in place. If the norm is to explain downvoting, then people should explain, otherwise there is no issue in not doing so. So the claim you are making is that the norm should be for people to explain. The well-known counterargument is that this disincentivizes downvoting.

you are under no obligation to waste cognition trying to figure them out

There is rarely an obligation to understand things, but healthy curiosity ensures progress on recurring events, irrespective of morality of their origin. If an obligation would force you to actually waste cognition, don't accept it!

Comment by vladimir_nesov on Functional Decision Theory vs Causal Decision Theory: Expanding on Newcomb's Problem · 2019-05-03T03:15:55.027Z · score: 10 (6 votes) · LW · GW

To make decisions, an agent needs to understand the problem, to know what's real and valuable that it needs to optimize. Suppose the agent thinks it's solving one problem, while you are fooling it in a way that it can't perceive, making its decisions lead to consequences that the agent can't (shouldn't) take into account. Then in a certain sense the agent acts in a different world (situation), in the world that it anticipates (values), not in the world that you are considering it in.

This is also the issue with CDT in Newcomb's problem: a CDT agent can't understand the problem, so when we test it, it's acting according to its own understanding of the world that doesn't match the problem. If you explain a reverse Newcomb's to an FDT agent (ensure that it's represented in it), so that it knows that it needs to act to win in the reverse Newcomb's and not in regular Newcomb's, then the FDT agent will two-box in regular Newcomb's problem, because it will value winning in reverse Newcomb's problem and won't value winning in regular Newcomb's.

Comment by vladimir_nesov on Speaking for myself (re: how the LW2.0 team communicates) · 2019-04-27T21:58:56.697Z · score: 2 (1 votes) · LW · GW

Within the hypothetical where the dimensions I suggest are better, fuzziness of upvote/downvote is better in the same way as uncertainty about facts is better than incorrect knowledge, even when the latter is easier to embrace than correct knowledge. In that hypothetical, moving from upvote/downvote to agree/disagree is a step in the wrong direction, even if the step in the right direction is too unwieldy to be worth making.

Comment by vladimir_nesov on Speaking for myself (re: how the LW2.0 team communicates) · 2019-04-27T19:53:34.920Z · score: 5 (2 votes) · LW · GW

I think both agree/disagree and approve/disapprove are toxic dimensions for evaluating quality discussions. Useful communication is about explaining and understanding relevant things, real-world truth and preference are secondary distractions. So lucid/confused (as opposed to clear/unclear) and relevant/misleading (as opposed to interesting/off-topic) seem like better choices.

Comment by vladimir_nesov on Agent Foundation Foundations and the Rocket Alignment Problem · 2019-04-09T16:48:27.192Z · score: 10 (5 votes) · LW · GW

At first glance, the current Agent Foundations work seems to be formal, but it's not the kind of formal where you work in an established setting. It's counterintuitive that people can doodle in math, but they can. There's a lot of that in physics and economics. Pre-formal work doesn't need to lack formality, it just doesn't follow a specific set of rules, so it can use math to sketch problems approximately the same way informal words sketch problems approximately.

Comment by vladimir_nesov on Would solving logical counterfactuals solve anthropics? · 2019-04-06T12:33:24.778Z · score: 4 (2 votes) · LW · GW

I consider questions of morality or axiology seperate from questions of decision theory.

The claim is essentially that specification of anthropic principles an agent follows belongs to axiology, not decision theory. That is, the orthogonality thesis applies to the distinction, so that different agents may follow different anthropic principles in the same way as different stuff-maximizers may maximize different kinds of stuff. Some things discussed under the umbrella of "anthropics" seem relevant to decision theory, such as being able to function with most anthropic principles, but not, say, choice between SIA and SSA.

(I somewhat disagree with the claim, as structuring values around instances of agents doesn't seem natural; maps/worlds are more basic than agents. But that is disagreement with emphasizing the whole concept of anthropics, perhaps even with emphasizing agents, not with where to put the concepts between axiology and decision theory.)

Comment by vladimir_nesov on Open Thread April 2019 · 2019-04-01T08:04:47.277Z · score: 11 (2 votes) · LW · GW

The issue is that GPT2 posts so much it drowns out everything else.

Comment by vladimir_nesov on Do you like bullet points? · 2019-03-26T21:11:33.435Z · score: 4 (2 votes) · LW · GW

An explanation communicates an idea without insisting on its relevance to reality (or some other topic). It's modular. You can then explain its relevance to reality, as another idea that reifies the relevance. Persuasion is doing both at the same time without making it clear. For example, you can explain how to think about a hypothesis to see what observations it predicts, without persuading that the hypothesis is likely.

Comment by vladimir_nesov on Privacy · 2019-03-19T04:54:30.696Z · score: 5 (3 votes) · LW · GW

Nod. I did actually consider a more accurate version of the comment that said something like "at least one of us is at least somewhat confused about something" [...]

The clarification doesn't address what I was talking about, or else disagrees with my point, so I don't see how that can be characterised with a "Nod". The confusion I refer to is about what the other means, with the question of whether anyone is correct about the world irrelevant. And this confusion is significant on both sides, otherwise a conversation doesn't go off the rails in this way. Paying attention to truth is counterproductive when intended meaning is not yet established, and you seem to be talking about truth, while I was commenting about meaning.

Comment by vladimir_nesov on Karma-Change Notifications · 2019-03-19T04:24:04.728Z · score: 4 (2 votes) · LW · GW

No ancient updates for the previous week, several for this week. An alternative to removing old notifications is to prepend entries in the list with recency, like "13d" or "8y", and sort by it.
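
A sketch of that alternative, with hypothetical data (recency_label is an illustrative helper, not anything from the site's codebase):

    from datetime import datetime, timezone

    def recency_label(when, now):
        # Compact age prefix for a notification entry, e.g. "13d" or "8y".
        days = (now - when).days
        return f"{days // 365}y" if days >= 365 else f"{days}d"

    now = datetime(2019, 3, 19, tzinfo=timezone.utc)
    updates = [datetime(2019, 3, 6, tzinfo=timezone.utc),
               datetime(2011, 5, 7, tzinfo=timezone.utc)]

    for when in sorted(updates, reverse=True):   # most recent first
        print(recency_label(when, now), when.date().isoformat())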

Comment by vladimir_nesov on Karma-Change Notifications · 2019-03-19T04:16:42.683Z · score: 2 (1 votes) · LW · GW

The reversal test is with respect to the norm, not with respect to ways of handling a fixed norm. So imagine that the norm is the opposite, and see what will happen. People will invent weird things like gauging popularity based on the number of downvotes, or the sum of absolute values of upvotes and downvotes, when there are not enough downvotes. This will work about as well as what happens with the present norm. In that context, the option of "only upvotes" looks funny and pointless, but we can see that it actually isn't, because we can look from the point of view of both possible norms.

When an argument goes through in the world of the opposite status quo, we can transport it to our world. In this case, we obtain the argument that "only downvotes" is not particularly funny and pointless, instead it's about as serviceable (or about as funny and pointless) as "only upvotes", and both are not very good.

Comment by vladimir_nesov on Privacy · 2019-03-19T03:59:52.945Z · score: 8 (4 votes) · LW · GW

I have some probability on me being the confused one here.

In conversations like this, both sides are confused, that is don't understand the other's point, so "who is the confused one" is already an incorrect framing. One of you may be factually correct, but that doesn't really matter for making a conversation work, understanding each other is more relevant.

(In this particular case, I think both of you are correct and fail to see what the other means, but Jessica's point is harder to follow and pattern-matches misleading things, hence the balance of votes.)

Comment by vladimir_nesov on How dangerous is it to ride a bicycle without a helmet? · 2019-03-10T00:32:19.855Z · score: 4 (2 votes) · LW · GW

Sure, for voting the effect on decision making is greater. I'm just suspicious of this whole idea of acausal impact, and moderate observations about effect size don't help with that confusion. I don't think it can apply to voting without applying to other things, so the quantitative distinction doesn't point in a particular direction on correctness of the overall idea.

Comment by vladimir_nesov on How dangerous is it to ride a bicycle without a helmet? · 2019-03-09T23:35:36.030Z · score: 2 (1 votes) · LW · GW

New information argues for a change on the margin, so the new equilibrium is different, though it may not be far away. The arguments are not "cancelled out", but they do only have bounded impact. Compare with charity evaluation in effective altruism: if we take the impact of certain decisions as sufficiently significant, it calls for their organized study, so that the decisions are no longer made based on first impressions. On the other hand, if there is already enough infrastructure for making good decisions of that type, then significant changes are unnecessary.

In the case of acausal impact, large reference classes imply that at least that many people are already affected, so if organized evaluation of such decisions is feasible to set up, it's probably already in place without any need for the acausal impact argument. So actual changes are probably in how you pay attention to info that's already available, not in creating infrastructure for generating better info. On the other hand, a source of info about sizes of reference classes may be useful.

Comment by vladimir_nesov on How dangerous is it to ride a bicycle without a helmet? · 2019-03-09T23:12:49.041Z · score: 6 (3 votes) · LW · GW

The absolute size of a reference class only gives the problem statement for an individual decision some altruistic/paternalistic tilt, which can fail to change it. Greater relative size of a reference class increases the decision's relative importance compared to other decisions, which on the margin should pull some effort away from the other decisions.

That the effective multiplier due to acausal coordination is smaller for non-voting decisions doesn't inform the question of whether the argument applies to non-voting decisions. The argument may be ignored in the decision algorithm only if the reference class is always small or about the same size for different decisions.

Comment by vladimir_nesov on How dangerous is it to ride a bicycle without a helmet? · 2019-03-09T20:03:55.150Z · score: 2 (1 votes) · LW · GW

That influences sizes of reference classes, but at some point the sizes cash out in morally relevant object level decisions.

No Anthropic Evidence · 2012-09-23T10:33:06.994Z · score: 10 (15 votes)
A Mathematical Explanation of Why Charity Donations Shouldn't Be Diversified · 2012-09-20T11:03:48.603Z · score: 2 (25 votes)
Consequentialist Formal Systems · 2012-05-08T20:38:47.981Z · score: 12 (13 votes)
Predictability of Decisions and the Diagonal Method · 2012-03-09T23:53:28.836Z · score: 21 (16 votes)
Shifting Load to Explicit Reasoning · 2011-05-07T18:00:22.319Z · score: 15 (21 votes)
Karma Bubble Fix (Greasemonkey script) · 2011-05-07T13:14:29.404Z · score: 23 (26 votes)
Counterfactual Calculation and Observational Knowledge · 2011-01-31T16:28:15.334Z · score: 11 (22 votes)
Note on Terminology: "Rationality", not "Rationalism" · 2011-01-14T21:21:55.020Z · score: 31 (41 votes)
Unpacking the Concept of "Blackmail" · 2010-12-10T00:53:18.674Z · score: 25 (34 votes)
Agents of No Moral Value: Constrained Cognition? · 2010-11-21T16:41:10.603Z · score: 6 (9 votes)
Value Deathism · 2010-10-30T18:20:30.796Z · score: 26 (48 votes)
Recommended Reading for Friendly AI Research · 2010-10-09T13:46:24.677Z · score: 29 (32 votes)
Notion of Preference in Ambient Control · 2010-10-07T21:21:34.047Z · score: 14 (19 votes)
Controlling Constant Programs · 2010-09-05T13:45:47.759Z · score: 25 (38 votes)
Restraint Bias · 2009-11-10T17:23:53.075Z · score: 16 (21 votes)
Circular Altruism vs. Personal Preference · 2009-10-26T01:43:16.174Z · score: 11 (17 votes)
Counterfactual Mugging and Logical Uncertainty · 2009-09-05T22:31:27.354Z · score: 10 (13 votes)
Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds · 2009-08-16T16:06:18.646Z · score: 20 (22 votes)
Sense, Denotation and Semantics · 2009-08-11T12:47:06.014Z · score: 9 (16 votes)
Rationality Quotes - August 2009 · 2009-08-06T01:58:49.178Z · score: 6 (10 votes)
Bayesian Utility: Representing Preference by Probability Measures · 2009-07-27T14:28:55.021Z · score: 33 (18 votes)
Eric Drexler on Learning About Everything · 2009-05-27T12:57:21.590Z · score: 31 (36 votes)
Consider Representative Data Sets · 2009-05-06T01:49:21.389Z · score: 6 (11 votes)
LessWrong Boo Vote (Stochastic Downvoting) · 2009-04-22T01:18:01.692Z · score: 3 (30 votes)
Counterfactual Mugging · 2009-03-19T06:08:37.769Z · score: 56 (76 votes)
Tarski Statements as Rationalist Exercise · 2009-03-17T19:47:16.021Z · score: 11 (21 votes)
In What Ways Have You Become Stronger? · 2009-03-15T20:44:47.697Z · score: 26 (28 votes)
Storm by Tim Minchin · 2009-03-15T14:48:29.060Z · score: 15 (22 votes)