Posts

No Anthropic Evidence 2012-09-23T10:33:06.994Z · score: 10 (15 votes)
A Mathematical Explanation of Why Charity Donations Shouldn't Be Diversified 2012-09-20T11:03:48.603Z · score: 2 (25 votes)
Consequentialist Formal Systems 2012-05-08T20:38:47.981Z · score: 12 (13 votes)
Predictability of Decisions and the Diagonal Method 2012-03-09T23:53:28.836Z · score: 21 (16 votes)
Shifting Load to Explicit Reasoning 2011-05-07T18:00:22.319Z · score: 15 (21 votes)
Karma Bubble Fix (Greasemonkey script) 2011-05-07T13:14:29.404Z · score: 23 (26 votes)
Counterfactual Calculation and Observational Knowledge 2011-01-31T16:28:15.334Z · score: 11 (22 votes)
Note on Terminology: "Rationality", not "Rationalism" 2011-01-14T21:21:55.020Z · score: 31 (41 votes)
Unpacking the Concept of "Blackmail" 2010-12-10T00:53:18.674Z · score: 25 (34 votes)
Agents of No Moral Value: Constrained Cognition? 2010-11-21T16:41:10.603Z · score: 6 (9 votes)
Value Deathism 2010-10-30T18:20:30.796Z · score: 26 (48 votes)
Recommended Reading for Friendly AI Research 2010-10-09T13:46:24.677Z · score: 29 (32 votes)
Notion of Preference in Ambient Control 2010-10-07T21:21:34.047Z · score: 14 (19 votes)
Controlling Constant Programs 2010-09-05T13:45:47.759Z · score: 25 (38 votes)
Restraint Bias 2009-11-10T17:23:53.075Z · score: 16 (21 votes)
Circular Altruism vs. Personal Preference 2009-10-26T01:43:16.174Z · score: 11 (17 votes)
Counterfactual Mugging and Logical Uncertainty 2009-09-05T22:31:27.354Z · score: 10 (13 votes)
Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds 2009-08-16T16:06:18.646Z · score: 20 (22 votes)
Sense, Denotation and Semantics 2009-08-11T12:47:06.014Z · score: 9 (16 votes)
Rationality Quotes - August 2009 2009-08-06T01:58:49.178Z · score: 6 (10 votes)
Bayesian Utility: Representing Preference by Probability Measures 2009-07-27T14:28:55.021Z · score: 33 (18 votes)
Eric Drexler on Learning About Everything 2009-05-27T12:57:21.590Z · score: 31 (36 votes)
Consider Representative Data Sets 2009-05-06T01:49:21.389Z · score: 6 (11 votes)
LessWrong Boo Vote (Stochastic Downvoting) 2009-04-22T01:18:01.692Z · score: 3 (30 votes)
Counterfactual Mugging 2009-03-19T06:08:37.769Z · score: 56 (76 votes)
Tarski Statements as Rationalist Exercise 2009-03-17T19:47:16.021Z · score: 11 (21 votes)
In What Ways Have You Become Stronger? 2009-03-15T20:44:47.697Z · score: 26 (28 votes)
Storm by Tim Minchin 2009-03-15T14:48:29.060Z · score: 15 (22 votes)

Comments

Comment by vladimir_nesov on A Critique of Functional Decision Theory · 2019-09-15T14:11:22.815Z · score: 10 (2 votes) · LW · GW

By the way, selfish values seem related to the reward vs. utility distinction. An agent that pursues a reward that's about particular events in the world rather than a more holographic valuation seems more like a selfish agent in this sense than a maximizer of a utility function with a small-in-space support. If a reward-seeking agent looks for reward channel shaped patterns instead of the instance of a reward channel in front of it, it might tile the world with reward channels or search the world for more of them or something like that.

Comment by vladimir_nesov on Proving Too Much (w/ exercises) · 2019-09-15T13:54:43.526Z · score: 3 (2 votes) · LW · GW

"I think, therefore I am."

(This is also incorrect, because considering a thinking you in a counterfactual makes sense. Many UDTish examples demonstrate that this principle doesn't hold.)

Comment by vladimir_nesov on Formalising decision theory is hard · 2019-09-14T18:38:24.339Z · score: 2 (1 votes) · LW · GW

I was never convinced that "logical ASP" is a "fair" problem. I once joked with Scott that we can consider a "predictor" that is just the single line of code "return DEFECT" but in the comments it says "I am defecting only because I know you will defect."

I'm leaning this way as well, but I think it's an important clue to figuring out commitment races. ASP Predictor, DefectBot, and a more general agent will make different commitments, and these things are already algorithms specialized for certain situations. How is the chosen commitment related to what the thing making the commitment is?

When an agent can manipulate a predictor in some sense, what should the predictor do? If it starts scheming with its thoughts, it's no longer a predictor, it's just another agent that wants to do something "predictory". Maybe it can only give up, as in ASP, which acts as a precommitment that's more thematically fitting for a predictor than for a general agent. It's still a commitment race then, but possibly the meaning of something being a predictor is preserved by restricting the kind of commitment that it makes: the commitment of a non-general agent is what it is rather than what it does, and a general agent is only committed to its preference. Thus a general agent loses all knowledge in an attempt to out-commit others, because it hasn't committed to that knowledge, didn't make it part of what it is.

Comment by vladimir_nesov on G Gordon Worley III's Shortform · 2019-09-13T16:48:51.968Z · score: 9 (4 votes) · LW · GW

(By "belief" I meant a belief that talkes place in someone's head, and its existence is not necessarily communicated to anyone else. So an uttered statement "I think X" is a declaration of belief in X, not just a belief in X. A belief in X is just a fact about that person's mind, without an accompanying declaration. In this framing, the version of the norm about beliefs (as opposed to declarations) is the norm not to think certain thoughts, not a norm to avoid sharing the observations about the fact that you are thinking them.)

I think drawing a salient distinction between declarations of "I think X" and "it's true that X" is a bad thing, as described in this comment. The distinction is that in the former case you might lack arguments for the belief. But if you don't endorse the belief, it's no longer a belief, and "I think X" is a bug in the mind that shouldn't be called "belief". If you do endorse it, then "I think X" does mean "X". It is plausibly a true statement about the state of the universe, you just don't know why; your mind inscrutably says that it is and you are inclined to believe it, pending further investigation.

So the statement "I think this is true of other people in spite of their claims to the contrary" should mean approximately the same as "This is true of other people in spite of their claims to the contrary", and a meaningful distinction only appears with actual arguments about those statements, not with different placement of "I think".

Comment by vladimir_nesov on G Gordon Worley III's Shortform · 2019-09-12T15:55:52.237Z · score: 10 (5 votes) · LW · GW

criticizing people who don't justify their beliefs with adequate evidence and arguments

I think justification is in the nature of arguments, but not necessary for beliefs or declarations of beliefs. A belief offered without justification is a hypothesis called to attention. It's concise, and if handled carefully, it can be sufficient for communication. As evidence, it's a claim about your own state of mind, which holds a lot of inscrutable territory that nonetheless can channel understanding that doesn't yet lend itself to arguments. Seeking arguments is certainly a good thing, to refactor and convey beliefs, but that's only a small part of how human intelligence builds its map.

Comment by vladimir_nesov on G Gordon Worley III's Shortform · 2019-09-12T14:33:59.451Z · score: 6 (3 votes) · LW · GW

there is absolutely a time and a place for this

That's not the point! Zack is talking about beliefs, not their declaration, so it's (hopefully) not the case that there is "a time and a place" for certain beliefs (even when they are not announced), or that beliefs require ability and willingness to justify them (at least for some senses of "justify" and "belief").

Comment by vladimir_nesov on G Gordon Worley III's Shortform · 2019-09-12T14:27:18.916Z · score: 11 (6 votes) · LW · GW

So it's a moral principle under the belief vs. declaration distinction (as in this comment). In that case I mostly object to not making that distinction (a norm to avoid beliefs of that form is on an entirely different level than a norm to avoid their declarations).

Personally I don't think the norm about declarations is a good thing on net, especially on LW, as it inhibits talking about models of thought. The examples you mentioned are important but should be covered by a more specialized norm that doesn't cause as much collateral damage.

Comment by vladimir_nesov on G Gordon Worley III's Shortform · 2019-09-12T02:38:31.260Z · score: 7 (4 votes) · LW · GW

That's one way for my comment to be wrong, as in "Systematic recurrence of preventable epistemic errors is morally abhorrent."

When I was writing the comment, I was thinking of another way it's wrong: given the morality vs. axiology distinction, and the distinction between belief and disclosure of that belief, it might well be the case that it's a useful moral principle to avoid declaring beliefs about what others think, especially when those others disagree with the declarations. In that case it's a violation of this principle, a moral wrong, to declare such beliefs. (A principle like this gets in the way of honesty, so promoting it is contentious and shouldn't be an implicit background assumption. And the distinction between belief and its declaration was not clearly made in the above discussion.)

Comment by vladimir_nesov on G Gordon Worley III's Shortform · 2019-09-12T01:30:38.792Z · score: 19 (7 votes) · LW · GW

[He] does and will regularly decide that he knows better than other people what's going on in those other people's heads. [...] Personally, I find it unjustifiable and morally abhorrent.

How can it be morally abhorrent? It's an epistemic issue. Factual errors often lead to bad consequences, but that doesn't make those errors moral errors. A moral error is an error about a moral fact, an assignment of value to situations, as opposed to a prediction of what's going on. And what someone thinks is a factual question, not a question of assigning value to an event.

Comment by vladimir_nesov on The 3 Books Technique for Learning a New Skilll · 2019-09-09T22:29:55.481Z · score: 3 (2 votes) · LW · GW

SICP is a "Why" book, one of the few timeless texts on the topic. It's subsumed by studying any healthy functional programming language to a sufficient extent (idiomatic use of control operator libraries, not just syntax), but it's more straightforward to start with reading the book.

Comment by vladimir_nesov on Open & Welcome Thread - September 2019 · 2019-09-09T22:02:04.202Z · score: 4 (2 votes) · LW · GW

Not unless the traffic increases severalfold, to the point where it would be too much trouble to even skim everything. Skimming the content can turn up interesting things from unfamiliar authors under uninterestingly titled topics, and this can't be recovered by going subscription-only.

Comment by vladimir_nesov on Open & Welcome Thread - September 2019 · 2019-09-09T21:51:11.226Z · score: 2 (1 votes) · LW · GW

Most users don't read through every single comment. [...] dedicated power-user-comment readers [...]

This is not my use case. I mostly skim based on the author, post title, and votes. I don't want to miss certain things, but I'm also completely ignoring (not reading) most discussions.

Comment by vladimir_nesov on Open & Welcome Thread - September 2019 · 2019-09-09T20:52:29.240Z · score: 4 (2 votes) · LW · GW

Does the current LW design let one find the All Comments page, or is this feature no longer intended to be used? I couldn't find any mention of it. I'm a bit worried, since this is the main way in which I've always interacted with the site. (Thankfully this is available on GreaterWrong as a primary feature.)

(Incidentally, an issue I have with the current implementation of All Comments is that negatively voted comments disappear and there is no way of getting them to show. IIRC they used to show in a collapsed form, but now they are just absent. A harder-to-settle and less important issue is that reading multiple days' worth of comments is inconvenient, because there are no URLs for enumerating pages of older comments. GreaterWrong has neither of these issues.)

Comment by vladimir_nesov on Looking for answers about quantum immortality. · 2019-09-09T20:19:03.185Z · score: 2 (1 votes) · LW · GW

To be clear: your argument is that every human being who has ever lived may suffer eternally after death, and there are good reasons for not caring...?

It's not my argument, but it follows from what I'm saying, yes. Even if people should care about this, there are probably good reasons not to, just not good enough to tilt the balance. There are good reasons for all kinds of wrong conclusions, it should be suspicious when there aren't. Note that caring about this too much is the same as caring about other things too little. Also, as an epistemic principle, appreciation of arguments shouldn't depend on consequences of agreeing with them.

How does our subjective suffering improve anything in the worlds where you die?

Focusing effort on the worlds where you'll eventually die (as well as the worlds where you survive in a normal non-QI way) improves them at the cost of neglecting the worlds where you eternally suffer for QI reasons.

Comment by vladimir_nesov on Looking for answers about quantum immortality. · 2019-09-09T18:58:39.025Z · score: 3 (3 votes) · LW · GW

You, as in the person you are right now, is going to experience that.

This has the same issue with "is going to experience" as the "you will always find" I talked about in my first comment.

Not a infinitesimal proportion of other 'yous' while the majority die. Your own subjective experience, 100% of it.

Yes. All of the surviving versions of myself will experience their survival. This happens with extremely small probability. I will experience nothing else. The rest of the probability goes to the worlds where there are no surviving versions of myself, and I won't experience those worlds. But I still value those worlds more than the worlds that have surviving versions of myself. The things that happen to all of my surviving subjective experiences matter less to me than the things that I won't experience happening in the other worlds. Furthermore, I believe that not as a matter of unusual personal preference, but for general reasons about the structure of valuing of things that I think should convince most other people, see the links in the above comments.

Comment by vladimir_nesov on Looking for answers about quantum immortality. · 2019-09-09T17:06:53.861Z · score: 3 (3 votes) · LW · GW

I don't see how it refutes the possibility of QI, then.

See the context of that phrase. I don't see how it could be about "refuting the possibility of QI". (What is "the possibility of QI"? I don't find anything wrong with QI scenarios themselves, only with some arguments about them, in particular the argument that their existence has decision-relevant implications because of conditioning on subjective experience. I'm not certain that they don't have decision-relevant implications that hold for other reasons.)

[We] (as in our internal subjective experience) will continue on only in branches where we stay alive.

This seems tautologously correct. See the points about moral value in the grandparent comment and in the rest of this comment for what I disagree with, and why I don't find this statement relevant.

Since I care about my subjective internal experience, I wouldn't want it to suffer

Neither would I. But this is not all that people care about. We also seem to care about what happens outside our subjective experience, and in quantum immortality scenarios that component of value (things that are not personally experienced) is dominant.

Comment by vladimir_nesov on Looking for answers about quantum immortality. · 2019-09-09T15:56:34.318Z · score: 3 (3 votes) · LW · GW

Nothing is technically impossible with quantum mechanics.

By "essentially impossible" I meant "extremely improbable". The word "essentially" was meant to distinguish this from "physically impossible".

You're not understanding that all of our measure is going into those branches where we survive.

There is a useful distinction between knowing the meaning of an idea and knowing its truth. I'm disagreeing with the claim that "all of our measure is going into those branches where we survive", understood in the sense that only those branches have moral value (see What Are Probabilities, Anyway?), in particular the other branches taken together have less value. See the posts linked from the grandparent comment for a more detailed discussion (I've edited it a bit).

This meaning could be different from the one you intend, in which case I'm not understanding your claim correctly, and I'm only disagreeing with my incorrect interpretation of it. But in that case what I'm failing to understand is what you mean by "all of our measure is going into those branches where we survive", not the truth of "all of our measure is going into those branches where we survive" in the sense you intend, because the latter would require me to know the intended meaning of that claim first, at which point it becomes possible for me to fail to understand its truth.

Comment by vladimir_nesov on Looking for answers about quantum immortality. · 2019-09-09T15:33:01.752Z · score: 2 (1 votes) · LW · GW

Hence you will always find your subjective self in that improbable branch.

The meaning of "you will always find" has a connotation of certainty or high probability, but we are specifically talking about essentially impossible outcomes. This calls for tabooing "you will always find" to reconcile an intended meaning with extreme improbability of the outcome. Worrying about such outcomes might make sense when they are seen as a risk on the dust speck side of Torture vs. Dust Specks (their extreme disutility overcomes their extreme improbability). But conditioning on survival seems to be a wrong way of formulating values (see also), because the thing to value is the world, not exclusively subjective experience, even if subjective experience manages to get significant part of that value.

Comment by vladimir_nesov on A Game of Giants [Wait But Why] · 2019-09-06T05:28:32.675Z · score: 2 (1 votes) · LW · GW

The likelihood of arriving at anything like the right answer seems low

In the useful version of this activity, arriving at right answers is not relevant. Instead, you are collecting tools for thinking about a topic, which is mostly about being able to hold and manipulate ideas in your mind, including incorrect ideas. At some point, you get to use those tools to understand what others have figured out, or what's going on in the real world. This framing opposes the failure mode where you learn facts without being able to grasp what they even mean or why they hold.

Comment by vladimir_nesov on Machine Learning Analogy for Meditation (illustrated) · 2019-08-13T16:15:02.000Z · score: 4 (2 votes) · LW · GW

thoughts being the cause of actions is related to a central strategy of many people around here

(It's a good reason to at least welcome arguments against this being the case. If your central strategy is built on a false premise, you should want to know. It might be pointless to expect any useful info in this direction, but I think it's healthier to still want to see it emotionally even when you decide that it's not worth your time to seek it out.)

Comment by vladimir_nesov on Open & Welcome Thread - August 2019 · 2019-08-13T11:01:05.276Z · score: 3 (2 votes) · LW · GW

Another important takeaway from this observation is that there is no point in rebalancing a portfolio of stocks with any regularity, which makes hand-made stock portfolios almost as efficient (in hassle and expenses) as index ETFs. Rebalancing is only useful to keep it reasonably diversified and to get rid of stocks that risk reduced liquidity. This is how index ETFs fall short of the mark of what makes them a good idea: a better ETF should stop following the distribution of an index and only use an index as a catalogue of liquid stocks. Given how low TERs get in large funds, this doesn't really matter, and accountability/regulation is easier when keeping to the distribution from an index, but smaller funds could get lower TERs by following this strategy while retaining all benefits (except the crucial marketing benefit of being able to demonstrate how their performance keeps up with an index). For the same reason, cap-weighted index ETFs are worse by being less diversified (which actually makes some index ETFs that hold a lot of stocks a bad choice), while equal-weight ETFs are worse by rebalancing all the time (to the point where they can't get a low TER at all).

Aside from that, a very low TER index (that's not too unbalanced due to cap-weighting) is more diversified than a hand-made portfolio with 30 stocks, without losing expected money, so one can use a bit more leverage with it to get a similar risk profile with a bit more expected money (leveraged ETFs are hard to judge, but one could make a portfolio that holds some non-leveraged index ETFs in a role similar to bonds in a conservative allocation, i.e. as the lower-risk part, and some self-contained leveraged things in the rest of it).

There might also be tax benefits to how an index fund handles dividends, getting more expected money than holding the same stocks directly (not sure if this happens in the US, or how it depends on tax brackets). Similarly, stocks that pay no dividends might be better for a hand-made portfolio (and there is less hassle with receiving/reinvesting dividends or having to personally declare taxes for them if they are not automatically withheld higher in the chain in your jurisdiction).

you could beat an index fund

(Replying to the phrase, not its apparent meaning in context.) All liquid stocks give the same expected money as each other, and the same as all indices composed of them. Different distributions of stocks will have different actual outcomes, some greater than others. So of course one can beat an index in an actual outcome (this will happen roughly half the time). A single leveraged stock gives more expected money than any non-leveraged index fund (or any non-leveraged stock), yet makes a very poor investment, which illustrates that beating an index in expectation is also not what anyone's after.
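
To illustrate that last point with a toy simulation (all numbers here are my own assumptions, not anything rigorous): a single 2x-leveraged stock can have several times the mean terminal wealth of a diversified basket while having a much worse median outcome.

    import numpy as np

    # Toy model (assumed numbers): i.i.d. yearly stock returns over a 30-year horizon.
    rng = np.random.default_rng(0)
    n_paths, years, n_stocks = 10_000, 30, 30
    mu, sigma = 0.05, 0.20  # assumed per-year drift and volatility of each stock
    r = rng.normal(mu, sigma, size=(n_paths, years, n_stocks))

    # 2x-leveraged single stock, rebalanced yearly; wiped out if the stock halves in a year.
    leveraged = np.prod(np.maximum(1 + 2 * r[:, :, 0], 0.0), axis=1)
    # Equal-weighted basket of 30 such stocks, rebalanced yearly.
    basket = np.prod(1 + r.mean(axis=2), axis=1)

    for name, wealth in [("2x single stock", leveraged), ("30-stock basket", basket)]:
        print(f"{name}: mean {wealth.mean():.1f}x, median {np.median(wealth):.1f}x")

The leveraged position wins on the mean, the basket wins on the median (and on risk of ruin), which is the sense in which more expected money by itself doesn't make a good investment.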

Comment by vladimir_nesov on Power Buys You Distance From The Crime · 2019-08-12T10:40:51.377Z · score: 2 (1 votes) · LW · GW

That's relevant to the example, but not to the argument. Consider a hypothetical Jessica less interested in conflict theory or a topic other than conflict theory. Also, common knowledge doesn't seem to play a role here, and "doesn't know about" is a level of taboo that contradicts the assumption I posited about the argument from selection effect being "well-known".

Comment by vladimir_nesov on Power Buys You Distance From The Crime · 2019-08-12T09:07:13.790Z · score: 2 (3 votes) · LW · GW

Would you correct your response so? (Should you?) If the target audience tends to act similarly, so would they.

Aside from that, "How do you explain X?" is really ambiguous and anchors on well-understood rather than apt framing. "Does mistake theory explain this case well?" is better, because you may well use a bad theory to think about something while knowing it's a bad theory for explaining it. If it's the best you can do, at least this way you have gears to work with. Not having a counterfactually readily available good theory because it's taboo and wasn't developed is of course terrible, but it's not a reason to embrace the bad theory as correct.

Comment by vladimir_nesov on Power Buys You Distance From The Crime · 2019-08-12T00:27:51.281Z · score: 2 (1 votes) · LW · GW

Is "well-known" good enough here, or do you actually need common knowledge?

There is no need for coordination or dependence on what others think. If you expect yourself to be miscalibrated, you just fix that. If most people act this way and accept the argument that convinced you, then you expect them to have done the same.

Comment by vladimir_nesov on Power Buys You Distance From The Crime · 2019-08-11T23:15:27.384Z · score: 12 (4 votes) · LW · GW

But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion on the community's beliefs.

Expected infrequent discussion of a theory shouldn't lower estimates of its probability. (Does the intuition that such theories should be seen as less likely follow from most natural theories predicting discussion of themselves? Erroneous theorizing also predicts that, for example "If this statement is correct, it will be the only topic of all future discussions.")

In general, it shouldn't be possible to expect well-known systematic distortions for any reason, because they should've been recalibrated away immediately. What not discussing a theory should cause is lack of precision (or progress), not systematic distortion.

Comment by vladimir_nesov on Karma-Change Notifications · 2019-08-09T09:26:17.917Z · score: 2 (1 votes) · LW · GW

I think updates were less frequent recently (e.g. zero updates from last week). This should still happen for some people, though there are maybe only about a hundred users with a similar number of ancient comments.

Comment by vladimir_nesov on Compilers/PLs book recommendation? · 2019-08-06T04:01:25.564Z · score: 4 (2 votes) · LW · GW

A good selection of topics on static analysis is in

  • F Nielson, HR Nielson, C Hankin. Principles of Program Analysis

Some prerequisites for it and other relevant things can be picked up from

  • K Cooper, L Torczon. Engineering a Compiler
  • HR Nielson, F Nielson. Semantics with Applications: An Appetizer

Comment by vladimir_nesov on Do bond yield curve inversions really indicate there is likely to be a recession? · 2019-07-10T06:26:08.019Z · score: 4 (2 votes) · LW · GW

More generally, market timers lose hard.

Do you mean they make a portfolio that's too conservative, so that lost money becomes lost utility? Or do they lose in some other way? (This sounds superficially similar to claims that there are stock/futures trading strategies that systematically lose money other than on fees or spread, which I think can't happen because the opposite strategies would then systematically make money.)

Comment by vladimir_nesov on Black hole narratives · 2019-07-09T04:02:48.292Z · score: 2 (1 votes) · LW · GW

From the other side, agreement is often not real. People agree out of politeness, or agree with a distorted (perhaps more salient but less relevant to the discussion) version of a claim, without ensuring it's not a bucket error. There's some use in keeping beliefs unchanged, but not in failing to understand the meaning of the claim under discussion (when it has one). So agreement (especially your own, as it's easier to fix) should be treated with scepticism.

Comment by vladimir_nesov on What are good resources for learning functional programming? · 2019-07-05T05:55:38.906Z · score: 3 (2 votes) · LW · GW

My impression is that there are no books that even jointly give a reasonable selection of what you'd need to master the main ideas in (typed) functional programming. Most of this stuff is in papers.

Comment by vladimir_nesov on Causal Reality vs Social Reality · 2019-07-05T00:53:02.171Z · score: 6 (3 votes) · LW · GW

Upon reflection, my answer is that I do endorse it on many occasions

The salient question is whether it's a good idea to respond to possible attacks in a direct fashion. Situations that can be classified as attacks (especially in a sense that allows the attacker to remain unaware of this fact) are much more common.

Comment by vladimir_nesov on Causal Reality vs Social Reality · 2019-07-04T22:06:03.145Z · score: 35 (8 votes) · LW · GW

But if you don’t endorse this reaction—then deal with it yourself.

I agree with the above two comments (Vaniver's and yours) except for a certain connotation of this point. Rejection of own defensiveness does not imply endorsement of insensitivity to tone. I've been making this error in modeling others until recently, and I currently cringe at many of my "combative" comments and forum policy suggestions from before 2014 or so. In most cases defensiveness is flat wrong, but so is not optimizing towards keeping the conversation comfortable. It's tempting to shirk that responsibility in the name of avoiding the danger of compromising the signal with polite distortions. But there is a lot of room for safe optimization in that direction, and making sure people are aware of this is important. "Deal with it yourself" suggests excluding this pressure. Ten years ago, I would have benefitted from it.

Comment by vladimir_nesov on What does the word "collaborative" mean in the phrase "collaborative truthseeking"? · 2019-06-26T15:26:24.001Z · score: 9 (2 votes) · LW · GW

Here's the link, found from my account by searching for "collaborative truthseeking". There is a "Posts from Anyone/You/Your Friends" radio control on the left of the search page, so it should probably work on your own posts as well.

Comment by vladimir_nesov on Upcoming stability of values · 2019-06-22T21:21:21.722Z · score: 6 (3 votes) · LW · GW

These "meta-values" you mention are just values applied to appraisal of values. So in these terms it's possible to have values about meta-values and to value change in meta-values. Value drift becomes instrumentally undesirable with greater power over the things you value, and this argument against value drift is not particularly sensitive to your values (or "meta-values"). Even if you prefer for your values to change according to their "natural evolution", it's still useful for them not to change. For change in values to be a good decision, you need to see value drift as more terminally valuable than its opportunity cost (decrease in value of the future according to your present values, in the event that your present values undergo value drift).

Comment by vladimir_nesov on Let Values Drift · 2019-06-21T00:42:46.317Z · score: 5 (5 votes) · LW · GW

The strongest argument against value drift (meaning the kind of change in current values that involves change in idealized values) is the instrumental usefulness of future values that pursue idealized present values. This says nothing about the terminal value of value drift, and a priori we should expect that people hold the presence of value drift as a terminal value, because there is no reason for the haphazard human values to single out the possibility of zero value drift as most valuable. Value drift is just another thing that happens in the world, like kittens. Of course valuable value drift must observe proper form even as it breaks idealized values, since most changes are not improvements.

The instrumental argument is not that strong when your own personal future values don't happen to control the world. So the argument still applies to AIs that have significant influence over what happens in the future, but not to ordinary people, especially not to people whose values are not particularly unusual.

Comment by vladimir_nesov on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-11T23:21:28.503Z · score: 2 (1 votes) · LW · GW

Thanks!

Comment by vladimir_nesov on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-11T23:12:24.623Z · score: 6 (3 votes) · LW · GW

Sure, any resource would do (like copper coins needed to pay for maintenance of equipment), or just generic "units of resources". My concern is that the term "utility" is used incorrectly in an otherwise excellent post that is tangentially related to the topic where its technical meaning matters, potentially propagating this popular misreading.

Comment by vladimir_nesov on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-11T21:33:02.314Z · score: 6 (3 votes) · LW · GW

Utility is not a resource. In the usual expected utility setting, utility functions don't care about positive affine transformations (all decisions remain the same); for example, decreasing all utilities by 1000 doesn't change anything, and there is no significance to utility being positive vs. negative. So requirements to have at least five units of utility in order to play a round of a game shouldn't make sense.

In this post, there is a resource whose utility could maybe possibly be given by the identity function. But in that case it could also be given by the "times two minus 87" utility function and lead to the same decisions. It's not really clear that the utility function is the identity, since a 50/50 lottery between ending up with zero or ten units of the resource seems much worse than the certainty of obtaining five units. ("If you’re Elliott, [zero] is a super scary result to imagine.")
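
As a small sketch of the affine-invariance point (the lotteries are made up): rescaling a utility function by "times two minus 87" changes all the numbers but none of the choices.

    # Made-up lotteries: lists of (probability, units of the resource) pairs.
    lotteries = {
        "long shot":    [(0.9, 1), (0.1, 30)],
        "certain five": [(1.0, 5)],
        "coin flip":    [(0.5, 0), (0.5, 12)],
    }

    def expected_utility(lottery, u):
        return sum(p * u(x) for p, x in lottery)

    u1 = lambda x: x              # utility as the identity function
    u2 = lambda x: 2 * x - 87     # the "times two minus 87" transformation

    rank1 = sorted(lotteries, key=lambda k: expected_utility(lotteries[k], u1))
    rank2 = sorted(lotteries, key=lambda k: expected_utility(lotteries[k], u2))
    assert rank1 == rank2         # same preference ordering, hence the same decisions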

Comment by vladimir_nesov on Mistakes with Conservation of Expected Evidence · 2019-06-09T14:49:52.572Z · score: 13 (8 votes) · LW · GW

To see why it is wrong in general, consider an extreme case: a universal law, which you mostly already believe to be true.

I feel that this example might create another misconception, that certainty usually begets greater certainty. So here's an opposing example, where high confidence in a belief coexists with a high probability that it's going to get less and less certain. You are at a bus stop, and you believe that you'll get to your destination on time, as buses here are usually reliable, though they only arrive once an hour and can deviate from schedule by several minutes. If you see a bus, you'll probably arrive in time, but every minute that you see no bus, it's evidence of something having gone wrong, so that the bus won't arrive at all in the next hour. You expect a highly certain belief (that you'll arrive on time) to decrease (a little bit), which is balanced by an unlikely alternative (for each given minute of waiting) of it going in the direction of greater certainty (if the bus does arrive within that minute).
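
Here is that example as a small calculation (all numbers assumed for illustration: prior 0.95 that the bus is running at all, and, if it is, it arrives uniformly at random within the next 10 minutes), following the no-bus branch minute by minute:

    p_running = 0.95      # assumed prior that the bus is running at all
    minutes_left = 10     # assumed window within which a running bus arrives (uniformly)

    belief = p_running    # P(arrive on time) = P(bus is running)
    for minute in range(1, 11):
        p_bus_now = belief / minutes_left                 # P(bus shows up this minute)
        belief_if_no_bus = belief * (minutes_left - 1) / minutes_left / (1 - p_bus_now)
        # Conservation of expected evidence: the expected posterior equals the prior.
        expected_posterior = p_bus_now * 1.0 + (1 - p_bus_now) * belief_if_no_bus
        assert abs(expected_posterior - belief) < 1e-12
        belief = belief_if_no_bus                         # observe another minute with no bus
        minutes_left -= 1
        print(f"minute {minute}: P(on time) = {belief:.3f}")

The belief declines a little every minute (and collapses toward the end of the window), balanced by the small chance each minute of it jumping to certainty.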

Comment by vladimir_nesov on "But It Doesn't Matter" · 2019-06-01T19:11:20.025Z · score: 2 (1 votes) · LW · GW

(See the edit in the grandparent, my initial interpretation of the post was wrong.)

Comment by vladimir_nesov on "But It Doesn't Matter" · 2019-06-01T08:22:31.076Z · score: 8 (6 votes) · LW · GW

With bounded attention, noticing less relevant things more puts you into a worse position to be aware of the worlds you actually live in.

Edit: Steven's comment gives a more central interpretation of the post, namely as a warning against the bailey/motte pair of effectively disbelieving something and defending that state of mind by claiming it's irrelevant. (I think the motte is also invalid, it's fine to be agnostic about things, even relevant ones. If hypothetically irrelevance argues against belief, it argues against disbelief as well.)

Comment by vladimir_nesov on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T07:24:59.777Z · score: 4 (2 votes) · LW · GW

If it assigns equal probability to everything then nothing can be evidence in favor of it

Nope. If it assigns more probability to an observation than another hypothesis does ("It's going to be raining tomorrow! Because AGI!"), then the observation is evidence for it and against the other hypothesis. (Of course given how the actual world looks, anything that could be called "assigns equal probability to everything", whatever that means, is going to quickly lose to any sensible model of the world.)
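
To spell out that update in odds form (the numbers are hypothetical):

    # Bayes in odds form: posterior odds = prior odds * likelihood ratio.
    def posterior_odds(prior_odds, p_obs_given_h1, p_obs_given_h2):
        return prior_odds * (p_obs_given_h1 / p_obs_given_h2)

    # If H1 assigns 0.6 to tomorrow's rain and H2 assigns 0.3, observing rain
    # doubles the odds in favor of H1, whatever the prior odds were.
    print(posterior_odds(prior_odds=0.01, p_obs_given_h1=0.6, p_obs_given_h2=0.3))  # 0.02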

That said, I think being reasoned about instead of simulated really does have "infinite explanatory power", in the sense that you can't locate yourself in the world that does that based on an observation, since all observations are relevant to most situations where you are being reasoned about. So assigning probability to individual (categories of) observations is only (somewhat) possible for the instances of yourself that are simulated or exist natively in physics, not for instances that are reasoned about.

Comment by vladimir_nesov on Comment section from 05/19/2019 · 2019-05-26T06:21:45.603Z · score: 15 (4 votes) · LW · GW

I agree that it's possible for feelings to be relevant (or for factual beliefs to be relevant). But discouragement of discussion shouldn't be enacted through feelings, feelings should just be info that prompts further activity, which might have nothing to do with discouragement of discussion. So there is no issue with Vanessa's first comment and parts of the rest of the discussion that clarified the situation. A lot of the rest of it though wasn't constructive in building any sort of valid argument that rests on the foundation of info about feelings.

Comment by vladimir_nesov on Does the Higgs-boson exist? · 2019-05-25T06:44:32.593Z · score: 6 (3 votes) · LW · GW

"Shut up and calculate" no longer suffices when you want to figure out something about reality that is not about prediction of observations (or if you are interested in unusual kinds of reality, where even prediction of observations looks unlike it does in our world). So this concerns many philosophical questions, in particular decision theory (where you want to figure out what to do and how to think about what to do). The relationship with decision theory is the same as with physics: you want to replace reality with something more specific. But if you haven't found a sufficiently good replacement, forcing a bad replacement is worse than fumbling with the preformal idea of reality.

Comment by vladimir_nesov on Does the Higgs-boson exist? · 2019-05-25T05:31:04.227Z · score: 4 (2 votes) · LW · GW

"Shut up and calculate" serves as an excellent replacement for truth in most situations. It's about hypotheses/theories and observations, not reality, so within its area of applicability it makes the idea of reality irrelevant.

Comment by vladimir_nesov on Separation of Concerns · 2019-05-23T23:19:05.379Z · score: 4 (2 votes) · LW · GW

Ideas, beliefs, feelings, norms, and preferences can be elucidated/reformulated, ascertained/refuted, endorsed/eschewed, reinforced/depreciated, or specialized/coordinated, not just respectively but in many combinations. Typically all these distinctions are mixed together. Naming them with words whose meaning is already common knowledge might be necessary to cheaply draw attention to relevant distinctions. Otherwise it takes too long to frame the thought, at which point most people have already left the conversation.

Comment by vladimir_nesov on Comment section from 05/19/2019 · 2019-05-23T21:26:35.336Z · score: 4 (2 votes) · LW · GW

I don't get the impression Said was describing or channeling a particular culture. Rather, he was making specific observations that are primarily concerned with what they actually refer to. There is a difference between the hypothesis that such claims implicitly suggest a general point about some guiding principles that the author endorses, and the hypothesis that their style pattern-matches such a general point and thus perpetuates norms that you consider undesirable.

In the context where the resulting norms are undesirable, the impact of these options is the same, but arguments that are valid for them are different, which is important for nudging the style of discussion in the direction you hope for. To prevent a norm from taking hold, it's crucial to understand what the norm is, so that you won't unintentionally fuel it.

Comment by vladimir_nesov on Comment section from 05/19/2019 · 2019-05-23T20:43:49.085Z · score: 8 (4 votes) · LW · GW

no, this isn’t the default culture people should be expecting to be consensus on LW

I don't just agree with many of the things Said said in the comments here, but see them as undoubtedly correct, while other points are more debatable, so I second his question about specifics of this nefarious "this".

Comment by vladimir_nesov on A War of Ants and Grasshoppers · 2019-05-23T15:49:59.860Z · score: 4 (2 votes) · LW · GW

The thesis seems to be to caution against automatically blaming those who might've benefited from a disaster, as often enough things don't happen in a goal-directed fashion. ("To fathom a strange plot, one technique is to look at what ended up happening, assume it was the intended result, and ask who benefited.") Not sure this is a widespread enough heuristic to be worth reining in in the form of unconditional advice.

Comment by vladimir_nesov on Schelling Fences versus Marginal Thinking · 2019-05-23T14:13:01.732Z · score: 2 (3 votes) · LW · GW

Cultivate [...] that on the margin you are always going to underestimate the value of long term investment in habits and virtue cultivation

Why though? Shouldn't you recalibrate immediately to make this no longer predictable? Or is such recalibration the meaning of the quoted sentence? In that case, why phrase it so? It seems to risk overcorrection (not noticing when the opposite advice becomes relevant), or else to require undue caution in following your own advice, at which point it becomes a self-fulfilling flaw/advice combo. (Following the advice cautiously ensures that the flaw is not fully removed, and so the advice remains relevant.)