Posts

No Anthropic Evidence 2012-09-23T10:33:06.994Z
A Mathematical Explanation of Why Charity Donations Shouldn't Be Diversified 2012-09-20T11:03:48.603Z
Consequentialist Formal Systems 2012-05-08T20:38:47.981Z
Predictability of Decisions and the Diagonal Method 2012-03-09T23:53:28.836Z
Shifting Load to Explicit Reasoning 2011-05-07T18:00:22.319Z
Karma Bubble Fix (Greasemonkey script) 2011-05-07T13:14:29.404Z
Counterfactual Calculation and Observational Knowledge 2011-01-31T16:28:15.334Z
Note on Terminology: "Rationality", not "Rationalism" 2011-01-14T21:21:55.020Z
Unpacking the Concept of "Blackmail" 2010-12-10T00:53:18.674Z
Agents of No Moral Value: Constrained Cognition? 2010-11-21T16:41:10.603Z
Value Deathism 2010-10-30T18:20:30.796Z
Recommended Reading for Friendly AI Research 2010-10-09T13:46:24.677Z
Notion of Preference in Ambient Control 2010-10-07T21:21:34.047Z
Controlling Constant Programs 2010-09-05T13:45:47.759Z
Restraint Bias 2009-11-10T17:23:53.075Z
Circular Altruism vs. Personal Preference 2009-10-26T01:43:16.174Z
Counterfactual Mugging and Logical Uncertainty 2009-09-05T22:31:27.354Z
Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds 2009-08-16T16:06:18.646Z
Sense, Denotation and Semantics 2009-08-11T12:47:06.014Z
Rationality Quotes - August 2009 2009-08-06T01:58:49.178Z
Bayesian Utility: Representing Preference by Probability Measures 2009-07-27T14:28:55.021Z
Eric Drexler on Learning About Everything 2009-05-27T12:57:21.590Z
Consider Representative Data Sets 2009-05-06T01:49:21.389Z
LessWrong Boo Vote (Stochastic Downvoting) 2009-04-22T01:18:01.692Z
Counterfactual Mugging 2009-03-19T06:08:37.769Z
Tarski Statements as Rationalist Exercise 2009-03-17T19:47:16.021Z
In What Ways Have You Become Stronger? 2009-03-15T20:44:47.697Z
Storm by Tim Minchin 2009-03-15T14:48:29.060Z

Comments

Comment by Vladimir_Nesov on How You Can Gain Self Control Without "Self-Control" · 2021-03-25T12:44:09.437Z · LW · GW

The article gives framing and advice that seem somewhat arbitrary, and doesn't explain most of the choices. It alludes to research, but the discussion actually present in the article is only tangentially related to most of the framing/advice content, and even that discussion is not very informative when considered in isolation, without further reading.

There is a lot of attention to packaging the content, with insufficient readily available justification for it, which seems like a terrible combination without an explicit reframing of what the article wants to be. With less packaging, it would at least not appear to be trying to counteract the normal amount of caution in embracing content of (subjectively) mysterious origin.

Comment by Vladimir_Nesov on What are the best resources to point people who are skeptical of getting vaccinated for COVID-19 to? · 2021-03-20T18:41:46.871Z · LW · GW

The distinction is between understanding and faith/identity (which abhors justification from outside itself). Sometimes people build understanding that enables checking if things make sense. This also applies to justifying trust of the kind not based on faith. The alternative is for decisions/opinions/trust to follow identity, determined by luck.

Comment by Vladimir_Nesov on Impact of the rationalist community who someone who aspired to be "rational" · 2021-03-15T03:49:02.758Z · LW · GW

Naming a group of people is a step towards reification of an ideology associated with it. It's a virtuous state of things that there is still no non-awkward name, but keeping the question of identity muddled and tending towards being nameless might be better.

Comment by Vladimir_Nesov on samshap's Shortform · 2021-03-12T13:01:59.522Z · LW · GW

Sleeping Beauty illustrates the consequences of following general epistemic principles. Merely finding an assignment of probabilities that's optimal for a given way of measuring outcomes is an appeal to consequences; on its own it doesn't work as a general way of managing knowledge (though some general ways of managing knowledge might happen to assign probabilities so that the consequences are optimal, in a given example). In principle, consequentialism makes superfluous any particular elements of agent design, including those pertaining to knowledge. But that observation doesn't help with designing specific ways of working with knowledge.

Comment by Vladimir_Nesov on [deleted post] 2021-03-04T18:31:04.635Z

Labels are no substitute for arguments.

But that's the nature of identity: a claim that's part of identity won't suffer insinuations that it needs any arguments behind it, let alone the existence of arguments against it. Within one's identity, labels are absolutely superior to arguments. So the disagreement is more about the epistemic role of identity, not about object level claims or arguments.

Comment by Vladimir_Nesov on [deleted post] 2021-03-04T17:25:17.229Z

See proving too much. In the thought experiment where you consider sapient wolves who hold violent consumption of sentient creatures as an important value, the policy of veganism is at least highly questionable. An argument for such a policy needs to distinguish humans from sapient wolves, so as to avoid arguing for veganism for sapient wolves with the same conviction as it does for humans.

Your argument mentions relevant features (taste, tradition) at the end and dismisses them as "lazy excuses". Yet their weakness in the case of humans is necessary for the argument's validity. Taste and tradition point to an ethical argument against veganism, contrary to your claim at the start of the article that no such argument exists. Instead the argument exists and might be weak.

Comment by Vladimir_Nesov on [deleted post] 2021-03-03T22:40:46.390Z

This proves too much. Most of these arguments would profess to hold veganism as the superior policy for sapient wolves (who are sufficiently advanced to have developed cheap dietary supplementation), degrading the moral imperative of tearing living flesh from the bones.

Comment by Vladimir_Nesov on Weighted Voting Delenda Est · 2021-03-03T09:26:59.838Z · LW · GW

This is a much clearer statement of the problem you are pointing at than the post.

(I don't see how it's apparent that the voting system deserves significant blame for the overall low-standard-in-your-estimation of LW posts. A more apparent effect is probably bad-in-your-estimation posts getting heavily upvoted or winning in annual reviews, but it's less clear where to go from that observation.)

Comment by Vladimir_Nesov on Takeaways from one year of lockdown · 2021-03-02T01:13:47.414Z · LW · GW

The stress of negotiation/management of COVID precautions destroyed my intellectual productivity for a couple of months at the start of the pandemic. So I rented a place to live alone, which luckily happened to be possible for me, and the resulting situation is much closer to normal than it is to the pre-move situation during the pandemic. There is no stress, as worrying things are no longer constantly trying to escape my control without my knowledge; there's only the challenge of performing "trips to the surface" correctly, which is restricted to the time of the trips and doesn't poison the rest of my time.

Comment by Vladimir_Nesov on Subjectivism and moral authority · 2021-03-02T00:47:05.900Z · LW · GW

As I understand this, Clippy might be able to issue an authoritative moral command, "Stop!", to the humans, provided it's "caused" by human values, as conveyed through its correct understanding of them. The humans obey, provided they authenticate the command as channeling human values. It's not advice, as the point of intervention is different: it's not affecting a moral argument (decision making) within the humans, instead it's affecting their actions more directly, with the moral argument having been computed by Clippy.

Comment by Vladimir_Nesov on "If You're Not a Holy Madman, You're Not Trying" · 2021-02-28T23:53:47.559Z · LW · GW

The nice things are skills and virtues, parts of designs that might get washed away by stronger optimization. If people or truths or playing chess are not useful/valuable, agents get rid of them, while people might have a different attitude.

(Part of the motivation here is in making sense of corrigibility. Also, I guess simulacrum level 4 is agency, but humans can't function without a design, so attempts to take advantage of the absence of a design devolve into incoherence.)

Comment by Vladimir_Nesov on "If You're Not a Holy Madman, You're Not Trying" · 2021-02-28T22:01:54.408Z · LW · GW

It's not clear that people should be agents. Agents are means of setting up content of the world to accord with values, they are not optimized for being the valuable content of the world. So a holy madman has a work-life balance problem, they are an instrument of their values rather than an incarnation of them.

Comment by Vladimir_Nesov on What are a rationalist's best tools for better decision making? · 2021-02-26T06:43:30.619Z · LW · GW

What are a rationalist's best tools for better decision making?

What are a farrier's best recipes for better pizza? Probably the same as an ophthalmologist's. What about worse pizza, or worse recipes?

Omit needless words. Yes requires the possibility of no.

Comment by Vladimir_Nesov on A No-Nonsense Guide to Early Retirement · 2021-02-25T11:56:04.211Z · LW · GW

Investing everything in a single ETF (especially at a single brokerage) is possibly fine, but seems difficult to justify. When something looks rock solid in theory, in practice there might be all sorts of black swans, especially over decades (where you lose at least a significant portion of the value held in a particular ETF at a particular brokerage, compared to its underlying basket of securities, because something has gone wrong with the brokerage, the ETF provider, the infrastructure that makes it impossible for anything to go wrong with an ETF, or something else you aren't even aware of). Since there are many similar brokerages and ETF providers, I think it makes sense to diversify across several, which should only cost a bit of additional paperwork.

Even if in fact this activity is completely useless, obtaining knowledge of this fact at an actionable level of certainty (that outweighs the paperwork in the expected utility calculation) looks like a lot of work, much more than the paperwork. Experts might enjoy having to do less paperwork.

(For example, there's theft by malware, a particular risk that would be a subjective black swan for many people, which is more likely to affect only some of the accounts held by a given person. The damage can be further reduced by segregating access between multiple devices running different systems, so that they won't be compromised at the same time, but the risk can't be completely eliminated. Theoretically, malware can be slipped even into security updates to benign software by hacking its developers, if they are not implausibly careful. And in 20 years this might get worse. This is merely one example, that I'm aware of, of a risk reduced by diversification between brokerages; the point is that there might be other risks that I have no idea about.)

Comment by Vladimir_Nesov on Is the influence of money pervasive, even on LessWrong? · 2021-02-17T10:11:40.674Z · LW · GW

Identity colors how the status quo of the world is perceived, but the process of changing it is not aligned with learning (it masks the absence of attempts to substantiate its claims), and is thus a systematic bias resistant to observations that should change one's mind. There are emotions involved in the tribal psychological drives responsible for maintaining identity, but they are not needed for identity to express itself in everything it has a stance on, subtly (or less so) warping all cognition.

There's some clarification of what I'm talking about in this comment and references therein.

Comment by Vladimir_Nesov on How is rationalism different from utilitarianism? · 2021-02-15T15:06:09.387Z · LW · GW

Rationality is perhaps about thinking carefully about careful thinking: what it is, what it's for, what is its value, what use is it, how to channel it more clearly. Utilitarianism is about very different things.

Comment by Vladimir_Nesov on How is rationalism different from utilitarianism? · 2021-02-15T14:47:32.477Z · LW · GW

It's instrumentally useful for the world to be affected according to a decision theory, but it's not obviously a terminal value for people to act this way, especially in detail. Instrumentally useful things that people shouldn't be doing can instead be done by tools we build.

Comment by Vladimir_Nesov on [deleted post] 2021-02-14T01:39:24.376Z

Depends on the license.

Comment by Vladimir_Nesov on Your Cheerful Price · 2021-02-13T21:30:15.979Z · LW · GW

There is no fundamental reason for Cheerful Price to be higher than what you are normally paid. For example, if you'd love to do a thing even without pay, Cheerful Price would be zero (and if you can't arbitrage by doing the thing without the transaction going through, the price moves all the way into the negatives). If you are sufficiently unusual in that attitude, the market price is going to be higher than that.

Comment by Vladimir_Nesov on Is the influence of money pervasive, even on LessWrong? · 2021-02-03T10:45:29.649Z · LW · GW

strong emotional reactions

I expect being part of one's identity is key, and doesn't require notable emotional reactions.

Comment by Vladimir_Nesov on The 10,000-Hour Rule is a myth · 2021-02-01T16:33:44.504Z · LW · GW

My woefully inexpert guess is that advanced cooking should be thought of as optimization in a space of high dimension, where gradient descent will often zig-zag, making simple experiments inefficient. Then apart from knowledge of many landmarks (which is covered by cooking books), high cooking skill would involve ability to reframe recipes to reduce dimensionality, and intuition about how to change a process to make it better or to vary it without making it worse, given fine details of a particular setup and available ingredients. This probably can't be usefully written down at all, but does admit instruction about changes in specific cases.

Comment by Vladimir_Nesov on What's the big deal about Bayes' Theorem? · 2021-02-01T08:13:30.210Z · LW · GW

Here's an example of applying the formula (to a puzzle).

Comment by Vladimir_Nesov on The Lottery Paradox · 2021-02-01T08:08:40.278Z · LW · GW

Let's apply Bayes formula in odds form to this example. Let X = "Xavier won the lottery", ¬X = "someone other than Xavier won the lottery", Y = "Yovanni says Xavier won the lottery". We have P(X):P(¬X) = 1:999,999 (for simplicity, let's assume that Xavier couldn't be someone who didn't even enter the lottery), and P(Y|X) ≈ 0.99. What is P(Y|¬X)? Given that someone other than Xavier won the lottery, what is the probability that Yovanni would claim that it was Xavier in particular who did? While Yovanni might have a reason to single out Xavier, Zenia doesn't, so from the hypothesis ¬X they would predict that anyone could be wrongly named the winner, which gives picking Xavier a one in a million chance (also, the Yovanni within the hypothesis ¬X won't pay attention to X in particular, so can't anchor to Xavier based on that). Then, Yovanni must make a mistake or decide to lie, with probability of, say, 1%. In total, we have P(Y|¬X) ≈ 0.01 × 10^-6 = 10^-8. Putting this in the formula, we get P(X|Y):P(¬X|Y) = [P(X)/P(¬X)] × [P(Y|X)/P(Y|¬X)] ≈ 10^-6 × (0.99/10^-8) ≈ 99, so Xavier is probably the winner after all.

(This is the same as Phil's answer, formulated a little bit differently.)
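
A quick numeric check of the computation above (a minimal sketch; the one-in-a-million lottery and the 1% error rate are the figures assumed in the comment):

```python
# Posterior odds for "Xavier won" given Yovanni's claim, in odds form.
# Assumed numbers: ~10^6 lottery entrants, 1% chance Yovanni errs or lies.
n_entrants = 1_000_000
p_error = 0.01

prior_odds = 1 / (n_entrants - 1)                # P(X) : P(not-X)
p_claim_given_win = 1 - p_error                  # Yovanni names the real winner
p_claim_given_loss = p_error * (1 / n_entrants)  # errs AND happens to name Xavier
likelihood_ratio = p_claim_given_win / p_claim_given_loss

posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)  # ~99, i.e. roughly 99:1 that Xavier really won
```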

Comment by Vladimir_Nesov on The 10,000-Hour Rule is a myth · 2021-02-01T07:04:08.415Z · LW · GW

The 10,000-Hour Rule [...] popularized by Malcolm Gladwell [says that] ten thousand hours of practice is necessary and sufficient to become an expert.

Not having read the book, and from a cursory google search, I was unable to find a clear argument that Gladwell actually makes the claim about practice being sufficient. I did find his own statement that this is not a claim he made in the book. (My impression is that the book was criminally ambiguous and neither affirms nor denies the claim despite discussing related things at length.)

Comment by Vladimir_Nesov on Self-Criticism Can Be Wrong And Harmful · 2021-02-01T06:14:09.741Z · LW · GW

If at all possible, good activities with risks should at the very least be approached with caution and training, not outright avoided.

Taking ideas seriously is potentially harmful, as ideas are possibly no good, prompting the general strategy of steering clear of ideas. The urges of asymmetric justice also pull in this direction: as an application of norms, the presence of any blamable risks becomes paralyzing even when the action is manifestly good in expectation.

Comment by Vladimir_Nesov on What's the big deal about Bayes' Theorem? · 2021-01-30T06:13:03.251Z · LW · GW

The above formula is usually called the "odds form of Bayes formula". We get the standard form by letting B in the odds form be a trivially true hypothesis (so that P(B) = P(B|D) = 1 and P(D|B) = P(D)), and we get the odds form from the standard form by dividing it by itself for two hypotheses (P(D) cancels out).

The serious problem with the standard form of Bayes is the P(D) term, which is usually hard to estimate (as we don't get to choose what D is). We can try to get rid of it by expanding P(D) = P(D|A)·P(A) + P(D|¬A)·P(¬A), but that's also no good, because now we need to know P(D|¬A). One way to state the problem with this is to say that a hypothesis for given observations is a description of a situation that makes it possible to estimate the probability of those observations. That is, A is a hypothesis for D if it's possible to get a good estimate of P(D|A). To evaluate an observation, we should look for hypotheses that let us estimate that conditional probability; we do get to choose what to use as hypotheses. So the problem here is that if A is a hypothesis for D, it doesn't follow that ¬A is a hypothesis for D or for anything else of interest. The negation of a hypothesis is not necessarily a hypothesis. That is why it defeats some of the purpose of moving over to using the odds form of Bayes if we let B = ¬A, as it's sometimes written.

Comment by Vladimir_Nesov on [deleted post] 2021-01-30T05:34:17.008Z

It's possible for two programs to know each other's code and to perfectly deduce each other's result without taking forever; they just can't do it by simulating each other. But they can do it by formal reasoning about each other, if it happens to be sufficiently easy and neither is preventing the other from predicting it. The issues here are not about fidelity of prediction.

Comment by Vladimir_Nesov on [deleted post] 2021-01-29T08:24:03.046Z

This is not the halting problem. The halting problem is about existence of an algorithm that predicts all algorithms. Here, we are mostly interested in predicting Agent and Predictor.

However, the standard argument about the halting problem can be applied to this case, giving interesting constructions. Agent can decide to diagonalize Predictor, to simulate it and then say the opposite, making it impossible for Predictor to predict it. Similarly, Agent can decide to diagonalize itself (which is called having a chicken rule), by committing to do the opposite of whatever it predicts itself to do, if that ever happens. This makes it impossible for Agent to predict what it's going to do. (The details of correctly implementing the chicken rule are harder to formalize in the absence of oracles.)

It might seem that some questions similar to the halting problem are "unfair": if Predictor sees that Agent is diagonalizing it, it understands Agent's behavior fully, yet it can't express its understanding, as it must state a particular action Agent is going to perform, which is impossible. What if instead Predictor states a dependence of Agent's action on Predictor's prediction? But then Agent might be able to diagonalize that as well. One solution is for Predictor to avoid saying or thinking its prediction in a way legible to Agent, and standard Newcomb's problem tries to do exactly that by keeping boxes opaque and Predictor's algorithm unknown.

When Predictor is not an oracle and its algorithm is known to Agent, we might simply require that it fills the big box with money only if it succeeds in predicting that Agent one-boxes. In that case, Agent has an incentive to be predictable. If a decision theory prescribes behavior that makes it impossible for a particular Predictor to predict one-boxing, then Agent following that decision theory goes home without the money. One example is the Agent Simulates Predictor (ASP) problem, where Predictor performs only a bounded computation, and Agent can fully simulate Predictor's reasoning, which tempts it into two-boxing if Predictor is seen to predict one-boxing. Another example is Transparent Newcomb's problem, where Agent is allowed to observe Predictor's prediction directly (by seeing the contents of the boxes before making a decision).
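
As a toy illustration (a minimal sketch; the payoff numbers and the trivially hard-coded predictor are my own assumptions, not part of the problem statement), here is the incentive structure of that last requirement, where the big box is filled only if the prediction of one-boxing succeeds:

```python
# Big box is filled only if Predictor successfully predicts one-boxing.
def predictor() -> str:
    # A bounded, fully transparent predictor; here it simply predicts one-boxing.
    return "one-box"

def exploiting_agent() -> str:
    # Simulates the predictor and two-boxes when a one-box prediction is seen.
    return "two-box" if predictor() == "one-box" else "one-box"

def predictable_agent() -> str:
    # Ignores the temptation and stays predictable.
    return "one-box"

def payoff(action: str) -> int:
    prediction = predictor()
    big = 1_000_000 if prediction == "one-box" == action else 0
    small = 1_000 if action == "two-box" else 0
    return big + small

print(payoff(exploiting_agent()))   # 1000: exploiting the seen prediction forfeits the big box
print(payoff(predictable_agent()))  # 1000000: predictability is rewarded
```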

Comment by Vladimir_Nesov on Non-Coercive Perfectionism · 2021-01-27T16:46:42.853Z · LW · GW

What I mean by perfectionism is a desire for a certain unusually high level of challenge and thoroughness. It's not about high valuation according to a more abstract or otherwise relevant measure/goal. So making a process "more perfect" in this sense means bringing challenge and thoroughness closer to the emotionally determined comfortable levels (in particular, it might involve making something less challenging if it was too challenging originally). The words "more perfect" aren't particularly apt for this idea.

Why novel unimportant things specifically? That would be mostly about fiction/games/tv shows. Maybe I'm looking at ratings/reviews/screenshots more than typical in proportion to actually watching/reading/playing. (The games are always on impossible/deathworld/etc. difficulty and never completed.) I'm certainly aware of much more media than I've actually experienced, additionally because of general dislike of novel activities (for example, I'm avoiding movies altogether). This seems related, but I don't have a specific story for the relation.

(I've now edited this comment about ten times, and re-read even more times, which is typical for anything longer than a couple of sentences. Thus "commenting on LW" eats up enough time to meaningfully share time budget with other fruitless entertainment such as fiction/tv shows, even when I'm commenting an order of magnitude less than I used to years ago.)

Comment by Vladimir_Nesov on Non-Coercive Perfectionism · 2021-01-27T13:34:54.461Z · LW · GW

I thought my answer worked for that case as well: choosing the amount of time to spend on a project looks like choosing to not abandon the project when it should be continued (out of abstract consideration of what projects are important). The alternative, abandoning a project, bears no emotional valence, so costs no effort.

Comment by Vladimir_Nesov on What's the big deal about Bayes' Theorem? · 2021-01-27T07:22:49.719Z · LW · GW

In daily life, the basic move is to ask,

  • What are the straightforward and the alternative explanations (hypotheses) for what I'm seeing?
  • How much more likely is one compared to the other a priori (when ignoring what I'm seeing)?
  • What probabilities do they assign to what I'm seeing?

and get the ratio of a posteriori probabilities of the hypotheses (a posteriori odds) from that (by multiplying the a priori odds by the likelihood ratio). Odds measure the relative strengths of hypotheses, so the move is to obtain relative strengths of a pair of hypotheses after observing a piece of data, with the choice of hypotheses inspired by the same piece of data. This is a very easy computation that becomes a habit and routinely fixes intuitive misestimates. Usually it's about explaining the data/claim as correct vs. as constructed by a specific sloppy process that doesn't ensure correctness.

That is, for the data/claim D you are observing, and A and B hypotheses chosen as possible explanations for D: P(A|D):P(B|D) = [P(A):P(B)] × [P(D|A)/P(D|B)]. This holds for any choice of two hypotheses A and B, which don't have to be mutually exclusive or exhaust all possibilities, and there may be many other plausible hypotheses.
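
For concreteness, a minimal sketch of this move with made-up numbers (the hypotheses and all figures here are hypothetical, purely to show the mechanics):

```python
# Odds-form update: posterior odds = prior odds * likelihood ratio.
# Hypothetical example: a surprising claim D; A = "the claim is correct",
# B = "it was produced by a sloppy process that doesn't ensure correctness".
prior_odds = 1 / 10          # a priori, B is 10 times as likely as A
p_D_given_A = 0.9            # a correct source would very likely produce D
p_D_given_B = 0.3            # the sloppy process produces D less reliably

posterior_odds = prior_odds * (p_D_given_A / p_D_given_B)
print(posterior_odds)        # 0.3 -- B is still ~3 times as likely as A
```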

Comment by Vladimir_Nesov on What is up with spirituality? · 2021-01-27T06:39:10.920Z · LW · GW

For explanations of popularity of religion, there's identity (simulacrum level 3), the same thing that guides dogmatic political allegiance. It probably has some basic psychological drives behind it, some of which might manifest as spirituality. This predicts spiritual experiences associated with politics.

Comment by Vladimir_Nesov on Non-Coercive Perfectionism · 2021-01-27T05:07:05.226Z · LW · GW

Activities differ by how much time I put into them, not by effort per unit of time. I'm only choosing when to not abandon an activity. Putting more effort per unit of time is not psychologically feasible long term, while putting less effort per unit of time makes activities less enjoyable, and is only of use to quickly get unfamiliar things done (which causes me some discomfort).

Comment by Vladimir_Nesov on Non-Coercive Perfectionism · 2021-01-27T04:51:19.439Z · LW · GW

There's perfectionism about results, and perfectionism about process, and these are very different. As a process perfectionist, I usually don't care about completing a project that isn't otherwise important to me, or even making much headway with it, only about approaching it in a well-researched way. Thus there is no systematic drain on effort towards unimportant activities, as they can be easily abandoned, while important activities are not abandoned and also get the serious attention to the process. On the other hand, I get rarely encountered unimportant stuff done less than usual (frequently encountered unimportant stuff eventually becomes efficient).

Comment by Vladimir_Nesov on Lessons I've Learned from Self-Teaching · 2021-01-24T06:10:08.441Z · LW · GW

Instead of reading a textbook with SRS and notes, skim more books (not just textbooks), solve problems, read wikis and papers, talk to people, watch talks and lectures, solve more problems from diverse sources. Instead of memorizing definitions, figure out how different possible definitions work out, how motivations of an area select definitions that allow constructing useful things and formulating important facts. The best exercise is to reconstruct main definitions and results on a topic you've mostly forgotten or never learned that well. Math is self-healing, so instead of propping it up with SRS, it's much better to let it heal itself in your mind. (It heals in its own natural order, not according to card schedule.) And then, after you've already learned the material, maybe read a textbook on it cover to cover, again.

(I used to have a mindset similar to what the post expresses, and what it recalls, though not this extreme. The issue is that there is a lot of material available to study on standard topics, especially if you have access to people fluent in them and can reinvent parts of them on your own, which lets this hold even for obscure or advanced topics where not a lot is written up. So the effort applied to memorizing things or recalling old problems can be turned to better understanding and new problems. This post seems to be already leaning in this direction, so I'm just writing down my current guess at a healthy learning practice.)

Comment by Vladimir_Nesov on Why do stocks go up? · 2021-01-17T20:10:21.672Z · LW · GW

value of the stock is $50 today and will be $5,000 ten years from now, and the rest of the market prices it at $50 today, then I could earn insane expected returns by investing at $50 today. Thus, I don't think the market would price it at $50 today.

Everyone gets the insane nominal returns after ten years are up (assuming central banks target inflation), but after the initial upheaval at the time of the announcement there is no stock that gives more insane returns than other stocks; there are no arbitrage trades to drive the price up immediately. For nominal prices of stocks, what happens in ten years is going to look like a significant devaluation of the currency.

If a $5,000 free-design car (that's the only thing in our consumer basket) can suddenly be printed out of dirt for $50, and central banks target inflation, they are going to essentially redefine the old $50 to read "$5,000", so that the car continues to cost $5,000 despite the nanofactory. At the same time, $5,000 in a stock becomes $500,000.

(Of course this is a hopeless caricature intended to highlight the argument, not even predict what happens in the ridiculous thought experiment. Things closer to reality involve much smaller gradual changes.)

Comment by Vladimir_Nesov on Why do stocks go up? · 2021-01-17T19:24:42.906Z · LW · GW

I'm not talking about time-discounting at all. The point is that the real value of stocks (and money) is defined with respect to a basket of consumer goods, and that's the only thing that isn't being priced in in advance; it's always recalculated at present time. As it becomes objectively easier to make the things people consume, the real value of everything else (including total return indices of stocks) increases, by definition of real value. It doesn't increase in advance, as valuation of the goods is not performed in advance to define the consumer price index.

Comment by Vladimir_Nesov on Why do stocks go up? · 2021-01-17T19:07:59.321Z · LW · GW

Suppose at some point there is an announcement that in ten years the Free Hardware Foundation will release a magical nanofactory that turns dirt into most things currently in the basket of goods used to calculate inflation. There is no doubt about the truth of the announcement. No company directly profits from the machine, as it's free (libre) hardware.

There's some upheaval in the market that eventually settles down. Yet the real value of stocks is predictably going to go up sharply after ten years, not (just) immediately, as that's when the basket of goods actually becomes cheaper.

Comment by Vladimir_Nesov on Alienation and meta-ethics (or: is it possible you should maximize helium?) · 2021-01-17T17:00:08.179Z · LW · GW

It's confusing how the term "realism" is used when applied to ethics, which I think obfuscates a natural position relevant to alignment. Realism about mathematical objects (small-p platonism?) doesn't say that there is One True Mathematical Object to be discovered by all civilizations, instead there are many objects that exist, governing truth of propositions about them. When discussing a mathematical question, we first produce references to some objects, in order to locate them, and then build models of situations that involve them, to understand how these objects work, to formulate claims that are true about them. The references depend on the mathematician's choice of topic or problems, while truth of models, given objects, doesn't depend on the references and hence on the mathematician. The dependence involves two steps: first, there are references, which reside in the mathematician, then there are mathematical objects singled out using the references, and finally there are models that again reside in the mathematician, determined by the objects. Even though the models depend on the references, the references are screened off by the objects, so given the objects, the models no longer depend on the references.

This straightforwardly applies to ethics, except unlike for mathematics, the references are typically vague. The resulting position is realist in the sense that it considers moral facts (objects) as real mind-independent entities governing truth of propositions about them, but the choice of moral objects to consider depends on the agent, which is usually called an anti-realist position, making it difficult to frame a realist/anti-realist narrative. Here, the results of consideration of the moral objects are not necessarily intuitively transparent, their models can be unlike the references that singled them out for consideration, and correctness of models doesn't depend on attitude of the agent, it's determined by the moral objects themselves, their origin in references within the agent is screened off.

This position is, according to the post's own taxonomy, the only one not discussed in the post! Here, what you should do depends on current values, yet ideal understanding need not bring values into harmony with what you should do. That is, a probable outcome is alienation from what you should do despite what you should do being determined by what you currently value.

Comment by Vladimir_Nesov on Deconditioning Aversion to Dislike · 2021-01-16T13:25:39.942Z · LW · GW

Hence "a risk, not necessarily a failure". If the prior says that a systematic error is in place, and there is no evidence to the contrary, you expect the systematic error. But it's an expectation, not precise knowledge, it might well be the case that there is no systematic error.

Furthermore, ensuring that there is no systematic error doesn't require this fact to become externally verifiable. So an operationalization is not necessary to solve the problem, even if it's necessary to demonstrate that the problem is solved. It's also far from sufficient: with vaguely defined topics such as this, deliberation easily turns into demagoguery, misleading with words instead of using them to build a more robust and detailed understanding. So it's more of a side note than the core of a plan.

Comment by Vladimir_Nesov on Deconditioning Aversion to Dislike · 2021-01-16T10:50:53.201Z · LW · GW

Careful reasoning (precision) helps with calibration, but is not synonymous with it. Systematic error is about calibration, not precision, so demanding that it be solved through improvement of precision is similar to a demand for a particular argument, risking rejection of correct solutions outside the scope of what's demanded. That is, if calibration can be ensured without precision, your demand won't be met, yet the problem would be solved. Hence my objection to the demand.

Comment by Vladimir_Nesov on Deconditioning Aversion to Dislike · 2021-01-16T03:22:33.310Z · LW · GW

without a fair operationalization

It's a risk, but not necessarily a failure. It might be enough to seek operationalization in suspicious cases, not in general.

Comment by Vladimir_Nesov on The True Face of the Enemy · 2021-01-15T04:06:13.073Z · LW · GW

a case against status quo is incomplete without the case for an alternative

"A case against status quo" is ambiguous in this context. The first step to fixing a problem is realizing that you have one. A formulation of a problem is a perfectly adequate thing on its own, it lets you understand the problem better. It's not incomplete as a tool for understanding a problem.

Comment by Vladimir_Nesov on The True Face of the Enemy · 2021-01-15T04:01:33.285Z · LW · GW

Lack of better plans should quiet the urge to immediately tear down the status quo, shouldn't influence moral judgement of it.

Comment by Vladimir_Nesov on What to do if you can't form any habits whatsoever? · 2021-01-10T18:52:09.406Z · LW · GW

I was addressing the title. There are things that can be done; I named one of them (by the general strategy of making progress on hopelessly difficult problems through finding similar but easier problems that it's possible to work on). It doesn't encompass everything, and likely doesn't straightforwardly help with any issue you might still be having. I suspect that if "procedures" include cognitive habits and specific training of aspects of activities that usually get no deliberative attention, it might still be useful. Probably not for brushing teeth.

Comment by Vladimir_Nesov on What to do if you can't form any habits whatsoever? · 2021-01-10T07:11:44.999Z · LW · GW

A part of forming a habit is becoming familiar with the procedure. Consistently executing the procedure is a separate aspect. In this framing, it should be possible, and being familiar with useful procedures is useful, it makes them more available and cheaper to execute.

Comment by Vladimir_Nesov on What confusions do people have about simulacrum levels? · 2020-12-15T11:24:12.493Z · LW · GW

Level 3 is identity, masking the absence of justification. Level 4 masks the absence of identity.

Comment by Vladimir_Nesov on Matt Goldenberg's Short Form Feed · 2020-12-12T21:00:00.564Z · LW · GW

My takeaway was that awareness of all levels is necessary if you want to reliably remain on level 1 (make sure that you don't trigger responses for levels 2-4 by crafting statements that have no salient interpretations at levels 2-4). So both the problem and the solution involve reading statements at multiple levels.

(The innovation is in how this heuristic is more principled/general than things like "don't talk about politics or religion". You might even manage to talk about politics and religion without triggering levels 2-4.)

Comment by Vladimir_Nesov on Matt Goldenberg's Short Form Feed · 2020-12-12T17:57:40.870Z · LW · GW

The simulacra levels are not mutually exclusive; a given statement should be interpreted at all four levels simultaneously:

  • Level 1 (facts): What does the statement claim about the world?
  • Level 2 (deception): What actions does belief in the statement's truth incite?
  • Level 3 (identity): Which groups does uttering this statement serve as evidence for belonging to?
  • Level 4 (consequences): What goals does uttering this statement serve?

Comment by Vladimir_Nesov on [deleted post] 2020-12-10T12:56:26.167Z

Is there any data on how prevalent this is? I only occasionally experience a dream from the perspective of someone straightforwardly analogous to myself.