Posts

No Anthropic Evidence 2012-09-23T10:33:06.994Z
A Mathematical Explanation of Why Charity Donations Shouldn't Be Diversified 2012-09-20T11:03:48.603Z
Consequentialist Formal Systems 2012-05-08T20:38:47.981Z
Predictability of Decisions and the Diagonal Method 2012-03-09T23:53:28.836Z
Shifting Load to Explicit Reasoning 2011-05-07T18:00:22.319Z
Karma Bubble Fix (Greasemonkey script) 2011-05-07T13:14:29.404Z
Counterfactual Calculation and Observational Knowledge 2011-01-31T16:28:15.334Z
Note on Terminology: "Rationality", not "Rationalism" 2011-01-14T21:21:55.020Z
Unpacking the Concept of "Blackmail" 2010-12-10T00:53:18.674Z
Agents of No Moral Value: Constrained Cognition? 2010-11-21T16:41:10.603Z
Value Deathism 2010-10-30T18:20:30.796Z
Recommended Reading for Friendly AI Research 2010-10-09T13:46:24.677Z
Notion of Preference in Ambient Control 2010-10-07T21:21:34.047Z
Controlling Constant Programs 2010-09-05T13:45:47.759Z
Restraint Bias 2009-11-10T17:23:53.075Z
Circular Altruism vs. Personal Preference 2009-10-26T01:43:16.174Z
Counterfactual Mugging and Logical Uncertainty 2009-09-05T22:31:27.354Z
Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds 2009-08-16T16:06:18.646Z
Sense, Denotation and Semantics 2009-08-11T12:47:06.014Z
Rationality Quotes - August 2009 2009-08-06T01:58:49.178Z
Bayesian Utility: Representing Preference by Probability Measures 2009-07-27T14:28:55.021Z
Eric Drexler on Learning About Everything 2009-05-27T12:57:21.590Z
Consider Representative Data Sets 2009-05-06T01:49:21.389Z
LessWrong Boo Vote (Stochastic Downvoting) 2009-04-22T01:18:01.692Z
Counterfactual Mugging 2009-03-19T06:08:37.769Z
Tarski Statements as Rationalist Exercise 2009-03-17T19:47:16.021Z
In What Ways Have You Become Stronger? 2009-03-15T20:44:47.697Z
Storm by Tim Minchin 2009-03-15T14:48:29.060Z

Comments

Comment by Vladimir_Nesov on [deleted post] 2021-07-22T14:11:36.567Z

Hypotheses/theories (aka gears-level models) are not required to have an established relation to the world. A viable attitude is to develop theories as theories, improving the ability to navigate them without necessarily believing, claiming, or aiming to figure out their real-world truth. Theories developed this way may tend to be irrelevant, but sometimes they help with distilling useful building blocks for other things, or can be changed to become more relevant, once it becomes clear how to go about that. They can also be interesting, or interesting to develop, which is valuable in itself, regardless of any potential use!

Comment by Vladimir_Nesov on [Link] Musk's non-missing mood · 2021-07-13T19:08:52.504Z · LW · GW

Senescence doesn't kill the world.

Comment by Vladimir_Nesov on steven0461's Shortform Feed · 2021-07-09T13:21:11.023Z · LW · GW

There's now further clarification in this thread.

Comment by Vladimir_Nesov on paulfchristiano's Shortform · 2021-07-09T11:21:56.735Z · LW · GW

The point is that in order to be useful, a prediction/reasoning process should contain mesa-optimizers that perform decision making similar in a value-laden way to what the original humans would do. The results of the predictions should be determined by decisions of the people being predicted (or of people sufficiently similar to them), in the free-will-requires-determinism/you-are-part-of-physics sense. The actual cognitive labor of decision making needs to in some way be an aspect of the process of prediction/reasoning, or it's not going to be good enough. And in order to be safe, these mesa-optimizers shouldn't be systematically warped into something different (from a value-laden point of view), and there should be no other mesa-optimizers with meaningful influence in there. This just says that prediction/reasoning needs to be X-and-only-X in order to be safe. Thus the equivalence. Prediction of exact imitation in particular is weird because in that case the similarity measure between prediction and exact imitation is hinted to not be value-laden, which it might have to be in order for the prediction to be both X-and-only-X and efficient.

This is only unimportant if X-and-only-X is the likely default outcome of predictive generalization, so that not paying attention to this won't result in failure, but nobody understands if this is the case.

The mesa-optimizers in the prediction/reasoning that are similar to the original humans are what I mean by efficient imitations (whether X-and-only-X or not). They are not themselves the predictions of original humans (or of exact imitations), which might well not be present as explicit parts of the design of reasoning about the process of reflection as a whole; instead they are the implicit decision makers that determine what the conclusions of the reasoning say, and they are much more computationally efficient (as aspects of cheaper reasoning) than exact imitations. At the same time, if they are similar enough in a value-laden way to the originals, there is no need for better predictions, much less for exact imitation; the prediction/reasoning is itself the imitation we'd want to use, without any reference to an underlying exact process. (In a story simulation, there are no concrete states of the world, only references to states of knowledge, yet there are mesa-optimizers who are the people inhabiting it.)

If prediction is to be value-laden, with value defined by reflection built out of that same prediction, the only sensible way to set this up seems to be as a fixpoint of an operator that maps (states of knowledge about) values to (states of knowledge about) values-on-reflection computed by making use of the argument values to do value-laden efficient imitation. But if this setup is not performed correctly, then even if it's set up at all, we are probably going to get bad fixpoints, as it happens with things like bad Nash equilibria etc. And if it is performed correctly, then it might be much more sensible to allow an AI to influence what happens within the process of reflection more directly than merely by making systematic distortions in predicting/reasoning about it, thus hypothetical processes of reflection wouldn't need the isolation from AI's agency that normally makes them safer than the actual process of reflection.

Comment by Vladimir_Nesov on Daniel Kokotajlo's Shortform · 2021-07-09T08:06:41.451Z · LW · GW

"Truths" are persuasion, unless expected to be treated as hypotheses with the potential to evoke curiosity. This is charity, continuous progress on improving understanding of circumstances that produce claims you don't agree with, a key skill for actually changing your mind. By default charity is dysfunctional in popular culture, so non-adversarial use of factual claims that are not expected to become evident in short order depends on knowing that your interlocutor practices charity. Non-awkward factual claims are actually more insidious, as the threat of succeeding in unjustified persuasion is higher. So in a regular conversation, there is a place for arguments, not for "truths", awkward or not. Which in this instance entails turning the conversation to the topic of AI timelines.

I don't think there are awkward arguments here in the sense of treading a social taboo minefield, so there is no problem with that, except it's work on what at this point happens automatically via stuff already written up online, and it's more efficient to put effort into growing what's available online than doing anything in person, unless there is a plausible path to influencing someone who might have high impact down the line.

Comment by Vladimir_Nesov on paulfchristiano's Shortform · 2021-06-30T14:20:10.550Z · LW · GW

I see getting safe and useful reasoning about exact imitations as a weird special case or maybe a reformulation of X-and-only-X efficient imitation. Anchoring to exact imitations in particular makes accurate prediction more difficult than it needs to be, as it's not the thing we care about: there are many irrelevant details that influence outcomes that accurate predictions would need to take into account. So a good "prediction" is going to be value-laden, with concrete facts about actual outcomes of setups built out of exact imitations being unimportant, which is about the same as the problem statement of X-and-only-X efficient imitation.

If such "predictions" are not good enough by themselves, underlying actual process of reflection (people living in the world) won't save/survive this if there's too much agency guided by the predictions. Using an underlying hypothetical process of reflection (by which I understand running a specific program) is more robust, as AI might go very wrong initially, but will correct itself once it gets around to computing the outcomes of the hypothetical reflection with more precision, provided the hypothetical process of reflection is defined as isolated from the AI.

I'm not sure what difference between hypothetical and actual processes of reflection you are emphasizing (if I understood what the terms mean correctly), since the actual civilization might plausibly move into a substrate that is more like ML reasoning than concrete computation (let alone concrete physical incarnation), and thus become the same kind of thing as hypothetical reflection. The most striking distinction (for AI safety) seems to be the implication that an actual process of reflection can't be isolated from decisions of the AI taken based on insufficient reflection.

There's also the need to at least define exact imitations or better yet X-and-only-X efficient imitation in order to define a hypothetical process of reflection, which is not as absolutely necessary for actual reflection, so getting hypothetical reflection at all might be more difficult than some sort of temporary stability with actual reflection, which can then be used to define hypothetical reflection and thereby guard from consequences of overly agentic use of bad predictions of (on) actual reflection.

Comment by Vladimir_Nesov on paulfchristiano's Shortform · 2021-06-29T08:37:27.915Z · LW · GW

The upside of humans in reality is that there is no need to figure out how to make efficient imitations that function correctly (as in X-and-only-X). To be useful, imitations should be efficient, which exact imitations are not. Yet for the role of building blocks of alignment machinery, imitations shouldn't have important systematic tendencies not found in the originals, and their absence is only clear for exact imitations (if not put in very unusual environments).

Suppose you already have an AI that interacts with the world, protects it from dangerous AIs, and doesn't misalign people living in it. Then there's time to figure out how to perform X-and-only-X efficient imitation, which drastically expands the design space and makes it more plausible that the kinds of imitation-based systems you wrote about a lot actually work as intended. In particular, this might include the kind of long reflection that has all the advantages of happening in reality without wasting time and resources on straightforwardly happening in reality, or letting the bad things that would happen in reality actually happen.

So figuring out object level values doesn't seem like a priority if you somehow got to the point of having an opportunity to figure out efficient imitation. (While getting to that point without figuring out object level values doesn't seem plausible, maybe there's a suggestion of a process that gets us there in the limit in here somewhere.)

Comment by Vladimir_Nesov on I’m no longer sure that I buy dutch book arguments and this makes me skeptical of the "utility function" abstraction · 2021-06-22T18:49:56.467Z · LW · GW

I think this becomes clearer if we distinguish people from agents. People are somewhat agentic beings of high moral value, while dedicated agents might have little moral value. Maximizing agency of people too well probably starts destroying their value at some point. At present, getting as much agency as possible out of potentially effective people is important for instrumental reasons, since only humans can be agentic, but that will change.

It's useful for agents to have legible values, so that they can build complicated systems that serve much simpler objectives well. But for people it's less obviously important to have much clarity to their values, especially if they are living in a world managed by agents. Agents managing the world do need clear understanding of values of civilization, but even then it doesn't necessarily make sense to compare these values with those of individual people.

(It's not completely obvious that individual people are one of the most valuable things to enact, so a well-developed future might lack them. Even if values of civilization are determined by people actually living through eons of reflection and change, and not by a significantly less concrete process, that gives enough distance from the present perspective to doubt anything about the result.)

Comment by Vladimir_Nesov on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T13:40:32.736Z · LW · GW

The usual Pearson correlation in particular is also insensitive to positive affine transformations of either player's utility, so it seems to be about the right thing; it doesn't just try to check whether the incomparable utility values are equal.
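
A quick numerical check of the invariance claim, as a minimal sketch; the payoff numbers below are invented for illustration, not taken from the post:

```python
import numpy as np

# Hypothetical 2x2 game: each player's utility over the four outcomes (CC, CD, DC, DD).
a = np.array([3.0, 0.0, 5.0, 1.0])  # player A's utilities
b = np.array([3.0, 5.0, 0.0, 1.0])  # player B's utilities

def pearson(x, y):
    """Pearson correlation of two payoff vectors."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

print(pearson(a, b))              # some value in [-1, 1]
print(pearson(2.0 * a + 7.0, b))  # unchanged under a positive affine transform of A's utilities
print(pearson(a, 0.5 * b - 3.0))  # unchanged under a positive affine transform of B's utilities
```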

Comment by Vladimir_Nesov on steven0461's Shortform Feed · 2021-06-11T14:57:20.751Z · LW · GW

I'm steelmanning long reflection, as both the source of goals for an AGI, and something that happens to our actual civilization, while resolving the issues that jumped out at you. Sorry if it wasn't clear from the cryptic summary.

If it's possible to make an AGI that coexists with our civilization (probably something that's not fully agentic), it should also be possible to make one that runs our civilization in a simulation while affecting what's going on in the simulation to a similar extent. If the nature of this simulation is more like that of a story (essay?), written without a plan in mind, but by following where the people written in it lead it, it can be dramatically more computationally efficient to run and to make preliminary predictions about.

In the same way that determinism enables free will, so can sufficiently lawful storytelling, provided it's potentially detailed enough to generate the thoughts of people in the simulation. So the decisions of the civilization simulated in a story are going to be determined by thoughts and actions of people living there, yet it's easy to make reasonable predictions about this in advance, and running the whole thing (probably an ensemble of stories, not a single story) is not that expensive, even if it takes a relatively long time, much more than to get excellent predictions of where it leads.

As a result, we quickly get a good approximation of what people will eventually decide, and that can be used to influence the story for the better from the start, without intruding on continuity, or to decide which parts to keep summarized, not letting them become real. So this version of long reflection is basically CEV, but with people inside being real (my guess is that having influence over the outer AGI is a significant component of being real), continuing the course of our own civilization. The outer AGI does whatever based on the eventual decisions of the people within the story, made during the long reflection, assisted within the story according to their own decisions from the future.

Edit: More details in this thread, in particular this comment.

Comment by Vladimir_Nesov on For Better Commenting, Take an Oath of Reply. · 2021-05-31T09:49:41.699Z · LW · GW

Should be consideration/reflection, not reply. Without something interesting/necessary to say, replying is noise.

Comment by Vladimir_Nesov on steven0461's Shortform Feed · 2021-05-27T00:34:46.810Z · LW · GW

Inexorability of AI-enacted events doesn't intrude on decisions and discoveries of people written in those events. These decisions from the distant future may determine how the world preparing to reflect on them runs from the start.

Comment by Vladimir_Nesov on Sabien on "work-life" balance · 2021-05-21T10:42:28.662Z · LW · GW

The post takes a very consequentialist point of view. Activities may be unrewarding and useless, performed out of a principle that is not itself justified in a consequentialist framing, or in pursuit of a hedonistically unrewarding purpose. Important productive things can be done this way as well, motivated neither by their potential use nor by enjoyment of the process (even when potentially useful and enjoyable).

I'm in particular contrasting hedonistic enjoyment with motivations that are not emotional or otherwise grounded in psychology. The objects of an activity can themselves constitute motivation, which is unrelated to hedonistic side-effects, as those are not the point (and don't have to be absent). This is like an anti-wireheading injunction.

Comment by Vladimir_Nesov on Saving Time · 2021-05-19T06:48:42.172Z · LW · GW

My impression is that all this time business in decision making is more of an artifact of computing solutions to constraint problems (unlike in physics, where it's actually an important concept). There is a process of computation that works with things such as propositions about the world, which are sometimes events in the physical sense, and the process goes through these events in the world in some order, often against physical time. But it's more like construction of Kleene fixpoints or some more elaborate thing like tracing statements in a control flow graph or Abstracting Abstract Machines, a particular way of solving a constraint problem that describes the situation, than anything characterising the phenomenon of finding solutions in general. Or perhaps just going up in some domain in something like Scott semantics of a computation, for whatever reason, getting more detailed information about its behavior. "The order in domains" seems like the most relevant meaning for time in decision making, which isn't a whole lot like time.

Comment by Vladimir_Nesov on Deliberately Vague Language is Bullshit · 2021-05-15T06:19:00.176Z · LW · GW

The post is about being specific, precision in communication, not truth. In practice, truth is given by evident hypotheses, those readily verifiable or otherwise supported by available evidence. Getting to that point benefits from precise communication of concepts and hypotheses, building blocks of truth judgements (as well as claims about how to attain relevant evidence), but communication doesn't itself convey their truth. Conflating these different ideas wastes opportunity for precision.

Comment by Vladimir_Nesov on Domain Theory and the Prisoner's Dilemma: FairBot · 2021-05-09T23:20:09.235Z · LW · GW

We can model events (such as states of a program) with posets, related with each other by monotone maps. By beliefs I mean such posets or their elements (belief states). A state of an event can be enacted by an agent if the agent can bring it into that state. So if the agent decides on an action, that action can be enacted. But if instead the agent only decides on things like beliefs about actions (powersets of sets of possible actions), these can't be enacted; for example, the agent can't ensure that the action is {C, D} or ⊥, that doesn't make sense. But for the beliefs about future states of the agent's program that are themselves states of belief, modeled as themselves, the agent can enact them, and that makes them an excellent target for decision making. I think this is the important takeaway from the modal decisions setting, but the setting lacks the component of choosing between possible solutions according to preference; instead it's manipulating the pseudoenvironment between beliefs and actions to get something useful out of its incomplete decision making machinery.

We could say that a one-query player "decides to defect" if his query is proven false.

My point is that in principle, even for a belief whose set of states is moderately large (which the picture in the first comment of this thread gestures at), the action may be defined to depend on the belief state in an arbitrary way, perhaps switching back and forth between C and D as a belief gets stronger and stronger. That is because the action doesn't play a fundamental role in the decision making, only belief does (in this setting, statements with proofs), but we are not making use of the ability to choose which things have proofs according to preference, so there's this whole thing about carefully choosing how actions depend on beliefs, which doesn't work very well.

Comment by Vladimir_Nesov on Domain Theory and the Prisoner's Dilemma: FairBot · 2021-05-09T19:37:22.467Z · LW · GW

Decisions should be taken about beliefs (as posets), not actions (as discrete sets). With the environment modeled by monotone maps, solutions (as fixpoints) can only be enacted for beliefs that are modeled as themselves, not for things that are modeled as something other than themselves (powersets for events with small finite sets of possible states, etc.). Also, only things that shall be enacted should be modeled as themselves, or else solutions won't be enacted into correctness.

This way, actions may depend on beliefs in an arbitrary way, the same as events in the environment can depend on actions in an arbitrary way, so actions play no special role in the decision making, only enacted beliefs do. For example the states of belief (other than ⊥) that lead to Cooperate don't in principle have to be upward closed. Not sure what's going on with that; there is no choosing between solutions in this setting, so it's not a great fit.
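
To make the fixpoint talk concrete: the sketch below is not the post's construction, just a generic illustration of computing the least fixpoint of a monotone map on a small poset (sets of established statements ordered by inclusion) by iterating from the bottom element; the statement names and rules are invented placeholders, not a model of actual provability-logic reasoning.

```python
# Belief states as sets of established statements, ordered by inclusion (a powerset lattice).
# The map f adds every conclusion whose premises are already established, so it is monotone,
# and iterating it from the empty set (Kleene iteration) reaches its least fixpoint.

RULES = [
    (frozenset(), "A_cooperates_if_B_proven"),   # rules with no premises act as axioms
    (frozenset(), "B_cooperates_if_A_proven"),
    (frozenset({"A_cooperates_if_B_proven", "B_cooperates_if_A_proven"}), "both_cooperate"),
]

def f(beliefs: frozenset) -> frozenset:
    derived = {conclusion for premises, conclusion in RULES if premises <= beliefs}
    return beliefs | derived

def kleene_lfp(f, bottom=frozenset()):
    x = bottom
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

print(sorted(kleene_lfp(f)))  # the least fixpoint: everything derivable from the rules
```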

Comment by Vladimir_Nesov on Small and Vulnerable · 2021-05-07T14:47:38.274Z · LW · GW

With human minds, there is never a native ultimate bedrock that has any clarity to it. Concepts for how people think are mostly about cognitive technology that someone might happen to implement in their thinking; they become more reliably descriptive only at that point. All sorts of preferences and especially personal pursuits are possible, without a clear/principled reason they develop. The abstract arguments I'm gesturing at amplify/focus a vague attitude of "suffering is bad", which is not rare and doesn't require any particular circumstances to form, into actionable recommendations.

Comment by Vladimir_Nesov on Decontextualizing Morality · 2021-05-06T13:25:08.959Z · LW · GW

There are many ways of framing the situation, looking for models of what's going on that have radically different shapes. It's crucial to establish some sort of clarity about what kind of model we are looking for, what kind of questions or judgements we are trying to develop. You seem to be conflating a lot of this, so I gave examples of importantly different framings. Some of these might fit what you are looking for, or help with noticing specific cases where they are getting mixed up.

Comment by Vladimir_Nesov on Decontextualizing Morality · 2021-05-06T09:28:27.617Z · LW · GW

When a highly intelligent self-driving boat on the bank of a lake doesn't try to save a drowning child, what is the nature of the problem? Perhaps the boat is morally repugnant and the world will be a better place if it experiences a rapid planned disassembly. Or the boat is a person and disassembling or punishing them would in itself be wrong, apart from any instrumental value gained in the other consequences of such an action. Or the fact that they are a person yet do nothing qualifies them as evil and deserving of disassembly, which would not be the case had they not been a person. Maybe the boat is making an error of judgement, that is according to some decision theory and under human-aligned values the correct actions importantly differ from the actual actions taken by the boat. Or maybe this particular boat is simply instrumentally useless for the purpose of saving drowning children, in the same way that a marker buoy would be useless.

What should be done about this situation? That's again a different question, the one asking it might be the boat themself, and a solution might not involve the boat at all.

Comment by Vladimir_Nesov on [deleted post] 2021-05-04T20:34:22.063Z

At this level of technical discussion it's hopeless to attempt to understand anything. Maybe try going for depth first, learning some things at least to a level where passing hypothetical exams on those topics would be likely, to get a sense of what a usable level of technical understanding is. Taking a wild guess, perhaps something like Sipser's "Introduction to the Theory of Computation" would be interesting?

Comment by Vladimir_Nesov on [deleted post] 2021-05-04T08:26:20.705Z

Maybe read up on the concepts of outcome, sample space, event, and probability space, and see what the probability of an intersection of events means in terms of all that. It's this stuff that's being implicitly used; usually it should be clear how to formulate the informal discussion in these terms. In particular, truth of whether an outcome belongs to an event is not fuzzy, it either does or doesn't, as events are defined to be certain sets of outcomes.

(Also, the reasons behind omission of the "or equal to"s you might've noticed are discussed in 0 And 1 Are Not Probabilities, though when one of the events includes the other this doesn't apply in any straightforward sense.)
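
A tiny concrete instance of that vocabulary, as a sketch with an invented example (two fair coin tosses):

```python
from fractions import Fraction

# Sample space: the set of all outcomes of two fair coin tosses.
sample_space = {"HH", "HT", "TH", "TT"}
p = {outcome: Fraction(1, 4) for outcome in sample_space}  # probability of each outcome

# Events are subsets of the sample space; an outcome either belongs to an event or it doesn't.
first_heads = {"HH", "HT"}
second_heads = {"HH", "TH"}

def prob(event):
    return sum(p[o] for o in event)

both_heads = first_heads & second_heads  # intersection of events
print(prob(first_heads), prob(second_heads), prob(both_heads))  # 1/2 1/2 1/4
```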

Comment by Vladimir_Nesov on Small and Vulnerable · 2021-05-03T19:27:48.110Z · LW · GW

The intuition pump does live at this level of abstraction, but it's a separate entity from the abstract consideration it's meant to illustrate, which lives elsewhere. My disagreement is with how the first paragraph of the post frames the rest of it. Personal or vicarious experience of trauma is not itself a good reason for pursuing altruism, instead it's a compelling intuition pump for identifying the reason to do so. Some behaviors resulting from trauma are undesirable, and it's the abstract consideration of what motivates various induced behaviors that lets us distinguish justified takeaways of experience from pathological ones. Altruism could've been like flinching when people raise a hand, so there should be an opportunity to make this distinction, as opposed to unconditionally going along with the induced behavior.

Comment by Vladimir_Nesov on Small and Vulnerable · 2021-05-03T18:17:05.066Z · LW · GW

This is the kind of thing that feels compelling, but emphasizes a wrong level of abstraction. Personal experience of suffering is not the reason why suffering is bad. It's a bit like professing that two plus two is four because the teacher says so. The teacher is right, but there is a reason they are right that is more important than the fact that they are saying this. Similarly, personal suffering is compelling for the abstract conclusion of altruism, but there is a reason it's compelling that is more important as a consideration for this conclusion than the fact of experience. Someone with no personal experience of suffering should also be moved by that consideration.

Comment by Vladimir_Nesov on Death by Red Tape · 2021-05-01T20:42:04.650Z · LW · GW

That's ambitious without an ambition. Switching domains stops your progress in the original domain completely, so doesn't make it easier to make progress. Unless domain doesn't matter, only fungible "progress".

Comment by Vladimir_Nesov on Best empirical evidence on better than SP500 investment returns? · 2021-04-27T14:54:08.168Z · LW · GW

(This feels more like a dragon hoard than retirement savings, something that should only form as an incidental byproduct of doing what you actually value, or else under an expectation of an increase in yearly expenses.)

Comment by Vladimir_Nesov on Best empirical evidence on better than SP500 investment returns? · 2021-04-25T21:44:22.576Z · LW · GW

My expenses are well below my income; I'm done saving for retirement

Note that a simple FIRE heuristic giving about 3% chance of running out of money at some point is to have 30x yearly expenses in a 100% stock index with no leverage. That is a lot more than the usual impression along the lines of "my expenses are well below my income", and it's still not something that can reasonably be described as safe.
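
One way to sanity-check a heuristic like that is a quick Monte Carlo sketch; the return and volatility numbers below are illustrative assumptions, and the result is sensitive to them, so this isn't a claim to reproduce the ~3% figure:

```python
import numpy as np

rng = np.random.default_rng(0)

def ruin_probability(multiple=30, years=60, mu=0.05, sigma=0.18, trials=100_000):
    """Fraction of simulated retirements that run out of money.

    multiple: starting portfolio as a multiple of yearly expenses (30x here);
    mu, sigma: assumed real (after-inflation) return parameters of a 100% stock
    portfolio -- rough guesses for illustration only.
    """
    returns = np.exp(rng.normal(mu - 0.5 * sigma**2, sigma, size=(trials, years)))
    wealth = np.full(trials, float(multiple))
    ruined = np.zeros(trials, dtype=bool)
    for year in range(years):
        wealth -= 1.0                    # spend one year of expenses
        ruined |= wealth <= 0
        wealth = np.where(ruined, 0.0, wealth * returns[:, year])
    return ruined.mean()

print(ruin_probability())
```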

Comment by Vladimir_Nesov on Best empirical evidence on better than SP500 investment returns? · 2021-04-25T20:03:29.884Z · LW · GW

Leverage can give arbitrarily high returns at arbitrarily high risk. With things easily available at a brokerage, this goes up to very high returns with insane risk. See St. Petersburg paradox for an illustration of what insane risk means. I like the variant where you continually bet everything on heads in an infinite series of fair coin tosses, doubling the bet if you win, so that for the originally invested $100 you get back the same $100 in expectation at each step (at first step, $200 with probability 1/2 and $0 with probability 1/2; by the third step, $800 with probability 1/8 and $0 with probability 7/8), yet you are guaranteed to eventually lose everything.
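
The two properties of that variant (constant $100 expectation at every step, certain eventual ruin) are easy to check directly; a minimal sketch:

```python
import random
from fractions import Fraction

# Exact expectation after n tosses: $100 * 2**n with probability 2**-n, otherwise $0.
for n in range(1, 4):
    expectation = Fraction(100 * 2**n, 2**n)
    print(f"step {n}: ${100 * 2**n} with prob 1/{2**n}, else $0; expectation = ${expectation}")

# Every simulated run ends in ruin: the first tails wipes out the whole stake.
random.seed(0)
for _ in range(5):
    wealth, tosses = 100, 0
    while wealth > 0:
        tosses += 1
        wealth = wealth * 2 if random.random() < 0.5 else 0
    print(f"ruined after {tosses} tosses")
```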

Diversification, if done correctly, reduces risk at the expense of some reduction in returns. At which point increasing leverage to move the risk back up to where it was originally increases returns to a level above what they were originally. Diversification without leverage can make things worse, because it reduces returns.

Not making use of leverage is an arbitrary choice; it's unlikely to be optimal. For any given situation, there's almost certainly some level of leverage that's better than 1 (it might be higher or lower than 1). There are various heuristics for figuring out what to do, like Sharpe ratios and Kelly betting. As an outsider to finance, it was initially hard for me to make sense of this, as discussion of the heuristics is usually fairly unprincipled and relies on fluency with many finance-specific concepts. A math-heavy, finance-agnostic path to this is to work out something along the lines of the Black–Scholes model starting from expected utility maximization and geometric Brownian motion. For actual decisions, calculation through Monte Carlo simulations rather than analytical solutions lets utility functions, taxes, and other details be formulated more flexibly/straightforwardly.
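
As one concrete instance of the Monte Carlo approach (all parameters below are illustrative assumptions; no taxes, borrowing costs, or fat tails), this sketch estimates expected log-wealth as a function of constant leverage under geometric Brownian motion, where the log-optimal (Kelly) leverage is mu / sigma^2:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_log_wealth(leverage, mu=0.05, sigma=0.18, years=30,
                        steps_per_year=12, trials=10_000):
    """Monte Carlo estimate of E[log wealth] after `years` for a position rebalanced
    each step to a constant leverage, with the asset following geometric Brownian motion.
    mu and sigma are illustrative drift/volatility guesses; borrowing is assumed free."""
    dt = 1.0 / steps_per_year
    n = years * steps_per_year
    asset_returns = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((trials, n))
    step_growth = np.maximum(1.0 + leverage * asset_returns, 1e-12)  # guard against wipeout
    return np.log(step_growth).sum(axis=1).mean()

for lev in [0.5, 1.0, 1.5, 2.0, 3.0]:
    print(lev, round(expected_log_wealth(lev), 3))
# Under these assumptions the log-optimal (Kelly) leverage is mu / sigma**2, about 1.5.
```

Swapping the log for a different utility function, or subtracting taxes inside the simulation, is the kind of detail that is awkward analytically but trivial to add here.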

Comment by Vladimir_Nesov on Covid 4/22: Crisis in India · 2021-04-23T08:25:08.486Z · LW · GW

when the institutions are bad and spread insane views, this outsourced thinking causes the trusting majority to share those insane views

Or alternatively, with the model of institutions as competent but dishonest, the takeaway from an action with an implausible-sounding explanation (pausing vaccination out of "an abundance of caution") is to make up your own explanation that would make the action seem reasonable (there are issues that are actually serious), and ignore all future claims from the institution on the topic ("we checked and it seems fine").

Thus conspiracy theories, grounded in faith in the competence of institutions. With how well they manage to keep the evidence behind the real explanations secret, they must be pretty competent!

Comment by Vladimir_Nesov on How You Can Gain Self Control Without "Self-Control" · 2021-03-25T12:44:09.437Z · LW · GW

The article gives framing and advice that seem somewhat arbitrary, and doesn't explain most of the choices. It alludes to research, but the discussion actually present in the article is only tangentially related to most of the framing/advice content, and even that discussion is not very informative when considered in isolation, without further reading.

There is a lot of attention to packaging the content, with insufficient readily available justification for it, which seems like a terrible combination without an explicit reframing of what the article wants to be. With less packaging, it would at least not appear to be trying to counteract the normal amount of caution in embracing content of (subjectively) mysterious origin.

Comment by Vladimir_Nesov on What are the best resources to point people who are skeptical of getting vaccinated for COVID-19 to? · 2021-03-20T18:41:46.871Z · LW · GW

The distinction is between understanding and faith/identity (which abhors justification from outside itself). Sometimes people build understanding that enables checking if things make sense. This also applies to justifying trust of the kind not based on faith. The alternative is for decisions/opinions/trust to follow identity, determined by luck.

Comment by Vladimir_Nesov on Impact of the rationalist community who someone who aspired to be "rational" · 2021-03-15T03:49:02.758Z · LW · GW

Naming a group of people is a step towards reification of an ideology associated with it. It's a virtuous state of things that there is still no non-awkward name, but keeping the question of identity muddled and tending towards being nameless might be better.

Comment by Vladimir_Nesov on samshap's Shortform · 2021-03-12T13:01:59.522Z · LW · GW

Sleeping Beauty illustrates the consequences of following general epistemic principles. Merely finding an assignment of probabilities that's optimal for a given way of measuring outcomes is appeal to consequences; on its own it doesn't work as a general way of managing knowledge (though some general ways of managing knowledge might happen to assign probabilities so that the consequences are optimal, in a given example). In principle consequentialism makes superfluous any particular elements of agent design, including those pertaining to knowledge. But that observation doesn't help with designing specific ways of working with knowledge.

Comment by Vladimir_Nesov on [deleted post] 2021-03-04T18:31:04.635Z

Labels are no substitute for arguments.

But that's the nature of identity: a claim that's part of identity won't suffer insinuations that it needs any arguments behind it, let alone the existence of arguments against. Within one's identity, labels are absolutely superior to arguments. So the disagreement is more about epistemic role of identity, not about object level claims or arguments.

Comment by Vladimir_Nesov on [deleted post] 2021-03-04T17:25:17.229Z

See proving too much. In the thought experiment where you consider sapient wolves who hold violent consumption of sentient creatures as an important value, the policy of veganism is at least highly questionable. An argument for such a policy needs to distinguish humans from sapient wolves, so as to avoid arguing for veganism for sapient wolves with the same conviction as it does for humans.

Your argument mentions relevant features (taste, tradition) at the end and dismisses them as "lazy excuses". Yet their weakness in the case of humans is necessary for the argument's validity. Taste and tradition point to an ethical argument against veganism, so it's not that no such argument exists, as you claim at the start of the article. Instead the argument exists and might be weak.

Comment by Vladimir_Nesov on [deleted post] 2021-03-03T22:40:46.390Z

This proves too much. Most of these arguments would profess to hold veganism as the superior policy for sapient wolves (who are sufficiently advanced to have developed cheap dietary supplementation), degrading the moral imperative of tearing living flesh from the bones.

Comment by Vladimir_Nesov on Weighted Voting Delenda Est · 2021-03-03T09:26:59.838Z · LW · GW

This is a much clearer statement of the problem you are pointing at than the post.

(I don't see how it's apparent that the voting system deserves significant blame for the overall low-standard-in-your-estimation of LW posts. A more apparent effect is probably bad-in-your-estimation posts getting heavily upvoted or winning in annual reviews, but it's less clear where to go from that observation.)

Comment by Vladimir_Nesov on Takeaways from one year of lockdown · 2021-03-02T01:13:47.414Z · LW · GW

The stress of negotiation/management of COVID precautions destroyed my intellectual productivity for a couple of months at the start of the pandemic. So I rented a place to live alone, which luckily happened to be possible for me, and the resulting situation is much closer to normal than it is to the pre-move situation during the pandemic. There is no stress, as worrying things are no longer constantly trying to escape my control without my knowledge; there's only the challenge of performing "trips to the surface" correctly, which is restricted to the time of the trips and doesn't poison the rest of my time.

Comment by Vladimir_Nesov on Subjectivism and moral authority · 2021-03-02T00:47:05.900Z · LW · GW

As I understand this, Clippy might be able to issue an authoritative moral command, "Stop!", to the humans, provided it's "caused" by human values, as conveyed through its correct understanding of them. The humans obey, provided they authenticate the command as channeling human values. It's not advice, as the point of intervention is different: it's not affecting a moral argument (decision making) within the humans, instead it's affecting their actions more directly, with the moral argument having been computed by Clippy.

Comment by Vladimir_Nesov on "If You're Not a Holy Madman, You're Not Trying" · 2021-02-28T23:53:47.559Z · LW · GW

The nice things are skills and virtues, parts of designs that might get washed away by stronger optimization. If people or truths or playing chess are not useful/valuable, agents get rid of them, while people might have a different attitude.

(Part of the motivation here is in making sense of corrigibility. Also, I guess simulacrum level 4 is agency, but humans can't function without a design, so attempts to take advantage of the absence of a design devolve into incoherence.)

Comment by Vladimir_Nesov on "If You're Not a Holy Madman, You're Not Trying" · 2021-02-28T22:01:54.408Z · LW · GW

It's not clear that people should be agents. Agents are means of setting up content of the world to accord with values, they are not optimized for being the valuable content of the world. So a holy madman has a work-life balance problem, they are an instrument of their values rather than an incarnation of them.

Comment by Vladimir_Nesov on What are a rationalist's best tools for better decision making? · 2021-02-26T06:43:30.619Z · LW · GW

What are a rationalist's best tools for better decision making?

What are a farrier's best recipes for better pizza? Probably the same as an ophthalmologist's. What about worse pizza, or worse recipes?

Omit needless words. Yes requires the possibility of no.

Comment by Vladimir_Nesov on A No-Nonsense Guide to Early Retirement · 2021-02-25T11:56:04.211Z · LW · GW

Investing everything in a single ETF (especially at a single brokerage) is possibly fine, but seems difficult to justify. When something looks rock solid in theory, in practice there might be all sorts of black swans, especially over decades (where you lose at least a significant portion of the value held in a particular ETF at a particular brokerage, compared to its underlying basket of securities, because something has gone wrong with the brokerage, the ETF provider, the infrastructure that makes it impossible for anything to go wrong with an ETF, or something else you aren't even aware of). Since there are many similar brokerages and ETF providers, I think it makes sense to diversify across several, which should only cost a bit of additional paperwork.

Even if in fact this activity is completely useless, obtaining knowledge of this fact at an actionable level of certainty (that outweighs the paperwork in the expected utility calculation) looks like a lot of work, much more than the paperwork. Experts might enjoy having to do less paperwork.

(For example, there's theft by malware, a particular risk that would be a subjective black swan for many people, which is more likely to affect only some of the accounts held by a given person. The damage can be further reduced by segregating access between multiple devices running different systems, so that they won't be compromised at the same time, but the risk can't be completely eliminated. Theoretically, malware can be slipped even into security updates to benign software by hacking its developers, if they are not implausibly careful. And in 20 years this might get worse. This is merely an example of a risk reduced by diversification between brokerages that I'm aware of, the point is that there might be other risks that I have no idea about.)

Comment by Vladimir_Nesov on Is the influence of money pervasive, even on LessWrong? · 2021-02-17T10:11:40.674Z · LW · GW

Identity colors the status quo in how the world is perceived, but the process of changing it is not aligned with learning (it masks the absence of attempting to substantiate its claims), thus creating a systematic bias resistant to observations that should change one's mind. There are emotions involved in the tribal psychological drives responsible for maintaining identity, but they are not significant for expressing identity in everything it has a stance on, subtly (or less so) warping all cognition.

There's some clarification of what I'm talking about in this comment and references therein.

Comment by Vladimir_Nesov on How is rationalism different from utilitarianism? · 2021-02-15T15:06:09.387Z · LW · GW

Rationality is perhaps about thinking carefully about careful thinking: what it is, what it's for, what is its value, what use is it, how to channel it more clearly. Utilitarianism is about very different things.

Comment by Vladimir_Nesov on How is rationalism different from utilitarianism? · 2021-02-15T14:47:32.477Z · LW · GW

It's instrumentally useful for the world to be affected according to a decision theory, but it's not obviously a terminal value for people to act this way, especially in detail. Instrumentally useful things that people shouldn't be doing can instead be done by tools we build.

Comment by Vladimir_Nesov on [deleted post] 2021-02-14T01:39:24.376Z

Depends on the license.

Comment by Vladimir_Nesov on Your Cheerful Price · 2021-02-13T21:30:15.979Z · LW · GW

There is no fundamental reason for Cheerful Price to be higher than what you are normally paid. For example, if you'd love to do a thing even without pay, Cheerful Price would be zero (and if you can't arbitrage by doing the thing without the transaction going through, the price moves all the way into the negatives). If you are sufficiently unusual in that attitude, the market price is going to be higher than that.

Comment by Vladimir_Nesov on Is the influence of money pervasive, even on LessWrong? · 2021-02-03T10:45:29.649Z · LW · GW

strong emotional reactions

I expect being part of one's identity is key, and doesn't require notable emotional reactions.

Comment by Vladimir_Nesov on The 10,000-Hour Rule is a myth · 2021-02-01T16:33:44.504Z · LW · GW

My woefully inexpert guess is that advanced cooking should be thought of as optimization in a space of high dimension, where gradient descent will often zig-zag, making simple experiments inefficient. Then apart from knowledge of many landmarks (which is covered by cooking books), high cooking skill would involve ability to reframe recipes to reduce dimensionality, and intuition about how to change a process to make it better or to vary it without making it worse, given fine details of a particular setup and available ingredients. This probably can't be usefully written down at all, but does admit instruction about changes in specific cases.
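
For the narrow optimization claim (not the cooking analogy itself), here's a minimal sketch of gradient descent zig-zagging on an ill-conditioned quadratic with strongly coupled coordinates, the regime where simple one-variable-at-a-time experiments converge slowly:

```python
import numpy as np

# f(x) = 0.5 * x^T A x, with the two "dimensions" strongly coupled (ill-conditioned Hessian).
A = np.array([[1.0, 0.95],
              [0.95, 1.0]])

x = np.array([1.0, 0.0])
lr = 0.9
for step in range(8):
    grad = A @ x
    x = x - lr * grad
    # The iterates overshoot and flip sign along the stiff direction while f decreases slowly.
    print(step, np.round(x, 3), "f =", round(0.5 * x @ A @ x, 4))
```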