Posts

No Anthropic Evidence 2012-09-23T10:33:06.994Z
A Mathematical Explanation of Why Charity Donations Shouldn't Be Diversified 2012-09-20T11:03:48.603Z
Consequentialist Formal Systems 2012-05-08T20:38:47.981Z
Predictability of Decisions and the Diagonal Method 2012-03-09T23:53:28.836Z
Shifting Load to Explicit Reasoning 2011-05-07T18:00:22.319Z
Karma Bubble Fix (Greasemonkey script) 2011-05-07T13:14:29.404Z
Counterfactual Calculation and Observational Knowledge 2011-01-31T16:28:15.334Z
Note on Terminology: "Rationality", not "Rationalism" 2011-01-14T21:21:55.020Z
Unpacking the Concept of "Blackmail" 2010-12-10T00:53:18.674Z
Agents of No Moral Value: Constrained Cognition? 2010-11-21T16:41:10.603Z
Value Deathism 2010-10-30T18:20:30.796Z
Recommended Reading for Friendly AI Research 2010-10-09T13:46:24.677Z
Notion of Preference in Ambient Control 2010-10-07T21:21:34.047Z
Controlling Constant Programs 2010-09-05T13:45:47.759Z
Restraint Bias 2009-11-10T17:23:53.075Z
Circular Altruism vs. Personal Preference 2009-10-26T01:43:16.174Z
Counterfactual Mugging and Logical Uncertainty 2009-09-05T22:31:27.354Z
Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds 2009-08-16T16:06:18.646Z
Sense, Denotation and Semantics 2009-08-11T12:47:06.014Z
Rationality Quotes - August 2009 2009-08-06T01:58:49.178Z
Bayesian Utility: Representing Preference by Probability Measures 2009-07-27T14:28:55.021Z
Eric Drexler on Learning About Everything 2009-05-27T12:57:21.590Z
Consider Representative Data Sets 2009-05-06T01:49:21.389Z
LessWrong Boo Vote (Stochastic Downvoting) 2009-04-22T01:18:01.692Z
Counterfactual Mugging 2009-03-19T06:08:37.769Z
Tarski Statements as Rationalist Exercise 2009-03-17T19:47:16.021Z
In What Ways Have You Become Stronger? 2009-03-15T20:44:47.697Z
Storm by Tim Minchin 2009-03-15T14:48:29.060Z

Comments

Comment by vladimir_nesov on Lessons I've Learned from Self-Teaching · 2021-01-24T06:10:08.441Z · LW · GW

Instead of reading a textbook with SRS and notes, skim more books (not just textbooks), solve problems, read wikis and papers, talk to people, watch talks and lectures, solve more problems from diverse sources. Instead of memorizing definitions, figure out how different possible definitions work out, how motivations of an area select definitions that allow constructing useful things and formulating important facts. The best exercise is to reconstruct main definitions and results on a topic you've mostly forgotten or never learned that well. Math is self-healing, so instead of propping it up with SRS, it's much better to let it heal itself in your mind. (It heals in its own natural order, not according to card schedule.) And then, after you've already learned the material, maybe read a textbook on it cover to cover, again.

(I used to have a mindset similar to what the post expresses, and what it recalls, though not this extreme. The issue is that there is a lot of material available to study on standard topics, especially if you have access to people fluent in them and can reinvent parts of them on your own, which lets this hold even for obscure or advanced topics where not a lot is written up. So the effort applied to memorizing things or recalling old problems can be turned to better understanding and new problems. This post seems to be already leaning in this direction, so I'm just writing down my current guess at a healthy learning practice.)

Comment by vladimir_nesov on Why do stocks go up? · 2021-01-17T20:10:21.672Z · LW · GW

If the value of the stock is $50 today and will be $5,000 ten years from now, and the rest of the market prices it at $50 today, then I could earn insane expected returns by investing at $50 today. Thus, I don't think the market would price it at $50 today.

Everyone gets the insane nominal returns after the ten years are up (assuming central banks target inflation), but after the initial upheaval at the time of the announcement there is no stock that gives more insane returns than other stocks, and there are no arbitrage trades to drive the price up immediately. For nominal prices of stocks, what happens in ten years is going to look like a significant devaluation of the currency.

If a $5,000 free-design car (the only thing in our consumer basket) can suddenly be printed out of dirt for $50, and central banks target inflation, they are going to essentially redefine the old $50 to read "$5,000", so that the car continues to cost $5,000 despite the nanofactory. At the same time, $5,000 in a stock becomes $500,000.
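A toy sketch of that arithmetic (using only the numbers from this caricature; the one-good basket and the exact 100x factor are assumptions of the thought experiment, not a model of real central bank policy):

    # Toy sketch of the caricature above: a one-good consumer basket whose
    # production cost collapses 100x, with a central bank that targets a
    # stable price level by rescaling the currency unit.

    old_car_price = 5_000      # cost of the car (the whole basket) before the nanofactory
    new_real_cost = 50         # cost of producing the same car afterwards
    rescale = old_car_price / new_real_cost  # 100x: "old $50 is redefined to read $5,000"

    stock_value_before = 5_000  # nominal value of a stock holding before the change
    stock_value_after = stock_value_before * rescale

    print(rescale)            # 100.0
    print(stock_value_after)  # 500000.0 -- nominal prices look like a large devaluation of the currency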

(Of course this is a hopeless caricature intended to highlight the argument, not even predict what happens in the ridiculous thought experiment. Things closer to reality involve much smaller gradual changes.)

Comment by vladimir_nesov on Why do stocks go up? · 2021-01-17T19:24:42.906Z · LW · GW

I'm not talking about time-discounting at all. The point is that the real value of stocks (and money) is defined with respect to a basket of consumer goods, and that's the only thing that isn't priced in in advance; it's always recalculated at the present time. As it becomes objectively easier to make the things people consume, the real value of everything else (including total return indices of stocks) increases, by definition of real value. It doesn't increase in advance, as valuation of the goods is not performed in advance to define the consumer price index.

Comment by vladimir_nesov on Why do stocks go up? · 2021-01-17T19:07:59.321Z · LW · GW

Suppose at some point there is an announcement that in ten years the Free Hardware Foundation will release a magical nanofactory that turns dirt into most of the things currently in the basket of goods used to calculate inflation. There is no doubt about the truth of the announcement. No company directly profits from the machine, as it's free (libre) hardware.

There's some upheaval in the market that eventually settles down. Yet the real value of stocks is predictably going to go up sharply after ten years, not (just) immediately, as that's when the basket of goods actually becomes cheaper.

Comment by vladimir_nesov on Alienation and meta-ethics (or: is it possible you should maximize helium?) · 2021-01-17T17:00:08.179Z · LW · GW

It's confusing how the term "realism" is used when applied to ethics, which I think obfuscates a natural position relevant to alignment. Realism about mathematical objects (small-p platonism?) doesn't say that there is One True Mathematical Object to be discovered by all civilizations; instead there are many objects that exist, governing the truth of propositions about them. When discussing a mathematical question, we first produce references to some objects, in order to locate them, and then build models of situations that involve them, to understand how these objects work and to formulate claims that are true about them. The references depend on the mathematician's choice of topic or problems, while the truth of models, given the objects, doesn't depend on the references and hence on the mathematician. The dependence involves two steps: first, the references, which reside in the mathematician, single out the mathematical objects; then the objects determine the models, which again reside in the mathematician. Even though the models depend on the references, the references are screened off by the objects, so given the objects, the models no longer depend on the references.

This straightforwardly applies to ethics, except that unlike in mathematics, the references are typically vague. The resulting position is realist in the sense that it considers moral facts (objects) as real mind-independent entities governing the truth of propositions about them, but the choice of moral objects to consider depends on the agent, which is usually called an anti-realist position, making it difficult to frame a realist/anti-realist narrative. Here, the results of consideration of the moral objects are not necessarily intuitively transparent, their models can be unlike the references that singled them out for consideration, and correctness of the models doesn't depend on the attitude of the agent: it's determined by the moral objects themselves, and their origin in references within the agent is screened off.

This position is, according to the post's own taxonomy, the only one not discussed in the post! Here, what you should do depends on current values, yet ideal understanding need not bring values into harmony with what you should do. That is, a probable outcome is alienation from what you should do, despite what you should do being determined by what you currently value.

Comment by vladimir_nesov on Deconditioning Aversion to Dislike · 2021-01-16T13:25:39.942Z · LW · GW

Hence "a risk, not necessarily a failure". If the prior says that a systematic error is in place, and there is no evidence to the contrary, you expect the systematic error. But it's an expectation, not precise knowledge, it might well be the case that there is no systematic error.

Furthermore, ensuring that there is no systematic error doesn't require this fact to become externally verifiable. So an operationalization is not necessary to solve the problem, even if it's necessary to demonstrate that the problem is solved. It's also far from sufficient: with vaguely defined topics such as this, deliberation easily turns into demagoguery, misleading with words instead of using them to build a more robust and detailed understanding. So it's more of a side note than the core of a plan.

Comment by vladimir_nesov on Deconditioning Aversion to Dislike · 2021-01-16T10:50:53.201Z · LW · GW

Careful reasoning (precision) helps with calibration, but is not synonymous with it. Systematic error is about calibration, not precision, so demanding that it be solved through improvement of precision is similar to demanding a particular argument, risking rejection of correct solutions outside the scope of what's demanded. That is, if calibration can be ensured without precision, your demand won't be met, yet the problem would be solved. Hence my objection to the demand.

Comment by vladimir_nesov on Deconditioning Aversion to Dislike · 2021-01-16T03:22:33.310Z · LW · GW

without a fair operationalization

It's a risk, but not necessarily a failure. It might be enough to seek operationalization in suspicious cases, not in general.

Comment by vladimir_nesov on The True Face of the Enemy · 2021-01-15T04:06:13.073Z · LW · GW

a case against status quo is incomplete without the case for an alternative

"A case against status quo" is ambiguous in this context. The first step to fixing a problem is realizing that you have one. A formulation of a problem is a perfectly adequate thing on its own, it lets you understand the problem better. It's not incomplete as a tool for understanding a problem.

Comment by vladimir_nesov on The True Face of the Enemy · 2021-01-15T04:01:33.285Z · LW · GW

Lack of better plans should quiet the urge to immediately tear down the status quo, but it shouldn't influence moral judgement of it.

Comment by vladimir_nesov on What to do if you can't form any habits whatsoever? · 2021-01-10T18:52:09.406Z · LW · GW

I was addressing the title. There are things that can be done; I named one of them (by the general strategy of making progress on helplessly difficult problems through finding similar but easier problems that it's possible to work on). It doesn't encompass everything, and likely doesn't straightforwardly help with any issue you might still be having. I suspect that if "procedures" include cognitive habits and specific training of aspects of activities that usually get no deliberative attention, it might still be useful. Probably not for brushing teeth.

Comment by vladimir_nesov on What to do if you can't form any habits whatsoever? · 2021-01-10T07:11:44.999Z · LW · GW

A part of forming a habit is becoming familiar with the procedure. Consistently executing the procedure is a separate aspect. In this framing, it should be possible, and being familiar with useful procedures is useful: it makes them more available and cheaper to execute.

Comment by vladimir_nesov on What confusions do people have about simulacrum levels? · 2020-12-15T11:24:12.493Z · LW · GW

Level 3 is identity, masking the absence of justification. Level 4 masks the absence of identity.

Comment by vladimir_nesov on Matt Goldenberg's Short Form Feed · 2020-12-12T21:00:00.564Z · LW · GW

My takeaway was that awareness of all levels is necessary if you want to reliably remain on level 1 (make sure that you don't trigger responses for levels 2-4 by crafting statements that have no salient interpretations at levels 2-4). So both the problem and the solution involve reading statements at multiple levels.

(The innovation is in how this heuristic is more principled/general than things like "don't talk about politics or religion". You might even manage to talk about politics and religion without triggering levels 2-4.)

Comment by vladimir_nesov on Matt Goldenberg's Short Form Feed · 2020-12-12T17:57:40.870Z · LW · GW

The simulacra levels are not mutually exclusive, a given statement should be interpreted at all four levels simultaneously:

  • Level 1 (facts): What does the statement claim about the world?
  • Level 2 (deception): What actions does belief in the statement's truth incite?
  • Level 3 (identity): Which groups does uttering this statement serve as evidence for belonging to?
  • Level 4 (consequences): What goals does uttering this statement serve?

Comment by vladimir_nesov on My Weirdest Experience · 2020-12-10T12:56:26.167Z · LW · GW

Is there any data on how prevalent this is? I only occasionally experience a dream from the perspective of someone straightforwardly analogous to myself.

Comment by vladimir_nesov on Finance - accessibility (asking for textbook recommendations) · 2020-12-10T12:39:14.627Z · LW · GW

Well, my point was that redundancy is not actually a problem because fluency robs it of tedium. So if you see it as a major issue, it's inaccurate to say that you agree.

Comment by vladimir_nesov on Finance - accessibility (asking for textbook recommendations) · 2020-12-08T16:30:18.138Z · LW · GW

Not sure if this is related to your search for one true textbook, but at some point I found the following error in planning my own self-education. Working through a textbook on an unfamiliar subject is hard, so it looked important to pick a text that covers the right topics, and includes all the right topics, so that I wouldn't be forced to redundantly work through another large text after that. But contrast that impression with reading a 600pp fiction novel! A textbook on a topic you already mostly know is very skimmable, even carefully reading through it without doing the exercises is fast. Conversely, working through a text that can in principle be understood but for which you lack fluency is like learning a new language by picking up a large novel (you won't even rightly know what it's about!) and going through it word by word with a dictionary.

Thus smaller texts whose scope doesn't extend too much outside of what you wish to know are initially more useful than large texts that inevitably have too much extra stuff. This includes piecemeal sources like wiki pages, tutorials, disparate video lectures on youtube, etc. Conversely, once you already know most of what a large text offers, you can pick out the unfamiliar parts you might be interested in and effectively study them, or read through everything to shore up gaps in your knowledge with relative ease, even if it's a 1800pp multi-volume monstrosity. (This assumes that you are not urgently studying for a qualification challenge, when there won't be time to gradually accumulate fluency with a topic.)

Comment by vladimir_nesov on The Incomprehensibility Bluff · 2020-12-06T21:06:45.219Z · LW · GW

Saying incomprehensible things in a personal conversation is evidence of failing to model the interlocutor, so it's a dubious strategy for signaling intelligence. Writing an incomprehensible paper should work better.

Comment by vladimir_nesov on Theory Of Change As A Hypothesis: Choosing A High-Impact Path When You’re Uncertain · 2020-11-30T11:05:54.994Z · LW · GW

[...] your objection is that "being the best" means traditional career success [...], and this isn't a good path for maximizing impact.

It might be, I didn't say anything about that. My point is that career success is not the same condition as maximization of impact, so using these interchangeably is misleading. I suggested that there are some examples illustrating the difference captured by the concept of maximization of impact, but not by the concept of career success.

When I say "best," I mean being able to make judgement calls and contributions that the other people working on the issue can't.

This fits my usage in this thread as well.

The knowledge and skills that make you irreplaceable increase your chances of making a difference.

Yes, but only all else being equal, which is hard to formulate so that multiple examples can be found in the same world. There are lots of worthless neglected occupations. Maximization of neglectedness gives different results from those of maximization of impact.

Comment by vladimir_nesov on Theory Of Change As A Hypothesis: Choosing A High-Impact Path When You’re Uncertain · 2020-11-28T12:46:45.774Z · LW · GW

With that much competition, you would be hard pressed to be one of the best. Hence, you’re more likely to have an impact in neglected career paths because it’s less likely that someone else would have done whatever you did.

Neglected career paths are important not because they increase chances of being one of the best (which is a positional good), even though they do, but because all else equal the amount of value you create is higher, there is more low hanging fruit left ungathered.

You seem to be framing pursuit of a successful career as having a high impact. This is backwards if having a high impact is the goal. Some ways of having a high expected impact (depending on the cause area and your strengths) involve not having a successful career, or predictably having a successful career with a low probability, or being one of many similarly skilled people (that is not being one of the best). Selecting a path that increases chances of success in careerism would negatively affect success in having high expected impact in those cases.

Comment by vladimir_nesov on Small Habits Shape Identity: How I became someone who exercises · 2020-11-27T01:21:43.992Z · LW · GW

The punchline is a reference to chapter 33 of HPMoR:

Draco had observed that if the two prisoners had been Death Eaters during the Wizarding War, the Dark Lord would have killed any traitors.

Harry had nodded and said that was one way to resolve the Prisoner's Dilemma - and in fact both Death Eaters would want there to be a Dark Lord for exactly that reason.

The idea is that coordination can be enforced by a central authority such as a Dark Lord, moving the situation closer to the Pareto frontier, but having a Dark Lord is terrible for other reasons, including as a source of risks that are hard to accurately anticipate.

This is intended as an analogy to the post's story of employing identity in order to regularly exercise. The use of identity in reasoning is analogous to a Dark Lord in that it's a terrible cognitive movement that seems natural, perhaps a psychological adaptation. As a straightforward example, it's things like "What do you think causes global warming?" "I'm a Pastafarian. Pastafarians consider the decline in the number of pirates to be the cause of global warming. Therefore I believe that global warming is caused by there not being enough pirates." The problem is that the question doesn't get to be considered on the object level; it immediately goes to simulacrum level 3. In actual practice, it's not at all straightforward: the arguments in support of a position that won't be considered on the object level come naturally and without a framing that makes the problem apparent.

One way of getting rid of the problem is to keep an eye on topics that trigger this movement, asking to affirm consistency instead of clarity of inference, and try to kick such topics out of your identity. There doesn't seem to be much of a point in having anything as part of one's identity in this sense, so the goal of the exercise is to eventually get rid of everything that plays that role, making identity empty.

Comment by vladimir_nesov on Small Habits Shape Identity: How I became someone who exercises · 2020-11-26T21:38:01.372Z · LW · GW

This feels like a success story of pursuing peace and prosperity through elevation of a Dark Lord.

(I'm guessing appeals to identity are responsible for a significant share of misguided, unconsidered conviction. Putting this force to a good use makes it harder to defend against it with general injunctions like keeping identity empty.)

Comment by vladimir_nesov on Stable Pointers to Value: An Agent Embedded in Its Own Utility Function · 2020-11-26T21:22:08.723Z · LW · GW

The problem of figuring out preference without wireheading seems very similar to the problem of maintaining factual knowledge about the world without suffering from appeals to consequences. In both cases a specialized part of agent design (model of preference or model of a fact in the world) has a purpose (accurate modeling of its referent) whose pursuit might be at odds with consequentialist decision making of the agent as a whole. The desired outcome seems to involve maintaining integrity of the specialized part, resisting corruption of consequentialist reasoning.

With this analogy, it might be possible to transfer lessons from the more familiar problem of learning facts about the world, to the harder problem of protecting preference.

Comment by vladimir_nesov on Learning from counterfactuals · 2020-11-26T11:25:13.678Z · LW · GW

A mostly off-topic note on the conceptual picture I was painting. The fictional world was intended to hold entities of the same ontological kind as those from the real world. A fiction text serves as a model and evidence for it, not as a precise definition. Thus an error in the text is not directly an inconsistency in the text; the text is intended to be compared against the fictional world, not against itself. Of course in practice the fictional world is only accessible through a text, probably the same one where we are seeing the error, but there is this intermediate step of going through a fictional world (using another model, the state of uncertainty about it). Similarly, the real world is only accessible through human senses, but it's unusual to say that errors in statements about the world are inconsistencies in sensory perception.

Comment by vladimir_nesov on Learning from counterfactuals · 2020-11-25T23:37:59.894Z · LW · GW

I don't see why fictional evidence shouldn't be treated exactly the same as real evidence, as long as you don't mix up the referents. There is no fundamental use in singling out reality (there might be some practical use in specializing human minds to reality rather than fiction). Generalization from real evidence to fiction is as much a fallacy as generalization from fictional evidence to reality.

A fiction text is a model of fictional territory that can be used to get some idea of what that territory is like (found with the prior of fictional worlds), and to formulate better models, or models of similar worlds (fanfiction). Statements made in a fiction text can be false about the fictional territory, in which case they are misleading and interfere with learning about the fictional territory. Other statements are good evidence about it. One should be confused by false statements about a fictional territory, but shouldn't be confused by true statements about it. And so on and so forth.

Comment by vladimir_nesov on UDT might not pay a Counterfactual Mugger · 2020-11-22T14:58:31.633Z · LW · GW

CDT and EDT are also sensitive to their prior. The difference is that it's a more familiar routine to define their prior by idealization of the situation being considered without getting out of scope, thus ensuring that we remain close to informal expectations. When building a tractable model for UDT, it similarly makes sense to specify its prior without allowing retraction of knowledge of the situation and escaping to consideration of all possible situations (turning the prior into a model of all possible situations rather than just of this one situation).

In the case of CDT and EDT, escaping the bounds of the situation looks like a refrigerator falling from the sky on the experimental apparatus. In the case of UDT, it looks like a funding agency refusing to fund the experiment because its results wouldn't be politically acceptable, unless it's massaged to look right, and the agents within the experiment understand that (and have no scientific integrity). I think it's similarly unreasonable for both kinds of details to be included in models, and it's similarly possible for them to occur in reality.

Comment by vladimir_nesov on UDT might not pay a Counterfactual Mugger · 2020-11-22T14:30:55.733Z · LW · GW

To a first approximation, the point is not that counterfactual mugging (or any other thought experiment) is actually defined in a certain way, but how it should be redefined in order to make it possible to navigate the issue. Unless Nomegas are outlawed, it's not possible to do any calculations; therefore they are outlawed. Not because they were already explicitly outlawed or were colloquially understood to be outlawed.

But when we look at this more carefully, the assumption is not actually needed. If nonspecified Nomegas are allowed, the distribution of their possible incentives is all over the place, so they almost certainly cancel out in the expected utility of alternative precommitments. The real problem is not with introduction of Nomegas, but with managing to include the possibilities involving Omega in the calculations (as opposed to discarding them as particular Nomegas), taking into account the setting that's not yet described at the point where precommitment should be made.

In counterfactual mugging, there is no physical time when the agent is in the state of knowledge where the relevant precommitment can be made (that's the whole point). Instead, we can construct a hypothetical state of knowledge that has updated on the description of the thought experiment, but hasn't updated on the fact of how the coin toss turned out. The agent never holds this state of knowledge as a description of all that's actually known. Why retract knowledge of the coin toss, instead of retracting knowledge of the thought experiment? No reason; UDT strives to retract all knowledge and make a completely general precommitment to all eventualities. But in this setting, retracting knowledge of the coin toss while retaining knowledge of Omega creates a tractable decision problem, thus a UDT agent that notices the possibility will make a precommitment. Similarly, it should precommit to not paying Omega in a situation where a Nomega that punishes paying $100 to Omega (as described in this post) is known to operate. But only when it's known to be there, not when it's not known to be there.

Comment by vladimir_nesov on UDT might not pay a Counterfactual Mugger · 2020-11-22T13:07:27.524Z · LW · GW

I think I see what you mean. The situation where you'd make a precommitment, which is described by the same state of knowledge that UDT makes its decision under, occurs before the setting of the thought experiment is made clear. Thus it's not yet clear what kinds of Nomegas can show up with their particular incentives, and the precommitment can't rely on their absence. With some sort of risk-averse, status-quo-anchored attitude it seems like "not precommitting" is therefore generally preferable.

But optimization of expected utility doesn't work like that. You have the estimates for possible decisions, and pick the option that's estimated to be the best available. Whether it's the status quo ("not precommitting") or not has no bearing on the decision unless it's expressed as a change in the estimate of expected utility that makes it lower or greater than the expected utility of the alternative decisions. Thus when a thought experiment talks about precommitments or any UDT decisions, bringing in arbitrary Nomegas is a problem because it makes the expected utility of precommitments similarly arbitrary, and it's these expected utilities that determine decisions. (Whether to make some precommitment or not is itself a decision.) The obvious way of making it possible to perform the calculation of expected utilities of precommitments is to make the assumption of absence of Nomegas, or more generally to construct the settings of precommitments based only on what's already in the thought experiment.
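For concreteness, here is a rough sketch of that calculation under the no-Nomega assumption. The payoffs (a fair coin, $10,000 on heads if the agent would pay, $100 paid on tails) are the canonical counterfactual-mugging numbers, assumed here for illustration rather than taken from the post under discussion:

    # Rough sketch of comparing precommitments by expected utility, with
    # no Nomegas assumed. Payoff numbers are the canonical counterfactual-
    # mugging values, used here only as an illustration.

    p_heads = 0.5

    def ev_of_policy(pays: bool) -> float:
        """Expected utility of precommitting (before the coin toss) to pay or not."""
        heads_payoff = 10_000 if pays else 0   # Omega rewards agents that would pay on tails
        tails_payoff = -100 if pays else 0     # paying costs $100 when the coin lands tails
        return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

    print(ev_of_policy(True))   # 4950.0 -> precommitting to pay comes out ahead
    print(ev_of_policy(False))  # 0.0

    # If an unspecified Nomega may add an arbitrary bonus or penalty to either
    # policy, these numbers can be shifted to favor anything, which is why the
    # comparison is only well-defined once such additions are excluded (or
    # assumed to cancel out in expectation).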

(Mistakes in expected values in the post are a tiny bit relevant (one of the values is still wrong after the correction), as they vaguely signal lack of reliable knowledge of what expected utility is, although the issue here is mostly informal and correct calculation won't by itself make things clear. General experience with mathematical proofs might be closer to being helpful, as the issue is that the actual algorithms being discussed screen off a lot of informal considerations such as whether not making precommitments is the status quo.)

Comment by vladimir_nesov on UDT might not pay a Counterfactual Mugger · 2020-11-22T08:15:12.001Z · LW · GW

This proves too much. For any thought experiment, if you are allowed to introduce a generalized Nomega, you can force any conclusion for an agent that cares about counterfactuals or has a chance to make a global precommitment. What if Nomega pays $50000 when you do precommit to pay $100 to Omega? Or when you Defect in Prisoner's Dilemma? Or when you Cooperate in Prisoner's Dilemma? This shows that if Nomega is allowed to be introduced, none of the usual decision theory thought experiments can be usefully considered (with a theory like UDT). Thus, it's a reasonable assumption that Nomega shouldn't be allowed to be introduced.

(Btw, the expected values you list in the table are off. Don't know what's up with that.)

Comment by vladimir_nesov on Signalling & Simulacra Level 3 · 2020-11-21T00:29:08.279Z · LW · GW

I don't see the distinction between connotation and denotation as an effective way of carving this muddle. The problem with the signaling theory of meaning is that it explains communication of meaning as communication of truth, mixing up these different things. But meaning is often communicated using the whole palette of tools also used for signaling truth. In particular, communication of meaning (that is, the kinds of things used to construct models and hypotheses) with utterances that look like vague reasoning by association shouldn't in itself make it more difficult to reason lawfully and clearly about that meaning.

So the method I'm proposing is to consider any utterance in either capacity in turn, with separate questions like "Which idea is this drawing attention to?" and "What weight is implied for relevant assertions about this idea?" But at this point I'm not sure what the essential difficulty is that remains, because I don't perceive the motivation for the post clearly enough.

Comment by vladimir_nesov on Why is there a "clogged drainpipe" effect in idea generation? · 2020-11-20T22:11:10.041Z · LW · GW

Not writing an idea down also helps you naturally think about it in the background. So it's more of a trade-off than a generally useful heuristic.

Comment by vladimir_nesov on My Confusion about Moral Philosophy · 2020-11-14T21:52:33.626Z · LW · GW

"I have a neat proof. Assume that A", "I don't believe A", "Well you won't believe my proof then"

The proof won't be a convincing argument for agreeing with its conclusion. But the proof itself can be checked without belief in A, and if it checks out, this state of affairs can be described as belief in the proof.

Comment by vladimir_nesov on Signalling & Simulacra Level 3 · 2020-11-14T20:58:06.295Z · LW · GW

Communication of meaning, signaling of truth. I'm not sure what essential difficulty remains if we merely make sure to distinguish between communicating ideas (which in this role are to be made clear, not yet compared against the world), and providing evidence for their relevance to the discussion or for their correspondence-to-the-world truth. Fireflies won't be able to communicate ideas, only signal truths, so this analysis doesn't naturally apply to them. But language can communicate ideas without implication of their truth, and at this point signaling helps if truth is to be extracted somewhat directly from other actors and not evaluated in other ways.

we assume the remark is relevant to the conversation

For example, in this case the assumption is part of how meaning is guessed, but is in general unrelated to how its truth (or truth of its relevance) is to be evaluated. The intermingling of the two aspects of communication is mostly shorthand, it can be teased apart.

Comment by vladimir_nesov on The (Unofficial) Less Wrong Comment Challenge · 2020-11-11T20:18:54.635Z · LW · GW

I don't believe we're in that context, hence my comment.

It's important to practice habits when they are useless, or else you end up unpracticed when you need them. So I disagree that a (cheap) habit not being useful in some case is a reason to disregard lack of its use in that case.

Comment by vladimir_nesov on The (Unofficial) Less Wrong Comment Challenge · 2020-11-11T19:47:34.414Z · LW · GW

That's certainly the intended meaning, but the statement itself is in fact heavy with qualifiers. This fact doesn't vary with goodwill.

I think the first of the qualifiers, "If what you say is true, I'll act on what you say," is actually a harmful antipattern, since it commits to action on the assumption that the statement is true while pretending to commit to action in case the statement is true, thus reserving doubt about the statement. It would be healthier to either commit to action on the assumption of doubt, or to refuse to commit because of doubt.

Comment by vladimir_nesov on The (Unofficial) Less Wrong Comment Challenge · 2020-11-11T19:39:31.613Z · LW · GW

I don't think that having feedback, even if only something like "I like this post"/"I disliked this post"/"I had trouble to care", is noise. For me, this is incredibly important, and every person I talked with on this subject agreed.

Whether people agree is beside the point. Is this actually true? To figure it out, it's necessary to be more clear on what the statement means. In what way, specifically, is this kind of feedback important? If someone says "I like your post", it's very hard to learn anything in particular from that. If it's customary to say things like that, some people would just say them out of misplaced general niceness, and there will be even less meaning in the utterance. (Also, there are upvotes/downvotes to indicate precisely that kind of feedback.) Some people like hearing that someone likes their posts, but that's different from feedback being instructive. Noise can be pleasant without being enlightening. Finally, it can be motivating to hear that your work is appreciated.

It seems that it's false that this kind of feedback is important for learning things, but true that it's important for motivation of some authors. These are different claims that shouldn't be mixed up. When you say "every person I talked with on this subject agreed", do you (or they) know what exactly they agree with?

I don't think it will actually push readers away, because they are in total control of their level of commitment.

Unfortunately, norms don't like nuance. This post is very far from igniting a norm, but if hypothetically what it suggests bears fruit, it will be in the form of a norm to comment more, and that norm will end up punishing defectors regardless of whether that was an intended feature of the norm or not.

Comment by vladimir_nesov on The (Unofficial) Less Wrong Comment Challenge · 2020-11-11T19:08:27.914Z · LW · GW

I can try out commenting more if receiving any feedback is as important as you say.

That's a most weaselly statement! If you don't know if it's as important as they say, can you actually try commenting more or not? And if you can try, will you? And if you do try, will you succeed?

Comment by vladimir_nesov on The (Unofficial) Less Wrong Comment Challenge · 2020-11-11T18:58:08.615Z · LW · GW

This would increase the level of noise. A good change along these lines is for people who have something to say to speak up more frequently (as opposed to everyone speaking up more frequently, regardless of whether they have something nontrivial to say). But even that is potentially hazardous, since a norm suggesting an obligation to voice your thoughts makes reading more costly, possibly pushes readers away.

Comment by vladimir_nesov on Why You Should Care About Goal-Directedness · 2020-11-10T15:29:07.554Z · LW · GW

Yeah, that was sloppy of the article. In context, the quote makes a bit of sense, and the qualifier "in every detail" does useful work (though I don't see how to make the argument clear just by defining what these words mean), but without context it's invalid.

Comment by vladimir_nesov on Why You Should Care About Goal-Directedness · 2020-11-10T14:41:55.633Z · LW · GW

Having an exact model of the world that contains the agent doesn't require any explicit self-references or references to the agent. For example, if there are two programs whose behavior is equivalent, A and A', and the agent correctly thinks of itself as A, then it can also know the world to be a program W(A') with some subexpressions A', but without subexpression A. To see the consequences of its actions in this world, it would be useful for the agent to figure out that A is equivalent to A', but it is not necessary that this is known to the agent from the start, so any self-reference in this setting is implicit. Also, A' can't have W(A') as a subexpression, for reasons that do admit an explanation given in the quote that started this thread, but at the same time A can have W(A') as a subexpression. What is smaller here, the world or the agent?

(What's naive Gödelian self-reference? I don't recall this term, and googling didn't help.)

Dealing with self-reference in definitions of agents and worlds does not require (or even particularly recommend) non-realizability. I don't think it's an issue specific to embedded agents; probably all puzzles that fall within this scope can be studied while requiring the world to be a finite program. It might be a good idea to look for other settings, but it's not forced by the problem statement.

non-realizability (the impossibility of the agent to contain an exact model of the world, because it is inside the world and thus smaller)

Being inside the world does not make it impossible for the agent to contain the exact model of the world, does not require non-realizability in its reasoning about the world. This is the same error as in the original quote. In what way are quines not an intuitive counterexample to this reasoning? Specifically, the error is in saying "and thus smaller". What does "smaller" mean, and how does being a part interact with it? Parts are not necessarily smaller than the whole, they can well be larger. Exact descriptions of worlds and agents are not just finite expressions, they are at least equivalence classes of expressions that behave in the same way, and elements of those equivalence classes can have vastly different syntactic size.
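To make the quine point concrete, here is a standard minimal Python quine (an illustration I'm adding, not something from the thread): an exact self-representation doesn't require an infinite tower of nested representations, because the representation is generated by a finite rule from data the program already contains.

    # The two code lines below, taken by themselves, print an exact copy of
    # themselves: a program containing an expression (s % s) that evaluates
    # to the full source of that program, with no regress of nested
    # representations.
    s = 's = %r\nprint(s %% s)'
    print(s % s)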

(Of course in some settings there are reasons for non-realizability to be necessary or to not be a problem.)

Comment by vladimir_nesov on Why You Should Care About Goal-Directedness · 2020-11-10T10:43:28.406Z · LW · GW

The quote sounds like an argument for non-existence of quines or of the context in which things like the diagonalization lemma are formulated. I think it obviously sounds like this, so raising nonspecific concern in my comment above should've been enough to draw attention to this issue. It's also not a problem Agent Foundations explores, but it's presented as such. Given your background and effort put into the post this interpretation of the quote seems unlikely (which is why I didn't initially clarify, to give you the first move). So I'm confused. Everything is confusing here, including your comment above not taking the cue, positive voting on it, and negative voting on my comment. Maybe the intended meanings of "model" and "being exact" and "representation" are such that the argument makes sense and becomes related to Agent Foundations?

Comment by vladimir_nesov on Why You Should Care About Goal-Directedness · 2020-11-09T21:36:58.570Z · LW · GW

Trouble comes from self-reference: since the agent is part of the world, so is its model, and thus a perfect model would need to represent itself, and this representation would need to represent itself, ad infinitum. So the model cannot be exact.

???

Comment by vladimir_nesov on Non Polemic: How do you personally deal with "irrational" people? · 2020-11-05T12:28:13.185Z · LW · GW

Exactly, that's what makes the question as you formulated it funny. It's not a question, or even a request. It's a non-negotiable demand. If you don't concede, the whole deal is off. Yet not conceding is often the only reasonable thing to do, so it's a demand to be unreasonable masquerading as a question, because don't be rude.

Comment by vladimir_nesov on Confucianism in AI Alignment · 2020-11-04T18:03:03.263Z · LW · GW

Scientists doing basic research also mostly aren't motivated by the hope that it will someday lead to practical applications. When there is confusion or uncertainty about a salient phenomenon that can be clarified with further research, that is enough. Incidentally, it is virtuous and signals tribal allegiance to that field of research. Some of the researchers are going to be motivated by that.

Comment by vladimir_nesov on Confucianism in AI Alignment · 2020-11-04T17:32:40.672Z · LW · GW

This plays the same role as basic research, ideas that can be developed but haven't found even an inkling of their potential practical applications. An error would be thinking that they are going to be immediately useful, but that shouldn't be a strong argument against developing them, and there should be no certainty that their very indirect use won't end up crucial at some point in the distant future.

Comment by vladimir_nesov on Non Polemic: How do you personally deal with "irrational" people? · 2020-11-04T17:13:30.049Z · LW · GW

"Can you say this in a more concise way?"

"No."

(When talking to non-experts, most points should become less concise than when talking to other experts, because to meaningfully communicate anything to a non-expert, you also have to communicate the necessary prerequisites that other experts already know.)

Comment by vladimir_nesov on Confucianism in AI Alignment · 2020-11-04T17:02:37.244Z · LW · GW

they talk about how things "should" be, but completely forget that talking about "should" has to ground out in actions in order to be useful

An idea doesn't have to be useful in order to be a thing to talk about. So when people talk about an apparently useless idea, it doesn't follow that they forgot that it's not useful.

Comment by vladimir_nesov on Non Polemic: How do you personally deal with "irrational" people? · 2020-11-04T02:49:58.276Z · LW · GW

you can't really understand why you are having a particular intuition

Intuition is distilled deliberation. Deliberation is a sequence of intuitive steps, amplified intuition. A given intuition is formed by (and stands for) the dataset that trains it, the habits of deliberative thought on its specific topic.

Comment by vladimir_nesov on Non Polemic: How do you personally deal with "irrational" people? · 2020-11-04T02:12:42.179Z · LW · GW

Caring about things is by definition an emotive act.

I strongly disagree (about "by definition"; it's of course a popular sense of the word). Operationalization of caring is value, preference. It's channeled by decision making, and deliberative thought is capable of taking over decision making. As such, it may pursue an arbitrary purpose that a person can imagine. A purpose not derived from emotion in any way might be thought to be an incorrect idealization of preference, but even a preference ultimately grounded in emotion will be expressed by decisions that emotions are occasionally incapable of keeping up with.