Posts

No Anthropic Evidence 2012-09-23T10:33:06.994Z · score: 10 (15 votes)
A Mathematical Explanation of Why Charity Donations Shouldn't Be Diversified 2012-09-20T11:03:48.603Z · score: 2 (25 votes)
Consequentialist Formal Systems 2012-05-08T20:38:47.981Z · score: 12 (13 votes)
Predictability of Decisions and the Diagonal Method 2012-03-09T23:53:28.836Z · score: 21 (16 votes)
Shifting Load to Explicit Reasoning 2011-05-07T18:00:22.319Z · score: 15 (21 votes)
Karma Bubble Fix (Greasemonkey script) 2011-05-07T13:14:29.404Z · score: 23 (26 votes)
Counterfactual Calculation and Observational Knowledge 2011-01-31T16:28:15.334Z · score: 11 (22 votes)
Note on Terminology: "Rationality", not "Rationalism" 2011-01-14T21:21:55.020Z · score: 31 (41 votes)
Unpacking the Concept of "Blackmail" 2010-12-10T00:53:18.674Z · score: 25 (34 votes)
Agents of No Moral Value: Constrained Cognition? 2010-11-21T16:41:10.603Z · score: 6 (9 votes)
Value Deathism 2010-10-30T18:20:30.796Z · score: 26 (48 votes)
Recommended Reading for Friendly AI Research 2010-10-09T13:46:24.677Z · score: 29 (32 votes)
Notion of Preference in Ambient Control 2010-10-07T21:21:34.047Z · score: 14 (19 votes)
Controlling Constant Programs 2010-09-05T13:45:47.759Z · score: 25 (38 votes)
Restraint Bias 2009-11-10T17:23:53.075Z · score: 16 (21 votes)
Circular Altruism vs. Personal Preference 2009-10-26T01:43:16.174Z · score: 11 (17 votes)
Counterfactual Mugging and Logical Uncertainty 2009-09-05T22:31:27.354Z · score: 10 (13 votes)
Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds 2009-08-16T16:06:18.646Z · score: 20 (22 votes)
Sense, Denotation and Semantics 2009-08-11T12:47:06.014Z · score: 9 (16 votes)
Rationality Quotes - August 2009 2009-08-06T01:58:49.178Z · score: 6 (10 votes)
Bayesian Utility: Representing Preference by Probability Measures 2009-07-27T14:28:55.021Z · score: 33 (18 votes)
Eric Drexler on Learning About Everything 2009-05-27T12:57:21.590Z · score: 31 (36 votes)
Consider Representative Data Sets 2009-05-06T01:49:21.389Z · score: 6 (11 votes)
LessWrong Boo Vote (Stochastic Downvoting) 2009-04-22T01:18:01.692Z · score: 3 (30 votes)
Counterfactual Mugging 2009-03-19T06:08:37.769Z · score: 56 (78 votes)
Tarski Statements as Rationalist Exercise 2009-03-17T19:47:16.021Z · score: 11 (21 votes)
In What Ways Have You Become Stronger? 2009-03-15T20:44:47.697Z · score: 27 (29 votes)
Storm by Tim Minchin 2009-03-15T14:48:29.060Z · score: 15 (22 votes)

Comments

Comment by vladimir_nesov on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T10:40:46.398Z · score: 4 (2 votes) · LW · GW

If there is any punishment at all, or even the absence of a reward (perhaps an indirect game-theoretic reward), that is the same as there being no incentive to provide information, and you end up ignorant; that is, the situation you are currently experiencing becomes unlikely.

on the other hand you do want to be able to react to the information

The point is that you are actually unable to usefully react to such information in a way that disincentivizes its delivery. If you have the tendency to try, then the information won't be delivered, and your reaction will happen in a low-probability outcome, so it won't have significant weight in the expected utility across outcomes. For your reaction to actually matter, it should ensure that the incentive for the present situation to manifest remains in place.

Comment by vladimir_nesov on Vanessa Kosoy's Shortform · 2020-09-27T14:13:34.873Z · score: 2 (1 votes) · LW · GW

I agree. But GPT-3 seems to me like a good estimate of how much compute it takes to run stream-of-consciousness imitation learning sideloads (assuming that learning is done in batches on datasets carefully prepared by non-learning sideloads, so the cost of learning is less important). And with that estimate we already have enough of a compute overhang to accelerate technological progress as soon as the first amplified babbler AGIs are developed, which, as I argued above, should happen shortly after babblers actually useful for automation of human jobs are developed (because generation of stream-of-consciousness datasets is a special case of such a job).

So the key things that could make the imitation plateau last for years are either sideloads requiring more compute than it looks (to me) like they require, or amplification of competent babblers into similarly competent AGIs turning out to be a hard problem that takes a long time to solve.

Comment by vladimir_nesov on Vanessa Kosoy's Shortform · 2020-09-27T12:30:48.120Z · score: 2 (1 votes) · LW · GW

I was arguing that near-human-level babblers (including the imitation plateau you were talking about) should quickly lead to human-level AGIs by amplification via stream-of-consciousness datasets, which doesn't pose new ML difficulties other than design of the dataset. Superintelligence follows from that by any of the same arguments as for uploads leading to AGI (much faster technological progress; if amplification/distillation of uploads is useful straight away, we get there faster, but it's not necessary). And amplified babblers should be stronger than vanilla uploads (behaving at least like implausibly well-educated, well-coordinated, high-IQ humans).

For your scenario to be stable, it needs to be impossible (in the near term) to run the AGIs (amplified babblers) faster than humans, and for the AGIs to remain less effective than very high-IQ humans. Otherwise you get acceleration of technological progress, including ML. So my point is that the feasibility of an imitation plateau depends on the absence of a compute overhang, not on ML failing to capture some of the ingredients of human general intelligence.

Comment by vladimir_nesov on Vanessa Kosoy's Shortform · 2020-09-27T11:23:13.138Z · score: 2 (1 votes) · LW · GW

To me this seems to be essentially another limitation of the human Internet archive dataset: reasoning is presented in an opaque way (most slow/deliberative thoughts are not in the dataset), so it's necessary to do a lot of guesswork to figure out how it works. A better dataset both explains and summarizes the reasoning (not to mention gets rid of the incoherent nonsense, but even GPT-3 can do that to an extent by roleplaying Feynman).

Any algorithm can be represented as a habit of thought (Turing machine style, if you must), and if such habits are in the dataset, they can be learned. The habits of thought that are simple enough to summarize get summarized and end up requiring fewer steps. My guess is that the human faculties needed for AGI can both be represented by sequences of thoughts (probably just text, stream-of-consciousness style) and be easily learned with current ML. So right now the main obstruction is that it's not feasible to build a dataset that represents those faculties explicitly and is good enough and large enough for current sample-inefficient ML to grok. More compute in the learning algorithm is only relevant to this to the extent that it gets us a better dataset generator that can work more reliably on the tasks before it.

Comment by vladimir_nesov on Vanessa Kosoy's Shortform · 2020-09-27T09:13:29.894Z · score: 2 (1 votes) · LW · GW

This seems similar to gaining uploads prior to AGI, and opens up all those superorg upload-city amplification/distillation constructions which should get past human level shortly after. In other words, the limitations of the dataset can be solved by amplification as soon as the AIs are good enough to be used as building blocks for meaningful amplification, and something human-level-ish seems good enough for that. Maybe even GPT-n is good enough for that.

Comment by vladimir_nesov on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-27T08:58:07.911Z · score: 8 (5 votes) · LW · GW

Re: downvotes on the parent comment. Offers of additional (requested!) information shouldn't be punished, or else you create an additional incentive to be secretive, to ignore requests for information.

Comment by vladimir_nesov on If Starship works, how much would it cost to create a system of rotable space mirrors that reduces temperatures on earth by 1° C? · 2020-09-15T16:45:03.436Z · score: 5 (4 votes) · LW · GW

I think it's useful to distinguish knowledge of truth from gears-level understanding; these two different things can occur in any combination. Your point is that attaining specific understanding of a plan that's good enough to make the estimate in question is a hopeless endeavor, and you list particular issues with getting such a plan fleshed out.

But it's also possible to know truths about the world without understanding why they are true or how they came to be known (originally). The main example of this is seeking expert consensus in an area you don't understand: by finding out what the consensus is, you get a reasonable credence in what the truth of the matter is, without necessarily understanding why it's this way, or how specifically anyone came to know it's this way.

This post asks for a Fermi estimate, which is another way in which a very vague model can yield truths about the world. Even if a detailed model is unattainable, such truths might be in reach.

(It's often a lost purpose to seek truths about the world instead of seeking understanding, so it's natural to scorn some forms of pursuit of truths. I have a lot of sympathy for this position. That doesn't make such forms of pursuit of truths unworkable, just not relevant to improving understanding of what's going on.)

Comment by vladimir_nesov on What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers · 2020-09-12T15:13:18.208Z · score: 16 (6 votes) · LW · GW

A typical paper doesn't just contain factual claims about standard questions, but also theoretical discussion and a point of view on the ideas that form the fabric of a field. Papers are often referenced to clarify the meaning of a theoretical discussion, or to give credit for inspiring the direction in which the discussion moves. This aspect doesn't significantly depend on the truth of the findings of particular studies, because an interesting concept motivates many studies that both experimentally investigate and theoretically discuss it. Some of the studies will be factually bogus, but the theoretical discussion in them might still be relevant to the concept, and useful for subsequent good studies.

So a classification of citations into positive and negative ignores this important third category, something like a conceptual-reference citation.

Comment by vladimir_nesov on On Defining your Terms · 2020-08-19T14:18:16.711Z · score: 2 (1 votes) · LW · GW

A productive purpose of giving definitions to your interlocutor is to incite cognitive activity that's relevant to your own thoughts, as levers that bring about thinking on the right topic. Apt definitions are great, when available, but demanding them when they are not is losing sight of the purpose of the whole activity. Formulating definitions (or researching existing ones) might be a good subproblem to focus on though.

For vague ideas it makes little sense to use definitions that sharply delineate their instances. A definition should instead give a degree of aptness (centrality) to its potential instances, and definitions should match based on the whole distribution of aptness they assign.

With free will, and similarly with possible worlds, I think a crucial point missing from discussion on LW is that these are semantic notions, and it's possible to consider radically different semantics for the same syntactic thing. A great illustration is the semantics of programming languages, where it's possible to understand what a given program is doing in terms of very different semantic constructions. To give some examples, there are the straightforward sets of possible values, related worlds in Kripke semantics, formal theories of observations (that are not even maximal) in Scott domains, and plays and strategies in game semantics. Closer to normal mathematics, there are the internal languages of categories that let us interpret a term as something incarnated in very different situations, notably sheaf toposes, where you can interpret a construction (as in discussion, argument, proof, term, type) as varying continuously over some space and saying different things at different places, all at the same time.
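
To make the "same syntax, different semantics" point concrete, here is a minimal sketch (my own toy illustration, with an invented tuple-based expression syntax, not any of the constructions named above): the same expression is given a standard one-value-per-world semantics and a cruder "set of possible values" semantics.

```python
from itertools import product

# Syntax: a tiny expression language as nested tuples.
# ("lit", n), ("input",), ("add", x, y), ("mul", x, y)

def eval_standard(expr, inp):
    """Standard semantics: an expression denotes a single number, given one input."""
    tag = expr[0]
    if tag == "lit":
        return expr[1]
    if tag == "input":
        return inp
    if tag == "add":
        return eval_standard(expr[1], inp) + eval_standard(expr[2], inp)
    if tag == "mul":
        return eval_standard(expr[1], inp) * eval_standard(expr[2], inp)
    raise ValueError(tag)

def eval_possible(expr, inputs):
    """Alternative semantics: an expression denotes the set of values it could take
    over a set of possible inputs (a crude stand-in for 'sets of possible values')."""
    tag = expr[0]
    if tag == "lit":
        return {expr[1]}
    if tag == "input":
        return set(inputs)
    if tag in ("add", "mul"):
        op = (lambda a, b: a + b) if tag == "add" else (lambda a, b: a * b)
        return {op(a, b) for a, b in product(eval_possible(expr[1], inputs),
                                             eval_possible(expr[2], inputs))}
    raise ValueError(tag)

expr = ("add", ("lit", 1), ("mul", ("input",), ("lit", 2)))
print(eval_standard(expr, 3))          # 7  -- one world, one value
print(eval_possible(expr, {1, 2, 3}))  # {3, 5, 7}  -- same syntax, different semantics
```

The constructions listed above (Kripke frames, Scott domains, game semantics, sheaf toposes) are of course far richer than this, but the shape of the move is the same: keep the syntax fixed and vary what it is taken to denote.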

So when considering a thought experiment, assuming that what's going on is that there is some discrete set of possible worlds is very limiting. There are other ways to think about things, and thus prematurely locking a discussion into a technical definition can be a problem, manifesting as a failure to notice, or an urge to ridicule, the notions that arise from other definitions and are inapt for this one. (Not formulating technical definitions at all is another problem.)

Comment by Vladimir_Nesov on [deleted post] 2020-07-26T01:23:36.656Z

We can't just screen Prof. Leonard for the 5,000 American college calc I classes taught every year, because many students want a classroom experience and teachers need jobs.

Re "teachers need jobs": If you get a better experience without teachers, you could just hire the teachers to do nothing.

Comment by vladimir_nesov on Bob Jacobs's Shortform · 2020-07-18T23:20:37.058Z · score: 2 (1 votes) · LW · GW

(I think making arguments clear is more meaningful than using them for persuasion.)

Comment by vladimir_nesov on Bob Jacobs's Shortform · 2020-07-18T20:48:40.249Z · score: 2 (1 votes) · LW · GW

It's not clear what "subjective idealism is correct" means, because it's not clear what "a given thing is real" means (at least in the context of this thread). It should be more clear what a claim means before it makes sense to discuss levels of credence in it.

If we are working with credences assigned to hypotheticals, the fact that the number of disjoint hypotheticals incompatible with some hypothetical S is large doesn't in itself make them (when considered all together) more probable than S. (A sum of an infinite number of small numbers can still be small.)
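
As a concrete instance of the parenthetical (my own illustrative numbers): give S credence 1/2 and the n-th disjoint alternative credence 2^{-(n+2)}; then the alternatives taken together are still less probable than S,

$$\sum_{n=1}^{\infty} 2^{-(n+2)} \;=\; \tfrac{1}{4} \;<\; \tfrac{1}{2} \;=\; P(S).$$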

Working with credences in hypotheticals is not the only possible way to reason. If we are talking about weird things like subjective idealism, assumptions about epistemics are not straightforward and should be considered.

Comment by vladimir_nesov on Null-boxing Newcomb’s Problem · 2020-07-14T13:34:49.557Z · score: 6 (4 votes) · LW · GW

If the trickster god personally reads the prediction, their behavior can depend on the prediction, which makes diagonalization possible (ask the trickster god what the prediction was, then do the opposite). This calls the claim of 100% precision of the predictor into question (or at least makes the details of its meaning relevant).
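
Here's a minimal sketch of the diagonalization step (toy code of my own, with placeholder one-box/two-box names): an agent that reads the announced prediction and inverts it leaves the predictor with no consistent prediction to announce.

```python
def trickster(prediction: str) -> str:
    """An agent that learns the prediction and does the opposite."""
    return "two-box" if prediction == "one-box" else "one-box"

def announce_prediction(agent) -> str:
    """Any predictor that must reveal its prediction to the predicted agent:
    a correct prediction would be a fixed point of the agent's response."""
    for guess in ("one-box", "two-box"):
        if agent(guess) == guess:
            return guess
    return "no consistent prediction exists"

print(announce_prediction(trickster))  # no fixed point, so 100% precision fails here
```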

Comment by vladimir_nesov on Maximal Ventilation · 2020-07-11T21:19:31.652Z · score: 5 (3 votes) · LW · GW

Not doubting it at all

I don't think doubting should be socially frowned upon, or considered uncalled-for, when one doesn't see an argument that makes a claim evident.

Comment by vladimir_nesov on Why Ranked Choice Voting Isn't Great · 2019-10-20T14:44:40.617Z · score: 4 (2 votes) · LW · GW

Maybe "near-fatal" is too strong a word, the comment I replied to also had examples. Existence of examples doesn't distinguish winning from survival, seeing some use. I understand the statement I replied to as meaning something like "In 200 years, if the world remains mostly as we know it, the probability that most elections use cardinal voting methods is above 50%". This seems implausible to me for the reasons I listed, hence the question about what you actually meant, perhaps my interpretation of the statement is not what you intended. (Is "long run" something like 200 years? Is "winning" something like "most elections of some kind use cardinal voting methods"?)

Comment by vladimir_nesov on Why Ranked Choice Voting Isn't Great · 2019-10-20T06:14:33.303Z · score: 2 (1 votes) · LW · GW

Cardinal voting methods will win in the long run.

(What kind of long run? Why is this to be expected?) Popularity is not based only on merit, and being more complicated than the simplest, most familiar method sounds like a near-fatal disadvantage. Voting being involved with politics makes it even harder for good arguments to influence what actually happens.

Comment by vladimir_nesov on Maybe Lying Doesn't Exist · 2019-10-20T04:02:23.535Z · score: 6 (3 votes) · LW · GW

The problem with unrestrained consequentialism is that it accepts no principles in its designs. An agent that only serves a purpose has no knowledge of the world or of mathematics; it makes no plans and maintains no goals. It is what it needs to be, and no more. All these things are only expressed as aspects of its behavior, godshatter of the singular purpose, but there is no part that seeks excellence in any of the aspects.

For an agent designed around multiple aspects, its parts rely on each other in dissimilar ways, not as subagents with different goals. Access to knowledge is useful for planning and can represent goals. Exploration and reflection refine knowledge and formulate goals. Planning optimizes exploration and reflection, and leads to achievement of goals.

If the part of the design that should hold knowledge accepts a claim for reasons other than arguments about its truth, the rest of the agent can no longer rely on its claims as reflecting knowledge.

Of course you'd have to also patch the specification

In my comment, I meant the situation where the specification is not patched (and by specification in the programming example I meant the informal description on the level of procedures or datatypes that establishes some principles of what it should be doing).

In the case of appeal to consequences, the specification is a general principle that a map reflects the territory to the best of its ability, so it's not a small thing to patch. Optimizing a particular belief according to the consequences of holding it violates this general specification. If the general specification is patched to allow this, you no longer have access to straightforwardly expressed knowledge (there is no part of cognition that satisfies the original specification).

Alternatively, specific beliefs could be marked as motivated, so the specification is to have two kinds of beliefs, with some of them surviving to serve the original purpose. This might work, but then actual knowledge that corresponds to the motivated beliefs won't be natively available, and it's unclear what the motivated beliefs should be doing. Will curiosity act on the motivated beliefs, should they be used for planning, can they represent goals? A more developed architecture for reliable hypocrisy might actually do something sensible, but it's not a matter of merely patching particular beliefs.

Comment by vladimir_nesov on Maybe Lying Doesn't Exist · 2019-10-19T14:24:16.744Z · score: 16 (5 votes) · LW · GW

correctly weigh these kinds of considerations against each other on a case-by-case basis

The very possibility of intervention based on weighing map-making and planning against each other destroys their design, if they are to have a design. It's similar to patching a procedure in a way that violates its specification in order to improve overall performance of the program or to fix an externally observable bug. In theory this can be beneficial, but in practice the ability to reason about what's going on deteriorates.
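
For illustration, a minimal sketch of that analogy (a hypothetical helper of my own, not anything from the post): the patch makes one observable symptom go away, but only by breaking the property callers were entitled to rely on.

```python
def dedupe_adjacent(xs):
    """Specification: return xs with runs of consecutive duplicates collapsed,
    preserving the original order. Callers may rely on order preservation."""
    out = []
    for x in xs:
        if not out or out[-1] != x:
            out.append(x)
    return out

def dedupe_adjacent_patched(xs):
    """Patched to 'fix' a bug report about duplicates surviving in unsorted input,
    by silently sorting first. The symptom is gone, but the specification (order
    preservation) is violated, so callers can no longer reason about the result."""
    return dedupe_adjacent(sorted(xs))
```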

Comment by vladimir_nesov on Towards a mechanistic understanding of corrigibility · 2019-09-24T03:56:16.588Z · score: 2 (1 votes) · LW · GW

I agree that exotic decision algorithms or preference transformations are probably not going to be useful for alignment, but I think this kind of activity is currently more fruitful for theory building than directly trying to get decision theory right. It's just that the usual framing is suspect: instead of being presented as exploration of the decision theory landscape via clearly broken/insane-acting/useless but not yet well-understood constructions, these things are pitched (and chosen) for their perceived use in alignment.

Comment by vladimir_nesov on TurnTrout's shortform feed · 2019-09-22T05:55:58.633Z · score: 2 (1 votes) · LW · GW

I suspect that it doesn't matter how accurate or straightforward a predictor is in modeling people. What would make prediction morally irrelevant is that it's not noticed by the predicted people, irrespective of whether this happens because the prediction spreads the moral weight conferred to them over many possibilities (making it inaccurate), because it keeps the representation sufficiently baroque, or for some other reason. In the case of inaccurate prediction or baroque representation, it probably does become harder for the predicted people to notice being predicted, and I think this is the actual source of moral irrelevance, not those things on their own. A more direct way of getting the same result is to predict counterfactuals where the people you reason about don't notice the fact that you are observing them, which also gives a form of inaccuracy (imagine that your predicting them is part of their prior; that'll drive the counterfactual further from reality).

Comment by vladimir_nesov on Open & Welcome Thread - September 2019 · 2019-09-17T20:04:52.720Z · score: 4 (2 votes) · LW · GW

This still puts these comments in Recent Comments on GreaterWrong, and the fact that they can't be seen on the LessWrong All Comments page is essentially a bug.

Comment by vladimir_nesov on A Critique of Functional Decision Theory · 2019-09-15T14:11:22.815Z · score: 10 (2 votes) · LW · GW

By the way, selfish values seem related to the reward vs. utility distinction. An agent that pursues a reward that's about particular events in the world rather than a more holographic valuation seems more like a selfish agent in this sense than a maximizer of a utility function with a small-in-space support. If a reward-seeking agent looks for reward channel shaped patterns instead of the instance of a reward channel in front of it, it might tile the world with reward channels or search the world for more of them or something like that.

Comment by vladimir_nesov on Proving Too Much (w/ exercises) · 2019-09-15T13:54:43.526Z · score: 6 (3 votes) · LW · GW

"I think, therefore I am."

(This is also incorrect, because considering a thinking you in a counterfactual makes sense. Many UDTish examples demonstrate that this principle doesn't hold.)

Comment by vladimir_nesov on Formalising decision theory is hard · 2019-09-14T18:38:24.339Z · score: 4 (2 votes) · LW · GW

I was never convinced that "logical ASP" is a "fair" problem. I once joked with Scott that we can consider a "predictor" that is just the single line of code "return DEFECT" but in the comments it says "I am defecting only because I know you will defect."

I'm leaning this way as well, but I think it's an important clue to figuring out commitment races. ASP Predictor, DefectBot, and a more general agent will make different commitments, and these things are already algorithms specialized for certain situations. How is the chosen commitment related to what the thing making the commitment is?

When an agent can manipulate a predictor in some sense, what should the predictor do? If it starts scheming with its thoughts, it's no longer a predictor; it's just another agent that wants to do something "predictory". Maybe it can only give up, as in ASP, which acts as a precommitment that's more thematically fitting for a predictor than for a general agent. It's still a commitment race then, but possibly the meaning of something being a predictor is preserved by restricting the kind of commitment that it makes: the commitment of a non-general agent is what it is rather than what it does, and a general agent is only committed to its preference. Thus a general agent loses all knowledge in an attempt to out-commit others, because it hasn't committed to that knowledge, hasn't made it part of what it is.

Comment by vladimir_nesov on G Gordon Worley III's Shortform · 2019-09-13T16:48:51.968Z · score: 9 (4 votes) · LW · GW

(By "belief" I meant a belief that talkes place in someone's head, and its existence is not necessarily communicated to anyone else. So an uttered statement "I think X" is a declaration of belief in X, not just a belief in X. A belief in X is just a fact about that person's mind, without an accompanying declaration. In this framing, the version of the norm about beliefs (as opposed to declarations) is the norm not to think certain thoughts, not a norm to avoid sharing the observations about the fact that you are thinking them.)

I think treating declarations of "I think X" and "it's true that X" as saliently distinct is a bad thing, as described in this comment. The distinction is that in the former case you might lack arguments for the belief. But if you don't endorse the belief, it's no longer a belief, and "I think X" describes a bug in the mind that shouldn't be called a "belief". If you do endorse it, then "I think X" does mean "X". It is plausibly a true statement about the state of the universe, you just don't know why; your mind inscrutably says that it is, and you are inclined to believe it, pending further investigation.

So the statement "I think this is true of other people in spite of their claims to the contrary" should mean approximately the same as "This is true of other people in spite of their claims to the contrary", and a meaningful distinction only appears with actual arguments about those statements, not with different placement of "I think".

Comment by vladimir_nesov on G Gordon Worley III's Shortform · 2019-09-12T15:55:52.237Z · score: 10 (5 votes) · LW · GW

criticizing people who don't justify their beliefs with adequate evidence and arguments

I think justification is in the nature of arguments, but not necessary for beliefs or declarations of beliefs. A belief offered without justification is a hypothesis called to attention. It's concise, and if handled carefully, it can be sufficient for communication. As evidence, it's a claim about your own state of mind, which holds a lot of inscrutable territory that nonetheless can channel understanding that doesn't yet lend itself to arguments. Seeking arguments is certainly a good thing, to refactor and convey beliefs, but that's only a small part of how human intelligence builds its map.

Comment by vladimir_nesov on G Gordon Worley III's Shortform · 2019-09-12T14:33:59.451Z · score: 6 (3 votes) · LW · GW

there is absolutely a time and a place for this

That's not the point! Zack is talking about beliefs, not their declaration, so it's (hopefully) not the case that there is "a time and a place" for certain beliefs (even when they are not announced), or that beliefs require ability and willingness to justify them (at least for some senses of "justify" and "belief").

Comment by vladimir_nesov on G Gordon Worley III's Shortform · 2019-09-12T14:27:18.916Z · score: 11 (6 votes) · LW · GW

So it's a moral principle under the belief vs. declaration distinction (as in this comment). In that case I mostly object to not making that distinction (a norm to avoid beliefs of that form is on entirely different level than a norm to avoid their declarations).

Personally I don't think the norm about declarations is a good thing on net, especially on LW, as it inhibits talking about models of thought. The examples you mentioned are important, but they should be covered by a more specialized norm that doesn't cause as much collateral damage.

Comment by vladimir_nesov on G Gordon Worley III's Shortform · 2019-09-12T02:38:31.260Z · score: 7 (4 votes) · LW · GW

That's one way for my comment to be wrong, as in "Systematic recurrence of preventable epistemic errors is morally abhorrent."

When I was writing the comment, I was thinking of another way it's wrong: given the morality vs. axiology distinction, and the distinction between a belief and disclosure of that belief, it might well be the case that it's a useful moral principle to avoid declaring beliefs about what others think, especially when those others disagree with the declarations. In that case it's a violation of this principle, a moral wrong, to declare such beliefs. (A principle like this gets in the way of honesty, so promoting it is contentious and shouldn't be an implicit background assumption. And the distinction between a belief and its declaration was not clearly made in the above discussion.)

Comment by vladimir_nesov on G Gordon Worley III's Shortform · 2019-09-12T01:30:38.792Z · score: 19 (7 votes) · LW · GW

[He] does and will regularly decide that he knows better than other people what's going on in those other people's heads. [...] Personally, I find it unjustifiable and morally abhorrent.

How can it be morally abhorrent? It's an epistemic issue. Factual errors often lead to bad consequences, but that doesn't make those errors moral errors. A moral error is an error about a moral fact, an assignment of value to situations, as opposed to a prediction of what's going on. And what someone thinks is a factual question, not a question of assigning value to an event.

Comment by vladimir_nesov on The 3 Books Technique for Learning a New Skilll · 2019-09-09T22:29:55.481Z · score: 5 (4 votes) · LW · GW

SICP is a "Why" book, one of the few timeless texts on the topic. It's subsumed by studying any healthy functional programming language to a sufficient extent (idiomatic use of control operator libraries, not just syntax), but it's more straightforward to start with reading the book.

Comment by vladimir_nesov on Open & Welcome Thread - September 2019 · 2019-09-09T22:02:04.202Z · score: 5 (3 votes) · LW · GW

Not unless the traffic increases severalfold, to the point where it would be too much trouble to even skim everything. Skimming the content can turn up interesting things from unfamiliar authors under uninterestingly titled topics, and this can't be recovered by going subscription-only.

Comment by vladimir_nesov on Open & Welcome Thread - September 2019 · 2019-09-09T21:51:11.226Z · score: 2 (1 votes) · LW · GW

Most users don't read through every single comment. [...] dedicated power-user-comment readers [...]

This is not my use case. I mostly skim based on the author, post title, and votes. I don't want to miss certain things, but I'm also completely ignoring (not reading) most discussions.

Comment by vladimir_nesov on Open & Welcome Thread - September 2019 · 2019-09-09T20:52:29.240Z · score: 7 (4 votes) · LW · GW

Does the current LW design let one find the All Comments page, or is this feature no longer intended to be used? I couldn't find any mention of it. I'm a bit worried, since this is the main way in which I've always interacted with the site. (Thankfully this is available on GreaterWrong as a primary feature.)

(Incidentally, an issue I have with the current implementation of All Comments is that negatively voted comments disappear and there is no way of getting them to show. IIRC they used to show in a collapsed form, but now they are just absent. A harder-to-settle and less important issue is that reading multiple days' worth of comments is inconvenient, because there are no URLs for enumerating pages of older comments. GreaterWrong has neither of these issues.)

Comment by vladimir_nesov on Looking for answers about quantum immortality. · 2019-09-09T20:19:03.185Z · score: 2 (1 votes) · LW · GW

To be clear: your argument is that every human being who has ever lived may suffer eternally after death, and there are good reasons for not caring...?

It's not my argument, but it follows from what I'm saying, yes. Even if people should care about this, there are probably good reasons not to, just not good enough to tilt the balance. There are good reasons for all kinds of wrong conclusions, it should be suspicious when there aren't. Note that caring about this too much is the same as caring about other things too little. Also, as an epistemic principle, appreciation of arguments shouldn't depend on consequences of agreeing with them.

How does our subjective suffering improve anything in the worlds where you die?

Focusing effort on the worlds where you'll eventually die (as well as the worlds where you survive in a normal non-QI way) improves them at the cost of neglecting the worlds where you eternally suffer for QI reasons.

Comment by vladimir_nesov on Looking for answers about quantum immortality. · 2019-09-09T18:58:39.025Z · score: 3 (3 votes) · LW · GW

You, as in the person you are right now, is going to experience that.

This has the same issue with "is going to experience" as the "you will always find" I talked about in my first comment.

Not a infinitesimal proportion of other 'yous' while the majority die. Your own subjective experience, 100% of it.

Yes. All of the surviving versions of myself will experience their survival. This happens with extremely small probability. I will experience nothing else. The rest of the probability goes to the worlds where there are no surviving versions of myself, and I won't experience those worlds. But I still value those worlds more than the worlds that have surviving versions of myself. The things that happen to all of my surviving subjective experiences matter less to me than the things that I won't experience happening in the other worlds. Furthermore, I believe that not as a matter of unusual personal preference, but for general reasons about the structure of valuing of things that I think should convince most other people, see the links in the above comments.

Comment by vladimir_nesov on Looking for answers about quantum immortality. · 2019-09-09T17:06:53.861Z · score: 3 (3 votes) · LW · GW

I don't see how it refutes the possibility of QI, then.

See the context of that phrase. I don't see how it could be about "refuting the possibility of QI". (What is "the possibility of QI"? I don't find anything wrong with QI scenarios themselves, only with some arguments about them, in particular the argument that their existence has decision-relevant implications because of conditioning on subjective experience. I'm not certain that they don't have decision-relevant implications that hold for other reasons.)

[We] (as in our internal subjective experience) will continue on only in branches where we stay alive.

This seems tautologously correct. See the points about moral value in the grandparent comment and in the rest of this comment for what I disagree with, and why I don't find this statement relevant.

Since I care about my subjective internal experience, I wouldn't want it to suffer

Neither would I. But this is not all that people care about. We also seem to care about what happens outside our subjective experience, and in quantum immortality scenarios that component of value (things that are not personally experienced) is dominant.

Comment by vladimir_nesov on Looking for answers about quantum immortality. · 2019-09-09T15:56:34.318Z · score: 3 (3 votes) · LW · GW

Nothing is technically impossible with quantum mechanics.

By "essentially impossible" I meant "extremely improbable". The word "essentially" was meant to distinguish this from "physically impossible".

You're not understanding that all of our measure is going into those branches where we survive.

There is a useful distinction between knowing the meaning of an idea and knowing its truth. I'm disagreeing with the claim that "all of our measure is going into those branches where we survive", understood in the sense that only those branches have moral value (see What Are Probabilities, Anyway?), in particular the other branches taken together have less value. See the posts linked from the grandparent comment for a more detailed discussion (I've edited it a bit).

That meaning could be different from the one you intend, in which case I'm not understanding your claim correctly, and I'm only disagreeing with my incorrect interpretation of it. But in that case what I'm failing to understand is what you mean by "all of our measure is going into those branches where we survive", not the truth of "all of our measure is going into those branches where we survive" in the sense you intend, because the latter would require me to know the intended meaning of the claim first, at which point it becomes possible for me to fail to understand its truth.

Comment by vladimir_nesov on Looking for answers about quantum immortality. · 2019-09-09T15:33:01.752Z · score: 2 (1 votes) · LW · GW

Hence you will always find your subjective self in that improbable branch.

The meaning of "you will always find" has a connotation of certainty or high probability, but we are specifically talking about essentially impossible outcomes. This calls for tabooing "you will always find" to reconcile an intended meaning with extreme improbability of the outcome. Worrying about such outcomes might make sense when they are seen as a risk on the dust speck side of Torture vs. Dust Specks (their extreme disutility overcomes their extreme improbability). But conditioning on survival seems to be a wrong way of formulating values (see also), because the thing to value is the world, not exclusively subjective experience, even if subjective experience manages to get significant part of that value.

Comment by vladimir_nesov on A Game of Giants [Wait But Why] · 2019-09-06T05:28:32.675Z · score: 2 (1 votes) · LW · GW

The likelihood of arriving at anything like the right answer seems low

In the useful version of this activity, arriving at right answers is not relevant. Instead, you are collecting tools for thinking about a topic, which is mostly about being able to hold and manipulate ideas in your mind, including incorrect ideas. At some point, you get to use those tools to understand what others have figured out, or what's going on in the real world. This framing opposes the failure mode where you learn facts without being able to grasp what they even mean or why they hold.

Comment by vladimir_nesov on Machine Learning Analogy for Meditation (illustrated) · 2019-08-13T16:15:02.000Z · score: 4 (2 votes) · LW · GW

thoughts being the cause of actions is related to a central strategy of many people around here

(It's a good reason to at least welcome arguments against this being the case. If your central strategy is built on a false premise, you should want to know. It might be pointless to expect any useful info in this direction, but I think it's healthier to still want to see it emotionally even when you decide that it's not worth your time to seek it out.)

Comment by vladimir_nesov on Open & Welcome Thread - August 2019 · 2019-08-13T11:01:05.276Z · score: 3 (2 votes) · LW · GW

Another important takeaway from this observation is that there is no point in rebalancing a portfolio of stocks with any regularity, which makes hand-made stock portfolios almost as efficient (in hassle and expenses) as index ETFs. Rebalancing is only useful for keeping the portfolio reasonably diversified and for getting rid of stocks that risk reduced liquidity. This is where index ETFs fall short of the mark of what makes them a good idea: a better ETF should stop following the distribution of an index and only use an index as a catalogue of liquid stocks. Given how low TERs get in large funds, this doesn't really matter, and accountability/regulation is easier when keeping to the distribution from an index, but smaller funds could get lower TERs by following this strategy while retaining all benefits (except the crucial marketing benefit of being able to demonstrate how their performance keeps up with an index). For the same reason, cap-weighted index ETFs are worse by being less diversified (which actually makes some index ETFs that hold a lot of stocks a bad choice), while equal-weight ETFs are worse by rebalancing all the time (to the point where they can't get a low TER at all).

Aside from that, a very low-TER index fund (one that's not too unbalanced due to cap-weighting) is more diversified than a hand-made portfolio with 30 stocks, without losing expected money, so one can use a bit more leverage with it to get a similar risk profile with a bit more expected money (leveraged ETFs are hard to judge, but one could make a portfolio that holds some non-leveraged index ETFs in a role similar to that of bonds in a conservative allocation, i.e. as the lower-risk part, and some self-contained leveraged things in the rest of it).

There might also be tax benefits to how an index fund handles dividends, getting more expected money than holding the same stocks directly (not sure if this happens in the US, or how it depends on tax brackets). Similarly, stocks that pay no dividends might be better for a hand-made portfolio (and there is less hassle with receiving/reinvesting dividends, or with having to personally declare taxes for them if they are not automatically withheld higher up in the chain in your jurisdiction).

you could beat an index fund

(Replying to the phrase, not its apparent meaning in context.) All liquid stocks give the same expected money as each other, and the same as all indices composed of them. Different distributions of stocks will have different actual outcomes, some greater than others. So of course one can beat an index in an actual outcome (this will happen about half the time). A single leveraged stock gives more expected money than any non-leveraged index fund (or any non-leveraged stock), yet makes a very poor investment, which illustrates that beating an index in expectation is also not what anyone's after.
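
To make the last point concrete, here's a minimal sketch with toy numbers of my own (not a claim about any real asset): a heavily leveraged position can have a much higher expected final wealth while its typical outcome is a large loss.

```python
import math
from itertools import product
from statistics import mean, median

# One "year": the leveraged position either doubles or loses 60%, equally likely.
UP, DOWN = 2.0, 0.4
YEARS = 20

# Enumerate all 2**20 equally likely 20-year paths exactly (no sampling noise).
outcomes = [math.prod(path) for path in product((UP, DOWN), repeat=YEARS)]

print(f"mean final wealth:   {mean(outcomes):8.2f}")    # 1.2**20 ~ 38.3x the stake
print(f"median final wealth: {median(outcomes):8.4f}")  # 0.8**10 ~ 0.11x, an ~89% loss
```

The mean is carried by a small number of extremely lucky paths, which is why "more expected money" on its own doesn't make something a good investment.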

Comment by vladimir_nesov on Power Buys You Distance From The Crime · 2019-08-12T10:40:51.377Z · score: 2 (1 votes) · LW · GW

That's relevant to the example, but not to the argument. Consider a hypothetical Jessica less interested in conflict theory, or a topic other than conflict theory. Also, common knowledge doesn't seem to play a role here, and "doesn't know about" is a level of taboo that contradicts the assumption I posited about the argument from the selection effect being "well-known".

Comment by vladimir_nesov on Power Buys You Distance From The Crime · 2019-08-12T09:07:13.790Z · score: 2 (3 votes) · LW · GW

Would you correct your response accordingly? (Should you?) If the target audience tends to act similarly, so would they.

Aside from that, "How do you explain X?" is really ambiguous and anchors on well-understood rather than apt framing. "Does mistake theory explain this case well?" is better, because you may well use a bad theory to think about something while knowing it's a bad theory for explaining it. If it's the best you can do, at least this way you have gears to work with. Not having a counterfactually readily available good theory because it's taboo and wasn't developed is of course terrible, but it's not a reason to embrace the bad theory as correct.

Comment by vladimir_nesov on Power Buys You Distance From The Crime · 2019-08-12T00:27:51.281Z · score: 2 (1 votes) · LW · GW

Is "well-known" good enough here, or do you actually need common knowledge?

There is no need for coordination or dependence on what others think. If you expect yourself to be miscalibrated, you just fix that. If most people act this way and accept the argument that convinced you, then you expect them to have done the same.

Comment by vladimir_nesov on Power Buys You Distance From The Crime · 2019-08-11T23:15:27.384Z · score: 12 (4 votes) · LW · GW

But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion on the community's beliefs.

Expected infrequent discussion of a theory shouldn't lower estimates of its probability. (Does the intuition that such theories should be seen as less likely follow from most natural theories predicting discussion of themselves? Erroneous theorizing also predicts that, for example: "If this statement is correct, it will be the only topic of all future discussions.")

In general, it shouldn't be possible to expect well-known systematic distortions for any reason, because they should've been recalibrated away immediately. What not discussing a theory should cause is lack of precision (or progress), not systematic distortion.

Comment by vladimir_nesov on Karma-Change Notifications · 2019-08-09T09:26:17.917Z · score: 2 (1 votes) · LW · GW

I think updates were less frequent recently (e.g. zero updates from last week). This should still happen for some people, though there are maybe only about a hundred users with a similar number of ancient comments.

Comment by vladimir_nesov on Compilers/PLs book recommendation? · 2019-08-06T04:01:25.564Z · score: 4 (2 votes) · LW · GW

A good selection of topics on static analysis is in

  • F Nielson, HR Nielson, C Hankin. Principles of Program Analysis

Some prerequisites for it and other relevant things can be picked up from

  • K Cooper, L Torczon. Engineering a Compiler
  • HR Nielson, F Nielson. Semantics with Applications: An Appetizer

Comment by vladimir_nesov on Do bond yield curve inversions really indicate there is likely to be a recession? · 2019-07-10T06:26:08.019Z · score: 4 (2 votes) · LW · GW

More generally, market timers lose hard.

Do you mean they make a portfolio that's too conservative, so that lost money becomes lost utility? Or do they lose in some other way? (This sounds superficially similar to claims that there are stock/futures trading strategies that systematically lose money other than on fees or spread, which I think can't happen because the opposite strategies would then systematically make money.)
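
A quick way to see the parenthetical claim: if a strategy's per-trade profit before fees and spread is a random variable X with E[X] < 0, then the mirror strategy that takes the opposite side of every trade earns -X per trade, and

$$E[-X] = -E[X] > 0,$$

so a strategy that systematically lost money other than on costs would hand a systematic profit to whoever traded against it.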

Comment by vladimir_nesov on Black hole narratives · 2019-07-09T04:02:48.292Z · score: 2 (1 votes) · LW · GW

From the other side, agreement is often not real. People agree out of politeness, or agree with a distorted (perhaps more salient but less relevant to the discussion) version of a claim, without ensuring it's not a bucket error. There's some use in keeping beliefs unchanged, but not in failing to understand the meaning of the claim under discussion (when it has one). So agreement (especially your own, as it's easier to fix) should be treated with scepticism.