Posts

Follow Standard Incentives 2018-04-25T01:09:05.410Z

Comments

Comment by query on The Curious Prisoner Puzzle · 2018-06-16T05:22:09.238Z · LW · GW

P(vulcan mountain | you're not in vulcan desert) = 1/3

P(vulcan mountain | guard says "you're not in vulcan desert") = P(guard says "you're not in vulcan desert" | vulcan mountain) * P(vulcan mountain) / P(guard says "you're not in vulcan desert") = ((1/3) * (1/4)) / ((3/4) * (1/3)) = 1/3
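
(A minimal enumeration sketch of the same calculation -- my own illustration, not part of the original puzzle statement. It assumes four equally likely cells, with the two non-Vulcan names as placeholders, and a guard who uniformly names one of the three cells you're not in.)

```python
from fractions import Fraction

# Four equally likely cells; "other cell A/B" are placeholder names.
cells = ["vulcan mountain", "vulcan desert", "other cell A", "other cell B"]
prior = {c: Fraction(1, 4) for c in cells}

def p_says_not_desert(c):
    # Assumed guard algorithm: uniformly name one of the three cells you are NOT in.
    return Fraction(0) if c == "vulcan desert" else Fraction(1, 3)

evidence = sum(prior[c] * p_says_not_desert(c) for c in cells)  # = 1/4
print(prior["vulcan mountain"] * p_says_not_desert("vulcan mountain") / evidence)  # 1/3
```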

Whoops, you're right; never mind! There are algorithms that do give different results, such as the ones justinpombrio mentions above.

Comment by query on The Curious Prisoner Puzzle · 2018-06-16T04:35:35.993Z · LW · GW

EDIT: This was wrong.

The answer varies with the generating algorithm of the statement the guard makes.

In this example, he told you that you were not in one of the places you're not in (the Vulcan Desert). If he always does this, then the probability is 1/4; if you had been in the Vulcan Desert, he would have told you that you were not in one of the other three.

If he always tells you whether or not you're in the Vulcan Desert, then once you hear him say you're not your probability of being in the Vulcan Mountain is 1/3.

Comment by query on Affordance Widths · 2018-05-11T01:34:32.926Z · LW · GW

Definitely makes sense. A commonly cited example is women in an office workplace; assertiveness that would be average for a man is considered "bitchy", but women still suffer roughly the same "weak" penalties for non-assertiveness.

With the advice-giving aspect, some situations likely come from people not knowing what levers they're actually pulling. Adam tells David to move his "assertiveness" lever, but there's no affordance gap available to David by moving that lever -- he would actually have to move an "assertiveness + social skill W" lever which he doesn't have, but which feels to Adam like a single lever called "assertiveness". Not all situations are like this; there's no "don't be a woman" or "don't be autistic" lever. Sometimes there's some other solution found by moving along a different dimension, and sometimes there's not.

Comment by query on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-03T22:39:31.399Z · LW · GW

I feel similarly; since Facebook comments are a matter of public record, disputes and complaints on them are fully public and can have high social costs if unaddressed. I would not be worried about it in a small group chat among close friends.

Comment by query on Does Thinking Hard Hurt Your Brain? · 2018-04-30T05:26:31.228Z · LW · GW

I perceive several different ways something like this happens to me:

1. If I do something that strains my working memory, I'll have an experience of having a "cache miss". I'll reach for something, and it won't be there; I'll then attempt to pull it into memory again, but usually this is while trying to "juggle too many balls", and something else will often slip out. This feels like it requires effort/energy to keep going, and I have a desire to stop and relax and let my brain "fuzz over". Eventually I'll get a handle on an abstraction, verbal loop, or image that lets me hold it all at once.

2. If I am attempting to force something creative, I might feel like I'm paying close attention to "where the creative thing should pop up". This is often accompanied with frustration and anxiety, and I'll feel like my mind is otherwise more blank than normal as I keep an anxious eye out for the creative idea that should pop up. This is a "nothing is getting past the filter" problem; too much prune and not enough babble for the assigned task. (Not to say that always means you should babble more; maybe you shouldn't try this task, or shouldn't do it in the social context that's causing you to rightfully prune.)

3. Things can just feel generally aversive or boring. I can push through this by rehearsing convincing evidence that I should do the thing -- this can temporarily lighten the aversion.

All 3 of these can eventually lead to head pain/fogginess for me.

I think this "difficulty thinking" feeling is a mix of cognitive ability, subject domain, emotional orientation, and probably other stuff. Mechanically having less short-term memory makes #1 more salient whatever you're doing. Some people probably have more mechanical"spark" or creative intelligence in certain ways, affecting #2. Having less domain expertise makes #1 and maybe #2 more salient since you have less abstractions and raw material to work with. Lottery of interests, social climate, and any pre-made aversions like having a bad teacher for a subject will factor into #3. Sleep deprivation worsens #1 and #2, but improves #3 for me (since other distractions are less salient).

I think this phenomenon is INSANELY IMPORTANT; when you see people who are 10x or 100x more productive in an area, I think it's almost certainly because they've gotten past all of the necessary thresholds to not have any fog or mechanical impediment to thinking in that area. There is a large genetic component here, but to focus on things that might be changeable to improve these areas:

  • Using paper or software when it can be helpful.
  • Working in areas you find natively interesting or fun. Alternatively, framing an area so that it feels more natively interesting or fun (although that seems really really hard). Finding subareas that are more natively fun to start with and expand from; for instance, when learning about programming, trying out some different languages to see which you enjoy most. It'll be easier to learn necessary things about one you dislike after you've learned a lot in the framework of one you like.
  • Getting into a social context that gives you consistent recognition for doing your work. This can be a chicken and egg problem in competitive areas.
  • Eliminating unrelated stressors, setting up a life that makes you happier and more fulfilled; I had worse brain fog about math when in bad relationships.
  • Eating different food. There are too many potential dietary interventions to list (many contradictory); I had a huge improvement from avoiding anything remotely in the "junk food" category and trying to eat things in the "whole food" category.
  • Exercise.
  • Stimulant drugs for some people; if you have undiagnosed ADHD, try to get diagnosed and medicated.

I really wish I had spent more time in the past working on these meta problems, instead of beating my head against a wall of brain fog.

Comment by query on Honest Friends Don't Tell Comforting Lies · 2018-04-20T22:06:38.883Z · LW · GW

A note on this, which I definitely don't mean to apply to the specific situations you discuss (since I don't know enough about them):

If you give people stronger incentives to lie to you, more people will lie to you. If you give people strong enough incentives, even people who value truth highly will start lying to you. Sometimes they will do this by lying to themselves first, because that's what is necessary for them to successfully navigate the incentive gradient. This can be changed by their self-awareness and force of will, but some who make that change will find themselves in the unfortunate position of being worse off for it. I think a lot of people view the necessity of giving such lies as the fault of the person creating the bad incentive gradient; even if they value truth internally, they might lie externally and feel justified in doing so, because they view it as being forced upon them.

An example is a married couple, living together and nominally dedicated to each other for life, when one partner asks the other "Do I look fat in this?". If there is significant punishment for saying Yes, and not much ability to escape such punishment by breaking up or spending time apart, then it takes an exceedingly strong will to still say "Yes". And a person with a strong will who does so then suffers for it, perhaps continually for many years.

If you value truth in your relationships, you should not only focus on giving and receiving the truth in one-off situations; you should set up the incentive structures in your life, with the relationships you pick and how you respond to people, to optimally give and receive the truth. If you are constantly punishing people for telling you the truth (even if you don't feel like you're punishing them, even if your reactions feel like the only possible ones in the moment), then you should not be surprised when most people are not willing to tell you the truth. You should recognize that, if you're punishing people for telling you the truth (for instance, by giving lots of very uncomfortable outward displays of high stress), then there is an incentive for people who highly value speaking truth to stay away from you as much as possible.

Comment by query on Is Rhetoric Worth Learning? · 2018-04-09T23:13:01.792Z · LW · GW

I think you're looking for Thurston's "On proof and progress in mathematics": https://arxiv.org/abs/math/9404236

Comment by query on Hufflepuff Cynicism on Hypocrisy · 2018-03-31T06:25:46.775Z · LW · GW

I think I may agree with the status version of the anti-hypocrisy flinch. It's the epistemic version I was really wanting to argue against.

Ok yeah, I think my concern was mostly with the status version -- or rather that there's a general sensor that might combine those things, and the parts of that related to status and social management are really important, so you shouldn't just turn the sensor off and run things manually.

... That doesn't seem like treating it as being about epistemics to me. Why is it epistemically relevant? I think it's more like a naive mix of epistemics and status. Status norms in the back of your head might make the hypocrisy salient and feel relevant. Epistemic discourse norms then naively suggest that you can resolve the contradiction by discussing it.

I was definitely unclear; my perception was that the speaker is claiming "person X has negative attribute Y (therefore I am more deserving of status than them)" and that, given a certain social frame, who is deserving of more status is an epistemic question. Whereas actually, the person isn't oriented toward really discussing who is more deserving of status within the frame, but rather is making a move to increase their status at the expense of the other person's.

I think my sense that "who is deserving of more status within a frame" is an epistemic question might be assigning more structure to status than is actually there for most people.

Comment by query on Hufflepuff Cynicism on Hypocrisy · 2018-03-30T22:43:32.300Z · LW · GW

I will see if I can catch a fresh one in the wild and share it. I recognize your last paragraph as something I've experienced before, though, and I endorse the attempt to not let that grow into righteous indignation and annoyance without justification -- with that as the archetype, I think that's indeed a thing to try to improve.

Most examples that come to mind for me have to do with the person projecting identity, knowledge, or an aura of competence that I don't think is accurate -- for instance, holding someone else to a social standard that they don't meet, or saying "I think person X has negative attribute Y" when the speaker has also recently displayed Y in my eyes. I think the anti-hypocrisy instinct I have is accurate in most of those cases: the conversation is not really about epistemics, it's about social status and alliances, and if I try to treat it as about epistemics (by, for instance, naively pointing out the ways the other person has displayed Y) I may lose utility for no good reason.

Comment by query on Hufflepuff Cynicism on Hypocrisy · 2018-03-30T18:30:33.695Z · LW · GW

As you say, there are certainly negative things that hypocrisy can be a signal of, but you recommend that we should just consider those things independently. I think trying to do this sounds really really hard. If we were perfect reasoners this wouldn't be a problem; the anti-hypocrisy norm should indeed just be the sum of those hidden signals. However, we're not; if you practice shutting down your automatic anti-hypocrisy norm, and replace it with a self-constructed non-automatic consideration of alternatives, then I think you'll do worse sometimes.

This has sort of a "valley of bad rationality" feel to me; I imagine trying to have legible, coherent thoughts about alternative considerations while ignoring my gut anti-hypocrisy instinct, and that reliably failing me in social situations where I should've just gone with my instinct.

I notice the argument I'm making applies generally to all "override social instinct" suggestions, and I think that you should sometimes try to override your social instincts -- but I do think that there are huge valleys of bad rationality near this, so I'd take extreme care about it. My guess is that you should override them much less than you do -- or I have a different sense of what "overriding" is.

Comment by query on The fundamental complementarity of consciousness and work · 2018-03-28T16:34:05.665Z · LW · GW

One hypothesis is that consciousness evolved for the purpose of deception -- Kevin Simler and Robin Hanson's "The Elephant in the Brain" is a decent read on this, although it does not address the Hard Problem of Consciousness.

If that's the case, we might circumvent its usefulness by having the right goals, or strong enough detection and norm-punishing behaviors. If we build factories that are closely monitored where faulty machines are destroyed or repaired, and our goal is output instead of survival of individual machines, then the machines being deceptive will not help with that goal.

If somehow the easy and hard versions of consciousness separate (i.e., things which don't functionally look like the conscious part of human brains end up "having experience" or "having moral weight"), then this might not solve the problem even under the deception hypothesis.

Comment by query on On Cognitive Distortions · 2018-03-25T23:19:08.418Z · LW · GW

Some reader might be thinking, "This is all nice and dandy, Quaerendo, but I cannot relate to the examples above... my cognition isn't distorted to that extent." Well, let me refer you to UTexas CMHC:

Maybe you are being realistic. Just for the sake of argument, what if you're only 90% realistic and 10% unrealistic? That means you're worrying 10% "more" than you really have to.

Not intending to be overly negative, but this is not a good argument for anything, and it also doesn't answer the hypothetical reader's objection of not relating to the examples. It sounds like "You're not perfect along this dimension, so you should devote energy to it!" -- which definitely doesn't follow.

I appreciate the list of distortions; such lists are nice raw material.

Comment by query on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-19T00:55:19.219Z · LW · GW

For most questions you can't really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision.

I don't think this is true; intuition + explicit reasoning may earn more of a certain kind of inside-view trust (if you model intuition as not having gears that can be trusted), but intuition alone can definitely develop more outside-view/reputational trust. Sometimes explicitly reasoning about the thing makes you clearly worse at it, and you can account for this over time.

Finally, it is the explicit reasoning part which allows you to offset the biases that you know your reasoning to have, at least until you trained your intuition to offset these biases automatically (assuming this is possible at all).

I also don't think this is as clear cut as you're making it sound; explicit reasoning is also subject to biases, and intuitions can be the things which offset biases. As a quick and dirty example, even if your explicit reasoning takes the form of mathematical proofs which are verifiable, you can have biases about 1. which ontologies you use as your models to write proofs about, 2. which things you focus on proving, and 3. which proofs you decide to give. You can also have intuitions which push to correct some of these biases. It is not the case that intuition -> biased, explicit reasoning -> unbiased.

Explicit reflection is indeed a powerful tool, but I think there's a tendency to confuse legibility with ability; someone can have the capacity to do something (like use an intuition to correct a bias) that is illegible to others or even to themselves. It is hard to transmit such abilities, and without good external proof of their existence or transmissibility we are right to be skeptical and withhold social credit in any given case, lest we be misled or cheated.

Comment by query on Caring less · 2018-03-14T00:15:21.724Z · LW · GW

If you choose to "care more" about something, and as a result other things get less of your energy, you are socially less liable for the outcome than if you intentionally choose to "care less" about a thing directly. For instance, "I've been really busy" is a common and somewhat socially acceptable excuse for not spending time with someone; "I chose to care less about you" is not. So even if your one and only goal was to spend less time on X, it may be more socially acceptable to do that by adding Y as cover.

Social excusability is often reused as internal excusability.

Comment by query on Misery Pits · 2018-03-11T20:07:18.226Z · LW · GW

Some reasons this is bad:

  1. It's false or not-even-wrong ("worthless parody of a human" is not something that I imagine epistemically applies to any human ever.)
  2. It's mixing epistemics and shoulds -- even if you categorized yourself as a misery pit, this does not come close to meaning you should throw yourself under a bus.
  3. Misery pits are a false framework that may be useful for modeling phenomena, but may not be a useful model for people who would tend to identify themselves as misery pits. For instance, if they were likely to think the quoted thought, they'd be committing a lot of bucket errors.

I also dislike this comment because I think it's too glib.

Comment by query on Circling · 2018-02-28T17:31:05.388Z · LW · GW

I think it's a memetic adaptation type thing. I would claim that attempting to open up the group usage of NVC will also (in a large enough group) open up the usage of "language-that-appears-NVCish-even-if-against-the-stated-philosophy". I think that this type of language provides cover for power plays (re: the broken link to the fish-selling scenario), and that using the language in a way that maintains boundaries requires the group to adapt and be skillful enough at detecting these violations. It is not enough to do so as an individual if your group does not lend support; it may be enough if you as an individual are highly skilled at defending yourself in a way that does not lose face (and practicing NVC might raise that skill level), but it's harder than in the alternative scenario.

I'm definitely not trying to object to NVC in general, but I'm worried about it as a large social group style. I think the failures of it as a large group style would mostly appear as relatively silent status transfers to the less virtuous.

Also, these arguments are not super specific to NVC and Circling, so they should probably be abstracted. I think any large-scale group communication change has similar bad potential, and it's an object-level question whether that actually happens. With NVC, I've seen reminiscent dynamics in some churches, hence the worry. I think I would feel queasy and like I was being attacked if someone started using NVC language at me in a public setting in front of others; I definitely feel like I've been "fish-sold" before.

It's entirely possible that there exist large groups with a high enough skill level or different values so that this is not a problem at all, and my experience is just too limited.

Comment by query on Mythic Mode · 2018-02-27T20:10:23.761Z · LW · GW

This is incorrect and I think only sounds like an argument because of the language you're choosing; there's nothing incoherent about 1. preferring evolutionary pressures that look like Moloch to exist so that you end up existing rather than not existing, and 2. wanting to solve Moloch-like problems now that you exist.

Also, there's nothing incoherent about wanting to solve Moloch-like problems now that you exist regardless of Moloch-like things causing you to come into existence. Our values are not evolution's values, if that even makes sense.

Comment by query on Mythic Mode · 2018-02-24T21:52:49.279Z · LW · GW

I'm not an expert, but I think MD5 isn't the best for this purpose due to collision attacks. If it's a very small plain-English ASCII message, then collision attacks are probably not a worry (I think?), but it's probably better to use something like SHA-2 or SHA-3 anyway.
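
(For anyone who wants a concrete recipe, here's a minimal sketch using Python's hashlib -- my own illustration, not something from this thread. The message is a placeholder, and the salt guards against brute-forcing short, guessable messages.)

```python
import hashlib
import secrets

message = b"placeholder: the prediction you want to commit to"
salt = secrets.token_hex(16).encode()  # keep this private until you reveal the message

commitment = hashlib.sha256(salt + b"|" + message).hexdigest()
print(commitment)  # post this hash publicly now

# Later, reveal both `salt` and `message`; anyone can verify:
assert hashlib.sha256(salt + b"|" + message).hexdigest() == commitment
```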

Comment by query on Circling · 2018-02-24T20:21:09.987Z · LW · GW

Yeah, this definitely seems like a bug; permalinks to comments shouldn't require this. Unfortunately, I don't see any obvious way to report a bug.

Comment by query on Circling · 2018-02-19T21:40:29.504Z · LW · GW

Upfront note: I've enjoyed the circling I've done.

One reason to be cautious of circling: dropping group punishment norms for certain types of manipulation is extremely harmful.  From my experience of circling (which is limited to a CFAR workshop), it provides plausible cover for very powerful status grabs under the aegis of "(just) expressing feelings and experiences"; I think the strongest usual defense against this is actually group disapproval.  If someone is able to express such a status grab without receiving overt disapproval, they have essentially succeeded unless everyone in the group truly is superhuman at later correcting for this.  If mounting the obvious self-defense against the status grab is taken off the table, then you may just lose painfully unless you can out-do them.

Normalizing circling (or NVC) too much could lead to externalities, where this happens outside of an actual circling context. This could lead to people losing face who normally wouldn't, along with arms races that turn an X community into a circling-skill community.

If people are allowed to fish sell you (https://www.lesserwrong.com/posts/aFyWFwGWBsP5DZbHF/circling#E9dqjhm8Ca3HkFRMZ), and walking away loses you social status, and other people look on expectantly for your answer as you are fish sold instead of saying "Stop, they don't want to buy your fish", then depending on the type of fish and what escape routes to other social circles you have available, you may be in a hellishly difficult situation.

Note that I think this is bad regardless of your personal skill at resisting social pressure. The social incentive landscape changing leads to worse outcomes for everyone, even if you can individually get better outcomes for yourself by better learning to resist social pressure. That better outcome may be moving to a different community instead of being continually downgraded in status, which is a worse outcome than the community never having that bad incentive landscape to begin with.

Comment by query on Hufflepuff Cynicism on Crocker's Rule · 2018-02-15T06:33:37.151Z · LW · GW

"Complaining about your trade partners" at the level of making trade decisions is clearly absurd (a type error). "Complaining about your trade partners" at the level of calling them out, suggesting in an annoyed voice they behave differently, looking miffed, and otherwise attempting to impose costs on them (as object level actions inside of an ongoing trade/interaction which you both are agreeing to) is not. These are sometimes the mechanism via which things of value are traded or negotiations are made, and may be preferred by both parties to ceasing the interaction.

Comment by query on Two Coordination Styles · 2018-02-07T19:54:20.775Z · LW · GW

A potential explanation I think is implicit in Ziz's writing: the software for doing coordination within ourselves and externally is reused. External pressures can shape your software to be of a certain form; for instance, culture can write itself into people so that they find some ideas/patterns basically unthinkable.

So, one possibility is that fusion is indeed superior for self-coordination, but requires a software change that is difficult to make and can have significant costs to your ability to engage in treaties externally. Increased Mana allows you to offset some of the costs, but not all; some interactions are just pretty direct attempts to check that you've installed the appropriate mental malware.

Comment by query on Pareto improvements are rarer than they seem · 2018-01-28T22:33:11.966Z · LW · GW

Assuming the money transfer actually takes place, this sounds like a description of gains from trade; the "no Pareto improvement" phrasing is that when actually making the trade, you lose the option of making the trade -- which is worth at least as much as the trade itself if the offer never expires. One avenue to get actual Pareto improvements is then to create or extend opportunities for trade.

If the money transfer doesn't actually take place: I agree that Kaldor-Hicks improvements and Pareto improvements shouldn't be conflated. It takes social technology to turn one into the other.

Comment by query on Against Instrumental Convergence · 2018-01-28T19:58:21.179Z · LW · GW

I was definitely very confused when writing the part you quoted. I think the underlying thought was that the processes of writing humans and of writing AlphaZero are very non-random; i.e., even if there's a random number generated in some sense somewhere as part of the process, there are other things going on that are highly constraining the search space -- and those processes are making use of "instrumental convergence" (stored resources, intelligence, putting the hard drives in safe locations). Then I can understand your claim as "instrumental convergence may occur in guiding the search for/construction of an agent, but there's no reason to believe that agent will then do instrumentally convergent things." I think that's not true in general, but it would take more words to defend.

Comment by query on Against Instrumental Convergence · 2018-01-28T03:45:51.992Z · LW · GW

I think you have a good point, in that the VNM utility theorem is often overused/abused: I don't think it's clear how to frame a potentially self modifying agent in reality as a preference ordering on lotteries, and even if you could in theory do so it might require such a granular set of outcomes as to make the resulting utility function not super interesting. (I'd very much appreciate arguments for taking VNM more seriously in this context; I've been pretty frustrated about this.)

That said, I think instrumental convergence is indeed a problem for real-world searches; the things we're classifying as "instrumentally convergent goals" are just "things that are helpful for a large class of problems." It turns out there are ways to do better than random search in general, and some of these ways (the most general ones) make use of the things we're calling "instrumentally convergent goals": AlphaGo Zero was not a (uniformish) random search over Go programs, and humans were not a (uniformish) random search over creatures. So I don't think this particular line of thought should make you think potential AI is less of a problem.

Comment by query on An Apology is a Surrender · 2018-01-18T04:33:14.272Z · LW · GW

On equal and opposite advice: many more people want you to surrender to them than it is good for you to surrender to, and the world is full of people who will demand your apology (and make it seem socially mandatory) for things you do not or should not regret. Tread carefully with practicing surrender around people who will take advantage of it. Sometimes the apparent social need to apologize is due to a value/culture mismatch with your social group, and practicing minimal or non-internalized apologies is actually a good survival mechanic.

If you are high power, high status, low agreeableness, or inescapable to a certain set of people, and you often find yourself issuing non-apologies, then this may indicate blindspots (you're doing things you don't want to do without realizing) or that your exercise of power/disregard of social reality is hurting others.

If you are already highly agreeable, you currently have low mana (https://www.lesserwrong.com/posts/39aKPedxHYEfusDWo/mana/), or you feel like you can't escape a certain set of people who you need to apologize to, then internalizing apologies may be pushing in the wrong direction and overwriting your values with theirs.

Comment by query on Rationality: Abridged · 2018-01-06T21:23:02.168Z · LW · GW

Yeah; it's not open/shut. I guess I'd say in the current phrasing, the "but Aumann’s Agreement Theorem shows that if two people disagree, at least one is doing something wrong." is suggesting implications but not actually saying anything interesting -- at least one of them is doing something wrong by this standard whether or not they agree. I think adding some more context to make people less suspicious they're getting Eulered (http://slatestarcodex.com/2014/08/10/getting-eulered/) would be good.

I think this flaw is basically in the original article as well, though, so it's also a struggle between accurately representing the source and adding editorial correction.

Nitpicks aside, want to say again that this is really great; thank you!

Comment by query on Rationality: Abridged · 2018-01-06T19:24:19.813Z · LW · GW

This is completely awesome, thanks for doing this. This is something I can imagine actually sending to semi-interested friends.

Direct messaging seems to be wonky at the moment, so I'll put a suggested correction here: for 2.4, Aumann's Agreement Theorem does not show that if two people disagree, at least one of them is doing something wrong. From Wikipedia: "if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be equal." This could fail at multiple steps, off the top of my head:

  1. The humans might not be (mathematically pure) Bayesian rationalists (and in fact they're not).
  2. The humans might not have common priors (even if they satisfied 1).
  3. The humans might not have common knowledge of their posterior probabilities; a human saying words is a signal, not direct knowledge, so them telling you their posterior probabilities may not do the trick (and they might not know them).

You could say failing to satisfy 1-3 means that at least one of them is "doing something wrong", but I think it's a misleading stretch -- failing to be normatively matched up to an arbitrary unobtainable mathematical structure is not what we usually call wrong. It stuck out to me as something that would put off readers with a bullshit detector, so I think it'd be worth fixing.
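
(To make failure mode 2 concrete, here's a minimal sketch with made-up numbers -- my illustration, not part of the original comment: two agents apply Bayes' rule perfectly to the same evidence but start from different priors, and so end at different posteriors without either making a reasoning error.)

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    # Standard Bayes' rule for a binary hypothesis H given evidence E.
    joint_h = prior_h * p_e_given_h
    joint_not_h = (1 - prior_h) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Same evidence (3x as likely under H), same update rule, different priors.
for name, prior in [("agent A", 0.2), ("agent B", 0.6)]:
    print(name, round(posterior(prior, 0.9, 0.3), 3))  # A -> 0.429, B -> 0.818
```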

Comment by query on Mana · 2017-12-21T02:36:05.705Z · LW · GW

I found this extremely helpful; it motivated me to go read your entire blog history. I hope you write more; I think the "dark side" is a concept I had only the rough edges of, but one that I unknowingly desired to understand better (and had seen hints of in others' writing around the community). I feel like the similarly named "dark arts" may have been an occluding red herring.

The more you shine the light of legibility, required defensibility and justification, public scrutiny of beliefs, social reality that people's judgement might be flawed and they need to distrust themselves and have the virtue of changing their minds, the more those with low mana get their souls written into by social reality.

This is something I liked seeing written. This is a trade-off I don't often see recognized by people around me; without recognizing this, communal pursuits can destroy and subsume the weaker people within them. As one who has had their soul partly killed in the past by the legibility requirements of people who were alien to me, I value this lesson and anything that helps people develop protective immunity.

Comment by query on The art of grieving well · 2015-12-15T20:46:11.620Z · LW · GW

Beautifully written; thank you for sharing this.

Comment by query on Marketing Rationality · 2015-11-23T14:26:15.769Z · LW · GW

EDIT: On reflection, I want to tap out of this conversation. Thanks for the responses.

Comment by query on Marketing Rationality · 2015-11-21T05:43:26.833Z · LW · GW

Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?

Effectively no. I understand that you're aware of these risks and are able to list mitigating arguments, but the weight of those arguments does not resolve my worries. The things you've just said aren't different in gestalt from what I've read from you.

To be potentially more helpful, here's a few ways the arguments you just made fall flat for me:

I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point of watering down rationality.

Connectivity to the rationalist movement or "rationality" keyword isn't necessary to immunize people against the ideas. You're right that if you literally never use the word "bias" then it's unlikely my nightmare imaginary conversational partner will have a strong triggered response against the word "bias", but if they respond the same way to the phrase "thinking errors" or realize at some point that's the concept I'm talking about, it's the same pitfall. And in terms of catalyzing opposition, there is enough connectivity for motivated antagonists to make such connections and use every deviation from perfection as ammunition against even fully correct forms of good ideas.

For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we are all suffering from a position of curse of knowledge on this point. How can we expect to teach people who do not know what science-based means without teaching it to them in the first place? Do you remember when you were at a stage when you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? This is what I'm doing in that article above. Hope this helps address some of the concerns about arguing from authority.

I can't find any discussion in the linked article about why research is a key way of validating truth claims; did you link the correct article? I also don't know if I understand what you're trying to say; to reflect back, are you saying something like "People first need to be convinced that scientific studies are of value, before we can teach them why scientific studies are of value." ? I ... don't know about that, but I won't critique that position here since I may not be understanding.

(...) Hope this helps address the concerns about the writing style and the immunization of people to good ideas, since the readers of this content are specifically looking for this kind of writing style.

You seem to be saying that since the writing is of the form needed to get on Lifehack, and since in fact people are reading it on Lifehack, that they will then not suffer from any memetic immunization via the ideas. First, not all immunization is via negative reactions; many people think science is great, but have no idea how to do science. Such people can be in a sense immunized from learning to understand the process; their curiosity is already sated, and their decisions made. Second, as someone mentioned somewhere else on this comment stream, it's not obvious that the Lifehack readers who end up looking at your article will end up liking or agreeing with your article.

You're clearly getting some engagement, which is suggestive of positive responses, but what if the distribution of response is bimodal, with some readers liking it a little bit and some readers absolutely loathing it to the point of sharing their disgust with friends? Google searches reveal negative reactions to your materials as well. The net impact is not obviously positive.

Comment by query on Marketing Rationality · 2015-11-19T20:14:38.890Z · LW · GW

I have not a clue whether this sort of marketing is a good idea. Let me be clear what I mean: I think there's maybe a 30-40% chance that Gleb is having a net positive impact through these outreach efforts. I also think there's maybe a 10-20% chance that he's having a horrific long-term negative impact through these outreach efforts. Thus the whole thing makes me uncomfortable.

So here are some of the concerns I see; I've gone to some effort to be fair to Gleb, and not assume anything about his thoughts or motivations:

  • By presenting these ideas in weakened forms (either by giving short or invalid argumentation, or by putting them in venues or contexts with negative associations), he may be memetically immunizing people against the stronger forms of the ideas.
  • By teaching people using arguments from authority, he may be worsening the primary "sanity waterline" issues rather than improving them. The articles, materials, and comments I've seen make heavy use of language like "science-based", "research-based" and "expert". The people reading these articles in general have little or no skill at evaluating such claims, so that they effectively become arguments from authority. By rhetorically convincing them to adopt the techniques or thoughts, he's spreading quite possibly helpful ideas, but reinforcing bad habits around accepting ideas.
  • Gleb's writing style strikes me as feeling very inauthentic. Let me be clear I don't mean to accuse him of anything negative, but I intuitively feel a very negative reaction to his writing. It triggers emotional signals in me of attempted deception and rhetorical tricks (whether or not this is his intent!). His writing risks associating "rationality" with such signals (should other people share my reactions) and again causing immunization, or even catalyzing opposition.

An illustration of the nightmare scenario from such an outreach effort would be that, 3 years from now when I attempt to talk to someone about biases, they respond by saying "Oh god don't give me that '6 weird tips' bullshit about 'rational thinking', and spare me your godawful rhetoric, gtfo."

Like I said at the start, I don't know which way it swings, but those are my thoughts and concerns. I imagine they're not new concerns to Gleb. I still have these concerns after reading all of the mitigating argumentation he has offered so far, and I'm not sure of a good way to collect evidence about this besides running absurdly large long-term "consumer" studies.

I do imagine he plans to continue his efforts, and thus we'll find out eventually how this turns out.

Comment by query on Reflexive self-processing is literally infinitely simpler than a many world interpretation · 2015-11-13T15:28:45.136Z · LW · GW

I disagree with your conclusion. Specifically, I disagree that

This is, literally, infinitely more parsimonious than the many worlds theory

Your reasoning isn't tight enough to have confidence answering questions like these. Specifically,

  • What do you mean by "simpler"?
  • Specifically how does physics "take into account the entire state of the universe"?

In order to actually say anything like the second that's consistent with observations, I expect your physical laws become much less simple (re: Bell's theorem implying non-locality, maybe, see Scott Aaronson's blog.)

A basic error you're making is equating simplicity of physical laws with a small ontology. For instance, Google just told me there are ~10^80 atoms in the observable universe (plus or minus a few orders of magnitude), but this is no blow against the atomic theory of matter. You can formalize this interplay via "minimum message length" for a finite, fully described system; check Wikipedia for details.

Even though MWI implies a large ontology, it's just a certain naive interpretation of our current local description of quantum mechanics. It's hard to see how there could be a global description that is simpler, though I'd be interested to see one. (Here local/global mean "dependent on things nearby" vs "dependent on things far away", which of course is assuming that ontology.)

With all kindness, the strength of your conclusion is far out of proportion to the argument you've made. The linked paper looks like nonsense to me. I would recommend studying some basic textbook math and physics if you're truly interested in this subject, although be prepared for a long and humbling journey.

Comment by query on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-12T18:19:49.846Z · LW · GW

Your question of "after finishing the supertask, what is the probability that 0 stays in place" doesn't yet parse as a question in ZFC, because you haven't specified what is meant by "after finishing the supertask". You need to formalize this notion before we can say anything about it.

If you're saying that there is no formalization you know of that makes sense in ZFC, then that's fine, but that's not necessarily a strike against ZFC unless you're offering a competitive alternative. The problem could just be that it's an ill-defined concept to begin with, or you just haven't found a good formalization. Just because your brain says "that sounds like it makes sense" doesn't mean it actually makes sense.

To show that ZFC is inconsistent, you would need to display a formal contradiction deduced from the ZFC axioms. "I can't write down a formalization of this natural sounding concept" isn't a formal contradiction; the failure is at the modeling step, not inside the logical calculus.

Comment by query on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-10T20:50:55.294Z · LW · GW

The model is that persistent reflexes interact with the environment to give black swans; singular events with extremely high legal consequence. To effectively avoid all of them preemptively requires training the stable reflexes, but it could be that "editing out" only a few 10 minute periods retroactively would still be enough (those few periods when reflexes and environment interact extremely negatively.) So I think the "very regular basis" claim isn't substantiated.

That said, we can't actually retroactively edit anyway.

Comment by query on Thinking like a Scientist · 2015-07-20T19:08:40.624Z · LW · GW

Being too vague to be wrong is bad. Especially when you want to speak in favor of science.

I agree, it's good to pump against entropy with things that could be "Go Science!" cheers. I think the author's topic is not too vague to discuss, but his argument isn't strong or specific enough that you should leap to action based solely on it. I think it's a fine thing to post to Discussion though; maybe this indicates we have different ideal standards for Discussion posts?

There's no reason to say "well maybe the author meant to say X" when he didn't say X.

Sure there is! Principle of charity, interpreting what they said in different language to motivate further discussion, rephrasing for your own understanding (and opening yourself to being corrected). Sometimes someone waves their hands in a direction, and you say "Aha, you mean..."

Above the author says "I think query worded it better", which is the sort of thing I was aiming to accomplish.

Comment by query on Thinking like a Scientist · 2015-07-19T17:50:32.314Z · LW · GW

Actually, this illustrates scientific thinking; the doctor forms a hypothesis based on observation and then experimentally tests that hypothesis.

Most interactions in the world are of the form "I have an idea of what will happen, so I do X, and later I get some evidence about how correct I was". So, taking that as a binary categorization of scientific thinking is not so interesting, though I endorse promoting reflection on the fact that this is what is happening.

I think the author intends to point out some of the degrees of scientificism along which things vary: how formal is the hypothesis, how formal is the evidence gathering, are analytical techniques being applied, etc. Normal interactions with doctors are low on scientificism in this sense, though they heavily utilize the output of previous scientificism to generate a judgement.

Comment by query on List of Fully General Counterarguments · 2015-07-18T22:33:12.743Z · LW · GW

I think it would be good to separate the analysis into FGCAs which are always fallacious versus those that are only warning signs/rude. For instance, the fallacy of grey is indeed a fallacy, so using it as a counter-argument is a wrong move regardless of its generality.

However, it may in fact be that your opponent is a very clever arguer or that the evidence they present you has been highly filtered. Conversationally, using these as a counter-argument is considered rude (and rightly so), and the temptation to use them is often a good internal warning sign; however you don't want to drop consideration of them from your mental calculus. For instance, perhaps you should be motivated after the conversation to investigate alternative evidence if you're suspicious that the evidence presented to you was highly filtered.

Comment by query on FAI Research Constraints and AGI Side Effects · 2015-06-06T22:56:32.625Z · LW · GW

Modulo nitpicking, agreed on both points.

Comment by query on FAI Research Constraints and AGI Side Effects · 2015-06-05T18:12:14.606Z · LW · GW

I very much favor bottom-up modelling based on real evidence rather than mathematical models that come out looking neat by imposing our preconceptions on the problem a priori.

(edit: I think I might understand after all; it sounds like you're claiming AIXI-like things are unlikely to be useful since they're based mostly on preconceptions that are likely false?)

I don't think I understand what you mean here. Everyone favors modeling based on real evidence as opposed to fake evidence, and everyone favors avoiding the import of false preconceptions. It sounds like you prefer more constructive approaches?

Right. Which is precisely why I don't like when we attempt to do FAI research under the assumption of AIXI-like-ness.

I agree if you're saying that we shouldn't assume AIXI-like-ness to define the field. I disagree if you're saying it's a waste for people to explore that idea space though: it seems ripe to me.

Comment by query on FAI Research Constraints and AGI Side Effects · 2015-06-05T17:15:36.878Z · LW · GW

Formally, you don't. Informally, you might try approximate definitions and see how they fail to capture elements of reality, or you might try and find analogies to other situations that have been modeled well and try to capture similar structure. Mathematicians et al usually don't start new fields of inquiry from a set of definitions, they start from an intuition grounded in reality and previously discovered mathematics and iterate until the field takes shape. Although I'm not a physicist, the possibly incorrect story I've heard is that Feynman path integrals are a great example of this.

Comment by query on FAI Research Constraints and AGI Side Effects · 2015-06-05T17:04:36.531Z · LW · GW

I offered the transform as an example of how things can mathematically factor, so, like I said, that may not be what the solution looks like. My feeling is that it's too soon to throw out anything that might look like that pattern, though.

Comment by query on FAI Research Constraints and AGI Side Effects · 2015-06-05T16:48:31.290Z · LW · GW

Oh yes, it sounds like I did misunderstand you. I thought you were saying you didn't understand how such a thing could happen in principle, not that you were skeptical of the currently popular models. The classes U and F above, should something like that ever come to pass, need not be AIXI-like (nor need they involve utility functions).

I think I'm hearing that you're very skeptical about the validity of current toy mathematical models. I think it's common for people to motte and bailey between the mathematics and the phenomena they're hoping to model, and it's an easy mistake for most people to make. In a good discussion, you should separate out the "math wank" (which I like to just call math) from the transfer of that wank to reality that you hope to model.

Sometimes toy models are helpful and sometimes they are distractions that lead nowhere or embody a mistaken preconception. I see you as claiming these models are distractions, not that no model is possible. Accurate?

Comment by query on FAI Research Constraints and AGI Side Effects · 2015-06-05T16:16:48.324Z · LW · GW

A mathematical model of what this might look like: you might have a candidate class of formal models U that you think of as "all GAI" such that you know of no "reasonably computable"(which you might hope to define) member of the class (corresponding to an implementable GAI). Maybe you can find a subclass F in U that you think models Friendly AI. You can reason about these classes without knowing any examples of reasonably computable members of either. Perhaps you could even give an algorithm for taking an arbitrary example in U and transforming it via reasonable computation into an example of F. Then, once you actually construct an arbitrary GAI, you already know how to transform it into an FAI.

So the problem may be factorable such that you can solve a later part before solving the first part.

So, I'd agree it might be hard to understand F without understanding U as a class of objects. And let's leave aside how you would find and become certain of such definitions. If you could, though, you might hope that you can define them and work with them without ever constructing an example. Patterns not far off from this occur in mathematical practice; for example, families of graphs with certain properties known to exist via probabilistic methods, but with no constructed examples.

Does that help, or did I misunderstand somewhere?

(edit: I don't claim an eventual solution would fit the above description, this is just I hope a sufficient example that such things are mathematically possible)

Comment by query on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-05-29T17:38:00.608Z · LW · GW

I've enlarged my social circles, or the set of circles I can comfortably move in, and didn't end up with that model. I think I originally felt that way a lot, and I worked on the "feeling like doing a dramatic facepalm" by reflecting on it in light of my values. When dramatic facepalms aren't going to accomplish anything good, I examine why I have that feeling, and usually I find out it's because my political brain is engaged even when this isn't a situation where I'll get good outcomes from political-style conversation. You can potentially change your feelings so that other responses that you value become natural.

A warning though, it took me a long time to learn how to do this in a manner that didn't make me feel conflicted. There's always the danger of "I was nice to the person even after they said X" priming you in a bad way. You could also do even worse than before in terms of your values, simply because you're inexperienced with the non-facepalm response and do it badly. I think it was worth it for me, but depending on your situation it might be dangerous to mess with.

Comment by query on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-05-27T20:12:34.381Z · LW · GW

Agreed on the 2nd paragraph.

Optimally, you'd have an understanding of the options available, how you work internally, and how other people respond, so you could choose the appropriate level of anger, etc. Thus it's better to explore suggestions and see how they work than to naively apply them in all situations.

Comment by query on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-05-27T19:57:37.489Z · LW · GW

Seconded! Another phrase (whose delivery might be hard to convey in text) is "Look, I dunno, but anyways..."

Maybe the big idea is to come across as not expressing much interest in the claim, instead of opposing the claim? I think most people are happy to move on with the conversation when they get a "move on" signal, and we exchange these signals all the time.

I also like that this is an honest way to think about it: I really am not interested in what I expect will happen with that conversation (even if I am interested in the details of countering your claim).

Comment by query on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-05-27T19:48:17.390Z · LW · GW

I don't know what you mean, but I think I see a lot of people "being polite" but failing at one of these when it would be really useful for them.

For example, you can be polite while internally becoming more suspicious and angry at the other person (#3 and #4) which starts coming out in body language and the direction of conversation. Eventually you politely end the conversation in a bad mood and thinking the other person is a jerk, when you could've accomplished a lot more with a different internal response.

Comment by query on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-05-27T19:36:10.223Z · LW · GW

I think your critique of this being only for disagreements that don't matter is too strong, and your examples miss the context of the article.

This is not a suggested resolution procedure for all humans in all states of disagreement; this is a set of techniques for when you already have and want to maintain some level of cooperative relationship with a person, but find yourself in a disagreement over something. Suggestion 5 above is specifically about disengaging from disagreements that "don't matter", and the rest are potentially useful even if it's a disagreement over something important.