Posts

Chaos and Consequentialism 2017-04-24T20:43:21.863Z
Thoughts on Automoderation 2017-04-12T21:29:51.000Z
The Monkey and the Machine 2017-02-23T21:38:48.033Z
Value Journaling 2017-01-25T06:10:52.180Z
Quick modeling: resolving disagreements. 2017-01-23T18:18:02.111Z
Fear or fear? (A Meteuphoric post on distinguishing feelings from considered positions.) 2017-01-17T03:26:41.359Z
Guilt vs Shame, Pride vs Joy? 2016-12-19T20:00:18.376Z
Stress Response, Growth Mindset, and Nonviolent Communication 2016-12-15T00:37:05.899Z
The Internal Lawyer 2016-12-06T17:00:45.751Z
Some thoughts on double crux. 2016-12-04T18:03:14.618Z
"Decisions" as thoughts which lead to actions. 2016-12-01T00:47:52.421Z
Things "Meta" Can Mean 2016-11-30T09:52:03.180Z

Comments

Comment by ProofOfLogic on Mathematical System For Calibration · 2017-06-13T17:12:23.775Z · LW · GW

Not exactly.

(1) What is the family of calibration curves you're updating on? These are functions from stated probabilities to 'true' probabilities, so the class of possible functions is quite large. Do we want a parametric family? A non-parametric family? We would like something which is mathematically convenient, resembles typical calibration curves as closely as possible, and can still fit anomalous curves when those come up.

(2) What is the prior over this family of curves? It may not matter too much if we plan on using a lot of data, but if we want to estimate people's calibration quickly, it would be nice to have a decent prior. This suggests a hierarchical Bayesian approach (where we estimate a good prior distribution via a higher-order prior).

(3) As mentioned by cousin_it, we would actually want to estimate different calibration curves for different topics. This suggests adding at least one more level to the hierarchical Bayesian model, so that we can simultaneously estimate the general distribution of calibration curves in the population, an individual's overall calibration curve, and an individual's per-topic calibration curves. At this point, one might prefer to shut one's eyes and ignore the complexity of the problem.
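To make (1)-(3) slightly more concrete, here is a minimal sketch assuming a deliberately simple one-parameter family (log-odds scaling) and a flat grid prior; the family, the prior, and all numbers are illustrative choices, not a claim about the right model:

```python
import numpy as np

# A toy version of (1) and (2): a one-parameter family of calibration curves
# (log-odds scaling by k; k = 1 is perfect calibration, k < 1 overconfidence)
# and a flat grid prior over k, updated on (stated probability, outcome) pairs.
# The family, prior, and numbers below are illustrative choices only.

def calibration_curve(p_stated, k):
    """Map a stated probability to an implied 'true' probability."""
    odds = (p_stated / (1.0 - p_stated)) ** k
    return odds / (1.0 + odds)

def posterior_over_k(stated, outcomes, k_grid, prior):
    """Grid-based Bayesian update over the calibration parameter k."""
    log_post = np.log(prior)
    for p, hit in zip(stated, outcomes):
        q = calibration_curve(p, k_grid)           # implied probability for each k
        log_post += np.log(q if hit else 1.0 - q)  # Bernoulli likelihood of outcome
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

k_grid = np.linspace(0.1, 3.0, 100)
prior = np.full_like(k_grid, 1.0 / len(k_grid))    # flat prior, for the toy example

# Someone who says "90%" but is right only about 75% of the time:
stated = np.full(20, 0.9)
outcomes = np.random.default_rng(0).random(20) < 0.75
post = posterior_over_k(stated, outcomes, k_grid, prior)
print("posterior mean k:", (k_grid * post).sum())

# (3) would add levels: a population-level distribution over k, a per-person k,
# and per-topic offsets, all estimated jointly (e.g. with an MCMC library).
```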

Comment by ProofOfLogic on Chaos and Consequentialism · 2017-05-03T20:54:25.727Z · LW · GW

First I don't think conflating blame and "bad person" is necessarily helpful.

OK, yeah, your view of blame as social incentive (skin-in-the-game) seems superior.

The most common case is what is traditionally called "being tempted by sin", e.g., someone procrastinating and not doing what he was supposed to do.

I agree that imposing social costs can be a useful way of reducing this, but I think we would probably have disagreements about how often and in what cases. I think a lot of cases where people blame other people for their failings are more harmful than helpful, and push people away from each other in the long term.

And don't get me started on situations where most of the participants are only there for a paycheck, a.k.a., the real world.

It sounds like we both agree that this is a nightmare scenario in terms of creating effective teams and good environments for people, albeit common.

However, even when the primary motive is money, there's some social glue holding things together. I recommend the book The Moral Economy, which discusses how capitalist societies rely to a large extent on the goodwill of the populace. As mutual trust decreases, transaction costs increase. The most direct effect is the cost of security; shops in different neighborhoods require different amounts of it. This is often cited as the reason the diamond industry is dominated by Hasidic Jews; they save on security costs due to the high level of trust they can have as part of a community. Some of this trust comes from imposing social costs, but some of it also comes from common goals of the community members.

The Moral Economy argues that the lesson of the impossibility theorems of mechanism design is that it would not be possible to run a society on properly aligned incentives alone. There is no way to impose the right costs to get a society of selfish agents to behave. Instead, a mechanism designer in the real world has to recognize, utilize, and foster people's altruistic and otherwise pro-social tendencies. The book also cites empirical evidence that designing incentives as if people were selfish tends to make people act more selfishly in many cases.

So, I will try to watch out for blame being a useful social mechanism in the way you describe. I'm probably underestimating the number of cases where imposed social costs are useful precisely because they don't end up being applied (IE, implicit threats). At present I still think it would be better if people were both less quick to employ blame, and less concerned about other people blaming them (making more room for self-motivation).

Comment by ProofOfLogic on Chaos and Consequentialism · 2017-04-26T21:21:17.047Z · LW · GW

Well, yes, and I think that's mostly unfortunate. The model of interaction in which people seek to blame each other seems worse -- that is, less effective for meeting the needs and achieving the goals of those involved -- than the one where constructive criticism is employed.

The blame model seems something like this. There are strong social norms which reliably distinguish good actions from bad actions, in a way which almost everyone involved can agree on. These norms are assumed to be understood. When someone violates these norms, the appropriate response is some form of social punishment, ranging from mild reprimand to deciding that they're a bad person and ostracizing them.

The constructive criticism model, on the other hand, assumes that there are some common group goals and norms, but different individuals may have different individual goals and preferences, and these might not be fully known, and the group norms might not be fully understood by everyone. When someone does something you don't like, it could be because they don't know about your preferences, they don't know about a group norm, they don't understand the situation as well as you and so fail to see a consequence of an action which you see, etc. Since we assume that people do have somewhat common goals, we don't have to enforce norm violations with punishment -- by default, we assume people already care about each other enough that they would have respected each other's wishes in an ideal situation. Perhaps they made a mistake because they lacked a skill (which is where the constructive feedback comes in), or didn't understand the situation, your preferences, or the existing norms. Or, perhaps, they have an overriding reason for doing what they did. Social punishment (even the mild social punishment associated with most cases of blame) often doesn't fix anything and may make things worse by escalating the conflict or creating hard feelings.

If you discuss the problem and find that they didn't misunderstand or lack a necessary skill or have an overriding reason that you can agree with, and aren't interested in doing differently in the future, then perhaps you don't have enough commonality in your goals to interact. This is still different from the blame model, where sufficiently bad violations mark someone as a "bad person" to be avoided. You may still wish them the best; you simply don't expect fruitful interactions with them.

That being said, there are cases where you might really judge someone to be a "bad person" in the more common sense, or where you really do want to impose social costs on some actions. Sociopaths exist, and may need to be truly avoided and outed as a "bad person" (although pro-social psychopaths also exist; being a sociopath doesn't automatically make you a bad person). However, it seems to me as if most people have overactive bad-person detectors in this regard, which harm other interactions. I don't think this is because easily-tripped bad-person detectors are on the optimal setting given the high cost of failing to detect sociopaths. I think it's because the concept of blame conflates the very different concepts involved in cheater-detection/sociopath-detection and situations where less adversarial responses are more appropriate.

(Response also posted back to the blog.)

Comment by ProofOfLogic on Chaos and Consequentialism · 2017-04-24T23:26:28.818Z · LW · GW

Edited to "You can’t really impose this kind of responsibility on someone else. It’s compatible with constructive criticism, but not with blame." to try to make the point clearer.

Comment by ProofOfLogic on An inquiry into memory of humans · 2017-04-19T17:27:50.359Z · LW · GW

Noticing the things one could be noticing. Reconstructing the field of mnemonics from personal experience. Applied phenomenology. Working toward an understanding of what one's brain is actually doing.

(Commenting in noun phrases. Conveying associations without making assertions.)

Comment by ProofOfLogic on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-14T22:24:47.362Z · LW · GW

I really like the idea, but agree that it is sadly not the right thing here. It would be a fun addition to an Arbital-like site.

Comment by ProofOfLogic on Thoughts on Automoderation · 2017-04-14T03:59:32.860Z · LW · GW

These signals could be used outside of automoderation. I didn't focus on the moderation aspect. Automoderation itself really does seem like a moderation system, though. It is an alternate way to address the concerns which would normally be addressed by a moderator.

Comment by ProofOfLogic on Thoughts on Automoderation · 2017-04-14T03:56:17.653Z · LW · GW

True, I didn't think about the added burden. This is especially important for a group with frequent newcomers.

I try hard to communicate these distinctions, and distinctions about amount and type of evidence, in conversation. However, it does seem like something more concrete could help propagate norms of making these sorts of distinctions.

And, you make a good point about these distinctions not always indicating the evidence difference that I claimed. I'll edit to add a note about that.

Comment by ProofOfLogic on Plan-Bot: A Simple Planning Tool · 2017-04-12T19:56:09.746Z · LW · GW

Very cool! I wonder if something like this could be added to a standard productivity/todo tool (thinking of Complice here).

I think the step "how can you prevent this from happening" should perhaps be broadened to something like "or how can you work around this" -- perhaps you cannot prevent the problem directly, but can come up with alternate routes to success.

I found it surprising that the script ended after a "yes" to "Are you surprised?". Mere surprise seems like too low a bar. I expected the next question to be "Are you so surprised that it doesn't seem worth planning for this eventuality?".

Also, I accidentally typed "done." rather than "done", and it was entered as a step in the plan. I think it would be good if variations like that were treated as the same. And, it would be nice to be able to go back one step rather than resetting entirely.

Comment by ProofOfLogic on Is Evidential Decision Theory presumptuous? · 2017-02-24T17:07:34.133Z · LW · GW

It's also considered the standard in the literature.

Comment by ProofOfLogic on Is Evidential Decision Theory presumptuous? · 2017-02-07T03:57:14.553Z · LW · GW

Somewhat. If it is known that the AI actually does not go into infinite loops, then this isn't a problem -- but this creates an interesting question as to how the AI is reasoning about the human's behavior in a way that doesn't lead to an infinite loop. One sort of answer we can give is that they're doing logical reasoning about each other, rather than trying to run each other's code. This could run into incompleteness problems, but not always:

http://intelligence.org/files/ParametricBoundedLobsTheorem.pdf

Comment by ProofOfLogic on Is Evidential Decision Theory presumptuous? · 2017-02-02T22:52:28.206Z · LW · GW

I find this and the smoker's lesion to have the same flaw, namely: it does not make sense to me to both suppose that the agent is using EDT, and suppose some biases in the agent's decision-making. We can perhaps suppose that (in both cases) the agent's preferences are what is affected (by the genes, or by the physics). But then, shouldn't the agent be able to observe this (the "tickle defense"), at least indirectly through behavior? And won't this make it act as CDT would act?

But: I find the blackmail letter to be a totally compelling case against EDT.

Comment by ProofOfLogic on How often do you check this forum? · 2017-01-31T10:34:39.526Z · LW · GW

ping

Comment by ProofOfLogic on Value Journaling · 2017-01-29T21:20:28.527Z · LW · GW

It may not be completely the same, but this does feel uncomfortably close to requiring an ignoble form of faith. I keep hoping there can still be more very general but yet very informative features of advanced states of the supposed relevant kind.

Ah. From my perspective, it seems the opposite way: overly specific stories about the future would be more like faith. Whether we have a specific story of the future or not, we shouldn't assume a good outcome. But perhaps you're saying that we should at least have a vision of a good outcome in mind to steer toward.

And personally speaking, it would be most dignifying if it could address (and maybe dissolve) those—probably less informed—intuitions about how there seems to be nothing wrong in principle with indulging all-or-nothing dispositions save for the contingent residual pain.

Ah, well, optimization generally works on relative comparison. I think of absolutes as a fallacy (when in the realm of utility, as opposed to truth) -- it means you're not admitting trade-offs. At the very least, the VNM axioms require trade-offs with respect to probabilities of success. But what is success? By just about any account, there are better and worse scenarios. The VNM theorem requires us to balance those rather than just aiming for the highest.

Or, even more basic: optimization requires a preference ordering, <, and requires us to look through the possibilities and choose better ones over worse ones. Human psychology often thinks in absolutes, as if solutions were simply acceptable or unacceptable; this is called recognition-primed decision making. This kind of thinking seems to be good for quick decisions in domains where we have adequate experience. However, it can cause our thinking to spin out of control if we can't find any solutions which pass our threshold. It's then useful to remember that the threshold was arbitrary to begin with, and the real question is which action we prefer; what's relatively best?
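As a toy illustration of that contrast (the options and scores below are made up), a threshold rule can come back empty-handed when nothing clears the bar, while choosing by the preference ordering always returns the relatively best option:

```python
# Made-up options and scores, just to contrast the two decision styles.

options = {"plan_a": 0.55, "plan_b": 0.40, "plan_c": 0.62}

def threshold_choice(scored, threshold=0.8):
    """Absolute-threshold style: accept the first option that clears the bar."""
    acceptable = [name for name, score in scored.items() if score >= threshold]
    return acceptable[0] if acceptable else None   # can come back with nothing

def relative_choice(scored):
    """Preference-ordering style: no bar, just take the best available option."""
    return max(scored, key=scored.get)

print(threshold_choice(options))   # None -- thinking "spins out", no acceptable plan
print(relative_choice(options))    # plan_c -- the relatively best option
```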

Another common failure of optimization related to this is when someone criticizes without indicating a better alternative. As I said in the post, criticism without indication of a better alternative is not very useful. At best, it's just a heuristic argument that an improvement may exist if we try to address a certain issue. At worst, it's ignoring trade-offs by the fallacy of absolute thinking.

Comment by ProofOfLogic on Value Journaling · 2017-01-27T08:58:10.728Z · LW · GW

I sympathize with the worry, but my attitude is that comparing yourself to the best is a losing proposition; effectively everyone is an underdog when thinking like that. The intelligence/knowledge ladder is steep enough that you never really feel like you've "made it"; there are always smarter people to make you feel dumb. So at any level, you'd better get used to asking stupid questions.

And personally, finding some small niche and indirectly bolstering the front-lines in some relatively small way, whether now or in the future, would not be valuable, satisfying, or something to particularly look forward to. Also why I'm asking.

I think it would be nice if someone wrote a post on "visceral comparative advantage" giving tips on how to intuitively connect "the best thing I could be doing" with comparative advantage rather than absolute notions. I'm not quite sure how to do it myself. The inability to be satisfied by a small niche is something that made a lot more sense when humans lived in small tribes and there was a decent chance to climb to the top.

I don't think many people on the "front lines" as you put it have concrete predictions concerning merging with superintelligent AIs and so on. We don't know what the future will look like; if things go well, the options at the time will tend to be solutions we wouldn't think of now.

Comment by ProofOfLogic on Too Much Effort | Too Little Evidence · 2017-01-26T09:16:43.261Z · LW · GW

so maybe we are arguing from the momentum of our first disagreement :P

I think so, sorry!

Comment by ProofOfLogic on Too Much Effort | Too Little Evidence · 2017-01-26T09:14:44.708Z · LW · GW

The people that in the end tested lucid dreaming were the lucid dreamers themselves.

Ah, right. I agree that invalidates my argument there.

Yes, that makes sense. I don't think we disagree much. I might be just confusing you with my clumsy use of the word rationality in my comments.

Ok. (I think I might have also been inferring a larger disagreement than actually existed due to failing to keep in mind the order in which you made certain replies.)

Comment by ProofOfLogic on Too Much Effort | Too Little Evidence · 2017-01-26T00:09:58.430Z · LW · GW

Based on our rational approach we are at a disadvantage for discovering these truths.

As I argued, assigning accurate (perhaps low, perhaps high) probabilities to the truth of such claims (of the general category which lucid dreaming falls into) does not make it harder -- not even a little harder -- to discover the truth about lucid dreaming. What makes it hard is the large number of similar but bogus claims to sift through, as well as the difficulty of lucid dreaming itself. Assigning an appropriate probability based on past experience with these sorts of claims only helps us because it allows us to make good decisions about how much of our time to spend investigating such claims.

What you seem to be missing (maybe?) is that we need to have a general policy which we can be satisfied with in "situations of this kind". You're saying that what we should really do is trust our friend who is telling us about lucid dreaming (and, in fact, I agree with that policy). But if it's rational for us to ascribe a really low probability (I don't think it is), that's because we see a lot of claims similar to this which turn out to be false. We can still try a lot of these things, with an experimental attitude, if the payoff of finding a true claim balances well against the number of false claims we expect to sift through in the process. However, we probably don't have the attention to look into all such cases, which means we may miss lucid dreaming by accident. But this is not a flaw in the strategy; this is just a difficulty of the situation.
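A rough sketch of that cost-benefit point, with all numbers made up for illustration: the probability we assign changes how many such claims are worth trying, but not how hard any individual true claim is to verify.

```python
# Made-up numbers: deciding whether a goofy-sounding claim is worth a serious try.

hours_per_trial = 20      # cost of seriously attempting one practice
value_if_real   = 200     # subjective value (in hours) of one practice that works
p_real          = 0.05    # prior that a random claim of this kind pans out

print(p_real * value_if_real - hours_per_trial)   # -10: random trials lose on average

# A friend's first-hand testimony can raise the probability enough to flip the sign:
p_after_testimony = 0.4
print(p_after_testimony * value_if_real - hours_per_trial)   # 60: worth trying

# Note that whether lucid dreaming itself works, and how long it takes to learn,
# is untouched by these numbers -- they only steer where our limited attention goes.
```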

I'm frustrated because it seems like you are misunderstanding a part of the response Kindly and I are making, but you're doing a pretty good job of engaging with our replies and trying to sift out what you think and where you start disagreeing with our arguments. I'm just not quite sure yet where the gap between our views is.

Comment by ProofOfLogic on Too Much Effort | Too Little Evidence · 2017-01-25T23:42:30.102Z · LW · GW

That's related to Science Doesn't Trust Your Rationality.

What I'd say is this:

Personally, I find the lucid-dreaming example rather absurd, because I tend to believe a friend who claims they've had a mental experience. I might not agree with their analysis of their mental experience; for example, if they say they've talked to God in a dream, then I would tend to suspect them of mis-interpreting their experience. I do tend to believe that they're honestly trying to convey an experience they had, though. And it's plausible (though far from certain) that the steps which they took in order to get that experience will also work for me.

So, I can imagine a skeptic who brushes off a friend's report of lucid dreaming as "unscientific", but I have no sympathy for it. My model of the skeptic is: they have the crazy view that observations made by someone who has a PhD, works at a university, and publishes in an academic journal are of a different kind than observations made by other people. Perhaps the lucid-dreaming studies have some interesting MRI scans to show differences in brain activity (I haven't read them), but they must still rely on descriptions of internal experience which come from human beings in order to establish the basic facts about lucid dreams, right? In no sense is the skeptic's inability to go beyond the current state of science "rational"; in fact, it strikes me as rather irrational.

This is an especially easy mistake for non-Bayesian rationalists to make because they lack a notion of degrees of belief. Without degrees of belief, there must be a set of trusted beliefs, and a process for beliefs to go from untrusted to trusted. It's natural for this process to involve the experimental method and peer review. But this kind of naive scientism only makes sense for a consumer of science. If scientists used the kind of "rationality" described in your post, they would never do the experiments to determine whether lucid dreaming is a real thing, because the argument in your post concludes that you can't rationally commit time and effort to testing uncertain hypotheses. So this kind of naive scientific-rationalism is somewhat self-contradictory.

Comment by ProofOfLogic on Too Much Effort | Too Little Evidence · 2017-01-25T20:09:44.339Z · LW · GW

You must move in much more skeptical circles than me. I've never encountered someone who even "rolled to disbelieve" when told about lucid dreaming (at least not visibly), even among aspiring rationalists; people just seem to accept that it's a thing. But it might be that most of them already heard about it from other sources.

Comment by ProofOfLogic on Too Much Effort | Too Little Evidence · 2017-01-25T20:01:45.197Z · LW · GW

Yes, I think that's right. Especially among those who identify as "skeptics", who see rationality/science as mostly heightened standards of evidence (and therefore lowered standards of disbelief), there can be a tendency to mistake "I have to assign this a low probability for now" for "I am obligated to ignore this due to lack of evidence".

The Bayesian system of rationality rejects "rationality-as-heightened-standard-of-evidence", instead accepting everything as some degree of evidence but requiring us to quantify those degrees. Another important distinction which bears on this point is "assuming is not believing", discussed on Black Belt Bayesian. I can't link to the individual post for some reason, but it's short, so here it is quoted in full:

Assuming Is Not Believing

Suppose I’m participating in a game show. I know that the host will spin a big wheel of misfortune with numbers 1-100 on it, and if it ends on 100, he will open a hatch in the ceiling over my head and dangerously heavy rocks will fall out. (This is a Japanese game show I guess.) For $1 he lets me rent a helmet for the duration of the show, if I so choose.

Do I rent the helmet? Yes. Do I believe that rocks will fall? No. Do I assume that rocks will fall? Yes, but if that doesn’t mean I believe it, then what does it mean? It means that my actions are much more similar (maybe identical) to the actions I’d take if I believed rocks would definitely fall, than to the actions I’d take if I believed rocks would definitely not fall.

So assuming and believing (at least as I’d use the words) are two quite different things. It’s true that the more you believe P the more you should assume P, but it’s also true that the more your actions matter given P, the more you should assume P. All of this could be put into math.

Hopefully nothing shocking here, but I’ve seen it confuse people.

With some stretching you can see the assumptions made by mathematicians in the same way. When you assume, with the intent to disprove it, that there is a largest prime number, you don’t believe there is a largest prime number, but you do act like you believe it. If you believed it you’d try to figure out the consequences too. It’s been argued that scientists disagree among themselves more than Aumann’s agreement theorem condones as rational, and it’s been pointed out that if they didn’t, they wouldn’t be as motivated to explore their own new theories; if so, you could say that the problem is that humans aren’t good enough at disbelieving-but-assuming.
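To put the quoted helmet example into numbers (the $1 fee and the 1-in-100 probability come from the quote; the injury cost is an assumed placeholder):

```python
# The $1 fee and the 1/100 probability come from the quoted example;
# the injury cost and the "helmet fully protects" simplification are assumptions.

p_rocks        = 1 / 100
cost_of_injury = 10_000    # assumed, in dollar-equivalents of harm
helmet_fee     = 1

expected_cost_without = p_rocks * cost_of_injury   # 100.0
expected_cost_with    = helmet_fee                 # 1, assuming the helmet fully protects

print(expected_cost_without, expected_cost_with)   # rent the helmet
# So you assume rocks will fall (act as if they will) while still believing
# the probability is only 1%.
```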

Comment by ProofOfLogic on Value Journaling · 2017-01-25T19:42:50.055Z · LW · GW

Malcolm Ocean has also done the "let me see who lives in my head" exercise, inspired by Brienne.

Ah, cool, thanks!

I myself keep a normal journal every day, recording my state of mind and events. This isn't exactly the same thing, but I think it approximates some of the benefits, and it also feeds my desire to record my life so ephemeral things have some concrete backing. I'd recommend that if gratitude journals don't feel right.

For me, regular journalling never felt interesting. I've kept a "research thoughts" journal for a long time, but writing about everyday events just didn't feel very motivating -- until CFAR convinced me that life debugging was an interesting thing to do. And then I still needed to find this format to make it into a thing I'd do regularly.

"much" to connect with, I think.

Fixed.

Comment by ProofOfLogic on Too Much Effort | Too Little Evidence · 2017-01-25T07:54:41.768Z · LW · GW

But (if my reasoning is correct) the fact is that a real method can work before there is enough evidence to support it. My post attempts to bring to our attention that this will make it really hard to discover certain experiences assuming that they exist.

Discounting the evidence doesn't actually make it any harder for us to discover those experiences. If we don't want to lose out on such things, then we should try some practices which we assign low probability, to see which ones work. Assigning low probability isn't what makes this hard -- what makes this hard is the large number of similarly-goofy-sounding things which we have to choose from, not knowing which ones will work. Assigning a more accurate probability just allows us to make a more accurate cost-benefit analysis in choosing how much of our time to spend on such things. The actual amount of effort it takes to achieve the results (in cases where results are real) doesn't change with the level of rationality of our beliefs.

Comment by ProofOfLogic on Too Much Effort | Too Little Evidence · 2017-01-25T07:46:01.756Z · LW · GW

We also have to take into account priors in an individual situation. So, for example, maybe I have found that shamanistic scammers who lie about things related to dreams are pretty common. Then it would make sense for me to apply a special-case rule to disbelieve strange-sounding dream-related claims, even if I tend to believe similarly surprising claims in other contexts (where my priors point to people's honesty).

Comment by ProofOfLogic on Quick modeling: resolving disagreements. · 2017-01-24T09:08:46.762Z · LW · GW

I didn't write the article, but I think "quick modeling" is referring to the previous post on that blog: simple rationality. It's an idiosyncratic view, though; I think the "quick modeling" idea works just as well if you think of it as referring to Fermi-estimate style fast modeling instead (which isn't that different in any case). The point is really just to have any model of the other person's belief at all (for a broad notion of "model"), and then try to refine that. This is more flexible than the double crux algorithm.

From my experience with CFAR, I suspect CFAR staff would call the strategy described here a form of double crux anyway. The double crux algorithm is an ideal to shoot for, but the broader spirit of double crux is more like what this article is recommending, I think.

Comment by ProofOfLogic on Descriptive Before Normative · 2016-12-06T08:48:21.372Z · LW · GW

Seems there's no way to edit the link, so I have to delete.

Comment by ProofOfLogic on Double Crux — A Strategy for Mutual Understanding · 2016-12-02T01:17:44.698Z · LW · GW

Disagreements can lead to bad real-world consequences for (sort of) two reasons:

1) At least one person is wrong and will make bad decisions which lead to bad consequences.
2) The argument itself will be costly (in terms of emotional cost, friendship, perhaps financial cost, etc.).

In terms of #1, an unnoticed disagreement is even worse than an unsettled disagreement; so thinking about #1 motivates seeking out disagreements and viewing them as positive opportunities for intellectual progress.

In terms of #2, the attitude of treating disagreements as opportunities can also help, but only if both people are on board with that. I'm guessing that is what you're pointing at?

My strategy in life is something like: seek out disagreements and treat them as delicious opportunities when in "intellectual mode", but avoid disagreements and treat them as toxic when in "polite mode". This heuristic isn't always correct. I had to be explicitly told that many people often don't like arguing even over intellectual things. Plus, because of #1, it's sometimes especially important to bring up disagreements in practical matters (that don't invoke "intellectual mode") even at risk of a costly argument.

It seems like something like "double crux attitude" helps with #2 somewhat, though.

Comment by ProofOfLogic on Open thread, Nov. 28 - Dec. 04, 2016 · 2016-11-30T19:30:38.227Z · LW · GW

Yeah, I think the links thing is pretty important. Getting bloggers in the rationalist diaspora to move back to blogging on LW is something of an uphill battle, whereas them or others linking to their stuff is a downhill one.

Comment by ProofOfLogic on Double Crux — A Strategy for Mutual Understanding · 2016-11-30T09:17:14.305Z · LW · GW

If double crux felt like the Inevitable Correct Thing, what other things would we most likely believe about rationality in order for that to be the case?

I think this is a potentially useful question to ask for three reasons. One, it can be a way to install double crux as a mental habit -- figure out ways of thinking which make it seem inevitable. Two, to the extent that we think double crux really is quite useful, but don't know exactly why, that's Bayesian evidence for whatever we come up with as potential justification for it. But, three, pinning down sufficient conditions for double crux can also help us see limitations in its applicability (IE, point toward necessary conditions).

I like the four preconditions Duncan listed:

  • Epistemic humility.
  • Good faith.
  • Confidence in the existence of objective truth.
  • Curiosity.

I made my list mostly by moving through the stages of the algorithm and trying to justify each one. Again, these are things which I think might or might not be true, but which I think would help motivate one step or another of the double crux algorithm if they were true.

  • A mindset of gathering information from people (that is, a mindset of honest curiosity) is a good way to combat certain biases ("arguments are soldiers" and all that).
  • Finding disagreements with others and finding out why they believe what they believe is a good way to gather information from them.
  • Most people (or perhaps, most people in the intended audience) are biased to argue for their own points as a kind of dominance game / intelligence signaling. This reduces their ability to learn things from each other.
  • Telling people not to do that, in some appropriate way, can actually improve the situation -- perhaps by subverting the signaling game, making things other than winning arguments get you intelligence-signaling-points.
  • Illusion of transparency is a common problem, and operationalizing disagreements is a good way to fight against the illusion of transparency.
  • Or: Free-floating beliefs are a common problem, and operationalization is a good way to fight free-floating beliefs.
  • Or: operationalizing / discussing examples is a good way to make things easier to reason about, which people often don't take enough advantage of.
  • Seeking your cruxes helps ensure your belief isn't free-floating: if the belief is doing any work, it must make some predictions (which means it could potentially be falsified). So, in looking for your cruxes, you're doing yourself a service, not just the other person.
  • Giving your cruxes to the other person helps them disprove your beliefs, which is a good thing: it means you're providing them with the tools to help you learn. You have reason to think they know something you don't. (Just be sure that your conditions for switching beliefs are good!)
  • Seeking out cruxes shows the other person that you believe things for reasons: your beliefs could be different if things were different, so they are entangled with reality.
  • In ordinary conversations, people try to have modus ponens without modus tollens: they want a belief that implies lots of things very strongly, but which is immune to attack. Bayesian evidence doesn't work this way; a hypothesis which makes a sharp prediction is necessarily sticking its neck out for the chopping block if the prediction turns out false. So, asking what would change your mind (asking for cruxes) is in a way equivalent to asking for implications of your belief. However, it's doing it in a way which enforces the equivalence of implication and potential falsifier. (See the small numerical sketch after this list.)
  • Asking for cruxes from them is a good way to avoid wasting time in a conversation. You don't want to spend time explaining something only to find that it doesn't change their mind on the issue at hand. (But, you have to believe that they give honest cruxes, and also that they are working to give you cruxes which could plausibly lead to progress rather than ones which will just be impossible to decide one way or the other.)
  • It's good to focus on why you believe what you believe, and why they believe what they believe. The most productive conversations will tend to concentrate on the sources of beliefs rather than the after-the-fact reasoning, because this is often where the most evidence lies.
  • If you disagree with their crux but it isn't a crux for you, then you may have info for them, but the discussion won't be very informative for your belief. Also, the weight of the information you have is less likely to be large. Perhaps discuss it, but look for a double crux.
  • If they disagree with your crux but it isn't a crux for them, then there may be information for you to extract from them, but you're allowing the conversation to be biased toward cherry-picking disproof of your belief; perhaps discuss, but try to get them to stick their neck out more so that you're mutually testing your beliefs.
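A small numerical sketch of the modus ponens / modus tollens point (the likelihoods here are made up): a hypothesis that predicts an observation sharply gains only modestly when the prediction comes true, but loses heavily when it fails.

```python
# Made-up likelihoods, to illustrate "sticking its neck out" in odds form.

prior_odds      = 1.0     # start indifferent between H and not-H
p_e_given_h     = 0.95    # H makes a sharp prediction that E will happen
p_e_given_not_h = 0.50    # the alternative is noncommittal about E

# If the prediction comes true, H gains by a modest factor:
print(prior_odds * (p_e_given_h / p_e_given_not_h))              # 1.9

# If the prediction fails, H loses by a much larger factor:
print(prior_odds * ((1 - p_e_given_h) / (1 - p_e_given_not_h)))  # 0.1
```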

Of all of this, my attempt to justify looking for a double crux rather than accepting single-person cruxes sticks out to me as especially weak. Also, I think a lot of the above points get something wrong with respect to good faith, but I'm not quite sure how to articulate my confusion on that.

Comment by ProofOfLogic on Open thread, Nov. 28 - Dec. 04, 2016 · 2016-11-30T07:32:49.916Z · LW · GW

Could I get a couple of upvotes so that I could post links? I'd like to put some of the LW-relevant content from weird.solar here now that link posts are a thing.

Comment by ProofOfLogic on Open thread, Nov. 28 - Dec. 04, 2016 · 2016-11-30T03:54:22.043Z · LW · GW

Basically, this:

https://intelligence.org/2016/07/27/alignment-machine-learning/

It's now MIRI's official 2nd agenda, with the previous agenda going under the name "agent foundations".

Comment by ProofOfLogic on Terminology is important · 2016-11-30T02:36:50.138Z · LW · GW

Reminds me of the general tone of Nate Soares' Simplifience stuff.