Comments

Comment by sirjackholland on Credibility of the CDC on SARS-CoV-2 · 2020-03-07T21:52:37.935Z · LW · GW

This post is a bad idea and it would be better if it were taken down. It's "penny-wise, pound-foolish" applied to epistemology and I would be utterly shocked if this post had a net positive effect.

I wrote a big critique outlining why I think it's bad, but I couldn't keep it civil and don't want to spend another hour editing it until it is, so I'll keep it brief and to the point: lesswrong has been a great source of info and discussion on COVID-19 in the past couple of weeks, much better than most mainstream sources. But as usual, I don't recommend the site to friends or family, because I know posts like this always pop up, and I don't want to expose people to this obvious info hazard or be put in the position of defending why I recommended a community that posts info hazards like this.

As a mostly-lurker, I'm really just raising my hand here and saying "posts like this make me extremely uncomfortable and unwilling to recommend this community to others." Obviously not everyone wants this community to become mainstream and I'm really not trying to make anyone feel bad, but I think it's worth mentioning since other than David Manheim, I don't see my opinion represented in the comments yet and it looks like it's a minority one.

(Obviously it's up to the author whether or not to remove the post - I'm not requesting anything here, just expressing my preferences.)

Comment by sirjackholland on Why do humans not have built-in neural i/o channels? · 2019-08-09T17:49:25.494Z · LW · GW

Thinking that evolution is smart on the timescales we care about is probably a worse heuristic, though. Evolution can't look ahead, which is fine when it's possible to construct useful intermediate adaptations, but poses a serious problem when there are no useful intermediates. In the case of infosec, it's as all-or-nothing as it gets. A single mistake exposes the whole system to attack by adversaries. In this case, the attack could destroy the mind of the person using their neural connection.

Consider it from this perspective: a single deleterious mutation to the part of the genome encoding the security system opens the person up to someone else poisoning their mind in serious and sudden ways. Consider literal toxins, including the wide variety of organophosphates and other chemicals that bind acetylcholinesterase and cause seizures (i.e., how many pesticides work). But also consider memetic attacks that cause the person to act against their own interests. Yes, language also permits these attacks, but much less efficiently than being able to directly update someone's beliefs/memories/heuristics/thoughts, which is entirely possible once you open a direct, physical connection to someone's brain from outside their skull - eyes are bad enough, from this perspective!

A secure system would not only have to be secure for the individual it evolved in, but also be robust to the variety of mutations it will encounter in that individual's descendants. And the stage in between wherein some individuals have secure neural communication while others can have their minds ravaged by adversaries (or unwitting friends) would prevent any widespread adoption of the genes involved.

Over millions upon millions of years, it's possible that evolution could devise an ingenious system that gets around all of this, but my guess is that direct neural communication would only noticeably help language-bearing humans, who have existed for only ~100K years. Simpler organisms can just exchange chemicals or other simple signals. I don't think 100K years is nearly enough time to evolve a robust-to-mutations security system for a process that can directly update the contents of someone's mind.

Comment by sirjackholland on No nonsense version of the "racial algorithm bias" · 2019-07-15T21:18:13.047Z · LW · GW

I'm not sure what "statistically immoral" means nor have I ever heard the term, which makes me doubt it's common speech (googling it does not bring up any uses of the phrase).

I think we're using the term "historical circumstances" differently; I simply mean what's happened in the past. Isn't the base rate purely a function of the records of white/black convictions? If so, then the fact that the rates are not the same is the reason that we run into this fairness problem. I agree that this problem can apply in other settings, but in the case where the base rate is a function of history, is it not accurate to say that the cause of the conundrum is historical circumstances? An alternative history with equal, or essentially equal, rates of convictions would not suffer from this problem, right?

I think what people mean when they say things like "machines are biased because they learn from history and history is biased" is precisely this scenario: historically, conviction rates are not equal between racial groups and so any algorithm that learns to predict convictions based on historical data will inevitably suffer from the same inequality (or suffer from some other issue by trying to fix this one, as your analysis has shown).
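
To make this concrete, here's a minimal sketch with made-up numbers (not the actual COMPAS figures): give two groups the very same classifier, with equal sensitivity and equal precision, and the difference in base rates alone forces the false positive rates apart.

```python
# Minimal sketch, illustrative numbers only (not real conviction data).
# Two groups share one classifier with the same true positive rate (TPR)
# and the same precision (PPV); only the base rates differ.

def false_positive_rate(base_rate, tpr=0.8, ppv=0.8):
    positives = base_rate              # fraction of the group that reoffends
    negatives = 1.0 - base_rate
    true_pos = tpr * positives
    false_pos = true_pos * (1.0 - ppv) / ppv  # implied by the shared precision
    return false_pos / negatives

print(false_positive_rate(0.5))  # ~0.200
print(false_positive_rate(0.3))  # ~0.086 - same classifier, unequal FPR
```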

Comment by sirjackholland on No nonsense version of the "racial algorithm bias" · 2019-07-15T14:45:44.145Z · LW · GW

Didn't you just show that "machines are biased because it learns from history and history is biased" is indeed the case? The base rates differ because of historical circumstances.

Comment by sirjackholland on [deleted post] 2019-07-10T20:34:38.748Z

Going along with this, our world doesn't appear to be the result of each individual making "random" choices in this way. If every good decision was accompanied by an alternate world with the corresponding bad decision, you'd expect to see people do very unexpected things all the time. e.g., this model predicts that each time I stop at a red light, there is some alternate me that just blows right through it. Why aren't there way more car crashes if this is how it works?

Comment by sirjackholland on Everybody Knows · 2019-07-05T14:05:48.498Z · LW · GW

For #1, I'm not sure I agree that not everyone in the room knows. I've seen introductions like this at conferences dedicated entirely to proteins, where it was assumed, rightly or not, that everyone knows the basics. It's more that not everyone will have the information cached as readily as the specialists. So I agree that sometimes it is more accurate to say "As I'm sure most of you know," but many times you really are confident that everyone knows, just not necessarily at the tip of their tongue. It serves as a reminder, not actually new knowledge.

I suppose you could argue: since everyone is constantly forgetting little things here and there, even specialists forget some basics some of the time and so, at any given time, when a sufficiently large number of people is considered, it is very likely that at least one person cannot recall some basic fact X. Thus, any phrase like "everybody knows X" is almost certainly false in a big enough room.

With this definition of knowledge, I would agree with you that the phrase should be "as most of you know" or something similarly qualified. But I find this definition of knowledge sort of awkward and unintuitive. There is always some amount of prompting, some kind of cue, some latency required to access my knowledge. I think "remembers after 30 seconds of context" still counts as knowledge, for most practical purposes, especially for things outside my wheelhouse. Perhaps the most accurate phrase would be something like "As everyone has learned but not necessarily kept fresh in their minds..."

For #2, I should have clarified: this was an abbreviated reference to a situation in an apartment complex I lived in, where management regularly reminded everybody that bears would wreak havoc if trash were left out, and people regularly left trash out, to the delight of the bears. So I think in that scenario, everybody involved really did know; they just didn't care enough.

Comment by sirjackholland on Everybody Knows · 2019-07-03T20:29:37.515Z · LW · GW

Echoing the other replies so far, I can think of other practical explanations for saying "everybody knows..." that don't fall into your classification.

1) Everybody knows that presenting a fact X to someone who finds X obvious can sometimes give them the impression that you think they're stupid/uninformed/out-of-touch. For instance, the sentence you just read. For another instance, the first few slides of a scientific talk often present basic facts of the field, e.g. "Proteins comprise one or more chains of amino acids, of which there are 20 natural types." Everybody who's a professional biologist/biochemist/bioinformatician/etc. knows this [1]. If you present this information as being even a little bit novel, you look ridiculous. So a common thing to do is to preface such basic statements of fact with "As is well known / As everybody knows / As I'm sure you know / etc." [2]

No bad faith at all! Just a clarification that your statements are meant to help newcomers or outsiders who may not remember such facts as readily as people who work with them every day.

2) I find myself saying "but everybody knows..." to myself or the person I'm talking to when trying to understand puzzling behavior of others. For example, "everybody knows that if trash bags are left outside the dumpster, bears will come and tear everything up, so why do people keep leaving them there?" In this context, the "everybody knows" clause isn't meant as a literal truth but as a seemingly reasonable hypothesis in tension with concrete evidence to the contrary. If everybody has been told, repeatedly, that trash is to be put in the dumpster and not next to it, why do they act like they don't know this? Obviously there is no real mystery here: people do know, they just don't care enough to put in the effort.

But especially in more complex situations, it often helps to lay out a bunch of reasonable hypotheses and then think about why they might not hold. "Everybody knows ..." is a very common type of reasonable hypothesis and so discussion of this sort will often involve good faith uses of the phrase. Put another way: not all statements that look like facts are meant as facts and in particular, many statements are made expressly for the purpose of tearing them down as an exercise in reasoning (essentially, thinking out loud). But if you're not aware of this dynamic, and it's done too implicitly, it might seem like people are speaking in bad faith.

I guess what I'm trying to say in general is: "this statement of fact is too obviously false to be a mistake" has two possible implications: one, as you say, is that the statement was made in bad faith. The other, though, is that it's not a statement of fact. It's a statement intended to do something more so than to say something.

[1] Of course, even such basic facts aren't strictly true. There are more than 20 natural amino acids if you include all known species, but, as everybody knows, everybody excludes selenocysteine and pyrrolysine from the canonical list.

[2] The alternative is to exclude these first few slides altogether, but this often makes for a too-abrupt start and the non-specialists are more likely to get lost partway through without those initial reminders of what's what.

Comment by sirjackholland on What's the most "stuck" you've been with an argument, that eventually got resolved? · 2019-07-01T17:58:16.831Z · LW · GW

Simplified examples from my own experience of participating in or witnessing this kind of disagreement:

Poverty reduction: Alice says "extreme poverty is rapidly falling" and Bob replies "$2/day is not enough to live on!" Alice and Bob talked past each other for a while until realizing that these statements are not in conflict; the conflict concerns the significance of making enough money to no longer be considered in "extreme poverty." The resolution came from recognizing that extreme poverty reduction is important, but that even 0% extreme poverty does not imply that we have solved starvation, homelessness, etc. That is, Alice thought Bob was denying how fast and impressively extreme poverty is being reduced, which he was not, and Bob thought Alice believed approaching 0% extreme poverty was sufficient, when she in fact did not.

Medical progress: Alice says "we don't understand depression" and Bob replies "yes we do, look at all the anti-depression medications out there." Alice and Bob talked past each other for a while, with the discussion getting increasingly angry, until it was realized that Alice's position was "you don't fully understand a problem until you can reliably fix it" and Bob's position was "you partially understand a problem when you can sometimes fix it". These are entirely compatible positions and Alice and Bob didn't actually disagree on the facts at all!

Free markets: Alice says "free markets are an essential part of our economy" and Bob replies "no they're not because there are very few free markets in our economy and none of the important industries can be considered to exist within one." The resolution to this one is sort of embarrassing because it's so simple and yet took so long to arrive at: Alice's implicit definition of a free market was "a market free from government interference" while Bob's implicit definition was "a market with symmetric information and minimal barriers to entry." Again, while it sounds blindingly obvious why Alice and Bob were talking past each other when phrased like this, it took at least half an hour of discussion among ~6 people to come to this realization.

Folk beliefs vs science: Alice says "the average modern-day Westerner does not have a more scientific understanding of the world than the average modern-day non-Westerner who harbors 'traditional'/'folk'/'pseudoscientific' beliefs" and Bob replies "how can you argue that germ theory is no more scientific than the theory that you're sick because a demon has inhabited you?" After much confusing back and forth, it turns out Alice is using the term 'scientific' to denote the practices associated with science while Bob is using the term to denote the knowledge associated with science. The average person inculcated in Western society indeed has more practical knowledge about how diseases work and spread than the average person inculcated in their local, traditional beliefs, but both people are almost entirely ignorant of why they believe what they believe and could not reproduce the knowledge if needed; e.g., the average person does not know the biological differences between a virus and a bacterium even though they are aware that antibiotics work on bacteria but not viruses. Once the distinction was made between "science as a process" and "science as the fruits of that process," Alice and Bob realized they actually agreed.

I think the above are somewhat "trivial" or "basic" examples in that the resolution came down to clearly defining terms: once Alice and Bob understood what each was claiming, the disagreement dissolved. Some less trivial ones for which the resolution was not just the result of clarifying nebulous/ambiguous terms:

AI rights: Alice says "An AGI should be given the same rights as any human" and Bob replies "computer programs are not sentient." After much deliberation, it turns out Alice's ethics are based on reducing suffering, where the particular identity and context surrounding the suffering don't really matter, while Bob's are based on protecting human-like life, with the moral value of entities rapidly decreasing as an inverse function of human-like-ness. Digging deeper, for Alice, any complex system might be sentient, and the possibility of a sentient being suffering is particularly concerning when that being is traditionally not considered to have any moral value worth protecting. For Bob, sentience can't possibly exist outside of a biological organism, and so efforts toward ensuring computer programs aren't deleted while running are a distraction that seems orthogonal to ethics. So while the ultimate question of "should we give rights to sentient programs?" was not resolved, a great amount of confusion was reduced when Alice and Bob realized they disagree about a matter of fact - can digital computers create sentience? - and not so much about how to ethically address suffering once the matter of who is suffering has been agreed on. (Actually, it isn't so much a "matter of fact," since further discussion revealed substantial metaphysical disagreements between Alice and Bob, but at least the source of the disagreements was discovered.)

Government regulation: Alice says "the rise of the internet makes it insane to not abolish the FDA" and Bob replies "A lack of drug regulation would result in countless deaths." Alice and Bob angrily, vociferously disagree with each other, unfortunately ending the discussion with a screaming match. Later discussion reveals that Alice believes drug companies can and will regulate themselves in the absence of the FDA and that 1) for decades now, essentially no major corporation has deliberately hurt their customers to make more profit and that 2) the constant communication enabled by the internet will educate customers on which of the few bad apples to avoid. Bob believes drug companies cannot and will not regulate themselves in the absence of the FDA and that 1) there is a long history of corporations hurting their customers to make more profit and that 2) the internet will promote just as much misinformation as information and will thus not alleviate this problem. Again, the object-level disagreement - should we abolish the FDA given the internet? - was not resolved, but the reason for that became utterly obvious: Alice and Bob have *very* different sets of facts about corporate behavior and the nature of the internet.

How to do science: Alice says "you should publish as many papers as possible during your PhD" and Bob replies "paper count is not a good metric for a scientist's impact." It turns out that Alice was giving career advice to Carol in her particular situation while Bob was speaking about things in general. In Carol's particular, bespoke case, it may have been true that she needed to publish as many papers as possible during her PhD in order to have a successful career even though Alice was aware this would create a tragedy-of-the-commons scenario if everyone were to take this advice. Bob didn't realize Alice was giving career advice instead of her prescriptive opinion on the matter (like Bob was giving).

Role playing: Alice says "I'm going to play this DnD campaign as a species of creature that can't communicate with most other species" and Bob replies "but then you won't be able to chat to your fellow party members or share information with them or strategize together effectively." Some awkwardness ensued until it became clear that Alice *wanted* to be unable to communicate with the rest of the party due to anxiety-related concerns. Actually, realizing this didn't really reduce the awkwardness, since it was an awkward situation anyway, but Alice and Bob definitely talked past each other until the difference in assumptions was revealed. Had Bob realized what Alice's concerns were to begin with, he probably would not have initiated the conversation: he didn't have a problem with a silent character but simply wanted to ensure Alice understood the consequences, and the discussion revealed that she did.

Language prescriptivism: Alice says "that's grammatically incorrect" and Bob replies "there is no 'correct' or 'incorrect' grammar - language is socially constructed!" Alice and Bob proceed to have an extraordinarily unproductive discussion until Alice points out that while she doesn't know exactly how it's decided what is correct and incorrect in English, there *must* be some authority that decides, and that's the authority she follows. While Alice and Bob did not come to an agreement per se, it became clear that what they really disagreed about was whether or not the English language has a definitive authority, not whether or not one should follow the authority assuming it exists.

I'm going to stop here so the post isn't too long, but I very much enjoyed thinking about these circumstances and identifying the "A->B" vs "X->Y" pattern. So much time and emotional energy was wasted that could have been saved had Alice and Bob first established exactly what they were talking about.


Comment by sirjackholland on Whence decision exhaustion? · 2019-06-29T15:30:55.241Z · LW · GW

One notable aspect in my experience with this is that exhaustion is not exclusively a function of the decision's complexity. I can experience exhaustion when deciding what to eat for dinner, for instance, even though I've made similar decisions literally thousands of times before, the answer is always obvious (cook stuff I have at home or order from a restaurant I like - what else is there?), and the stakes are low ("had I given it more thought, I would have realized I was more in the mood for soup than a sandwich" is not exactly a harrowing loss).

Another aspect to note is that decisions that end up exhausting me usually entail doing work I don't want to do. I never get exhausted when deciding where to hike, for instance, because no matter what, I know I will enjoy myself, even if one spot requires a long drive, or inconvenient preparations, or whatever. One possibility is that part of me recognizes that the correct decision will inevitably cause me to do work I don't want to do. Actually deciding sets whatever work I have to do into motion, while "deliberating" endlessly lets me put it off, which might end up feeling internally like the decision is hard to make. A motivated mind is great at coming up with bogus reasons why an obvious decision is not so obvious.

A key insight for me was recognizing that my reluctance to do work is pretty directly proportional to what I expect the value of its product to be, biased towards short-term gains unless I explicitly visualize the long-term consequences. If I realize that the best decision for dinner is to cook, and that reminds me that I need to do dishes and chop vegetables and clean the stove, etc., then I have a hard time "deciding" that cooking is the way to go, because it implies that in the short term I will be less happy than I am currently. If I think about the scenario where I procrastinate and don't cook, and focus on how hungry I will be and how unpleasant that feeling is, then my exhaustion often fades and the decision becomes clearer.

Comment by sirjackholland on Epistemic Spot Check: The Role of Deliberate Practice in the Acquisition of Expert Performance · 2019-06-27T14:49:53.814Z · LW · GW

Thanks for the spot check! I had heard this number (~4 hours per day) as well and I now have much less confidence in it. That most of the cited studies focus on memorization / rote learning seriously limits their generality.

Anecdotally, I have observed soft limits for the amount of "good work" I can do per day. In particular, I can do good work for several hours in a day but - somewhat mysteriously - I find it more difficult to do even a couple hours of good work the next day. I say "mysteriously" because sometimes the lethargy manifests itself in odd ways but the end result is always less productivity. My folk theory-ish explanation is that I have some amount of "good work" resources that only gradually replenish, but I have no idea what the actual mechanism might be and my understanding is that ego depletion has not survived the replication crisis, so I'm not very confident in this.

Comment by sirjackholland on Is your uncertainty resolvable? · 2019-06-21T15:02:51.361Z · LW · GW

While a true Bayesian's estimate already includes the probability distributions of future experiments, in practice I don't think it's easy for us humans to do that. For instance, I know based on past experience that a documentary on X will not incorporate as much nuance and depth as an academic book on X. I *should* immediately reduce the strength of any update to my beliefs on X upon watching a documentary given that I know this, but it's hard to do in practice until I actually read the book that provides the nuance.

In a context like that, I definitely have experienced the feeling of "I am pretty sure that I will believe X less confidently upon further research, but right now I can't help but feel very confident in X."
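
For what it's worth, the ideal-Bayesian half of this is just conservation of expected evidence: averaged over the possible outcomes of the future experiment, the expected posterior already equals the prior. A minimal sketch with made-up numbers:

```python
# Minimal sketch, made-up numbers: the expected posterior over the
# possible outcomes of the next experiment equals the current prior,
# so an ideal Bayesian can't anticipate the direction of an update.

prior = 0.7               # P(X)
p_e_given_x = 0.9         # P(evidence | X)
p_e_given_not_x = 0.4     # P(evidence | not-X)

p_e = prior * p_e_given_x + (1 - prior) * p_e_given_not_x
post_if_e = prior * p_e_given_x / p_e
post_if_not_e = prior * (1 - p_e_given_x) / (1 - p_e)

print(p_e * post_if_e + (1 - p_e) * post_if_not_e)  # 0.7, the prior again
```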

Comment by sirjackholland on FB/Discord Style Reacts · 2019-06-03T14:54:25.581Z · LW · GW

Other than both being pictographic, I'm not sure emoticons and reactions are that related. Emoticons are either objects (neither here nor there for our purposes) or facial/bodily expressions. Reactions are emotional or high-level responses to information.

You can't really express the thumbs-up reaction with a facial expression emoticon. You can use a smiley face or something similar, but thumbs-up means approval, not happiness. If someone says "I'll be five minutes late - start without me" I don't want to express happiness at this, but I do want to acknowledge it and (if this is the case) say it's OK. A thumbs-up does this wonderfully: by definition, it means I have acknowledged the message, and it signals approval rather than disapproval, but nothing else. You can't really do that with emoticons.

I think there are lots of situations in which reactions can do things emoticons can't, and I've found that I notice nice opportunities for reactions more when I'm in an environment in which they're readily available.

Comment by sirjackholland on On alien science · 2019-06-03T14:27:15.126Z · LW · GW

(Just an attempt at an answer)

Both an explanation and a prediction seek to minimize the loss of information, but the information under concern differs between the two.

For an explanation, the goal is to make it as human understandable as possible, which is to say, minimize the loss of information resulting from an expert human predicting relevant phenomena.

For a prediction, the goal is to make it as machine understandable as possible, which is to say, minimize the loss of information resulting from a machine predicting relevant phenomena.

The reason there isn't a crisp distinction between the two is because there isn't a crisp distinction between a human and a machine. If humans had much larger working memories and more reliable calculation abilities, then explanations and predictions would look more similar: both could involve lots of detail. But since humans have limited memory and ability to calculate, explanations look more "narrative" than predictions (or from the other perspective, predictions look more "technical" than explanations).

Note that before computers and automation, machine memory and calculation weren't always better than the human equivalent, which would have elided the distinction between explanation and prediction in a way that could never happen today. E.g., if all you have to work with is a compass and straightedge, then any geometric prediction is also going to look like an explanation, because we humans grok the compass and straightedge in a way we'll never, without modifications anyway, grok the more technical predictions modern geometry can make. The exceptions that prove the rule are very long geometric methods/proofs, which strain human memory and so feel more like predictions than methods/proofs that can be summarized in a picture.

As machines get more sophisticated, the distinction will grow larger, as we've already seen in debates about whether automated proofs with 10^8 steps are "really proofs" - this gets at the idea that if the steps are no longer grokable by humans, then it's just a prediction and not an explanation, and we seem to want proofs to be both.

Comment by sirjackholland on Von Neumann’s critique of automata theory and logic in computer science · 2019-05-28T15:37:37.000Z · LW · GW

I think what he's saying is that the existence of noise in computing hardware means that any computation done on this hardware must be (essentially) invariant to this noise, which leads the methods away from the precise, all-or-nothing logic of discrete math and into the fuzzier, smoother logic of probability distributions and the real line. This makes me think of analog computing, which is often done in environments with high noise and can indeed produce computations that are mostly invariant to it.

But, of course, analog computing is a niche field dwarfed by digital computing, making this prediction of von Neumann's comically wrong: the solution people went with wasn't to re-imagine all computations in a noise-invariant way, it was to improve the hardware to the point that the noise becomes negligible. But I don't want to sound harsh here at all. The prediction was so wrong only because, at least as far as I know, there was no reasonable way to predict the effect that transistors would have on computing in the '50s since they were not invented until around then. It seems reasonable from that vantage point to expect creative improvements in mathematical methods before a several-orders-of-magnitude improvement in hardware accuracy.

Comment by sirjackholland on Highlights from "Integral Spirituality" · 2019-04-15T21:36:03.854Z · LW · GW

The pre/post conflation reminds me of Terence Tao's discussion of math pre/post proofs (https://terrytao.wordpress.com/career-advice/theres-more-to-mathematics-than-rigour-and-proofs/), which I've found to be a helpful guide in my journeys through math. I'm not surprised the distinction occurs more widely than in just math, but this post has encouraged me to keep the concept on hand in contexts outside of math.

I also enjoyed the discussion about how various religions are all getting at the same concepts through different lenses/frameworks. As an atheist, I have no interest in, say, Christianity per se; I enjoy learning about the historical, psychological, and sociological components in the same way I enjoy learning about many aspects of humanity, but I'm not really interested in things like grace or transubstantiation or exegesis because it all falls under the label "false" or "irrelevant". Having said that, I'm also very much aware that many Christian thinkers have insights that are relevant even for people who don't share their belief in God. But I can't get myself to slog through writing that is mostly false/irrelevant just to glean some nuggets of wisdom.

It would be excellent to find a book that synthesizes all of the most insightful aspects of the major religions, strips them of their cultural/theological labels into something more generic, and presents the stuff that's been "replicated" (in the sense of multiple religions all coming to the same conclusion modulo cultural/theological labels). Do you know of a book that does this? Is Integral Spirituality a good example? It seems like it's in the right ballpark, or at least would reference many books that are.

Comment by sirjackholland on Blackmail · 2019-02-20T21:34:02.710Z · LW · GW

I'm a little confused about how the burden of proof ended up as it is in this discussion. I think most people intuitively understand that blackmail is a bad thing. Demanding that they articulate a rigorous, general argument for why seems like a much higher bar than we apply to other things.

Consider murder. Murder should be illegal, obviously (I hope?! Not sure there is much to discuss if we disagree on that). But it's not trivial to construct a rigorous, general argument for why. Any demonstrated harm can be countered with another hypothetical in which some convoluted chain of events following the murder creates a net benefit.

"Oh, it's always bad to shoot a stranger in the street? What if you have an uncanny ability to identify serial killers and recognize some stranger as one who's about to kill again and you can prevent even more deaths by shooting them on sight?"

"You think killing your unfaithful spouse should be illegal? What about if the knowledge that they're continuing to see other people causes you such great psychological harm that killing the spouse is actually less bad, huh?"

And you don't really have to go to silly extremes like that to generate counter examples; most claims that "murder should be illegal" implicitly except killing for self defense, during wartime or for national security reasons, euthanasia, late term abortion for medical reasons, and the death penalty itself. I'm not saying everyone has the same list of exceptions, but I've never met anyone who rejects all of those simultaneously and claims that literally all murder, no matter the context or consequences, should be illegal. That's not what people mean when they say murder should be illegal.

How is the blackmail situation any different? It's trivial to come up with endless exceptions in which the blackmail is more like whistleblowing and provides a net benefit. I have two responses to that. First off, even if blackmail were legalized, the government would never allow routine whistleblowing. There are always a million exceptions when it comes to how three letter organizations and their information are treated legally. But more importantly, why does the possibility of black swan whistleblowing scenarios make the law worse to have than not? There are plenty of exceptions to the general rule that murder is bad and yet we still have laws against murder, right?

Comment by sirjackholland on Epistemic Tenure · 2019-02-20T20:43:53.518Z · LW · GW

Something I didn't notice in the comments is how to handle the common situation that Bob is a one-hit wonder. Being a one-hit wonder is pretty difficult; most people are zero-hit wonders. Being a two-hit wonder is even more difficult, and very few people ever create many independent brilliant ideas / works / projects / etc.

Keeping that in mind, it seems like a bad idea to make a precedent of handing out epistemic tenure. Most people are not an ever-flowing font of brilliance and so the case that their one hit is indicative of many more is much less likely than the case that you've already witnessed the best thing they'll do.

Just anecdotally, I can think of many public intellectuals who had one great idea, or bundle of ideas, and now spend most of their time spouting unrelated nonsense. And, troublingly, the only reason people take their nonsense seriously is that there is, at least implicitly, some notion of epistemic tenure attached to them. These people are a tremendous source of "intellectual noise", so to speak, and I think discourse would improve if the Bobs out there had to demonstrate the validity of their ideas from as close to scratch as possible rather than getting an endless free pass.

My biggest hesitation with never handing out intellectual tenure is that it might make it harder for super geniuses to work as efficiently. Would von Neumann have accomplished what he did if he had to compete as if he were just another scientist over and over? But I think a lack of intellectual tenure would have to really reduce super-genius efficiency for it to make up for all the noise it produces. There are just so many more one-hit wonders than (N>1)-hit wonders.

Comment by sirjackholland on Spaghetti Towers · 2019-02-07T21:32:52.189Z · LW · GW

I would call this a good visual representation of technical debt. I like to think of it as chaining lots of independently reasonable low-order approximations until their joint behavior becomes unreasonable.

It's basically fine to let this abstraction be a little leaky, and it's basically reasonable to let that edge case be handled clumsily, and it's basically acceptable to assume the user won't ever give this pathological input, etc., until the number of "basically reasonable" assumptions N becomes large enough that 0.99^N ends up less than 0.5 (or some other unacceptably low probability of success). And even with a base as high as 0.99, the N that breaks 50% is only ~70!
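
A quick sanity check of that breakeven point (a minimal sketch; the 0.99 per-assumption reliability is of course a made-up round number):

```python
# How many independent "99% safe" assumptions can you stack before the
# chain as a whole is more likely than not to fail?
p = 0.99        # probability that any single assumption holds
joint = 1.0
n = 0
while joint >= 0.5:
    joint *= p
    n += 1
print(n, round(joint, 4))  # 69 0.4998 - roughly 70, as stated above
```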

The visual depiction of this as parts being stacked such that each additional part is placed in what looks to be a reasonable way but all the parts together look ridiculously fragile is excellent! It really emphasizes that this problem mode can only be understood with a global, rather than a local or incremental, view.

Comment by sirjackholland on Is Clickbait Destroying Our General Intelligence? · 2018-11-19T18:59:39.988Z · LW · GW

Alternative hypothesis: the internet encourages people who otherwise wouldn't contribute to the general discourse to contribute to it. In the past, contributing meant writing some kind of article, or at least a letter to the editor, which 1) requires a basic level of literacy and intellectual capacity, and 2) provides a filter, removing the voices of those who can't write something publishers consider worthy of publication (with higher-influence publications having, in general, stricter filters).

Anecdote in point: I have yet to see an internet comment that I couldn't imagine one of my relatives writing (sorry, relatives, but a few of y'all have some truly dumb opinions!). But these relatives I have in mind wouldn't have contributed to the general discourse before the internet was around, so if you don't have That Uncle in your family you may not have been exposed to ideas that bad before seeing YouTube comments.

Last minute edit: I mean that I have yet to see an internet comment that I couldn't imagine one of my relatives writing years and years ago, i.e. I expect that we would have seen 2018 level discourse in 2002 if That Uncle had posted as much in 2002 as in 2018.

Comment by sirjackholland on Explore/Exploit for Conversations · 2018-11-15T19:45:57.146Z · LW · GW

I really like this framework! I've noticed that if someone makes a comment that assumes everyone in the group has CI, but I'm not sure if everyone does, I get a sense of awkwardness and feel the need to model two conversations: the one happening assuming everyone has CI, and the one happening assuming at least one person doesn't. This has the unfortunate side effect of consuming most of my thought-bandwidth, which makes me boring and quiet even if I would have otherwise been engaged and talkative.

Comment by sirjackholland on Mandatory Obsessions · 2018-11-15T19:17:54.646Z · LW · GW

I think most political opinions are opinions about priorities of issues, not issues per se. I remember from years ago, before most states had started legalizing same sex marriage, a relative of mine expressing the sentiment "I'm not against legalizing gay marriage, I just don't want to hear about the topic ever again." I think this is the attitude that the (admittedly very obnoxious and frustrating) party guest is concerned about. If more people held the opinion of my relative then we'd be stuck in a bad equilibrium, with everyone agreeing that they would be OK with same sex marriage but no one bothering to put in the effort to legalize it.

It doesn't matter if everyone agrees X is an issue if everyone also believes that solving the much more difficult Y should always take priority over solving X - this has the same consequences as a world in which no one believes X is an issue. Of course that doesn't mean you should go around yelling at people for not being obsessed with your favorite obsession, but I think "unconscious selfishness" and "mind viruses" are uncharitable explanations for what seems to be a reasonable concern: low-priority tasks often never get completed, and so people who claim to support a cause, but only at low priority, are effectively not supporting it.

Having said that, I completely agree with your larger point about diversity - I would much prefer a world in which people can obsess over what they want to obsess over even when their obsessions and lack-of-obsessions are contrarian.

Comment by sirjackholland on Do Animals Have Rights? · 2018-10-17T20:22:48.780Z · LW · GW
This also explains why you don't have a "right" to medical care. Someone else has to provide it. If you have a right to it, then the provider, who has no choice but to provide it, is no more than a slave.

I don't understand why this logic would apply to medical care but not, for instance, the right to speech. Freedom of speech must be provided. Even in the narrow sense of the term - the right to not be persecuted by the government for your speech/expression - it requires a government with enough checks and balances and a society with enough rule-of-law style norms that any old judge/cop/politician can't abuse their power to persecute you for what you've said. That in turn requires taxes, elections, all that jazz.

You could argue that, empirically, the cost to ensure a right to medical care happens to be greater than the cost to ensure a right to speech (I don't know if that's true, but, questions about what counts as a cost aside, it's an empirical question), but I don't think a right to speech is a different type of right than a right to medical care.

In that light, it's not clear why animals couldn't have rights. It's clearly true that these rights would incur costs - if there were no cost to upholding the right, there would be nothing to uphold - and that these costs would mostly, if not entirely, fall on us humans. But why would the fact that the costs fall disproportionately on some people invalidate/disqualify the right? It costs a lot more to uphold some people's right to speech than others.

In fact, Jordan Peterson is a great example of someone whose right to speech costs a lot more to uphold than the average person's because he's actually saying things that cause governments and other powerful institutions to try to silence him. Similarly, some people require a lot more medical care than others. The cost of animal rights would, by definition, be incurred entirely on behalf of animals who can't provide anything back, which makes it a sort of extreme case, but I (genuinely, not just for the sake of argument) don't understand why the extreme should be handled any differently (again, modulo empirical findings like "this right is too expensive to uphold right now, but we'll keep working on it and uphold it when we can").

More concisely: if upholding animal rights makes us slaves to animals, why does upholding Jordan Peterson's particularly costly right to speech not make us slaves (or almost-slaves) to him?

Comment by sirjackholland on The Valley of Bad Theory · 2018-10-08T16:35:39.207Z · LW · GW

I'm curious how the complexity of the system affects the results. If someone hasn't learned at least a little physics - a couple college classes' worth or the equivalent - then the probability of inventing/discovering enough of the principles of Newtonian mechanics to apply them to a multi-parameter mechanical system in a few hours/days is ~0. Any theory of mechanics developed from scratch in such a short period will be terrible and fail to generalize as soon as the system changes a little bit.

But what about solving a simpler problem? Something non-trivial but purely geometric or symbolic or something for which a complete theory could realistically be developed by a group of people passing down data and speculation through several rounds of tests. Is it still true that the blind optimizers outperform the theorizers?

What I'm getting at is that this study seems to point to a really interesting and useful limitation to "amateur" theorizing, but if the system under study is sufficiently complicated, it becomes easy to explain the results with the less interesting, less useful claim that a group of non-specialists will not, in a matter of hours or days, come up with a theory that required a community of specialists years to come up with.

For instance, a bunch of undergrads in a psych study are not going to rederive general relativity to improve the chances of predicting when pictures of stars look distorted - clearly in that case the random optimizers will do better but this tells us little about the expected success of amateur theorizing in less ridiculously complicated domains.

Comment by sirjackholland on What the Haters Hate · 2018-10-04T17:14:28.202Z · LW · GW
Whereas when I read a comment that is not math-heavy, and is phrased in what seems to be perfectly ordinary plain English, and makes reference to ideas and words and phrases with which I am familiar, and does not seem confusing, then… how am I to know that the comment is actually not aimed at me? Why would I assume that it isn’t?

Is that what happened here, though? I posted a comment referencing the SSC rule and you objected to its use in that context. We both knew what I was referring to. The confusion seems to have arisen because I was intending the reference as a shortcut through my reasoning and you interpreted it as me smuggling a foreign norm into the discourse as if it were already widely accepted.

If I had been clearer about how I was using the reference, would there be any illusion of transparency, much less a double illusion? I didn't expect anyone unfamiliar or not on board with the reference to understand my logic, and you didn't think you understood my point while actually misunderstanding it - in fact, you very clearly expressed that you didn't understand my point. So both of us were aware of the lack of transparency from the get-go, I think.

It would no longer be required of participants in Less Wrong, that they make their writing comprehensible.

I mean this in the least sarcastic way possible: to 99% of people I talk to, LW writing is incomprehensible. I have tried many times to introduce LW-related concepts to people unfamiliar with LW and, in my experience, it's insanely difficult to export anything from here to the rest of the world. Obviously my success also depends on how well I explain things, but the only subjects I have similar difficulty explaining to people are very technical things from my own field.

To be clear, I'm not saying "well everything's already too hard to explain so let's go full Zizek!" It is always better to be more comprehensible, all else equal. But all else is not equal - unwrapping explanations to the extent that they are understandable to someone with no familiarity with the subject comes at a great cost. It's good to develop jargon and shorthand to expedite communication between people in the know, and when the jargon is explicitly jargon (e.g., "the 2-out-of-3 rule from SSC"), I don't think there is any illusion of transparency.

Sorry for the wall of text - I'm trying to keep these responses as short as possible but I also want to be clear. One more thing:

Should anyone question them, they simply respond that their comment was aimed only and exclusively at those who already understand them

If I'm trying to explain X but end up only explaining it to people who understand X, then yes, this is pointless and silly. But if I'm trying to explain Y and end up only explaining it to people who understand X, that is useful, especially when many people understand X.

Comment by sirjackholland on What the Haters Hate · 2018-10-04T15:28:33.546Z · LW · GW
it must therefore be true (in your view) that these reasons have been discussed and elucidated, prominently and publicly

If I wanted to ensure that every interlocutor understood (or could easily understand given a small amount of time) my point, then this would have to be true. You can't decode the message if you don't have the key, so if you want everyone to decode the message then the key must be as public and prominent as possible.

But I didn't want to ensure that because doing so would take a lot more effort than I spent on that first comment. I am totally OK with only some people decoding the message if it reduces what would have been a significant writing effort into a quick comment. Please correct me if I'm mistaken, but you seem to think that sending messages like this - messages that are easier to decode for some than others - harms the discourse. But couldn't that argument be made for any message that requires investment to understand?

For instance, LW contains many posts that assume a solid understanding of linear algebra, something that very few people (out of, say, all people who can read English and access the internet) have. To those unfamiliar with linear algebra, most of those LA-heavy posts are unintelligible. Should we avoid posting LA-heavy posts?

It's actually a little funny because thinking through this has made me realize that a math-heavy post creates a polarized response (those who understand the math enough to get something out of the post vs those who don't) in the same way that a political post does. And by your argument, referencing a framework that not everyone understands / agrees with / believes is useful also polarizes the response in a similar way.

Given that I see no problem with referencing math or disputed frameworks as a means of communicating with some people at the expense of losing others, this makes me much less confident that posts expected to produce politically polarized responses are problematic. If people unfamiliar with LA can skip LA-heavy posts, people with partisan biases/feelings can skip posts that touch on those.

Comment by sirjackholland on What the Haters Hate · 2018-10-03T21:27:26.754Z · LW · GW
For it to be right and proper to follow a rule, it is not necessary that all, or even most, or even anyone, of those who are subject to this rule, agree that the rule is justified.

Thanks for the clarification. I strongly disagree with this but have no interest or expectation that we'll resolve it talking here since it probably comes down to fundamental values concerning authority.

It goes like this:

Aha! Now I understand what you're saying. From your perspective, I criticized the post based on a norm that is not accepted as widely as those who use it seem to think it is. I agree this is bad because it lends undue weight to an argument - it comes across as "everyone agrees you should do X" when really it's "some people from some community agree you should do X", which is obviously less persuasive/relevant.

But I was not trying to make an argument from authority. Apparently I should be clearer in the future since that's how it came across. I was trying to make a purely logic-based (as in, not evidence-based or authority-based, etc.) argument and cited the SSC rule as a shortcut to what I was trying to explain. Rather than write a long post explaining exactly why I felt the post was unjustified given its expected impact, I thought that citing something most people here are familiar with would be an easier/faster explanation.

I meant my original point to read something like: I expect this post to be harmful to the discourse here for reasons most easily summarized by saying that it fails the SSC 2-out-of-3 rule. I was not suggesting that everyone does or should follow that rule.

As a loose analogy, it would be like bringing up a fake framework not because everyone does or should think through that lens but merely because the point being made is most easily expressed through that lens. It would be fine, and actually encouraged, for people who don't like that fake framework to translate the point into other frameworks.

Comment by sirjackholland on What the Haters Hate · 2018-10-03T20:29:07.556Z · LW · GW
And while there’s nothing wrong with an arbitrary or unjustified rule

Why is there nothing wrong with an unjustified rule? If a rule is unjustified, then surely there's something wrong with it? I don't think I understand what you're saying.

I agree that blindly following norms without ever considering their value is bad but that's not what I did. I brought up that norm because, having spent a little time thinking about it and evaluating its usefulness, I've concluded that it is useful. I think it's a great decomposition. You may disagree and I would like to hear your thoughts on it (if you're interested in sharing them) since you may have noticed a problem with it I haven't.

But I don't understand why, when I presented a template for why I thought a post was not good for LW, you concluded that my use of that template must mean that I'm bound by rules/norms that I'm unaware of. I'm definitely aware of the norm and the fact that it's a norm - that's why I brought it up!

Comment by sirjackholland on What the Haters Hate · 2018-10-03T20:14:53.265Z · LW · GW

I would say there are two central points of the article: one, the general/meta point that there is a cognitive pattern that leads people to incorrect conclusions about their outgroup, and two, that this explains Klein's response to Rubin in this particular scenario. I would agree that the first point is true in the sense that it's a plausible hypothesis that we should keep in mind when trying to understand ingroup/outgroup dynamics. I disagree that this is going on in the example you've provided - that part doesn't seem true to me.

My general point is that if you choose a controversial current event as your example, you will reliably polarize the response in a way that wouldn't happen if you chose almost any other kind of example.

Comment by sirjackholland on What the Haters Hate · 2018-10-03T16:10:04.768Z · LW · GW

Perhaps "rule" was the wrong word - I meant "norm" or "adage". To reiterate, I'm not suggesting any kind of moderation or censorship. Let me rephrase my point: if your post is clearly unkind and not something you think people absolutely need to know/discuss, then the only thing going for it is its truth value, something a large fraction of people will dispute if the topic prods a partisan imbroglio.

My point is not that SSC rules should bind people's behavior here - and certainly not if those rules are implicit! I totally agree with you there. What I'm saying is that, using what I think is a good decomposition of a post's value (the "rule" of three), posts like these will systematically produce little value that isn't conditional on partisan preferences.

If a large enough fraction of posts here have high value to tribe X but low value to tribe Y then tribe Y is going to find LW to hold less value and may not stick around. Worse, it might not be obvious why this is happening from X's perspective because to them, the quality of posts has remained high.

Comment by sirjackholland on What the Haters Hate · 2018-10-02T17:29:33.168Z · LW · GW
I don’t know how to put this delicately: this sentence is written from a position so deep up one’s own ass that a proctologist wouldn’t dare venture near.

This is a satisfying quip and also, as others have expressed, not something I'd like to see on LW. I'm not saying you shouldn't be allowed to post snark like this - I'll leave decisions like that to people who are more active in the community and have thought about the details more than I have - but I expect this kind of post does substantial harm to the discourse norms here.

But I also think my opposition to this kind of snark depends on how much I agree with it. Following the rule that a post should satisfy at least two of the three criteria {true, kind, relevant}, I see your comment above as neither true nor kind. The lack of kindness isn't really debatable, but clearly people disagree on its truth value. I think posting things that a substantial fraction of the community will deem as failing the two-out-of-three test is a bad idea if we want to avoid demon threads and related discourse disasters (although the civility in the comments I've read so far suggests that demon threads are not as inevitable for posts like these as I thought).

I don't want to get into a point-by-point discussion of everything I disagreed with in this post (because demon threads), but I would like to ask you one question: if you think Ezra is up his own ass claiming that social justice issues (race, gender, identity, etc.) are the most important dividing lines in 2018 US politics, then what do you think the most important ones are? Obviously there is not one topic that will cleanly divide everyone in the US into two categories, but if you wanted to partition the population into two clusters as cleanly as possible, what would that topic or set of topics be? Or do you think the set would have to be so large that it wouldn't be useful to even consider? Or, and I suspect this might be the case, do you think the question is not precise/coherent enough to have a meaningful answer?

Comment by sirjackholland on Expressive Vocabulary · 2018-05-28T21:03:20.605Z · LW · GW

I definitely agree with the general principle of this post and the "technology" example made the principle clear and useful to me, but something feels off about applying this principle to the "chemicals" example. I think it's because most of the time, when someone says that something has "chemicals", what they mean is that it contains ingredients that aren't "natural", which is a term I've always found very confusing. There are plenty of technically false dichotomies that are nevertheless useful approximations, e.g. I'm sure there are edge cases between deciduous and evergreen trees, but it's an obviously useful label when discussing whether or not you expect a tree to have leaves in the winter.

But I genuinely don't know what "natural" is supposed to (approximately) carve up, especially in the realm of foods. If you boil tea leaves, are the resulting compounds natural? If yes, then at what point do things become unnatural? If no, then is anything that's not raw and unprocessed unnatural, including e.g. cooked meat or boiled potatoes? There is clearly a spectrum between "raw and unprocessed" and "industrially engineered" but I don't see any reasonable place to draw the line. And this makes the word "natural" in the context of foods too vague to be useful - every time someone uses it, you have to ask a series of followup questions to figure out where they (arbitrarily) draw the line.

And so I think a reasonable followup to "this food contains chemicals" is "virtually everything is chemicals - what do you mean it contains chemicals?" This is essentially a less snarky version of your "no" example, but I don't think it's stripping from someone a word we don't have an optimal replacement for - it's stripping from someone a word that is not clear enough to have any substantial meaning without further clarification. That is, it's stripping from someone a word that wastes everyone else's time.

This makes me want to slightly amend your rule: "Thou shalt not strike a term from others' expressive vocabulary without suitable replacement unless the term invariably requires thee to ask for a clarifying definition." - someone who gets really annoyed when people assume their arbitrary threshold in (un)natural-space is everyone else's.