Posts

The essay "Interstellar Communication Using Microbes: Implications for SETI" has implications for The Great Filter. 2017-12-22T06:05:22.671Z

Comments

Comment by MakerOfErrors on That Magical Click · 2019-09-02T15:36:48.342Z · LW · GW

Although, Noesis is just the original Greek-philosophy name for That Magical Click, not an explanation in and of itself. At least, not any more than "dark matter" or "phlogiston" are.

However, it seems like if anyone has figured out what actually is in that magic click, Noesis is the magic search term to find that gem of knowledge in the vast ocean of information. It's a Schelling Point for people to discuss possible answers, so if anyone has found an answer, they or someone learning it from them would introduce it to the discussions using that term.

If those discussions are the sort that are especially interested in truth as an end unto itself rather than as a useful tool for winning arguments, then I'd expect the answer to spread broadly and float to the top of things like the Nous Wikipedia article.

Comment by MakerOfErrors on That Magical Click · 2019-09-02T15:23:39.144Z · LW · GW
Is that really just it?  Is there no special sanity to add, but only ordinary madness to take away?  Where do superclickers come from - are they just born lacking a whole lot of distractions?
What the hell is in that click?

Noesis.

https://en.wikipedia.org/wiki/Nous

Comment by MakerOfErrors on Why the tails come apart · 2018-12-01T16:35:42.271Z · LW · GW

This post has been a core part of how I think about Goodhart's Law. However, when I went to search for it just now, I couldn't find it, because I was using Goodhart's Law as a search term, but it doesn't appear anywhere in the text or in the comments.

So, I thought I'd mention the connection, to make this post easier for my future self and others to find. Also, other forms of this include:

Maybe it would be useful to map out as many of the forms of Goodhart's Law as possible, Turchin style.

Comment by MakerOfErrors on Adult Neurogenesis – A Pointed Review · 2018-04-27T23:26:07.765Z · LW · GW

Sorry; normally I try not to make claims like that without a citation, but I was on my phone at the time and couldn't find the source easily. But here it is:

https://jamanetwork.com/journals/jamapsychiatry/fullarticle/210112

It's a twin study with 5952 participants. Here's the highlight:

In genetically identical twin pairs, the twin who exercised more did not display fewer anxious and depressive symptoms than the co-twin who exercised less. Longitudinal analyses showed that increases in exercise participation did not predict decreases in anxious and depressive symptoms.
Conclusion: Regular exercise is associated with reduced anxious and depressive symptoms in the population at large, but the association is not because of causal effects of exercise.

Maybe everyone's mood still goes up a little from exercise, due to endorphins or whatever? Like, I assume that people with depression can still experience runner's high, just like I'm pretty sure they can still experience a heroin high. Maybe it's numbed or less intense or something, I dunno. But neither is going to cure their depression. Or, at least that's my interpretation. (Maybe a permanent heroin high would count as a cure, if it somehow didn't kill you?)

For whatever reason, they display about the same levels of depressive symptoms, regardless of exercise. But, I assume that those symptoms are somewhat independent of moment-to-moment mood, or how you feel about something in particular. So, it seems perfectly possible for the mood effects of exercise to be real, without conflicting with the study.

Personally, I don't think exercise itself has much effect on mood, aside from runner's high, which seems well-documented. Playing a game or sport definitely can, if you let yourself get really into it, but I think that's mostly independent of the physical exertion. But, all I have to back up this particular impression is personal subjective experience, and most of that has been doing fun things that also happened to involve physical activity.

Comment by MakerOfErrors on Adult Neurogenesis – A Pointed Review · 2018-04-27T17:38:53.430Z · LW · GW

I’m sure I’ve made some inexcusable mistakes somewhere in the process of writing this.

Found it. :P (Well, kind of.)

And if exercise has antidepressant effects in humans, then the claim that those effects are neurogenesis-mediated must be wrong too.

Apparently exercise correlates with less depression, but isn't causal. That is, depressed people tend to exercise less, but exercising more doesn't cause you to be less depressed.

Unrelated tangent thought: I'd really like to know if the huge correlation with lifespan/healthspan has the same issue. Like, I'm pretty sure VO2 max is the metric we should be optimizing for, rather than a target weight or muscle mass. Like, once you control for exercise, weight is no longer a strong predictor of health/lifespan.

But maybe exercise has the same problem. If most people have a hard time doing calorie restriction, but can up their metabolism through exercise, then the only benefit of exercise might be avoiding a caloric surplus. Or maybe exercise isn't causal at all, but the sorts of people who exercise also do other things that help, or are just healthy enough to be able to exercise.

Comment by MakerOfErrors on April Fools: Announcing: Karma 2.0 · 2018-04-01T18:43:32.082Z · LW · GW

I'm loving this new Karma system!

Metaculus (a community prediction market for tech/science/transhumanist things) has a similar feature, where comments from people with higher prediction rankings have progressively more golden usernames. The end result is that you can quickly pick out the signal from the noise, and good info floats to the top while misinformation and unbalanced rhetoric sink.

But, karma is more than just a measure of how useful info is. It's also a measure of social standing. So, while I applaud the effort it took to implement this and don't want to discourage such contributions, I'd personally suggest tweaking it to avoid trying to do 2 things at once.

Maybe let users link to Metaculus/PredictionBook/prediction market accounts, and color their usernames based on Brier score?
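(For reference, the Brier score is just the mean squared error between probability forecasts and the 0/1 outcomes, with lower being better. A minimal sketch of my own, with made-up numbers:)

    def brier_score(forecasts, outcomes):
        # mean squared error of probability forecasts against 0/1 outcomes
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    print(brier_score([0.9, 0.2, 0.7], [1, 0, 0]))  # ~0.18; 0.0 would be perfect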

Then, to handle the social side of things, make the font size of their posts and/or usernames scale with social standing. Maybe make a list of people from highest to lowest on the ladder? You could ask users to judge each other anonymously, or ideally use machine learning to detect submissive gestures and whether or not displays of dominance are recognized by other commenters.

As the power of AI improves exponentially, and learns to detect ever more subtle social cues, the social ranking would become more and more accurate! Eventually, it would be able to tell you your precise social standing, to within ±1 person, and predict exactly what concrete advice to follow if you want to get in front of the person ahead of you. You'd know their name, personality, knowledge-base, etc, and could see exactly what they were doing right that you were doing wrong. It would solve social awkwardness, by removing all the ambiguity and feelings of crippling uncertainty around how we're supposed to be acting!

Comment by MakerOfErrors on Are you the rider or the elephant? · 2018-02-25T04:40:45.297Z · LW · GW
Have you seen Richard_Kennaway's comment on the circling thread which compares talking with NVC folks to talking with chatbots?

Went digging, and found it here:

https://www.lesserwrong.com/posts/aFyWFwGWBsP5DZbHF/circling#cgtM3SRHyFwbzBa56

Comment by MakerOfErrors on Circling · 2018-02-25T02:49:34.494Z · LW · GW

The "fish sell" link isn't working - it just takes me to the top of the circling post.

Also, when I search for "fish sell" on Lesser Wrong, I get a result under "comments" of CronoDAS saying:

The "fish sell" link isn't working - it just takes me to the top of the Circling post.

And that link, itself, just takes me to the top of the circling post. And weirdly, I don't see that comment here anywhere. Is this an error on the website, rather than the way the link was formatted? Like, is it not possible to link to comments yet? I'll poke around a little, but I'm not all that hopeful, since that's a shot in the dark.

Comment by MakerOfErrors on Tune Your Cognitive Strategies · 2018-02-16T00:39:55.054Z · LW · GW

TL;DR: The core concept is this:

<quote>

Your brain already has the ability to update its cognitive strategies (this is called "meta-cognitive reinforcement learning"). However, the usual mechanism works with unnecessary levels of indirection, as in:

  • Cognitive strategy -> Thought -> Action -> Reward or punishment
    • You get rewarded or punished for what you do (as measured by your brain's chemical responses). Good thoughts are more likely to be followed by good actions. Good cognitive strategies are more likely to generate good thoughts. On average, your brain will slowly update its cognitive strategies in the right direction.
  • Cognitive strategy -> Thought -> Reward or punishment
    • You have learned to be happy or unhappy about having certain ideas, even when you don't yet know how they apply to the real world. Now your brain gets rewarded or punished for thoughts, and on average good thoughts are more likely to be generated by good cognitive strategies. Your brain can update cognitive strategies faster, according to heuristics about what makes ideas "good".

However, by carefully looking at the "deltas" between conscious thoughts, we can get rid of the last remaining level of indirection (this is the key insight of this whole page!):

  • Cognitive strategy -> Reward or punishment
    • You have learned to perceive your cognitive strategies as they happen, and developed some heuristics that tell you whether they are good or bad. Now your brain can update cognitive strategies immediately, and do it regardless of the topic of your thoughts.
    • Even when you generate a useless idea from another useless idea, you can still track whether the cognitive strategy behind it was sound, and learn from the experience.

</quote>

(It doesn't look like it's possible to quote bullet points, especially not nested bullet points, and I didn't want to remove more than one layer of bullets because I thought they made the whole thing more clear.)
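To make the quoted hierarchy concrete, here's a toy simulation of my own (not from the linked post; all the numbers are made up). A learner reinforces one of two strategies with a 0-or-1 reward, and each level of indirection between strategy and reward randomly garbles the signal. The fewer the levels, the cleaner the signal, and the faster the same update rule separates the good strategy from the bad one:

    import random

    def run(levels_of_indirection, trials=2000, noise_per_level=0.15, seed=0):
        rng = random.Random(seed)
        value = {"good": 0.0, "bad": 0.0}   # learned value of each strategy
        true_p = {"good": 0.7, "bad": 0.3}  # true quality of each strategy
        for _ in range(trials):
            strategy = rng.choice(["good", "bad"])
            reward = 1.0 if rng.random() < true_p[strategy] else 0.0
            # each intermediate level (thought, action, ...) garbles the signal
            for _ in range(levels_of_indirection):
                if rng.random() < noise_per_level:
                    reward = 1.0 - reward
            value[strategy] += 0.05 * (reward - value[strategy])
        return value["good"] - value["bad"]  # separation achieved by the learner

    for levels in (2, 1, 0):  # reward at the action, thought, strategy level
        print(levels, round(run(levels), 3))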

The rest of the linked post is mostly about how to actually go about implementing this. (And, I feel like that probably deserves a book and regular practice, rather than just a short blog post. So, if you want to notice and learn better cognitive strategies, reading the full thing is well worth the time investment.)

Comment by MakerOfErrors on The different types (not sizes!) of infinity · 2018-02-02T02:30:49.746Z · LW · GW

I've been keeping 330 browser tabs open with the intention of getting back to each and every one of them some day. And finally one of those tabs comes in handy! This just proves that I should never close anything.

This is a video explaining the distinctions between cardinals and ordinals. This post may be useful in letting people know that there are different types of infinities, but it does nothing toward actually explaining them. There are probably other good resources available online for those who want to know, but that video is the only one I've ever seen. (Wikipedia is hopeless here.)

Comment by MakerOfErrors on Adequacy as Levels of Play · 2018-01-28T17:34:44.976Z · LW · GW

I completely agree with seriousness and aliveness, but think competitiveness is only applicable in extremely narrow, well-defined circumstances like some games, and that these circumstances aren't present in the real world. Sports are an edge case, since the boundaries are artificial, but not as abstract as the rules of, say, chess, and so have real-world gray areas which are exploitable.

So I would argue that, most of the time, competitiveness leads to a much, much lower level of play in individuals, not higher. I see several routes to this:

  1. Goodhart's Law can attack, and people can start trying to game the metric and play the system to "win" at the expense of making real hard-to-measure progress, and the field suffers greatly.
  2. People under pressure perform much worse on the candle problem. Offering cash prizes or making deadline threats can only motivate well-defined, mechanical thinking; beyond that, all they do is draw attention to the desired problems.

#2 seems to be the bigger contributor, from my perspective.

(As a point of clarification: I fully agree that if you make sports a multi-billion dollar industry, you will attract the interest of enough people capable of ignoring their incentives and just exploring the problem philosophically. However, my point is that for an individual already focused on a particular problem, a deadline or cash prize is the last thing they need. It has the distracting effect of narrowly focusing the mind. (Also, more generally, these sorts of phenomena are known as the overjustification effect, which is why any system built on incentives alone is doomed to failure.) )

This (#2) is likely to sound highly surprising, so let me expand the model a bit: The actual frame of mind you want to be in to make real, novel breakthroughs is one of detached, unhurried curiosity. The world is usually way too eager for tangible, measurable results, and so is way, way too far toward the "exploit" side of the "explore/exploit" trade-off (formally, the multi-armed bandit problem). Goodhart's Law strikes again.
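To gesture at the formal version, here's a minimal epsilon-greedy bandit sketch of my own (the arm payouts are made up). With epsilon at zero the agent pure-exploits and can get stuck on a mediocre arm forever; raising epsilon buys exploration at the cost of deliberately playing worse arms some of the time:

    import random

    def epsilon_greedy(true_means, epsilon=0.1, trials=5000, seed=0):
        rng = random.Random(seed)
        counts = [0] * len(true_means)
        estimates = [0.0] * len(true_means)
        total = 0.0
        for _ in range(trials):
            if rng.random() < epsilon:                    # explore
                arm = rng.randrange(len(true_means))
            else:                                         # exploit
                arm = max(range(len(true_means)), key=lambda i: estimates[i])
            reward = 1.0 if rng.random() < true_means[arm] else 0.0
            counts[arm] += 1
            estimates[arm] += (reward - estimates[arm]) / counts[arm]
            total += reward
        return total / trials

    for eps in (0.0, 0.1, 0.5):
        print(eps, round(epsilon_greedy([0.2, 0.5, 0.8], epsilon=eps), 3))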

Examples (of #2) to try and triangulate meaning:

If you actually want to win at a competitive sport, the way to do it is NOT to practice or compete. Everyone's already doing that, so it's an efficient market. The way to win is to look for loopholes which haven't been exploited yet.

Example 1: Moneyball fundamentally changed the way people thought about sports, by playing a fundamentally different game with a different goal (optimizing for return on investment, not winning/prestige/warm fuzzies).

So, off the top of my head, you might take a close look at which sorts of transhumanist enhancements might not be illegal yet. Can you stretch leg bones, create new drugs that aren't banned yet, (example 2) or recruit from untapped demographics which have some unique trait? Let's explore that avenue of the transhumanist approach.

I happen to randomly recall that ADHD meds stunt growth, so that might help make smaller gymnasts who are still old enough to participate in the Olympics. (Example 3) (I also happen to vaguely recall that China lied about the age of their gymnasts to get younger ones into the Olympics, because smaller is apparently better for gymnastics.) So, since no one is going to ban a drug which a kid legitimately needs for non-sports reasons, ADHD meds sound promising. There are presumably health and moral reasons why kids are often taken off of ADHD drugs for the summer, to allow them to grow, but if you're a true creative Machiavellian focused solely on winning, you could probably put them in a school system without a summer break or something.

Meta discussion of these examples:

Note that all we've done here is randomly mash up transhumanism + sports, and see what relevant knowledge our brains already have which might be useful. Having 2 things I randomly recall suggested one approach, but we've hardly scratched the surface of the search space. In order to generate a few thousand new, better approaches, we might read more about the extreme features (height/weight/body proportions/endurance/hormones/etc.) of different athletes. (I happen to also randomly recall that some Tour de France champion had testicular cancer and ridiculously high testosterone levels or something. Similarly, the swimmer Michael Phelps is bizarrely proportioned, with short legs and long arms.) Combining that with looking up whether it might be possible to induce such features should be fertile ground for ideas.

But, even that approach is extremely narrow. We could broaden it to looking for the key limiting factors in various sports, physical or not, and then researching how to overcome them. Or better yet, spend a lot of time thinking about which search spaces are excluded even by that broader methodology, and then search in them. The entire point is not to get sucked into one concrete path until you are reasonably sure you've exhausted all the search space reachable by human-like minds, and are at least starting your implementation in the right ballpark. You should still expect to pivot to different sports or approaches though, even halfway through implementation. Discard sunk costs quickly.

Back to gesturing at #2:

Some of this sort of creative problem solving may occur in sports from time to time. I don't follow sportsball closely enough to guess exactly how inadequate/low-level-of-play sports is generally. But, the mere fact that people are bothering to waste time playing the object-level game, rather than concentrating all their efforts on the meta-game, isn't a sign of competence. That's probably for the best though, since any competent arms-race in just about anything is just a paperclip maximizer built out of human institutions instead of AI.

If there's something important that you actually want to make substantive contributions to, though, then the trick is to figure out how to revolutionize the entire field, rather than wasting time making incremental improvements. That means thinking outside the box, and outside the "outside the box" box. Douglas Hofstadter calls this JOOTSing, for "Jumping Out Of The System". This is entirely distinct from merely repeating the innovation of a previous jump.

Saying “…so I can clearly not choose the wine in front of [me/you]...” over and over again isn't actually going any additional meta-levels deep, which is why Yudkowsky's Law of Ultrafinite Recursion applies to such situations. To win such a battle of wits, you have to completely upend the previous level of play, such as by building up an immunity to iocaine powder.

And such creative solutions require precisely the opposite mindset from the hyper-focused blinders which narrow attention to nothing but the object-level task at hand, to the exclusion of everything else. The odds of the ideal solution actually lying in such a tiny search space are infinitesimal.

Comment by MakerOfErrors on Demon Threads · 2018-01-13T18:07:19.807Z · LW · GW
if the government decides to increase the tax on gasoline to "fight Global Warming" this will impact the status of a lot of people.

That's an indirect impact, which I don't think is a plausible motivator. Like, it's a tragedy of the commons, because each individual would be better off letting others jump in to defend their side, and free-riding off their efforts. It may feel like the real reason we jump into demon threads, but I think that's a post-hoc rationalization, because we don't actually feel twice as strong an impulse when the odds of changing someone's mind are twice as high.

So, if it's a tragedy of the commons, evolution wouldn't have given us an impulse to jump into such arguments. If it did, our impulses would be to convince the other side rather than attack them, since that's what benefits us the most through this indirect route. So, gaining direct benefits from the argument itself, by signaling tribal affiliations and repelling status hits, seems more plausible to me.

A discussion on agricultural subsidies might have a much larger indirect impact on an individual than a discussion on climate change, especially because it's discussed so much less often. But talking isn't about information.

Comment by MakerOfErrors on Demon Threads · 2018-01-13T17:44:47.796Z · LW · GW
I can easily see people furiously arguing about any of those, I doubt there is much variation between them.

My prediction is that almost no discussion that starts about whether Donald Trump is 1.88m tall should turn into a demon thread, unless someone first changes the topic to something else.

Similarly, the details of climate change itself should start fewer object-level arguments. I would first expect to see a transition to (admittedly closely related) topics like climate change deniers and/or gullible liberals. Sure, people may then pull out the charts and links on the object-level issue, but the subtext is then "...and therefore the outgroup are idiots/the ingroup isn't dumb".

We could test this by seeing whether strict and immediate moderator action prevents demon threads if it's done as soon as discussion drifts into inherently-about-status topics. I think if so, we could safely discuss status-adjacent topics without anywhere near as many incidents. (Although I don't actually think there's much value in such discussions most of the time, so I wouldn't advocate for a rule change to allow them.)

Comment by MakerOfErrors on Demon Threads · 2018-01-12T02:33:10.760Z · LW · GW

Good point. I dunno, maybe almost everything really is about status. But some things seem to have a much stronger influence on status than others, and some are perceived as much larger threats than others, regardless of whether those perceptions are accurate outside of our evolutionary environment.

Even if everything has a nonzero status component, so long as there is variation we'd need a theory to explain the various sources of that variation. I was trying to gesture at situations where the status loss was large (high severity) and would inevitably happen to at least one side (large scope, relative to audience size).

Change My View (the source I thought might make a good proxy for LW with politics) has a list of common topics. I think they span both the scope and severity range.

  • Abortion/Legal Parental Surrender: Small scope, high severity. If discussed in the abstract, I think mostly only people who've had abortions are likely to lose status if they let stand a statement implying that they made a bad decision. If the discussion branches out to body autonomy, though, this would be a threat to anyone who might be prevented from having one, or who has tribal members who would be prevented.
  • Climate Change: Low scope, low severity. Maybe some climatologists will always lose status by letting false statements stand, but most other people's status is about as divorced from the topic as it's possible to be. Maybe there's a tiny status hit from the inference that you're a bad person if you're not helping, which motivates a defensive desire to deny it's really a problem. But both the scope of people with status ties and the severity of status losses are about zero.
  • Donald Trump: If I say "Donald Trump is 1.88m tall" no one loses any status, so that topic is low-scope, low-severity. That's defining him as a topic overly narrowly, though. There certainly are a surprisingly large number of extremely inflammatory topics immediately adjacent. The topic of whether he's doing a good job will inevitably be a status hit for either people who voted for him or against him, since at least one side had to have made a poor decision. But not everyone votes, so the scope is maybe medium sized. The severity depends on the magnitude of the particular criticism/praise.
  • Feminism: Negative judgments of feminism make feminists look bad, but I don't really know what fraction of people identify as feminist, so I don't quite know how to rate the scope. Probably medium-ish? Again, severity depends on the strength of the criticism. And of course specific feminist issues may have status implications for a different fraction of people in the discussion.

I could continue for the rest of the common topics on the list, but I think I'm repeating myself. I'm having a hard time selecting words to precisely define the exact concepts I'm pointing at though, so maybe more examples would help triangulate meaning?

Comment by MakerOfErrors on Demon Threads · 2018-01-10T05:44:08.490Z · LW · GW

Maybe this is discussed in one of the linked articles (I haven't read them). But interestingly, the following examples of demon topics all have one thing in common:

Latent underlying disagreements about how to think properly... or ideal social norms... or which coalitions should be highest status... or pure, simple you're insulting me and I'm angry

While it's possible to discuss most things without also making status implications, it's not possible with these issues. Like, even when discussing IQ, race, or gender, it's usually possible to signal that you aren't making a status attack, and just discuss the object-level thing. But with the quoted items, the object-level is about status.

If one method of thinking empirically works better, others work worse, and so the facts themselves are a status challenge, and so every technicality must be hashed out as thoroughly as possible to minimize the loss of status. If some social norm is ideal, then others aren't, and so you must rally your tribe to explain all the benefits of the social norm under attack. Same with which coalition should have highest status.

You could move borderline topics like IQ into that category by discussing a specific person's IQ, or by making generalizations about people with a certain IQ without simultaneously signaling that there are many exceptions and that IQ is only really meaningful for discussing broad trends.

Random musings:

I wonder if most/all demon topics are inherently about status hierarchies? Like, within a single group, what percent of the variation in whether a thread turns demonic is explained by how much status is intrinsically tied to the original topic?

It would be interesting to rate a bunch of post titles on /r/Change My View (or somewhere similar without LW's ban on politics) by intrinsic status importance, and then correlate that with the number of comments deleted by the mods, based on the logs. The second part could be scripted, but the first bit would require someone to manually rate everything 0 if it wasn't intrinsically about status, or 1 if it was. Or better yet, get a representative sample of people to answer how much status they and their tribes would lose, from 0 to 1, if the post went unanswered.

I'd bet that a good chunk of the variance in the number of deleted comments could be attributed to intrinsically status-relevant topics. A bunch more would be due to topics which were simply adjacent to these intrinsically status-changing topics. Maybe if you marked enough comments that drifted onto these topics, you could build a machine-learning system to predict the probability of a discussion going there based on the starting topic? That would give you a measure of inferential distance between topics which are intrinsically about status and those adjacent to them.
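Here's a minimal sketch of that measurement (everything here is a hypothetical placeholder, not real data): rate each thread's intrinsic status relevance from 0 to 1, count its mod-deleted comments, and square the Pearson correlation to get the fraction of variance explained:

    import math

    status_ratings = [0.0, 0.1, 0.8, 0.9, 0.3, 1.0]  # made-up per-thread ratings
    deleted_counts = [0, 1, 7, 9, 2, 12]             # made-up mod-log counts

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    r = pearson(status_ratings, deleted_counts)
    print(r, r ** 2)  # r squared = fraction of variance "explained"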

A big chunk is actually due to the people involved being inflammatory, but my current impression after ~5min of thought is that more than half of demon threads in more intellectual communities either start on topics which are intrinsically about status, or are adjacent to such topics but discussion wanders into that minefield.

I'll keep an eye out for demon threads, and look for counterexamples. If true though, then there's a fairly clear line for mods to follow. That'd be a huge step up from the current state of the art in Moderationology. (Messy subjective guesswork and personal untested hypotheses.)

Comment by MakerOfErrors on The Copernican Revolution from the Inside · 2017-11-11T07:29:08.882Z · LW · GW

Note: I wrote most of this, and then sat on it for a couple days. I'm commenting here just to get it out there, because I think the approach is a good one, but I haven't proofread it or tweaked the phrasing to make it clearer. Hopefully I'll come back to it soon, though.

1. If you lived in the time of the Copernican revolution, would you have accepted heliocentrism?

No, absolutely not. I think this is roughly how we should have reasoned:

The best models of physics say that earthly objects are inherently center-seeking. It’s the nature of rocks and people and such to move toward the center. That’s the simplest explanation.
Now, celestial objects don’t have this property, which is why they are found so far from the center. The mechanisms that govern their motion are a mystery, but the models which best fit the data are not heliocentric.
Sure, you could separate the ideas of “center” and “what attracts objects”. There’s no *a priori* reason they should coincide. And, Tycho Brahe’s combined geoheliocentric theory does just this. It places the sun at the center of the rotations of the planets, and the earth at the center of the rotation of the moon and the sun.
But, this only changes our interpretation of the celestial world, not the earthly world. And, our knowledge there is much less intimate than our practical, day-to-day knowledge of the physical laws that govern earthly objects. So, rocks are still drawn to their respective puller when thrown, and the sorts of objects that don’t fall and aren’t bound by this pull rotate around whatever it is they rotate around, sun or earth.
But, we know the moon orbits earth, so it is just a question of whether it’s simpler to have everything else also orbit us, but with complex epicycles, or to say that everything but the moon orbits the sun.
But, this second approach still requires the introduction of epicycles, and so is strictly more complex. So, in all likelihood, the earth is the center of all things.

I think this logic is correct and sound, at least until Newton. We should have noticed we were confused after Galileo. He shattered the illusion that celestial objects were of a fundamentally different nature than earthly objects. Before that, earthly objects were rough and oddly shaped, while celestial objects were all perfectly round, or infinitely small points of light.

Celestial objects glowed, for god’s sake, and nonstop, in a way that we could only reproduce temporarily with fire. Conservation of energy clearly didn’t apply to them, especially because they moved constantly in mysterious unceasing patterns. Earthly objects are subject to friction, and even the fastest moving bullet eventually succumbs to gravity. The proper and natural thing to do is to classify them as fundamentally different.

2. How should you develop intellectually, in order to become the kind of person who would have accepted heliocentrism during the Copernican revolution?

I think the proper lesson here is NOT epistemic humility. We shouldn’t retain high degrees of model uncertainty forever, and agonize over whether we’re missing something that fuzzy, spiritual, mystical insight might catch.

Mysticism happened to get the right answer in this case, but not because of anything intrinsic to mysticism. Instead, I think we can pull out the property that made it work, and leave the rest. (Pull out the baby, pitch the bathwater.)

But first, let’s look at our model for model uncertainty. Bayes’ Theorem would have us update from our priors to some ideal probability estimate, hopefully >99.9%, or <0.1%, if we can dig up enough data. Usually, we only pay attention to the p, but the amount of total evidence collected is also a decent measure of the progress from priors to truth.

Another measure I like even better is how large you expect future updates to be. Maybe I’m 20% sure of my best hypothesis, and I expect to update by +/- about 5% based on some experiment which I can’t do yet. The relative ratio of these 2 percentages is telling, because it tells you how much variability is left in your model. (Or, more precisely but using different units, maybe you give odds 10:1 in favor of something, but still expect to update by a factor of 100 after the next experiment, in one direction or another.)

By conservation of expected evidence, you can’t know in which *direction* that update will be. (Or if you can, then you should already have updated on *that* knowledge.) But, you can at least get a feel for the size of the update, and compare it to the probability of your current model.
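A quick numerical check of conservation of expected evidence (standard Bayes; the probabilities are made up): you can compute the expected size of the update ahead of time, but the probability-weighted average of the possible posteriors has to land right back on the prior.

    prior = 0.20       # P(H): made-up prior on the hypothesis
    p_e_h = 0.90       # P(E | H): made-up likelihoods
    p_e_not_h = 0.30   # P(E | ~H)

    p_e = prior * p_e_h + (1 - prior) * p_e_not_h   # P(E)
    post_e = prior * p_e_h / p_e                    # P(H | E),  ~0.43
    post_not_e = prior * (1 - p_e_h) / (1 - p_e)    # P(H | ~E), ~0.03

    # the posterior, averaged over what you expect to see, equals the prior:
    print(p_e * post_e + (1 - p_e) * post_not_e)    # 0.20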

So, you start out with uncountably many priors, all of which have only a tiny chance of being true. Then, as more and more evidence comes in, some hypotheses go past the 1% threshold, and you have a humanly manageable number, some of which are more probable than others. But, these shouldn't add up to 100%. Most of your probability mass should still be on unknown unknowns. And really, most of your models should only be thought of as rough outlines, rather than formal definitions.

I think this is where Copernicus should have considered himself to be. He had bad reasons for trying to come up with variants of the current best models. But, that’s exactly what he should have been doing, regardless of the reasons. And, note that, despite getting quite close, he was still wrong. The sun is not the center of the universe, or even the galaxy. It’s just the center of the solar system. Ish. Really, there’s some point that’s the center of mass of everything in the solar system, and if I recall it’s sometimes actually technically outside the sun. The sun and everything else just orbit that point.

So, you can only really expect to put non-negligible probability on models in this state of understanding when you include a bunch of weasel words, and phrase things as broadly as possible. Instead of “The earth and all the celestial objects but the moon rotate around the sun”, append this with “or the majority of them do, or they approximately do but some second-order correction terms are needed.”

And even then, it’s probably still not quite right. In this sense, we’re probably wrong about just about everything science claims to know, with probability nearly 1. But, I think we’re homing in on the truth asymptotically. Even if we never quite get to anything that’s 100% right, we can get arbitrarily close. So, is everything we know a lie, then? Should we put near-zero probability on everything, since we probably haven’t added enough weasel words to capture every possible subtlety we may have missed?

Isaac Asimov wrote a fantastic description of this problem, which he summed up this way:

John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

It would be nice to have some absolute measure of this similarity in terms of Kolmogorov complexity or something. Like, a circle and an ellipse are quite similar mathematically, and there are probably plenty of ways to quantify how far a circle is from an ellipse. So, it seems like it should be possible to quantify how similar any 2 arbitrary mathematical models are. Maybe in terms of how different their predictions are, or how similar their mathematical structure is? I dunno.

But, I’m not aware of any generalizations of circle/ellipse differences to all possible computations. How far is 1+1 from the Pythagorean theorem? I dunno, but I think modeling something as a circle when it’s really closer to an ellipse (ignoring orbital perturbations from nearby planets) is a lot closer than 1+1 is to the Pythagorean theorem. And, I think that modeling everything as revolving around the sun is significantly closer to reality than modeling everything as orbiting the earth. It’d be interesting to be able to quantify exactly how much closer, though.
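For what it's worth, here's one crude way to put a number on the circle-vs-ellipse case (my own sketch; generalizing it to arbitrary computations is exactly the open problem above). Compare the two models' predictions, here the radial distance at evenly spaced angles, and take the root-mean-square difference:

    import math

    def rms_model_distance(a, b, samples=360):
        # circle of radius a vs. ellipse with semi-axes a and b:
        # RMS difference between the radii the two models predict
        total = 0.0
        for i in range(samples):
            theta = 2 * math.pi * i / samples
            r_circle = a
            r_ellipse = (a * b) / math.sqrt((b * math.cos(theta)) ** 2
                                            + (a * math.sin(theta)) ** 2)
            total += (r_circle - r_ellipse) ** 2
        return math.sqrt(total / samples)

    print(rms_model_distance(1.0, 0.95))  # small number: the models nearly agree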

Comment by MakerOfErrors on Does Age Bring Wisdom? · 2017-11-09T05:36:11.889Z · LW · GW

Forgetting arguments but remembering conclusions may be part of this. Same with already being vaccinated against a wide range of memes. Also, Dunning-Kruger, as we either forget more than we realize but still think we're an expert, or as the state of the art progresses far beyond where it was the last time we looked. Also, just acquiring more random knowledge makes it easier to offer counterarguments to anything we don't want to change our mind about, or even create fully-general counterarguments.

If "wisdom" really is the result of something like declining brain function due to NMDA receptor decline, though, maybe anti-aging drugs will help? One argument against biological immortality is that "science advances one funeral at a time". Same applies culturally. Imagine if all the catholics from 1500 were still alive and voting. No religious tolerance, no enlightenment, no abolition of slavery, and certainly no gender equality or LGBTQ rights.

But, if resistance to such ideas is mainly due to NMDA receptor decline, or decreasing neural plasticity, or hormonal shifts or whatever, then that's fantastic news. It means we might naturally sidestep the whole ethical conundrum of weighing all social progress against genocide-by-inaction from not curing aging. (And, ending social/philosophical/scientific progress certainly counts as an X-risk.)

No need to limit the voting age to below 130, or quarantine billions of centenarians in internment camps for millennia, or keep them off the internet or whatever memespaces they might poison, or whatever other dystopias society decides are less bad than the genocide-by-aging dystopia and the end of progress.

Comment by MakerOfErrors on Moloch's Toolbox (2/2) · 2017-11-08T13:44:08.236Z · LW · GW

Thanks. The Overton Window stuff was mainly about why First Past The Post might be stuck in metaphorical molasses, and I hadn't generalized the concept to other things yet.

Side note: this also gives an interesting glimpse into what it feels like from the inside to have one's conceptual framework become more interconnected. Tools and mental models can exist happily side by side without interacting, even while explicitly wondering about a gap in one's model that could be filled by another tool/model you already know.

It takes some activation energy (in the form of Actually Trying, i.e. thinking about it and only it for 5+ minutes by the clock), and then maybe you'll get lucky enough to try the right couple pieces in the right geometry, and get a model that makes sense on reflection.

This suggests that re-reading the book later might be high-value, since it would help increase the cross-linking in my Bayesian net or whatever it is our brains think with.

Comment by MakerOfErrors on Moloch's Toolbox (2/2) · 2017-11-08T07:56:42.580Z · LW · GW

Two things: 1) A medium-sized correction, and 2) a clarification of something that wasn't clear to me at first.

1) The correction (more of an expansion of a model to include a second-order effect) is on this bit:

simplicio:  Ah, I’ve heard of this. It’s called a Keynesian beauty contest, where everyone tries to pick the contestant they expect everyone else to pick. A parable illustrating the massive, pointless circularity of the paper game called the stock market, where there’s no objective except to buy the pieces of paper you’ll think other people will want to buy.
cecie:  No, there are real returns on stocks—usually in the forms of buybacks and acquisitions, nowadays, since dividends are tax-disadvantaged. If the stock market has the nature of a self-fulfilling prophecy, it’s only to the extent that high stock prices directly benefit companies, by letting the company get more capital or issue bonds at lower interest. If not for the direct effect that stock prices had on company welfare, it wouldn’t matter at all to a 10-year investor what other investors believe today. If stock prices had zero effect on company welfare, you’d be happy to buy the stock that nobody else believed in, and wait for that company to have real revenues and retained assets that everyone else could see 10 years later.
simplicio: But nobody invests on a 10-year horizon! Even pension companies invest to manage the pension manager’s bonus this year!
visitor: Surely the recursive argument is obvious? If most managers invest with 1-year lookahead, a smarter manager can make a profit in 1 year by investing with a 2-year lookahead, and can continue to extract value until there’s no predictable change from 2-year prices to 1-year prices.

It’s hard to see how 10-year time horizons could be common enough to overcome the self-fulfilling-prophecy effect in the entire stock market, when a third of all companies will be gone or taken over in 5 years. :p

We can edit the model to account for this in a couple ways. But this depends on whether investors are killing companies, and what the die-off rate is for larger, Fortune 500 companies. As I understand it, the big companies mostly aren't optimizing for long-term survival, but there are a few that are. I'd expect most to be optimizing for expected revenue at the risk of gambler's ruin, especially because the government subsidizes risk-taking by paying debts after bankruptcy.

(I'm not necessarily against bankruptcy though, as I understand that it makes companies much more willing to do business with each other, since they know they'll get paid.)

I don't know the details, but I'd lean a lot further toward the full Simplicio position than the chapter above does. That is, markets really are at least partly a Keynesian beauty contest / self-fulfilling prophecy.
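For concreteness, here's a toy rendering of the visitor's recursive-lookahead argument (my own sketch, with made-up numbers): a manager with an n-year horizon values a stock at this year's payout plus the discounted resale price that an (n-1)-horizon market will set next year. Long horizons push the price toward the fundamental, full discounted-payout value; short horizons leave a gap, and that gap is where the beauty-contest dynamics live:

    def price(horizon, payouts, discount=0.95):
        # value under an n-year lookahead: this year's payout plus the
        # discounted resale price set by an (n-1)-lookahead market
        if horizon == 0 or not payouts:
            return 0.0
        return payouts[0] + discount * price(horizon - 1, payouts[1:], discount)

    payouts = [1.0] * 10  # a company paying $1/year for 10 years, then gone
    for h in (1, 2, 5, 10):
        print(h, round(price(h, payouts), 2))  # 1.0, 1.95, 4.52, 8.03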

2) Also, this section was confusing for me at first:

When the Red politicians do something that Red-haters really dislike, that gives the Blue politicians more leeway to do additional things that Red-haters mildly dislike, which can give the Red politicians more leeway of their own, and so the whole thing slides sideways.
simplicio: Looking at the abstract of that Abramowitz and Webster paper, isn’t one of their major findings that this type of hate-based polarization has increased a great deal over the last twenty years?
cecie: Well, yes. I don’t claim to know exactly why that happened, but I suspect the Internet had something to do with it.
In the US, the current two parties froze into place in the early twentieth century—before then, there was sometimes turnover (or threatened turnover). I suspect that the spread of radio broadcasting had something to do with the freeze. If you imagine a country in the pre-telegraph days, then it might be possible for third-party candidates to take hold in one state, then in nearby states, and so a global change starts from a local nucleus. A national radio system makes politics less local.

Let me make sure I understand, by stating the model explicitly: before effective communication, we were polarized locally but not so much nationally. You might have green and purple tribes in one state, and orange and yellow tribes in another. Now, as society is less regional, all these micro-tribalisms are aligning. I’m envisioning this as a magnetic field orienting thousands of tiny regions on a floppy disk, flipping them from randomized to aligned.

visitor: Maybe it’s naive of me… but I can’t help but think… that surely there must be some breaking point in this system you describe, of voting for the less bad of two awful people, where the candidates just get worse and worse over time.

Ah, I understand what you were getting at with the leeway model now. To state the axioms it's built from explicitly: coalitions form mainly based on a common outgroup. (Look at the Robbers Cave experiment. Look at studies showing that shared dislikes build friendships faster than shared interests.)

So, if we vote mainly based on popularity contests involving which politician appears to have the highest tribal overlap with us, rather than policy, then the best way for politicians to signal that they are in our ingroup is to be offensive to our outgroup. That’s what leads to a signaling dynamic where politicians just get more and more hateful.

It’s not clear what the opposing forces are. There must be something, or we’d instantly race to the bottom over just a couple election cycles. Maybe politicians are just slow to adopt such a repulsive strategy? Maybe, as Eliezer suggests, it’s that politicians used to have to avoid repelling lots of tiny local factions all at once, but are now free to be maximally repulsive to one of two single, fairly unified outgroups?

Comment by MakerOfErrors on Moloch's Toolbox (2/2) · 2017-11-08T07:18:06.155Z · LW · GW
Telling people that American politics is as messy as it is because of formal arguments about first-past-the-post voting is similar to explaining the way highways get built with formal mathematical formulas about traffic density.

Related data:

A couple years ago, there were a bunch of sensational headlines along the lines of "US is an Oligarchy, not a Democracy, New Study Finds". The actual study is an interesting read: Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens
It correlated the average preferences of average citizens, economic elites, and special interest groups (divided into "mass public interest groups" and "business interest groups"). Interestingly, Table 2 shows that economic elite preferences actually correlate with average citizen preferences at .78. Also, business interest groups correlate with mass public interest group preferences at -0.05, but all interest groups overall at 0.96, so I guess the vast majority of interest groups are business interest groups? Aside from "all interest groups", business interests seem to correlate negatively with everything else, although not as negatively as I would have thought.
They then looked at the probability of various legislation passing, given that various categories were for or against it on average. They focus on cases where, for example, economic elites favor something but average citizens are against it, and compare it to actual policy outcomes passing or failing. They build several models for power and influence, and do a bunch of statistics I haven't read the details of to try to figure out how much influence various groups have on actual policy outcomes.
Eyeballing figure 1, it looks like if 10% of average Americans support something, it has maybe a 30% chance of passing, but if 90% of average citizens support something, it has maybe a 31% chance of passing. Conversely, if 10% of economic elites favor a piece of legislation, it has only a 10% chance of passing, but if 90% of them favor it, it has a 60% chance of passing.
Both of those graphs were fairly linear, and had a good spread, with lots of legislation with both high support and high opposition. They also tried to graph the net number of interest groups in support of or opposition to legislation, vs. the probability of it passing. However, in almost all cases, there were fewer than 4 more either supporting or opposing each bill, so all their data points are clustered in the center. I'm not sure I believe their model projections outside of that narrow range, but if you believe that weird-shaped s-curve, then more net interest groups in support of a bill does make it much more likely to pass.


Take all this with a grain of salt, though. There have been a couple of follow-up studies which walked this back substantially. As Vox puts it, "When the rich and middle class disagree, each wins about half the time":

That leaves only 185 bills on which the rich and the middle class disagree, and even there the disagreements are small. On average, the groups' opinion gaps on the 185 bills is 10.9 percentage points; so, say, 45 percent of the middle class might support a bill while 55.9 percent of the rich support it.
Bashir and Branham/Soroka/Wlezien find that on these 185 bills, the rich got their preferred outcome 53 percent of the time and the middle class got what they wanted 47 percent of the time. The difference between the two is not statistically significant.

And also:

Bashir also notes that the Gilens and Page model explains very little. Its R-squared value is a measly 0.074. That is, 7.4 percent of variation in policy outcomes is determined by the measured views of the rich, the poor, and interest groups put together. So even if the rich control the bulk of that (and Bashir argues they do not), the absolute amount of sway over policy that represents is quite limited indeed.

That makes you wonder: If only 7.4% of the variance is due to these groups, then what on earth is determining policy??!!

Moloch the incomprehensible prison! Moloch the crossbone soulless jailhouse and Congress of sorrows! Moloch whose buildings are judgment! Moloch the vast stone of war! Moloch the stunned governments!

Right. Obvious answer is obvious.

Or maybe the middle class, since they are conspicuously absent from "rich, the poor, and interest groups put together"? I think "rich" is still being defined as top 10% here, and "poor" as bottom 10%.

Or, some interest groups are much more effective than others, so their preferences will be the main driver of everything, but only weakly correlated with the average interest group preference. The same could be true of certain influential elites, if their opinions diverged from those of the rest of the elites. But, that's a wild guess on my part. This whole thing is a mess I haven't even begun to sort out in my head.

But it's probably Moloch.

Comment by MakerOfErrors on Moloch's Toolbox (1/2) · 2017-11-06T04:23:53.535Z · LW · GW

Practical, actionable ideas:

Seriously, if bookmarking and remembering a tab can change your life expectancy by years, that's one hell of a cost/benefit ratio. I'd put absurdly high odds on it being worth picking out a default hospital beforehand, so you don't have to make a split-second decision in an emergency. Like, you just never see low-hanging fruit with cost/benefit ratios that high in day-to-day life.

Like, maybe print off the info for a couple choices, and magnet them to the fridge with the closest one on top and the best one underneath? Add closest address to your GPS, and drive the route once so you know that you know it? I'm pretty sure this could be optimized further with more thought.

Also, I can't find it now, but I think I recall Robin Hanson commenting on proximity of home to the nearest hospital being a major determinant of life expectancy, along with things like rural living. I think someone was musing about whether property values were any higher near hospitals, but I don't remember if they got an answer or what it was.

Comment by MakerOfErrors on Moloch's Toolbox (1/2) · 2017-11-06T03:07:56.394Z · LW · GW

It just occurred to me that your link is likely to be an Incredibly Important(tm) tool for the weird sort of person who might actually be interested in themselves/friends/family not dying during a procedure.

(As opposed to just being interested in signaling how caring we are, or seeking medicine to feel cared for.)

Comment by MakerOfErrors on Moloch's Toolbox (1/2) · 2017-11-05T10:03:36.447Z · LW · GW

Nice catch!

Googling the term brought me to the Vendor Lock-In Wikipedia page, but no page just for "lock in", even though that's a common term for this sort of thing. However, the "see also" section mentions Path Dependence, which mentions the Bandwagon Effect, which is the perfect term for the Craigslist phenomenon.

These aren't all quite the same thing, but they all seem related. They all highlight different aspects or special cases of similar phenomena.

Comment by MakerOfErrors on Moloch's Toolbox (1/2) · 2017-11-05T09:19:19.376Z · LW · GW

More generally, could we fight all problems of this class by claiming to believe in Moloch, as a vengeful god? Then ask for religious exemption from all coordination problems where exemption is legally possible.

Why not organize a religion to spite its god, rather than worship it?

EDIT: The concrete benefits could come from a single commandment to defy Moloch whenever possible. All shoes must be velcro and exempt from dress codes, doctors must be specialists when available, and you can sue for religious discrimination if someone makes hiring decisions based on Ivy League attendance over autodidacts or community college graduates, etc.

I know the IRS's definition of a religion is deliberately fuzzy, and allows anyone with a "sincerely held belief" to call their thing a religion. If other religious exemptions are similarly open, then it would be a hilarious way to fight Moloch. After all, coordination problems are a real, empirically verifiable thing, which economists at least sincerely believe in.

If the Flying Spaghetti Monster is considered a valid belief in some legal contexts, then so should Moloch. And, personifications of natural forces were some of the first historical gods, so there's a precedent. Egregores may not be physical things, but believing in the processes they are names for should qualify if people believing in separate "Non-Overlapping Magisteria" qualifies.

There's a big gap between "should work" and "works in practice", though. Anyone know how big this particular gap is?

Comment by MakerOfErrors on Moloch's Toolbox (1/2) · 2017-11-05T08:39:40.908Z · LW · GW

Two things: an expansion on the "employers optimizing for IQ" model, and a defense of regulations as critical tools for *solving* coordination problems.

suppose that there’s a magical tower that only people with IQs of at least 100 and some amount of conscientiousness can enter, and this magical tower slices four years off your lifespan. The natural next thing that happens is that employers start to prefer prospective employees who have proved they can enter the tower.

I think most companies are sufficiently broken that they aren't even capable of optimizing for the metrics which would earn them the most money. (Before anyone cries “EMH”, it’s a Principal-Agent Problem, and probably unexploitable unless you unilaterally control a whole company. :p)

For example, you'd think that law firms would be as ruthlessly motivated by money as anyone. They should always use the best available metrics to hire the most cost-effective lawyers. But, as Robin Hanson points out, they ignore track records and hire based on fuzzy personal impressions, even when this leads to demonstrably worse outcomes. And this pattern repeats with media pundits, teachers, etc. Why? What on earth are we actually optimizing for, if not expected revenue generation by the new hire?

Robin's answer is that hiring managers are optimizing for looking good to their bosses, and to powerful elites more generally. In short, hiring managers are optimizing for prestige in the eyes of everyone whose opinion they care about, not money. (At least, not money for the firm.) Maybe they do care about IQ to the degree that it gives them prestige, but not as much as you would expect from assuming they're maximizing expected profits.

We like to see ourselves as egalitarian, resisting any overt dominance by our supposed betters. But in fact, unconsciously, we have elites and we bow to them. We give lip service to rebelling against them, and they pretend to be beaten back. But in fact we constantly watch out for any actions of ours that might seem to threaten elites, and we avoid them like the plague. Which explains our instinctive aversion to objective metrics in people choice, when such metrics compete with elite advice.

Ok, onto my second point:

I absolutely love the extended Tower metaphor for College burning 4 years of life and money on runaway signaling competitions. (Although it’s never stated explicitly, it screams it once the thought strikes you, and footnote 6 corroborates this. Not sure if footnote 5 was supposed to include a link to the original SSC comment, so I can’t verify that the tower metaphor started as a college metaphor.) At least, I loved everything but this:

simplicio: I agree that trying to build a cheaper Tower Two is solving the wrong problem. The interior of Tower One boasts some truly exquisite architecture and decor. It just makes sense that someone should pay a lot to allow people entry to Tower One. What we really need is for the government to subsidize the entry fees on Tower One, so that more people can fit inside.

That's a bit of an unfair straw-man of tuition subsidies, conditional on the college metaphor being intentional. If we wanted to read people bashing the outgroup, we'd all go to /r/atheism. Like, I got a strong "childish kicking someone when they can't kick back" reaction, which I found hard to put aside to read the rest. It's thinly veiled, so not all liberals will realize they’re being kicked, but that doesn't help much.

Obligatory defense which I don’t want to give but feel obligated to anyway: tuition subsidies are mostly for community colleges, or are only large enough to cover just what community college would have cost if the student decides to burn the cash on the added prestige of another school. No one is suggesting everyone attend Harvard, or hand out Harvard-sized subsidies.

And, college does offer some real education along with the signaling. The art degrees are a form of countersignaling by the upper class, but if you look at the degrees that lower class people get, I would bet that they skew much more toward trade schools and more practical information. (Also, observation-selection effect may be incredibly strong here.) So, tuition assistance programs likely do far more good than harm. Sure, it would help more if some of it wasn't going to zero-sum signaling competitions, but the subsidies themselves are hardly as big a driver as they would be in the Tower model.

</end obligatory defense of defenseless punching bag.>

And, more generally, I see this as a symptom of a larger problem. I Agree Denotationally But Object Connotationally with a lot of the libertarian undercurrents.

Maybe future chapters will feature more of the Dilbert-scale inadequacies in industry, like Robin Hanson's point on law firms. But I'm worried not, based on quips like this:

visitor: Do they not have markets on your planet? Because on my planet, when you manufacture your product in a crazy, elaborate, expensive way that produces an inferior product, someone else will come along and rationalize the process and take away your customers.

Although, while scanning through looking for that quote, I noticed the Craigslist thing, which removed about half my worry. The remainder would be solved by explicitly stating that privatizing and/or deregulating everything won't magically make everything better, as they appear to have on the Visitor’s planet.

(And, for those who need evidence for that claim:

1) Cost Disease is just as bad in for-profit hospitals as in not-for-profit ones.

2) SSC on Rehab clinics:

They’re minimally regulated. There’s no credentialing process or anything. There are many different kinds, each privately led, and low entry costs to creating a new one. They can be very profitable – pretty much any rehab will cost thousands of dollars, and the big-name ones cost much more. This should be a perfect setup for a hundred different models blooming, experimenting, and then selecting for excellence as consumers drift towards the most effective centers. Instead, we get rampant abuse, charlatanry, and uselessness.
On the other hand, when the government rode in on a white horse to try to fix things, all they did was take the one effective treatment [which the rehab clinics also strongly discourage], regulate it practically out of existence, then ride right back out again. So I would be ashamed to be taking either the market’s or the state’s side here.

So, it looks to me like any given regulation has about 1:1 odds of either hurting or helping. I would love to have a density map of inadequate equilibria by things like different industries, regions, scientific/academic disciplines, sections of government, types of organization (publicly traded, privately owned, government owned, charity, church, co-op, hobby, club, sport, and anything else humans do in groups), size of organization, age of organization, organizational structure, etc. Not as a way to choose designs, since honestly most human institutions need to be redesigned from the ground up using game theory, but as a map of what needs fixing.)

So, using all these politicized examples is probably a Bad Idea. This is difficult to avoid, given the subject matter, but your usual habit of using historical examples whenever making inherently political points was a good one. Maybe that could be done for the most controversial ones? (Doing it for all would leave the reader thinking "yeah, but is that even still a problem today?")

And, I especially don't want a “yay-markets! Boo-government!” message to dissuade EAs from interacting with politicians or trying to push for legislation. It can be extremely influential to pull sideways in policy tug-of-war. Both parties may be pulling in opposite directions on some issues, and have no spare time (read: no free energy in the system) to invest in researching possible policies on an orthogonal axis. If you do all that for them though, and just hand them a Pareto-improvement, then both sides will likely support it, or at least not oppose it bitterly with every fiber of their being, like with partisan issues.

After all, what politician wants to be the guy to kill an important-sounding thing with no downsides? They might support the regulation or whatever it is, just to avoid the bad press.

Government's main benefit to humanity is the ability to unilaterally solve coordination problems. It can unilaterally discourage pollutants, or build roads, or regulate monopolistic utilities. Sure, things like unions can fight monopolies too, but nothing else has quite the versatility of government. It's not the only tool we have in the fight against Moloch, but it's the biggest single one. Maybe Moloch is corrupting it too, but it hasn't fully lost that battle yet.