Posts

How can I efficiently read all the Dath Ilan worldbuilding? 2024-02-06T16:52:32.558Z
What are the best Siderea posts? 2023-12-19T23:07:59.027Z
What did you change your mind about in the last year? 2023-11-23T20:53:45.664Z
What will you think about the Current Thing in a year? 2023-11-20T22:39:37.630Z
[Linkpost] 7 Swedish Words to Import 2022-03-19T02:36:44.328Z
Some ideas for interacting with reporters 2022-02-14T00:38:42.740Z
Five Missing Moods 2021-12-16T01:25:09.409Z
[Linkpost] Cat Couplings 2021-12-09T01:41:11.646Z
[linkpost] Why Going to the Doctor Sucks (WaitButWhy) 2021-11-23T03:02:47.428Z
[linkpost] Crypto Cities 2021-11-12T21:26:28.959Z
[linkpost] Fantasia for Two Voices 2021-10-13T02:55:21.775Z
my new shortsight reminder 2021-10-11T20:06:30.678Z
[ACX Linkpost] Too Good to Check: A Play in Three Acts 2021-10-05T05:04:40.837Z
[linkpost] Vitalik Buterin on Nathan Schneider on the limits of cryptoeconomics 2021-10-02T19:11:06.108Z
[Linkpost] Partial Derivatives and Partial Narratives 2021-09-13T21:02:49.285Z
[linkpost] Political Capital Flow Management and the Importance of Yutting 2021-09-10T07:27:23.009Z
[ACX Linkpost] Highlights From The Comments On Missing School 2021-08-29T08:01:59.534Z
Predictions about the state of crypto in ten years 2021-08-08T16:18:13.941Z
Optimism about Social Technology 2021-06-27T23:35:31.174Z
What is the biggest crypto news of the past year? 2021-05-22T02:01:49.040Z
[ACX Linkpost] A Modest Proposal for Republicans 2021-04-30T18:43:17.252Z
[ACX Linkpost] Prospectus on Próspera 2021-04-15T22:48:00.545Z
Auctioning Off the Top Slot in Your Reading List 2021-04-14T07:11:07.881Z
Speculations Concerning the First Free-ish Prediction Market 2021-03-31T03:20:48.379Z
Some Complaint-Action Gaps 2021-03-29T21:15:50.012Z
Predictions for future dispositions toward Twitter 2021-03-14T22:10:17.720Z
The Puce Tribe 2021-02-28T21:11:05.778Z
some random parenting ideas 2021-02-13T15:53:43.855Z
How would free prediction markets have altered the pandemic? 2021-02-09T10:55:43.987Z
Against Sam Harris's personal claim of attentional agency 2021-01-30T09:08:45.145Z
Change My View: Incumbent religions still get too much leeway 2021-01-07T19:44:45.208Z
A dozen habits that work for me 2021-01-06T22:52:37.776Z
Pre-Hindsight Prompt: Why did 2021 NOT bring a return to normalcy? 2020-12-06T17:35:00.409Z
In Addition to Ragebait and Doomscrolling 2020-12-03T18:26:18.602Z
mike_hawke's Shortform 2020-11-29T19:57:57.415Z

Comments

Comment by mike_hawke on mike_hawke's Shortform · 2024-03-29T22:08:21.614Z · LW · GW

Would I somehow feel this problem less acutely if I had never been taught Fahrenheit, Celsius, or Kelvin, and instead been told everything in terms of gigabytes per nanojoule? I guess probably not. Inconvenient conversions are not preventing me from figuring out the relations and benchmarks I'm interested in.

Comment by mike_hawke on From the outside, American schooling is weird · 2024-03-29T21:32:52.883Z · LW · GW

It's important to remember, though, that I will be fine if I so choose. After all, if the scary impression was the real thing then it would appear scary to everyone.  

 

Reading this makes me feel some concern. I think it should be seriously asked: Would you be fine if you hypothetically chose to take a gap year or drop out? Those didn't feel like realistic options for me when I was in high school and college, and I think this ended up making me much less fine than I would have been otherwise. Notably, a high proportion of my close friends in college ended up dropping out or having major academic problems, despite being the smartest and most curious people I could find.

My experiences during and after college seemed to make a lot more sense after hearing about ideas like credential inflation, surplus elites, and the signaling model. It seems plausible that I might have made better decisions if I had been encouraged to contemplate those ideas as a high schooler.

Comment by mike_hawke on mike_hawke's Shortform · 2024-03-29T18:35:37.462Z · LW · GW

In measuring and communicating about the temperature of objects, humans can clearly and unambiguously benchmark things like daily highs and lows, fevers, snow, space heaters, refrigerators, a cup of tea, and the wind chill factor. We can place thermometers and thereby say which things are hotter than others, and by how much. Daily highs can overlap with fevers, but neither can boil your tea.
 

But then I challenge myself to estimate how hot a campfire is, and I'm totally stuck.

It feels like there are no human-sensible relationships once you're talking about campfires, self-cleaning ovens, welding torches, incandescent filaments, fighter jet exhaust, solar flares, Venus, Chernobyl reactor #4, the anger of the volcano goddess Pele, fresh fulgurites, or the boiling point of lead. Anything hotter than boiling water has ascended into the magisterium of the Divinely Hot, and nothing more detailed can be said of it by a mortal. If I were omnipotent, omniscient, & invulnerable, then I could put all those things in contact with each other and then watch which way the heat flows. But I am a human, so all I can say is that anything on that list could boil water.

Comment by mike_hawke on Vernor Vinge, who coined the term "Technological Singularity", dies at 79 · 2024-03-25T19:32:42.271Z · LW · GW

Presumably he understood the value proposition of cryonics and declined it, right?

Comment by mike_hawke on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2024-03-23T19:45:13.627Z · LW · GW

If everyone in town magically receives the same speedup in their "verbal footwork", is that good for meta-honesty? I would like some kind of story explaining why it wouldn't be neutral.

Point for yes: 
Sure seems like being able to quickly think up an appropriately nonspecific reference class when being questioned about a specific hypothetical does not make it harder for anyone else to do the same.

Point against: 

The code of literal truth only lets people navigate anything like ordinary social reality to the extent that they are very fast on their verbal feet, and can respond to the question "How are you?" by saying "Getting along" instead of "Horribly" or with an awkward silence while they try to think of something technically true.

This particular case seems anti-inductive and prone to the euphemism treadmill. Indeed, one person one time can navigate ordinary social reality by saying "Getting along" instead of giving an awkward silence; but many people doing so many times will find that it tends to work less well over time. If everyone magically becomes faster on their verbal feet, they can all run faster on the treadmill, but this isn't necessarily good for meta-honesty.

Implications: either cognitive enhancement becomes even more of a moral priority, or adhering to meta-honesty becomes a trustworthy signal of being more intelligent than those who don't. Neither outcome seems terrible to me, nor even all that much different from the status quo.

Comment by mike_hawke on How do you feel about LessWrong these days? [Open feedback thread] · 2024-03-19T17:43:43.815Z · LW · GW

One concrete complaint I have is that I feel a strong incentive toward timeliness, at the cost of timelessness. Commenting on a fresh, new post tends to get engagement. Commenting on something from more than two weeks ago will often get none, which makes effortful comments feel wasted.

I definitely feel like there is A Conversation, or A Discourse, and I'm either participating in it during the same week as everyone else, or I'm just talking to myself.

(Aside: I have a live hypothesis that this is tightly related to The Twitterization of Everything.)

Comment by mike_hawke on Social Class · 2024-03-17T15:25:53.139Z · LW · GW

Glad to see some discussion of social class.

Here's something in the post that I would object to:

Non-essential weirdnesses, on the other hand, should be eliminated as much as possible because pushing lifestyle choices onto disinterested working-class people is a misuse of class privilege. Because classes are hierarchical in nature, this is especially important for middle-upper class people to keep in mind. An example of non-essential weirdness is “only having vegan options for dinner”.


This example seems wrong to me. It seems like serving non-vegan options does in fact risk doing a great injustice (to the animals eaten). I tried and failed to think of an example that seemed correct, so now I'm feeling pretty unconvinced by the entire concept. 

One contrary idea might be that class norms and lifestyle choices are usually load-bearing, often in ways that are deliberately obscured or otherwise non-obvious. Therefore, one may want to be cautious when labeling something a non-essential weirdness. 

(Also maybe worth mentioning that I think class phenomena are in general anti-inductive and much harder to reach broad conclusions about than other domains.)

Comment by mike_hawke on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2024-03-16T03:23:28.032Z · LW · GW
  • Most people, even most unusually honest people, wander about their lives in a fog of internal distortions of reality. Repeatedly asking yourself of every sentence you say aloud to another person, "Is this statement actually and literally true?", helps you build a skill for navigating out of your internal smog of not-quite-truths. For that is our mastery.

I think some people who read this post ought to reverse this advice. The advice I would give to those people is: if you're constantly forcing every little claim you make through a literalism filter, you might end up multiplying disfluencies and generally raising the cost of communicating with you. Maybe put a clause limit on your sentences and just tack on a generic hedge like "or something" if you need to.

Comment by mike_hawke on 'Empiricism!' as Anti-Epistemology · 2024-03-15T00:47:10.810Z · LW · GW

Only praise yourself as taking 'the outside view' if (1) there's only one defensible choice of reference class;

 

I think this point is underrated. The word "the" in "the outside view" is sometimes doing too much work, and it is often better to appeal to an outside view, or multiple outside views.

Comment by mike_hawke on My Clients, The Liars · 2024-03-14T17:52:02.319Z · LW · GW

What do you think the internal experience of these liars is like? I could believe that some of them have gotten a lot of practice with fooling themselves in order to fool others, in settings where doing so is adaptive. Do you think they would get different polygraph results than the believer in the invisible dragon hypothetically would?

Comment by mike_hawke on Counting arguments provide no evidence for AI doom · 2024-03-12T23:49:56.934Z · LW · GW

Damn, woops.

My comment was false (and strident; worst combo). I accept the strong downvote, and I will now try to make a correction.

I said:

I spent a bunch of time wondering how you could put 99.9% on no AI ever doing anything that might be well-described as scheming for any reason.


What I meant to say was:

I spent a bunch of time wondering how you could put 99.9% on no AI ever doing anything that might be well-described as scheming for any reason, even if you stipulate that it must happen spontaneously.

And now you have also commented:

Well, I have <0.1% on spontaneous scheming, period. I suspect Nora is similar and just misspoke in that comment.

So....I challenge you to list a handful of other claims that you have similar credence in. Special Relativity? P!=NP? Major changes in our understanding of morality or intelligence or mammal psychology? China pulls ahead in AI development? Scaling runs out of steam and gives way to other approaches like mind uploading? Major betrayal against you by a beloved family member?
The OP simply says "future AI systems" without specifying anything about these systems, their paradigm, or what offworld colony they may or may not be developed on. Just...all AI systems henceforth forever. Meaning that no AI creators will ever accidentally recapitulate the scheming that is already observed in nature...? That's such a grand, sweeping claim. If you really think it's true, I just don't understand your worldview. If you've already explained why somewhere, I hope someone will link me to it.

Comment by mike_hawke on mike_hawke's Shortform · 2024-03-10T15:21:53.883Z · LW · GW

Foregone mutually beneficial trades sometimes provide value in the form of plausible deniability. 

If a subculture started trying to remove barriers to trade, for example by popularizing cheerful prices, this might have the downside of making plausible deniability more expensive. On net that might be good or bad (or weird), but either way I think it's an underrated effect (because I think the prevalence and load-bearing functions of plausible deniability are themselves underrated). People have prospects and opportunity costs, often largely comprising things that are more comfortable to leave unsaid.

(Continuing from this comment.)

Comment by mike_hawke on Counting arguments provide no evidence for AI doom · 2024-03-06T01:11:30.909Z · LW · GW

EDIT: This is wrong. See descendent comments.

 

I spent a bunch of time wondering how you could put 99.9% on no AI ever doing anything that might be well-described as scheming for any reason. I was going to challenge you to list a handful of other claims that you had similar credence in, until I searched the comments for "0.1%" and found this one. 

I'm annoyed at this, and I request that you prominently edit the OP.

Comment by mike_hawke on Counting arguments provide no evidence for AI doom · 2024-03-05T07:40:47.459Z · LW · GW

I followed this exchange up until here and now I'm lost. Could you elaborate or paraphrase?

Comment by mike_hawke on If you weren't such an idiot... · 2024-03-04T22:11:20.768Z · LW · GW

I will push against.

I feel unhappy with this post, and not just because it called me an idiot. I think epithets and thoughtless dismissals are cheap and oversupplied. Patience and understanding are costly and undersupplied.

A lot of the seemingly easy wins in Mark's list were not so easy for me. Becoming more patient helped me a lot, whereas internal vitriol made things worse. I benefited hugely from Mr. Money Mustache, but I think I was slower to implement his recommendations because he kept calling me an idiot and literally telling me to punch myself in the face.

If a bunch of people get enduring benefits from adopting the "such an idiot" frame, then maybe I'll change my mind. (They do have to be enduring though.)

 

Here is a meme I would be much happier to see spread: 

You, yes you, might be able to permanently lower the cost of exercise to yourself if you spend a few days' worth of discretionary resources on sampling the sports in Mark Xu's list. But if you do that and it doesn't work, then ok, maybe you really are one of the metabolically underprivileged, and I hope you figure out some alternative.

Side notes:

  • It seems like this post is in tension with Beware Other Optimizing. And perhaps also a bit with Do Life Hacks Ever Reach Fixation? Not exactly, because Mark's list mostly relies on well-established life upgrades. But insofar as there is a tension here, I will tend to take the side of those two posts.
  • Perhaps this is a needless derail, and if so I won't press it, but I'm feeling some intense curiosity over whether Mark Xu and Critch would agree about whether Critch at all qualifies as an idiot. According to Raemon, Critch recently said, "There aren't things lying around in my life that bother me because I always notice and deal with it."
  • I find something both cliche and fatalistic about the notion that lots of seemingly maladaptive behaviors are secretly rational. But indeed I have had to update quite a few times in that direction over the years since I first started reading LessWrong.
Comment by mike_hawke on Increasing IQ is trivial · 2024-03-04T19:50:14.026Z · LW · GW

Thirteen points?! If I could get results like that, it would be even better than the CFAR handbook, which merely doubled my IQ.

Comment by mike_hawke on A dozen habits that work for me · 2024-03-03T04:39:42.840Z · LW · GW

I made this comment about Raemon's habit update. Here is my own habit update.

  1. Still going great.
  2. Started using a bidet instead. Heard these were bad for the plumbing anyway.
  3. Still going strong with these. I do have an addiction to slack and discord, which is less bad but still problematic.
  4. I fell off of both of these. The list seemed good so I'm rebooting it today. The controversy burning was good too, but mentally/emotionally taxing, so I'm not gonna restart it without some deliberate budgeting first.
  5. I fell off of this. It was easier to stay away from tempting snacks when I didn't work at an office full of them.
  6. Yup, still doing this. It's just good.
  7. I kept this habit until I replaced it with life coach sessions which are on net much more helpful.
  8. Still going strong with this one.
  9. Yeah, so low that I quit entirely.
  10. Still technically true, but I'm doing this less than once a week now.
  11. Yup, except that I memorized my laundry list so I don't need that one anymore.
  12. Yes, still using these.
Comment by mike_hawke on Rationality Research Report: Towards 10x OODA Looping? · 2024-03-03T01:20:07.185Z · LW · GW

They also attempt to generate principles to follow from, well, first principles, and see how many they correctly identify. 

Second principles?

========

I'm really glad to see you quoting Three Levels. Seems important.

Comment by mike_hawke on Sunset at Noon · 2024-02-29T21:31:34.851Z · LW · GW

I am compelled to express my disappointment that this comment was not posted more prominently. 

Habit formation is important and underrated, and I see a lot of triumphant claims from a lot of people but I don't actually see a lot of results that persuade me to change my habituation procedure. I myself have some successful years-old habits and I got them by a different process than what you've described. In particular, I skip twice all the time and it doesn't kill my longterm momentum.

And I hope you'll forgive the harshness if I harken back to point #4 of this comment.

Comment by mike_hawke on Your Cheerful Price · 2024-02-29T10:09:23.706Z · LW · GW

Q:  Wait, does that mean that if I give you a Cheerful Price, I'm obligated to accept the same price again in the future?

No, because there may be aversive qualities of a task, or fun qualities of a task, that scale upward or downward with repeating that task.  So the price that makes your inner voices feel cheerful about doing something once, is not necessarily the same price that makes you feel cheerful about doing it twenty times.

I feel like this needs a caveat about plausible deniability. Sometimes the price goes up or down for reasons that I don't want to make too obvious. Like if it turns out you have bad breath, or if my opportunity cost involves mingling with attractive people, or if you behaved badly yesterday and our peer group has wordlessly coordinated to lightly socially embargo you for a week and I don't want to be seen violating that. Anticipating some complication like that (consciously or not), I might want to hedge my initial price, or if that's mentally taxing, just weasel out of giving the cheerful price at all. 

This is maybe all accounted for when you say that cheerful prices may not work for someone if Tell culture doesn't work for them. I think plausible deniability tends to be pretty important though, even among nerds who virtue signal otherwise.

Comment by mike_hawke on Rationality Research Report: Towards 10x OODA Looping? · 2024-02-27T01:10:45.839Z · LW · GW

If I'm building my own training and tests, there's always the risk of ending up "teaching to the test", even if unintentionally. I think it'd be cool if other people were working on "Holdout Questions From Holdout Domains", that I don't know anything about, so that it's possible to test if my programs actually output people who are better-than-baseline (controlling for IQ).


I am hoarding at least one or two fun facts that I have seen smart rationalists get wrong. Specifically, a claim was made, I asked, "huh, really?", they doubled down, and then later I looked it up and found out that they were significantly wrong. Unfortunately I think that if I had read the book first and started the conversation with it in mind, I might not have discovered that they were confidently incorrect. Likewise, I think it would be hard to replicate this in a test setting.

Comment by mike_hawke on mike_hawke's Shortform · 2024-02-22T20:25:24.749Z · LW · GW

Here are some thoughts about numeracy as compared to literacy. There is a tl;dr at the end.


The US supposedly has a 95% literacy rate or higher. A 14yo English speaker in the US is almost always an English reader as well, and will not need much help interpreting an “out of service” sign or a table of business hours or a “Vote for Me” billboard. In fact, most people will instantaneously understand the message, without conscious effort--no need to look at individual letters and punctuation, nor any need to slowly sound it out. You just look, scan, and interpret a sentence in one automatic action. (If anyone knows a good comparison of the bitrates of written sentences vs pictograms, please share.)


I think there is an analogy here with numeracy, and I think there is some depth to the analogy. I think there is a possible world in which a randomly selected 14yo would instantly, automatically have a sense of magnitude when seeing or hearing about almost anything in the physical world--no need to look up benchmark quantities or slowly compute products and quotients. Most importantly, there would be many more false and misleading claims that would (instantly, involuntarily!) trigger a confused squint from them. You could still mislead them about the cost per watt of the cool new sustainability technology, or the crime rate in some distant city. But not too much more than you could mislead them about tangible things like the weight of their pets or the cost per calorie of their lunch or the specs of their devices. You could only squeeze so many OoMs of credibility out of them before they squint in confusion and ask you to give some supporting details. 


Automatic, generalized, quantitative sensitivity of this sort is rare even among college graduates. It’s a little better among STEM graduates, but still not good. I think adulthood is too late to gain this automaticity, the same way it is too late to gain the automatic, unconscious literacy that elementary school kids get.
We grow up hearing stories about medieval castle life that are highly sanitized, idealized, and frankly, modernized, so that we will enjoy hearing them at all. And we like to imagine ourselves in the shoes of knights and royalty, usually not the shoes of serfs. That’s all well and good as far as light-hearted fiction goes, but I think it leads us to systematically underestimate not only the violence and squalor of those conditions, but, less obviously, the low mobility and general constraint of illiteracy. I wonder what it would be like to visit a place with very low literacy (and perhaps where the few existing signs are written in an unfamiliar alphabet). I bet it would be really disorienting. Everything you learn would be propaganda and motivated hearsay, and you would have to automatically assume much worse faith than in places where information flows quickly and cheaply. Potato prices are much lower two days south? Well, who claimed that to me, how did they hear it, and what incentives might they have to say it to me? Unfortunately there are no advertisements or PSAs for me to check against. Well, I’m probably not going to make that trip south without some firmer authority. I can imagine this information environment having a lot in common with the schoolyard. 

My point is that it seems easy to erroneously take for granted the dynamics of a 95% literate society, and that things suddenly seem very different even after only a minute of deliberate imagination. It is that size of difference that I think might be possible between our world and an imaginary place where 8-year-olds are trained to become as fluent in simple quantities as they are in written English.

 

Tl;dr: I think widespread literacy and especially widespread fluency is a modern miracle. I think people don't realize what a total lack of numerical fluency there is. I'm not generally fluent in numbers--in general, you can suggest absurd quantities to me and I will not automatically notice the absurdity in the way I will automatically laugh at a sentence construction error on a billboard.

(Unless, of course, the misspelling or grammatical error on the billboard is on purpose, as in some advertising.)
Comment by mike_hawke on Raising children on the eve of AI · 2024-02-19T02:52:32.292Z · LW · GW

To me it feels pretty clear that if someone will have a reasonably happy life, it’s better for them to live and have their life cut short than to never be born.

I agree with this conditional, but I question whether the condition (bolded) is a safe assumption. For example, if you could go back in time to survey all of the hibakusha and their children, I wonder what they would say about that C.S. Lewis quotation. It wouldn't surprise me if many of them would consider it badly oversimplified, or even outright wrong.
 

My friend’s parents asked their priest if it was ok to have a child in the 1980s given the risk of nuclear war. Fortunately for my friend, the priest said yes.

This strikes me as some indexical sleight of hand. If the priests were instead saying no during the 1980s, wouldn't that have led to a baby boom in the 1990s...?

Comment by mike_hawke on mike_hawke's Shortform · 2024-02-15T19:50:11.741Z · LW · GW

I remember reading that fish oil pills do not seem to have the same effect as actual fish. So maybe the oily water will also be less effective.

Comment by mike_hawke on mike_hawke's Shortform · 2024-02-15T18:55:39.239Z · LW · GW

Should I drink sardine juice instead of dumping it down the drain?

 

I eat sardines that are canned in water, not oil, because I care about my polyunsaturated fatty acid ratio. They're very unappetizing but from my inexpert skimming, they seem like one of the best options in terms of health. But I only eat most of the flesh incidentally, with the main objective being the fat. This is why I always buy fish that is unskinned, and in fact I would buy cans of fish skin if it were easy.

So on this basis, is it worth it for me to just go ahead and choke down the sardine water as well? ...or perhaps instead? It is visibly fatty.

Comment by mike_hawke on CFAR Takeaways: Andrew Critch · 2024-02-15T01:48:41.785Z · LW · GW

Beware of Other-Optimizing?

Comment by mike_hawke on CFAR Takeaways: Andrew Critch · 2024-02-14T22:49:25.433Z · LW · GW

There aren't things lying around in my life that bother me because I always notice and deal with it.

I assume he said something more nuanced and less prone to blindspots than that.

Ten minutes a day is 60 hours a year. If something eats 10 minutes each day, you'd break even in a year if you spent a whole work week getting rid of it forever.

In my experience, I have not been able to reliably break even. This kind of estimate assumes a kind of fungibility that is sometimes correct and sometimes not. I think When to Get Compact is relevant here--it can feel like my bottleneck is time, when in fact it is actually attentional agency or similar. There are black holes that will suck up as much of our available time as they can.
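(For what it's worth, the arithmetic in the quoted claim does check out; my objection is to the fungibility assumption, not the numbers. A quick sanity check:)

```python
# Sanity-check the quoted claim: 10 minutes/day vs. one work week spent fixing it forever.
minutes_per_day = 10
hours_per_year = minutes_per_day * 365 / 60
print(round(hours_per_year, 1))  # ~60.8 hours per year, matching the quoted "60 hours"

work_week_hours = 40  # one standard work week
break_even_days = work_week_hours * 60 / minutes_per_day
print(break_even_days)  # 240.0 days -- break-even comes well within a year
```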

External memory is essential to intelligence augmentation.

Highly plausible. Also perhaps more tractable and testable than many other avenues. I remember an old LW Rationality Quotation along the lines of, "There is a big difference between a human and a human with a pen and paper."

Comment by mike_hawke on TurnTrout's shortform feed · 2024-02-13T23:52:29.168Z · LW · GW

I feel like the more detailed image adds in an extra layer of revoltingness and scaryness (e.g. the sharp teeth) than would be appropriate given our state of knowledge.


Now I'm really curious to know what would justify the teeth. I'm not aware of any AIs intentionally biting someone, but presumably that would be sufficient.

Comment by mike_hawke on Dreams of AI alignment: The danger of suggestive names · 2024-02-13T22:45:24.272Z · LW · GW

Long comment, points ordered randomly, skim if you want.

1)
Can you give a few more examples of when the word "optimal" is/isn't distorting someone's thinking? People sometimes challenge each other's usage of that word even when just talking about simple human endeavors like sports, games, diet, finance, etc. but I don't get the sense that the word is the biggest danger in those domains. (Semi-related, I am reminded of this post.)

2)

When I try to point out such (perceived) mistakes, I feel a lot of pushback, and somehow it feels combative. I do get somewhat combative online sometimes (and wish I didn't, and am trying different interventions here), and so maybe people combat me in return. But I perceive defensiveness even to the critiques of Matthew Barnett, who seems consistently dispassionate.

Maybe it's because people perceive me as an Optimist and therefore my points must be combated at any cost.

Maybe people really just naturally and unbiasedly disagree this much, though I doubt it.

When you put it like this, it sounds like the problem runs much deeper than sloppy concepts. When I think my opponents are mindkilled, I see only extreme options available, such as giving up on communicating, or budgeting huge amounts of time & effort to a careful double-crux. What you're describing starts to feel not too dissimilar from questions like, "How do I talk my parents out of their religion so that they'll sign up for cryonics?" In most cases it's either hopeless or a massive undertaking, worthy of multiple sequences all on its own, most of which are not simply about suggestive names. Not that I expect you to write a whole new sequence in your spare time, but I do wonder if this makes you more interested in erisology and basic rationality.

3)

'The behaviorists ruined words like "behavior", "response", and, especially, "learning". They now play happily in a dream world, internally consistent but lost to science.'

I myself don't know anything about the behaviorists except that they allegedly believed that internal mental states did not exist. I certainly don't want to make that kind of mistake. Can someone bring me up to speed on what exactly they did to the words "behavior", "response", and "learning"? Are those words still ruined? Was the damage ever undone?

4)

perhaps implying an expectation and inner consciousness on the part of the so-called "agent"

That reminds me of this passage from EY's article in Time:

None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria. With that said, I’d be remiss in my moral duties as a human if I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.

The rule that most people aware of these issues would have endorsed 50 years earlier, was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.

I'm curious if you think this passage is also mistaken, or if it is correctly describing a real problem with current trajectories. EY usually doesn't bring up consciousness because it is not a crux for him, but I wonder if you think he was wrong on this recent occasion when he did bring it up.

Comment by mike_hawke on Attitudes about Applied Rationality · 2024-02-13T20:47:13.938Z · LW · GW

Also, here are a couple of links that seem relevant to me, even if they are not fully on-topic.
 

Schools Proliferating without Evidence

3 Levels of Rationality Verification

Comment by mike_hawke on Attitudes about Applied Rationality · 2024-02-13T19:48:33.635Z · LW · GW

Man, getting stereotyped feels bad, but unfortunately there is no alternative for humans. Great list. I might have drawn the boundaries differently, but I still like what you wrote.

 

I'll plant this flag right here and now: I feel some affinity for all of these attitudes, some more than others. Above all, I have only a vague and partial sense of what a rational culture would be like. Dath Ilan is inspiring, but also feels vague and partial. It does feel easy to imagine that we are not close to the frontier of efficiency, and that this is due to silly mistakes.

Comment by mike_hawke on story-based decision-making · 2024-02-13T19:21:12.831Z · LW · GW

Bezos gave some of his investors a 70% chance they'd lose their whole investment. Those investors...were his parents.

Elon Musk was hooked up to the PayPal Mafia social network.

Anyway, a lot of stories like that are misleading. My understanding is that those examples are mostly just their after-the-fact disclosures of their private thinking, not what they told investors at the time?

 

Thanks for the reply. Maybe I'll reread that chapter of the book and see if there are any sharp updates to make.

Comment by mike_hawke on story-based decision-making · 2024-02-10T00:54:26.207Z · LW · GW

Here are some questions this post raises for me.

  • Do people ever try to pitch you on projects, and if so, do the story-based pitches work better or worse than others?
  • Where are the investors that you expected? With Vision Fund way down, are the reality-based decision makers on the rise, or not?
  • "Look at bios of founders of their last few investments (as presented on company websites) and see if they follow a pattern. Look at the main characters of the movies they like. Look at their retweets and see what stupid memes they fall for." This sounds like advice on how to be a better grifter. Is there an implicit step 0 where you try and fail to get money from the less manipulable investors? Is your idea that if some entrepreneurial LW users swallow this particular red pill, they will be less held back by their maladaptive honesty and be more competitive in raising money, and that this will result in more rational entrepreneurs?
  • Have you read The Scout Mindset? In it, author Julia Galef gives examples of entrepreneurs who honestly and publicly gave low odds of success, but were able to raise funding and succeed anyway (like Musk and Buterin). Were these just random flukes? Did I get the wrong takeaway from that part of the book?
     
Comment by mike_hawke on More Hyphenation · 2024-02-09T02:19:30.929Z · LW · GW

Seems like brackets would remove this problem, at the cost of being highly nonstandard and perhaps jarring to some people.

I was jarred and grossed out the first time I encountered brackets used this way. But at the end of the day, I think 20th century writing conventions just aren't quite good enough for what we want to do on LW. (Relatedly, I have higher tolerance for jargon than a lot of other people.)

Caveat: brackets can be great for increasing the specificity of what you are able to say, but I sometimes see the specificity of people's thoughts fail to keep up with the specificity of their jargon and spoken concepts, which can be grating.

Comment by mike_hawke on More Hyphenation · 2024-02-09T02:07:17.602Z · LW · GW

Refactoring your writing for clarity is taxing and will reduce overall word count on LW. That would be an improvement for some users but not others.

I know some major offenders when it comes to unnecessary-hyphenation-trains, but usually I still find all their posts and comments net positive.

Of course, I would be happy if those users could increase clarity without sacrificing other things.

Comment by mike_hawke on Medical Roundup #1 · 2024-01-17T00:25:12.163Z · LW · GW

I clicked on the heart disease algorithm link, and it was just a tweet of screenshots, with no link to the article. I typed the name of the article into my search bar so that I could read it.

Your commentary about this headline may be correct, but I find it questionable after reading the whole article. The article includes the following paragraph: 

Two years ago, a scientific task force of the National Kidney Foundation and American Society of Nephrology called for jettisoning a measure of kidney function that adjusted results by race, often making Black patients seem less ill than they are and leading to delays in treatment.

I find that claim questionable as well, but not in a way that increases my credence in your summary. I clicked through again to an NEJM article mentioned in the NYT article, and it went into detail about how the racial corrections are made. My belief is now, "this stuff is controversial for seemingly real reasons. Benefits & harms may both be present, and I do not know which way the scales tip." Hardly a slam dunk against the woke menace, which is the impression I had when I first clicked your link.

Am I wrong? Do you stand by your summary? Did you read the article? Do you contend that you didn't need to read it?

Perhaps ironically, I didn't read your whole post before commenting. It's possible that you have some appropriate disclaimer somewhere in it, which I missed in my skim. If not though, I want to at least flag this, because I see potential for misinformation cascades if I don't :/

Comment by mike_hawke on On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche · 2024-01-12T00:51:19.906Z · LW · GW

I agree with this, and I've been fairly unimpressed by the critiques of steelmanning that I've read, including Rob's post. Maybe I would change my mind if someone wrote a long post full of concrete examples[1] of steelmen going wrong, and of ITT being an efficient alternative. I think I pretty regularly see good reasoning and argumentation in the form of steelmen.

  1. ^

    But it's not trivial to contrive examples of arguments that will effectively get your point across without triggering, distracting, or alienating too much of the audience.

Comment by mike_hawke on mike_hawke's Shortform · 2024-01-10T22:16:42.795Z · LW · GW

I've seen a few people run the thought experiment where one imagines the best life a historical person could live, and/or the most good they could do. There are several variants, and you can tune which cheat codes they are given. People seem to get different answers, and this has me pretty curious.

  • Eliezer said in the sequences that maybe all this rationality stuff just wouldn't help a 14th century peasant at all, unless they were given explicit formulas from the future. (See also, the Chronophone of Archimedes.)
  • I've heard people ask why the industrial revolution didn't happen in China, and whether that was a contingent fluke of history, or a robust result of geography and culture. (I should admit at this point that I haven't read any of the Progress Studies stuff, but I want to.)
  • I think Paul or Ajeya or Katja have wondered aloud about what a medieval peasant or alchemist could have done if they had been divinely struck by the spirit of EA and rationality, and people have argued back and forth about how sticky various blockers were. (I would appreciate links if anyone has them.)
  • I skimmed that recent post about "Social Dark Matter" and wondered if 1950s America would have been much better if the social dark matter meme had somehow gotten a big edge in the arena of ideas.

I realized that I'm really uncertain about historical trajectories at pretty much every level. I am really unsure whether the Roman empire could have lasted longer and made a few extra major advancements (like steam engines and evolutionary biology). I'm unsure whether a medieval peasant armed with modern textbooks could have made a huge dent in history. And I'm also pretty unsure what it would have taken for 20th century America to have made faster moral progress than it did.

But I do notice that the more recent the alternate history, the less clueless I feel, which is a little motivating[1]. So here are a few prompts off the top of my head:

  1. What advice would you send back to your 10-year-old self if you weren't allowed to give lottery numbers or spoilers about global-scale events?
  2. What could your parents or their immediate peers have done differently that would have substantially improved their material circumstances, emotional lives, or moral rectitude? What would have been the costs?
  3. Could you get any easy wins if you were allowed to magically advertise one book or article to American intellectuals in the 1950s? (They can be wins of any size: global catastrophic risks, social dark matter, or your own pet peeve.)
  1. ^

    Possibly spurious, but this kind of reminds me of Read History of Philosophy Backwards

Comment by mike_hawke on mike_hawke's Shortform · 2024-01-08T20:27:01.274Z · LW · GW

Over a year later, I stand by this sentiment. I think this thought experiment is important and underrated.

Comment by mike_hawke on mike_hawke's Shortform · 2024-01-05T20:29:41.650Z · LW · GW

I'm planting this flag right here and now: the phenomenon of social class (putatively distinct from economic class) is very broad, very deep, and anti-inductive. For these reasons, no one really knows what's going on or has anything close to the full picture. As a rough heuristic, the more well-known and easily changed a class stereotype is, the more likely it is to be out of date.

Comment by mike_hawke on Here's the exit. · 2024-01-04T20:14:44.602Z · LW · GW

The standard rationalist defense I've noticed against this amounts to mental cramping. Demand everything go through cognition, and anything that seems to try to route around cognition gets a freakout/shutdown/"shame it into oblivion" kind of response. The stuff that disables this immune response is really epistemically strange — things like prefacing with "Here's a fake framework, it's all baloney, don't believe anything I'm saying." Or doing a bunch of embodied stuff to act low-status and unsure. A Dark Artist who wanted to deeply mess with this community wouldn't have to work very hard to do some serious damage before getting detected, best as I can tell (and as community history maybe illustrates).

Can you spell this out a little more? Did Brent and LaSota employ baloney-disclaimers and uncertainty-signaling in order to bypass people's defenses?

Comment by mike_hawke on What Helped Me - Kale, Blood, CPAP, X-tiamine, Methylphenidate · 2024-01-04T02:09:29.637Z · LW · GW

probably at least some of the things listed here are spurious

If I have read this all correctly, you're saying "Probably benfotiamine and/or CPAP are spurious, but I am very sure that the rest are not."

Comment by mike_hawke on mike_hawke's Shortform · 2023-12-28T01:33:09.003Z · LW · GW

They deserve sympathy, but also they must be stopped/avoided/distrusted.

This sentiment is common and I wish there was a common and compact way to express it. Notice the dissonance if the conjunction "but" is swapped out for "and".

Comment by mike_hawke on Darklight's Shortform · 2023-12-27T20:22:01.922Z · LW · GW

Personally, I find shortform to be an invaluable playground for ideas. When I get downvoted, it feels lower stakes. It's easier to ignore aloof and smugnorant comments, and easier to update on serious/helpful comments. And depending on how it goes, I sometimes just turn it into a regular post later, with a note at the top saying that it was adapted from a shortform.

If you really want to avoid smackdowns, you could also just privately share your drafts with friends first and ask for respectful corrections.

Spitballing other ideas, I guess you could phrase your claims as questions, like "have objections X, Y, or Z been discussed somewhere already? If so, can anyone link me to those discussions?" Seems like that could fail silently though, if an over-eager commenter gives you a link to low-quality discussion. But there are pros and cons for every course of action/inaction.

Comment by mike_hawke on mike_hawke's Shortform · 2023-12-24T00:35:41.315Z · LW · GW

Now this is more like it.

Comment by mike_hawke on A Sense That More Is Possible · 2023-12-23T20:13:01.443Z · LW · GW

Has Eliezer made explicit updates about this? Maybe @Rob Bensinger knows. If he has, I'd like to see it posted prominently and clearly somewhere. Either way, I wonder why he doesn't mention it more often. Maybe he does, but only in fiction.

[...] I think that recognizing successful training and distinguishing it from failure is the essential, blocking obstacle.

Does this come up in the Dath Ilan stories?

There are experiments done now and again on debiasing interventions for particular biases, but it tends to be something like, "Make the students practice this for an hour, then test them two weeks later."  Not, "Run half the signups through version A of the three-month summer training program, and half through version B, and survey them five years later."

Surely there is more to say about this now than in 2009. Eliezer had some idea of the replication crisis back then, but I think he has become much more pessimistic about academia in the time since. 

But first, because people lack the sense that rationality is something that should be systematized and trained and tested like a martial art, that should have as much knowledge behind it as nuclear engineering, whose superstars should practice as hard as chess grandmasters, whose successful practitioners should be surrounded by an evident aura of awesome.

I think there's gotta be more to say about this too. Since then we have seen Tetlock's Superforecasting, Inadequate Equilibria[1], the confusing story of CFAR[2][3][4], and the rise to prominence of EA. I can now read retrospectives by accomplished rationalists arguing over whether rationality increases accomplishment, but I always come away feeling highly uncertain. (Not epistemically helpless, but frustratingly uncertain.) What do we make of all this?

Eliezer asks:

Why are there schools of martial arts, but not rationality dojos?  (This was the first question I asked in my first blog post.)  Is it more important to hit people than to think?

My answer, which gets progressively less charitable, and is aimed at no one in particular: thinking rationally appears to be a lower priority than learning particular mathematical methods, obtaining funding and recruits, mingling at parties, following the news, scrolling social media, and playing videogames.

  1. ^

    Consensus is that Eliezer verifiably outperformed the medical establishment with the lumenator, right?

  2. ^
  3. ^
  4. ^
Comment by mike_hawke on mike_hawke's Shortform · 2023-12-21T22:50:48.178Z · LW · GW

I'm looking through some of the posts tagged Practical. I notice that a lot of them, especially the older ones, seem overoptimistic in similar ways. Here are a few of my particular thoughts:

  1. Daily interventions need to be usable by busy, tired zombies. Or else they will only be usable by people who already have well-balanced lives (or people with hypomania or something). 
  2. Closely related, sometimes people omit cost-benefit analysis entirely, as if their practical intervention pays for itself immediately. Even when an analysis is included, I think it often underrates trivial inconveniences and willpower costs. Some of these things sound so easy and simple, and yet if I'm tired after a day of work, it can feel like a major imposition to defy my automatic, low-effort habits. And it's not just me: when I see the people around me trying out new life hacks, I can often feel the resentment radiating off of them that they have to spend a week's worth of their discretionary willpower on a small expected gain.
  3. I perceive some complacency about resource budgeting, specifically around resource sinks that are optimized against you. It is my guess that if you show someone a way to save time on their chores, a lot of those savings will go toward mindless scrolling.
  4. Two and a half weeks is not long enough for you to triumphantly declare that your clever life-hack has permanently upgraded your life. Two and a half years probably is. Experience tells me that biological & behavioral set points are real, and it takes more than a single-sentence summary to convince me that you've permanently altered or outpaced yours.
  5. I give special praise to those doing serious experimentation with followups. I'll single out Trivial Inconvenience Day and Sabbath.
Comment by mike_hawke on What are the best Siderea posts? · 2023-12-21T15:49:08.118Z · LW · GW

Fixed, thanks.

Comment by mike_hawke on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-18T23:09:14.954Z · LW · GW

I wonder what fraction of the weirdest writers here feel the same way. I can't remember the last time I've read something on LessWrong and thought to myself, "What a strange, daring, radical idea. It might even be true. I'm scared of what the implications might be." I miss that.

Do you remember any examples from back in the day?

Comment by mike_hawke on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-18T22:47:49.906Z · LW · GW
  • Too much AI content, not enough content about how to be wrong less.
  • I don't like that I can't take for granted that any particular poster has read the Sequences. Honestly that's a pretty major crux for how to engage with people. The Sequences are a pretty major influence on my worldview, which should matter a lot for those who want to change my mind about things.
  • Sometimes I think of johnswentworth's comment about the Fosbury Flop, and I feel some yearning and disappointment.
  • I like Exercises and Challenges. The Babble challenge was cool. So was that time Luke Muelhauser did math in public(!). It would at least be kind of fun and engaging to see more things like this. They also allow people to demonstrate their process and get critiques.
  • I would like to see more explicit practice of debiasing techniques. I want LessWrong to be more than just a smarter version of Reddit or Twitter. I want to see different types of interactions, that are verifiably truth-loaded. Things sort of in this direction include: Zvi sticking his neck out with covid predictions, public bets, EY's LK99 crux-mapping, adversarial collaboration, ITT, the anti-kibitzer, and hypothetical apostasy.