Posts

How can I efficiently read all the Dath Ilan worldbuilding? 2024-02-06T16:52:32.558Z
What are the best Siderea posts? 2023-12-19T23:07:59.027Z
What did you change your mind about in the last year? 2023-11-23T20:53:45.664Z
What will you think about the Current Thing in a year? 2023-11-20T22:39:37.630Z
[Linkpost] 7 Swedish Words to Import 2022-03-19T02:36:44.328Z
Some ideas for interacting with reporters 2022-02-14T00:38:42.740Z
Five Missing Moods 2021-12-16T01:25:09.409Z
[Linkpost] Cat Couplings 2021-12-09T01:41:11.646Z
[linkpost] Why Going to the Doctor Sucks (WaitButWhy) 2021-11-23T03:02:47.428Z
[linkpost] Crypto Cities 2021-11-12T21:26:28.959Z
[linkpost] Fantasia for Two Voices 2021-10-13T02:55:21.775Z
my new shortsight reminder 2021-10-11T20:06:30.678Z
[ACX Linkpost] Too Good to Check: A Play in Three Acts 2021-10-05T05:04:40.837Z
[linkpost] Vitalik Buterin on Nathan Schneider on the limits of cryptoeconomics 2021-10-02T19:11:06.108Z
[Linkpost] Partial Derivatives and Partial Narratives 2021-09-13T21:02:49.285Z
[linkpost] Political Capital Flow Management and the Importance of Yutting 2021-09-10T07:27:23.009Z
[ACX Linkpost] Highlights From The Comments On Missing School 2021-08-29T08:01:59.534Z
Predictions about the state of crypto in ten years 2021-08-08T16:18:13.941Z
Optimism about Social Technology 2021-06-27T23:35:31.174Z
What is the biggest crypto news of the past year? 2021-05-22T02:01:49.040Z
[ACX Linkpost] A Modest Proposal for Republicans 2021-04-30T18:43:17.252Z
[ACX Linkpost] Prospectus on Próspera 2021-04-15T22:48:00.545Z
Auctioning Off the Top Slot in Your Reading List 2021-04-14T07:11:07.881Z
Speculations Concerning the First Free-ish Prediction Market 2021-03-31T03:20:48.379Z
Some Complaint-Action Gaps 2021-03-29T21:15:50.012Z
Predictions for future dispositions toward Twitter 2021-03-14T22:10:17.720Z
The Puce Tribe 2021-02-28T21:11:05.778Z
some random parenting ideas 2021-02-13T15:53:43.855Z
How would free prediction markets have altered the pandemic? 2021-02-09T10:55:43.987Z
Against Sam Harris's personal claim of attentional agency 2021-01-30T09:08:45.145Z
Change My View: Incumbent religions still get too much leeway 2021-01-07T19:44:45.208Z
A dozen habits that work for me 2021-01-06T22:52:37.776Z
Pre-Hindsight Prompt: Why did 2021 NOT bring a return to normalcy? 2020-12-06T17:35:00.409Z
In Addition to Ragebait and Doomscrolling 2020-12-03T18:26:18.602Z
mike_hawke's Shortform 2020-11-29T19:57:57.415Z

Comments

Comment by mike_hawke on Dragon Agnosticism · 2024-08-04T20:20:56.813Z · LW · GW

I am agnostic about various dragons. Sometimes I find myself wondering how I would express my dragon agnosticism in a world where belief in dragons was prevalent and high status. I am often disturbed by the result of this exercise. It turns out that what feels like agnosticism is often sneakily biased in favor of what will make me sound better or let me avoid arguments.

This effect is strong enough and frequent enough that I don't think the agnosticism described by this post is a safe epistemic fallback for me. However, it might still be my best option in situations where I want to look good or avoid arguments.


Possibly related: 

Selective Reporting and the Tragedy of the Green Rationalists by Zack M. Davis

Kolmogorov Complicity and the Parable of Lightning by Scott Alexander

Comment by mike_hawke on Universal Basic Income and Poverty · 2024-08-04T00:07:27.493Z · LW · GW

Yeah, given that Eliezer mentioned Georgism no less than 3 times in his Dath Ilan AMA, I'm pretty surprised it didn't come up even once in this post about UBI.

Personally, I wouldn't be surprised to find we already have most or all of the pieces of the true story.

  • Ricardo's law of rent + lack of LVT
  • Supply and demand for low-skill labor
  • Legal restrictions on jobs that disproportionately harm low-wage workers. For example, every single low wage job I have had has been part time, presumably because it wasn't worth it to give me health benefits.
  • Baumol effect?
  • People really want to eat restaurant food, and seem to underestimate (or just avoid thinking about) how much this adds up.
  • A lot of factors that today cause poverty would have simply caused death in the distant past.

That's just off the top of my head.

EDIT: Also the hedonic treadmill is such a huge effect that I would be surprised if it wasn't part of the picture. How much worse is it for your kid's tooth to get knocked out at school than to get a 1920s wisdom tooth extraction?

Comment by mike_hawke on Reliable Sources: The Story of David Gerard · 2024-07-12T00:42:56.238Z · LW · GW

That part is in a paragraph that starts with "My impression is...".

Fair. 

And yet I felt the discomfort before reading that particular paragraph, and I still feel it now. For me personally, the separators you included were not enough: I did indeed have to apply extra effort throughout the post to avoid over-updating on the interpretations as opposed to the hard facts.

Maybe I'm unusual and few other readers have this problem. I suspect that's not the case, but given that I don't know, I'll just say that I find this writing style to be a little too Dark Artsy and symmetrical for my comfort.

I still think this post was net good to publish, and I might end up linking it to someone if I think they're being too credulous toward Gerard. But if I do, it might be with some disclaimer along the lines of, "I think the author got a little carried away with a particular psychological story. I recommend putting in the effort to mentally separate the facts from the fun narrative."

Also, to give credit where it's due, the narrative style really was entertaining.

 

(EDIT: typos)

Comment by mike_hawke on Reliable Sources: The Story of David Gerard · 2024-07-11T19:36:35.531Z · LW · GW

I read as far as this part:

Because Gerard was on LessWrong when the internet splintered and polarized, he saw the whole story through the lens of LessWrong, and on an instinctive level the site became his go-to scapegoat for all that was going wrong for his vision of the internet.

And I want to make a comment before continuing to read.

I'm uncomfortable with the psychologizing here. I feel like your style is inviting me to suspend disbelief for the sake of a clean and entertaining narrative. Not that you should never do such a thing, but I think it maybe warrants some kind of disclaimer or something. If you had written this specifically for LW, instead of as a linkpost to your blog, I would be suggesting major rewrites in order to meet the standards I'm used to around here.

I wouldn't be surprised if the true psychological story were significantly different from the picture you paint here, especially if it involved real-life events, e.g. some tragedy in his family, or problems with his friends or job. Would those things even have been visible in your research?

I'll keep reading, but I'm now going to spend extra effort to maintain the right level of skepticism. None of what I've read so far contradicts my priors, but I'm going to avoid updating too hard on your interpretations (as opposed to the citations & other hard facts).

 

I am bothered that no other commenters have brought this up yet.

Comment by mike_hawke on mike_hawke's Shortform · 2024-06-12T17:19:25.558Z · LW · GW

Point well taken that technological development and global dominance were achieved by human cultures, not individual humans. But I claim that it is obviously a case of motivated reasoning to treat this as a powerful blow against the arguments for fast takeoff. A human-level AI (able to complete any cognitive task at least as well as you) is a foom risk unless it has specific additional handicaps. These might include:
- For some reason it needs to sleep for a long time every night.
- Its progress gets periodically erased due to random misfortune or enemy action.
- It is locked into a bad strategic position, such as having no cognitive privacy from overseers.
- It can't copy itself.
- It can't gain more compute.
- It can't reliably modify itself.

I'll be pretty surprised if we get AI systems that can do any cognitive task that I can do (such as make longterm plans and spontaneously correct my own mistakes without them being pointed out to me) but that can also only improve themselves very slowly. It really seems like, if I were able to easily edit my own brain, then I would be able to increase my abilities across the board, including my ability to increase my abilities.

Comment by mike_hawke on A civilization ran by amateurs · 2024-06-07T21:15:27.771Z · LW · GW

The part about airports reminds me of "If All Stories were Written Like Science Fiction Stories" by Mark Rosenfelder: 
https://www.bzpower.com/blogs/entry/58514-if-all-stories-were-written-like-science-fiction-stories/
 

No one else has mentioned The Case Against Education by Bryan Caplan. He says that after reading and arithmetic, schooling is mostly for signaling employable traits like conscientiousness, not for learning. I think Zvi Mowshowitz and Noah Smith had some interesting discussion about this years ago. Scott Alexander supposes that another secret purpose of school is daycare. Whatever the real purposes are, they will tend to be locked into place by laws. Richard Hanania has written a bit about what he thinks families might choose instead of standard schooling if the laws were relaxed.

Comment by mike_hawke on [deleted post] 2024-05-25T02:31:45.210Z

Without passing judgment on this, I think it should be noted that it would have seemed less out of place when the Sequences were fresh. At that time, the concept of immaterial souls and the surrounding religious memeplexes seemed to be genuinely interfering with serious discussion about minds.

However, and relatedly, there was not a lot of cooking discussion on LW in 2009, and this tag was created in 2020.

Comment by mike_hawke on Ilya Sutskever and Jan Leike resign from OpenAI [updated] · 2024-05-21T19:42:57.835Z · LW · GW

I'm out of the loop. Did Daniel Kokotajlo lose his equity or not? If the NDA is not being enforced, are there now some disclosures being made?

Comment by mike_hawke on Ilya Sutskever and Jan Leike resign from OpenAI [updated] · 2024-05-18T02:08:16.581Z · LW · GW

Thanks for the source.

I've intentionally made it difficult for myself to log into Twitter. For the benefit of others who avoid Twitter, here is the text of Kelsey's tweet thread:

I'm getting two reactions to my piece about OpenAI's departure agreements: "that's normal!" (it is not; the other leading AI labs do not have similar policies) and "how is that legal?" It may not hold up in court, but here's how it works:

OpenAI like most tech companies does salaries as a mix of equity and base salary. The equity is in the form of PPUs, 'Profit Participation Units'. You can look at a recent OpenAI offer and an explanation of PPUs here: https://t.co/t2J78V8ee4

Many people at OpenAI get more of their compensation from PPUs than from base salary. PPUs can only be sold at tender offers hosted by the company. When you join OpenAI, you sign onboarding paperwork laying all of this out.

And that onboarding paperwork says you have to sign termination paperwork with a 'general release' within sixty days of departing the company. If you don't do it within 60 days, your units are cancelled. No one I spoke to at OpenAI gave this little line much thought.

And yes this is talking about vested units, because a separate clause clarifies that unvested units just transfer back to the control of OpenAI when an employee undergoes a termination event (which is normal).

There's a common legal definition of a general release, and it's just a waiver of claims against each other. Even someone who read the contract closely might be assuming they will only have to sign such a waiver of claims.

But when you actually quit, the 'general release'? It's a long, hardnosed, legally aggressive contract that includes a confidentiality agreement which covers the release itself, as well as arbitration, nonsolicitation and nondisparagement and broad 'noninterference' agreement.

And if you don't sign within sixty days your units are gone. And it gets worse - because OpenAI can also deny you access to the annual events that are the only way to sell your vested PPUs at their discretion, making ex-employees constantly worried they'll be shut out.

Finally, I want to make it clear that I contacted OpenAI in the course of reporting this story. So did my colleague Sigal Samuel. They had every opportunity to reach out to the ex-employees they'd pressured into silence and say this was a misunderstanding. I hope they do.

Comment by mike_hawke on Ilya Sutskever and Jan Leike resign from OpenAI [updated] · 2024-05-17T20:40:34.513Z · LW · GW

Even acknowledging that the NDA exists is a violation of it.

This sticks out pretty sharply to me.

Was this explained to the employees during the hiring process? What kind of precedent is there for this kind of NDA? 

Comment by mike_hawke on mike_hawke's Shortform · 2024-05-14T22:54:32.432Z · LW · GW

There are things I would buy if they existed. Is there any better way to signal this to potential sellers, other than tweeting it and hoping they hear? Is there some reason to believe that sellers are already gauging demand so completely that they wouldn't start selling these things even if I could get through to them? 

Comment by mike_hawke on mike_hawke's Shortform · 2024-03-29T22:08:21.614Z · LW · GW

Would I somehow feel this problem less acutely if I had never been taught Fahrenheit, Celsius, or Kelvin; and instead been told everything in terms of gigabytes per nanojoule? I guess probably not. Inconvenient conversions are not preventing me from figuring out the relations and benchmarks I'm interested in.
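
For anyone curious about the conversion: I'm assuming "gigabytes per nanojoule" means inverse temperature expressed as information per unit energy, via Landauer's principle (erasing one bit costs at least k_B * T * ln 2 of energy). Under that assumption, a minimal Python sketch:

    import math

    K_B = 1.380649e-23  # Boltzmann constant in joules per kelvin (exact SI value)

    def gigabytes_per_nanojoule(temp_kelvin: float) -> float:
        """Bits erasable per unit energy at temperature T, per Landauer's
        principle, converted to gigabytes per nanojoule."""
        bits_per_joule = 1.0 / (K_B * temp_kelvin * math.log(2))
        bits_per_nanojoule = bits_per_joule * 1e-9
        return bits_per_nanojoule / 8e9  # 8e9 bits per (decimal) gigabyte

    print(gigabytes_per_nanojoule(300))  # room temperature: ~43.5 GB/nJ
    print(gigabytes_per_nanojoule(373))  # boiling water:    ~35.0 GB/nJ

Note that colder things get bigger numbers on this scale, since a bit is cheaper to erase at lower temperature.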

Comment by mike_hawke on From the outside, American schooling is weird · 2024-03-29T21:32:52.883Z · LW · GW

It's important to remember, though, that I will be fine if I so choose. After all, if the scary impression was the real thing then it would appear scary to everyone.  

 

Reading this makes me feel some concern. I think it should be seriously asked: Would you be fine if you hypothetically chose to take a gap year or drop out? Those didn't feel like realistic options for me when I was in high school and college, and I think this ended up making me much less fine than I would have been otherwise. Notably, a high proportion of my close friends in college ended up dropping out or having major academic problems, despite being the smartest and most curious people I could find.

My experiences during and after college seemed to make a lot more sense after hearing about ideas like credential inflation, surplus elites, and the signaling model. It seems plausible that I might have made better decisions if I had been encouraged to contemplate those ideas as a high schooler.

Comment by mike_hawke on mike_hawke's Shortform · 2024-03-29T18:35:37.462Z · LW · GW

In measuring and communicating about the temperature of objects, humans can clearly and unambiguously benchmark things like daily highs and lows, fevers, snow, space heaters, refrigerators, a cup of tea, and the wind chill factor. We can place thermometers and thereby say which things are hotter than others, and by how much. Daily highs can overlap with fevers, but neither can boil your tea.
 

But then I challenge myself to estimate how hot a campfire is, and I'm totally stuck.

It feels like there are no human-sensible relationships once you're talking about campfires, self-cleaning ovens, welding torches, incandescent filaments, fighter jet exhaust, solar flares, Venus, Chernobyl reactor #4, the anger of the volcano goddess Pele, fresh fulgurites, or the boiling point of lead. Anything hotter than boiling water has ascended into the magisterium of the Divinely Hot, and nothing more detailed can be said of it by a mortal. If I were omnipotent, omniscient, & invulnerable, then I could put all those things in contact with each other and then watch which way the heat flows. But I am a human, so all I can say is that anything on that list could boil water.
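
In case future-me wants a cheat sheet, here is a rough Python sketch with ballpark figures from general reference knowledge. The values are approximate and some vary widely by source and conditions, so treat them as order-of-magnitude anchors, not citations:

    # Ballpark temperatures in degrees Celsius; approximate,
    # order-of-magnitude anchors, not authoritative measurements.
    ROUGH_BENCHMARKS_C = {
        "boiling water": 100,
        "surface of Venus": 465,
        "self-cleaning oven cycle": 480,
        "wood campfire": 900,  # varies widely, roughly 600-1100
        "lead, boiling point": 1749,
        "incandescent tungsten filament": 2500,
        "oxyacetylene welding torch": 3200,
        "solar flare plasma": 10_000_000,  # order of magnitude only
    }

    for thing, temp_c in sorted(ROUGH_BENCHMARKS_C.items(), key=lambda kv: kv[1]):
        print(f"~{temp_c:>10,} C  {thing}")

Amusingly, if these anchors are right, the surface of Venus cannot boil lead, and a campfire is hotter than Venus.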

Comment by mike_hawke on Vernor Vinge, who coined the term "Technological Singularity", dies at 79 · 2024-03-25T19:32:42.271Z · LW · GW

Presumably he understood the value proposition of cryonics and declined it, right?

Comment by mike_hawke on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2024-03-23T19:45:13.627Z · LW · GW

If everyone in town magically receives the same speedup in their "verbal footwork", is that good for meta-honesty? I would like some kind of story explaining why it wouldn't be neutral.

Point for yes: 
Sure seems like being able to quickly think up an appropriately nonspecific reference class when being questioned about a specific hypothetical does not make it harder for anyone else to do the same.

Point against: 

The code of literal truth only lets people navigate anything like ordinary social reality to the extent that they are very fast on their verbal feet, and can respond to the question "How are you?" by saying "Getting along" instead of "Horribly" or with an awkward silence while they try to think of something technically true.

This particular case seems anti-inductive and prone to the euphemism treadmill. Indeed, one person one time can navigate ordinary social reality by saying "Getting along" instead of giving an awkward silence; but many people doing so many times will find that it tends to work less well over time. If everyone magically becomes faster on their verbal feet, they can all run faster on the treadmill, but this isn't necessarily good for meta-honesty.

Implications: either cognitive enhancement becomes even more of a moral priority, or adhering to meta-honesty becomes a trustworthy signal of being more intelligent than those who don't. Neither outcome seems terrible to me, nor even all that much different from the status quo.

Comment by mike_hawke on How do you feel about LessWrong these days? [Open feedback thread] · 2024-03-19T17:43:43.815Z · LW · GW

One concrete complaint I have is that I feel a strong incentive toward timeliness, at the cost of timelessness. Commenting on a fresh, new post tends to get engagement. Commenting on something from more than two weeks ago will often get none, which makes effortful comments feel wasted.

I definitely feel like there is A Conversation, or A Discourse, and I'm either participating in it during the same week as everyone else, or I'm just talking to myself.

(Aside: I have a live hypothesis that this is tightly related to The Twitterization of Everything.)

Comment by mike_hawke on Social Class · 2024-03-17T15:25:53.139Z · LW · GW

Glad to see some discussion of social class.

Here's something in the post that I would object to:

Non-essential weirdnesses, on the other hand, should be eliminated as much as possible because pushing lifestyle choices onto disinterested working-class people is a misuse of class privilege. Because classes are hierarchical in nature, this is especially important for middle-upper class people to keep in mind. An example of non-essential weirdness is “only having vegan options for dinner”.


This example seems wrong to me. It seems like serving non-vegan options does in fact risk doing a great injustice (to the animals eaten). I tried and failed to think of an example that seemed correct, so now I'm feeling pretty unconvinced by the entire concept. 

One contrary idea might be that class norms and lifestyle choices are usually load-bearing, often in ways that are deliberately obscured or otherwise non-obvious. Therefore, one may want to be cautious when labeling something a non-essential weirdness. 

(Also maybe worth mentioning that I think class phenomena are in general anti-inductive and much harder to reach broad conclusions about than other domains.)

Comment by mike_hawke on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2024-03-16T03:23:28.032Z · LW · GW
  • Most people, even most unusually honest people, wander about their lives in a fog of internal distortions of reality. Repeatedly asking yourself of every sentence you say aloud to another person, "Is this statement actually and literally true?", helps you build a skill for navigating out of your internal smog of not-quite-truths. For that is our mastery.

I think some people who read this post ought to reverse this advice. The advice I would give to those people is: if you're constantly forcing every little claim you make through a literalism filter, you might end up multiplying disfluencies and generally raising the cost of communicating with you. Maybe put a clause limit on your sentences and just tack on a generic hedge like "or something" if you need to.

Comment by mike_hawke on 'Empiricism!' as Anti-Epistemology · 2024-03-15T00:47:10.810Z · LW · GW

Only praise yourself as taking 'the outside view' if (1) there's only one defensible choice of reference class;

 

I think this point is underrated. The word "the" in "the outside view" is sometimes doing too much work, and it is often better to appeal to an outside view, or multiple outside views.

Comment by mike_hawke on My Clients, The Liars · 2024-03-14T17:52:02.319Z · LW · GW

What do you think the internal experience of these liars is like? I could believe that some of them have gotten a lot of practice with fooling themselves in order to fool others, in settings where doing so is adaptive. Do you think they would get different polygraph results than the believer in the invisible dragon hypothetically would?

Comment by mike_hawke on Counting arguments provide no evidence for AI doom · 2024-03-12T23:49:56.934Z · LW · GW

Damn, woops.

My comment was false (and strident; worst combo). I accept the strong downvote and will now try to make a correction.

I said:

I spent a bunch of time wondering how you could put 99.9% on no AI ever doing anything that might be well-described as scheming for any reason.


What I meant to say was:

I spent a bunch of time wondering how you could put 99.9% on no AI ever doing anything that might be well-described as scheming for any reason, even if you stipulate that it must happen spontaneously.

And now you have also commented:

Well, I have <0.1% on spontaneous scheming, period. I suspect Nora is similar and just misspoke in that comment.

So... I challenge you to list a handful of other claims that you have similar credence in. Special Relativity? P!=NP? Major changes in our understanding of morality or intelligence or mammal psychology? China pulls ahead in AI development? Scaling runs out of steam and gives way to other approaches like mind uploading? Major betrayal against you by a beloved family member?
The OP simply says "future AI systems" without specifying anything about these systems, their paradigm, or what offworld colony they may or may not be developed on. Just...all AI systems henceforth forever. Meaning that no AI creators will ever accidentally recapitulate the scheming that is already observed in nature...? That's such a grand, sweeping claim. If you really think it's true, I just don't understand your worldview. If you've already explained why somewhere, I hope someone will link me to it.

Comment by mike_hawke on mike_hawke's Shortform · 2024-03-10T15:21:53.883Z · LW · GW

Foregone mutually beneficial trades sometimes provide value in the form of plausible deniability. 

If a subculture started trying to remove barriers to trade, for example by popularizing cheerful prices, this might have the downside of making plausible deniability more expensive. On net that might be good or bad (or weird), but either way I think it's an underrated effect (because I think that the prevalence and load-bearing functions of plausible deniability are also underrated). People have prospects and opportunity costs, often largely comprising things that are more comfortable to leave unsaid.

(Continuing from this comment.)

Comment by mike_hawke on Counting arguments provide no evidence for AI doom · 2024-03-06T01:11:30.909Z · LW · GW

EDIT: This is wrong. See descendent comments.

 

I spent a bunch of time wondering how you could put 99.9% on no AI ever doing anything that might be well-described as scheming for any reason. I was going to challenge you to list a handful of other claims that you had similar credence in, until I searched the comments for "0.1%" and found this one.

I'm annoyed at this, and I request that you prominently edit the OP.

Comment by mike_hawke on Counting arguments provide no evidence for AI doom · 2024-03-05T07:40:47.459Z · LW · GW

I followed this exchange up until here and now I'm lost. Could you elaborate or paraphrase?

Comment by mike_hawke on If you weren't such an idiot... · 2024-03-04T22:11:20.768Z · LW · GW

I will push against.

I feel unhappy with this post, and not just because it called me an idiot. I think epithets and thoughtless dismissals are cheap and oversupplied. Patience and understanding are costly and undersupplied.

A lot of the seemingly easy wins in Mark's list were not so easy for me. Becoming more patient helped me a lot, whereas internal vitriol made things worse. I benefitted hugely from Mr. Money Mustache, but I think I was slower to implement his recommendations because he kept calling me an idiot and literally telling me to punch myself in the face.

If a bunch of people get enduring benefits from adopting the "such an idiot" frame, then maybe I'll change my mind. (They do have to be enduring though.)

 

Here is a meme I would be much happier to see spread: 

You, yes you, might be able to permanently lower the cost of exercise to yourself if you spend a few days' worth of discretionary resources on sampling the sports in Mark Xu's list. But if you do that and it doesn't work, then ok, maybe you really are one of the metabolically underprivileged, and I hope you figure out some alternative.

Side notes:

  • It seems like this post is in tension with Beware Other Optimizing. And perhaps also a bit with Do Life Hacks Ever Reach Fixation? Not exactly, because Mark's list mostly relies on well-established life upgrades. But insofar as there is a tension here, I will tend to take the side of those two posts.
  • Perhaps this is a needless derail, and if so I won't press it, but I'm feeling some intense curiosity over whether Mark Xu and Critch would agree about whether Critch at all qualifies as an idiot. According to Raemon, Critch recently said, "There aren't things lying around in my life that bother me because I always notice and deal with it."
  • I find something both cliche and fatalistic about the notion that lots of seemingly maladaptive behaviors are secretly rational. But indeed I have had to update quite a few times in that direction over the years since I first started reading LessWrong.
Comment by mike_hawke on Increasing IQ is trivial · 2024-03-04T19:50:14.026Z · LW · GW

Thirteen points?! If I could get results like that, it would be even better than the CFAR handbook, which merely doubled my IQ.

Comment by mike_hawke on A dozen habits that work for me · 2024-03-03T04:39:42.840Z · LW · GW

I made this comment about Raemon's habit update. Here is my own habit update.

  1. Still going great.
  2. Started using a bidet instead. Heard these were bad for the plumbing anyway.
  3. Still going strong with these. I do have an addiction to Slack and Discord, which is less bad but still problematic.
  4. I fell off of both of these. The list seemed good so I'm rebooting it today. The controversy burning was good too, but mentally/emotionally taxing, so I'm not gonna restart it without some deliberate budgeting first.
  5. I fell off of this. It was easier to stay away from tempting snacks when I didn't work at an office full of them.
  6. Yup, still doing this. It's just good.
  7. I kept this habit until I replaced it with life coach sessions which are on net much more helpful.
  8. Still going strong with this one.
  9. Yeah, so low that I quit entirely.
  10. Still technically true, but I'm doing this less than once a week now.
  11. Yup, except that I memorized my laundry list so I don't need that one anymore.
  12. Yes, still using these.
Comment by mike_hawke on Rationality Research Report: Towards 10x OODA Looping? · 2024-03-03T01:20:07.185Z · LW · GW

They also attempt to generate principles to follow from, well, first principles, and see how many they correctly identify. 

Second principles?

========

I'm really glad to see you quoting Three Levels. Seems important.

Comment by mike_hawke on Sunset at Noon · 2024-02-29T21:31:34.851Z · LW · GW

I am compelled to express my disappointment that this comment was not posted more prominently. 

Habit formation is important and underrated, and I see a lot of triumphant claims from a lot of people but I don't actually see a lot of results that persuade me to change my habituation procedure. I myself have some successful years-old habits and I got them by a different process than what you've described. In particular, I skip twice all the time and it doesn't kill my longterm momentum.

And I hope you'll forgive the harshness if I harken back to point #4 of this comment.

Comment by mike_hawke on Your Cheerful Price · 2024-02-29T10:09:23.706Z · LW · GW

Q:  Wait, does that mean that if I give you a Cheerful Price, I'm obligated to accept the same price again in the future?

No, because there may be aversive qualities of a task, or fun qualities of a task, that scale upward or downward with repeating that task.  So the price that makes your inner voices feel cheerful about doing something once, is not necessarily the same price that makes you feel cheerful about doing it twenty times.

I feel like this needs a caveat about plausible deniability. Sometimes the price goes up or down for reasons that I don't want to make too obvious. Like if it turns out you have bad breath, or if my opportunity cost involves mingling with attractive people, or if you behaved badly yesterday and our peer group has wordlessly coordinated to lightly socially embargo you for a week and I don't want to be seen violating that. Anticipating some complication like that (consciously or not), I might want to hedge my initial price, or if that's mentally taxing, just weasel out of giving the cheerful price at all.

This is maybe all accounted for when you say that cheerful prices may not work for someone if Tell culture doesn't work for them. I think plausible deniability tends to be pretty important though, even among nerds who virtue signal otherwise.

Comment by mike_hawke on Rationality Research Report: Towards 10x OODA Looping? · 2024-02-27T01:10:45.839Z · LW · GW

If I'm building my own training and tests, there's always the risk of ending up "teaching to the test", even if unintentionally. I think it'd be cool if other people were working on "Holdout Questions From Holdout Domains", that I don't know anything about, so that it's possible to test if my programs actually output people who are better-than-baseline (controlling for IQ).


I am hoarding at least one or two fun facts that I have seen smart rationalists get wrong. Specifically, a claim was made, I asked, "huh, really?", they doubled down, and then later I looked it up and found that they were significantly wrong. Unfortunately I think that if I had read the book first and started the conversation with it in mind, I might not have discovered that they were confidently incorrect. Likewise, I think it would be hard to replicate this in a test setting.

Comment by mike_hawke on mike_hawke's Shortform · 2024-02-22T20:25:24.749Z · LW · GW

Here are some thoughts about numeracy as compared to literacy. There is a tl;dr at the end.


The US supposedly has a 95% literacy rate or higher. A 14yo English speaker in the US is almost always an English reader as well, and will not need much help interpreting an “out of service” sign or a table of business hours or a “Vote for Me” billboard. In fact, most people will instantaneously understand the message, without conscious effort--no need to look at individual letters and punctuation, nor any need to slowly sound it out. You just look, scan, and interpret a sentence in one automatic action. (If anyone knows a good comparison of the bitrates of written sentences vs pictograms, please share.)


I think there is an analogy here with numeracy, and I think there is some depth to the analogy. I think there is a possible world in which a randomly selected 14yo would instantly, automatically have a sense of magnitude when seeing or hearing about almost anything in the physical world--no need to look up benchmark quantities or slowly compute products and quotients. Most importantly, there would be many more false and misleading claims that would (instantly, involuntarily!) trigger a confused squint from them. You could still mislead them about the cost per watt of the cool new sustainability technology, or the crime rate in some distant city. But not too much more than you could mislead them about tangible things like the weight of their pets or the cost per calorie of their lunch or the specs of their devices. You could only squeeze so many OoMs of credibility out of them before they squint in confusion and ask you to give some supporting details.


Automatic, generalized, quantitative sensitivity of this sort is rare even among college graduates. It’s a little better among STEM graduates, but still not good. I think adulthood is too late to gain this automaticity, the same way it is too late to gain the automatic, unconscious literacy that elementary school kids get.

We grow up hearing stories about medieval castle life that are highly sanitized, idealized, and frankly, modernized, so that we will enjoy hearing them at all. And we like to imagine ourselves in the shoes of knights and royalty, usually not the shoes of serfs. That’s all well and good as far as light-hearted fiction goes, but I think it leads us to systematically underestimate not only the violence and squalor of those conditions, but less obviously, the low mobility and general constraint of illiteracy.

I wonder what it would be like to visit a place with very low literacy (and perhaps where the few existing signs are written in an unfamiliar alphabet). I bet it would be really disorienting. Everything you learn would be propaganda and motivated hearsay, and you would have to automatically assume much worse faith than in places where information flows quickly and cheaply. Potato prices are much lower two days south? Well, who claimed that to me, how did they hear it, and what incentives might they have to say it to me? Unfortunately there are no advertisements or PSAs for me to check against. Well, I’m probably not going to make that trip south without some firmer authority. I can imagine this information environment having a lot in common with the schoolyard.

My point is that it is easy to erroneously take for granted the dynamics of a 95%-literate society, when things suddenly seem very different after even a minute of deliberate imagination. It is a difference of that size that I think might separate our world from an imaginary place where 8-year-olds are trained to become as fluent in simple quantities as they are in written English.

 

Tl;dr: I think widespread literacy, and especially widespread fluency, is a modern miracle. I think people don't realize what a total lack of numerical fluency there is. I myself am not fluent in numbers--you can suggest absurd quantities to me and I will not automatically notice the absurdity the way I would automatically laugh at a sentence-construction error on a billboard.

Comment by mike_hawke on Raising children on the eve of AI · 2024-02-19T02:52:32.292Z · LW · GW

To me it feels pretty clear that if someone will have a reasonably happy life, it’s better for them to live and have their life cut short than to never be born.

I agree with this conditional, but I question whether the condition (that the person will have a reasonably happy life) is a safe assumption. For example, if you could go back in time to survey all of the hibakusha and their children, I wonder what they would say about that C.S. Lewis quotation. It wouldn't surprise me if many of them would consider it badly oversimplified, or even outright wrong.
 

My friend’s parents asked their priest if it was ok to have a child in the 1980s given the risk of nuclear war. Fortunately for my friend, the priest said yes.

This strikes me as some indexical sleight of hand. If the priests were instead saying no during the 1980s, wouldn't that have led to a baby boom in the 1990s...?

Comment by mike_hawke on mike_hawke's Shortform · 2024-02-15T19:50:11.741Z · LW · GW

I remember reading that fish oil pills do not seem to have the same effect as actual fish. So maybe the oily water will also be less effective.

Comment by mike_hawke on mike_hawke's Shortform · 2024-02-15T18:55:39.239Z · LW · GW

Should I drink sardine juice instead of dumping it down the drain?

 

I eat sardines that are canned in water, not oil, because I care about my polyunsaturated fatty acid ratio. They're very unappetizing but from my inexpert skimming, they seem like one of the best options in terms of health. But I only eat most of the flesh incidentally, with the main objective being the fat. This is why I always buy fish that is unskinned, and in fact I would buy cans of fish skin if it were easy.

So on this basis, is it worth it for me to just go ahead and choke down the sardine water as well? ...or perhaps instead? It is visibly fatty.

Comment by mike_hawke on CFAR Takeaways: Andrew Critch · 2024-02-15T01:48:41.785Z · LW · GW

Beware of Other-Optimizing?

Comment by mike_hawke on CFAR Takeaways: Andrew Critch · 2024-02-14T22:49:25.433Z · LW · GW

There aren't things lying around in my life that bother me because I always notice and deal with it.

I assume he said something more nuanced and less prone to blindspots than that.

Ten minutes a day is 60 hours a year. If something eats 10 minutes each day, you'd break even in a year if you spent a whole work week getting rid of it forever.

In my experience, I have not been able to reliably break even. This kind of estimate assumes a kind of fungibility that is sometimes correct and sometimes not. I think When to Get Compact is relevant here--it can feel like my bottleneck is time, when in fact it is actually attentional agency or similar. There are black holes that will suck up as much of our available time as they can.
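
For concreteness, a minimal sketch of the quoted arithmetic (which, per the caveat above, assumes the freed-up minutes really are fungible):

    def breakeven_days(minutes_lost_per_day: float, fix_hours: float) -> float:
        """Days until time invested in removing a daily annoyance pays for itself."""
        return fix_hours * 60 / minutes_lost_per_day

    # 10 minutes a day is ~61 hours a year; a 40-hour work week spent
    # removing the annoyance pays for itself in 240 days.
    print(10 * 365 / 60)           # ~60.8 hours lost per year
    print(breakeven_days(10, 40))  # 240.0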

External memory is essential to intelligence augmentation.

Highly plausible. Also perhaps more tractable and testable than many other avenues. I remember an old LW Rationality Quotation along the lines of, "There is a big difference between a human and a human with a pen and paper."

Comment by mike_hawke on TurnTrout's shortform feed · 2024-02-13T23:52:29.168Z · LW · GW

I feel like the more detailed image adds in an extra layer of revoltingness and scaryness (e.g. the sharp teeth) than would be appropriate given our state of knowledge.


Now I'm really curious to know what would justify the teeth. I'm not aware of any AIs intentionally biting someone, but presumably that would be sufficient.

Comment by mike_hawke on Dreams of AI alignment: The danger of suggestive names · 2024-02-13T22:45:24.272Z · LW · GW

Long comment, points ordered randomly, skim if you want.

1)
Can you give a few more examples of when the word "optimal" is/isn't distorting someone's thinking? People sometimes challenge each other's usage of that word even when just talking about simple human endeavors like sports, games, diet, finance, etc. but I don't get the sense that the word is the biggest danger in those domains. (Semi-related, I am reminded of this post.)

2)

When I try to point out such (perceived) mistakes, I feel a lot of pushback, and somehow it feels combative. I do get somewhat combative online sometimes (and wish I didn't, and am trying different interventions here), and so maybe people combat me in return. But I perceive defensiveness even to the critiques of Matthew Barnett, who seems consistently dispassionate.

Maybe it's because people perceive me as an Optimist and therefore my points must be combated at any cost.

Maybe people really just naturally and unbiasedly disagree this much, though I doubt it.

When you put it like this, it sounds like the problem runs much deeper than sloppy concepts. When I think my opponents are mindkilled, I see only extreme options available, such as giving up on communicating, or budgeting huge amounts of time & effort to a careful double-crux. What you're describing starts to feel not too dissimilar from questions like, "How do I talk my parents out of their religion so that they'll sign up for cryonics?" In most cases it's either hopeless or a massive undertaking, worthy of multiple sequences all on its own, most of which are not simply about suggestive names. Not that I expect you to write a whole new sequence in your spare time, but I do wonder if this makes you more interested in erisology and basic rationality.

3)

'The behaviorists ruined words like "behavior", "response", and, especially, "learning". They now play happily in a dream world, internally consistent but lost to science.'

I myself don't know anything about the behaviorists except that they allegedly believed that internal mental states did not exist. I certainly don't want to make that kind of mistake. Can someone bring me up to speed on what exactly they did to the words "behavior", "response", and "learning"? Are those words still ruined? Was the damage ever undone?

4)

perhaps implying an expectation and inner consciousness on the part of the so-called "agent"

That reminds me of this passage from EY's article in Time:

None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria. With that said, I’d be remiss in my moral duties as a human if I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.

The rule that most people aware of these issues would have endorsed 50 years earlier, was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.

I'm curious if you think this passage is also mistaken, or if it is correctly describing a real problem with current trajectories. EY usually doesn't bring up consciousness because it is not a crux for him, but I wonder if you think he's wrong in this recent time that he did bring it up.

Comment by mike_hawke on Attitudes about Applied Rationality · 2024-02-13T20:47:13.938Z · LW · GW

Also, here are a couple of links that seem relevant to me, even if they are not fully on-topic.
 

Schools Proliferating without Evidence

3 Levels of Rationality Verification

Comment by mike_hawke on Attitudes about Applied Rationality · 2024-02-13T19:48:33.635Z · LW · GW

Man, getting stereotyped feels bad, but unfortunately there is no alternative for humans. Great list. I might have drawn the boundaries differently, but I still like what you wrote.

 

I'll plant this flag right here and now: I feel some affinity for all of these attitudes, some more than others. Above all, I have only a vague and partial sense of what a rational culture would be like. Dath Ilan is inspiring, but also feels vague and partial. It does feel easy to imagine that we are not close to the frontier of efficiency, and that this is due to silly mistakes.

Comment by mike_hawke on story-based decision-making · 2024-02-13T19:21:12.831Z · LW · GW

Bezos gave some of his investors a 70% chance they'd lose their whole investment. Those investors...were his parents.

Elon Musk was hooked up to the PayPal Mafia social network.

Anyway, a lot of stories like that are misleading. My understanding is that those examples are mostly just their after-the-fact disclosures of their private thinking, not what they told investors at the time?

 

Thanks for the reply. Maybe I'll reread that chapter of the book and see if there are any sharp updates to make.

Comment by mike_hawke on story-based decision-making · 2024-02-10T00:54:26.207Z · LW · GW

Here are some questions this post raises for me.

  • Do people ever try to pitch you on projects, and if so, do the story-based pitches work better or worse than others?
  • Where are the investors that you expected? With Vision Fund way down, are the reality-based decision makers on the rise, or not?
  • "Look at bios of founders of their last few investments (as presented on company websites) and see if they follow a pattern. Look at the main characters of the movies they like. Look at their retweets and see what stupid memes they fall for." This sounds like advice on how to be a better grifter. Is there an implicit step 0 where you try and fail to get money from the less manipulable investors? Is your idea that if some entrepreneurial LW users swallow this particular red pill, they will be less held back by their maladaptive honesty and be more competitive in raising money, and that this will result in more rational entrepreneurs?
  • Have you read The Scout Mindset? In it, author Julia Galef gives examples of entrepreneurs who honestly and publicly gave low odds of success, but were able to raise funding and succeed anyway (like Musk and Buterin). Were these just random flukes? Did I get the wrong takeaway from that part of the book?
     
Comment by mike_hawke on More Hyphenation · 2024-02-09T02:19:30.929Z · LW · GW

Seems like brackets would remove this problem, at the cost of being highly nonstandard and perhaps jarring to some people.

I was jarred and grossed out the first time I encountered brackets used this way. But at the end of the day, I think 20th century writing conventions just aren't quite good enough for what we want to do on LW. (Relatedly, I have higher tolerance for jargon than a lot of other people.)

Caveat: brackets can be great for increasing the specificity of what you are able to say, but I sometimes see the specificity of people's thoughts fail to keep up with the specificity of their jargon and spoken concepts, which can be grating.

Comment by mike_hawke on More Hyphenation · 2024-02-09T02:07:17.602Z · LW · GW

Refactoring your writing for clarity is taxing and will reduce overall word count on LW. That would be an improvement for some users but not others.

I know some major offenders when it comes to unnecessary-hyphenation-trains, but usually I still find all their posts and comments net positive.

Of course, I would be happy if those users could increase clarity without sacrificing other things.

Comment by mike_hawke on Medical Roundup #1 · 2024-01-17T00:25:12.163Z · LW · GW

I clicked on the heart disease algorithm link, and it was just a tweet of screenshots, with no link to the article. I typed the name of the article into my search bar so that I could read it.

Your commentary about this headline may be correct, but I find it questionable after reading the whole article. The article includes the following paragraph: 

Two years ago, a scientific task force of the National Kidney Foundation and American Society of Nephrology called for jettisoning a measure of kidney function that adjusted results by race, often making Black patients seem less ill than they are and leading to delays in treatment.

I find that claim questionable as well, but not in a way that increases my credence in your summary. I clicked through again to an NEJM article mentioned in the NYT article, and it went into detail about how the racial corrections are made. My current belief is now, "this stuff is controversial for seemingly real reasons. Benefits & harms may both be present, and I do not know which way the scales tip." Hardly a slam dunk against the woke menace, which is the impression I had when I first clicked your link.

Am I wrong? Do you stand by your summary? Did you read the article? Do you contend that you didn't need to read it?

Perhaps ironically, I didn't read your whole post before commenting. It's possible that you have some appropriate disclaimer somewhere in it, which I missed in my skim. If not though, I want to at least flag this, because I see potential for misinformation cascades if I don't :/

Comment by mike_hawke on On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche · 2024-01-12T00:51:19.906Z · LW · GW

I agree with this, and I've been fairly unimpressed by the critiques of steelmanning that I've read, including Rob's post. Maybe I would change my mind if someone wrote a long post full of concrete examples[1] of steelmen going wrong, and of ITT being an efficient alternative. I think I pretty regularly see good reasoning and argumentation in the form of steelmen.

  1. ^

    But it's not trivial to contrive examples of arguments that will effectively get your point across without triggering, distracting, or alienating too much of the audience.

Comment by mike_hawke on mike_hawke's Shortform · 2024-01-10T22:16:42.795Z · LW · GW

I've seen a few people run the thought experiment where one imagines the best life a historical person could live, and/or the most good they could do. There are several variants, and you can tune which cheat codes they are given. People seem to get different answers, and this has me pretty curious.

  • Eliezer said in the Sequences that maybe all this rationality stuff just wouldn't help a 14th century peasant at all, unless they were given explicit formulas from the future. (See also, the Chronophone of Archimedes.)
  • I've heard people ask why the industrial revolution didn't happen in China, and whether that was a contingent fluke of history, or a robust result of geography and culture. (I should admit at this point that I haven't read any of the Progress Studies stuff, but I want to.)
  • I think Paul or Ajeya or Katja have wondered aloud about what a medieval peasant or alchemist could have done if they had been divinely struck by the spirit of EA and rationality, and people have argued back and forth about how sticky various blockers were. (I would appreciate links if anyone has them.)
  • I skimmed that recent post about "Social Dark Matter" and wondered if 1950s America would have been much better if the social dark matter meme had somehow gotten a big edge in the arena of ideas.

I realized that I'm really uncertain about historical trajectories at pretty much every level. I am really unsure whether the Roman empire could have lasted longer and made a few extra major advancements (like steam engines and evolutionary biology). I'm unsure whether a medieval peasant armed with modern textbooks could have made a huge dent in history. And I'm also pretty unsure what it would have taken for 20th century America to have made faster moral progress than it did.

But I do notice that the more recent the alternate history, the less clueless I feel, which is a little motivating[1]. So here are a few prompts off the top of my head:

  1. What advice would you send back to your 10-year-old self if you weren't allowed to give lottery numbers or spoilers about global-scale events?
  2. What could your parents or their immediate peers have done differently that would have substantially improved their material circumstances, emotional lives, or moral rectitude? What would have been the costs?
  3. Could you get any easy wins if you were allowed to magically advertise one book or article to American intellectuals in the 1950s? (They can be wins of any size: global catastrophic risks, social dark matter, or your own pet peeve.)
  1. ^

    Possibly spurious, but this kind of reminds me of Read History of Philosophy Backwards

Comment by mike_hawke on mike_hawke's Shortform · 2024-01-08T20:27:01.274Z · LW · GW

Over a year later, I stand by this sentiment. I think this thought experiment is important and underrated.