Posts

When apparently positive evidence can be negative evidence 2022-10-20T21:47:37.873Z
A whirlwind tour of Ethereum finance 2021-03-02T09:36:23.477Z
Group rationality diary, 1/9/13 2013-01-10T02:22:31.232Z
Group rationality diary, 12/25/12 2012-12-25T21:51:47.250Z
Group rationality diary, 12/10/12 2012-12-11T11:50:26.990Z
Group rationality diary, 11/28/12 2012-11-28T09:08:11.802Z
Group rationality diary, 11/13/12 2012-11-13T18:39:42.946Z
Group rationality diary, 10/29/12 2012-10-30T18:01:56.568Z
Group rationality diary, 10/15/12 2012-10-16T05:29:24.231Z
Group rationality diary, 10/1/12 2012-10-02T09:15:30.156Z
Group rationality diary, 9/17/12 2012-09-19T11:08:39.965Z
Group rationality diary, 9/3/12 2012-09-04T09:42:59.884Z
Group rationality diary, 8/20/12 2012-08-21T09:42:35.016Z
Group rationality diary, 8/6/12 2012-08-08T05:58:52.441Z
Group rationality diary, 7/23/12 2012-07-24T08:49:25.064Z
Group rationality diary, 7/9/12 2012-07-10T08:35:27.873Z
Group rationality diary, 6/25/12 2012-06-26T08:31:53.427Z
Group rationality diary, 6/11/12 2012-06-12T06:39:20.052Z
Group rationality diary, 6/4/12 2012-06-05T04:12:18.453Z
Group rationality diary, 5/28/12 2012-05-29T04:10:25.364Z
Group rationality diary, 5/21/12 2012-05-22T02:21:34.704Z
Group rationality diary, 5/14/12 2012-05-15T03:01:19.152Z
Gerald Jay Sussman talk on new ideas about modeling computation 2011-10-28T01:29:53.640Z

Comments

Comment by cata on Thomas Kwa's Shortform · 2024-03-06T23:46:56.802Z

Thanks, I didn't realize that this PC fan idea had made air purifiers so much better since I bought my Coway, so this post made me buy one of the Luggable kits. I'll share this info with others.

Comment by cata on If you weren't such an idiot... · 2024-03-03T08:29:56.306Z

I disagree with the summarization suggestion for the same reason that I disagree with many of the items -- I don't have (much of) the problem they are trying to solve, so why would I expend effort to attack a problem I don't have?

The most obvious is "carrying extra batteries for my phone." My phone never runs out of battery; I should not carry batteries that I will never use. Similarly: I don't have a problem with losing things, such that I need extras. (If I had extras, I would plausibly give them away to save physical space!) I don't find myself wishing I remembered more of my thoughts, such that I should take the effort to capture and retain them. And I don't feel the need to remember more than I already do about the stuff I read, so I'm not inclined to take time away from the rest of my life to spend on remembering more.

Comment by cata on If you weren't such an idiot... · 2024-03-03T06:14:07.392Z

Are you really saying you think everything on this list is "obviously" beneficial? I probably only agree with half the stuff on the list. For example, I certainly disagree that I should "summarize things that I read" (?) or that I should have a "good mentor" by emailing people to request that they mentor me.

Comment by cata on Acting Wholesomely · 2024-02-27T07:35:43.914Z

I specifically think it's well within the human norm, i.e. that most of the things I read are written by a person who has done worse things, or who would do worse things given equal power. I have done worse things, in my opinion. There's just not a blog post about them right now.

Comment by cata on Acting Wholesomely · 2024-02-27T05:54:40.897Z

Speaking for myself, I don't agree with any of it. From what I have read, I don't agree that the author's personal issues demonstrate "some amount of poison in them" outside the human norm, or in some way that would make me automatically skeptical of anything they said "entwined with soulcrafting." And I certainly don't agree that a reader "should be aware" of nonspecific problems that an author has which aren't even clearly relevant to something they wrote. I would give the exact opposite advice -- to try to focus on the ideas first before involving preconceptions about the author's biases.

Comment by cata on LessWrong Is Very Wrong: Ultimately All Social Media Platforms Are The Same · 2024-02-13T06:56:11.691Z

If you wanted other people to consider this remark, you shouldn't have deleted whatever discussion you had that prompted it; then we could have gone and looked.

Comment by cata on Would you have a baby in 2024? · 2023-12-25T23:25:52.403Z

Yes, I basically am not considering that because I am not aware of the arguments for why that's a likely kind of risk (vs. the risk of simple annihilation, which I understand the basic arguments for.) If you think the future will be super miserable rather than simply nonexistent, then I understand why you might not have a kid.

Comment by cata on Would you have a baby in 2024? · 2023-12-25T20:58:28.954Z

I don't agree with that. I'm a parent of a 4-year-old, and I take AI risk seriously. I think childhood is great in and of itself, and if the fate of my kid is to live until 20 and then experience some unthinkable AI apocalypse, that was 20 more good years of life than he would have had otherwise. If that's the deal of life, it's a pretty good deal, and I don't think there's any reason to be particularly anguished about it on your kid's behalf.

Comment by cata on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-19T22:24:39.190Z

Thanks for the post. Your intuition as someone who has observed lots of similar arguments and the people involved in them seems like it should be worth something.

Personally as a non-involved party following this drama the thing I updated the most about so far was the emotional harm apparently done by Ben's original post. Kat's descriptions of how stressed out it made her were very striking and unexpected to me. Your post corroborates that it's common to take extreme emotional damage from accusations like this.

I am sure that LW has other people like me who are natural psychological outliers on "low emotional affect" or maybe "low agreeableness" who wouldn't necessarily intuit that it would be a super big deal for someone to publish a big public post accusing you of being an asshole. Now I understand that it's a bigger deal than I thought, and I am more open to norms that are more subtle than "honestly write whatever you think."

Comment by cata on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-19T22:15:41.098Z

I am skeptical of the gender angle, but I think it's being underdiscussed that, based on the balance of evidence so far, the person with the biggest, most effective machine gun is $5000 the richer and still anonymous, whereas the people hit by their bullets are busy pointing fingers at each other. Alice's alleged actions trashing Nonlinear (and 20-some former people???) seem IMO much worse than anything Lightcone or Nonlinear is even being accused of.

(Not that this is a totally foregone conclusion - I noticed that Nonlinear didn't provide any direct evidence on the claim that Alice was a known serial liar outside of this saga.)

Comment by cata on South Bay Pre-Holiday Gathering · 2023-12-13T08:11:57.885Z

Are little kids welcome?

Comment by cata on Send us example gnarly bugs · 2023-12-11T03:31:22.523Z

I just had a surprisingly annoying version of a very mundane bug. I was working in Javascript and I had some code that read some parameters from the URL and then did a bunch of math. I had translated the math directly from a different codebase so I was absolutely sure it should be right; yet I was getting the wrong answer. I console.logged the inputs and intermediate values and was totally flummoxed because all the inputs looked clearly right, until at some point a totally nonsense value was produced from one equation.

Of course, the inputs and intermediate values were strings that I forgot to parse into Javascript numbers, so everything looked perfect until finally it plugged them into my equation, which silently did string operations instead of numeric operations, producing an apparently absurd result. But it took a good 20 minutes of me plus my coworker staring at these 20 lines of code and the log outputs until I figured it out.
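
If you want the flavor of it, here's a minimal reconstruction of the bug (hypothetical parameter names and values, not the actual code):

```javascript
// URLSearchParams.get() always returns strings (or null), never numbers.
const params = new URLSearchParams("?width=3&height=4");
const width = params.get("width");   // "3" -- logs just like the number 3
const height = params.get("height"); // "4"

console.log(width, height); // 3 4 -- looks perfectly fine in the console

console.log(width + height); // "34" -- '+' silently concatenates strings
console.log(width * height); // 12  -- '*' coerces to numbers, so some
                             //        equations even produce correct output

// The fix: parse before doing math.
const w = Number(width);
const h = Number(height);
console.log(w + h); // 7
```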

Comment by cata on Do websites and apps actually generally get worse after updates, or is it just an effect of the fear of change? · 2023-12-10T21:14:48.168Z

I think the most basic and true explanation is that the companies we are thinking about started out with unusually high-quality products, which is why they came to our notice. Over time, the conditions that enabled them to do especially good work change and their ability tends to regress to the mean. So then the product gets worse.

Related ideas:

  • High-quality product design is not very legible to companies and it's hard for them to select for it in their hiring or incentive structure.

  • Companies want to grow for economy-of-scale reasons, but the larger a company is the more challenging it is to organize it to do good work.

  • Of course, leaving the product alone entirely seems ridiculous to a company, particularly one whose investors all invested on the premise of dramatic growth.

  • In many cases, a company probably originally designed a product that they themselves liked, and they happened to be representative enough of a potential market that they became successful and their product was well-liked. Then the next step is to try to design for a mass market that is typically unlike themselves (since companies are usually made up of a kind of specific homogeneous employee base.) That's much harder and they may guess wrong about what that mass market will like.

Comment by cata on why did OpenAI employees sign · 2023-11-27T15:48:42.811Z

I have no inside information. My guess is #5 with a side of 1, 6, and "the letter wasn't legally binding anyway so who cares."

I think that the lesson here is that if your company says "Work here for the principles in this charter. We also pay a shitload of money" then you are going to get a lot of employees who like getting paid a shitload of money regardless of the charter, because those are much more common in the population than people who believe the principles in the charter and don't care about money.

Comment by cata on AI debate: test yourself against chess 'AIs' · 2023-11-25T17:23:32.240Z

Interesting. I agree, I didn't even notice that Bb3 would be attacking a4, I was just thinking of it as a way to control the d-file. I hadn't really thought about how good that position would be if white just did "not much."

I also hadn't really thought about exactly how much better black was after the final position in the Qxb5 line (with Bxd5 exd5), it was just clear to me black was better and the position was personally appealing to me (it looks kind of one-sided, where white has no particular counterplay and black can sit around maneuvering all day to try to pick up a pawn.) Very difficult for me to guess whether it should be objectively winning or not.

Fun exercise, thanks for making it!

Comment by cata on AI debate: test yourself against chess 'AIs' · 2023-11-22T23:35:59.331Z

I'm 2100 USCF. I looked at the first position for a few minutes and read the AI reasoning. My assessment:

  1. For myself, I thought 1...Qxb5 was natural and strong, considering 2. Nxb5 c6 3. Nc3 or Na3 with black control over b3 and c4, and 2. axb5 looks odd after Nc5. Black's minor pieces look superior in both cases.
  2. I thought that it was strange that neither AI mentioned 1...Qxb5 2. Nxb5.
  3. I thought that 1...Qc5 with the plan of c6 looked kind of artificial. I thought a better structure for the black queenside would be with the pawns on dark squares, providing an outpost for the knight on c5, and the long diagonal completely vacated.
  4. AI A's refutation of Qxb5 was total nonsense, as AI B pointed out. The final position is obviously better for black.

In the end, I would play Qxb5 and feel confident Black is doing well. I can't refute Qc5, though; I think it's probably sort of OK too. But if only one of them is a good move, then I think it's Qxb5.

Comment by cata on Vote on worthwhile OpenAI topics to discuss · 2023-11-22T07:38:14.788Z

I don't feel this pressure. I just decline to answer when I don't have a very substantial opinion. I do notice myself sort of judging the people who voted on things where clearly the facts are barely in, though, which is maybe an unfortunate dynamic, since others may reasonably interpret it as "feel free to take your best guess."

Comment by cata on Who is Sam Bankman-Fried (SBF) really, and how could he have done what he did? - three theories and a lot of evidence · 2023-11-11T03:21:51.503Z

I think "theory B" (DAE + EA) is likely true, but it also seems like he was independently considerably incompetent. The anecdotes about his mismanagement at Alameda and FTX (e.g. total lack of accounting, repeated expensive security breaches, taking objectively dumb risks, not sleeping, alienating the whole Alameda team by being so untrustworthy) weren't clever utilitarian coinflip gambits that he got unlucky on, or selfish defections that he was trying to get away with. They were just dumb mistakes.

My guess is that a number of those mistakes largely came from a kind of overapplication of startup culture (move fast, break things, grow at all costs, minimize bureaucracy, ask forgiveness rather than permission) way past the point where it made sense. Until the end he was acting like he was running a ten-person company that had to 100x or die, even though he was actually running a medium-sized popular company with a perfectly workable business model. (Maybe he justified this to himself by thinking of it like he had to win even bigger to save the world with his money, or something, I don't know.)

Since he was very inexperienced and terrible at taking advice, I don't think there's anything shocking about him being really bad at being in charge of a company moving a lot of money, regardless of how smart he was.

Comment by cata on Vote on Interesting Disagreements · 2023-11-08T06:13:29.011Z

I work at Manifold. I don't know if this is true, but I can easily generate some arguments against:

  • Manifold's business model is shaky and Manifold may well not exist in 3 years.
  • Manifold's codebase is also shaky and would not survive Manifold-the-company dying right now.
  • Manifold is quite short on engineering labor.
  • It seems to me that Manifold and LW have quite different values (Manifold has a typical startup focus on prioritizing growth at all costs) and so I expect many subtle misalignments in a substantial integration.

Personally for these reasons I am more eager to see features developed in the LW codebase than the Manifold codebase.

Comment by cata on Experiments as a Third Alternative · 2023-10-29T02:25:36.908Z

I tried Adderall and Ritalin each just for one day and it was totally clear to me based on that that I wasn't interested in taking them on a regular basis.

Comment by cata on Boost your productivity, happiness and health with this one weird trick · 2023-10-20T01:19:41.733Z

FWIW, I went from ~40 hrs/week of full-time programming to ~15 hrs/week of part-time programming after having a kid, and it's not obvious to me that I get less total work done. Certainly not half as much. But I would never have said I worked hard, so I could have predicted as much.

Comment by cata on Prediction markets covered in the NYT podcast “Hard Fork” · 2023-10-13T20:48:41.591Z

Never mind bettors -- part of my project for improving the world is, I want people like Casey to look at a prediction market and be like, "Oh, a prediction market. I take this probability seriously, because if it was obviously wrong, someone could come in and make money by fixing it, and then it would be right." If he doesn't understand that line of argument, then indeed, why is Casey ever going to take the probability any more seriously than a Twitter poll?

I feel like right now he might have the vibe of that argument, even if he doesn't actually understand it? But I think you have to really comprehend the argument before you will take the prediction market more seriously than your own uninformed feeling about the topic, or your colleague's opinion, or one research paper you skimmed.

Comment by cata on Prediction markets covered in the NYT podcast “Hard Fork” · 2023-10-13T20:00:12.941Z

I work at Manifold. I think it's notable that these two experienced tech journalists have had lots of repeated exposure to the idea of prediction markets, but it sounds like they only sort of figured out the basic concept?

  • In the discussion on insider trading, nobody mentions the extremely obvious point, which is that the prediction market is trying to incentivize the people with private information (maybe "insider" information, or maybe just something they haven't said out loud) to publicize what they know. If Casey actually cares about whether Linda Yaccarino will be the CEO of X next year, he should be excited by the idea that some guy at Twitter will come and insider trade in his market. But they never said anything like this -- they just said that maybe the market was supposed to very generally aggregate the wisdom of crowds.

  • It also sounds like they don't really understand why it would aggregate the wisdom of crowds better than, for example, a poll. Casey was like "well, when people have a Twitter poll, then partisans stuff the ballot box", implying that a similar result would be likely to happen with a prediction market on who will be the next Speaker, ignoring the obvious point that it costs a bunch of money to "stuff the ballot box" on a prediction market that isn't a self-fulfilling prophecy.

  • Perhaps relatedly, it sounded like Kevin and/or Casey had absolutely no clue how a prediction market actually works, numerically. At the end when they were making the market, Casey wasn't like "OK, bet it to 25%, since I think that's the chance." Instead Kevin was like "OK, I'll bet 100 mana," and then they were like "Huh, how about that, now it says 10%. Oops, I bet 100 more and now it says 8%." It seems like they are totally missing the core concept that a prediction market specifically incentivizes you to move the price to the probability you believe, which is like the first thing I ever learned about prediction markets in my life? (See the sketch below.)
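
To make the numeric point concrete, here's a toy market maker in the style of the logarithmic market scoring rule (a standard textbook mechanism; Manifold's actual implementation is different, and the liquidity number here is made up):

```javascript
// Toy LMSR market for a YES/NO question.
const b = 100; // liquidity parameter: higher b means prices move less per bet

// Total cost of the current share state; a trader pays the difference
// in cost before and after their purchase.
const cost = (qYes, qNo) =>
  b * Math.log(Math.exp(qYes / b) + Math.exp(qNo / b));

// The displayed probability is the marginal price of a YES share.
const prob = (qYes, qNo) =>
  Math.exp(qYes / b) / (Math.exp(qYes / b) + Math.exp(qNo / b));

let qYes = 0, qNo = 0;
console.log(prob(qYes, qNo).toFixed(2)); // "0.50" -- fresh market

// Buying 50 YES shares mechanically pushes the quoted probability up.
const paid = cost(qYes + 50, qNo) - cost(qYes, qNo); // ~30.9 mana
qYes += 50;
console.log(prob(qYes, qNo).toFixed(2)); // "0.62"

// A risk-neutral bettor's expected profit is maximized by buying exactly
// until the quoted probability equals their own credence -- which is why
// the displayed number carries information.
```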

In the end, their feelings about prediction markets seemed totally vague and vibes-based. On the one hand, the wisdom of crowds has good vibes. On the other hand, insider trading and crypto/transactionalization of everyday things have bad vibes. On the gripping hand, gambling with play money is cute and harmless. Therefore, prediction markets are a land of contrasts.

My takeaway is that prediction markets are harder to understand than I think and I am not sure what to do about that.

Comment by cata on Would You Work Harder In The Least Convenient Possible World? · 2023-09-22T20:53:16.904Z

I am mostly like Bob (although I don't make up stuff about burnout), but I think calling myself a utilitarian is totally reasonable. By my understanding, utilitarianism is an answer to the question "what is moral behavior." It doesn't imply that I want to always decide to do the most moral behavior.

I think the existence of Bob is obviously good. Bob is in, like, the 90th percentile of human moral behavior, and if other people improved their behavior, Bob is also the kind of person who would reciprocally improve his own. If Alice wants to go around personally nagging everyone to be more altruistic, then that's her prerogative, and if it really works, I am even for it. But firstly, I don't see any reason to single out Bob, and secondly, I doubt it works very well.

Comment by cata on Sharing Information About Nonlinear · 2023-09-12T21:33:16.309Z

I apologize for derailing the N(D|D)A discussion, but it's kind of crazy to me that you think that Nonlinear (based on the content of this post?) has crossed a line, by a large margin, such that you wouldn't work with them. Why not? That post you linked is about working with murderers, not working with business owners who seemingly took advantage of their employees for a few months, or who made a trigger-happy legal threat!

Compared to (for example) any random YC company with no reputation to speak of, I didn't see anything in this post that made it look like working with them would either be more likely to be regrettable for you, or more likely to be harmful to others, so what's the problem?

Comment by cata on Sharing Information About Nonlinear · 2023-09-08T01:45:55.725Z

Yes, that's what I was thinking. To me the lawsuit threat is totally beyond the pale.

Comment by cata on Sharing Information About Nonlinear · 2023-09-07T19:37:24.656Z

Relevant: https://www.lesswrong.com/posts/NCefvet6X3Sd4wrPc/uncritical-supercriticality

And it is triple ultra forbidden to respond to criticism with violence. There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses. This is one of them. Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.

Comment by cata on If I Was An Eccentric Trillionaire · 2023-08-09T20:28:50.157Z

There's already a well-written (I only read the first) history of part of EVE Online: https://www.amazon.com/dp/B0962ZVWPG

Comment by cata on On being in a bad place and too stubborn to leave. · 2023-08-06T17:50:23.744Z

Based on your story I am not sure what the issues that need solving are?

I know that I’m better than in 2020, but in 2019 I was a smart and promising student interested in what I was doing, and I see no way of going back to something like that?

Well, nobody is a student forever, regardless of how much they like college.

I’d have to deal with the massive shame of having, almost deliberately, chosen to obstinately screw things up for four years.

Is that massively shameful? AFAIK it's common for college students to choose their major poorly, get depressed, etc.

And if I could get that depressed in the past, surely I have a ‘major depression’ sword of Damocles hanging over my future as well?

Maybe? The circumstances seem pretty unusual.

And again, my current degree basically won’t get me anywhere

It can just get you all the normal jobs in the world that normal people can do, right?

There are things I’d like better — though I’m not fully sure what exactly they are — but today, at 22, I may not have a way to pursue these things.

So what? You don't even know what they are. Also, probably not? Why wouldn't you be able to pursue them just as well as you could ever have?

My advice is to just chill out and focus on whatever object level things in your life you are working on, rather than dream about some hypothetically better way the last 4 years could have panned out for you.

Comment by cata on A Hill of Validity in Defense of Meaning · 2023-07-15T21:54:23.550Z

Sorry, but I just wasn't able to read the whole thing carefully, so I might be missing your relevant writing; I apologize if this comment retreads old ground.

It seems to me like the reasonable thing to do in this situation is:

  • Make whatever categories in your map you would be inclined to make in order to make good predictions. For example, personally I have a sort of "trans women" category based on the handful of trans women I have known reasonably well, which is closer to the "man" category than to the "woman" category, but has some somewhat distinct traits. Obviously you have a much more detailed map than me about this.
  • Use maximally clear, straightforward, and honest language representing maximally useful maps in situations where you are mostly trying to seek truth in the relevant territory. For example, "trying to figure out good bathroom policy" would be a really bad time to use obfuscatory language in order to spare people's feelings. (Likewise this comment.)
  • Be amenable to utilitarian arguments for non-straightforward or dishonest language in situations where you are doing something else. For example, if I have a trans coworker who I am trying to cooperate with on some totally unrelated work, and they have strong personal preferences about my use of language that aren't very costly for me, I am basically just happy to go along with those. Or if I have a trans friend who I like and we are just talking about whatever for warm fuzzies reasons, I am happy to go along with their preferences. (If it's a kind of collective political thing, then that brings other considerations to bear; I don't care to play political games.)

Introspectively, I don't think that my use of language in the third point is messing up my ability to think -- it's extremely not confusing to think to myself in my map-language, "OK, this person is a trans woman, which means I should predict they are mostly like a man except with trans-woman-cluster traits X, Y, and Z, and a personal preference to be treated 'like' a woman in many social circumstances, and it's polite to call them 'she' and 'her'." I also don't get confused if other people do the things that I learned are polite; I don't start thinking "oh, everyone is treating this trans woman like a woman, so now I should expect them to have woman-typical traits like P and Q."

The third point is the majority of interactions I have, because I mostly don't care or think very much about gender-related stuff. Is there a reason I should be more of a stickler for maximizing honesty and straightforwardness in these cases?

Comment by cata on My "2.9 trauma limit" · 2023-07-01T20:44:35.459Z

Did you ever figure out whether any parts of your new ~3 traumas were somehow in fact downstream of past unprocessed traumas that you didn't understand? Or was this really just new stuff happening to you, and you were correct to believe that the 2018 trauma advocates weren't talking about anything related to your life?

Comment by cata on Book Review: How Minds Change · 2023-05-29T00:10:28.114Z

I think few of us in the alignment community are actually in a position to change our minds about whether alignment is worth working on. With a p(doom) of ~35% I think it's unlikely that arguments alone push me below the ~5% threshold where working on AI misuse, biosecurity, etc. become competitive with alignment. And there are people with p(doom) of >85%.

This makes little sense to me, since "what should I do" isn't a function of p(doom) alone. It's a function of both p(doom) and your inclinations, opportunities, and comparative advantages. There should be many people for whom, rationally speaking, a difference between 35% and 34% would change their ideal behavior.

Comment by cata on When will computer programming become an unskilled job (if ever)? · 2023-03-16T23:43:16.519Z

Since there's a very broad spectrum of different kinds of computer programs with different constraints and desiderata, I think the transition will be very gradual. Consider the following things that are all computer programming tasks:

  • Helping non-technical people set up a simple blog.
  • Identifying and understanding the cause of unexpected behavior in a large, complicated existing system.
  • Figuring out how to make a much cheaper-to-run version of an existing system that uses too many resources.
  • Experimenting with a graphics shader in a game to see how you can make an effect that is really cool looking.
  • Implementing a specific known cryptographic algorithm securely.
  • Writing exploratory programs that answer questions about some data set to help you understand patterns in the data.

I have no doubt that sufficiently fancy AI can do or help human programmers do all these tasks, but probably in different ways and at different rates.

As an experienced programmer who can do most of these things well, I would be very surprised if my skillset were substantially obsolete in less than 5 years, and somewhat surprised if it were substantially obsolete in less than 10. It seems like GPT-3 and GPT-4 are not really very close to being able to do these things as well as me, or to being able to help a less skilled human do them as well as me.

Comment by cata on Shutting Down the Lightcone Offices · 2023-03-15T02:58:10.954Z

As a LW veteran interested in EA I also perceive a lot of the dynamics you wrote about and they really bother me. Thank you for your hard and thoughtful work.

Comment by cata on Should you refrain from having children because of the risk posed by artificial intelligence? · 2023-03-10T03:30:06.393Z

It seems to me that the world into which children are born today has a high likelihood of being really bad.

Why? You didn't elaborate on this claim.

I would certainly be willing to have a kid today (modulo all the mundane difficulty of being a parent) if I was absolutely, 100%, sure that they would have a painful death in 30 years. Your moral intuitions may vary. But have you considered that it's really good to have a fun life for 30 years?

Comment by cata on The Kids are Not Okay · 2023-03-09T00:29:13.499Z

When I was in middle school and high school (Michigan, 1996-2004) the only identifiable political engagement I remember any of my peers ever doing was that we knew it was funny to make fun of George Bush for being an idiot. I don't remember ever having any other conversation with my friends about any political topic. I certainly had no clue what was going on in politics, outside of knowing who the president was, and knowing that 9/11 happened, and knowing that the Iraq War existed. So the idea that teenagers have political opinions now is also striking to me.

Comment by cata on Acting Normal is Good, Actually · 2023-02-11T06:45:07.251Z

Maybe this is just quibbling about words, but I don't like conflating "weird" with "uncharismatic", "awkward", "disagreeable", "picky", etc. There are many very weird (unusual; people will be surprised if you do them; you will stand out) behaviors that don't result in the kinds of drawbacks you listed, e.g. signing up for cryonics, or doing moral reasoning from first principles, or having polyamorous relationships.

They only have those drawbacks if you then do things like discuss them tactlessly, or make them central to your social identity, or act judgmentally or intolerantly towards people who behave normally, etc. But you can usually just not do those things.

Comment by cata on SolidGoldMagikarp II: technical details and more recent findings · 2023-02-07T04:46:17.555Z

I'm not a machine learning researcher, but this is fascinating and I can't wait to see what else you can dig up about this phenomenon!

Comment by cata on My Model Of EA Burnout · 2023-01-27T07:35:54.367Z

But cata, where does your "stuff that seems like it would be a good idea to do right now" queue come from? If you cannot see its origin, why do you trust that it arises primarily from your true values?

Well, I trust that because at the end of the day I feel happy and fulfilled, so they can't be too far off.

I believe you that many people need to see the things that are invisible to them, that just isn't my personal life story.

Comment by cata on My Model Of EA Burnout · 2023-01-26T22:45:45.945Z

What you say makes sense. I think most of the people "doing whatever it is that people do" are making a mistake.

The connection to "masking" is very interesting to me. I don't know much about autism so I don't have much background about this. I think that almost everyone experiences this pressure towards acting normal, but it makes sense that it especially stands out as a unique phenomenon ("masking") when the person doing it is very not-normal. Similarly, it's interesting that you identify "independence" as a very culturally-pushed value. I can totally see what you mean, but I never thought about it very much, which on reflection is obviously just because I don't have a hard time being "the culturally normal amount of independent", so it never became a problem for me. I can see that the effect of the shared culture in these cases is totally qualitatively different depending on where a person is relative to it.

One of the few large psychological interventions I ever consciously did on myself was in about 2014 when I went to one of the early CFAR weekend workshops in some little rented house around Santa Cruz. At the end of the workshop there was a kind of party, and one of the activities at the party was to write down some thing you were going to do differently going forward.

I thought about it and figured that I should basically stop trying to be normal (which is something I had previously thought was actively virtuous, for reasons that are now fuzzy to me, and would consciously try to do -- not that I was ever successfully super normal, but I was aiming in that direction.) It seemed like the ROI on being normal was just crappy in general and I had had enough of it. So that's what I did.

It's interesting to me that some people would have trouble with the "how to live more authentically instead" part. My moment to moment life feels like, there is a "stuff that seems like it would be a good idea to do right now" queue that is automatically in my head, and I am just grabbing some things out of it and doing them. So to me, the main thing seems to be eliminating any really dumb biases making me do things I don't value at all, like being normal, and then "living more authentically" is what's left.

But that's just my way -- it would make sense to me if other people behaved more strategically more often, in which case I guess they might need to do a lot more introspection about their positive values to make that work.

Comment by cata on My Model Of EA Burnout · 2023-01-26T09:36:24.359Z

I want to say something, but I'm not really sure how to phrase it very precisely, but I will just say the gist of it in some rambly way. Note: I am very much on the periphery of the phenomenon I am trying to describe, so I might not be right about it.

Most EAs come from a kind of western elite culture that right now assigns a lot of prestige to, like, being seen to be doing Important Work with lots of Power and Responsibility and Great Meaning, both professionally and socially.

"I am devoting my life to solving the most important problems in the world and alleviating as much suffering as possible" fits right into the script. That's exactly the kind of thing you are supposed to be thinking. If you frame your life like that, you will fit in and everyone will understand and respect what is your basic deal.

"I am going to have a pleasant balance of all my desires, not working all that hard, spending some time on EA stuff, and the rest of the time enjoy life, hang out, read some books, and go climbing" does not fit into the script. That's not something that anyone ever told you to do, and if you tell people you are going to do that, they will be surprised at what you said. You will stand out in a weird way.

Example anecdote: A few years ago my wife and I had a kid while I was employed full-time at a big software company that pays well. I had multiple discussions roughly like this with my coworkers:

  • Me: My kid's going to be born this fall, so I'll be taking paternity leave, and it's quite likely I will quit after, so I want to figure out what to do with this thing that I am responsible for.
  • Them: What do you mean, you will quit after?
  • Me: I mean I am going to have a baby, and like you, they paid me lots of money, so my guess is that I will just hang out being a parent with my wife and we can live off savings for a while.
  • Them: Well, you don't have to do that! You can just keep working.
  • Me: But doesn't it sound like if you were ever going to not work, the precise best time would be right when you have your first kid? Like, that would be literally the most common sense time in your life to choose not to work, and pay attention to learning about being a parent instead? I can just work again later.
  • Them: [puzzled] Well, you'll see what I mean. I don't think you will quit.

And then they were legitimately surprised when I quit after paternity leave, because it's unusual for someone (at least for a man) to do that, regardless of whether they have saved a bunch of money from being a programmer. The normal thing to do is to let your work define your role in life and give you all your social capital, so that it's basically your number 1 priority and everything else is a sideshow.

So it makes total sense to me that EAs who came from this culture decide that EA should define their role in life and give them all their social capital and be their number 1 priority, and it's not about a failure of introspection, or about a conscious assessment of their terminal values that turned out wrong. It's just the thing people do.

My prediction would be that among EAs who don't come from a culture with this kind of social pressure, burnout isn't really an issue.

Comment by cata on How to Convince my Son that Drugs are Bad · 2022-12-17T21:42:21.413Z

I notice that none of your quotes from him discuss risk of addiction. I think that's the most powerful argument on your side. If it were just a matter of trying e.g. Adderall or LSD once, that's one thing, but as you said, that's not always how it goes. Looking from the outside, it seems to me like nowadays many teenagers specifically become dependent on stimulants in a way that it would be better to avoid.

He isn't wrong about alcohol and coffee, but assuming that you are a kind of "have a coffee in the morning, once in a while drink a beer" person, then to the extent that's OK, it's OK because you have incorporated those habits into your life in a way that is sustainable and doesn't cause obvious problems. I would be nervous about introducing them to my kid, because you don't know ahead of time that that's how it will come out. The same goes for other psychoactive drugs.

Comment by cata on Is the AI timeline too short to have children? · 2022-12-14T19:06:44.858Z

I have a toddler. A couple of thoughts:

If I were to die right now, I would at least have had a chance to live something like a fulfilling life - but the joy of childhood seems inextricable from a sense of hope for the future.

I don't agree with this at all. I remember being a happy child and the joy was all about things that were happening in the moment, like reading books and playing games. I didn't think at all about the future.

Even if my children's short lives are happy, wouldn't their happiness be fundamentally false and devoid of meaning?

I think having a happy childhood is just good and nothing about maybe dying later makes it bad.

But now, both as I'm nearing the family-forming stage in my life, and as the AI timeline seems to be coming into sharper focus, I'm finding it emotionally distressing to contemplate having children.

I'm not going to claim that I know what's in your mind, since I don't know anything about you. But from the outside, this looks exactly like the same emotional dynamic that seems to be causing a lot of people to say that they don't want to have kids because of climate change. I agree with you that AI risk is scarier than climate change. But is it more of a reason to not have kids? It seems like this "not having kids" conclusion is a kind of emotional response people have to living in a world that seems scary and out of control, but I don't think that it makes sense in either case in terms of the interest of the potential kids.

Finally, if you are just hanging out in community spaces online, the emotional sense of "everyone freaking out" is mostly just a feedback loop where everyone starts feeling how everyone else seems to be feeling, not about justified belief updates. Believe what you think is true about AI risk, but if you are just plugging your emotions into that feedback loop uncritically, I think that's a recipe for both unnecessary suffering and bad decisions. I recommend stepping back if you notice the emotional component influencing you a lot.

Comment by cata on College Admissions as a Brutal One-Shot Game · 2022-12-06T05:02:24.442Z

That's fair. I guess that's more like hundreds of hours; I was thinking of more typical students when I suggested thousands.

Comment by cata on College Admissions as a Brutal One-Shot Game · 2022-12-06T03:50:29.196Z

I'm actually quite nonplussed by the disagree votes because I thought, if anything, my comment was too obvious to bother saying!

Comment by cata on College Admissions as a Brutal One-Shot Game · 2022-12-06T01:04:27.944Z

I got into MIT since I was a kid from a small rural town with really good grades, really good test scores, and was on a bunch of sports teams. Because I was from a small rural town and was pretty smart, none of this required special effort other than being on sports teams (note: being on the teams required no special skill as everyone who tried out made the team given small class size).

You say that, but getting really good grades in high school sounds like thousands of hours of grunt work, with very marginal benefit outside college admissions. Maybe it's what you would have done anyway, but I don't think it's what most teenagers would prefer to be doing.

Comment by cata on Could a single alien message destroy us? · 2022-11-25T09:20:22.805Z

Related: https://en.wikipedia.org/wiki/His_Master's_Voice_(novel)

Comment by cata on Against "Classic Style" · 2022-11-24T01:15:22.994Z

I like classic style. I think the thing that classic style reflects is that most people are capable of looking at object-level reality and saying what they see. If I read an essay describing stuff like things that happened, and when they happened, and things people said and did, and how they said and did them, then often I am comfortable more or less taking the author at their word about those things. (It's unusual for people to flatly lie about them.)

However, most people don't seem very good at figuring out how likely their syntheses of things are to be correct, or what things they believe they might be wrong about, or how many important things they don't know, and so on. So when people write all that stuff in an essay, unless I personally trust their judgment enough that I want to just import their beliefs, I don't really do much with it. I end up just shrugging and reading the object-level stuff they wrote, and then doing my own synthesis and judgment. So the self-aware style really did end up being a lot of filler, and it crowds out the more valuable information.

(If I do personally trust their judgment enough that I want to just import their beliefs, then I like the self-aware style. And I am not claiming that literally all self-aware content is totally useless. But I think the heuristic is good.)

Comment by cata on SBF x LoL · 2022-11-16T07:20:37.543Z

If that resembles you, I don't know if it's a problem for you. Maybe not, if you like it. I was just expressing that when I see someone appearing to do that, like the FTX people, I don't take their suggestion that the way they are going about it is really good and important very seriously.

Comment by cata on SBF x LoL · 2022-11-15T21:44:44.199Z

I have really different priors than it seems like a lot of EAs and rationalists do about this stuff, so it's hard to have useful arguments. But here are some related things I believe, based mostly on my experience and common sense rather than actual evidence. ("You" here is referring to the average LW reader, not you specifically.)

  • Most important abilities for doing most useful work (like running a hedge fund) are mostly not fixed at e.g. age 25, and can be greatly improved upon. FTX didn't fail because SBF had a lack of "working memory." It seems to have failed because he sucked at a bunch of stuff that you could easily get better at over time. (Reportedly he was a bad manager and didn't communicate well, he clearly was bad at making decisions under pressure, he clearly behaved overly impulsively, etc.)
  • Trying to operate on 5 hours of sleep with constant stimulants is idiotic. You should have an incredibly high prior that this doesn't work well, and trying it out and it feeling OK for a little while shouldn't convince you otherwise. It blows my mind that any smart person would do this. The potential downside is so much worse than "an extra 3 hours per day" is good.
  • Common problems with how your mind works, like "can't pay attention, can't motivate myself, irrationally anxious," aren't always things where you either need to find silver-bullet quick fixes or live with them forever. They are typically amenable to gradual directional improvement.
  • If you are e.g. 25 years old and you have serious problems like that, now is a dumb time to try to launch yourself as hard as possible into an ambitious, self-sacrificing career where you take a lot of personal responsibility. Get your own house in order.
  • If you want to do a bunch of self-sacrificing, speculative burnout stuff anyway, I don't believe for a minute that it's because you are making a principled, altruistic, +EV decision due to short AI timelines, or something. That's totally inhuman. I think it's probably basically because you have a kind of outsized ego and you can't emotionally handle the idea that you might not be the center of the world.

P.S. I realize you were trying to make a more general point, but I have to point out that all this SBF psychoanalysis is based on extremely scanty evidence, and having a conversation framed as if it is likely basically true seems kind of foolish.