Posts

When apparently positive evidence can be negative evidence 2022-10-20T21:47:37.873Z
A whirlwind tour of Ethereum finance 2021-03-02T09:36:23.477Z
Group rationality diary, 1/9/13 2013-01-10T02:22:31.232Z
Group rationality diary, 12/25/12 2012-12-25T21:51:47.250Z
Group rationality diary, 12/10/12 2012-12-11T11:50:26.990Z
Group rationality diary, 11/28/12 2012-11-28T09:08:11.802Z
Group rationality diary, 11/13/12 2012-11-13T18:39:42.946Z
Group rationality diary, 10/29/12 2012-10-30T18:01:56.568Z
Group rationality diary, 10/15/12 2012-10-16T05:29:24.231Z
Group rationality diary, 10/1/12 2012-10-02T09:15:30.156Z
Group rationality diary, 9/17/12 2012-09-19T11:08:39.965Z
Group rationality diary, 9/3/12 2012-09-04T09:42:59.884Z
Group rationality diary, 8/20/12 2012-08-21T09:42:35.016Z
Group rationality diary, 8/6/12 2012-08-08T05:58:52.441Z
Group rationality diary, 7/23/12 2012-07-24T08:49:25.064Z
Group rationality diary, 7/9/12 2012-07-10T08:35:27.873Z
Group rationality diary, 6/25/12 2012-06-26T08:31:53.427Z
Group rationality diary, 6/11/12 2012-06-12T06:39:20.052Z
Group rationality diary, 6/4/12 2012-06-05T04:12:18.453Z
Group rationality diary, 5/28/12 2012-05-29T04:10:25.364Z
Group rationality diary, 5/21/12 2012-05-22T02:21:34.704Z
Group rationality diary, 5/14/12 2012-05-15T03:01:19.152Z
Gerald Jay Sussman talk on new ideas about modeling computation 2011-10-28T01:29:53.640Z

Comments

Comment by cata on Would You Work Harder In The Least Convenient Possible World? · 2023-09-22T20:53:16.904Z · LW · GW

I am mostly like Bob (although I don't make up stuff about burnout), but I think calling myself a utilitarian is totally reasonable. By my understanding, utilitarianism is an answer to the question "what is moral behavior." It doesn't imply that I want to always decide to do the most moral behavior.

I think the existence of Bob is obviously good. Bob is in, like, the 90th percentile of human moral behavior, and if other people improved their behavior, Bob is also the kind of person who would reciprocally improve his own. If Alice wants to go around personally nagging everyone to be more altruistic, then that's her prerogative, and if it really works, I am even for it. But firstly, I don't see any reason to single out Bob, and secondly, I doubt it works very well.

Comment by cata on Sharing Information About Nonlinear · 2023-09-12T21:33:16.309Z · LW · GW

I apologize for derailing the N(D|D)A discussion, but it's kind of crazy to me that you think Nonlinear (based on the content of this post?) has crossed a line, by a large margin, such that you wouldn't work with them. Why not? That post you linked is about working with murderers, not working with business owners who seemingly took advantage of their employees for a few months, or who made a trigger-happy legal threat!

Compared to (for example) any random YC company with no reputation to speak of, I didn't see anything in this post that made it look like working with them would either be more likely to be regrettable for you, or more likely to be harmful to others, so what's the problem?

Comment by cata on Sharing Information About Nonlinear · 2023-09-08T01:45:55.725Z · LW · GW

Yes, that's what I was thinking. To me the lawsuit threat is totally beyond the pale.

Comment by cata on Sharing Information About Nonlinear · 2023-09-07T19:37:24.656Z · LW · GW

Relevant: https://www.lesswrong.com/posts/NCefvet6X3Sd4wrPc/uncritical-supercriticality

And it is triple ultra forbidden to respond to criticism with violence. There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses. This is one of them. Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.

Comment by cata on If I Was An Eccentric Trillionaire · 2023-08-09T20:28:50.157Z · LW · GW

There's already a well-written history of part of EVE Online (I've only read the first volume): https://www.amazon.com/dp/B0962ZVWPG

Comment by cata on On being in a bad place and too stubborn to leave. · 2023-08-06T17:50:23.744Z · LW · GW

Based on your story I am not sure what the issues that need solving are?

I know that I’m better than in 2020, but in 2019 I was a smart and promising student interested in what I was doing, and I see no way of going back to something like that?

Well, nobody is a student forever, regardless of how much they like college.

I’d have to deal with the massive shame of having, almost deliberately, chosen to obstinately screw things up for four years.

Is that massively shameful? AFAIK it's common for college students to choose their major poorly, get depressed, etc.

And if I could get that depressed in the past, surely I have a ‘major depression’ sword of Damocles hanging over my future as well?

Maybe? The circumstances seem pretty unusual.

And again, my current degree basically won’t get me anywhere

It can just get you all the normal jobs in the world that normal people can do, right?

There are things I’d like better — though I’m not fully sure what exactly they are — but today, at 22, I may not have a way to pursue these things.

So what? You don't even know what they are. Also, probably not? Why wouldn't you be able to pursue them just as well as you could ever have?

My advice is to just chill out and focus on whatever object level things in your life you are working on, rather than dream about some hypothetically better way the last 4 years could have panned out for you.

Comment by cata on A Hill of Validity in Defense of Meaning · 2023-07-15T21:54:23.550Z · LW · GW

Sorry, but I just wasn't able to read the whole thing carefully, so I might be missing your relevant writing; I apologize if this comment retreads old ground.

It seems to me like the reasonable thing to do in this situation is:

  • Make whatever categories in your map you would be inclined to make in order to make good predictions. For example, personally I have a sort of "trans women" category based on the handful of trans women I have known reasonably well, which is closer to the "man" category than to the "woman" category, but has some somewhat distinct traits. Obviously you have a much more detailed map than me about this.
  • Use maximally clear, straightforward, and honest language representing maximally useful maps in situations where you are mostly trying to seek truth in the relevant territory. For example, "trying to figure out good bathroom policy" would be a really bad time to use obfuscatory language in order to spare people's feelings. (Likewise this comment.)
  • Be amenable to utilitarian arguments for non-straightforward or dishonest language in situations where you are doing something else. For example, if I have a trans coworker who I am trying to cooperate with on some totally unrelated work, and they have strong personal preferences about my use of language that aren't very costly for me, I am basically just happy to go along with those. Or if I have a trans friend who I like and we are just talking about whatever for warm fuzzies reasons, I am happy to go along with their preferences. (If it's a kind of collective political thing, then that brings other considerations to bear; I don't care to play political games.)

Introspectively, I don't think that my use of language in the third point is messing up my ability to think -- it's extremely not confusing to think to myself in my map-language, "OK, this person is a trans woman, which means I should predict they are mostly like a man except with trans-woman-cluster traits X, Y, and Z, and a personal preference to be treated 'like' a woman in many social circumstances, and it's polite to call them 'she' and 'her'." I also don't get confused if other people do the things that I learned are polite; I don't start thinking "oh, everyone is treating this trans woman like a woman, so now I should expect them to have woman-typical traits like P and Q."

The third point is the majority of interactions I have, because I mostly don't care or think very much about gender-related stuff. Is there a reason I should be more of a stickler for maximizing honesty and straightforwardness in these cases?

Comment by cata on My "2.9 trauma limit" · 2023-07-01T20:44:35.459Z · LW · GW

Did you ever figure out whether any parts of your new ~3 traumas were somehow in fact downstream of past unprocessed traumas that you didn't understand? Or was this really just new stuff happening to you, and you were correct to believe that the 2018 trauma advocates weren't talking about anything related to your life?

Comment by cata on Book Review: How Minds Change · 2023-05-29T00:10:28.114Z · LW · GW

I think few of us in the alignment community are actually in a position to change our minds about whether alignment is worth working on. With a p(doom) of ~35% I think it's unlikely that arguments alone push me below the ~5% threshold where working on AI misuse, biosecurity, etc. become competitive with alignment. And there are people with p(doom) of >85%.

This makes little sense to me, since "what should I do" isn't a function of p(doom) alone. It's a function of p(doom) together with your inclinations, opportunities, and comparative advantages. There should be many people for whom, rationally speaking, a difference between 35% and 34% should change their ideal behavior.
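As an illustration, here is a toy expected-impact model (all numbers invented; nothing here is from the original comment) showing how two people with the same p(doom) can rationally land on different problems:

```python
# Toy model: expected impact = P(this problem is the decisive one) * personal effectiveness.
# Both the probabilities and the effectiveness multipliers are made up for illustration.

def expected_impact(p_problem: float, effectiveness: float) -> float:
    return p_problem * effectiveness

p_doom = 0.35
p_bio = 0.05  # hypothetical weight on biosecurity being what matters

# Someone 10x more effective at biosecurity than at alignment:
print(expected_impact(p_doom, 1.0), expected_impact(p_bio, 10.0))   # 0.35 vs 0.5 -> biosecurity

# Someone with the reverse comparative advantage:
print(expected_impact(p_doom, 10.0), expected_impact(p_bio, 1.0))   # 3.5 vs 0.05 -> alignment
```

Near whatever threshold falls out of your own multipliers, even a one-point change in p(doom) can flip the answer.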

Comment by cata on When will computer programming become an unskilled job (if ever)? · 2023-03-16T23:43:16.519Z · LW · GW

Since there's a very broad spectrum of different kinds of computer programs with different constraints and desiderata, I think the transition will be very gradual. Consider the following things that are all computer programming tasks:

  • Helping non-technical people set up a simple blog.
  • Identifying and understanding the cause of unexpected behavior in a large, complicated existing system.
  • Figuring out how to make a much cheaper-to-run version of an existing system that uses too many resources.
  • Experimenting with a graphics shader in a game to see how you can make an effect that is really cool looking.
  • Implementing a specific known cryptographic algorithm securely.
  • Writing exploratory programs that answer questions about some data set to help you understand patterns in the data.

I have no doubt that sufficiently fancy AI can do or help human programmers do all these tasks, but probably in different ways and at different rates.

As an experienced programmer who can do most of these things well, I would be very surprised if my skillset were substantially obsolete in less than 5 years, and somewhat surprised if it were substantially obsolete in less than 10 years. It seems like GPT-3 and GPT-4 are not really very close to being able to do these things as well as me, or close to being able to help a less skilled human do these things as well as me.

Comment by cata on Shutting Down the Lightcone Offices · 2023-03-15T02:58:10.954Z · LW · GW

As a LW veteran interested in EA I also perceive a lot of the dynamics you wrote about and they really bother me. Thank you for your hard and thoughtful work.

Comment by cata on Should you refrain from having children because of the risk posed by artificial intelligence? · 2023-03-10T03:30:06.393Z · LW · GW

It seems to me that the world into which children are born today has a high likelihood of being really bad.

Why? You didn't elaborate on this claim.

I would certainly be willing to have a kid today (modulo all the mundane difficulty of being a parent) even if I were absolutely, 100%, sure that they would have a painful death in 30 years. Your moral intuitions may vary. But have you considered that it's really good to have a fun life for 30 years?

Comment by cata on The Kids are Not Okay · 2023-03-09T00:29:13.499Z · LW · GW

When I was in middle school and high school (Michigan, 1996-2004), the only identifiable political engagement I remember any of my peers ever doing was that we knew it was funny to make fun of George Bush for being an idiot. I don't remember ever having any other conversation with my friends about any political topic. I certainly had no clue what was going on in politics, outside of knowing who the president was, and knowing that 9/11 happened, and knowing that the Iraq War existed. So the idea that teenagers have political opinions now is also striking to me.

Comment by cata on Acting Normal is Good, Actually · 2023-02-11T06:45:07.251Z · LW · GW

Maybe this is just quibbling about words, but I don't like conflating "weird" with "uncharismatic", "awkward", "disagreeable", "picky", etc. There are many very weird (unusual; people will be surprised if you do them; you will stand out) behaviors that don't result in the kinds of drawbacks you listed, e.g. signing up for cryonics, or doing moral reasoning from first principles, or having polyamorous relationships.

They only have those drawbacks if you then do things like discuss them tactlessly, or make them central to your social identity, or act judgmentally or intolerantly towards people who behave normally, etc. But you can usually just not do those things.

Comment by cata on SolidGoldMagikarp II: technical details and more recent findings · 2023-02-07T04:46:17.555Z · LW · GW

I'm not a machine learning researcher, but this is fascinating and I can't wait to see what else you can dig up about this phenomenon!

Comment by cata on My Model Of EA Burnout · 2023-01-27T07:35:54.367Z · LW · GW

But cata, where does your "stuff that seems like it would be a good idea to do right now" queue come from? If you cannot see its origin, why do you trust that it arises primarily from your true values?

Well, I trust that because at the end of the day I feel happy and fulfilled, so they can't be too far off.

I believe you that many people need to see the things that are invisible to them, that just isn't my personal life story.

Comment by cata on My Model Of EA Burnout · 2023-01-26T22:45:45.945Z · LW · GW

What you say makes sense. I think most of the people "doing whatever it is that people do" are making a mistake.

The connection to "masking" is very interesting to me. I don't know much about autism so I don't have much background about this. I think that almost everyone experiences this pressure towards acting normal, but it makes sense that it especially stands out as a unique phenomenon ("masking") when the person doing it is very not-normal. Similarly, it's interesting that you identify "independence" as a very culturally-pushed value. I can totally see what you mean, but I never thought about it very much, which on reflection is obviously just because I don't have a hard time being "the culturally normal amount of independent", so it never became a problem for me. I can see that the effect of the shared culture in these cases is totally qualitatively different depending on where a person is relative to it.

One of the few large psychological interventions I ever consciously did on myself was in about 2014 when I went to one of the early CFAR weekend workshops in some little rented house around Santa Cruz. At the end of the workshop there was a kind of party, and one of the activities at the party was to write down some thing you were going to do differently going forward.

I thought about it and I figured that I should basically stop trying to be normal (which is something I previously thought was actively virtuous, for reasons that are now fuzzy to me, and would consciously try to do -- not that I was successfully super normal, but I was aiming in that direction). It seemed like the ROI on being normal was just crappy in general and I had had enough of it. So that's what I did.

It's interesting to me that some people would have trouble with the "how to live more authentically instead" part. My moment to moment life feels like, there is a "stuff that seems like it would be a good idea to do right now" queue that is automatically in my head, and I am just grabbing some things out of it and doing them. So to me, the main thing seems to be eliminating any really dumb biases making me do things I don't value at all, like being normal, and then "living more authentically" is what's left.

But that's just my way -- it would make sense to me if other people behaved more strategically more often, in which case I guess they might need to do a lot more introspection about their positive values to make that work.

Comment by cata on My Model Of EA Burnout · 2023-01-26T09:36:24.359Z · LW · GW

I want to say something, but I'm not really sure how to phrase it very precisely, but I will just say the gist of it in some rambly way. Note: I am very much on the periphery of the phenomenon I am trying to describe, so I might not be right about it.

Most EAs come from a kind of western elite culture that right now assigns a lot of prestige to, like, being seen to be doing Important Work with lots of Power and Responsibility and Great Meaning, both professionally and socially.

"I am devoting my life to solving the most important problems in the world and alleviating as much suffering as possible" fits right into the script. That's exactly the kind of thing you are supposed to be thinking. If you frame your life like that, you will fit in and everyone will understand and respect what is your basic deal.

"I am going to have a pleasant balance of all my desires, not working all that hard, spending some time on EA stuff, and the rest of the time enjoy life, hang out, read some books, and go climbing" does not fit into the script. That's not something that anyone ever told you to do, and if you tell people you are going to do that, they will be surprised at what you said. You will stand out in a weird way.

Example anecdote: A few years ago my wife and I had a kid while I was employed full-time at a big software company that pays well. I had multiple discussions roughly like this with my coworkers:

  • Me: My kid's going to be born this fall, so I'll be taking paternity leave, and it's quite likely I will quit after, so I want to figure out what to do with this thing that I am responsible for.
  • Them: What do you mean, you will quit after?
  • Me: I mean I am going to have a baby, and, like you, I've been paid lots of money, so my guess is that I will just hang out being a parent with my wife and we can live off savings for a while.
  • Them: Well, you don't have to do that! You can just keep working.
  • Me: But doesn't it sound like if you were ever going to not work, the precise best time would be right when you have your first kid? Like, that would be literally the most common sense time in your life to choose not to work, and pay attention to learning about being a parent instead? I can just work again later.
  • Them: [puzzled] Well, you'll see what I mean. I don't think you will quit.

And then they were legitimately surprised when I quit after paternity leave, because it's unusual for someone to do that (at least for men), regardless of whether they have saved a bunch of money from being a programmer. The normal thing to do is to let your work define your role in life and give you all your social capital, so that it's basically your number 1 priority and everything else is a sideshow.

So it makes total sense to me that EAs who came from this culture decide that EA should define their role in life and give them all their social capital and be their number 1 priority, and it's not about a failure of introspection, or about a conscious assessment of their terminal values that turned out wrong. It's just the thing people do.

My prediction would be that among EAs who don't come from a culture with this kind of social pressure, burnout isn't really an issue.

Comment by cata on How to Convince my Son that Drugs are Bad · 2022-12-17T21:42:21.413Z · LW · GW

I notice that none of your quotes from him discuss risk of addiction. I think that's the most powerful argument on your side. If it were just a matter of trying e.g. Adderall or LSD once, that's one thing, but as you said, that's not always how it goes. Looking from the outside, it seems to me like nowadays many teenagers specifically become dependent on stimulants in a way that it would be better to avoid.

He isn't wrong about alcohol and coffee, but assuming that you are a kind of "have a coffee in the morning, once in a while drink a beer" person, then to the extent that's OK, it's OK because you have incorporated those habits into your life in a way that is sustainable and doesn't cause obvious problems. I would be nervous about introducing them to my kid, because you don't know that's how it's going to come out ahead of time. The same goes for other psychoactive drugs.

Comment by cata on Is the AI timeline too short to have children? · 2022-12-14T19:06:44.858Z · LW · GW

I have a toddler, a couple thoughts:

If I were to die right now, I would at least have had a chance to live something like a fulfilling life - but the joy of childhood seems inextricable from a sense of hope for the future.

I don't agree with this at all. I remember being a happy child and the joy was all about things that were happening in the moment, like reading books and playing games. I didn't think at all about the future.

Even if my children's short lives are happy, wouldn't their happiness be fundamentally false and devoid of meaning?

I think having a happy childhood is just good and nothing about maybe dying later makes it bad.

But now, both as I'm nearing the family-forming stage in my life, and as the AI timeline seems to be coming into sharper focus, I'm finding it emotionally distressing to contemplate having children.

I'm not going to claim that I know what's in your mind, since I don't know anything about you. But from the outside, this looks exactly like the same emotional dynamic that seems to be causing a lot of people to say that they don't want to have kids because of climate change. I agree with you that AI risk is scarier than climate change. But is it more of a reason to not have kids? It seems like this "not having kids" conclusion is a kind of emotional response people have to living in a world that seems scary and out of control, but I don't think that it makes sense in either case in terms of the interest of the potential kids.

Finally, if you are just hanging out in community spaces online, the emotional sense of "everyone freaking out" is mostly just a feedback loop where everyone starts feeling how everyone else seems to be feeling, not about justified belief updates. Believe what you think is true about AI risk, but if you are just plugging your emotions into that feedback loop uncritically, I think that's a recipe for both unnecessary suffering and bad decisions. I recommend stepping back if you notice the emotional component influencing you a lot.

Comment by cata on College Admissions as a Brutal One-Shot Game · 2022-12-06T05:02:24.442Z · LW · GW

That's fair; I guess that's more like hundreds of hours, and I was thinking of more typical students when I suggested thousands.

Comment by cata on College Admissions as a Brutal One-Shot Game · 2022-12-06T03:50:29.196Z · LW · GW

I'm actually quite nonplussed by the disagree votes because I thought, if anything, my comment was too obvious to bother saying!

Comment by cata on College Admissions as a Brutal One-Shot Game · 2022-12-06T01:04:27.944Z · LW · GW

I got into MIT since I was a kid from a small rural town with really good grades, really good test scores, and was on a bunch of sports teams. Because I was from a small rural town and was pretty smart, none of this required special effort other than being on sports teams (note: being on the teams required no special skill as everyone who tried out made the team given small class size).

You say that, but getting really good grades in high school sounds like thousands of hours of grunt work, with very marginal benefit outside college admissions. Maybe it's what you would have done anyway, but I don't think it's what most teenagers would prefer to be doing.

Comment by cata on Could a single alien message destroy us? · 2022-11-25T09:20:22.805Z · LW · GW

Related: https://en.wikipedia.org/wiki/His_Master's_Voice_(novel)

Comment by cata on Against "Classic Style" · 2022-11-24T01:15:22.994Z · LW · GW

I like classic style. I think the thing that classic style reflects is that most people are capable of looking at object-level reality and saying what they see. If I read an essay describing stuff like things that happened, and when they happened, and things people said and did, and how they said and did them, then often I am comfortable more or less taking the author at their word about those things. (It's unusual for people to flatly lie about them.)

However, most people don't seem very good at figuring out how likely their syntheses of things are, or what things they believe they might be wrong about, or how many important things they don't know, and so on. So when people write all that stuff in an essay, unless I personally trust their judgment enough that I want to just import their beliefs, I don't really do much with it. I end up just shrugging and reading the object-level stuff they wrote, and then doing my own synthesis and judgment. So the self-aware style really does end up being a lot of filler, and it crowds out the more valuable information.

(If I do personally trust their judgment enough that I want to just import their beliefs, then I like the self-aware style. And I am not claiming that literally all self-aware content is totally useless. But I think the heuristic is good.)

Comment by cata on SBF x LoL · 2022-11-16T07:20:37.543Z · LW · GW

If that resembles you, I don't know if it's a problem for you. Maybe not, if you like it. I was just expressing that when I see someone appearing to do that, like the FTX people, I don't take very seriously their suggestion that the way they are going about it is really good and important.

Comment by cata on SBF x LoL · 2022-11-15T21:44:44.199Z · LW · GW

I have really different priors than it seems like a lot of EAs and rationalists do about this stuff, so it's hard to have useful arguments. But here are some related things I believe, based mostly on my experience and common sense rather than actual evidence. ("You" here is referring to the average LW reader, not you specifically.)

  • Most important abilities for doing most useful work (like running a hedge fund) are mostly not fixed at e.g. age 25, and can be greatly improved upon. FTX didn't fail because SBF lacked "working memory." It seems to have failed because he sucked at a bunch of stuff that you could easily get better at over time. (Reportedly he was a bad manager and didn't communicate well, he clearly was bad at making decisions under pressure, he clearly behaved overly impulsively, etc.)
  • Trying to operate on 5 hours of sleep with constant stimulants is idiotic. You should have an incredibly high prior that this doesn't work well, and trying it out and it feeling OK for a little while shouldn't convince you otherwise. It blows my mind that any smart person would do this. The potential downside is so much worse than "an extra 3 hours per day" is good.
  • Common problems with how your mind works, like "can't pay attention, can't motivate myself, irrationally anxious," aren't always things where you must either find silver-bullet quick fixes or live with them forever. They are typically amenable to gradual, directional improvement.
  • If you are e.g. 25 years old and you have serious problems like that, now is a dumb time to try to launch yourself as hard as possible into an ambitious, self-sacrificing career where you take a lot of personal responsibility. Get your own house in order.
  • If you want to do a bunch of self-sacrificing, speculative burnout stuff anyway, I don't believe for a minute that it's because you are making a principled, altruistic, +EV decision due to short AI timelines, or something. That's totally inhuman. I think it's probably basically because you have a kind of outsized ego and you can't emotionally handle the idea that you might not be the center of the world.

P.S. I realize you were trying to make a more general point, but I have to point out that all this SBF psychoanalysis is based on extremely scanty evidence, and having a conversation framed as if it is likely basically true seems kind of foolish.

Comment by cata on When apparently positive evidence can be negative evidence · 2022-10-20T22:00:12.831Z · LW · GW

From the HN comments:

If my test suite never ever goes red, then I don't feel as confident in my code as when I have a small number of red tests.

That seems like an example of this that I have definitely experienced, where A is "my code is correct", B is "my code is not correct", and the failure case is "my tests appear to be exercising the code but actually aren't."
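For concreteness, here is a minimal sketch of that failure case (my own construction, not from the HN thread): a test that looks like it exercises the code but never actually calls it, so it can never go red.

```python
def is_even(n: int) -> bool:
    # Deliberately broken implementation: returns True for every input.
    return n % 2 == 0 or True

def test_is_even():
    # The bug is in the test: `is_even` is referenced but never called, so the
    # assertion checks a (truthy) function object and passes no matter what.
    assert is_even  # intended: assert is_even(2) and not is_even(3)
```

A suite full of tests like this stays green forever, which is exactly why an always-green suite is weak evidence of correctness.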

Comment by cata on Transformative VR Is Likely Coming Soon · 2022-10-13T19:25:29.299Z · LW · GW

I don't really think that cost is an important bottleneck anymore. Like many others, I have a Rift collecting dust because I don't really care to use it regularly. Many people have spent more money on cameras, lighting, microphones, and other tinkering for Zoom than it would cost them to buy a Quest.

Any technology is more useful if everyone owns it, but to get there, it has to be useful at reasonable levels of adoption (e.g. a quarter of your friends own it), or it's not going to happen.

To me, the plausible route towards getting lots of people into VR for meetings is to have those people incidentally using a headset for all kinds of everyday computing stuff -- watching movies, playing games, doing office work -- and then, they are already wearing it and using it, and it's easy to have meetings with everyone else who is also already wearing it and using it. That's clearly achievable but also clearly not ready yet.

Comment by cata on Transformative VR Is Likely Coming Soon · 2022-10-13T09:50:07.051Z · LW · GW

I don't think it's going to be transformative until you are happy to wear a headset for hours on end. In and of themselves, VR meetings are better than Zoom meetings, but having a headset on sucks compared to sitting at your computer with nothing on your face.

Comment by cata on On the proper piloting of flesh shoots · 2022-10-11T23:14:58.379Z · LW · GW

I used to think it was a good idea to experiment with basically every psychoactive drug, but nowadays I am more skeptical of anyone's understanding of the effects of basically any chemical intervention on the human body, and I adopt more of a "if it's not broken, don't fix it" principle towards all of it. It's a lot easier to make my body or mind work worse than to make it work better.

(Of course, if you were already "pretty sure" you were trans, then that's a different story.)

Comment by cata on How do you get a job as a software developer? · 2022-10-07T04:59:41.427Z · LW · GW

It may be entirely a myth, or may have been true only long ago, or may be applicable to specific sub-industries. It doesn't have anything to do with my experience of interviewing applicants for random Silicon Valley startups over the last decade.

There is a grain of truth to it, which is that some people who can muddle through accomplishing things given unlimited tries, unlimited Googling, unlimited help, unlimited time, and no particular quality bar, do not have a clear enough understanding of programming or computing to accomplish almost anything, even a simple thing, by themselves, on the first try, in an interview, quickly.

Comment by cata on Public-facing Censorship Is Safety Theater, Causing Reputational Damage · 2022-09-24T02:31:23.054Z · LW · GW

If alignment is about getting models to do what you want and not engaging in certain negative behavior, then researching how to get models to censor certain outputs could theoretically produce insights for alignment.

I was referred by 80k Hours to talk to a manager on the OpenAI safety team who argued exactly this to me. I didn't join, so no idea to what extent it makes sense vs. just being a nice-sounding idea.

Comment by cata on Seeing the Schema · 2022-09-16T21:11:35.152Z · LW · GW

That doesn't seem right -- to solve it via pure rotation, you would need to do up to 5 + 4 + 3 + 2 + 1 rotations, looking at each pair once, not 6!. Not at all unrealistic.
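Spelling out the arithmetic (a quick sketch, assuming the puzzle is about comparing 6 items):

```python
import math

# Rotating to compare each pair of 6 items once:
pairwise = 5 + 4 + 3 + 2 + 1      # = math.comb(6, 2) = 15

# Checking every ordering of the 6 items instead:
orderings = math.factorial(6)     # = 720

print(pairwise, orderings)        # 15 720
```

Fifteen rotations is entirely realistic; 720 would not be.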

Comment by cata on How do you get a job as a software developer? · 2022-08-15T20:13:09.019Z · LW · GW

If you're a good programmer and you interview well (i.e. you think fast and sound sharp) you will likely have little difficulty, so don't sweat the details. There's a huge volume of "how to get hired as a programmer" prescriptive advice, and it's mostly aimed at newbies who have median or below-median skill, because those are the majority of people looking for jobs. If you know yourself to be in the top percentiles of this stuff, it doesn't apply. The fundamental truth of this labor market is that demand greatly exceeds supply.

I suggest doing a couple practice interviews with someone in your network. You can try https://interviewing.io/ if you don't have anyone handy, although I am sure a bunch of people reading this post would be willing to practice interview you for free. I would do it. The environment of a typical real-time interview is very different from the "I am doing Leetcode problems on my computer" environment, so I think it's much better practice.

The most common problem I observe with interviewees is that they are very nervous, so figure out whatever will make you not very nervous and do that.

I recommend hitting up your network to find companies that you think you would enjoy working at. This also means you will probably have someone who can help you get past the screening stage, so your resume will matter even less. If you don't have any good candidate companies in your pocket, I would consult the recent Hacker News "who's hiring" threads; since you are a good writer with credible merits, corresponding directly with the poster there might similarly bypass any kind of "bozo recruiter screening out illegible resumes" filter.

Comment by cata on chinchilla's wild implications · 2022-08-04T00:14:30.133Z · LW · GW

'We should require a high bar before we're willing to not-post potentially-world-destroying information to LW, because LW has a strong commitment to epistemic rationality' seems like an obviously terrible argument to me.

I think that argument is good if you expand out its reasoning. The reason we have a strong commitment to epistemic rationality is because learning and teaching true things is almost always very good. You need to establish a fair chunk of probable bad to outweigh it.

Comment by cata on Hiring Programmers in Academia · 2022-07-25T01:21:19.546Z · LW · GW

This makes it sound like if I am a good programmer and I want to help people out (and not get paid market rate, and not get job security), I should find lots of unusually good opportunities within academia. Do you think that's the case? If so, do you have any suggestions for how to find them? I have zero experience with any university ever.

Comment by cata on Consider Multiclassing · 2022-07-07T22:48:35.495Z · LW · GW

I would point out that the benefits of "multiclassing" don't only arise when you have, like, the equivalent of two different college majors. They arise if you can both, for example, do your work and also write well. Or do your work and also think one level up about the purpose of your work. Or do your work in a way that takes into account the specific details of the work of your direct colleagues.

Most people like being specialized. They like the feeling of knowing what's expected of them, feeling that they know how to do that well, and then knowing that they are done. If you just become a little more general than that, you will be super useful, because you will delete the coordination costs that would have been present if multiple people were required to do the things that you can take care of by yourself.

Comment by cata on Tarnished Guy who Puts a Num on it · 2022-07-06T22:13:37.606Z · LW · GW

If you're a selfless altruist who eliminated video games from your life so that you can spend more time on altruism, then hats off to you and genuine thanks for your service. But don't act like you're confused that someone else would not do it. You can be a rationalist and care about AI safety without being a selfless altruist devoting every minute to it.

Comment by cata on Failing to fix a dangerous intersection · 2022-06-30T21:08:06.855Z · LW · GW

I wonder if this process would work better if there were a Kickstarter-like mechanism for people to pay the Department for specific work that they wanted done that wasn't getting done.

Comment by cata on LessWrong Has Agree/Disagree Voting On All New Comment Threads · 2022-06-30T00:00:27.268Z · LW · GW

I think a single vote already conveys a bunch of information about agreement. Very very few people upvote things they disagree with, even on LW, and most of the time they do, they leave a disambiguating comment (I've seen Rob and philh and Daystar do this, for instance)... So making the second vote "agree/disagree" feels like adding a redundant feature; the single vote was already highly correlated with agree/disagree. (Claim.)

I am not very knowledgeable about a lot of things people post about on LW, so my median upvote is on a post or comment which is thought-provoking but which I don't have a strong opinion about. I don't know if I am typical, but I bet there are at least many people like me.

Comment by cata on I applied for a MIRI job in 2020. Here's what happened next. · 2022-06-17T23:45:22.852Z · LW · GW

I'm not sure I see the connection between what you described and meta-honesty norms. The flakiness and unresponsiveness you described is unfortunately endemic to company recruiters. I attribute it to the fact that corresponding with lots of job applicants is a lot of work which is not fun, and to bad incentive structures that don't appropriately reward or punish doing a good or bad job with hiring. It would have been nice for MIRI to outperform that status quo, but I guess they didn't.

Comment by cata on On The Spectrum, On The Guest List · 2022-05-22T07:03:25.014Z · LW · GW

Very well written, thanks for the linkpost! I should also listen to the Tyler Cowen podcast you referenced.

Comment by cata on [deleted post] 2022-05-09T02:32:50.048Z

I skipped college and became a programmer. About 15 years later, I have yet to ever see one single piece of evidence that any employer or recruiter has, at any point, ever had the thought, "I would think better of this candidate if they had a degree." I have had offers from Google, Facebook, and Microsoft (none of which I accepted, so they don't show up on my resume), and recruiters spam me constantly, so this isn't just a tiny eccentric startup thing. Maybe IBM cares. Do you specifically want to work for IBM that bad?

Presuming you can do the usual other stuff to demonstrate your competence and find jobs -- which I think is likely if you are hanging out posting dialogues on LW (e.g. being really good at programming, sounding sharp, having publicly visible work, being charismatic, being willing to approach people) -- I think you should assume the degree is a totally worthless piece of paper and work from there.

(Obvious caveats: Maybe this doesn't work outside of Silicon Valley culture, or maybe it doesn't work well until you have one job on your resume, or maybe it doesn't work if you don't look like a stereotypical hacker type, or something. I can mostly only speak to my own experience.)

Comment by cata on Narrative Syncing · 2022-05-01T07:01:02.465Z · LW · GW

Scenario 3 bothers me. Did you really have to do the thing where you generalize about the whole social group?

Compare:

We try to have a culture around here where there is no vetted-by-the-group answer to this; we instead try to encourage forming your own inside-view model of how AI risk might work, what paths through to a good future might be possible, etc...

with

I don't think I have any specific best answer for this in general. My best suggestion would be to encourage forming your own inside-view model of how AI risk might work, what paths through to a good future might be possible, etc...

To me, the new version sounds more personal, equally helpful, and is not misleading or attempting to do "narrative syncing." (Maybe I just don't understand what's going on, because the first scenario sounded pretty reasonable to me, and seems to contain basically the same content as the third scenario, so I would not have predicted vastly different reactions. The first scenario is phrased a little more negatively, I guess?)

Comment by cata on My least favorite thing · 2022-04-15T02:47:03.666Z · LW · GW

Aren't you kind of preaching to the choir? Who involved in AI alignment is actually giving advice like this?

Wouldn't the median respondent tell A and B something like "go start participating at the state of the art by reading and publishing on the Alignment Forum and by reading, reproducing, and publishing AI/ML papers, and maybe go apply for jobs at alignment research labs?"

Comment by cata on Explaining the Twitter Postrat Scene · 2022-04-13T21:01:29.586Z · LW · GW

If I correctly understand the argument in "But how does shitposting lead to the pursuit of truth", I see three somewhat independent lines of reasoning:

There are several important topics today that are nearly impossible to discuss directly for both individual and social reasons. The best way to approach them is with “shitposting” that expresses a real opinion but doesn’t immediately bind the writer to a legible object-level position that could be attacked.

I'm not sure I agree that this is preventing the pursuit of truth, because topics like feminism and QAnon and collective insanity aren't impossible to discuss with other people who are interested in pursuing truth. They are only impossible to discuss with people playing status games. But it wouldn't have benefited your understanding very much to discuss them with those people anyway.

I could write this essay on my blog, and 750 people would read it. But 75,000 chuckled at my tweet and hopefully more than 1% of them went hmmm afterward and asked themselves why this is funny and what it means.

I think one crux is -- is it really true that if 75k people read that tweet, 1% of them go "hmmm" and ask themselves why it's funny and what it means? I think that's incredibly overoptimistic. I would expect a handful at best to end up anywhere near the reasoning that you typed out. You ought to know better than me, since you use Twitter, and I don't, but 1% seems unbelievable. I barely seriously reflect on 1% of LW comments I read.

This is related to the point above. I get why "shitposting" (is "shitposting" different than "comedy"?) might get more people to read what you wrote, since you made it shorter and funnier. But I don't get why it would make people have a more open mind about their preconceptions, or think harder, or think in a useful different direction. Maybe it can "raise the sanity waterline" a tiny bit by putting a very simplistic good idea in a lot of people's heads, although I am skeptical that your example tweet would accomplish that.

I think Twitter is a great complement to LessWrong for anyone pursuing the art of rationality. It trains you to play with ideas, to improv, read between the lines, make the shadow visible. It allows you to make friends with people who know deep and important things and will never communicate those things to you in a LessWrong-legible way. It teaches you to express yourself outside the constraints of epistemic statuses and acceptable topics of discussion. It teaches you to filter truth from bullshit in the real memetic jungle, real-life rationality under fire.

Those sound like good fun things to do which are not really about the art of rationality, except insofar as going out and doing anything at all can teach you something. They sound like basically the same things that would happen if you hung out talking to people at a random bar, or on a random Discord server, or at work.

Comment by cata on Challenges to Yudkowsky's Pronoun Reform Proposal · 2022-03-17T00:12:49.696Z · LW · GW

Basically, my experience went like this:

  1. I didn't know anyone trans or think about it at all.
  2. I moved to California and hung out with some rationalists and met some trans people IRL and online. I understood that it was polite to try to use whatever pronouns they preferred, decoupled from their physical appearance, so I did my best to do so, and other than that I continued to not think about it at all.
  3. After observing that it's hard to reliably remember to use pronouns that conflict with people's surface appearance to me, I adopted a "default to 'they', especially in case of ambiguous appearance" strategy that seemed to typically be doable by me, and satisfactory to most people I met.
  4. Somehow last year I was talking to another cis person about whether different strategies for remembering to use the right pronouns were easier or harder, and this idea of "actually reconsider their gender" came up.

Comment by cata on Challenges to Yudkowsky's Pronoun Reform Proposal · 2022-03-14T04:29:35.624Z · LW · GW

I agree with these criticisms; I don't understand what Eliezer was doing with his conclusion. (How did the word "simplest" get in there?)

I just wanted to mention that regarding this quote:

a lot of cis people use 'learning someone's pronoun' as a copout from doing the important internal work of actually reconsidering their impression of the person's gender

As a cis person who has interacted occasionally with trans people for the past ten years, it literally never occurred to me until last year that what trans people were asking me to do was actually reconsider my impression of their gender! I sincerely thought they were just asking me to memorize a different word to call them. I will at least try out a "reconsidering" process the next time I regularly interact with a trans person IRL and see whether it works. (I have also never read about what kind of "reconsidering" processes work for people, but I have some guesses for how I could approach it.)

It seems bad that the huge focus on pronouns completely obscured the actual request! I wonder if a lot of other people also don't know this.

Comment by cata on Epsilon is not a probability, it's a cop-out · 2022-02-15T04:23:28.590Z · LW · GW

When I say this, I'm expressing that I see no practical reason to distinguish between 1 in a trillion and 1 in a googol, because the rest of my behavior will be the same anyway. I think this is totally reasonable, because quantifying probabilities is a lot of work.