Posts

Karma votes: blind to or accounting for score? 2024-06-22T21:40:34.143Z
Why you should learn a musical instrument 2024-05-15T20:36:16.034Z
When apparently positive evidence can be negative evidence 2022-10-20T21:47:37.873Z
A whirlwind tour of Ethereum finance 2021-03-02T09:36:23.477Z
Group rationality diary, 1/9/13 2013-01-10T02:22:31.232Z
Group rationality diary, 12/25/12 2012-12-25T21:51:47.250Z
Group rationality diary, 12/10/12 2012-12-11T11:50:26.990Z
Group rationality diary, 11/28/12 2012-11-28T09:08:11.802Z
Group rationality diary, 11/13/12 2012-11-13T18:39:42.946Z
Group rationality diary, 10/29/12 2012-10-30T18:01:56.568Z
Group rationality diary, 10/15/12 2012-10-16T05:29:24.231Z
Group rationality diary, 10/1/12 2012-10-02T09:15:30.156Z
Group rationality diary, 9/17/12 2012-09-19T11:08:39.965Z
Group rationality diary, 9/3/12 2012-09-04T09:42:59.884Z
Group rationality diary, 8/20/12 2012-08-21T09:42:35.016Z
Group rationality diary, 8/6/12 2012-08-08T05:58:52.441Z
Group rationality diary, 7/23/12 2012-07-24T08:49:25.064Z
Group rationality diary, 7/9/12 2012-07-10T08:35:27.873Z
Group rationality diary, 6/25/12 2012-06-26T08:31:53.427Z
Group rationality diary, 6/11/12 2012-06-12T06:39:20.052Z
Group rationality diary, 6/4/12 2012-06-05T04:12:18.453Z
Group rationality diary, 5/28/12 2012-05-29T04:10:25.364Z
Group rationality diary, 5/21/12 2012-05-22T02:21:34.704Z
Group rationality diary, 5/14/12 2012-05-15T03:01:19.152Z
Gerald Jay Sussman talk on new ideas about modeling computation 2011-10-28T01:29:53.640Z

Comments

Comment by cata on Haotian's Shortform · 2024-12-12T22:59:54.727Z · LW · GW

To me, since LessWrong has a smart community that attracts people with high standards and integrity, by default if you (a median LW commenter) write your considered opinion about something, I take that very seriously and assume that it's much, much more likely to be useful than an LLM's opinion.

So if you post a comment that looks like an LLM wrote it, and you don't explain which parts were the LLM's opinion and which parts were your opinion, that makes the comment difficult to use. And if there's a norm of posting comments that are partly unmarked LLM opinions, then I have to take on the very large burden of evaluating every comment to figure out whether an LLM wrote it, in order to decide whether to take it seriously.

Comment by cata on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-07T00:24:22.692Z · LW · GW

I have been to lots of conferences at lots of kinds of conference centers and Lighthaven seems very unusual:

  • The space has been extensively and well designed to be comfortable and well suited to the activities.
  • The food/drink/snack situation is dramatically superior.
  • The on-site accommodations are extremely convenient.

I think it's great that rationalist conferences have this extremely attractive space to use that actively makes people want to come, rather than if they were in like, a random hotel or office campus.

As for LW, I would say something sort of similar:

  • The website and feature set are now dramatically superior to e.g. Discourse or phpBB.
  • It's operated by people who spend lots of time trying to figure out new adjustments that make it better, including ones that nobody else is doing, like splitting out karma and agree voting, and cultivating the best old posts.
  • Partially as a result, the quality of the discussion is basically off the charts for a free general-interest public forum.

In both cases it seems like I don't see other groups trying to max out the quality level in these ways, and my best guess for why is that there is no other group who is equally capable, has a similarly strong vision of what would be a good thing to create, and wants to spend the effort to do it.

Comment by cata on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-06T22:33:18.316Z · LW · GW

I would think to approach this by figuring something like the Shapley value of the involved parties, by answering the questions "for a given amount of funding, how many people would have been willing to provide this funding if necessary" and "given an amount of funding, how many people would have been willing and able to do the work of the Lightcone crew to produce similar output."

I don't know much about how Lightcone operates, but my instinct is that the people are difficult to replace, because I don't see many other very similar projects to Lighthaven and LW, and that the funding seems somewhat replaceable (for example I would be willing to donate much more than I actually did if I thought there should be less other money available.) So probably the employees should be getting the majority of the credit.
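
To make the credit-allocation framing concrete, here is a toy sketch of the two-party Shapley calculation. The coalition values below are entirely made up to illustrate the "money is more replaceable than the people" guess; the names and numbers are hypothetical, not anything Lightcone has published.

    # A toy sketch, not real data: Shapley values for two "players",
    # the funders and the staff, averaged over both orders in which they could join.
    from itertools import permutations

    def shapley_values(players, worth):
        """Average each player's marginal contribution over all join orders."""
        totals = {p: 0.0 for p in players}
        orders = list(permutations(players))
        for order in orders:
            coalition = set()
            for p in order:
                before = worth[frozenset(coalition)]
                coalition.add(p)
                totals[p] += worth[frozenset(coalition)] - before
        return {p: totals[p] / len(orders) for p in players}

    # Hypothetical coalition values: staff alone could find replacement funding
    # more easily than funders alone could find replacement staff.
    worth = {
        frozenset(): 0.0,
        frozenset({"funders"}): 0.2,
        frozenset({"staff"}): 0.6,
        frozenset({"funders", "staff"}): 1.0,
    }
    print(shapley_values(["funders", "staff"], worth))
    # {'funders': 0.3, 'staff': 0.7} -- the staff get the majority of the credit.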

Comment by cata on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-11-30T09:55:33.131Z · LW · GW

I was going to email but I assume others will want to know also so I'll just ask here. What is the best way to donate an amount big enough that it's stupid to pay a Stripe fee, e.g. $10k? Do you accept donations of appreciated assets like stock or cryptocurrency?

Comment by cata on Yonatan Cale's Shortform · 2024-11-25T18:56:54.578Z · LW · GW

> But as a secondary point, I think today's models can already use bash tools reasonably well.

Perhaps that's true; I haven't seen a lot of examples of them trying. I did see Buck's anecdote which was a good illustration of doing a simple task competently (finding the IP address of an unknown machine on the local network).

I don't work in AI so maybe I don't know what parts of R&D might be most difficult for current SOTA models. But based on the fact that large-scale LLMs are sort of a new field that hasn't had that much labor applied to it yet, I would have guessed that a model which could basically just do mundane stuff and read research papers, could spend a shitload of money and FLOPS to run a lot of obviously informative experiments that nobody else has properly run, and polish a bunch of stuff that nobody else has properly polished.

Comment by cata on Yonatan Cale's Shortform · 2024-11-24T22:03:35.835Z · LW · GW

I'm not confident but I am avoiding working on these tools because I think that "scaffolding overhang" in this field may well be most of the gap towards superintelligent autonomous agents.

If you imagine a o1-level entity with "perfect scaffolding", i.e. it can get any info on a computer into its context whenever it wants, and it can choose to invoke any computer functionality that a human could invoke, and it can store and retrieve knowledge for itself at will, and its training includes the use of those functionalities, it's not completely clear to me that it wouldn't already be able to do a slow self-improvement takeoff by itself, although the cost might be currently practically prohibitive.

I don't think building that scaffolding is a trivial task at all, though.
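
For concreteness, here is a rough sketch of what that kind of scaffold loop might look like: pulling requested information into context, invoking ordinary computer functionality, and storing/retrieving knowledge. All names are hypothetical, and call_model stands in for whatever model API is used; this is an illustration, not a working agent.

    # Illustrative only: a bare agent loop with the three capabilities described above.
    import json
    import subprocess

    memory = {}  # persistent knowledge the model can store and retrieve at will

    def call_model(transcript):
        """Stand-in for a real LLM API call; should return an action dict."""
        raise NotImplementedError("plug in an actual model here")

    def run_agent(goal, max_steps=50):
        transcript = [{"role": "user", "content": goal}]
        for _ in range(max_steps):
            action = call_model(transcript)
            if action["tool"] == "shell":        # anything a human could invoke
                out = subprocess.run(action["arg"], shell=True,
                                     capture_output=True, text=True).stdout
            elif action["tool"] == "remember":   # store knowledge for later
                memory[action["arg"]["key"]] = action["arg"]["value"]
                out = "stored"
            elif action["tool"] == "recall":     # pull stored knowledge into context
                out = memory.get(action["arg"], "nothing stored under that key")
            elif action["tool"] == "finish":
                return action["arg"]
            else:
                out = "unknown tool"
            transcript.append({"role": "tool", "content": json.dumps(out)})
        return "step limit reached"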

Comment by cata on Making a conservative case for alignment · 2024-11-24T21:51:03.390Z · LW · GW

I don't have a bunch of citations but I spend time in multiple rationalist social spaces and it seems to me that I would in fact be excluded from many of them if I stuck to sex-based pronouns, because as stated above there are many trans people in the community, of whom many hold to the consensus progressive norms on this. The EA Forum policy is not unrepresentative of the typical sentiment.

So I don't agree that the statements are misleading.

(I note that my typical habit is to use singular they for visibly NB/trans people, and I am not excluded for that. So it's not precisely a kind of compelled speech.)

Comment by cata on When do "brains beat brawn" in Chess? An experiment · 2024-11-22T23:22:16.744Z · LW · GW

I was playing this bot lately myself and one thing it made me wonder is, how much better would it be at beating me if it was trained against a model of me in particular, rather than how it actually was trained? I feel I have no idea.

Comment by cata on [deleted post] 2024-11-20T01:02:22.249Z

2 data points: I have 15-20 years of experience at a variety of companies but no college and no FANG, currently semi-retired. Recruiters still spam me with many offers and my professional network wants to hire me at their small companies.

A friend of mine has ~2 years of experience as a web dev and some experience as a mechanical engineer + random personal projects, no college, and he worked hard to look for a software job and found absolutely nothing, with most companies never contacting him after an application.

Comment by cata on When will computer programming become an unskilled job (if ever)? · 2024-11-16T22:46:21.166Z · LW · GW

One and a half years later it seems like AI tools are able to sort of help humans with very rote programming work (e.g. changing or writing code to accomplish a simple goal, implementing versions of things that are well-known to the AI like a textbook algorithm or a browser form to enter data, answering documentation-like questions about a system) but aren't much help yet on the more skilled labor parts of software engineering.

Comment by cata on OpenAI Email Archives (from Musk v. Altman and OpenAI blog) · 2024-11-16T21:34:30.913Z · LW · GW

It seems like Musk in 2018 dramatically underestimated the ability of OpenAI to compete with Google in the medium term.

Comment by cata on OpenAI Email Archives (from Musk v. Altman and OpenAI blog) · 2024-11-16T21:31:02.211Z · LW · GW

Thanks for not only doing this but also noting the accuracy of the unchecked transcript; it's always hard work to build a mental model of how good LLM tools are at what stuff.

Comment by cata on Eli's shortform feed · 2024-11-11T05:01:59.859Z · LW · GW

I don't know whether this resembles your experience at all, but for me, skills translate pretty directly to moment-to-moment life satisfaction, because the most satisfying kind of experience is doing something that exercises my existing skills. I would say that only very recently (in my 30s) do I feel "capped out" on life satisfaction from skills (because I am already quite skilled at almost everything I spend all my time doing) and I have thereby begun spending more time trying to do more specific things in the world.

Comment by cata on Eli's shortform feed · 2024-11-11T04:54:46.159Z · LW · GW

I worked at Manifold but not on Love. My impression from watching and talking to my coworkers was that it was a fun side idea that they felt like launching and seeing if it happened to take off, and when it didn't they got bored and moved on. Manifold also had a very quirky take on it due to the ideology of trying to use prediction markets as much as possible and making everything very public. I would advise against taking it seriously as evidence that an OKC-like product is a bad idea or a bad business.

Comment by cata on Is the Power Grid Sustainable? · 2024-10-26T03:03:26.016Z · LW · GW

Why is it cheaper for individuals to install some amount of cheap solar power for themselves than for the grid to install it and then deliver it to them, with economies of scale in the construction and maintenance? Transmission cost?

Comment by cata on If far-UV is so great, why isn't it everywhere? · 2024-10-20T23:20:42.408Z · LW · GW

If you installed it in a preschool and it successfully killed all the pathogens, there wouldn't be essentially no effect.

Comment by cata on shminux's Shortform · 2024-09-29T01:06:24.315Z · LW · GW

Superficially, human minds look like they are way too diverse for that to cause human extinction by accident. If new ideas toast some specific human subgroup, other subgroups will not be equally affected.

Comment by cata on Eye contact is effortless when you’re no longer emotionally blocked on it · 2024-09-27T22:13:28.151Z · LW · GW

Why do you feel so strongly about using so much eye contact in normal conversations? I sometimes make eye contact and sometimes don't and that seems fine.

I agree with your sentiment that being very uncomfortable with eye contact is probably an indication of some other psychological thing you could work on, but it sounds like you maybe feel more strongly about it than that.

Comment by cata on [Completed] The 2024 Petrov Day Scenario · 2024-09-27T21:40:07.463Z · LW · GW

I played General Anderson and also wrote that note. My feeling is that this year seemed more "game-like" and less "ritual-like" than past years, but the "game" part suffered for the reasons I mentioned above, and the combination to me felt awkward. Choosing to emphasize either the "game" nature or the "ritual" nature seems to have some pros and cons. Since participating in the game inevitably made me curious about the choices involved, I will be interested to hear the LW team's opinion on this in the retrospective.

Comment by cata on Puzzle Games · 2024-09-27T04:24:01.050Z · LW · GW

A promising new game was just released, Maxwell's Puzzling Demon. It looks like it goes deep with clever puzzles.

Comment by cata on AI Safety is Dropping the Ball on Clown Attacks · 2024-09-20T07:43:54.374Z · LW · GW

This post was difficult to take seriously when I read it but the "clown attack" idea very much stuck with me.

Comment by cata on Debate: Get a college degree? · 2024-08-13T04:39:52.217Z · LW · GW

I think you should go to college if it sounds pleasant and fulfilling to go to one of the colleges you could go to (as Saul stated, colleges have many fancy amenities) and you are OK with sacrificing:

  • The cost of the preparatory work you need to do to be admitted at that college.
  • The cost of the tuition itself.
  • 4+ years of your career and adult life.

in order to do something pleasant and fulfilling. You should also go to college if you don't have any plan to get a job you like without a college degree, but you do have a plan to do it with a college degree, since it's very important to get a job you like. Although, given that college is a huge investment, maybe you should have made that plan, or be making it.

If you aren't much looking forward to spending 4 more years in school, and you could get a reasonable job without going to college, I think it would be crazy to go to college.

I don't think most people are likely to be confused about which of these groups they are in. If Saul is confused I apologize but I think he must be a rare case.

The other arguments Saul made in his opening statement about why you might want to go to college seem very weak to me:

  • It's a strong Chesterton's fence.
    • This is an argument for why a fully generic high school student who knows nothing should go to college. It's not an argument for why it's good to get a college degree.
    • Defaults are for what a person with no information should do without thinking. Everyone at 16 has a huge amount of information about themselves, their dreams, their abilities, how they relate to school, how they relate to others, what the contemporaneous world is like. The default is not responsive to any of that. It's completely inappropriate to be applying some super-general policy about norms and conformity when considering some giant extremely specific high-stakes offer that is only about your own life. This is what I disagree with the most in this dialogue.
  • General upkeeping of norms/institutions is good.
    • No it's not. If it's not in someone's self-interest to get a college degree, there's no way it's in the social interest for there to be a norm of everyone getting college degrees.
  • Some people may be totally unproductive and/or be a drain on society (e.g. crime) if they don't go.
    • That's a reason to not be a career criminal, not a reason to get a college degree.
    • By the way, it's pretty unproductive to go to college for 4 years while someone else pays for your room, board, and entertainment.
    • I don't believe there are a substantial number of people who are incapable of being productive after 12 years of high school, but then if you send them to college for 4 years, now they can be productive. That doesn't make sense. The way you would train a very low-skill person to be productive is by training them on a specific job, not sending them to college.

Comment by cata on Why People in Poverty Make Bad Decisions · 2024-07-16T02:52:04.510Z · LW · GW

Do you believe the result about priming people with a $1500 bill and a $150 bill? That pattern matches perfectly to an infinite list of priming research that failed to replicate, so by default I would assume it is probably wrong.

The one about people scoring better after harvest makes a lot more sense since, like, it's a real difference and not some priming thing, so I am not as skeptical about that.

Comment by cata on Reliable Sources: The Story of David Gerard · 2024-07-13T19:23:44.555Z · LW · GW

It kind of weirds me out that this post has such a high karma score. It's a fun read, and maybe it will help some Wikipedia admins get their house in order, but I don't like "we good guys are being wronged by the bad outsider" content on LessWrong. No offense to Trace who is a great writer and clearly worked hard putting all this together.

Comment by cata on Monthly Roundup #19: June 2024 · 2024-06-25T22:12:36.700Z · LW · GW

It seems like this is a place where "controversial" and "taboo" diverge in meaning. The politician would notice that the sentence was about a taboo topic and bounce off, but that's probably totally unconnected to whether or not it would be controversial among people who know anything about genetics or intelligence and are actually expressing a belief. For example, they would bounce off regardless of whether the number in the sentence was 1%, 50%, or 90%.

Comment by cata on Sci-Fi books micro-reviews · 2024-06-24T19:50:03.627Z · LW · GW

I thought the sequels were far better than the first book. But I have seen people with the opposite opinion.

Comment by cata on keltan's Shortform · 2024-06-22T06:03:44.644Z · LW · GW

How did you like your trip in the end?

Comment by cata on How I Think, Part Two: Distrusting Individuals · 2024-06-14T21:27:07.525Z · LW · GW

It definitely depends. I think there are lots of people who, in lots of domains of information, are highly trustworthy in real-time conversation. For example, if I am working as a programmer, and I talk to my smart, productive coworker and ask him some normal questions about the system he built recently, I expect him to be highly confident and well calibrated on what he knows. Or if I talk to my friend with a physics PhD and ask him some question like what makes there be friction, I expect him to be highly confident and well calibrated. Certainly he isn't likely to say something confident and then I look on Wikipedia and he was totally wrong.

In general I take more seriously what people say if

  • They are a person who has a source of information that could be good about the thing they are saying.
  • They are a person who is capable of saying they don't know instead of bullshitting me, when they don't know. And in general they respect the value of expressing uncertainty.
  • The thing they are saying could be something that is easier to actually know and understand and remember, instead of super hard. For example, maybe it is part of a kind of consistent gears-level model of some domain, so if they forgot or got it mixed up, they may notice their error.

Comment by cata on Humming is not a free $100 bill · 2024-06-06T20:32:13.576Z · LW · GW

I hope you don't feel dumb! What could be smarter than sitting around thinking up good ideas, writing about them, and getting a bunch of people to work together to figure out what to make of them? It seems like the most smart possible behavior!

Comment by cata on Notifications Received in 30 Minutes of Class · 2024-05-27T21:07:17.095Z · LW · GW

It seems like the students think that eliminating the distractions wouldn't improve how much they learn in class. That sounds ridiculous to me, but public school classrooms are weird environments that already aren't really set up well to teach anyone anything, so maybe it could be true. Is it credible?

Comment by cata on Some perspectives on the discipline of Physics · 2024-05-20T18:44:43.543Z · LW · GW

As a non-physicist I kind of had the idea that the reason I was taught Newtonian mechanics in high school was that it was assumed I wasn't going to have the time, motivation, or brainpower to learn some kind of fancy, real university version of it, so the alternate idea that it's useful for intuition-building of the concepts is novel and interesting to me.

Comment by cata on Why you should learn a musical instrument · 2024-05-17T01:56:58.750Z · LW · GW

Learning piano, I have been pretty skeptical about the importance of learning to read sheet music fluently. All piano players culturally seem to insist that it's very important, but my sense is that it's some kind of weird bias. If you tell piano players that you should hear it in your head and play it expressively, they will start saying stuff about, what if you don't already know what it's supposed to sound like, how will you figure it out, and they don't like "I will go listen to it" as an answer.

So far, I am not very fluent at reading, so maybe I just don't get it yet.

Comment by cata on Against Student Debt Cancellation From All Sides of the Political Compass · 2024-05-14T09:25:14.957Z · LW · GW

Why is it bad to have wealth inequality by age? Basically everyone gets to be every age, so there's nothing "unfair" about it.

Comment by cata on Should I Finish My Bachelor's Degree? · 2024-05-11T22:48:55.247Z · LW · GW

I still don't get why you are even considering finishing the degree, even though you clearly tried to explain it to me. Taking eight college classes is a lot of work actually? "Why not" doesn't really seem to cover it. How is doing a "terrible" commute several times per week for two semesters and spending many hours per week a low cost?

You sort of imply that someone is judging you for not having the degree but you didn't give any examples of actually being judged.

If you really really want to prove to yourself that you can do it, or if you really want to learn more math (I agree that taking college courses seems like a fine way to learn more math) then I understand, but based on your post it's not clear to me.

Comment by cata on LessOnline Festival Updates Thread · 2024-05-05T20:33:25.944Z · LW · GW

That just sounds great, thanks.

Comment by cata on LessOnline Festival Updates Thread · 2024-04-19T01:56:34.986Z · LW · GW

How's the childcare situation looking? Last I heard it wasn't clear and the organizers were seeing how much interest there was in it.

Comment by cata on How does it feel to switch from earn-to-give? · 2024-03-31T23:23:20.610Z · LW · GW

This isn't quite what you asked for, but I did feel a related switch.

When I was a kid, I thought that probably people in positions of power were smart people working towards smart goals under difficult constraints that made their actions sometimes look foolish to me, who knew little. Then there was a specific moment in my early 20s, when the political topic of the day was the design of Obamacare, and so if you followed the news, you would see all the day-to-day arguments between legislators and policy analysts about what would go in the legislation and why. And the things they said about it were so transparently stupid and so irredeemably ridiculous, that it completely cured me of the idea that they were the thing I said up above. It was clear to me that it was just a bunch of people who weren't really experts on the economics of healthcare or anything, and they weren't even aspiring to be experts. They were just doing and saying whatever sort of superficially seemed like it would further their career.

So now I definitely have no internal dissonance about trusting myself to make decisions about what work to do, because I don't take seriously the idea that someone else would be making any better decision, unless it's some specific person that I have specific evidence about.

Comment by cata on [deleted post] 2024-03-22T04:19:40.057Z

> I am surprised by this, for example. Can you give examples of some of your controversial takes on any issues? I am wondering if you just do not have very controversial takes.

Controversial is obviously relative to the audience, but I have lots of opinionated beliefs that might make various audiences mad at me. Some different flavors include:

  • I am roughly a total utilitarian, which involves lots of beliefs about what actions are moral that all kinds of people might strongly disagree with. For example, I don't agree that inequality is intrinsically bad.
  • I roughly agree with (my understanding of) Zack Davis's arguments about the superiority of cluster-of-traits-based definitions of gender words, rather than self-ID based definitions, which I am sure would make many trans people mad.
  • I think it's ridiculous for suicide to be illegal and marginal efforts to increase the availability of suicide seem great.
  • I frequently criticize my coworkers' ideas of what to work on as being bad or not worth doing.

> Stuff like correlation between IQ and ethnicity is a bit more controversial, but my takes are usually much more controversial than that. I often wonder what would have happened if the US had wiped out USSR’s main cities post WW2 and established global hegemony (wipe out any nation that doesn’t submit, maintain nuclear monopoly). I have genuine respect and admiration for people like hitler or the unabomber, more than for a lot of the people I see around me, despite disagreeing with their object level opinions (I’m not a nazi or an anarchoprimitivist).

I am not very knowledgeable about or interested in history or social science, so I have less strong opinions about things like this, and don't talk about them very often. For example, my opinion about IQ and ethnicity is that the obvious group differences seem to suggest some kind of genetic difference, but I know psychologists have some complicated statistical argument for why that may not be the case, so I don't know.

I note, however, that I can't think of the last time before now that I have ever been in a conversation where it seemed like my views on IQ and ethnic groups were relevant, so I don't have a problem with pissing people off by expressing them. Is this different for you? How do you end up in discussions about it with people who will then be offended when you say your opinion? Is it some kind of thing where you participate in social media conversations about it which then broadcast your opinions to basically random people? (I don't use any platform like that.)

> Do you expect to ever become at all famous in your life?

Definitely not. It sounds very annoying. I am not altruistic enough to want to do something that involves being substantially famous.

> I can send a list of examples of people whose lives have been ruined by this. Do you claim I am misjudging the probability this happens to me personally?

Probably, if it's a big consideration to you. I think it seems like a tail risk that isn't very substantial, unless your life depends on the approval of others in a somewhat atypical way. (Perhaps it does, if your life involves being famous.)

> Do you have actual experience in bio security? I doubt most people in EA circles or even many academics would provide you with any of the funding or connections required to work in bio security if this is your current stance on the matter.

No, I just quoted this because it was the example you gave. I know little about biosecurity and I don't intend my remarks to extend to "infohazard" kinds of information. Perhaps you know things about the biosecurity nonprofit world that I don't. However, I know something about the kinds of things that some EA grantmakers like SFF consider, and I don't see why being the kind of person who speaks their mind about controversial beliefs would make them less likely to fund you.

Comment by cata on [deleted post] 2024-03-21T23:59:19.557Z

I am 37 and I am a partially retired programmer after a ~20 year career. I basically try to maximize clarity while obeying normal politeness norms, prioritizing clarity and honesty over politeness where the topic is important (e.g. delivering actionable criticism or bad news.) I would say that during my career I received very strong evidence that this is an effective communication style for working well with others. For example, I have had numerous coworkers spontaneously tell me that they respected my straightforwardness, and seek out my feedback on what they were thinking. I have also had coworkers who were hurt by my criticism, but the balance seems clear to me.

I certainly have no "hot takes which I feel uncomfortable sharing with people around me", nor would I ever "assume...whatever they [I] tell anyone could eventually end up being broadcasted by a famous person on the internet", which sounds pathologically anxious. I don't start random arguments with random people unbidden, because that's impolite, but I would not consider concealing my beliefs about something true and important.

My comparative advantage in the world is my ability to make and fix practical things. If I aspired to be a professional persuader, or a political operator in a large organization, and I was talented at persuasion and manipulation, maybe I would behave differently. But I wouldn't behave differently if I were to "run a research nonprofit working on biosecurity" for example.

I think most people who regularly conceal or lie about their beliefs are doing so because of emotional anxiety and conflict avoidance that is not based on a sober judgment of the consequences. If they reflected on the fact that they know well who in their lives is straightforward and trustworthy, and who is an untrustworthy bullshitter, then they would realize that it's a huge benefit to join the first category, and typically disproportionate to any risk. I have the good fortune to naturally be not very socially anxious, leading me to a better path.

Comment by cata on Puzzle Games · 2024-03-21T08:02:33.932Z · LW · GW

Lucas Watson, who co-wrote Hanano Puzzle 2, just published an exceptional new game, I Wanna Lockpick, which I would put in your tier 1.

One thing which I really enjoyed about it is that it uses its mechanics to build interesting puzzles in all of the different puzzle categories above, and mixes them freely, so it feels like there is a nice variety of kinds of thinking involved.

Comment by cata on Thomas Kwa's Shortform · 2024-03-06T23:46:56.802Z · LW · GW

Thanks, I didn't realize that this PC fan idea had made air purifiers so much better since I bought my Coway, so this post made me buy one of the Luggable kits. I'll share this info with others.

Comment by cata on If you weren't such an idiot... · 2024-03-03T08:29:56.306Z · LW · GW

I disagree with the summarization suggestion for the same reason that I disagree with many of the items -- I don't have (much of) the problem they are trying to solve, so why would I expend effort to attack a problem I don't have?

The most obvious is "carrying extra batteries for my phone." My phone never runs out of battery; I should not carry batteries that I will never use. Similarly: I don't have a problem with losing things, such that I need extra. (If I had extra, I would plausibly give them away to save physical space!) I don't find myself wishing I remembered more of my thoughts, such that I should take the effort to capture and retain them. And I don't feel the need to remember more than I already do about the stuff that I read, so that makes me not inclined to take time away from the rest of my life and spend it remembering more things.

Comment by cata on If you weren't such an idiot... · 2024-03-03T06:14:07.392Z · LW · GW

Are you really saying you think everything on this list is "obviously" beneficial? I probably only agree with half the stuff on the list. For example, I certainly disagree that I should "summarize things that I read" (?) or that I should have a "good mentor" by emailing people to request that they mentor me.

Comment by cata on Acting Wholesomely · 2024-02-27T07:35:43.914Z · LW · GW

I specifically think it's well within the human norm, i.e. that most of the things I read are written by a person who has done worse things, or who would do worse things given equal power. I have done worse things, in my opinion. There's just not a blog post about them right now.

Comment by cata on Acting Wholesomely · 2024-02-27T05:54:40.897Z · LW · GW

Speaking for myself, I don't agree with any of it. From what I have read, I don't agree that the author's personal issues demonstrate "some amount of poison in them" outside the human norm, or in some way that would make me automatically skeptical of anything they said "entwined with soulcrafting." And I certainly don't agree that a reader "should be aware" of nonspecific problems that an author has which aren't even clearly relevant to something they wrote. I would give the exact opposite advice -- to try to focus on the ideas first before involving preconceptions about the author's biases.

Comment by cata on LessWrong Is Very Wrong: Ultimately All Social Media Platforms Are The Same · 2024-02-13T06:56:11.691Z · LW · GW

If you wanted other people to consider this remark, you shouldn't have deleted whatever discussion you had that prompted it, so that we could go look.

Comment by cata on Would you have a baby in 2024? · 2023-12-25T23:25:52.403Z · LW · GW

Yes, I basically am not considering that because I am not aware of the arguments for why that's a likely kind of risk (vs. the risk of simple annihilation, which I understand the basic arguments for.) If you think the future will be super miserable rather than simply nonexistent, then I understand why you might not have a kid.

Comment by cata on Would you have a baby in 2024? · 2023-12-25T20:58:28.954Z · LW · GW

I don't agree with that. I'm a parent of a 4-year-old who takes AI risk seriously. I think childhood is great in and of itself, and if the fate of my kid is to live until 20 and then experience some unthinkable AI apocalypse, that was 20 more good years of life than he would have had if I didn't do anything. If that's the deal of life it's a pretty good deal and I don't think there's any reason to be particularly anguished about it on your kid's behalf.

Comment by cata on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-19T22:24:39.190Z · LW · GW

Thanks for the post. Your intuition as someone who has observed lots of similar arguments and the people involved in them seems like it should be worth something.

Personally as a non-involved party following this drama the thing I updated the most about so far was the emotional harm apparently done by Ben's original post. Kat's descriptions of how stressed out it made her were very striking and unexpected to me. Your post corroborates that it's common to take extreme emotional damage from accusations like this.

I am sure that LW has other people like me who are natural psychological outliers on "low emotional affect" or maybe "low agreeableness" who wouldn't necessarily intuit that it would be a super big deal for someone to publish a big public post accusing you of being an asshole. Now I understand that it's a bigger deal than I thought, and I am more open to norms that are more subtle than "honestly write whatever you think."

Comment by cata on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-19T22:15:41.098Z · LW · GW

I am skeptical of the gender angle, but I think it's being underdiscussed that, based on the balance of evidence so far, the person with the biggest, most effective machine gun is $5000 the richer and still anonymous, whereas the people hit by their bullets are busy pointing fingers at each other. Alice's alleged actions trashing Nonlinear (and 20-some former people???) seem IMO much worse than anything Lightcone or Nonlinear is even being accused of.

(Not that this is a totally foregone conclusion - I noticed that Nonlinear didn't provide any direct evidence on the claim that Alice was a known serial liar outside of this saga.)