Posts

Speaking to Congressional staffers about AI risk 2023-12-04T23:08:52.055Z
The Apprentice Thread 2 2023-05-01T20:09:50.977Z
[Linkpost] The Story Of VaccinateCA 2022-12-09T23:54:48.703Z
Cultivating And Destroying Agency 2022-06-30T03:59:27.239Z
What do you do to deliberately practice? 2022-06-04T22:20:50.609Z
hath's Shortform 2022-01-30T23:02:43.830Z
Deepmind's Gopher--more powerful than GPT-3 2021-12-08T17:06:32.650Z
Self-Responsibility 2021-06-21T16:17:43.803Z

Comments

Comment by hath on hath's Shortform · 2023-12-02T19:46:48.804Z · LW · GW

I might start a newsletter on the economics of individual small businesses. Does anyone know anyone who owns or manages e.g. a restaurant, or a cafe, or a law firm, or a bookstore, or literally any kind of small business? Would love intros to such people so that I can ask them a bunch of questions about e.g. their main sources of revenue and costs or how they make pricing decisions.

Comment by hath on Announcing Dialogues · 2023-10-09T19:39:12.799Z · LW · GW

I'd be interested in doing something resembling an interview/podcast, where my main role is to facilitate someone else talking about their models (and maybe asking questions that get them to look closer at blurry parts of their own models). If you have something you want to talk/write about, but don't feel like making a whole post about it, consider asking me to do a dialogue with you as a relatively low-effort way to publish your takes.

Some potential topics:

  • What's your organization trying to do? What are some details about the world that are informing your current path to doing that?
  • What are you spending your time on, and why do you think it's useful?
  • What's a common mistake (or class of mistake) you see people making, and how might one avoid it?

So, if you have something you want to talk about, and want to recruit me as an interlocutor/facilitator, let me know.

Comment by hath on The Apprentice Thread 2 · 2023-05-02T16:55:14.520Z · LW · GW

Took APCS (Java 101-102) in high school (culminating in coding Tetris in Java), read through Diveintopython3.net, have written a bunch of miscellaneous programs in Python, and have lots of experience with Linux.

Comment by hath on The Apprentice Thread 2 · 2023-05-01T20:15:57.071Z · LW · GW

[APPRENTICE]:

For a bunch of these, the minimum viable product for mentoring me is a combination of pointing me to books/textbooks and checking in on me to make sure I actually do it.

Some things I'd like mentorship on:

  • Writing: people willing to review my writing, and accountability for putting out a bunch of blog posts (and maybe for starting an actual novel!).
  • Operations. I've run a couple large projects in the past, including a group house, and there's a lot I can do better. Would love to hear from people who have run group houses or organizations in the past.
  • Economics: I have most of the 101-level stuff, but could use some more specific knowledge on labor econ. Especially curious about (banking) regulation.
  • Math: Besides teaching myself calculus and linear algebra, I haven't really gotten into much complicated math; someone able to point me at more advanced stuff, ideally alignment-relevant, would be much appreciated.
  • Programming. Ideally, I'd go through a bunch of projects you suggest in Python, with you available for occasional debugging/querying, with the goal of eventually being able to do more technical alignment work.
Comment by hath on [deleted post] 2023-04-26T17:02:07.645Z

Some notes on the Dialogue format:

Seems like a less effortful/more social way of writing--like glowfic, but for nonfiction!

Probably better at conveying more implicit knowledge, like interviews do (maybe?)

Just because it's public doesn't mean we'll stop adding pieces to the dialogue; Elizabeth and I still have a lot to say.

Comment by hath on hath's Shortform · 2023-02-24T02:21:59.998Z · LW · GW

Day 1, adding ~500 words of nuance.

  • A lot of these models are “this was my lived experience, it seems to generalize a fair bit”. I sent out an interest form to see how much demand there was for something like this, as a way to test whether it did in fact generalize a bunch to other people, and it got a lot of responses. 
  • Default BATNA to high school is “live by yourself, maybe on a grant, while you self-teach or work on a project”. I did this! It sucked!
    • Solo productivity is hard. Creating systems that help you get work/studying done every day, without external deadlines and check-ins, is really difficult. Also, I have pretty bad ADHD, which means that my default for extended periods of working alone involves forgetting to eat, take my ADHD medication, or do anything productive whatsoever during the day.
    • I care a lot about seeing friends, and don’t really have a lot of ways to do that, especially because most of my really good friends are scattered across the US and Europe. 
    • Being stuck at home is corrosive for a bunch of reasons that aren’t always immediately apparent. Some of this is due to the loss of the counterfactual environment, and some of this is due to specific details about people’s home lives. 
  • Agency is a pretty important thing. By default, it gets crushed. Giving people power over their own lives, and encouraging them when they do weird things, helps turn them into the kind-of-person who comes up with weird new things to do that would help their lives, and overall makes the world a better place.
    • A lot of people who have the potential to do a lot of great things have their creativity and agency crushed by The System and their parents. The K-12 education system isn’t centrally designed to do any one thing, but the result of the system is that your creativity and independence is crushed. The parents of really smart kids can be slightly obtuse and limiting at best and controlling and manipulative at worst. I know people who were forced to do double-digit hours of test prep every week.
    • Living on your own, and not being forced to adhere to (arbitrary) external goals and standards, does a lot to help people acquire the generalized skills of actually making their own decisions and guiding their own paths and stuff. I have a lot of other thoughts on agency, some of which can be seen here, but “internal vs. external locus of control” is pretty central to the thing here. The key is getting people to see themselves as agents in the world taking actions according to their own desires/ambitions, as opposed to executing strategies that other people/their broader culture has set out for them. Again, I need to write more about this, but this is a good first pass.
  • Things like coworking are pretty good for long-term productivity.
    • The key is social accountability. I’d estimate that, for me, having all of my work hours be coworking of some form as opposed to [puttering around and occasionally doing productive things] results in doubling my actual output. Being in a work/living environment where coworking is a readily available default would then be a huge improvement on its own.
Comment by hath on hath's Shortform · 2023-02-22T20:40:16.094Z · LW · GW

Super rough expansion of the first couple bullet points, Day 0:

Intro:

  • Why write this?
    • I’m writing this post because I have a bunch of models about group houses, minors, and the combination of the two that I think other people might be interested in. I also want to have some publicly available thing I can point to that says what this whole thing is about.
  • Short version of what this was.
    • Ascension Beta was a month-long experimental group house I ran in October 2022, with participants aged 16-22. It was intended primarily as a test of the below models (to see if a larger, longer-running version was worthwhile) and a chance to practice running a group house of this type, working out the major kinks before running a longer version. 
    • The major goals of Ascension were to give residents social accountability, agency over their environment, and community.
  • Why run Ascension (short version)
    • Because I wanted it to exist (so I could live there), other people wanted it to exist for the same reason, and nobody else was going to step up and make it happen. I had a lot of models about agency, environment, and productivity, and in particular a specific kind of environment I wanted to live in. However, it didn't exist, especially not for minors. I also hypothesized that the people I had met who were similar to me would also want this to exist, and that was borne out by the evidence. There are a bunch of reasons why Ascension provides value to these people, and that's what most of this post is about.

Models:

  • Most important model here: it worked. Everything below is mostly informed by that, and the beta was a really good way to develop those models.
    • Before I ran the beta, I was pretty uncertain about some of these models. My models on high school and agency were fairly strong, but everything about how something like Ascension would actually function in practice was fairly blurry. However, the beta, while janky, proved that something like Ascension could work, and that a longer/larger version would likely be even more effective.
  • High school, as an institution, is really bad.
    • It’s a waste of time that destroys your agency and love for learning. I could go in-depth on the specific reasons why it’s so bad, but for now, just keep in mind that the default societal path here is four years in hell that takes up as much of your Slack as possible. This means that really smart/ambitious people, the kind that you meet at programs like Atlas or SPARC, often have to find their own ways out of high school to actually do the things they care about. There mostly does not exist infrastructure to support alternate pathways for these people. There are small bits of it, notably Emergent Ventures for funding your endeavors during this time, but the majority of the things that you would have on the default pathway of high school + college (housing, peers, “learning”) have to be found for yourself; most of the time, the hand-rolled solutions that you find will fail you.
  • Default BATNA to high school is “live by yourself, maybe on a grant, while you self-teach or work on a project”. I did this! It sucked!
    • Solo productivity is hard.
    • Going insane due to loneliness.
    • Being stuck at home is really really bad for a bunch of reasons that aren’t always immediately apparent.
Comment by hath on hath's Shortform · 2023-02-22T20:05:23.278Z · LW · GW

I'm writing up my models on why my pet project, Ascension, is a good idea. This is the outline. As I expand the post, I'll add the incremental bits as comments.


Intro:

  • Why write this?
  • Short version of what this was.
  • Why run Ascension (short version)

Models:

  • Most important model here: it worked. Everything below is mostly informed by that, and the beta was a really good way to develop those models.
  • High school, as an institution, is absolutely dog shit.
    • Signaling race to the bottom that sucks up all of your time.
  • Default BATNA to high school is “live by yourself, maybe on a grant, while you self-teach or work on a project”
    • I did this! It fucking sucked!
  • Agency is a pretty important thing. By default, it gets crushed. Giving people power over their own lives, and encouraging them when they do weird things, helps turn them into the kind-of-person who comes up with weird new things to do that would help their lives, and overall makes the world a better place.
    • A lot of people who have the potential to do a lot of great things have their creativity and agency crushed by The System and their parents.
  • Things like coworking are pretty good for long-term productivity.
  • There are a bunch of other systems that can be implemented individually and on the scale of groups to make people more productive. Think to-do lists, or morning standup meetings.
  • Dealing with minors, in general, fucking sucks.
    • You have to be accountable to parents
    • They don’t have rights under the law to do a bunch of things
    • You’re exposed to a lot more liability in general
    • Also, a lot of people just, like, don’t really want to deal with most teenagers that much?
  • As a result of this, the housing situation for minors in the Bay is close to nonexistent.
    • Arcadia, the closest thing to Ascension that exists, is pretty good, but not aimed at the exact kind of things I care about here, and also they don’t exactly have a no-minor policy, but they do have strict scrutiny around allowing minors.
  • There exist these weird niche communities around really smart, young, promising people, like Atlas/ESPR/SPARC and EV.
    • It’s fucking incredible for people in those communities to spend time with other people in those communities. Yet when you’re selecting on the scale of 1/1,000 or 1/10,000, you end up pretty far, on average, from other people like you.
    • Also, it’s useful for people to have some way of getting into these groups. 
  • The Bay community in general is fucking amazing, and being in a house that is part of that larger community is amazing.
    • Having people well into their career who can serve as mentorship-ish figures is also really good.
  • Weird rituals and group activities are awesome! People don’t get enough of those by default.
    • Having weird rationalist customs of betting and such, as group norms, is pretty cool.
  • Also, having a tight-knit group of people you live with, who you very much respect and know well, is great for social stuff.
  • Dath ilani coordination, where people choose Stag as a group.
  • Dragon Army was cool and all, but MAN do I not have the leadership knowledge/ability to run something with that much centralization of power. 
    • As it happens, neither did Duncan, according to him.
    • You’ll also note that the Dragon Army Theory post was mostly Duncan explaining different group dynamics problems and trying to fix them with his group house, whereas this is more me explaining a bunch of societal problems that lead to people misjudging minors for this sort of thing.
  • For the demographic of teenagers who are likely to be at Ascension, the risks involved aren’t necessarily what you’d think of. They don’t drink, they don’t take (depressant) drugs. The actual risks that I’d be worried about are people getting depressed and romantic drama.
    • Though, a large part of this is due to me being pretty selective with who I invite.
  • Running stuff like “everyone in the house gets together to work on blog posts” results in many more blog posts being written than there would be otherwise.

So, like, what does all of this look like?

Why am I writing this? Because I have a bunch of models about group houses, minors, and the combination of the two that I think other people might be interested in. I also want to have some publicly available thing that I can point to that says what this whole thing is about.

Comment by hath on You Don't Exist, Duncan · 2023-02-03T00:12:37.702Z · LW · GW

I'm reminded of Falsehoods Programmers Believe About Names, an essay on the problems with handling "weird" data inputs that are normal for the people involved.

Comment by hath on You Don't Exist, Duncan · 2023-02-02T23:08:40.365Z · LW · GW
Comment by hath on How to Convince my Son that Drugs are Bad · 2022-12-19T15:43:26.072Z · LW · GW

Not sure if this would help, but I'm also a 16-year-old[1] who's been reading LW for a bit over two years, and who doesn't think that taking most drugs is a great idea (I've chosen not to, e.g., drink alcohol when I've had the opportunity). I don't think all drugs are bad (I have an Adderall prescription for my ADHD), but the things your son mentioned seem likely to harm him. If he wants to talk to me about it, he can PM me on LW or message me on Discord @ sammy!#0521.

As someone who often has... disagreements with their parents, sometimes it's easier to rationally think about something if a peer brings it up. Also, I remember a long period of my life when I didn't really have friends of my own intelligence, and that sucked. Possibly that has something to do with this.

  1. ^

    LessWrong admins (like Ruby) can verify this, they've met me IRL.

Comment by hath on hath's Shortform · 2022-09-21T05:17:57.038Z · LW · GW

Meritxell has made the serious error of mentioning that she didn't fully grasp some of what Keltham said earlier about stock companies.

Keltham is currently explaining how a Lawful corporation has an internal prediction market, which forecasts the observable results on running various possible projects that company could be trying, which in turn is used to generate an estimate of marginal returns on marginal internal investment; this prevents a corporation from engaging in obvious madness like accepting an internal project with 6% returns while turning down another internal project with expected 10% returns.

The wider market, obviously, would also like to invest all its money where it'll get the highest returns; but it's usually not efficient to offer the broader market a specialized sub-ownership of particular corporate subprojects, since the ultimate usefulness of corporate subprojects is usually dependent on many other internal outputs of the company.  It doesn't do any good to have a 'website' without something to sell from it.  Sure, if everyone was an ideal agent, they'd be able to break things down in such a fine-grained way.  But the friction costs and imperfect knowledge are such that it's not worth breaking companies into even smaller ownable pieces.  So the wider stock market can only own shares of whole corporations, which combine the outputs and costs of all that company's projects.

Thus any corporation continuously buys or sells its own stock, or rather, has standing limit orders into the stock market to buy various quantities if the price goes low or sell various quantities if the price goes high, at prices that company sets depending on its internal belief about the returns from investing or not investing in the marginal subprojects being considered.  If the company isn't considered credible by the wider market, its stock will go lower and the company will automatically buy that stock, which leaves them less money to invest in new projects internally, and means that they only invest in projects with relatively higher returns - doing less total investment, but getting higher returns on the internal investments that they do start.  Conversely if the wider market thinks a company's promises to do a lot with money are credible, the stock price will go up and money will flow into that company until they no longer have internal investment prospects that credibly beat the broader market.

This may sound complicated, and it is probably a relatively more complicated part of the machinery that is necessarily implied by the existence of distinct stock corporations in the first place.  But the alternative, if you zoom out and look at the whole planet of dath ilan, is that a corporation in one place would be investing in a project with internally expected returns of 6%, and somebody on the other side of the planet would be turning down a project with market-credible returns of 10%, which means you could reorganize the whole planet and do better in a predictable way.  So whatever does happen as a consequence of the existence of stock corporations, it has to be not that.

Some form of drastic action on Meritxell's part is obviously required if she wants to get back on track to having sex with this person.  What does she do, if anything?
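
As a non-canonical aside, the standing-limit-order mechanism Keltham describes can be sketched in a few lines of Python. All names, prices, and thresholds below are illustrative assumptions, not from the text:

```python
# Illustrative sketch: a company funds internal projects above a hurdle rate,
# while standing limit orders on its own stock adjust its investable cash.

def fund_projects(cash, projects, hurdle):
    """Fund internal projects with expected return at or above the hurdle
    rate, best first, until cash runs out."""
    funded = []
    for cost, expected_return in sorted(projects, key=lambda p: -p[1]):
        if expected_return >= hurdle and cost <= cash:
            funded.append((cost, expected_return))
            cash -= cost
    return funded, cash

# The company's standing limit orders around its own valuation of itself:
BUY_BELOW, SELL_ABOVE = 95.0, 105.0  # buy back cheap stock, issue dear stock

def adjust_cash(cash, market_price, shares=10):
    """Standing orders: repurchasing stock drains investable cash (so only
    higher-return projects clear); issuing new stock adds cash."""
    if market_price <= BUY_BELOW:
        return cash - market_price * shares  # repurchase: less to invest
    if market_price >= SELL_ABOVE:
        return cash + market_price * shares  # issuance: more to invest
    return cash

projects = [(500, 0.10), (500, 0.06), (500, 0.12)]

# A skeptical market drops the price; the company buys back stock and
# invests only in its highest-return project.
cash = adjust_cash(1500, market_price=90.0)          # 1500 - 900 = 600.0
funded, left = fund_projects(cash, projects, 0.08)   # funds only the 12% project
```

The point of the sketch is the feedback loop from the excerpt: a low price automatically shrinks the investable pool (raising the effective hurdle), while a credible company's high price does the opposite.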

Comment by hath on hath's Shortform · 2022-09-21T05:17:17.039Z · LW · GW

Some quotes from Planecrash that I might collect into a full post:

Comment by hath on hath's Shortform · 2022-08-11T23:51:45.815Z · LW · GW

Upcoming Posts

Now that I'm back from [Atlas Fellowship+SPARC+EAG+Future Forum], I have some post ideas to write up. A brief summary:

Agency and Authority: an actual in-depth, gears-level explanation of agency and parenting, covering the two kinds of respect and the moral conflation with that respect; the fact that those in power are incentivized to make their underlings more legible and predictable to them; arbitrarily high punishments and outcome matrices; absolute control and concessions; the incentives for those not in power, and how those incentives turn you into less of an agent; and how the best solution is to create a "good person who follows orders" mask that hopefully never breaks or bleeds into the rest of your character, and then use that mask while you plot to get out of the situation.

I've gotten the same questions a couple times from different people, and want to just write up the responses as a post, so I don't have to go back and rewrite them.

  • How did you get into rationality/EA/alignment?
  • Why did you hate high school so much?
  • What was [Atlas/SPARC/EAG/FF] like?
  • What's it like being a minor in those communities?
  • So, what exactly do you do? Well, okay, what are you planning to do?

There was a lightning talk I gave at SPARC and Future Forum, which I ended up teaching as a full-on class at SPARC, called Memento, Memory, and Note-Taking. I want to develop that as a full post.

I also want to migrate all of my notes from Obsidian to Notion, and have some plans for what I want to include in my Notion; this will probably make it onto LW at some point.

I've also made some progress on what I call "putting myself back together" and this is, in retrospect, what I have spent the past month doing. I might publicly reflect on some of the personal growth and introspection I've done during that time.

I'd like to write "An Intro to Rationalist Culture" at some point, because it's incredible to see the different social norms that rationalists have developed, the most important prerequisite being the ability to see and talk about social norms on the meta-level, and changing said norms as a result.

There are also some other ideas that seem important to me:

  • Fleshing out what "hath culture" looks like
  • I need to figure out how to live in a world on fire; writing down how I cope with that now might help.

This isn't even a definitive list of all the post ideas I have (the actual list is like 5x the size) but these are the ones I plan on writing soon.

I'll be at Capla-Con this weekend, if anyone else here is going.

Comment by hath on Cultivating And Destroying Agency · 2022-06-30T22:21:07.397Z · LW · GW

I’m really sorry to hear that, man. It’s honestly a horrible thing that this is what happens to so many people; it’s another sign of a generally inadequate civilization.

For what it’s worth, the first chapter of Smarter Faster Better is explicitly on motivation, and how to build it from nothing. It mentions multiple patients with brain injuries who were able to take back control over their own lives because someone else wanted to help them become agentic. I think reading that might help.

On another note, thank you for being open about this. I appreciate all comments on my posts, especially the ones actively related to the subject; your comment wasn’t complaining, and it was appreciated. Best of luck to you in the future.

Comment by hath on Godzilla Strategies · 2022-06-13T10:58:16.859Z · LW · GW

Not only is this post great, but it led me to read more James Mickens. Thank you for that! (His writings can be found here).

Comment by hath on LessWrong Now Has Dark Mode · 2022-05-10T01:47:50.120Z · LW · GW

Intercom doesn't change in Dark Mode. Also, the boxes around the comment section are faded, and the logo in the top left looks slightly off. Good job implementing it, though, and I'm extremely happy that LW has this feature.

Comment by hath on [$20K in Prizes] AI Safety Arguments Competition · 2022-04-27T00:02:41.094Z · LW · GW

>If you are going to downvote this, at least argue why. 

Fair. Should've started with that.

>To the extent that rationality has a purpose, I would argue that it is to do what it takes to achieve our goals,

I think there's a difference between "rationality is systematized winning" and "rationality is doing whatever it takes to achieve our goals". That difference requires more time to explain than I have right now.

>if that includes creating "propaganda", so be it.

I think that if this works like they expect, it truly is a net positive.

I think that the whole AI alignment thing requires extraordinary measures, and I'm not sure what specifically that would take; I'm not saying we shouldn't do the contest. I doubt you and I have a substantial disagreement as to the severity of the problem or the effectiveness of the contest. My above comment was more "argument from 'everyone does this' doesn't work", not "this contest is bad and you are bad".

Also, I wouldn't call this contest propaganda. At the same time, if this contest were "convince EAs and LW users to have shorter timelines and higher chances of doom", it would be reacted to differently. There is a difference: convincing someone to adopt a shorter timeline isn't the same as trying to explain the whole AI alignment thing in the first place, but I worry that we could take that too far. I think that (most of) the responses John's comment got were good, and they reassure me that the OPs are actually aware of/worried about John's concerns. I see no reason why this particular contest will be harmful, but I can imagine a future where pivoting to mainly strategies like this has some harmful second-order effects (which would need their own post to explain).

Comment by hath on [$20K in Prizes] AI Safety Arguments Competition · 2022-04-26T19:00:48.026Z · LW · GW

You didn't refute his argument at all, you just said that other movements do the same thing. Isn't the entire point of rationality that we're meant to be truth-focused, and winning-focused, in ways that don't manipulate others? Are we not meant to hold ourselves to the standard of "Aim to explain, not persuade"? Just because others in the reference class of "movements" do something doesn't mean it's immediately something we should replicate! Is that not the obvious, immediate response? Your comment proves too much; it could be used to argue for literally any popular behavior of movements, including canceling/exiling dissidents. 

Do I think that this specific contest is non-trivially harmful at the margin? Probably not. I am, however, worried about the general attitude behind some of this type of recruitment, and the justifications used to defend it. I become really fucking worried when someone raises an entirely valid objection, and is met with "It's only natural; most other movements do this".

Comment by hath on [deleted post] 2022-04-22T18:32:40.629Z

Can confirm that this is all accurate. Some of it is much less weird in context. Some of it is much, much weirder in context.

Comment by hath on [deleted post] 2022-04-22T17:45:06.828Z

Yeah, my reaction to this was "you could have done a much better job of explaining the context" but:

"Your writing would be easier to understand if you explained things," the student said.

That was me, so I guess my opinion hasn't changed.

Comment by hath on Feature proposal: Close comment as resolved · 2022-04-15T23:53:44.029Z · LW · GW

I'd like to have the ability to leave Google-Doc-style suggestions about typos on normal posts; that seems like it might be superior to our current system of doing it through the comments. Removing the trivial inconvenience might go a long way.

Comment by hath on Refine: An Incubator for Conceptual Alignment Research Bets · 2022-04-15T23:24:22.306Z · LW · GW

Are you accepting minors for this program?

Comment by hath on Editing Advice for LessWrong Users · 2022-04-11T19:32:10.221Z · LW · GW

Thank you for the post, and thank you for all the editing you've done!

Comment by hath on [deleted post] 2022-04-07T23:41:17.359Z

I'm an idiot; Blue Bottle is closed. Maybe the park next to it?

Comment by hath on [deleted post] 2022-04-07T17:36:16.704Z

The park next to there works as well.

Comment by hath on [deleted post] 2022-04-07T17:15:27.391Z

I've heard good things about Blue Bottle Coffee. It's also next to Lightcone.

Comment by hath on 20 Modern Heresies · 2022-04-03T21:17:29.423Z · LW · GW

I second this; I sincerely thought these were thoughts you held.

Comment by hath on Two Forms of Moral Judgment · 2022-04-03T21:03:04.015Z · LW · GW

Yeah, you're right. Oops.

Comment by hath on MIRI announces new "Death With Dignity" strategy · 2022-04-03T17:05:29.798Z · LW · GW

>Do you have any experience in programming or AI?

Programming yes, and I'd say I'm a skilled amateur, though I need to just do more programming. AI experience, not so much, other than reading (a large amount of) LW.

>Let's suppose you were organising a conference on AI safety. Can you name 5 or 6 ways that the conference could end up being net-negative?

  1. The conference involves someone talking about an extremely taboo topic (eugenics, say) as part of their plan to save the world from AI; the conference is covered in major news outlets as "AI Safety has an X problem" or something along those lines, and leading AI researchers are distracted from their work by the ensuing twitter storm.
  2. One of the main speakers at the event is very good at diverting money towards him/herself through raw charisma and ends up diverting money for projects/compute away from other, more promising projects; later it turns out that their project actually accelerated the development of an unaligned AI.
  3. The conference on AI safety doesn't involve the people actually trying to build an AGI, and only involves the people who are already committed to and educated about AI alignment. The organizers and conference attendees are reassured by the consensus of "alignment is the most pressing problem we're facing, and we need to take any steps necessary that don't hurt us in the long run to fix it," while that attitude isn't representative of the audience the organizers actually want to reach. The organizers make future decisions based on the information that "leading AI researchers are already concerned about alignment to the degree we want them to be", which ends up being wrong; they should have been more focused on reaching leading AI researchers.
  4. The conference is just a waste of time, and the attendees could have been doing better things with the time/resources spent attending.
  5. There's a bus crash on the way to the event, and several key researchers die, setting back progress by years.
  6. Similar to #2, the conference convinces researchers that [any of the wrong ways to approach "death with dignity" mentioned in this post] is the best way to try to solve x-risk from AGI, and resources are put towards plans that, if they fail, will fail catastrophically.
  7. "If we manage to create an AI smarter than us, won't it be more moral?" or any AGI-related fallacy disproved in the Sequences is spouted as common wisdom, and people are convinced.
Comment by hath on High schoolers can apply to the Atlas Fellowship: $50k scholarship + summer program · 2022-04-03T12:27:24.684Z · LW · GW

As far as I know, the purpose of the nomination is "provide an incentive for you to share the Atlas Fellowship with those you think might be interested", not "help make our admissions decisions". I agree that, if the nomination form were weighted heavily in the admissions decisions, we would be incentivized to speak highly of those who don't deserve it to get $500.

Comment by hath on MIRI announces new "Death With Dignity" strategy · 2022-04-03T12:19:50.653Z · LW · GW
  1. High charisma/extroversion, not much else I can think of that's relevant there. (Other than generally being a fast learner at that type of thing.)
  2. Not something I've done before.
Comment by hath on Vaniver's Shortform · 2022-04-02T20:59:55.062Z · LW · GW

Enjoy it while it lasts. /s

Comment by hath on Good Heart Week: Extending the Experiment · 2022-04-02T16:29:53.395Z · LW · GW

Are we changing from "payment sent every day at midnight" to "payment sent at end of week"?

Comment by hath on MIRI announces new "Death With Dignity" strategy · 2022-04-02T12:41:44.404Z · LW · GW

Also this comment:

Eliezer, do you have any advice for someone wanting to enter this research space at (from your perspective) the eleventh hour?

I don't have any such advice at the moment. It's not clear to me what makes a difference at this point.

Comment by hath on Replacing Karma with Good Heart Tokens (Worth $1!) · 2022-04-02T02:48:00.851Z · LW · GW

If you didn't already try, I bet Lightcone would let you post more if you asked over Intercom.

Comment by hath on [deleted post] 2022-04-02T02:44:49.666Z

Thank you so much! Fixed.

Comment by hath on MIRI announces new "Death With Dignity" strategy · 2022-04-02T02:36:23.367Z · LW · GW

(although, measuring impact on alignment to that degree might be of a similar difficulty as actually solving alignment).

Comment by hath on MIRI announces new "Death With Dignity" strategy · 2022-04-02T01:57:35.896Z · LW · GW

Sure, but it's dignity in the specific realm of "facing unaligned AGI knowing we did everything we could", not dignity in general.

Comment by hath on MIRI announces new "Death With Dignity" strategy · 2022-04-02T01:56:33.912Z · LW · GW

Do you have any ideas for how to go about measuring dignity?

Comment by hath on MIRI announces new "Death With Dignity" strategy · 2022-04-02T01:05:24.051Z · LW · GW

I mean this completely seriously: now that MIRI has changed to the Death With Dignity strategy, is there anything that I or anyone on LW can do to help with said strategy, other than pursue independent alignment research? Not that pursuing alignment research is the wrong thing to do, just that you might have better ideas.

Comment by hath on Two Forms of Moral Judgment · 2022-04-01T22:33:33.008Z · LW · GW

My inner Professor Quirrell is currently saying that if someone did have a moral policy in which animals had little-to-no value, they probably wouldn't abuse their pets where we could see; it'd be as if someone had read Snuff and thought "That man was a fool. He shouldn't have done that in public, because look what happened to him." Someone who really didn't care about animals in the slightest would still probably act like a normal member of society and just avoid interacting with animals whenever possible, because seeming like a stereotypical villain is going to be counterproductive for achieving your desires.

Wow. I have this strange feeling that someday, someone is going to look at the above paragraph and say "hath, you condone animal abuse?" or something to that effect. Hopefully that doesn't happen.

Comment by hath on hath's Shortform · 2022-04-01T20:19:57.537Z · LW · GW

(There's also a level here of "I have no idea how to handle this situation/dynamic," and if you think I did something wrong, either in the events described in these posts or by posting this, feel free to tell me I'm an idiot and that I should've done something different.)

Comment by hath on Replacing Karma with Good Heart Tokens (Worth $1!) · 2022-04-01T20:14:31.865Z · LW · GW

...I forgot about the annual review. I think I'll just say that doesn't count, and also commit to no more changes of the conditions.

EDIT: actually, just going to kill the market.

Comment by hath on Replacing Karma with Good Heart Tokens (Worth $1!) · 2022-04-01T20:02:28.939Z · LW · GW

Created a market on Manifold to see if either today's GoodHeart system will last past today, or else if LW will try financial rewards for posting in 2022.

Comment by hath on Replacing Karma with Good Heart Tokens (Worth $1!) · 2022-04-01T19:51:08.728Z · LW · GW

It's really interesting seeing the change in attitude toward low-effort asking-for-money posts. Earlier, people upvoted them or put up with them; now people are actively punishing bullshit with strong downvotes. This bodes well for LW implementing monetary incentives in the future: we can punish Goodharters ourselves.

Comment by hath on Replacing Karma with Good Heart Tokens (Worth $1!) · 2022-04-01T19:44:37.874Z · LW · GW

I've been working on setting up a TED talk at my high school, and since the beginning have been planning on asking for speakers through a post here. However, the day that we finally finished the website, and I can finally post here about it, is... when we're doing this whole GoodHeart thing. Not sure whether I should publish it today or tomorrow. (Pros: money. Cons: possibly fewer views because of everything else posted today.) What do you all think?

Comment by hath on hath's Shortform · 2022-04-01T19:37:44.596Z · LW · GW

This book occupies the same genre as The Theory and Practice of Oligarchical Collectivism, though I'm not sure what to call that genre. Thank you so much. Would you recommend the longer book?

Comment by hath on Replacing Karma with Good Heart Tokens (Worth $1!) · 2022-04-01T18:32:43.824Z · LW · GW

I think that was part of the whole "haha Goodhart's law doesn't exist, making value is really easy" joke. However, it's also possible that that's... actually one of the hard-to-fake things they're looking for (along with actual competence/intelligence). See PG's Mean People Fail or Earnestness. I agree that "just give good money to good people" is a terrible idea, but there's a steelman of it: "along with intelligence, originality, and domain expertise, being a Good Person (whatever that means) and being earnest is a really good trait in EA/LW and the world at large, and so we should try to find people who are Good and Earnest, to whatever extent we can make sure that isn't Goodharted."

(I somewhat expect someone at LW to respond to this saying "no, the whole goodness thing was a joke")

Comment by hath on hath's Shortform · 2022-04-01T18:05:33.468Z · LW · GW

As a follow-up: there have been a couple of incidents with said teacher trying to assert authority and win debates rather than, like, actually listening to her students. Today, we had a quiz on 1984. When, during the allotted study time beforehand, students started to go over the material with each other, the teacher told everyone that this was silent study time; after the quiz, she expanded on this, returning to a story she had told earlier in the year. It was a story of how a student who had helped their friend on a quiz was rejected by a college the friend was accepted to; the moral she repeated throughout the year was "Your peers are your enemies. You should not help them, because that just actively hurts you in college admissions. Also, let's be real, helping them in this way before the quiz, telling them the answers, is cheating. So, don't help your fellow students; it's cheating, and it only hurts you."

I pointed out that a former teacher of mine had lamented grading on a curve precisely because it makes students see their peers as competitors instead of friends and allies, and that her argument proved too much; under it, helping other students study in any way counted--she interrupted me, saying that I was equivocating between helping and cheating; when I tried to explain myself she shut me down, saying "You don't want to argue with me about this." (In an earlier conversation, she attributed her aptitude at this to doing debate.)

Another relevant time was when, at one point during a debate, I misspoke, and she repeatedly said "But you said X!" in response. "I don't believe that; either you misheard me or I misspoke." "You said X!" "You are purposefully misinterpreting my words." "I'm just saying back what you said!" "You aren't being at all charitable." "I'm just saying what you said!"

The point here is that she has repeatedly cared only about asserting authority, not about listening or being a charitable debate partner. It's not fun to be effectively shamed in front of the class without a fair chance to defend myself, and I can already feel that affecting my decisions; if I cared more about what the people in my classes thought, I'd never have spoken up in the first place. Maybe that's why nobody else does.