Posts

Halifax Monthly Meetup: AI Safety Discussion 2023-03-05T12:42:40.511Z
Halifax Monthly Meetup: Introduction to Effective Altruism 2023-02-02T21:10:12.992Z
Halifax Monthly Meetup: Moloch in the HRM 2023-01-08T14:49:28.054Z
Halifax, NS – Monthly Rationalist, EA, and ACX Meetup 2022-11-18T11:45:17.987Z
Halifax, NS – Monthly Rationalist, EA, and ACX Meetup Kick-Off 2022-10-16T13:17:06.533Z
Halifax Rationality / EA Coworking Day 2022-06-01T17:47:00.463Z
The Agent 2022-03-02T22:43:58.676Z
A Simple Ethics Model 2021-01-22T02:56:43.713Z
100 Tips for a Better Life 2020-12-22T14:30:12.756Z
Second-Order Existential Risk 2020-07-01T18:46:52.140Z
Autoexperiments 2019-12-23T16:53:03.287Z

Comments

Comment by Ideopunk on 80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly) · 2024-07-03T22:20:23.814Z · LW · GW

(Cross-posted from the EA forum)

Hi, I run the 80,000 Hours job board. Thanks for writing this out!

I agree that OpenAI has demonstrated a significant level of manipulativeness, and I have lost confidence that they will prioritize existential safety work. However, we don’t conceptualize the board as endorsing organisations. The point of the board is to give job-seekers access to opportunities where they can contribute to solving our top problems or build career capital to do so (as we write in our FAQ). Sometimes these roles are at organisations whose mission I disagree with, because the role nonetheless seems like an opportunity to do good work on a key problem.

For OpenAI in particular, we’ve tightened up our listings since the news stories a month ago, and we now post only infosec roles and direct safety work – a small percentage of the jobs they advertise. See here for the OAI roles we currently list. We used to list roles that seemed more tangentially safety-related, but because of our reduced confidence in OpenAI, we limited the listings further to only roles that work very directly on safety or security. I still expect these roles to be good opportunities to do important work. Two live examples:

  • Infosec
    • Even if we were very sure that OpenAI was reckless and did not care about existential safety, I would still expect them not to want their models to leak out to competitors – and importantly, we think it's still good for the world if their models don't leak! So I would still expect people working on their infosec to be doing good work.
  • Non-infosec safety work
    • These still seem like potentially very strong roles with the opportunity to do very important work. We think it’s still good for the world if talented people work in roles like this! 
    • This is true even if we expect safety teams there to lack political power and to play second fiddle to capabilities work, and even if that makes these roles weaker opportunities than comparable roles at other companies.

We also include a note on their 'job cards' on the job board (and on DeepMind’s and Anthropic’s) linking to the Working at an AI company article you mentioned, to give context. We’re not opposed to giving more or different context on OpenAI’s cards and are happy to take suggestions!

Comment by Ideopunk on 100 Tips for a Better Life · 2023-08-11T15:39:48.094Z · LW · GW

Fixed, thanks! 

Comment by Ideopunk on Halifax, Nova Scotia, Canada – ACX Spring Meetups Everywhere Spring 2023 · 2023-04-11T01:47:56.506Z · LW · GW

Woooo!

Comment by Ideopunk on [deleted post] · 2022-09-25T09:03:59.760Z

This event is no longer cancelled! 

Comment by Ideopunk on [deleted post] · 2022-09-24T16:36:06.062Z

This event is cancelled. I will not be there in time due to Hurricane Fiona, and cannot guarantee another host. Big apologies! 

Comment by Ideopunk on Halifax Spring Meetup · 2022-04-19T14:55:28.726Z · LW · GW

Come hang out, shy pals!

Comment by Ideopunk on What The Foucault · 2022-02-20T14:48:07.539Z · LW · GW

From my reading, he's much more scout than postmodern soldier in his lectures (https://foucault.info/parrhesia/foucault.DT1.wordParrhesia.en/), and as a bonus, they're a much easier read.

Comment by Ideopunk on Epistemic Legibility · 2022-02-14T01:44:57.801Z · LW · GW

Aha! Thank you.

Comment by Ideopunk on Epistemic Legibility · 2022-02-12T14:22:58.846Z · LW · GW

This is an excellent post. I expect (and hope!) it will shape how I handle disagreements.

"The person counters every objection raised, but the counters aren’t logically consistent with each other."

Is there a particular term for this? This is something I've encountered before, and having a handle for it might help with addressing it. 

Comment by Ideopunk on Meetup at Propeller Brewing Company · 2022-02-06T14:55:03.516Z · LW · GW

Come hang out!

Comment by Ideopunk on Notes from "Don't Shoot the Dog" · 2021-04-24T12:02:55.854Z · LW · GW

This was a rich read, thank you!

Comment by Ideopunk on A Simple Ethics Model · 2021-01-22T13:22:08.008Z · LW · GW

This is interesting. Am I wrong in summarizing it as "deontology helps with coordination"? 

Comment by Ideopunk on The LessWrong 2019 Review · 2021-01-17T17:58:20.891Z · LW · GW

Is there a way to see all the nominations listed? I registered in 2020, so I can't vote, but I'd still love to pick through the nominations.

Comment by Ideopunk on 100 Tips for a Better Life · 2021-01-05T18:15:45.354Z · LW · GW

I of course agree (with the original parent comment) that there are real problems and real solutions. I think three things are needed to qualify as a grifter, as compared to an honest informer.

1) The sell. A grifter doesn't just tell you there's a problem; they will share the solution for a price. If you give them money (or control), they'll relieve your worry.

2) The exclusivity. Alternative solutions are specifically highlighted as being insufficient or counterproductive. Somebody engaging in good faith can acknowledge the costs and benefits of different approaches (while still believing their solution is best).

3) The promise. Grifts promise a solution, not just a tool. An honest informer will have ideas about what can help, but won't guarantee their success. 

I think somebody can fit two of these three and still be acting in good faith.

Comment by Ideopunk on 100 Tips for a Better Life · 2020-12-25T23:00:16.655Z · LW · GW

76 was originally disclaimed with "wait a year before trying to be friends", which maybe should be added back in. I think friendship with exes is often doable eventually; it's the immediate aftermath where I think people handle themselves poorly and add trouble to whatever trouble made them break up.

Comment by Ideopunk on 100 Tips for a Better Life · 2020-12-23T15:02:01.346Z · LW · GW

Yeah, I think the Reddit solution will lose its value over time. I think the important part is to find an actual argument between commenters (this applies to Hacker News too).

I'd be interested to find another take on #69; I think that's one I came up with on my own through trial and error.

Comment by Ideopunk on 100 Tips for a Better Life · 2020-12-23T14:57:49.262Z · LW · GW

Thanks! 

It's not near the top of my mind either, but it is something I feel confident recommending to almost everybody, whereas I don't feel confident advising people on their financial investments. This is a small fruit, but it's low-hanging. 

Comment by Ideopunk on 100 Tips for a Better Life · 2020-12-23T14:53:53.225Z · LW · GW

These are useful criticisms! I'll add a caveat later restricting it to trusted friends, which I think cuts off much of the risk.

Comment by Ideopunk on Open & Welcome Thread - March 2020 · 2020-03-10T01:55:02.904Z · LW · GW

Hi all!

I've been reading SSC for years, and took the plunge to read the sequences last year. It shifted a lot of my previous beliefs, and though I have a strong aversion to group identity, I suppose I'm an "aspiring rationalist" now!

About me: I work in Kelowna, BC. I've written for Quillette and Areo in the past, but my current writing output is confined to Letter (including this conversation on metarationality). I will probably do effortposts here eventually, but for now I'm mostly scanning for coronavirus information.

Any rationalists or effective altruists in BC, message me!

Comment by Ideopunk on Autoexperiments · 2020-01-07T06:49:56.364Z · LW · GW

Please pardon the late reply. I've modified the plan to two months each for a gratitude journal, cold showers, meditation, and cutting sugar. Thank you for sharing your own experiences!

Cold showers: I'm curious about the gap between how people swear by them and the inconclusive research – and, like you said, there's no obvious harm. When I've done them in the past I've noticed an energy boost in the hours afterwards, but I've been unable to push myself to keep them up when autumn hits. So this will be a summer experiment.

Sugar: I once cut sugar (all sweets, though not all added sugars) for roughly two months and felt like I noticed the disappearance of big energy drops. This time I want to see what else comes of it.

Gratitude journal: Given the research behind it, this seems obviously worth the effort, and is what I'm doing right now. It seems like there's evidence that writing 1-3 times a week gives better results, but I'm doing it daily for the habit formation.

Other than that, I don't have much to reply beyond appreciating the personal reference points, the meditation links, and the encouragement!

Comment by Ideopunk on 100 Ways To Live Better · 2020-01-01T16:08:29.971Z · LW · GW

I got some good ideas from this list, and will report back with results.

Most of these seem like hacks to increase extroversion.

For #12, shouldn't three-star reviews be divided by (one-star + five-star reviews), rather than subtracted?
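
To make the two readings concrete, here's a minimal sketch with made-up review counts (the exact formula and the numbers are assumptions on my part, not quoted from the original post):

    # Two readings of the #12 review metric, with hypothetical counts.
    one_star, three_star, five_star = 40, 10, 50

    # Reading 1: subtraction. The score scales with total review volume.
    subtracted = three_star - (one_star + five_star)   # -80

    # Reading 2: division. The ratio stays the same if every count is
    # multiplied by ten, so it doesn't reward or punish sheer popularity.
    divided = three_star / (one_star + five_star)      # 0.111...

The ratio form is volume-independent, which seems to be the point of preferring division over subtraction.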

Comment by Ideopunk on Autoexperiments · 2019-12-25T04:57:59.758Z · LW · GW

This is helpful, thanks! Do you have a study or post in mind?

Comment by Ideopunk on Is Rationalist Self-Improvement Real? · 2019-12-10T21:25:02.357Z · LW · GW

This seems correct to me. There are already self-improvement approaches to attempt and modify. Using epistemic rationality to achieve instrumental rationality is less about creating a new rationalist self-improvement (RSI) program, and more about evaluating and improving upon existing self-improvement programs.