Posts

Apply to be a Safety Engineer at Lockheed Martin! 2024-03-31T21:02:08.499Z
yanni's Shortform 2024-03-13T23:58:40.245Z
Does increasing the power of a multimodal LLM get you an agentic AI? 2024-02-23T04:14:56.464Z
Some questions for the people at 80,000 Hours 2024-02-14T23:15:31.455Z
How has internalising a post-AGI world affected your current choices? 2024-02-05T05:43:14.082Z
A Question For People Who Believe In God 2023-11-24T05:22:40.839Z
An Update On The Campaign For AI Safety Dot Org 2023-05-05T00:21:56.648Z
Who is testing AI Safety public outreach messaging? 2023-04-16T06:57:46.232Z

Comments

Comment by yanni on This is Water by David Foster Wallace · 2024-04-25T04:47:16.368Z · LW · GW

If you're into podcasts, the Very Bad Wizards guys did an ep on this essay, which I enjoyed: https://verybadwizards.com/episode/episode-227-a-terrible-master-david-foster-wallaces-this-is-water

Comment by yanni on Rejecting Television · 2024-04-24T01:49:04.311Z · LW · GW

Alcoholics are encouraged not to walk past liquor stores. Basically, physical availability is the biggest lever - keep your phone / laptop in a different room when you don't absolutely need them!

Comment by yanni on yanni's Shortform · 2024-04-24T00:26:55.384Z · LW · GW

If GPT-5 actually comes with competent agents, then I expect this to be a "Holy Shit" moment at least as big as ChatGPT's release. So if ChatGPT has been used by 200 million people, then I'd expect that to at least double within 6 months of GPT-5's (agents') release. Maybe triple. That "Holy Shit" moment means a greater share of the general public learning about the power of frontier models. With that will come another shift in the Overton Window. Good luck to us all.

Comment by yanni on yanni's Shortform · 2024-04-22T03:17:03.711Z · LW · GW

The catchphrase I walk around with in my head regarding the optimal strategy for AI Safety is something like: Creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (*we already have AGI).

I thought it might be useful to spell that out.

Comment by yanni on yanni's Shortform · 2024-04-21T04:45:36.734Z · LW · GW

I recently discovered the idea of driving all blames into oneself, which immediately resonated with me. It is relatively hardcore; the kind of thing that would turn David Goggins into a Buddhist.

Gemini did a good job of summarising it:

This quote by Pema Chödron, a renowned Buddhist teacher, represents a core principle in some Buddhist traditions, particularly within Tibetan Buddhism. It's called "taking full responsibility" or "taking self-blame" and can be a bit challenging to understand at first. Here's a breakdown:

What it Doesn't Mean:

  • Self-Flagellation: This practice isn't about beating yourself up or dwelling on guilt.
  • Ignoring External Factors: It doesn't deny the role of external circumstances in a situation.

What it Does Mean:

  • Owning Your Reaction: It's about acknowledging how a situation makes you feel and taking responsibility for your own emotional response.
  • Shifting Focus: Instead of blaming others or dwelling on what you can't control, you direct your attention to your own thoughts and reactions.
  • Breaking Negative Cycles: By understanding your own reactions, you can break free from negative thought patterns and choose a more skillful response.

Analogy:

Imagine a pebble thrown into a still pond. The pebble represents the external situation, and the ripples represent your emotional response. While you can't control the pebble (the external situation), you can control the ripples (your reaction).

Benefits:

  • Reduced Suffering: By taking responsibility for your own reactions, you become less dependent on external circumstances for your happiness.
  • Increased Self-Awareness: It helps you understand your triggers and cultivate a more mindful response to situations.
  • Greater Personal Growth: By taking responsibility, you empower yourself to learn and grow from experiences.

Here are some additional points to consider:

  • This practice doesn't mean excusing bad behavior. You can still hold others accountable while taking responsibility for your own reactions.
  • It's a gradual process. Be patient with yourself as you learn to practice this approach.

Comment by yanni on yanni's Shortform · 2024-04-20T06:03:31.589Z · LW · GW

Be the meme you want to see in the world (screenshot).

Comment by yanni on Neil Warren's Shortform · 2024-04-03T00:53:15.711Z · LW · GW

Strong upvote, but I won't tell you why.

Comment by yanni on Apply to be a Safety Engineer at Lockheed Martin! · 2024-03-31T22:26:00.321Z · LW · GW

Thanks for the feedback Neil! At LM we know that insights can come from anywhere. We appreciate your input regarding the training video's completeness and the deadline duration. In the meantime please feel free to apply for one of our graduate positions, where presumably one can feel better working on capabilities since 'someone else will just take the job anyway': https://www.lockheedmartinjobs.com/job/aguadilla/software-engineer-fire-control-weapons-early-career/694/53752768720

Comment by yanni on yanni's Shortform · 2024-03-29T00:17:36.363Z · LW · GW

I have heard rumours that an AI Safety documentary is being made. Separately, a good friend of mine is also seriously considering making one, but he isn't "in" AI Safety. If you know who this first group is and can put me in touch with them, it might be worth getting across each other's plans.

Comment by yanni on yanni's Shortform · 2024-03-28T01:35:59.969Z · LW · GW

I like the fact that, despite Kahneman & Vinge not being (relatively) young when they died, the LW banner states that they died "FAR TOO YOUNG", pointing to the fact that death is always bad and/or that it is bad when people die while still making positive contributions to the world (Kahneman published "Noise" in 2021!).

Comment by yanni on Should rationalists be spiritual / Spirituality as overcoming delusion · 2024-03-27T07:45:28.818Z · LW · GW

I can't stress this enough: the two most important things to happen to me in my life have been (1) my daughter being born and (2) receiving a pointing-out instruction from Loch Kelly.

Comment by yanni on yanni's Shortform · 2024-03-27T07:35:49.638Z · LW · GW

[GIF] A feature I'd love on the forum: while posts are read back to you, the part of the text that is being read is highlighted. This exists on Naturalreaders.com, and I'd love to see it here (great for people who have wandering minds like me).

Comment by yanni on Anxiety vs. Depression · 2024-03-22T01:26:25.884Z · LW · GW

I'll add that for some period I was on 100mg of sertraline, but I don't think that cured me of anything.

Comment by yanni on Anxiety vs. Depression · 2024-03-22T01:23:16.089Z · LW · GW

Haha no yoga. Some combination of the following:

- Metacognitive therapy + CBT with this guy for 4 years
- Lots of exercise (resistance training and cardio)
- Sleeping lots
- Found meaningful work + a less stressful industry
- Stable relationship (monogamous)
- This one is the curveball, as I had already cured my OCD/GAD by the time I found it, but it definitely lifted my wellbeing a lot https://lochkelly.org/

Comment by yanni on Anxiety vs. Depression · 2024-03-18T05:01:53.032Z · LW · GW

Are you interested in receiving any advice from someone who used to have OCD/GAD but now lives with almost not enough anxiety?

Comment by yanni on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-14T03:37:15.391Z · LW · GW

Thanks for posting and I hope you're doing ok! 

I have two questions: 

1/ when someone says they "believe in God" does this mean something like "I assign a ≥ 50% probability to there being an omnipotent omnipresent and omniscient intelligence?" 

2/ how do you update on the non-religious-related views of someone (like Huberman) after they say they believe in God? Do they become less trustworthy on other topics?

Comment by yanni on yanni's Shortform · 2024-03-13T23:58:40.352Z · LW · GW

I think acting on the margins is still very underrated. For example, I think 5x the amount of advocacy for a Pause on capabilities development of frontier AI models would be great. I also think that in 12 months' time it would be fine for me to reevaluate this take and say something like 'ok, that's enough Pause advocacy'.

Basically, you shouldn't feel 'locked in' to any view. And if you're starting to feel like you're part of a tribe, then that could be a bad sign you've been psychographically locked in.

Comment by yanni on Why I think it's net harmful to do technical safety research at AGI labs · 2024-02-08T00:45:21.745Z · LW · GW

Hi Benjamin - would be interested in your take on a couple of things:

1. By recommending people work at big labs, do you think this has a positive Halo Effect for the labs' brand? I.e. 80k is known for wanting people to do good in the world, so by recommending people invest their careers at a lab, then those positive brand associations get passed onto the lab (this is how most brand partnerships work).

2. If you think the answer to #1 is Yes, then do you believe the cost of this Halo Effect is outweighed by the benefit of having safety minded EA / Rationalist folk inside big labs?

Comment by yanni on Why I think it's net harmful to do technical safety research at AGI labs · 2024-02-08T00:40:40.706Z · LW · GW

This makes me wonder: does the benefit of having Safety-minded folk inside of big labs outweigh the cost of large orgs like 80k signalling that the work of big labs isn't evil (I believe it is).

Comment by yanni on Saving the world sucks · 2024-01-16T02:49:09.977Z · LW · GW

Probably a good time to leave this here: https://www.clearerthinking.org/tools/the-intrinsic-values-test

Comment by yanni on Saving the world sucks · 2024-01-16T02:32:34.976Z · LW · GW

Thank you for this post. It says to me that community builders need to be very careful in how they treat young people. For better and worse, they are more impressionable.

Comment by yanni on A Question For People Who Believe In God · 2023-11-28T00:45:26.354Z · LW · GW

Good to know! I'm still working out the features of the site :)

Comment by yanni on A Question For People Who Believe In God · 2023-11-25T07:08:27.446Z · LW · GW

I'm curious to know why people downvoted this comment.

Comment by yanni on A Question For People Who Believe In God · 2023-11-24T05:31:34.760Z · LW · GW

Another tangentially related question: how do you update on the non-religious-related views of someone (like Huberman) after they say they believe in God? Do they become less trustworthy on other topics?