Posts

How have analogous Industries solved Interested > Trained > Employed bottlenecks? 2024-05-30T23:59:39.582Z
If you're an AI Safety movement builder consider asking your members these questions in an interview 2024-05-27T05:46:17.485Z
What would stop you from paying for an LLM? 2024-05-21T22:25:52.949Z
Apply to be a Safety Engineer at Lockheed Martin! 2024-03-31T21:02:08.499Z
yanni's Shortform 2024-03-13T23:58:40.245Z
Does increasing the power of a multimodal LLM get you an agentic AI? 2024-02-23T04:14:56.464Z
Some questions for the people at 80,000 Hours 2024-02-14T23:15:31.455Z
How has internalising a post-AGI world affected your current choices? 2024-02-05T05:43:14.082Z
A Question For People Who Believe In God 2023-11-24T05:22:40.839Z
An Update On The Campaign For AI Safety Dot Org 2023-05-05T00:21:56.648Z
Who is testing AI Safety public outreach messaging? 2023-04-16T06:57:46.232Z

Comments

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-06-12T00:38:12.438Z · LW · GW

When AI Safety people are also vegetarians, vegans or reducetarians, I am pleasantly surprised, as this is one (of many possible) signals to me that they're "in it" to prevent harm, rather than because it is interesting.

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-06-08T00:44:24.194Z · LW · GW

Hey mate, thanks for the comment. I'm finding "pretty surprised" hard to interpret. Is that closer to 1% or 15%?

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-06-07T00:01:37.073Z · LW · GW

Hi Ann! Thank you for your comment. Some quick thoughts:

"I would consider, for the sake of humility, that they might disagree with your assessment for actual reasons, rather than assuming confusion is necessary."

  • Yep! I have considered this. The purpose of my post is to consider it (I am looking for feedback, not upvotes or downvotes).

"They also happen to have a have a p(doom from not AGI) of 40% from combined other causes, and expect an aligned AGI to be able to effectively reduce this to something closer to 1% through better coordinating reasonable efforts."

  • This falls into the confused category for me. I'm not sure how you get a 40% p(doom) from anything other than unaligned AGI. Could you spell out for me what could account for such a large number?

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-06-06T23:56:21.229Z · LW · GW

Hi Richard! Thanks for the comment. It seems to me that might apply to < 5% of people in capabilities?

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-06-06T23:55:21.735Z · LW · GW

Thanks for your comment, Thomas! I appreciate the effort. I have some questions:

  • by working on capabilities, you free up others for alignment work who were previously doing capabilities but would prefer alignment

I am a little confused by this; would you mind spelling it out for me? Imagine "Steve" took a job at "FakeLab" in capabilities. Are you saying that Steve's decision creates a Safety job for "Jane" at "FakeLab" that otherwise wouldn't have existed?

  • more competition on product decreases aggregate profits of scaling labs

Again I am a bit confused. You're suggesting that if, for example, General Motors announced tomorrow they were investing $20 billion to start an AGI lab, that would be a good thing?

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-06-06T20:08:25.701Z · LW · GW

A judgement I'm attached to is that a person is either extremely confused or callous if they work in capabilities at a big lab. Is there some nuance I'm missing here?

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-06-06T20:04:18.952Z · LW · GW

Prediction: In 6-12 months people are going to start leaving DeepMind and Anthropic for similar-sounding reasons to those currently leaving OpenAI (50% likelihood).

> Surface-level read of what is happening at OpenAI: employees are uncomfortable with specific safety policies.
> Deeper, more transferable, harder-to-solve problem: no person who is sufficiently well-meaning and close enough to the coalface at the big labs can ever be reassured that they're doing the right thing by continuing to work for a company whose mission is to build AGI.

Basically, this is less about "OpenAI is bad" and more about "Making AGI is bad".

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-06-05T21:02:35.278Z · LW · GW

I went to buy a ceiling fan with a light in it recently. There was one on sale that happened to also tick all my boxes, joy! But the salesperson warned me "the light in this fan can't be replaced and only has 10,000 hours in it.  After that you'll need a new fan. So you might not want to buy this one." I chuckled internally and bought two of them, one for each room.

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-30T00:59:50.105Z · LW · GW

I've decided to post something very weird because it might (in some small way) help shift the Overton Window on a topic: as long as the world doesn't go completely nuts due to AI, I think there is a 5%-20% chance I will reach something close to full awakening / enlightenment in about 10 years. Something close to this: 

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-28T00:29:58.807Z · LW · GW

Very quick thoughts on setting time aside for strategy, planning and implementation, since I'm into my 4th week of strategy development and experiencing intrusive thoughts about needing to hurry up on implementation:

  • I have a 52-week LTFF grant to do movement building in Australia (AI Safety).
  • I have set aside 4.5 weeks for research (interviews + landscape review + maybe a survey) and strategy development (segmentation, targeting, positioning).
  • Then 1.5 weeks for planning (content, events, educational programs), during which I will get feedback from others on the plan and then iterate on it.
  • This leaves me with 46/52 weeks to implement ruthlessly.

In conclusion, 6 weeks on strategy and planning seems about right. 2 weeks would have been too short, 10 weeks would have been too long, this porridge is juuuussttt rightttt.

Keen for feedback from people in similar positions.

Comment by yanni kyriacos (yanni) on What would stop you from paying for an LLM? · 2024-05-22T21:21:42.652Z · LW · GW

Yeah, it is a private purchase, unlike eating, so it's less likely to create some social effect by abstaining (i.e. the way being vegan can). I will say though, I've been vegan for about 7 years and I don't think I've nudged anyone :|

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-22T21:19:10.470Z · LW · GW

I have an intuition that if you tell a bunch of people you're extremely happy almost all the time (e.g. walking around at 10/10) then many won't believe you, but if you tell them that you're extremely depressed almost all the time (e.g. walking around at 1/10) then many more would believe you. Do others have this intuition? Keen on feedback.

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-22T00:25:42.831Z · LW · GW

Two jobs in AI Safety Advocacy that AFAICT don't exist, but should and probably will very soon. Will EAs be the first to create them though? There is a strong first-mover advantage waiting for someone:

1. Volunteer Coordinator - there will soon be a groundswell from the general population wanting to have a positive impact in AI. Most won't know how. A volunteer coordinator will help capture and direct their efforts positively, for example by having them write emails to politicians.

2. Partnerships Manager - the President of the Voice Actors guild reached out to me recently. We had a surprising amount of crossover in concerns and potential solutions. Voice actors are the canary in the coal mine. More unions (etc.) will follow very shortly. I imagine within 1 year there will be a formalised group of these different orgs advocating together.

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-20T22:06:07.372Z · LW · GW

Please help me find research on aspiring AI Safety folk!

I am two weeks into the strategy development phase of my movement building and almost ready to start ideating some programs for the year.

But I want these programs to be solving the biggest pain points people experience when trying to have a positive impact in AI Safety.

Has anyone seen any research that looks at this in depth? For example, through an interview process and then a survey to quantify how painful the pain points are?

Some examples of pain points I've observed so far through my interviews with Technical folk:

  • I often felt overwhelmed by the vast amount of material to learn.
  • I felt there wasn't a clear way to navigate learning the required information.
  • I lacked an understanding of my strengths and weaknesses in relation to different AI Safety areas (i.e. personal fit / comparative advantage).
  • I lacked an understanding of my progress after I got started (e.g. am I doing well? Poorly? Fast enough?).
  • I regularly experienced fear of failure.
  • I regularly experienced fear of wasted effort / sunk cost.
  • Fear of admitting mistakes or starting over might prevent people from making necessary adjustments.
  • I found it difficult to identify my desired role / job (i.e. the end goal).
  • Even when I did think I knew my desired role, identifying the specific skills and knowledge it required was difficult.
  • There is no clear career pipeline: do X, then Y, then Z, and then you have an A% chance of getting role B.
  • Finding time to get upskilled while working is difficult.
  • I found the funding ecosystem opaque.
  • A lot of discipline and motivation over potentially long periods was required to upskill.
  • I felt like nobody gave me realistic expectations as to what the journey would be like.

Comment by yanni kyriacos (yanni) on Examples of Highly Counterfactual Discoveries? · 2024-05-19T21:35:04.879Z · LW · GW

Thanks :) Uh, good question. Making some good links? Have you done much nondual practice? I highly recommend Loch Kelly :)

Comment by yanni kyriacos (yanni) on Examples of Highly Counterfactual Discoveries? · 2024-05-19T11:28:44.672Z · LW · GW

Hi Jonas! Would you mind saying a bit more about TMI + Seeing That Frees? Thanks!

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-16T00:11:31.602Z · LW · GW

Yesterday Greg Sadler and I met with the President of the Australian Association of Voice Actors. Like us, they've been lobbying for more and better AI regulation from government. I was surprised by how much overlap we had in concerns and potential solutions:
1. Transparency and explainability of AI model data use (concern)

2. Importance of interpretability (solution)

3. Mis/dis information from deepfakes (concern)

4. Lack of liability for the creators of AI if any harms eventuate (concern + solution)

5. Unemployment without safety nets for Australians (concern)

6. Rate of capabilities development (concern)

They may even support the creation of an AI Safety Institute in Australia. Don't underestimate who could be allies moving forward!

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-15T00:04:43.343Z · LW · GW

Ilya Sutskever has left OpenAI https://twitter.com/ilyasut/status/1790517455628198322

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-14T23:33:29.690Z · LW · GW

Thanks for letting me know!

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-14T04:33:55.652Z · LW · GW

More people are going to quit labs / OpenAI. Will EA refill the leaky funnel?

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-14T04:31:24.037Z · LW · GW

[PHOTO] I sent 19 emails to politicians, had 4 meetings, and now I get emails like this. There is SO MUCH low hanging fruit in just doing this for 30 minutes a day (I would do it but my LTFF funding does not cover this). Someone should do this!

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-13T05:58:03.282Z · LW · GW

I expect (~ 75%) that the decision to "funnel" EAs into jobs at AI labs will become a contentious community issue in the next year. I think that over time more people will think it is a bad idea. This may have PR and funding consequences too.

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-12T02:27:06.232Z · LW · GW

Help clear something up for me: I am extremely confused (theoretically) about how we can simultaneously have:

1. An Artificial Superintelligence

2. It being controlled by humans (thereby creating misuse and concentration-of-power issues)

My intuition is that once it reaches a particular level of power it will be uncontrollable. Unless people are saying that we can have models 100x more powerful than GPT-4 without them having any agency??

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-05T11:03:21.496Z · LW · GW

Something I'm confused about: what is the threshold that needs to be met for the majority of people in the EA community to say something like "it would be better if EAs didn't work at OpenAI"?

Imagining the following hypothetical scenarios over 2024/25, I can't confidently predict whether they'd individually cause that response within EA:

  1. Ten to fifteen more OpenAI staff quit for varied and unclear reasons. No public info is gained beyond rumours.
  2. There is another board shakeup because senior leaders seem worried about Altman. Altman stays on.
  3. The Superalignment team is disbanded.
  4. OpenAI doesn't let the UK or US AISIs safety-test GPT-5/6 before release.
  5. There are strong rumours they've achieved weakly general AGI internally by the end of 2025.

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-05T02:11:46.675Z · LW · GW

"alignment researchers are found to score significantly higher in liberty (U=16035, p≈0)" This partly explains why so much of the alignment community doesn't support PauseAI!

"Liberty: Prioritizes individual freedom and autonomy, resisting excessive governmental control and supporting the right to personal wealth. Lower scores may be more accepting of government intervention, while higher scores champion personal freedom and autonomy..." 
https://forum.effectivealtruism.org/posts/eToqPAyB4GxDBrrrf/key-takeaways-from-our-ea-and-alignment-research-surveys#comments

[Image]

Comment by yanni kyriacos (yanni) on Why I'm doing PauseAI · 2024-05-04T03:10:27.305Z · LW · GW

Hi Tomás! Is there a prediction market for this that you know of?

Comment by yanni kyriacos (yanni) on Why I'm doing PauseAI · 2024-05-04T03:03:27.564Z · LW · GW

I think it is unrealistic to ask people to internalise that level of ambiguity. This is how EAs turn themselves into mental pretzels.

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-02T00:20:42.516Z · LW · GW

Something someone technical and interested in forecasting should look into: can LLMs reliably convert people's claims into a % confidence through sentiment analysis? This would be useful for forecasters I believe (and rationality in general).
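
For anyone who wants to poke at this, here's a minimal sketch of what a first experiment could look like, assuming the `openai` v1.x Python client; the prompt wording, model name and number-parsing regex are illustrative assumptions, not a tested recommendation:

```python
# Minimal sketch: ask an LLM to map a natural-language claim to an explicit
# probability. Assumes the `openai` v1.x Python client and OPENAI_API_KEY in
# the environment; the model name is illustrative.
import re
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Read the claim below and estimate how confident the speaker sounds, "
    "as a single probability between 0 and 1. Reply with the number only.\n\n"
    "Claim: {claim}"
)

def claim_to_confidence(claim: str) -> float | None:
    """Return the model's estimate of the speaker's confidence, or None if unparseable."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable chat model should work
        messages=[{"role": "user", "content": PROMPT.format(claim=claim)}],
        temperature=0,
    )
    # Pull the first number out of the reply, e.g. "0.8".
    match = re.search(r"\d*\.?\d+", response.choices[0].message.content)
    return float(match.group()) if match else None

print(claim_to_confidence("I'm fairly sure GPT-5 ships with agents next year."))
```

The interesting research question would then be calibration: whether these extracted numbers track the speakers' actual stated probabilities or forecasting track records.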

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-01T23:19:57.171Z · LW · GW

That seems fair enough!

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-01T11:00:52.828Z · LW · GW

Hi Johannes! Thanks for the suggestion :) I'm not sure I'd want it in the middle of a video call, but maybe in a forum context like this it could be cool?

Comment by yanni kyriacos (yanni) on Why I'm doing PauseAI · 2024-05-01T06:33:17.212Z · LW · GW

Putting my EA Forum comment here:

I'd like to make clear to anyone reading that you can support the PauseAI movement right now simply because you think it is useful right now. And then in the future, when conditions change, you can choose to stop supporting the PauseAI movement.

AI is changing extremely fast (e.g. technical work was probably our best bet a year ago; I'm less sure now). Supporting a particular tactic/intervention does not commit you to an ideology or team forever!

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-05-01T06:30:21.653Z · LW · GW

There have been multiple occasions where I've copied and pasted email threads into an LLM and asked it things like:

  1. What is X person saying?
  2. What are the cruxes in this conversation?
  3. Summarise this conversation.
  4. What are the key takeaways?
  5. What views are being missed from this conversation?

I really want an email plugin that basically brute-forces rationality INTO email conversations.
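
In case it helps someone prototype the plugin, here's a rough sketch of the copy-paste version, assuming the `openai` v1.x Python client; the model name and system prompt are illustrative assumptions:

```python
# Rough sketch: run the five questions above over a pasted email thread.
# Assumes the `openai` v1.x Python client and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The five questions from the list above.
QUESTIONS = [
    "What is each person in this thread actually saying?",
    "What are the cruxes in this conversation?",
    "Summarise this conversation.",
    "What are the key takeaways?",
    "What views are being missed from this conversation?",
]

def interrogate_thread(thread_text: str) -> dict[str, str]:
    """Ask each question about a pasted email thread and collect the answers."""
    answers = {}
    for question in QUESTIONS:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            messages=[
                {"role": "system", "content": "You analyse email threads carefully and answer concisely."},
                {"role": "user", "content": f"{question}\n\n--- EMAIL THREAD ---\n{thread_text}"},
            ],
            temperature=0,
        )
        answers[question] = response.choices[0].message.content
    return answers
```

A real plugin would presumably just wrap something like this around the mail client's API instead of manual copy-pasting.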

Comment by yanni kyriacos (yanni) on This is Water by David Foster Wallace · 2024-04-25T04:47:16.368Z · LW · GW

If you're into podcasts, the Very Bad Wizards guys did an ep on this essay, which I enjoyed: https://verybadwizards.com/episode/episode-227-a-terrible-master-david-foster-wallaces-this-is-water

Comment by yanni kyriacos (yanni) on Rejecting Television · 2024-04-24T01:49:04.311Z · LW · GW

Alcoholics are encouraged not to walk past liquor stores. Basically, physical availability is the biggest lever: keep your phone / laptop in a different room when you don't absolutely need them!

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-04-24T00:26:55.384Z · LW · GW

If GPT-5 actually comes with competent agents then I expect this to be a "Holy Shit" moment at least as big as ChatGPT's release. So if ChatGPT has been used by 200 million people, then I'd expect that to at least double within 6 months of GPT-5 agents' release. Maybe triple. So that "Holy Shit" moment means a greater share of the general public learning about the power of frontier models. With that will come another shift in the Overton Window. Good luck to us all.

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-04-22T03:17:03.711Z · LW · GW

The catchphrase I walk around with in my head regarding the optimal strategy for AI Safety is something like: Creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (*we already have AGI).

I thought it might be useful to spell that out.

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-04-21T04:45:36.734Z · LW · GW

I recently discovered the idea of driving all blames into oneself, which immediately resonated with me. It is relatively hardcore; the kind of thing that would turn David Goggins into a Buddhist.

Gemini did a good job of summarising it:

This quote by Pema Chödron, a renowned Buddhist teacher, represents a core principle in some Buddhist traditions, particularly within Tibetan Buddhism. It's called "taking full responsibility" or "taking self-blame" and can be a bit challenging to understand at first. Here's a breakdown:

What it Doesn't Mean:

  • Self-Flagellation: This practice isn't about beating yourself up or dwelling on guilt.
  • Ignoring External Factors: It doesn't deny the role of external circumstances in a situation.

What it Does Mean:

  • Owning Your Reaction: It's about acknowledging how a situation makes you feel and taking responsibility for your own emotional response.
  • Shifting Focus: Instead of blaming others or dwelling on what you can't control, you direct your attention to your own thoughts and reactions.
  • Breaking Negative Cycles: By understanding your own reactions, you can break free from negative thought patterns and choose a more skillful response.

Analogy:

Imagine a pebble thrown into a still pond. The pebble represents the external situation, and the ripples represent your emotional response. While you can't control the pebble (the external situation), you can control the ripples (your reaction).

Benefits:

  • Reduced Suffering: By taking responsibility for your own reactions, you become less dependent on external circumstances for your happiness.
  • Increased Self-Awareness: It helps you understand your triggers and cultivate a more mindful response to situations.
  • Greater Personal Growth: By taking responsibility, you empower yourself to learn and grow from experiences.

Here are some additional points to consider:

  • This practice doesn't mean excusing bad behavior. You can still hold others accountable while taking responsibility for your own reactions.
  • It's a gradual process. Be patient with yourself as you learn to practice this approach.

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-04-20T06:03:31.589Z · LW · GW

Be the meme you want to see in the world (screenshot).

Comment by yanni kyriacos (yanni) on Neil Warren's Shortform · 2024-04-03T00:53:15.711Z · LW · GW

Strong upvote, but I won't tell you why.

Comment by yanni kyriacos (yanni) on Apply to be a Safety Engineer at Lockheed Martin! · 2024-03-31T22:26:00.321Z · LW · GW

Thanks for the feedback, Neil! At LM we know that insights can come from anywhere. We appreciate your input regarding the training video's completeness and the deadline duration. In the meantime, please feel free to apply for one of our graduate positions, where presumably one can feel better about working on capabilities since 'someone else will just take the job anyway': https://www.lockheedmartinjobs.com/job/aguadilla/software-engineer-fire-control-weapons-early-career/694/53752768720

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-03-29T00:17:36.363Z · LW · GW

I have heard rumours that an AI Safety documentary is being made. Separately, a good friend of mine is also seriously considering making one, but he isn't "in" AI Safety. If you know who this first group is and can put me in touch with them, it might be worth getting across each other's plans.

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-03-28T01:35:59.969Z · LW · GW

I like the fact that, despite their not being (relatively) young when they died, the LW banner states that Kahneman & Vinge died "FAR TOO YOUNG", pointing to the idea that death is always bad and/or that it is bad when people die while still making positive contributions to the world (Kahneman published "Noise" in 2021!).

Comment by yanni kyriacos (yanni) on Should rationalists be spiritual / Spirituality as overcoming delusion · 2024-03-27T07:45:28.818Z · LW · GW

I can't stress this enough: the two most important things to happen to me in my life have been (1) my daughter being born and (2) receiving a pointing-out instruction from Loch Kelly.

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-03-27T07:35:49.638Z · LW · GW

[GIF] A feature I'd love on the forum: while posts are read back to you, the part of the text being read is highlighted. This exists on Naturalreaders.com and I'd love to see it here (great for people who have wandering minds like me).

Comment by yanni kyriacos (yanni) on Anxiety vs. Depression · 2024-03-22T01:26:25.884Z · LW · GW

I'll add that for some period I was on 100mg of sertraline, but I don't think that cured me of anything.

Comment by yanni kyriacos (yanni) on Anxiety vs. Depression · 2024-03-22T01:23:16.089Z · LW · GW

Haha, no yoga. Some combination of the following:

- Metacognitive therapy + CBT with this guy for 4 years
- Lots of exercise (resistance training and cardio)
- Sleeping lots
- Found meaningful work + a less stressful industry
- Stable relationship (monogamous)
- This one is the curveball, as I had already cured my OCD/GAD by the time I found it, but it definitely lifted my wellbeing a lot: https://lochkelly.org/

Comment by yanni kyriacos (yanni) on Anxiety vs. Depression · 2024-03-18T05:01:53.032Z · LW · GW

Are you interested in receiving any advice from someone who used to have OCD/GAD but now lives with almost not enough anxiety?

Comment by yanni kyriacos (yanni) on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-14T03:37:15.391Z · LW · GW

Thanks for posting and I hope you're doing ok! 

I have two questions: 

1/ When someone says they "believe in God", does this mean something like "I assign a ≥ 50% probability to there being an omnipotent, omnipresent, and omniscient intelligence"?

2/ How do you update on the non-religious views of someone (like Huberman) after they say they believe in God? Do they become less trustworthy on other topics?

Comment by yanni kyriacos (yanni) on yanni's Shortform · 2024-03-13T23:58:40.352Z · LW · GW

I think acting on the margins is still very underrated. For example, I think 5x the amount of advocacy for a Pause on capabilities development of frontier AI models would be great. I also think in 12 months' time it would be fine for me to reevaluate this take and say something like 'ok, that's enough Pause advocacy'.

Basically, you shouldn't feel 'locked in' to any view. And if you're starting to feel like you're part of a tribe, then that could be a bad sign that you've been psychographically locked in.

Comment by yanni kyriacos (yanni) on Why I think it's net harmful to do technical safety research at AGI labs · 2024-02-08T00:45:21.745Z · LW · GW

Hi Benjamin - would be interested in your take on a couple of things:

1. By recommending people work at big labs, do you think this has a positive Halo Effect for the labs' brands? I.e. 80k is known for wanting people to do good in the world, so by recommending that people invest their careers at a lab, those positive brand associations get passed on to the lab (this is how most brand partnerships work).

2. If you think the answer to #1 is Yes, then do you believe the cost of this Halo Effect is outweighed by the benefit of having safety-minded EA / Rationalist folk inside big labs?