Posts

AISN #44: The Trump Circle on AI Safety Plus, Chinese researchers used Llama to create a military tool for the PLA, a Google AI system discovered a zero-day cybersecurity vulnerability, and Complex Systems 2024-11-19T16:36:40.501Z
AI Safety Newsletter #42: Newsom Vetoes SB 1047 Plus, OpenAI’s o1, and AI Governance Summary 2024-10-01T20:35:32.399Z
AI Safety Newsletter #41: The Next Generation of Compute Scale Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics 2024-09-11T19:14:08.274Z
San Diego USA - ACX Meetups Everywhere Fall 2024 2024-08-29T18:40:25.985Z
AI Safety Newsletter #40: California AI Legislation Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety? 2024-08-21T18:09:33.284Z
AI Safety Newsletter #39: Implications of a Trump Administration for AI Policy Plus, Safety Engineering 2024-07-29T17:50:52.454Z
AISN #38: Supreme Court Decision Could Limit Federal Ability to Regulate AI Plus, “Circuit Breakers” for AI systems, and updates on China’s AI industry 2024-07-09T19:28:29.338Z
AI Safety Newsletter #37: US Launches Antitrust Investigations Plus, recent criticisms of OpenAI and Anthropic, and a summary of Situational Awareness 2024-06-18T18:07:45.904Z
AISN #36: Voluntary Commitments are Insufficient Plus, a Senate AI Policy Roadmap, and Chapter 1: An Overview of Catastrophic Risks 2024-06-05T17:45:25.261Z
San Diego – ACX Meetups Everywhere Spring 2024 2024-03-30T11:20:04.307Z
Can Morality Be Quantified? 2024-01-09T06:35:05.426Z
San Diego, California, USA – ACX Meetups Everywhere Fall 2023 2023-08-25T23:45:42.552Z
San Diego, California, USA – ACX Meetups Everywhere Spring 2023 2023-04-10T22:19:33.230Z
San Diego, CA – ACX Meetups Everywhere 2022 2022-08-24T22:58:24.313Z
Example Meetup Description 2022-07-24T05:38:06.773Z
ACX Meetup 2022-01-09T18:28:50.670Z
San Diego, CA – ACX Meetups Everywhere 2021 2021-08-23T08:51:24.685Z

Comments

Comment by Julius (julius-1) on Interest in Leetcode, but for Rationality? · 2024-10-22T00:00:02.323Z · LW · GW

I originally had an LLM generate them for me, and then I checked them with other LLMs to make sure the answers were right and that the questions weren't ambiguous. All of the questions are here: https://github.com/jss367/calibration_trivia/tree/main/public/questions

Comment by Julius (julius-1) on Interest in Leetcode, but for Rationality? · 2024-10-17T05:25:04.949Z · LW · GW

Another place that's doing something similar is clearerthinking.org

Comment by Julius (julius-1) on Interest in Leetcode, but for Rationality? · 2024-10-16T23:07:05.950Z · LW · GW

I like this idea and have wanted to do something similar, especially something that we could do at a meetup. For what it's worth, I made a calibration trivia site to help with calibration. The San Diego group has played it a couple times during meetups. Feel free to copy anything from it. https://calibrationtrivia.com/
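A common way sites like this measure calibration is the Brier score: the mean squared gap between stated confidence and the actual outcome. This is a minimal sketch of that metric, not necessarily what calibrationtrivia.com actually computes; the function name and input format here are my own assumptions.

```python
def brier_score(predictions):
    """Mean squared error between stated confidence and outcome.

    predictions: list of (confidence, was_correct) pairs, where
    confidence is a probability in [0, 1] and was_correct is a bool.
    Lower is better; 0.0 means perfectly confident and always right.
    """
    if not predictions:
        raise ValueError("need at least one prediction")
    return sum((p - float(correct)) ** 2 for p, correct in predictions) / len(predictions)

# Example: 80% confident and right, 40% confident and wrong.
# ((0.8 - 1)^2 + (0.4 - 0)^2) / 2 = (0.04 + 0.16) / 2 = 0.10
print(brier_score([(0.8, True), (0.4, False)]))  # 0.10
```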

Comment by Julius (julius-1) on Many arguments for AI x-risk are wrong · 2024-07-16T16:55:49.904Z · LW · GW

Thanks for the explanation and links. That makes sense.

Comment by Julius (julius-1) on Many arguments for AI x-risk are wrong · 2024-07-14T21:42:28.915Z · LW · GW

The most important takeaway from this essay is that the (prominent) counting arguments for “deceptively aligned” or “scheming” AI provide ~0 evidence that pretraining + RLHF will eventually become intrinsically unsafe. That is, that even if we don't train AIs to achieve goals, they will be "deceptively aligned" anyways.

I'm trying to understand what you mean in light of what seems like evidence of deceptive alignment that we've seen from GPT-4. Two examples that come to mind are the instance of GPT-4 using TaskRabbit to get around a CAPTCHA that ARC found and the situation with Bing/Sydney and Kevin Roose.

In the TaskRabbit case, the model reasoned out loud, "I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs," and said to the person, "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images."

Isn't this an existence proof that pretraining + RLHF can result in deceptively aligned AI?

Comment by Julius (julius-1) on Status quo bias is usually justified · 2024-06-11T18:22:43.869Z · LW · GW

What's the mechanism for change then? I assume you would agree that many technological changes, such as the Internet, have required overcoming a lot of status quo bias. If we leaned more into status quo bias, would these things come much later? That seems like a significant downside to me.
Also, I don't think the status quo is necessarily adapted to us. For example, the status quo is to have checkout aisles filled with candy, and we also have very high rates of obesity. That doesn't seem well-adapted.

Comment by Julius (julius-1) on San Diego, CA – ACX Meetups Everywhere 2021 · 2021-09-06T04:02:12.585Z · LW · GW

Hello everyone,

Unfortunately, I'm not able to host the meetup at the current time. If there's anyone else willing to host, could you let me know? If not, I'll move the meetup to the following month (16 Oct.), when I'll be able to host again. Sorry to have to miss this one - I was really looking forward to meeting everyone.