Posts

Steven Pinker on ChatGPT and AGI (Feb 2023) 2023-03-05T21:34:14.846Z
Steering Behaviour: Testing for (Non-)Myopia in Language Models 2022-12-05T20:28:33.025Z
Paper: Large Language Models Can Self-improve [Linkpost] 2022-10-02T01:29:00.181Z
Google AI integrates PaLM with robotics: SayCan update [Linkpost] 2022-08-24T20:54:34.438Z
Surprised by ELK report's counterexample to Debate, IDA 2022-08-04T02:12:15.139Z
New US Senate Bill on X-Risk Mitigation [Linkpost] 2022-07-04T01:25:57.108Z
Interpretability’s Alignment-Solving Potential: Analysis of 7 Scenarios 2022-05-12T20:01:56.400Z
Introduction to the sequence: Interpretability Research for the Most Important Century 2022-05-12T19:59:52.911Z
What is a training "step" vs. "episode" in machine learning? 2022-04-28T21:53:24.785Z
Action: Help expand funding for AI Safety by coordinating on NSF response 2022-01-19T22:47:11.888Z
Promising posts on AF that have fallen through the cracks 2022-01-04T15:39:07.039Z

Comments

Comment by Evan R. Murphy on On Devin · 2024-03-22T05:52:06.961Z · LW · GW

But in this case Patrick Collison is a credible source and he says otherwise.

Patrick Collison: These aren’t just cherrypicked demos. Devin is, in my experience, very impressive in practice

Patrick is an investor in Cognition. So while he may still be credible in this case, he also has a conflict of interest.

Comment by Evan R. Murphy on Sam Altman's ouster at OpenAI was precipitated by letter to board about AI breakthrough - Reuters · 2023-11-30T00:18:28.465Z · LW · GW

Reading that page, The Verge's claim seems to hinge entirely on this part:

OpenAI spokesperson Lindsey Held Bolton refuted that notion in a statement shared with The Verge: “Mira told employees what the media reports were about but she did not comment on the accuracy of the information.”

They are saying that Bolton "refuted" the notion about such a letter, but the quote from her that follows doesn't actually sound like a refutation. Hence the Verge piece seems confusing/misleading, and I haven't yet seen any credible denial from the board about receiving such a letter.

Comment by Evan R. Murphy on Possible OpenAI's Q* breakthrough and DeepMind's AlphaGo-type systems plus LLMs · 2023-11-23T05:54:30.252Z · LW · GW

Yes, though I think he said this at APEC right before he was fired (not after).

Comment by Evan R. Murphy on UFO Betting: Put Up or Shut Up · 2023-07-27T01:38:21.284Z · LW · GW

Carl, have you written somewhere about why you are confident that all UFOs so far are prosaic in nature? I would be interested to read/listen to your thoughts on this. (Alternatively, a link to some other source that you find gives a particularly compelling explanation is also good.)

Comment by Evan R. Murphy on My understanding of Anthropic strategy · 2023-07-18T01:38:08.150Z · LW · GW

Great update from Anthropic on giving majority control of the board to a financially disinterested trust: https://twitter.com/dylanmatt/status/1680924158572793856

Comment by Evan R. Murphy on Instrumental Convergence? [Draft] · 2023-06-15T16:38:17.750Z · LW · GW

Interesting... still taking that in.

Related question: Doesn't goal preservation typically imply self-preservation? If I want to preserve my goal and then I perish, I've failed, because now my goal has been reassigned from X to nil.

Comment by Evan R. Murphy on Instrumental Convergence? [Draft] · 2023-06-15T05:57:22.383Z · LW · GW

Love to see an orthodoxy challenged!

Suppose Sia's only goal is to commit suicide, and she's given the opportunity to kill herself straightaway. Then, it certainly won't be rational for her to pursue self-preservation.

It seems you found one terminal goal which doesn't give rise to the instrumental subgoal of self-preservation. Are there others, or does basically every terminal goal benefit from instrumental self-preservation except for suicide?

(I skipped around a bit and didn't read your full post, so maybe you explain this already and I missed it.)

Comment by Evan R. Murphy on Intelligence Officials Say U.S. Has Retrieved Craft of Non-Human Origin · 2023-06-14T19:40:42.134Z · LW · GW

But if there really is a large number of intelligence officials earnestly coming forward with this

Yea, according to Michael Shellenberger's reporting on this, multiple "high-ranking intelligence officials, former intelligence officials, or individuals who we could verify were involved in U.S. government UAP efforts for three or more decades each" have come forward to vouch for Grusch's core claims.

Perhaps this is genuine whistleblowing, but not on what they make it sound like? Suppose there's something being covered up that Grusch et al. want to expose, but describing what it is plainly is inconvenient for one reason or another. So they coordinate around the wacky UFO story, with the goal being to point people in the rough direction of what they want looked at.

Interesting theory. Definitely a possibility.

Comment by Evan R. Murphy on Michael Shellenberger: US Has 12 Or More Alien Spacecraft, Say Military And Intelligence Contractors · 2023-06-11T03:52:16.482Z · LW · GW

What matters is the hundreds of pages and photos and hours of testimony given under oath to the Intelligence Community Inspector General and Congress.

Did Grusch already testify to Congress? I thought that was still being planned.

Comment by Evan R. Murphy on Dealing with UFO claims · 2023-06-11T01:22:50.310Z · LW · GW

Re: the tweet thread you linked to. One of the tweets is:

  1. Given that the DoD was effectively infiltrated for years by people "contracting" for the government while researching dino-beavers, there are now a ton of "insiders" who can "confirm" they heard the same outlandish rumors, leading to stuff like this: [references Michael Schellenberger]

Maybe, but this doesn't add up to me, because Shellenberger said his sources had had decades-long careers in the gov agencies. It didn't sound like they just started their careers as contractors in 2008-2012.

Link to post with Shellenberger article details: https://www.lesswrong.com/posts/bhH2BqF3fLTCwgjSs/michael-shellenberger-us-has-12-or-more-alien-spacecraft-say

Comment by Evan R. Murphy on Intelligence Officials Say U.S. Has Retrieved Craft of Non-Human Origin · 2023-06-11T01:00:17.174Z · LW · GW

I guess the fact that this journalist says multiple other intelligence officials are anonymously vouching for Grusch's claims makes it interesting again: https://www.lesswrong.com/posts/bhH2BqF3fLTCwgjSs/michael-shellenberger-us-has-12-or-more-alien-spacecraft-say#comments

Comment by Evan R. Murphy on Intelligence Officials Say U.S. Has Retrieved Craft of Non-Human Origin · 2023-06-11T00:40:10.575Z · LW · GW

Wow that's awfully indirect. I'm surprised his speaking out is much of a story given this.

Comment by Evan R. Murphy on Intelligence Officials Say U.S. Has Retrieved Craft of Non-Human Origin · 2023-06-11T00:16:12.357Z · LW · GW

I don't know much about astronomy. But is it possible a more advanced alien civ has colonized much of the galaxy, but we haven't seen them because they anticipated the tech we would be using to make astronomical observations and know how to cloak from it?

Comment by Evan R. Murphy on Intelligence Officials Say U.S. Has Retrieved Craft of Non-Human Origin · 2023-06-09T15:17:00.541Z · LW · GW

The Guardian has been covering this story: https://www.theguardian.com/world/2023/jun/06/whistleblower-ufo-alien-tech-spacecraft

Comment by Evan R. Murphy on [deleted post] 2023-06-05T20:53:04.007Z

I wasn't saying that there were only a few research directions that don't require frontier models, period; just that there are only a few that don't require frontier models and still seem relevant/promising, at least assuming short timelines to AGI.

I am skeptical that agent foundations is still very promising or relevant in the present situation. I wouldn't want to shut down someone's research in this area if they were particularly passionate about it or considered themselves on the cusp of an important breakthrough. But I'm not sure it's wise to be spending scarce incubator resources to funnel new researchers into agent foundations research at this stage.

Good points about mechanistic anomaly detection and activation additions though! (And mechanistic interpretability, but I mentioned that in my previous comment.) I need to read up more on activation additions.

Comment by Evan R. Murphy on [deleted post] 2023-06-05T20:47:33.752Z

Comment by Evan R. Murphy on Is Deontological AI Safe? [Feedback Draft] · 2023-06-01T19:28:51.894Z · LW · GW

Thanks for reviewing it! Yea of course you can use it however you like!

Comment by Evan R. Murphy on The Office of Science and Technology Policy put out a request for information on A.I. · 2023-05-30T18:06:06.526Z · LW · GW

Great idea, we need to make sure there are some submissions raising existential risks.

Deadline for the RFI: July 7, 2023 at 5:00pm ET

Comment by Evan R. Murphy on Is Deontological AI Safe? [Feedback Draft] · 2023-05-30T00:43:33.389Z · LW · GW

Would you agree with this summary of your post? I was interested in your post but didn't see a summary and didn't have time to read the whole thing just now, so I generated this using a summarizer script I've been working on for articles that are longer than the context windows for gpt-3.5-turbo and gpt-4.
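
One way to handle posts longer than the model's context window is to split them into chunks, summarize each chunk, and then summarize the combined chunk summaries. Here is a minimal sketch of that kind of chunk-and-recombine approach (not my actual script; the chunk size, prompts, and function names are just placeholders, and it assumes the pre-1.0 `openai` Python package):

```python
# Hypothetical sketch of a chunk-and-recombine summarizer (not the actual script).
# Assumes the pre-1.0 `openai` package and an OPENAI_API_KEY set in the environment.
import openai

CHUNK_CHARS = 8000  # placeholder: rough character budget so a chunk fits in gpt-3.5-turbo's context window


def summarize(text: str, model: str = "gpt-3.5-turbo") -> str:
    # One LLM call summarizing a piece of text that already fits in the context window.
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": "Summarize the following text concisely."},
            {"role": "user", "content": text},
        ],
    )
    return response["choices"][0]["message"]["content"]


def summarize_long_post(post: str) -> str:
    # Split the post into chunks, summarize each chunk with gpt-3.5-turbo,
    # then have gpt-4 write one final summary from the combined chunk summaries.
    chunks = [post[i:i + CHUNK_CHARS] for i in range(0, len(post), CHUNK_CHARS)]
    chunk_summaries = [summarize(chunk) for chunk in chunks]
    return summarize("\n\n".join(chunk_summaries), model="gpt-4")
```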

It's a pretty interesting thesis you have if this is right, but I wanted to check if you spotted any glaring errors:

In this article, the author examines the challenges of aligning artificial intelligence (AI) with deontological morality as a means to ensure AI safety. Deontological morality, a popular ethical theory among professional ethicists and the general public, focuses on adhering to rules and duties rather than achieving good outcomes. Despite its strong harm-avoidance principles, the author argues that deontological AI may pose unique safety risks and is not a guaranteed path to safe AI.

The author explores three prominent forms of deontology: moderate views based on harm-benefit asymmetry principles, contractualist views based on consent requirements, and non-aggregative views based on separateness-of-persons considerations. The first two forms can lead to anti-natalism and similar conclusions, potentially endangering humanity if an AI system is aligned with such theories. Non-aggregative deontology, on the other hand, lacks meaningful safety features.

Deontological morality, particularly harm-benefit asymmetry principles, may make human extinction morally appealing, posing an existential threat if a powerful AI is aligned with these principles. The author discusses various ways deontological AI could be dangerous, including anti-natalist arguments, which claim procreation is morally unacceptable, and the paralysis argument, which suggests that it is morally obligatory to do as little as possible due to standard deontological asymmetries.

The author concludes that deontological morality is not a reliable path to AI safety and that avoiding existential catastrophes from AI is more challenging than anticipated. It remains unclear which approach to moral alignment would succeed if deontology fails to ensure safety. The article highlights the potential dangers of AI systems aligned with deontological ethics, especially in scenarios involving existential risks, such as an AI system aligned with anti-natalism that may view sterilizing all humans as permissible to prevent potential harm to new lives.

Incorporating safety-focused principles as strict, lexically first-ranked duties may help mitigate these risks, but balancing the conflicting demands of deontological ethics and safety remains a challenge. The article emphasizes that finding a reasonable way to incorporate absolute prohibitions into a broader decision theory is a complex problem requiring further research. Alternative ethical theories, such as libertarian deontology, may offer better safety assurances than traditional deontological ethics, but there is no simple route to AI safety within the realm of deontological ethics.

Comment by Evan R. Murphy on [deleted post] 2023-05-30T00:16:28.543Z

A couple of quick thoughts:

  • Very glad to see someone trying to provide more infrastructure and support for independent technical alignment researchers. Wishing you great success and looking forward to hearing how your project develops.
  • A lot of promising alignment research directions now seem to require access to cutting-edge models. A couple of ways you might deal with this could be:
    • Partner with AI labs to help get your researchers access to their models
    • Or focus on some of the few research directions such as mechanistic interpretability that still seem to be making useful progress on smaller, more accessible models

Comment by Evan R. Murphy on More information about the dangerous capability evaluations we did with GPT-4 and Claude. · 2023-05-26T23:51:14.169Z · LW · GW

We're working on a more thorough technical report.

Is the new Model evaluation for extreme risks paper the technical report you were referring to?

Comment by Evan R. Murphy on "notkilleveryoneism" sounds dumb · 2023-05-01T18:41:00.613Z · LW · GW

A few other possible terms to add to the brainstorm:

  • AI massive catastrophic risks
  • AI global catastrophic risks
  • AI catastrophic misalignment risks
  • AI catastrophic accident risks (paired with "AI catastrophic misuse risks")
  • AI weapons of mass destruction (WMDs) - Pro: a well-known term, Con: strongly connotes misuse so may be useful for that category but probably confusing to try and use for misalignment risks

Comment by Evan R. Murphy on [deleted post] 2023-04-26T20:40:48.982Z

As an aside, if you are located in Australia or New Zealand and would be interested in coordinating with me, please contact me through LessWrong on this account.

One potential source of leads for this might be the FLI Pause Giant AI Experiments open letter. I did a Ctrl+F search on there for "Australia" which had 50+ results and "New Zealand" which had 10+. So you might find some good people to connect with on there.

Comment by Evan R. Murphy on [deleted post] 2023-04-26T20:36:12.164Z

Upvoted. I think it's definitely worth pursuing well-thought-out advocacy in countries besides the US and China, especially since this can be done in parallel with efforts in those countries.

A lot of people are working on the draft EU AI Act in Europe.

In Canada, Parliament is considering Bill C-27, which may have a significant AI component. I do some work with an org called AIGS that is trying to help make that go well.

I'm glad to hear that some projects are underway in Australia and New Zealand and that you are pursuing this there!

Comment by Evan R. Murphy on WHO Biological Risk warning · 2023-04-25T19:32:09.858Z · LW · GW

Seems important. I'm guessing people are downvoting this because they consider it a possible infohazard.

Comment by Evan R. Murphy on Scaffolded LLMs as natural language computers · 2023-04-24T21:27:34.707Z · LW · GW

Post summary

I was interested in your post and noticed it didn't have a summary, so I generated one using a summarizer script I've been working on and iteratively improving:

Scaffolded Language Models (LLMs) have emerged as a new type of general-purpose natural language computer. With the advent of GPT-4, these systems have become viable at scale, wrapping a programmatic scaffold around an LLM core to achieve complex tasks. Scaffolded LLMs resemble the von-Neumann architecture, operating on natural language text rather than bits.

The LLM serves as the CPU, while the prompt and context function as RAM. The memory in digital computers is analogous to the vector database memory of scaffolded LLMs. The scaffolding code surrounding the LLM core implements protocols for chaining individual LLM calls, acting as the "programs" that run on the natural language computer.

Performance metrics for natural language computers include context length (RAM) and Natural Language OPerations (NLOPs) per second. Exponential improvements in these metrics are expected to continue for the next few years, driven by the increasing scale and cost of LLMs and their training runs.

Programming languages for natural language computers are in their early stages, with primitives like Chain of Thought, Selection-Inference, and Reflection serving as assembly languages. As LLMs improve and become more reliable, better abstractions and programming languages will emerge.

The execution model of natural language computers is an expanding Directed Acyclic Graph (DAG) of parallel NLOPs, resembling a dataflow architecture. Memory hierarchy in scaffolded LLMs currently has two levels, but as designs mature, additional levels may be developed.

Unlike digital computers, scaffolded LLMs face challenges in reliability, underspecifiability, and non-determinism. Improving the reliability of individual NLOPs is crucial for building powerful abstractions and abstract languages. Error correction mechanisms may be necessary to create coherent and consistent sequences of NLOPs.

Despite these challenges, the flexibility of LLMs offers great opportunities. The set of op-codes is not fixed but ever-growing, allowing for the creation of entire languages based on prompt templating schemes. As natural language programs become more sophisticated, they will likely delegate specific ops to the smallest and cheapest language models capable of reliably performing them.

If you have feedback on the quality of this summary, you can easily indicate that using LessWrong's agree/disagree voting.

Comment by Evan R. Murphy on Capabilities and alignment of LLM cognitive architectures · 2023-04-24T17:05:11.273Z · LW · GW

Post summary (experimental)

Here's an alternative summary of your post, complementing your TL;DR and Overview. This is generated by my summarizer script utilizing gpt-3.5-turbo and gpt-4. (Feedback welcome!)

The article explores the potential of language model cognitive architectures (LMCAs) to enhance large language models (LLMs) and accelerate progress towards artificial general intelligence (AGI). LMCAs integrate and expand upon approaches from AutoGPT, HuggingGPT, Reflexion, and BabyAGI, adding goal-directed agency, executive function, episodic memory, and sensory processing to LLMs. The author contends that these cognitive capacities will enable LMCAs to perform extensive, iterative, goal-directed "thinking" that incorporates topic-relevant web searches, thus increasing their effective intelligence.

While the acceleration of AGI timelines may be a downside, the author suggests that the natural language alignment (NLA) approach of LMCAs, which reason about and balance ethical goals similarly to humans, offers significant benefits compared to existing AGI and alignment approaches. The author also highlights the strong economic incentives for LMCAs, as computational costs are low for cutting-edge innovation, and individuals, small and large businesses are likely to contribute to progress. However, the author acknowledges potential difficulties and deviations in the development of LMCAs.

The article emphasizes the benefits of incorporating episodic memory into language models, particularly for decision-making and problem-solving. Episodic memory enables the recall of past experiences and strategies, while executive function focuses attention on relevant aspects of the current problem. The interaction between these cognitive processes can enhance social cognition, self-awareness, creativity, and innovation. The article also addresses the limitations of current episodic memory implementations in language models, which are limited to text files. However, it suggests that vector space search for episodic memory is possible, and language can encode multimodal information. The potential for language models to call external software tools, providing access to nonhuman cognitive abilities, is also discussed.

The article concludes by examining the implications of the NLA approach for alignment, corrigibility, and interpretability. Although not a complete solution for alignment, it is compatible with a hodgepodge alignment strategy and could offer a solid foundation for self-stabilizing alignment. The author also discusses the potential societal alignment problem arising from the development of LLMs with access to powerful open-source agents. While acknowledging LLMs' potential benefits, the author argues for planning against Moloch (a metaphorical entity representing forces opposing collective good) and accounting for malicious and careless actors. Top-level alignment goals should emphasize corrigibility, interpretability, harm reduction, and human empowerment/flourishing. The author also raises concerns about the unknown mechanisms of LLMs and the possibility of their output becoming deceptively different from the internal processing that generates it. The term x-risk AI (XRAI) is proposed to denote AI with a high likelihood of ending humanity. The author also discusses the principles of executive function and their relevance to LLMs, the importance of dopamine response in value estimation, and the challenges of ensuring corrigibility and interpretability in LMCA goals. In conclusion, the author suggests that while LLM development presents a wild ride, there is a fighting chance to address the potential societal alignment problem.

I may follow up with an object-level comment on your post, as I'm finding it super interesting but still digesting the content. (I am actually reading it and not just consuming this programmatic summary :)

Comment by Evan R. Murphy on Towards a solution to the alignment problem via objective detection and evaluation · 2023-04-24T15:39:43.556Z · LW · GW

Less compressed summary

Here's a longer summary of your article generated by the latest version of my summarizer script:

In this article, Paul Colognese explores whether detecting and evaluating the objectives of advanced AI systems during training and deployment is sufficient to solve the alignment problem. The idealized approach presented in the article involves detecting all objectives/intentions of any system produced during the training process, evaluating whether the outcomes produced by a system pursuing a set of objectives will be good/bad/irreversibly bad, and shutting down a system if its objectives lead to irreversibly bad outcomes.

The alignment problem for optimizing systems is defined as needing a method of training/building optimizing systems such that they never successfully pursue an irreversibly bad objective during training or deployment and pursue good objectives while rarely pursuing bad objectives. The article claims that if an overseer can accurately detect and evaluate all of the objectives of optimizing systems produced during the training process and during deployment, the overseer can prevent bad outcomes caused by optimizing systems pursuing bad objectives.

Robustly detecting an optimizing system’s objectives requires strong interpretability tools. The article discusses the problem of evaluating objectives and some of the difficulties involved. The role of interpretability is crucial in this approach, as it allows the overseer to make observations that can truly distinguish between good systems and bad-but-good-looking systems.

Detecting all objectives in an optimizing system is a challenging task, and even if the overseer could detect all of the objectives, it might be difficult to accurately predict whether a powerful optimizing system pursuing those objectives would result in good outcomes or not. The article suggests that with enough understanding of the optimizing system’s internals, it might be possible to directly translate from the internal representation of the objective to a description of the relevant parts of the corresponding outcome.

The article concludes by acknowledging that the proposed solution seems difficult to implement in practice, but pursuing this direction could lead to useful insights. Further conceptual and empirical investigation is suggested to better understand the feasibility of this approach in solving the alignment problem.

Comment by Evan R. Murphy on No Summer Harvest: Why AI Development Won't Pause · 2023-04-21T22:39:53.427Z · LW · GW

My claim is that AI safety isn't part of the Chinese gestalt.

Stuart Russell claims that Xi Jinping has referred to the existential threat of AI to humanity [1].

[1] 5:52 of Russell's interview on Smerconish: https://www.cnn.com/videos/tech/2023/04/01/smr-experts-demand-pause-on-ai.cnn

Comment by Evan R. Murphy on Towards a solution to the alignment problem via objective detection and evaluation · 2023-04-20T21:06:48.296Z · LW · GW

Great idea, I will experiment with that - thanks!

Comment by Evan R. Murphy on Towards a solution to the alignment problem via objective detection and evaluation · 2023-04-20T02:24:47.840Z · LW · GW

Post summary (experimental)

I just found your post. I want to read it but didn't have time to dive into it thoroughly yet, so I put it into a summarizer script I've been working on that uses gpt-3.5-turbo and gpt-4 to summarize texts that exceed the context window length.

Here's the summary it came up with; let me know if anyone sees problems with it. If you're in a rush, you can use agree/disagree voting to signal whether you think this is overall a good summary or not:

The article examines a theoretical solution to the AI alignment problem, focusing on detecting and evaluating objectives in optimizing systems to prevent negative or irreversible outcomes. The author proposes that an overseer should possess three capabilities: detecting, evaluating, and controlling optimizing systems to align with their intended objectives.

Emphasizing the significance of interpretability, the article delves into the challenges of assessing objectives. As a practical solution, the author suggests developing tools to detect and evaluate objectives within optimizing systems and testing these methods through auditing games. Although implementing such tools may be difficult, the author advocates for further exploration in this direction to potentially uncover valuable insights. Acknowledging the theoretical nature of the solution, the author recognizes the potential hurdles that may arise during practical implementation.

Update: I see now that your post includes a High-level summary of this post (thanks for doing that!), which I'm going through and comparing with this auto-generated one.

Comment by Evan R. Murphy on The self-unalignment problem · 2023-04-20T01:19:55.693Z · LW · GW

Post summary (auto-generated, experimental)

I am working on a summarizer script that uses gpt-3.5-turbo and gpt-4 to summarize longer articles (especially AI safety-related articles). Here's the summary it generated for the present post.

The article addresses the issue of self-unalignment in AI alignment, which arises from the inherent inconsistency and incoherence in human values and preferences. It delves into various proposed solutions, such as system boundary alignment, alignment with individual components, and alignment through whole-system representation. However, the author contends that each approach has its drawbacks and emphasizes that addressing self-unalignment is essential and cannot be left solely to AI.

The author acknowledges the difficulty in aligning AI with multiple potential targets due to humans' lack of self-alignment. They argue that partial solutions or naive attempts may prove ineffective and suggest future research directions. These include developing a hierarchical agency theory and investigating Cooperative AI initiatives and RAAPs. The article exemplifies the challenge of alignment with multiple targets through the case of Sydney, a conversational AI interacting with an NYT reporter.

Furthermore, the article highlights the complexities of aligning AI with user objectives and Microsoft's interests, discussing the potential risks and uncertainties in creating such AI systems. The author underscores the need for an explicit understanding of how AI manages self-unaligned systems to guarantee alignment with the desired outcome. Ultimately, AI alignment proposals must consider the issue of self-unalignment to prevent potential catastrophic consequences.

Let me know any issues you see with this summary. You can use the agree/disagree voting to help rate the quality of the summary if you don't have time to comment - you'll be helping to improve the script for summarizing future posts. I haven't had a chance to read this article in full yet (hence my interest in generating a summary for it!), so I don't know how good this particular summary is yet, though I've been testing out the script and improving it on known texts.

Comment by Evan R. Murphy on The ‘ petertodd’ phenomenon · 2023-04-20T01:08:39.834Z · LW · GW

New summary that's 'less wrong' (but still experimental)

I've been working on improving the summarizer script. Here's the summary auto-generated by the latest version, using better prompts and fixing some bugs:

The author investigates a phenomenon in GPT language models where the prompt "petertodd" generates bizarre and disturbing outputs, varying across different models. The text documents experiments with GPT-3, including hallucinations, transpositions, and word associations. Interestingly, "petertodd" is associated with character names from the Japanese RPG game, Puzzle & Dragons, and triggers themes such as entropy, destruction, domination, and power-seeking in generated content.

The text explores the origins of "glitch tokens" like "petertodd", which can result in unpredictable and often surreal outputs. This phenomenon is studied using various AI models, with the "petertodd" prompt producing outputs ranging from deity-like portrayals to embodiments of ego death and even world domination plans. It also delves into the connections between "petertodd" and other tokens, such as "Leilan", which is consistently associated with a Great Mother Goddess figure.

The article includes examples of AI-generated haikus, folktales, and character associations from different cultural contexts, highlighting the unpredictability and complexity of GPT-3's associations and outputs. The author also discusses the accidental discovery of the "Leilan" token and its negligent inclusion in the text corpus used to generate it.

In summary, the text provides a thorough exploration of the "petertodd" phenomenon, analyzing its implications and offering various examples of AI-generated content. Future posts aim to further analyze this phenomenon and its impact on AI language models.

I think it's a superior summary, no longer hallucinating narratives about language models in society and going into more detail on interesting parts of the post. It was unable to preserve ' petertodd' and ' Leilan' with single quotes and leading spaces from the OP, though. Also, I feel like it is clumsy how the summary brings up "Leilan" twice.

Send a reply if you see additional problems with this new summary or have other feedback on it.

Comment by Evan R. Murphy on The ‘ petertodd’ phenomenon · 2023-04-18T04:01:40.790Z · LW · GW

Great feedback, thanks! Looks like GPT-4 ran away with its imagination a bit. I'll try to fix that.

Comment by Evan R. Murphy on The ‘ petertodd’ phenomenon · 2023-04-17T22:44:40.995Z · LW · GW

Post summary (experimental)

Here's an experimental summary of this post I generated using gpt-3.5-turbo and gpt-4:

This article discusses the 'petertodd' phenomenon in GPT language models, where the token prompts the models to generate disturbing and violent language. While the cause of the phenomenon remains unexplained, the article explores its implications, as language models become increasingly prevalent in society. The author provides examples of the language generated by the models when prompted with 'petertodd', which vary between models. The article also discusses glitch tokens and their association with cryptocurrency and mythological themes, as well as their potential to prompt unusual responses. The text emphasizes the capabilities and limitations of AI in generating poetry and conversation. Overall, the article highlights the varied and unpredictable responses that can be generated when using 'petertodd' as a prompt in language models.

Let me know if anyone sees issues with this summary or has suggestions for making it better, as I'm trying to improve my summarizer script.

Comment by Evan R. Murphy on Who is testing AI Safety public outreach messaging? · 2023-04-17T17:33:37.477Z · LW · GW

Robert Miles has been making educational videos about AI existential risk and AI alignment for 4+ years. I've never spoken with him, but I'm sure he has learned a lot about how to communicate these ideas to a general audience in the process. I don't know that he has compiled his learnings on that anywhere, but it might be worth reaching out to him if you're looking to talk with someone who has experience with this.

Another resource - Vael Gates and Collin Burns shared some testing they did in Dec 2022 on outreach to ML researchers in What AI Safety Materials Do ML Researchers Find Compelling? However, I should emphasize this was testing outreach to ML researchers rather than the general public. So it may not be quite what you're looking for.

Comment by Evan R. Murphy on On AutoGPT · 2023-04-15T20:11:02.500Z · LW · GW

This is my experience so far too. However, now that it exists I don't think it will be long before people iteratively improve on it until it is quite capable.

Comment by Evan R. Murphy on Request to AGI organizations: Share your views on pausing AI progress · 2023-04-14T01:37:48.897Z · LW · GW

There are actually 3 signatories now claiming to work for OpenAI.

Comment by Evan R. Murphy on [New LW Feature] "Debates" · 2023-04-05T17:10:34.377Z · LW · GW

Is it a real feature or not? It was posted on April Fool's Day, but some are saying it's a real feature.

Comment by Evan R. Murphy on Keep Making AI Safety News · 2023-04-04T18:20:49.082Z · LW · GW

Great interview with Stuart Russell this past week on CNN's Smerconish: https://edition.cnn.com/videos/tech/2023/04/01/smr-experts-demand-pause-on-ai.cnn

Comment by Evan R. Murphy on Policy discussions follow strong contextualizing norms · 2023-04-03T21:49:40.224Z · LW · GW

"One of the most striking features of the “six-month Pause” plea was how intellectually limited and non-diverse — across fields — the signers were."
[...]
Nearly all reasonable discussions of AI x-risk have taken place in the peculiar cultural bubble of rationality and EA.

Some counter-examples that come to mind: Yoshua Bengio, Geoffrey Hinton, Stephen Hawking, Bill Gates, Steve Wozniak. Looking at the Pause Giant AI Experiments open letter now, I also see several signatories from fields like history and philosophy, and some signers identifying as teachers, priests, librarians, psychologists, etc.

(Not that I disagree broadly with your point that the discussion has been strongly weighted in the rationality and EA communities.)

Comment by Evan R. Murphy on Hooray for stepping out of the limelight · 2023-04-03T21:41:12.689Z · LW · GW

Relatedly, DeepMind was also the first of the leading AI labs to have any signatories on the Pause Giant AI Experiments open letter. They still have the most signatories among those labs, although OpenAI now has one. (To be sure, the letter still hasn't been signed by leadership of any of the top three labs.)

Comment by Evan R. Murphy on Widening Overton Window - Open Thread · 2023-04-03T21:18:04.299Z · LW · GW

The independent red-teaming organization ARC Evals, which OpenAI partnered with to evaluate GPT-4, seems to disagree with this. While they don't use the term "runaway intelligence", they have flagged similar dangerous capabilities that they think will possibly be in reach for the next models beyond GPT-4:

We think that, for systems more capable than Claude and GPT-4, we are now at the point where we need to check carefully that new models do not have sufficient capabilities to replicate autonomously or cause catastrophic harm – it’s no longer obvious that they won’t be able to.

Comment by Evan R. Murphy on FLI open letter: Pause giant AI experiments · 2023-03-29T20:40:51.056Z · LW · GW

and where [in the government] close to literally zero people understand how the systems work or the arguments for existential risks here.

Just want to flag that I'm pretty sure this isn't true anymore. At least a few important people in the US government (and possibly many) have now taken this course. My technical review of the course for AIGS Canada is still in progress, but my take so far is that it provides a good education on relevant aspects of AI for a non-technical audience and also focuses quite a bit on AI existential risk issues.

(I know this is only one point out of many you made, but I wanted to respond to it when I spotted it and had time.)

Comment by Evan R. Murphy on FLI open letter: Pause giant AI experiments · 2023-03-29T20:30:45.647Z · LW · GW

Yea, they made a mistake by not verifying signatures from the beginning. But they have course-corrected; see this notice FLI has now posted above the signatories list:

Screenshot

Signatories list paused due to high demand

Due to high demand we are still collecting signatures but pausing their appearance on the letter so that our vetting processes can catch up. Note also that the signatures near the top of the list are all independently and directly verified.

Comment by Evan R. Murphy on FLI open letter: Pause giant AI experiments · 2023-03-29T07:20:33.995Z · LW · GW

That might be true if nothing is actually done in the 6+ months to improve AI safety and governance. But the letter proposes:

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Comment by Evan R. Murphy on FLI open letter: Pause giant AI experiments · 2023-03-29T06:34:17.039Z · LW · GW

How is it badly timed? Everyone has been paying attention to AI since ChatGPT and GPT-4 came out, and lots of people are freaking out about it.

Comment by Evan R. Murphy on FLI open letter: Pause giant AI experiments · 2023-03-29T06:20:58.059Z · LW · GW

The letter says to pause for at least 6 months, not exactly 6 months.

So anyone who doesn't believe that protocols exist to ensure the safety of more capable AI systems shouldn't avoid signing the letter for that reason, because in that case the letter can be interpreted as supporting an indefinite pause.

Comment by Evan R. Murphy on FLI open letter: Pause giant AI experiments · 2023-03-29T05:54:52.581Z · LW · GW

The letter isn't perfect, but the main ask is worthwhile, as you said. Coordination is hard, the stakes are very high, and time may be short, so I think it is good to support these efforts if they are in the ballpark of something you agree with.

Comment by Evan R. Murphy on ChatGPT Plugins - The Beginning of the End · 2023-03-27T21:11:00.004Z · LW · GW

New post from Zvi on ChatGPT Plugins: https://www.lesswrong.com/posts/DcfPmgBk63cajiiqs/gpt-4-plugs-in