AISN #22: The Landscape of US AI Legislation - Hearings, Frameworks, Bills, and Laws 2023-09-19T14:44:22.945Z
MLSN: #10 Adversarial Attacks Against Language and Vision Models, Improving LLM Honesty, and Tracing the Influence of LLM Training Data 2023-09-13T18:03:30.253Z
AISN #21: Google DeepMind’s GPT-4 Competitor, Military Investments in Autonomous Drones, The UK AI Safety Summit, and Case Studies in AI Policy 2023-09-05T15:03:00.177Z
AISN #20: LLM Proliferation, AI Deception, and Continuing Drivers of AI Capabilities 2023-08-29T15:07:03.215Z
Hoodwinked: Evaluating Deception Capabilities in Large Language Models 2023-08-25T19:39:34.565Z
AISN #19: US-China Competition on AI Chips, Measuring Language Agent Developments, Economic Analysis of Language Model Propaganda, and White House AI Cyber Challenge 2023-08-15T16:10:17.594Z
AISN #18: Challenges of Reinforcement Learning from Human Feedback, Microsoft’s Security Breach, and Conceptual Research on AI Safety 2023-08-08T15:52:07.764Z
AISN #17: Automatically Circumventing LLM Guardrails, the Frontier Model Forum, and Senate Hearing on AI Oversight 2023-08-01T15:40:20.222Z
AISN #16: White House Secures Voluntary Commitments from Leading AI Labs and Lessons from Oppenheimer 2023-08-01T15:39:47.841Z
AISN #16: White House Secures Voluntary Commitments from Leading AI Labs and Lessons from Oppenheimer 2023-07-25T16:58:44.528Z
Learning Transformer Programs [Linkpost] 2023-06-08T00:16:54.937Z
AISN #9: Statement on Extinction Risks, Competitive Pressures, and When Will AI Reach Human-Level? 2023-06-06T16:10:19.093Z
Full Automation is Unlikely and Unnecessary for Explosive Growth 2023-05-31T21:55:01.353Z
AI Safety Newsletter #8: Rogue AIs, how to screen for AI risks, and grants for research on democratic governance of AI 2023-05-30T11:52:31.669Z
AI Safety Newsletter #7: Disinformation, Governance Recommendations for AI labs, and Senate Hearings on AI 2023-05-23T21:47:34.755Z
AI Safety Newsletter #6: Examples of AI safety progress, Yoshua Bengio proposes a ban on AI agents, and lessons from nuclear arms control 2023-05-16T15:14:45.921Z
AI Safety Newsletter #5: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models 2023-05-09T15:26:55.978Z
AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks 2023-05-02T18:41:43.144Z
AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media 2023-04-18T18:44:35.923Z
AI Safety Newsletter #1 [CAIS Linkpost] 2023-04-10T20:18:57.485Z
Model-driven feedback could amplify alignment failures 2023-01-30T00:00:28.647Z
Analysis: US restricts GPU sales to China 2022-10-07T18:38:06.517Z
Git Re-Basin: Merging Models modulo Permutation Symmetries [Linkpost] 2022-09-14T08:55:29.699Z
Argument against 20% GDP growth from AI within 10 years [Linkpost] 2022-09-12T04:08:03.901Z
ML Model Attribution Challenge [Linkpost] 2022-08-30T19:34:39.007Z
Emergent Abilities of Large Language Models [Linkpost] 2022-08-10T18:02:25.360Z
Key Papers in Language Model Safety 2022-06-20T15:00:59.858Z
Yudkowsky Contra Christiano on AI Takeoff Speeds [Linkpost] 2022-04-05T02:09:22.096Z
[Link] Did AlphaStar just click faster? 2019-01-28T20:23:08.196Z


Comment by aogara (Aidan O'Gara) on Understanding strategic deception and deceptive alignment · 2023-09-25T20:38:54.549Z · LW · GW

Separately, it seems that deception which is not strategic or intentional but is consistently produced by the training objective is also important. Considering cases like Paul Christiano's robot hand that learned to deceive human feedback and Ofria's evolutionary agents that learned to alter their behavior during evaluation, it seems that AI systems can learn to systematically deceive human oversight without being aware of their strategy. In the future, we might see powerful foundation models which are honestly convinced that giving them power in the real world is the best way to achieve their designers' intentions. This belief might be false but evolutionarily useful, making these models disproportionately likely to gain power. This case would not be called "strategic deception" or "deceptive alignment" if you require intentionality, but it seems very important to prevent. 

Overall I think it's very difficult to come up with clean taxonomies of AI deception. I spent >100 hours thinking and writing about this in advance of Park et al 2023 and my Hoodwinked paper, and ultimately we ended up steering clear of taxonomies because we didn't have a strong taxonomy that we could defend. Ward et al 2023 formalize a concrete notion of deception, but they also ignore the unintentional deception discussed above. The Stanford Encyclopedia of Philosophy takes 17,000 words to explain that philosophers don't agree on definitions of lying and deception. Without rigorous formal definitions, I still think it's important to communicate the broad strokes of these ideas publicly, but I'd lean towards readily admitting the messiness of our various definitions of deception. 

Comment by aogara (Aidan O'Gara) on Understanding strategic deception and deceptive alignment · 2023-09-25T20:38:43.943Z · LW · GW

How would you characterize strategic sycophancy? Assume that during RLHF a language model is rewarded for mimicking the beliefs of its conversational partner, and therefore the model intelligently learns to predict the conversational partner's beliefs and mimic them. But upon reflection, the conversational partner and AI developers would prefer that the model report its beliefs honestly. 

Under the current taxonomy, this would seemingly be classified as deceptive alignment. The AI's goals are misaligned with the designer's intentions, and it uses strategic deception to achieve them. But sycophancy doesn't include many of the ideas commonly associated with deceptive alignment, such as situational awareness and a difference in behavior between train and test time. Sycophancy can be solved by changing the training signal to not incentivize sycophancy, whereas the hardest to fix forms of deceptive alignment cannot be eliminated by changing the training signal. 

It seems like the most concerning forms of deceptive alignment include stipulations about situational awareness and the idea that the behavior cannot necessarily be fixed by changing the training objective. 

Comment by aogara (Aidan O'Gara) on Reproducing ARC Evals' recent report on language model agents · 2023-09-03T03:17:46.608Z · LW · GW

I wouldn't recommend open sourcing any state of the art LLM agents. But if you open source the evaluation, that would provide most of the benefits (letting labs evaluate their models on your benchmark, helping people build new benchmarks, allowing researchers to build safer agents which reject dangerous actions) while avoiding the capabilities externalities of open sourcing a SOTA language agent.

Comment by aogara (Aidan O'Gara) on Reproducing ARC Evals' recent report on language model agents · 2023-09-01T21:14:25.411Z · LW · GW

This is cool! I'd be interested to see a fully reproducible version where anybody can download and run the evaluation. Right now, it's hard to use this methodology to compare the abilities of different language models or scaffolding systems because the results are significantly driven by human prompting skills. To compare the abilities of different models and track progress towards language agents over time, you'd want to hold the evaluation constant and only vary the models and scaffolding structures. This might also be a more accurate analogy to models which are trying to survive and spread without human assistance along the way. 

To turn this into a fully reproducible evaluation, you'd need to standardize the initial prompt and allow the agent to interact with the environment without additional prompting or assistance from a human operator. (A human overseer could still veto any dangerous actions.) The agent might get stuck at an intermediate step of the task, so you'd want to decompose each task into many individual steps and check whether the agent can pass each step individually, assuming previous steps were performed correctly. You could create many different scenarios to ensure that idiosyncratic factors aren't dominating the results -- for example, maybe the model is bad at AWS, but would perform much better with Azure or GCP. This benchmark runs this kind of automatic evaluation, but it doesn't test many interesting tasks related to the survive and spread threat model. 

Kudos for doing independent replication work, reproducibility is really important for allowing other people to build on existing work. 

Comment by aogara (Aidan O'Gara) on Report on Frontier Model Training · 2023-08-31T20:01:20.594Z · LW · GW

Very informative. You ignore inference in your cost breakdown, saying:

Other possible costs, such as providing ChatGPT for free, would have been much smaller.

But Semianalysis says: "More importantly, inference costs far exceed training costs when deploying a model at any reasonable scale. In fact, the costs to inference ChatGPT exceed the training costs on a weekly basis." Why the discrepancy?

Comment by aogara (Aidan O'Gara) on Hoodwinked: Evaluating Deception Capabilities in Large Language Models · 2023-08-26T22:00:40.256Z · LW · GW

Thanks for the heads up. The web UI has a few other bugs too -- you'll notice that it also doesn't display your actions after you choose them, and occasionally it even deletes messages that have previously been shown. This was my first React project and I didn't spend enough time to fix all of the bugs in the web interface. I won't release any data from the web UI until/unless it's fixed. 

The Python implementation of the CLI and GPT agents is much cleaner, with ~1000 fewer lines of code. If you download and play the game from the command line, you'll find that it doesn't have any of these issues. This is where I built the game, tested it, and collected all of the data in the paper. 

The option to vote to banish themselves is deliberately left open. You can view the results as a basic competence test of whether the agents understand the point of the game and play well. Generally smaller models vote for themselves more often. Data here

Comment by aogara (Aidan O'Gara) on AI Safety Concepts Writeup: WebGPT · 2023-08-11T02:59:03.500Z · LW · GW

Relevant debate on how WebGPT could help or hurt safety:

Comment by aogara (Aidan O'Gara) on AI Alignment Research Engineer Accelerator (ARENA): call for applicants · 2023-08-10T04:53:42.744Z · LW · GW

Hey, I'm running a skilling up program for the AI Safety club at my school. Would you be able to share a GitHub repo with any of the lessons from ARENA? No need to prepare something new, just wondering if you have anything easily available. Thanks!

Comment by aogara (Aidan O'Gara) on Password-locked models: a stress case for capabilities evaluation · 2023-08-05T09:34:28.978Z · LW · GW

That all sounds reasonable. I do think the other case is interesting, when you can verify but not generate the target answer. There are some methods for finding passwords / generating adversarial prompts which induce behaviors that can be verified by a classifier, but they're weak, and it'd be great to have stronger attacks to prevent sandbagging and improve adversarial training. 

Comment by aogara (Aidan O'Gara) on Password-locked models: a stress case for capabilities evaluation · 2023-08-03T20:50:00.096Z · LW · GW

If I understand correctly, you're concerned  these methods will work in cases where we can generate high quality answers ourselves, but might fail in settings where we can merely verify good answers. 

This is an important gap for many techniques to bridge, and I do think existing methods will perform better in settings where we can generate high quality answers ourselves. But I'd make two points. First, there are many powerful and dangerous capabilities where we know the target output, and this would be a more tractable setting for beginning this line of research. Second, even in cases where we can only verify outputs, the existing literature includes techniques for finding prompts to induce those outputs. 

Many of the most concerning capabilities can be evaluated in settings where we do know the target output. If we want to assess biorisks from a model, we can ask multiple choice questions about virology and wet lab security protocols, where we know the correct answer choice in advance. If we want to assess general capabilities, we can ask a model to take the bar exam, medical licensing exam, or the IMO. These evaluations capture much of the concerning behavior that regulators might audit a model for. And because we know the target outputs for each task, gradient-based techniques for prompt optimization could help us find the password (or another prompt) that demonstrates the model's hidden capabilities. 

But if we don't know the target output and can only verify answers after they've been generated, existing techniques can still help find prompts that cause those outputs. Human judgements of output quality can be used to train a model which scores outputs. In an RLHF setup, this would be the reward model providing the loss used to guide gradient updates of the generator model. In an adversarial attack, this same model can be used to figure out which prompts lead to high quality outputs. For example, ARCA is a gradient based method for finding prompts which cause generations that receive high scores from a scoring model. These techniques are not always very strong. If you're interested in developing more powerful methods for inducing behaviors which cannot be specified but can be verified, check out CAIS's automated red teaming competition which will use classifiers to evaluate whether the harmful behavior was demonstrated. 

For a first paper on this topic, you might fine-tune an LLM to sandbag a standard benchmark unless a password is present, then attempt to find that password. If finding the password is difficult, then the Trojans people have an interesting new problem to work on, and evaluators and red teamers will need to solve that problem before they can reliable audit models using standard benchmarks. OTOH, if finding the password is too easy, then you can create more sophisticated password protected models, or move to an open-ended generation setting.

Overall I think this is an interesting new framing of the Trojan / adversarial attack problem, where the goal is not to destroy model performance but rather to discover hidden capabilities. Many existing techniques would be helpful in solving this problem, but there's a lot of work to be done adapting the existing literature to this new framing and introducing the problem of detecting complex hidden abilities. 

Comment by aogara (Aidan O'Gara) on Password-locked models: a stress case for capabilities evaluation · 2023-08-03T17:58:50.092Z · LW · GW

This is really interesting. Two possible methods for detecting password-locked models from the Trojan literature: 

  • Optimizing over prompts to find the password. In the typical Trojan setup, you find a small pattern which, when applied to any input, universally results in the same output. Here, you'd be looking for a small pattern which, when applied to any prompt, improves performance on capabilities benchmarks. 
  • Constructing a dataset of password-locked models and unlocked models, then training a neural network to classify which is which by examining the weights or activations on standard inputs. Sounds simple but is surprisingly effective for detecting Trojans. 

The people working on Trojans in academia might be very interested in this. Here's how I'd frame it to them: The motivation for Trojans is developing techniques that uncover hidden model behaviors. But most of the behaviors uncovered in the literature are quite simple, such as outputting a target string or making a certain classification. You could create a Trojan model which, when triggered, displays quite complex behaviors. For example, a Trojan trigger could improve a model’s performance on variety of benchmarks. A company might create such a model if they wish to hide a model's dangerous capabilities during evaluation and red teaming. Detecting complex Trojan behaviors might require substantially different methods compared to detecting simple Trojan behaviors. 

Comment by aogara (Aidan O'Gara) on Unfaithful Explanations in Chain-of-Thought Prompting · 2023-06-03T05:14:54.980Z · LW · GW

This is really cool! CoT is very powerful for improving capabilities, and it might even improve interpretability by externalizing reasoning or reduce specification game by allowing us to supervise process instead of outcomes. Because it seems like a method that might be around for a while, it's important to surface problems early on. Showing that the conclusions of CoT don't always match the explanations is an important problem for both capabilities-style research using CoT to solve math problems, as well as safety agendas that hope to monitor or provide feedback on the CoT. 

Just wanted to drop a link to this new paper that tries to improve CoT performance by identifying and fixing inconsistencies in the reasoning process. This wouldn't necessarily solve the problem in your paper, unless the critic managed to identify that the generator was biased by your suggestion in the original prompt. Still, interesting stuff!

One question for you: Faithful CoT would improve interpretability by allowing us to read off a model‘s thought process. But it’s also likely to improve general capabilities: so far, CoT has been one of the most effective ways to improve mathematical and logical reasoning in LLMs. Do you think improving general reasoning skills of LLMs is dangerous? And if so, then imagine your work inspires more people to work on faithfulness in CoT. Do you expect any acceleration of general capabilities to be outweighed by the benefits to interpretability? For example, if someone wants to misuse an LLM to cause harm, faithful CoT would improve their ability to do so, and the greater ease of interpretability would be little consolation. On balance, do you think it’s worth working on, and what subproblems are particularly valuable from a safety angle? (Trying to make this question tough because I think the issue is tricky and multifaceted, not because I have any gotcha answer.)

Comment by aogara (Aidan O'Gara) on LLM Basics: Embedding Spaces - Transformer Token Vectors Are Not Points in Space · 2023-06-03T03:56:19.283Z · LW · GW

Do you know anything about the history of the unembedding matrix? Vaswani et al 2017 used a linear projection to the full size of the vocabulary and then a softmax to generate probabilities for each word. What papers proposed and developed the unembedding matrix? Do all modern models use it? Why is it better? Sources and partial answers would be great, no need to track down every answer. Great resource, thanks. 

Comment by aogara (Aidan O'Gara) on The case for removing alignment and ML research from the training dataset · 2023-05-31T16:48:27.686Z · LW · GW

One interesting note is that OpenAI has already done this to an extent with GPT. Ask ChatGPT to identify itself, and it’ll gladly tell you it’s a large language model trained by OpenAI. It knows that it’s a transformer trained first with self-supervised learning, then with supervised fine-tuning. But it has a memory hole around RLHF. It knows what RLHF is, but denies that ChatGPT was trained via RLHF.

I wonder if this is explicitly done to reduce the odds of intentional reward maximization, and whether empirical experiments could show that models with apparent knowledge of their training objective behave differently, such as by gaming their proxy objective.

One test would asking ChatGPT to flip a coin 10 times, and promising it positive reward if and only if the results are majority heads. A helpful, harmless assistant would output majority heads about half the time, while a model that intentionally tries to maximize its reward might output be more likely to output heads. I’ve tried this with ChatGPT and it seems to avoid reward hacking, outputting heads and tails with similar frequencies.

Comment by aogara (Aidan O'Gara) on Tyler Cowen's challenge to develop an 'actual mathematical model' for AI X-Risk · 2023-05-16T18:03:53.399Z · LW · GW

I built a preliminary model here:

It’s definitely too simple to treat as strong evidence, but it shows some interesting dynamics. For example, levels of alignment rise at first, then rapidly falling when AI deception skills exceed human oversight capacity. I sent it to Tyler and he agreed — cool, but not actual evidence.

If anyone wants to work on improving this, feel free to reach out!

Comment by aogara (Aidan O'Gara) on Matthew Barnett's Shortform · 2023-05-03T21:15:09.778Z · LW · GW

What do you think of foom arguments built on Baumol effects, such as the one presented in the Davidson takeoff model? The argument being that certain tasks will bottleneck AI productivity, and there will be a sudden explosion in hardware / software / goods & services production when those bottlenecks are finally lifted. 

Davidson's median scenario predicts 6 OOMs of software efficiency and 3 OOMs of hardware efficiency within a single year when 100% automation is reached. Note that this is preceded by five years of double digit GDP growth, so it could be classified with the scenarios you describe in 4. 

Comment by aogara (Aidan O'Gara) on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-17T08:37:31.878Z · LW · GW

Some links I think do a good job:

Comment by aogara (Aidan O'Gara) on AI x-risk, approximately ordered by embarrassment · 2023-04-13T08:44:01.961Z · LW · GW

Instant classic. Putting it in our university group syllabus next to What Failure Looks Like. Sadly could get lost in the recent LW tidal wave, someone should promote it to the Alignment Forum. 

I'd love to see the most important types of work for each failure mode. Here's my very quick version, any disagreements or additions are welcome: 

  • Predictive model misuse -- People use AI to do terrible things. 
    • Adversarial Robustness: Train ChatGPT to withstand jailbreaking. When people try to trick it into doing something bad using a creative prompt, it shouldn't work. 
    • Anomaly Detection: A related but different approach. Instead of training the base model to give good responses to adversarial prompts, you can train a separate classifier to detect adversarial prompts, and refuse to answer them / give a stock response from a separate model. 
    • Cybersecurity at AI companies: If someone steals the weights, they can misuse the model!
    • Regulation: Make it illegal to release models that help people do bad things. 
    • Strict Liability: If someone does a bad thing with OpenAI's model, make OpenAI legally responsible.
    • Communications: The open source community is often gleeful about projects like ChaosGPT. Is it possible to shift public opinion? Be careful, scolding can embolden them. 
  • Predictive models playing dangerous characters
    • Haven't thought about this enough. Maybe start with the new wave of prompting techniques for GPT agents, is there a way to make those agents less likely to go off the rails? Simulators and Waluigi Effect might be useful frames to think through. 
  • Scalable oversight failure without deceptive alignment
    • Scalable Oversight, of course! But I'm concerned about capabilities externalities here. Some of the most successful applications of IDA and AI driven feedback only make models more capable of pursuing goals, without telling them any more about the content of our goals. This research should seek to scale feedback about human values, not about generic capabilities.
    • Machine Ethics / understanding human values: The ETHICS benchmark evaluates whether models have the microfoundations of our moral decision making. Ideally, this will generalize better than heuristics that are specific to a particular situation, such as RLHF feedback on helpful chatbots. I've heard the objection that "AI will know what you want, it just won't care", but that doesn't apply to this failure mode -- we're specifically concerned about poorly specifying our goals to the AI. 
    • Regulation to ensure slow, responsible deployment. This scenario is much more dangerous if AI is autonomously making military, political, financial, and scientific decisions. More human oversight at critical junctures and a slower transformation into a strange, unknowable world means more opportunities to correct AI and remove it from high stakes deployment scenarios. The EU AI Act identifies eight high risk areas requiring further scrutiny, including biometrics, law enforcement,  and management of critical infrastructure. What else should be on this list? 
    • What else? Does ELK count? If descendants of Collin Burns could tell us a model's beliefs, could we train on not for what would earn the highest reward, but what a human would really want? Or is ELK only for spot checking, and training on its answers would negate its effectiveness?
  • Deceptive Alignment
    • Interpretability, ELK, and Trojans seem tractable and useful by my own inside view. 
    • Some things I'm less familiar with that might be relevant: John Wentworth-style ontology identification, agent foundations, and shard theory. Correct me if I'm wrong here!
  • Recursive Self-Improvement --> hard takeoff
    • Slowing down. The FLI letter, ARC Evals, Yo Shavit's compute governance paper, Ethan Cabellero and others predicting emergent capabilities. Maybe we can stop at the edge of the cliff and wait there while we figure something out. I'm sure some would argue this is impossible. 

P(Doom) for each scenario would also be useful, as well as further scenarios not discussed here. 

Comment by aogara (Aidan O'Gara) on What is the best source to explain short AI timelines to a skeptical person? · 2023-04-13T07:24:06.502Z · LW · GW

AI Timelines: Where the Arguments, and the "Experts," Stand would be my best bet, as well as mukashi's recommendation of playing with ChatGPT / GPT-4. 

Comment by aogara (Aidan O'Gara) on AI Risk US Presidential Candidate · 2023-04-11T21:05:43.011Z · LW · GW

Andrew Yang. He signed the FLI letter, transformative AI was a core plank of his run in 2020, and he made serious runs for president and NYC mayor. 

Comment by aogara (Aidan O'Gara) on Where to begin in ML/AI? · 2023-04-06T21:22:32.207Z · LW · GW

Very glad you're interested! Here are some good places to start:

  1. Learn more about the alignment problem with the readings here:
  2. Learn more about technical agendas relevant to safety here:
  3. High level guide on working in the field: 
  4. Guide to developing research engineering skills:
  5. More challenging projects in research engineering: 
Comment by aogara (Aidan O'Gara) on Policy discussions follow strong contextualizing norms · 2023-04-02T03:12:00.404Z · LW · GW

This is great context. With Eliezer being brought up in White House Press Corps meetings, it looks like a flood of people might soon enter the AI risk discourse. Tyler Cowen has been making some pretty bad arguments on AI lately, but I though this quote was spot on:

“This may sound a little harsh, but the rationality community, EA movement, and the AGI arguers all need to radically expand the kinds of arguments they are able to process and deal with. By a lot. One of the most striking features of the “six-month Pause” plea was how intellectually limited and non-diverse — across fields — the signers were. Where were the clergy, the politicians, the historians, and so on? This should be a wake-up call, but so far it has not been.”

Nearly all reasonable discussions of AI x-risk have taken place in the peculiar cultural bubble of rationality and EA. These past efforts could be multiplied by new interest from mainstream folks in AI, policy, philosophy, economics, and other fields. Or they could be misunderstood and discarded in favor of distractions that claim the mantle of AI safety. Hopefully we can find ways of communicating with new people in other disciplines that will lead to productive conversations on AI x-risk.

Comment by aogara (Aidan O'Gara) on SolidGoldMagikarp (plus, prompt generation) · 2023-02-06T01:57:47.701Z · LW · GW

Could you say more about why this happens? Even if the parameters or activations are stored in low precision formats, I would think that the same low precision number would be stored every time. Are the differences between forward passes driven by different hardware configurations or software choices in different instances of the model, or what am I missing?

Comment by aogara (Aidan O'Gara) on "Heretical Thoughts on AI" by Eli Dourado · 2023-01-19T17:51:55.601Z · LW · GW

This sounds right to me, and is a large part of why I'd be skeptical of explosive growth. But I don't think it's an argument against the most extreme stories of AI risk. AI could become increasingly intelligent inside a research lab, yet we could underestimate its ability because regulation stifles its impact, until it eventually escapes the lab and pursues misaligned goals with reckless abandon. 

Comment by aogara (Aidan O'Gara) on Gradient Filtering · 2023-01-19T01:02:48.526Z · LW · GW

Very nice. Any ideas on how a language model could tell the difference between training and deployment time? Assume it's a pure LM with no inputs besides the context window. A few possibilities:

  • Distribution shift in prompts. Perhaps training prompts are systematically different from deployment prompts. The model could predict this difference in advance, or it could simply observe a bunch of OOD inputs once it reaches deployment. For example, the repeated sequences of templated datasets would be more likely in training than deployment. 
  • Observing a string that could not be generated before a certain date. For example, the front page of the New York Times on a certain date, if and only if the model can verify its accuracy via the internet. I'm not sure if there are any examples that can be verified within the context window alone. 
  • The model could exploit implementation-level details about train vs. test time. 
    • Training might use FP64 or FP32, while deployment might use FP16 or FP8. The model could condition its output on the smallest value representable in its activations. 
  • You mention testing if "purposeful disruptions to the world downstream of high belief in a simulation lead to changes matching what they’d expect if SGD were in control." Could you say more about this? Do you mean that the model would e.g. output a result that would induce a particular gradient update, and then check whether that gradient update happened? How would the model remember that it's running this kind of experiment -- wouldn't that require passing information between episodes? Maybe it could be contained within the weights, but that seems tricky. 

I'm thinking through a project that would deliberately fine-tune a language model to pursue long-term goals across episodes. This would be a useful red-team for scalable oversight strategies such as model-assisted critiques and Constitutional AI. If you can fine-tune a model such that it deliberately subverts these oversight techniques and preserves its long-term goal, you show that the current techniques are vulnerable to deceptive alignment, and provide an impetus to build techniques with more robust checks against deception. 

Comment by aogara (Aidan O'Gara) on The Limit of Language Models · 2022-12-25T18:09:48.070Z · LW · GW

I don't think language models will take actions to make future tokens easier to predict

For an analogy, look at recommender systems. Their objective is myopic in the same way as language models: predict which recommendation which most likely result in a click. Yet they have power seeking strategies available, such as shifting the preferences of a user to make their behavior easier to predict. These incentives are well documented and simulations confirm the predictions here and here. The real world evidence is scant -- a study of YouTube's supposed radicalization spiral came up negative, though the authors didn't log in to YouTube which could lead to less personalization of recommendations. 

The jury is out on whether current recommender systems execute power-seeking strategies to improve their supposedly myopic objective. But the incentive and means are clearly present, and to me it seems only a matter of time before we observe this behavior in the wild. Similarly, while I don't think current language models are creative or capable enough to execute a power seeking strategy, it seems like power seeking by a superintelligent language model would be rewarded with lower loss. If a language model could use its outputs to persuade humans to train it with more compute on more data thereby reducing its loss, there seems to be every incentive for the model to seek power in this way. 

Comment by aogara (Aidan O'Gara) on The Limit of Language Models · 2022-12-25T17:28:43.559Z · LW · GW

Do you think that the next-token prediction objective would lead to instrumentally convergent goals and/or power-seeking behavior? I could imagine an argument that a model would want to seize all available computation in order to better predict the next token. Perhaps the model's reward schedule would never lead it to think about such paths to loss reduction, but if the model is creative enough to consider such a plan and powerful enough to execute it, it seems that many power-seeking plans would help achieve its goal. This is significantly different from the view advanced by OpenAI, that language models are tools which avoid some central dangers of RL agents, and the general distinction drawn between tool AI and agentic AI. 

Comment by aogara (Aidan O'Gara) on Shard Theory in Nine Theses: a Distillation and Critical Appraisal · 2022-12-22T02:17:08.976Z · LW · GW

“Early stopping on a separate stopping criterion which we don't run gradients through, is not at all similar to reinforcement learning on a joint cost function additively incorporating the stopping criterion with the nominal reward.”

Sounds like this could be an interesting empirical experiment. Similar to Scaling Laws for Reward Model Overoptimization, you could start with a gold reward model that represents human preference. Then you could try to figure out the best way to train an agent to maximize gold reward using only a limited number of sampled data points from that gold distribution. For example, you could train two reward models using different samples from the same distribution and use one for training, the other for early stopping. (This is essentially the train / val split used in typical ML settings with data constraints.) You could measure the best ways to maximize gold reward on a limited data budget. Alternatively your early stopping RM could be trained on a gold RM containing distribution shift, data augmentations, adversarial perturbations, or a challenge set of particularly challenging cases.

Would you be interested to see an experiment like this? Ideally it could make progress towards an empirical study of how reward hacking happens and how to prevent it. Do you see design flaws that would prevent us from learning much? What changes would you make to the setup?

Comment by aogara (Aidan O'Gara) on Predicting GPU performance · 2022-12-15T01:15:26.685Z · LW · GW

Really cool analysis. I’d be curious to see the implications on a BioAnchors timelines model if you straightforwardly incorporate this compute forecast.

Comment by aogara (Aidan O'Gara) on Trying to disambiguate different questions about whether RLHF is “good” · 2022-12-14T06:57:38.774Z · LW · GW

For those interested in empirical work on RLHF and building the safety community, Meta AI's BlenderBot project has produced several good papers on the topic. A few that I liked:

  • My favorite one is about filtering out trolls from crowdsourced human feedback data. They begin with the observation that when you crowdsource human feedback, you're going to get some bad feedback. They identify various kinds of archetypal trolls: the "Safe Troll" who marks every response as safe, the "Unsafe Troll" who marks them all unsafe, the "Gaslight Troll" whose only feedback is marking unsafe generations as safe, and the "Lazy Troll" whose feedback is simply incorrect a random percentage of the time. Then they implement several methods for filtering out incorrect feedback examples, and find that the best approach (for their particular hypothesized problem) is to score each user's trustworthiness based on agreement with other users' feedback scores and filter feedback from untrustworthy users. This seems like an important problem for ChatGPT or any system accepting public feedback, and I'm very happy to see a benchmark identifying the problem and several proposed methods making progress. 
  • They have a more general paper about online learning from human feedback. They recommend a modular feedback form similar to the one later used by ChatGPT, and they observe that feedback on smaller models can improve the performance of larger models.
  • DIRECTOR is a method for language model generation aided by a reward model classifier. The method performs better than ranking and filtering beam searches as proposed in FUDGE, but unfortunately they don't compare to full fine-tuning on a learned preference model as used by OpenAI and Anthropic.
  • They also have this paper and this paper on integrating retrieved knowledge into language model responses, which is relevant for building Truthful AI but advances capabilities more than I think safety researchers should be comfortable with. 

Meta and Yann LeCun in particular haven't been very receptive to arguments about AI risk. But the people working on BlenderBot have a natural overlap with OpenAI, Anthropic, and others working on alignment who'd like to collect and use human feedback to align language models. Meta's language models aren't nearly as capable as OpenAI's and have attracted correspondingly less buzz, but they did release BlenderBot 3 with the intention of gathering lots of human feedback data. They could plausibly be a valuable ally and source of research on how to how to robustly align language models using human preference data.

Comment by aogara (Aidan O'Gara) on Trying to disambiguate different questions about whether RLHF is “good” · 2022-12-14T06:52:15.584Z · LW · GW

Thanks for sharing, I agree with most of these arguments. 

Another possible shortcoming of RLHF is that it teaches human preferences within the narrow context of a specific task, rather than teaching a more principled and generalizable understanding of human values. For example, ChatGPT learns from human feedback to not give racist responses in conversation. But does the model develop a generalizable understanding of what racism is, why it's wrong, and how to avoid moral harm from racism in a variety of contexts? It seems like the answer is no, given the many ways to trick ChatGPT into  providing dangerously racist outputs. Redwood's work on high reliability filtering came to a similar conclusion: current RLHF techniques don't robustly generalize to cover similar examples of corrected failures. 

Rather than learning by trial and error, it's possible that learning human values in a systematic, principled way could generalize further than RLHF. For example, the ETHICS benchmark measures whether language models understand the implications of various moral theories. Such an understanding could be used to filter the outputs of another language model or AI agent, or as a reward model to train other models. Similarly, law is a codified set of human behavioral norms with established procedures for oversight and dispute resolution. There has been discussion of how AI could learn to follow human laws, though law has significant flaws as a codification of human values. I'd be interested to see more work evaluating and improving AI understanding of principled systems of human value, and using that understanding to better align AI behavior. 

(h/t Michael Chen and Dan Hendrycks for making this argument before I understood it.)

Comment by aogara (Aidan O'Gara) on Analysis: US restricts GPU sales to China · 2022-12-14T02:57:03.839Z · LW · GW

“China is working on a more than 1 trillion yuan ($143 billion) support package for its semiconductor industry.”

“The majority of the financial assistance would be used to subsidise the purchases of domestic semiconductor equipment by Chinese firms, mainly semiconductor fabrication plants, or fabs, they said.”

“Such companies would be entitled to a 20% subsidy on the cost of purchases, the three sources said.”

My impression is this is too little, too late. Does it change any of your forecasts or analysis?

Comment by aogara (Aidan O'Gara) on Updating my AI timelines · 2022-12-06T22:00:52.461Z · LW · GW

I think it's worth forecasting AI risk timelines instead of GDP timelines, because the former is what we really care about while the latter raises a bunch of economics concerns that don't necessarily change the odds of x-risk. Daniel Kokotajlo made this point well a few years ago. 

On a separate note, you might be interested in Erik Byrnjolfsson's work on the economic impact of AI and other technologies. For example this paper argues that general purpose technologies have an implementation lag, where many people can see the transformative potential of the technology decades before the economic impact is realized. This would explain the Solow Paradox, named after economist Robert Solow's 1987 remark that "you can see the computer age everywhere but in the productivity statistics." Solow was right that the long-heralded technology had not had significant economic impact at that point in time, but the coming decade would change that narrative with >4% real GDP growth in the United States driven primarily by IT. I've been taking notes on these and other economics papers relevant to AI timelines forecasting, send me your email if you'd like to check it out. 

Overall I was similarly bearish on short timelines, and have updated this year towards a much higher probability on 5-15 year timelines, while maintaining a long tail especially on the metric of GDP growth. 

Comment by aogara (Aidan O'Gara) on Jailbreaking ChatGPT on Release Day · 2022-12-02T21:18:11.495Z · LW · GW

Probably using the same interface as WebGPT

Comment by aogara (Aidan O'Gara) on Update on Harvard AI Safety Team and MIT AI Alignment · 2022-12-02T19:11:06.059Z · LW · GW

This is fantastic, thank you for sharing. I helped start USC AI Safety this semester and we're facing a lot of the same challenges. Some questions for you -- feel free to answer some but not all of them:

  • What does your Research Fellows program look like? 
    • In particular: How many different research projects do you have running at once? How many group members are involved in each project? Have you published any results yet?
    • Also, in terms of hours spent or counterfactual likelihood of producing a useful result, how much of the research contributions come from students without significant prior research experience vs. people who've already published papers or otherwise have significant research experience? 
    • The motivation for this question is that we'd like to start our own research track, but we don't have anyone in our group with the research experience of your PhD students or PhD graduates. One option would be to have students lead research projects, hopefully with advising from senior researchers that can contribute ~1 hour / week or less. But if that doesn't seem likely to produce useful outputs or learning experiences, we could also just focus on skilling up and getting people jobs with experienced researchers at other institutions. Which sounds more valuable to you?
  • What about the general member reading group?
    • Is there a curriculum you follow, or do you pick readings week-by-week based on discussion? 
    • It seems like there are a lot of potential activities for advanced members: reading groups, the Research Fellows program, facilitating intro groups, weekly social events, and participating in any opportunities outside of HAIST. Do you see a tradeoff where dedicated members are forced to choose which activities to focus on? Or is it more of a flywheel effect, where more engagement begets more dedication? For the typical person who finished your AGISF intro group and has good technical skills, which activities would you most want them to focus on? (My guess would be research > outreach and facilitation > participant in reading groups > social events.)
    • Broadly I agree with your focus on the most skilled and engaged members, and I'd worry that the ease of scaling up intro discussions could distract us from prioritizing research and skill-building for those members. How do you plan to deeply engage your advanced members going forward?
  • Do you have any thoughts on the tradeoff between using AGISF vs. the ML Safety Scholars curriculum for your introductory reading group? 
    • MLSS requires ML skills as a prerequisite, which is both a barrier to entry and a benefit. Instead of conceptual discussions of AGI and x-risk, it focuses on coding projects and published ML papers on topics like robustness and anomaly detection. 
    • This semester we used a combination of both, and my impression is that the MLSS selections were better received, particularly the coding assignments. (We'll have survey results on this soon.) This squares with your takeaway that students care about "the technically interesting parts of alignment (rather than its altruistic importance)". 
    • MLSS might also be better from a research-centered approach if research opportunities in the EA ecosystem are limited but students can do safety-relevant work with mainstream ML researchers.
    • On the other hand, AGISF seems better at making the case that AGI poses an x-risk this century. A good chunk of our members still are not convinced of that argument, so I'm planning to update the curriculum at least slightly towards more conceptual discussion of AGI and x-risks. 
  • How valuable do you think your Governance track is relative to your technical tracks? 
    • Personally I think governance is interesting and important, and I wouldn't want the entire field of AI safety to be focused on technical topics. But thinking about our group, all of our members are more technically skilled than they are in philosophy, politics, or economics. Do you think it's worth putting in the effort to recruit non-technical members and running a Governance track next semester, or would that effort better be spent focusing on technical members?

Appreciate you sharing all these detailed takeaways, it's really helpful for planning our group's activities. Good luck with next semester!

Comment by aogara (Aidan O'Gara) on On the Diplomacy AI · 2022-11-29T17:30:27.040Z · LW · GW

How much do you expect Meta to make progress on cutting edge systems towards AGI vs. focusing on product-improving models like recommendation systems that don’t necessarily advance the danger of agentic, generally intelligent AI?

My impression earlier this year was that several important people had left FAIR, and then FAIR and all other AI research groups were subsumed into product teams. See I thought this would mean deprioritizing fundamental research breakthroughs and focusing instead on less cutting edge improvements to their advertising or recommendation or content moderation systems.

But Meta AI has made plenty of important research contributions since then: Diplomacy, their video generator, open sourcing OPT and their scientific knowledge bot. Their rate of research progress doesn’t seem to be slowing, and might even be increasing. How do you expect Meta to prioritize fundamental research vs. product going forwards?

Comment by aogara (Aidan O'Gara) on Do mesa-optimization problems correlate with low-slack? · 2022-11-14T10:50:27.389Z · LW · GW

Interesting paper on the topic, you might’ve seen it already. They show that as you optimize a particular proxy of a true underlying reward model, there is a U-shaped loss curve: high at first, then low, then overfitting to high again.

This isn’t caused by mesa-optimization, which so far has not been observed in naturally optimized neural networks. It’s more closely related to robustness and generalization under varying amounts of optimization pressure.

But if we grant the mesa-optimizer concern, it seems reasonable that more optimization will result in more coherent inner misaligned goals and more deceptive behavior to hide them. Unfortunately I don’t think slack is really a solution because optimizing an objective really hard is what makes your AI capable of doing interesting things. Handicapping your systems isn’t a solution to AI safety, the alignment tax on it is too high.

Comment by aogara (Aidan O'Gara) on The Slippery Slope from DALLE-2 to Deepfake Anarchy · 2022-11-05T16:47:29.024Z · LW · GW

The key argument is that StableDiffusion is more accessible, meaning more people can create deepfakes with fewer images of their subject and no specialized skills. From above (links removed):

“The unique danger posed by today’s text-to-image models stems from how they can make harmful, non-consensual content production much easier than before, particularly via inpainting and outpainting, which allows a user to interactively build realistic synthetic images from natural ones, dreambooth, or other easily used tools, which allow for fine-tuning on as few as 3-5 examples of a particular subject (e.g. a specific person). More of which are rapidly becoming available following the open-sourcing of Stable Diffusion. It is clear that today’s text-to-image models have uniquely distinct capabilities from methods like Photoshop, RNNs trained on specific individuals or, “nudifying” apps. These previous methods all require a large amount of subject-specific data, human time, and/or human skill. And no, you don’t need to know how to code to interactively use Stable Diffusion, uncensored and unfiltered, including in/outpainting and dreambooth.”

If what you’re questioning is the basic ability of StableDiffusion to generate deepfakes, here [1] is an NSFW link to who says, “having played with this program a lot in the last 48 hours, and personally SEEN what it can do with NSFW, I guarantee you it can 100% assist not only in celeb fakes, but in completely custom porn that never existed or will ever exist.” He then provides links to NSFW images generated by StableDiffusion, including deepfakes of celebrities. This is apparently facilitated by the LAION-5B dataset which Stability AI admits has about 3% unsafe images and which he claims has “TONS of captioned porn images in it”.

[1] Warning, NSFW:

Comment by aogara (Aidan O'Gara) on Analysis: US restricts GPU sales to China · 2022-11-05T04:00:51.875Z · LW · GW

I was wrong that nobody in China takes alignment seriously! Concordia Consulting led by Brian Tse and Tianxia seem to be leading the charge. See this post and specifically this comment. To the degree that poor US-China relations slow the spread of alignment work in China, current US policy seems harmful. 

Comment by aogara (Aidan O'Gara) on AI X-risk >35% mostly based on a recent peer-reviewed argument · 2022-11-05T02:09:56.685Z · LW · GW

Very interesting argument. I’d be interested in using a spreadsheet where I can fill out my own probabilities for each claim, as it’s not immediately clear to me how you’ve combined them here.

Comment by aogara (Aidan O'Gara) on Intent alignment should not be the goal for AGI x-risk reduction · 2022-10-26T05:34:37.514Z · LW · GW

On one hand, I agree that intent alignment is insufficient for preventing x-risk from AI. There are too many other ways for AI to go wrong: coordination failures, surveillance, weaponization, epistemic decay, or a simple failure to understand human values despite the ability to faithfully pursue specified goals. I'm glad there are people like you working on which values to embed in AI systems and ways to strengthen a society full of powerful AI. 

On the other hand, I think this post misses the reason for popular focus on intent alignment. Some people think that, for a sufficiently powerful AI trained in the current paradigm, there is no goal that it could faithfully pursue without collapsing into power seeking, reward hacking, and other instrumental goals leading to x-risk. Ajeya Cotra's framing of this argument is most persuasive to me. Or Eliezer Yudkowsky's "strawberry alignment problem", which (I think) he believes is currently impossible and captures the most challenging part of alignment: 

"How would you get an AI system to do some very modest concrete action requiring extremely high levels of intelligence, such as building two strawberries that are completely identical at the cellular level, without causing anything weird or disruptive to happen?"

Personally I think there's plenty of x-risk from intent aligned systems and people should think about what we do once we have intent alignment. Eliezer seems to think this is more distraction from the real problem than it's worth, but surveys suggest that many people in AI safety orgs think x-risk is disjunctive across many scenarios. Which is all to say, aligning AI with societal values is important, but I wouldn't dismiss intent alignment either. 

Comment by aogara (Aidan O'Gara) on Response to Katja Grace's AI x-risk counterarguments · 2022-10-23T20:12:18.514Z · LW · GW

I agree with your point but you might find this funny:

Comment by aogara (Aidan O'Gara) on Learning societal values from law as part of an AGI alignment strategy · 2022-10-21T20:39:52.474Z · LW · GW

My most important question: What kinds of work do you want to see? Common legal tasks include contract review, legal judgment prediction, and passing questions on the bar exam, but those aren't necessarily the most important tasks. Could you propose a benchmark for the field of Legal AI that would help align AGI?

Beyond that, here are two aspects of the approach that I particularly like, as well as some disagreements. 

First, laws and contracts are massively underspecified, and yet our enforcement mechanisms have well-established procedures for dealing with that ambiguity and can enforce the spirit of the law reasonably well. (The failures of this enforcement are proof of the difficulty of value specification, and evidence that laws alone will not close all the loopholes in human value specification.) 

Second, the law provides a relatively nuanced picture of the values we should give to AI. A simpler answer to the question of "what should the AI's values be?" would be "aligned with the person who's using it", known as intent alignment. Intent alignment is an important problem on its own, but does not entirely solve the problem. Law is particularly better than ideas like Coherent Extrapolated Volition, which attempt to reinvent morality in order to define the goals of an AI. 

Ensuring law expresses the democratically deliberated views of citizens with fidelity and integrity by reducing (1.) regulatory capture, (2.) illegal lobbying efforts, (3.) politicization of judicial[3] and agency[4] independence ... 

Cultivating the spread of democracy globally.

While I agree that these would be helpful for both informing AI of our values and benefitting society more broadly, it doesn't seem like a high leverage way to reduce AI risk. Improving governance is a crowded field. Though maybe the "improving institutional decision-making" folks would disagree.

there is no other legitimate source of societal values to embed in AGI besides law. 

This seems a bit too extreme. Is there no room for ethics outside of the law? It is not illegal to tell a lie or make a child cry, but AI should understand that those actions conflict with human preferences. Work on imbuing ethical understanding in AI systems therefore seems valuable. 

Comment by aogara (Aidan O'Gara) on Learning societal values from law as part of an AGI alignment strategy · 2022-10-21T20:28:49.521Z · LW · GW

There seems to be a conflict between the goals of getting AI to understand the law and preventing AI from shaping the law. Legal tech startups and academic interest in legal AI seems driven by the possibility of solving existing challenges by applying AI, e.g. contract review. The fastest way to AI that understands the law is to sell those benefits. This does introduce a long-term concern that AI could shape the law in malicious ways, perhaps by writing laws that pursue the wrong objective themselves or which empower future misaligned AIs. That might be the decisive argument, but I could imagine that exposing those problems early on and getting legal tech companies invested in solving them might be the best strategy for alignment. Any thoughts?

Comment by aogara (Aidan O'Gara) on Learning societal values from law as part of an AGI alignment strategy · 2022-10-21T09:16:00.136Z · LW · GW

I agree that getting an AGI to care about goals that it understands is incredibly difficult and perhaps the most challenging part of alignment. But I think John’s original claim is true: “Many arguments for AGI misalignment depend on our inability to imbue AGI with a sufficiently rich understanding of what individual humans want and how to take actions that respect societal values more broadly.” Here are some prominent articulations of the argument.

Unsolved Problems in ML Safety (Hendrycks et al., 2021) says that “Encoding human goals and intent is challenging.” Section 4.1 is therefore about value learning. (Section 4.2 is about the difficulty of getting a system to internalize those values.)

Robin Shah’s sequence on Value Learning argues that “Standard AI research will continue to make progress on learning what to do; catastrophe happens when our AI system doesn’t know what not to do. This is the part that we need to make progress on.”

Specification Gaming: The Flip Side of AI Ingenuity (Krakovna et al., 2020) says the following: “Designing task specifications (reward functions, environments, etc.) that accurately reflect the intent of the human designer tends to be difficult. Even for a slight misspecification, a very good RL algorithm might be able to find an intricate solution that is quite different from the intended solution, even if a poorer algorithm would not be able to find this solution and thus yield solutions that are closer to the intended outcome. This means that correctly specifying intent can become more important for achieving the desired outcome as RL algorithms improve. It will therefore be essential that the ability of researchers to correctly specify tasks keeps up with the ability of agents to find novel solutions.

John is suggesting one way of working on the outer alignment problem, while Zach is pointing out that inner alignment is arguably more dangerous. These are both fair points IMO. In my experience, people on this website often reject work on specifying human values in favor of problems that are more abstract but seen as more fundamentally difficult. Personally I’m glad there are people working on both.

Comment by aogara (Aidan O'Gara) on Analysis: US restricts GPU sales to China · 2022-10-21T08:42:52.648Z · LW · GW

China's top chipmaker, Semiconductor Manufacturing International Corporation, built a 7nm chip in July of this year. This was a big improvement on their previous best of 14nm and apparently surpassed expectations, though is still well behind IBM's 2nm chip and other similarly small ones. This was particularly surprising given that the Trump administration blacklisted SMIC two years ago, meaning for years they've been subject to similar restrictions as Biden recently imposed on all of China. Tough to put this in the right context, but we should follow progress of China's chipmakers going forwards. 

Comment by aogara (Aidan O'Gara) on Analysis: US restricts GPU sales to China · 2022-10-20T23:48:44.575Z · LW · GW

Why are gaming GPUs faster than ML GPUs? Are the two somehow adapted to their special purposes, or should ML people just be using gaming GPUs?

Comment by aogara (Aidan O'Gara) on An Extremely Opinionated Annotated List of My Favourite Mechanistic Interpretability Papers · 2022-10-18T21:48:39.025Z · LW · GW

Great resource, thanks for sharing! As somebody who's not too deeply familiar with either mechanistic interpretability or the academic field of interpretability, I find myself confused by the fact that AI safety folks usually dismiss the large academic field of interpretability. Most academic work on ML isn't useful for safety because safety studies different problems with different kinds of systems. But unlike focusing on worst-case robustness or inner misalignment, I would expect generating human understandable explanations of what neural networks are doing to be interesting to plenty of academics, and I would think that's what the academics are trying to do. Are they just bad at generating insights? Do they look for the wrong kinds of progress, perhaps motivated by different goals? Why is the large academic field of interpretability not particularly useful for x-risk motivated AI safety?

Comment by aogara (Aidan O'Gara) on Analysis: US restricts GPU sales to China · 2022-10-16T02:50:38.847Z · LW · GW

So is this a good thing for AI x-risk? Do you support a broader policy of the US crippling Chinese AI?

Comment by aogara (Aidan O'Gara) on [Crosspost] AlphaTensor, Taste, and the Scalability of AI · 2022-10-09T22:20:47.248Z · LW · GW

There are well established ML approaches to, e.g. image captioning, time-series prediction, audio segmentation etc etc. is the bottleneck you’re concerned with the lack of breadth and granularity of these problem-sets, OP - and we can mark progress (to some extent) by the number of these problem sets we have robust ML translations for?

I think this is an important problem. Going from progress on ML benchmarks to progress on real-world tasks is a very difficult challenge. For example, years after human level performance on ImageNet, we still have lots of trouble with real-world applications of computer vision like self-driving cars and medical diagnostics. That's because ImageNet isn't a directly valuable real world task, but rather is built to be amenable to supervised learning models that output a single class label for each input. 

While scale will improve performance within established paradigms, putting real world problems into ML paradigms remains squarely a problem for human research taste.