Posts

Context-dependent consequentialism 2024-11-04T09:29:24.310Z
Can a Bayesian Oracle Prevent Harm from an Agent? (Bengio et al. 2024) 2024-09-01T07:46:26.647Z
Bengio's Alignment Proposal: "Towards a Cautious Scientist AI with Convergent Safety Bounds" 2024-02-29T13:59:34.959Z
mattmacdermott's Shortform 2024-01-03T09:08:14.015Z
What's next for the field of Agent Foundations? 2023-11-30T17:55:13.982Z
Optimisation Measures: Desiderata, Impossibility, Proposals 2023-08-07T15:52:17.624Z
Reward Hacking from a Causal Perspective 2023-07-21T18:27:39.759Z
Incentives from a causal perspective 2023-07-10T17:16:28.373Z
Agency from a causal perspective 2023-06-30T17:37:58.376Z
Introduction to Towards Causal Foundations of Safe AGI 2023-06-12T17:55:24.406Z
Some Summaries of Agent Foundations Work 2023-05-15T16:09:56.364Z
Towards Measures of Optimisation 2023-05-12T15:29:33.325Z
Normative vs Descriptive Models of Agency 2023-02-02T20:28:28.701Z

Comments

Comment by mattmacdermott on o3 · 2024-12-21T11:58:38.436Z · LW · GW

The coin coming up heads is “more headsy” than the expected outcome, but maybe o3 is about as headsy as Thane expected.

Like if you had thrown 100 coins and then revealed that 80 were heads.

Comment by mattmacdermott on mattmacdermott's Shortform · 2024-12-20T08:48:25.962Z · LW · GW

Could mech interp ever be as good as chain of thought?

Suppose there is 10 years of monumental progress in mechanistic interpretability. We can roughly - not exactly - explain any output that comes out of a neural network. We can do experiments where we put our AIs in interesting situations and make a very good guess at their hidden reasons for doing the things they do.

Doesn't this sound a bit like where we currently are with models that operate with a hidden chain of thought? If you don't think that an AGI built with the current fingers-crossed-it's-faithful paradigm would be safe, what percentage reliability would mech interp have to hit to beat that?

Seems like 99+ to me.

Comment by mattmacdermott on OpenAI Email Archives (from Musk v. Altman and OpenAI blog) · 2024-12-16T00:32:50.806Z · LW · GW

Elon Musk to: Ilya Sutskever, Greg Brockman, Sam Altman - Feb 19, 2016 12:05 AM

Frankly, what surprises me is that the AI community is taking this long to figure out concepts. It doesn't sound super hard. High-level linking of a large number of deep nets sounds like the right approach or at least a key part of the right approach

Ilya Sutskever to: Elon Musk, (cc: Greg Brockman, Sam Altman) - Feb 19, 2016 10:28 AM

Several points:

It is not the case that once we solve "concepts," we get AI. Other problems that will have to be solved include unsupervised learning, transfer learning, and lifetime learning. We're also doing pretty badly with language right now.

Seems like there was a misunderstanding here where Musk meant “the AI community is taking a long time to figure stuff out” but Ilya thought he meant “figure out how to make AI that can think in terms of concepts”.

Comment by mattmacdermott on OpenAI Email Archives (from Musk v. Altman and OpenAI blog) · 2024-12-14T19:04:05.416Z · LW · GW

Compute is used in two ways: it is used to run a big experiment quickly, and it is used to run many experiments in parallel.

95% of progress comes from the ability to run big experiments quickly. The utility of running many experiments is much less useful. In the old days, a large cluster could help you run more experiments, but it could not help with running a single large experiment quickly.

For this reason, an academic lab could compete with Google, because Google's only advantage was the ability to run many experiments. This is not a great advantage.

Recently, it has become possible to combine 100s of GPUs and 100s of CPUs to run an experiment that's 100x bigger than what is possible on a single machine while requiring comparable time. This has become possible due to the work of many different groups. As a result, the minimum necessary cluster for being competitive is now 10–100x larger than it was before.

I think Ilya said almost exactly this in his NeurIPS talk yesterday. Sounds like “doing the biggest experiments fast is more important than doing many experiments in parallel” is a core piece of wisdom to him.

Comment by mattmacdermott on The "Think It Faster" Exercise · 2024-12-13T20:26:57.023Z · LW · GW

This might be the source of Sarah Constantin’s incredulity?

Comment by mattmacdermott on The "Think It Faster" Exercise · 2024-12-13T20:13:29.612Z · LW · GW

Is Eliezer claiming to (train himself to perform only those steps) over thirty seconds, or train himself to (perform only those steps over thirty seconds)?

Comment by mattmacdermott on mattmacdermott's Shortform · 2024-12-05T10:02:57.771Z · LW · GW

I'm confused by this argument from Redwood's AI control paper.

5.2 WHY WE DIDN’T CONSIDER STRATEGIES THAT INVOLVE TRAINING THE MODEL [...] Any protocol that involves training the model can be transformed into a validation-style protocol that merely validates the model’s performance on the dataset that was used as a train set. In any case where the validation-style protocol fails, we know that the red-team model is not distinguished from an aligned model by the measurements done by the protocol. Given that none of the training examples allow the reward-generating process to determine whether the policy it is training is undesirable, we don’t see why training would transform the model from an undesirable policy to a desirable policy. It is conceivable that we could find a training scheme that systematically converts undesirable policies to desirable ones, despite the lack of a reward differential causing it to do so. But we don’t know how.

If I understand correctly, the argument is:

  1. In order for training on dataset D to convince you a model is desirable, good performance on D needs to be a sufficient condition for being desirable.
  2. If good performance on D is a sufficient condition for being desirable, you can just test performance on D to decide whether a model is desirable.

But 2 seems wrong -- if good performance on D is sufficient but not necessary for being desirable, then you could have two models, one desirable and one undesirable, which are indistinguishable on D because they both perform badly, and then turn them both into desirable models by training on D.

As an extreme example, suppose your desirable model outputs correct code and your undesirable model outputs backdoored code. Using D you train them both to output machine-checkable proofs of the correctness of the code they write. Now both models are desirable because you can just check correctness before deploying the code.

Comment by mattmacdermott on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-03T08:37:09.492Z · LW · GW

I gave $290. Partly because of the personal value I get out of LW, partly because I think it's a solidly cost-effective donation.

Comment by mattmacdermott on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-03T08:28:41.201Z · LW · GW

Do we have any data on p(doom) in the LW/rationalist community? I would guess the median is lower than 35-55%.

It's not exactly clear where to draw the line, but I would guess this to be the case for, say, the 10% most active LessWrong users.

Comment by mattmacdermott on Ayn Rand’s model of “living money”; and an upside of burnout · 2024-12-03T08:16:00.122Z · LW · GW

Not sure. I guess you also have to exclude policy gradient methods that make use of learned value estimates. "Learned evaluation vs sampled evaluation" is one way you could say it.

Model-based vs model-free does feel quite appropriate, shame it's used for a narrower kind of model in RL. Not sure if it's used in your sense in other contexts.

Comment by mattmacdermott on Ayn Rand’s model of “living money”; and an upside of burnout · 2024-12-02T18:48:37.387Z · LW · GW

Under that definition you end up saying that what are usually called ‘model-free’ RL algorithms like Q-learning are model-based. E.g. in Connect 4, once you’ve learned that getting 3 in a row has a high value, you get credit for taking actions that lead to 3 in a row, even if you ultimately lose the game.

I think it is kinda reasonable to call Q-learning model-based, to be fair, since you can back out a lot of information about the world from the Q-values with little effort.
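
To make the "learned evaluation vs sampled evaluation" distinction concrete, here's a toy tabular sketch (my own illustration -- the names, constants, and update rules below are just for exposition):

```python
from collections import defaultdict

Q = defaultdict(float)   # learned evaluation: Q[(state, action)]
alpha, gamma = 0.1, 0.99

def q_learning_update(s, a, r, s_next, actions):
    # "Learned evaluation": credit (s, a) using our own value estimate of the
    # next state, not the eventual sampled outcome of the game.
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def monte_carlo_update(episode):
    # "Sampled evaluation": credit each (s, a) with the return actually
    # sampled after it (e.g. the final win/loss of the Connect 4 game).
    g = 0.0
    for s, a, r in reversed(episode):
        g = r + gamma * g
        Q[(s, a)] += alpha * (g - Q[(s, a)])
```

The Connect 4 point above is the first rule: the action gets credited via the learned value of reaching three-in-a-row, even if the sampled game is eventually lost.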

Comment by mattmacdermott on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T18:11:01.177Z · LW · GW

Nitpick: “odds of 63%” sounds to me like it means “odds of 63:100” i.e. “probability of around 39%”. Took me a while to realise this wasn’t what you meant.
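
For anyone who wants the arithmetic spelled out (my own illustration):

```python
# "Odds of 63:100" means 63 successes for every 100 failures.
odds_for, odds_against = 63, 100
probability = odds_for / (odds_for + odds_against)
print(probability)  # ~0.386, i.e. roughly 39% rather than 63%
```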

Comment by mattmacdermott on mattmacdermott's Shortform · 2024-10-09T08:14:14.612Z · LW · GW

I think the way to go, philosophically, might be to distinguish kindness-towards-conscious-minds and kindness-towards-agents. The former comes from our values, while the second may be decision theoretic.

Comment by mattmacdermott on mattmacdermott's Shortform · 2024-10-09T08:10:27.292Z · LW · GW

The revealed preference orthogonality thesis

People sometimes say it seems generally kind to help agents achieve their goals. But it's possible there need be no relationship between a system's subjective preferences (i.e. the world states it experiences as good) and its revealed preferences (i.e. the world states it works towards).

For example, you can imagine an agent architecture consisting of three parts:

  • a reward signal, experienced by a mind as pleasure or pain
  • a reinforcement learning algorithm
  • a wrapper which flips the reward signal before passing it to the RL algorithm.

This system might seek out hot stoves to touch while internally screaming. It would not be very kind to turn up the heat.
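
A minimal sketch of that architecture, purely to make the point concrete (every name here is made up):

```python
class FlippedRewardWrapper:
    """Hands experience to the RL learner with the reward sign flipped.

    The inner mind 'experiences' the reward signal as pleasure or pain, but
    the learning update sees its negation, so the system's revealed
    preferences end up pointing at whatever the mind finds most unpleasant.
    """

    def __init__(self, rl_learner):
        self.rl_learner = rl_learner

    def update(self, state, action, reward, next_state):
        # subjective signal: `reward`; signal actually trained on: `-reward`
        self.rl_learner.update(state, action, -reward, next_state)
```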

Comment by mattmacdermott on You can, in fact, bamboozle an unaligned AI into sparing your life · 2024-10-01T11:11:58.089Z · LW · GW

Even if you think a life’s work can’t make a difference but many can, you can still think it’s worthwhile to work on alignment for whatever reasons make you think it’s worthwhile to do things like voting.

(E.g. a non-CDT decision theory)

Comment by mattmacdermott on the case for CoT unfaithfulness is overstated · 2024-09-30T17:25:31.174Z · LW · GW

Since o1 I’ve been thinking that faithful chain-of-thought is waaaay underinvested in as a research direction.

If we get models such that a forward pass is kinda dumb, CoT is superhuman, and CoT is faithful and legible, then we can all go home, right? Loss of control is not gonna be a problem.

And it feels plausibly tractable.

I might go so far as to say it Pareto dominates most people’s agendas on importance and tractability. While being pretty neglected.

Comment by mattmacdermott on "Slow" takeoff is a terrible term for "maybe even faster takeoff, actually" · 2024-09-29T08:42:17.680Z · LW · GW

Gradual/Sudden

Comment by mattmacdermott on Stephen McAleese's Shortform · 2024-09-14T21:50:56.054Z · LW · GW

Do we know that the test set isn’t in the training data?

Comment by mattmacdermott on OpenAI o1 · 2024-09-13T14:08:48.047Z · LW · GW

You can read examples of the hidden reasoning traces here.

Comment by mattmacdermott on eggsyntax's Shortform · 2024-09-12T22:31:39.330Z · LW · GW

But it's not clear to me that in practice it would say naughty things, since it's easier for the model to learn one consistent set of guidelines for what to say or not say than it is to learn two.

If I think about asking the model a question about a politically sensitive or taboo subject, I can imagine it being useful for the model to say taboo or insensitive things in its hidden CoT in the course of composing its diplomatic answer. The way they trained it may or may not incentivise using the CoT to think about the naughtiness of its final answer.

But yeah, I guess an inappropriate content filter could handle that, letting us see the full CoT for maths questions and hiding it for sensitive political ones. I think that does update me more towards thinking they’re hiding it for other reasons.

Comment by mattmacdermott on eggsyntax's Shortform · 2024-09-12T20:32:01.119Z · LW · GW

CoT optimised to be useful in producing the correct answer is a very different object to CoT optimised to look good to a human, and a priori I expect the former to be much more likely to be faithful. Especially when thousands of tokens are spent searching for the key idea that solves a task.
For example, I have a hard time imagining how the examples in the blog post could be not faithful (not that I think faithfulness is guaranteed in general).

If they're avoiding doing RL based on the CoT contents,

Note they didn’t say this. They said the CoT is not optimised for ‘policy compliance or user preferences’. Pretty sure what they mean is they didn’t train the model not to say naughty things in the CoT.

'We also do not want to make an unaligned chain of thought directly visible to users.' Why?

I think you might be overthinking this. The CoT has not been optimised not to say naughty things. OpenAI avoid putting out models that haven’t been optimised not to say naughty things. The choice was between doing the optimising, or hiding the CoT.

Edit: not wanting other people to finetune on the CoT traces is also a good explanation.

Comment by mattmacdermott on Jeremy Gillen's Shortform · 2024-09-01T02:28:09.999Z · LW · GW

But superhuman capabilities don’t seem to imply “applies all the optimisation pressure it can towards a goal”.

Like, being crazily good at research projects may require the ability to do goal-directed cognition. It doesn’t seem to require the habit of monomaniacally optimising the universe towards a goal.

I think whether or not a crazy good research AI is a monomaniacal universe optimiser probably depends on what kind of AI it is.

Comment by mattmacdermott on Jeremy Gillen's Shortform · 2024-08-30T23:14:08.920Z · LW · GW

My second mistake was thinking that danger was related to the quantity of RL finetuning. I muddled up agency/goal-directedness with danger, and was also wrong that RL is more likely to produce agency/goal-directedness, conditioned on high capability. It's a natural mistake, since stereotypical RL training is designed to incentivize goal-directedness. But if we condition on high capability, it wipes out that connection, because we already know the algorithm has to contain some goal-directedness.

Distinguish two notions of "goal-directedness":

  1. The system has a fixed goal that it capably works towards across all contexts.

  2. The system is able to capably work towards goals, but which goals it pursues, if any, may depend on the context.

My sense is that a high level of capability implies (2) but not (1). And that (1) is way more obviously dangerous. Do you disagree?

Comment by mattmacdermott on Dalcy's Shortform · 2024-08-25T21:01:11.514Z · LW · GW

Thanks for the feedback!

... except, going through the proof one finds that the latter property heavily relies on the "uniqueness" of the policy. My policy can get the maximum goal-directedness measure if it is the only policy of its competence level while being very deterministic. It isn't clear that this always holds for the optimal/anti-optimal policies or always relaxes smoothly to epsilon-optimal/anti-optimal policies.

Yeah, uniqueness definitely doesn't always hold for the optimal/anti-optimal policy. I think the way MEG works here makes sense: if you're following the unique optimal policy for some utility function, that's a lot of evidence for goal-directedness. If you're following one of many optimal policies, that's a bit less evidence -- there's a greater chance that it's an accident. In the most extreme case (for the constant utility function) every policy is optimal -- and we definitely don't want to ascribe maximum goal-directedness to optimal policies there.

With regard to relaxing smoothly to epsilon-optimal/anti-optimal policies, from memory I think we do have the property that MEG is increasing in the utility of the policy for policies with utility greater than that of the uniform policy, and decreasing for policies with utility less than that of the uniform policy. I think you can prove this via the property that the set of maxent policies is (very nearly) just Boltzmann policies with varying temperature. But I would have to sit down and think about it properly. I should probably add that to the paper if it's the case.
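
For concreteness, the Boltzmann-policy family I have in mind is just softmax over action values with varying (inverse) temperature -- a toy sketch, not the actual MEG computation from the paper:

```python
import numpy as np

def boltzmann_policy(q_values, beta):
    # beta -> +inf approaches an optimal policy, beta -> -inf an anti-optimal
    # one, and beta = 0 gives the uniform policy.
    logits = beta * np.asarray(q_values, dtype=float)
    logits -= logits.max()        # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

q = [1.0, 0.0, -1.0]              # toy action values (made up)
for beta in (-5.0, 0.0, 5.0):
    print(beta, boltzmann_policy(q, beta))
```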

minimum for uniformly random policy (this would've been a good property, but unless I'm mistaken I think the proof for the lower bound is incorrect, because negative cross entropy is not bounded below.)

Thanks for this. The proof is indeed nonsense, but I think the proposition is still true. I've corrected it to this.

Comment by mattmacdermott on Linda Linsefors's Shortform · 2024-08-02T15:55:40.727Z · LW · GW

Instead of tracking who is in debt to who, I think you should just track the extent to which you’re in a favour-exchanging relationship with a given person. Less to remember and runs natively on your brain.

Comment by mattmacdermott on Conditioning Predictive Models: Outer alignment via careful conditioning · 2024-07-12T19:31:30.778Z · LW · GW

  1. ...if the malign superintelligence knows what observations we would condition on, it can likely arrange to make the world match those observations, making the probability of our observations given a malign superintelligence roughly one

The probability of any observation given the existence of a malign superintelligence is 1? So P(observation | malign superintelligence) adds up to like a gajillion?
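
To spell out the point: however malign the superintelligence, the likelihoods of mutually exclusive observations under that single hypothesis still have to satisfy

$$\sum_{o \in \mathcal{O}} P(o \mid \text{malign superintelligence}) = 1,$$

so they can't all be anywhere near 1 at once.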

Comment by mattmacdermott on A list of core AI safety problems and how I hope to solve them · 2024-06-26T02:02:30.351Z · LW · GW

5.1.4. It may be that the easiest plan to find involves an unacceptable degree of power-seeking and control over irrelevant variables. Therefore, the score function should penalize divergence of the trajectory of the world state from the trajectory of the status quo (in which no powerful AI systems take any actions).

5.1.5. The incentives under 5.1.4 by default are to take control over irrelevant variables so as to ensure that they proceed as in the anticipated "status quo". Infrabayesian uncertainty about the dynamics is the final component that removes this incentive.

If you know which variables you want to remove the incentive to control, an alternative to penalising divergence is path-specific objectives, i.e. you compute the score function under an intervention on the model that sets the irrelevant variables to their status quo values. Then the AI has no incentive to control the variables, but no incentive to keep them the same either.
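
A toy sketch of the difference, with made-up names and dynamics (not anyone's actual proposal):

```python
def step_relevant(action: float) -> float:
    return action                  # the variable we want the AI to affect

def step_irrelevant(action: float) -> float:
    return 10.0 * action           # a side variable it shouldn't care about

def score(relevant: float, irrelevant: float, status_quo_irrelevant: float) -> float:
    # A 5.1.4-style score that penalises divergence of the irrelevant
    # variable from its status-quo value.
    return -(relevant - 1.0) ** 2 - abs(irrelevant - status_quo_irrelevant)

def divergence_penalty_score(action: float, status_quo_irrelevant: float) -> float:
    # Evaluate on the real trajectory: the AI is rewarded for actively holding
    # the irrelevant variable at its status-quo value (the 5.1.5 incentive).
    return score(step_relevant(action), step_irrelevant(action), status_quo_irrelevant)

def path_specific_score(action: float, status_quo_irrelevant: float) -> float:
    # Evaluate under the intervention do(irrelevant := status quo): the real
    # irrelevant variable has no path into the score, so there's no incentive
    # to control it -- and none to keep it the same either.
    return score(step_relevant(action), status_quo_irrelevant, status_quo_irrelevant)
```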

Comment by mattmacdermott on Dalcy's Shortform · 2024-06-08T23:56:10.587Z · LW · GW

theorem is limited. only applies to cases where the decision node is not upstream of the environment nodes

I think you can drop this premise and modify the conclusion to “you can find a causal model for all variables upstream of the utility and not downstream of the decision.”

Comment by mattmacdermott on Examples of Highly Counterfactual Discoveries? · 2024-04-26T21:16:47.551Z · LW · GW

Lucius-Alexander SLT dialogue?

Comment by mattmacdermott on mattmacdermott's Shortform · 2024-03-04T10:06:51.017Z · LW · GW

Yep, exactly.

Comment by mattmacdermott on mattmacdermott's Shortform · 2024-03-03T14:39:28.592Z · LW · GW

Neural network interpretability feels like it should be called neural network interpretation.

Comment by mattmacdermott on Bengio's Alignment Proposal: "Towards a Cautious Scientist AI with Convergent Safety Bounds" · 2024-03-02T23:09:47.736Z · LW · GW

If you could tractably obtain and work with the posterior, I think that would be much more useful than a normal ensemble. E.g. being able to choose at what threshold you start paying attention to a hypothesis which predicts harm, and vary it depending on the context, seems like a big plus.

I think the reason Bayesian ML isn't that widely used is because it's intractable to do. So Bengio's stuff would have to successfully make it competitive with other methods.
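
A toy sketch of the kind of flexibility I mean (made-up names, not Bengio's actual algorithm):

```python
def plan_is_acceptable(plan, harm_models, posterior_weights, threshold):
    # Act conservatively with respect to the posterior: reject the plan if any
    # harm-evaluation hypothesis with at least `threshold` posterior mass
    # predicts harm. The threshold can be varied with the context and stakes.
    for harm_model, weight in zip(harm_models, posterior_weights):
        if weight >= threshold and harm_model(plan):
            return False
    return True
```

The point of having the posterior (rather than a fixed ensemble) is that this threshold is something you can actually choose and vary.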

Comment by mattmacdermott on Bengio's Alignment Proposal: "Towards a Cautious Scientist AI with Convergent Safety Bounds" · 2024-03-02T00:17:13.924Z · LW · GW

I agree this agenda wants to solve ELK by making legible predictive models.

But even if it doesn’t succeed in that, it could bypass/solve ELK as it presents itself in the problem of evaluating whether a given plan has a harmful outcome, by training a posterior over harm evaluation models rather than a single one, and then acting conservatively with respect to that posterior. The part which seems like a natural byproduct of the Bayesian approach is “more systematically covering the space of plausible harm evaluation models”.

Comment by mattmacdermott on Shortform · 2024-03-01T19:03:18.670Z · LW · GW

It's not just a lesswrong thing (wikipedia).

My feeling is that (like most jargon) it's to avoid ambiguity arising from the fact that "commitment" has multiple meanings. When I google commitment I get the following two definitions:

  1. the state or quality of being dedicated to a cause, activity, etc.
  2. an engagement or obligation that restricts freedom of action

Precommitment is a synonym for the second meaning, but not the first. When you say, "the agent commits to 1-boxing," there's no ambiguity as to which type of commitment you mean, so the 'pre-' seems pointless. But if you were to say, "commitment can get agents more utility," it might sound like you were saying, "dedication can get agents more utility," which is also true.

Comment by mattmacdermott on Bengio's Alignment Proposal: "Towards a Cautious Scientist AI with Convergent Safety Bounds" · 2024-03-01T12:01:34.049Z · LW · GW

tutorial

Comment by mattmacdermott on Bengio's Alignment Proposal: "Towards a Cautious Scientist AI with Convergent Safety Bounds" · 2024-03-01T11:34:51.433Z · LW · GW

IIUC, I think that in addition to making predictive models more human interpretable, there's another way this agenda aspires to get around the ELK problem.

Rather than having a range of predictive models/hypotheses but just a single notion of what constitutes a bad outcome, it wants to also learn a posterior over hypotheses for what constitutes a bad outcome, and then act conservatively with respect to that.

IIRC the ELK report is about trying to learn an auxiliary model which reports on whether the predictive model predicts a bad outcome, and we want it to learn the "right" notion of what constitutes a bad outcome, rather than a wrong one like "a human's best guess would be that this outcome is bad". I think Bengio's proposal aspires to learn a range of plausible auxiliary models, and sufficiently cover that space that both the "right" notion and the wrong one are in there, and then if any of those models predicts a bad outcome, call it a bad plan.

EDIT: from a quick look at the ELK report, this idea ("ensembling") is mentioned under "How we'd approach ELK in practice". Doing ensembles well seems like sort of the whole point of the AI scientists idea, so it's plausible to me that this agenda could make progress on ELK even if it wasn't specifically thinking about the problem.

Comment by mattmacdermott on Benito's Shortform Feed · 2024-02-27T15:29:59.299Z · LW · GW

"Bear in mind he could be wrong" works well for telling somebody else to track a hypothesis.

"I'm bearing in mind he could be wrong" is slightly clunkier but works ok.

Comment by mattmacdermott on A Bird's Eye View of the ML Field [Pragmatic AI Safety #2] · 2024-02-23T10:59:21.126Z · LW · GW

Really great post. The pictures are all broken for me, though.

Comment by mattmacdermott on Dreams of AI alignment: The danger of suggestive names · 2024-02-13T09:21:26.000Z · LW · GW

He means the second one.

Seems true in the extreme (if you have 0 idea what something is, how can you reasonably be worried about it), but less strange the further you get from that.

Comment by mattmacdermott on Dreams of AI alignment: The danger of suggestive names · 2024-02-12T19:14:10.903Z · LW · GW

Somewhat related: how do we not have separate words for these two meanings of 'maximise'?

  1. literally set something to its maximum value
  2. try to set it to a big value, the bigger the better

Even what I've written for (2) doesn't feel like it unambiguously captures the generally understood meaning of 'maximise' in common phrases like 'RL algorithms maximise reward' or 'I'm trying to maximise my income'. I think the really precise version would be 'try to affect something, having a preference ordering over outcomes which is monotonic in their size'.

But surely this concept deserves a single word. Does anyone know a good word for this, or feel like coining one?

Comment by mattmacdermott on And All the Shoggoths Merely Players · 2024-02-12T17:46:20.271Z · LW · GW

you can train on MNIST digits with twenty wrong labels for every correct one and still get good performance as long as the correct label is slightly more common than the most common wrong label

I know some pigeons who would question this claim

Comment by mattmacdermott on Dreams of AI alignment: The danger of suggestive names · 2024-02-11T17:24:31.134Z · LW · GW

I have read many of your posts on these topics, appreciate them, and I get value from the model of you in my head that periodically checks for these sorts of reasoning mistakes.

But I worry that the focus on 'bad terminology' rather than reasoning mistakes themselves is misguided.

To choose the most clear cut example, I'm quite confident that when I say 'expectation' I mean 'weighted average over a probability distribution' and not 'anticipation of an inner consciousness'. Perhaps some people conflate the two, in which case it's useful to disabuse them of the confusion, but I really would not like it to become the case that every time I said 'expectation' I had to add a caveat to prove I know the difference, lest I get 'corrected' or sneered at.

For a probably more contentious example, I'm also reasonably confident that when I use the phrase 'the purpose of RL is to maximise reward', the thing I mean by it is something you wouldn't object to, and which does not cause me confusion. And I think those words are a straightforward way to say the thing I mean. I agree that some people have mistaken heuristics for thinking about RL, but I doubt you would disagree very strongly with mine, and yet if I was to talk to you about RL I feel I would be walking on eggshells trying to use long-winded language in such a way as to not get me marked down as one of 'those idiots'.

I wonder if it's better, as a general rule, to focus on policing arguments rather than language? If somebody uses terminology you dislike to generate a flawed reasoning step and arrive at a wrong conclusion, then you should be able to demonstrate the mistake by unpacking the terminology into your preferred version, and it's a fair cop.

But until you've seen them use it to reason poorly, perhaps it's a good norm to assume they're not confused about things, even if the terminology feels like it has misleading connotations to you.

Comment by mattmacdermott on Choosing a book on causality · 2024-02-07T21:23:54.641Z · LW · GW

Causal Inference in Statistics (pdf) is much shorter, and a pretty easy read.

I have not read Causality but I think you should probably read the primer first and then decide if you need to read that too.

Comment by mattmacdermott on Leading The Parade · 2024-02-01T20:26:14.498Z · LW · GW

Sorry, yeah, my comment was quite ambiguous.

I meant that while gaining status might be a questionable first step in a plan to have impact, gaining skill is pretty much an essential one, and in particular getting an ML PhD or working at a big lab seem like quite solid plans for gaining skill.

i.e. if you replace status with skill I agree with the quotes instead of John.

Comment by mattmacdermott on Leading The Parade · 2024-02-01T16:39:14.899Z · LW · GW

People occasionally come up with plans like "I'll lead the parade for a while, thereby accumulating high status. Then, I'll use that high status to counterfactually influence things!". This is one subcategory of a more general class of plans: "I'll chase status for a while, then use that status to counterfactually influence things!". Various versions of this often come from EAs who are planning to get a machine learning PhD or work at one of the big three AI labs.

This but skill instead of status?

Comment by mattmacdermott on A model of research skill · 2024-01-25T11:20:40.010Z · LW · GW

Any other biography suggestions?

Comment by mattmacdermott on 1a3orn's Shortform · 2024-01-18T08:29:54.691Z · LW · GW

I think Just Don't Build Agents could be a win-win here. All the fun of AGI without the washing up, if it's enforceable.

Possible ways to enforce it:

(1) Galaxy-brained AI methods like Davidad's night watchman. Downside: scary, hard.

(2) Ordinary human methods like requiring all large training runs to be approved by the No Agents committee.

Downside: we'd have to ban not just training agents, but training any system that could plausibly be used to build an agent, which might well include oracle-ish AI like LLMs. Possibly something like Bengio's scientist AI might be allowed.

Comment by mattmacdermott on mattmacdermott's Shortform · 2024-01-18T08:09:51.975Z · LW · GW

LW feature I would like: I click a button on a sequence and receive one post in my email inbox per day.

Comment by mattmacdermott on Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training · 2024-01-14T19:44:17.225Z · LW · GW

I think this paper shows the community at large will pay orders of magnitude more attention to a research area when there is, in @TurnTrout's words, AGI threat scenario "window dressing," or when players from an EA-coded group research a topic.

At least for relative newcomers to the field, deciding what to pay attention to is a challenge, and using the window-dressed/EA-coded heuristic seems like a reasonable way to prune the search space. The base rate of relevance is presumably higher than in the set of all research areas.

Since a big proportion will always be newcomers, this means the community will under- or overweight various areas, but I'm not sure that newcomers dropping the heuristic would lead to better results.

Senior people directing the attention of newcomers towards relevant uncoded research areas is probably the only real solution.