Posts

Andrew McKnight's Shortform 2021-12-05T01:36:52.160Z

Comments

Comment by Andrew McKnight (andrew-mcknight) on 'Empiricism!' as Anti-Epistemology · 2024-03-15T18:47:28.638Z · LW · GW

lukeprog argued similarly that we should drop the "the"

Comment by Andrew McKnight (andrew-mcknight) on The shape of AGI: Cartoons and back of envelope · 2023-07-21T18:38:12.843Z · LW · GW

Another possible inflection point, pre-self-improvement, could be when an AI acquires a set of capabilities that allows it to gain new capabilities at inference time.

Comment by Andrew McKnight (andrew-mcknight) on UFO Betting: Put Up or Shut Up · 2023-06-19T01:09:48.692Z · LW · GW

I'll repeat this bet, same odds, same conditions, same payout, if you're still interested. My $10k to your $200 in advance.

Comment by Andrew McKnight (andrew-mcknight) on Policy discussions follow strong contextualizing norms · 2023-04-03T22:09:57.555Z · LW · GW

Responding to your #1, do you think we're on track to handle the cluster of AGI Ruin scenarios pointed at in 16-19? I feel we are not making any progress here other than towards verifying some properties in 17.

16: outer optimization even on a very exact, very simple loss function doesn't produce inner optimization in that direction.
17: on the current optimization paradigm there is no general idea of how to get particular inner properties into a system, or verify that they're there, rather than just observable outer ones you can run a loss function over. 
18: There's no reliable Cartesian-sensory ground truth (reliable loss-function-calculator) about whether an output is 'aligned'
19: there is no known way to use the paradigm of loss functions, sensory inputs, and/or reward inputs, to optimize anything within a cognitive system to point at particular things within the environment

Comment by Andrew McKnight (andrew-mcknight) on Anthropic's Core Views on AI Safety · 2023-03-13T20:23:52.827Z · LW · GW

Thanks for the links and explanation, Ethan.

Comment by Andrew McKnight (andrew-mcknight) on Anthropic's Core Views on AI Safety · 2023-03-11T02:27:28.668Z · LW · GW

I mean, it's mostly semantics, but I think of mechanistic interpretability as "inner" but not alignment, and I think it's clearer that way, personally, so that we don't call everything alignment. Observing properties doesn't automatically get you good properties. I'll read your link but it's a bit too much to wade into for me atm.

Either way, it's clear how to restate my question: is mechanistic interpretability work the only inner alignment work Anthropic is doing?

Comment by Andrew McKnight (andrew-mcknight) on Anthropic's Core Views on AI Safety · 2023-03-10T23:35:06.629Z · LW · GW

Great post. I'm happy to see these plans coming out, following OpenAI's lead.

It seems like all the safety strategies are targeted at outer alignment and interpretability. None of the recent OpenAI, DeepMind, Anthropic, or Conjecture plans seem to target inner alignment, iirc, even though this seems to me like the biggest challenge.

Is Anthropic mostly leaving inner alignment untouched, for now?

Comment by Andrew McKnight (andrew-mcknight) on Acausal normalcy · 2023-03-10T22:58:52.209Z · LW · GW

Taken literally, the only way to merge n utility functions into one without any other info (e.g. the preferences that generated the utility functions) is to do a weighted sum. There are only n-1 free parameters.
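
Spelling that out (just restating the claim in symbols):

```latex
U(x) \;=\; \sum_{i=1}^{n} w_i \, U_i(x),
\qquad w_i \ge 0, \quad \sum_{i=1}^{n} w_i = 1 .
```

Rescaling all the weights together doesn't change the resulting preferences, so normalizing them costs nothing, and only the n-1 relative weights remain as free parameters.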

Comment by Andrew McKnight (andrew-mcknight) on Announcing Encultured AI: Building a Video Game · 2022-08-22T19:38:08.112Z · LW · GW

Wouldn't the kind of alignment you'd be able to test behaviorally in a game be unrelated to scalable alignment?

Comment by Andrew McKnight (andrew-mcknight) on Computational Model: Causal Diagrams with Symmetry · 2022-08-07T20:44:09.956Z · LW · GW

I know this was 3 years ago, but was this disagreement resolved, maybe offline?

Comment by Andrew McKnight (andrew-mcknight) on Two-year update on my personal AI timelines · 2022-08-03T21:32:28.500Z · LW · GW

Is there reason to believe algorithmic improvements follow an exponential curve? Do you happen to know a good source on this?

Comment by Andrew McKnight (andrew-mcknight) on AGI Ruin: A List of Lethalities · 2022-08-01T21:52:54.234Z · LW · GW

I'm tempted to call this a meta-ethical failure. Fatalism, universal moral realism, and just-world intuitions seem to be the underlying implicit heuristics or principles that would cause this "cosmic process" thought-blocker.

Comment by Andrew McKnight (andrew-mcknight) on Why all the fuss about recursive self-improvement? · 2022-07-28T18:26:27.021Z · LW · GW

I think it's good to go back to this specific quote and think about how it compares to AGI progress.

A difference I think Paul has mentioned before is that Go was not a competitive industry and competitive industries will have smaller capability jumps. Assuming this is true, I also wonder whether the secret sauce for AGI will be within the main competitive target of the AGI industry.

The thing the industry is calling AGI and targeting may end up being a specific style of shallow deployable intelligence, while "real" AGI is a different style of "deeper" intelligence (with, say, less economic value at partial stages and therefore relatively unpursued). This would allow a huge jump like AlphaGo in AGI even in a competitive industry targeting AGI.

Both possibilities seem plausible to me and I'd like to hear arguments either way.

Comment by Andrew McKnight (andrew-mcknight) on AGI Ruin: A List of Lethalities · 2022-06-18T14:39:00.981Z · LW · GW

von Neumann's design was worked out in full detail, but, iirc, when it was run for the first time (in the '90s) it had a few bugs that needed fixing. I haven't followed Freitas in a long time either, but I agree that the designs weren't fully spelled out and would have needed iteration.

Comment by Andrew McKnight (andrew-mcknight) on AGI Ruin: A List of Lethalities · 2022-06-06T21:18:23.819Z · LW · GW

If we merely lose control of the future and virtually all resources but many of us aren't killed in 30 years, would you consider Eliezer right or wrong?

Comment by Andrew McKnight (andrew-mcknight) on AGI Ruin: A List of Lethalities · 2022-06-06T20:52:35.602Z · LW · GW

There is some evidence that complex nanobots could be invented in one's head with a little more IQ and focus, because von Neumann designed a mostly functional (but fragile) replicator in a fake simple physics, using the brand-new idea of a cellular automaton, without a computer, and without the idea of DNA. If a slightly smarter von Neumann had focused his life on nanobots, could he have produced, for instance, the works of Robert Freitas, but in the 1950s, and only on paper?

I do, however, agree it would be helpful to have different words for different styles of AGI, but it seems hard to distinguish these AGIs productively when we don't yet know the order of development and which key dimensions of distinction will be worth using as we move forward (human-level vs. super-? shallow vs. deep? passive vs. active? autonomy types? tightness of self-improvement? etc.). Which dimensions will pragmatically matter?

Comment by Andrew McKnight (andrew-mcknight) on February 2022 Open Thread · 2022-02-22T19:20:58.040Z · LW · GW

I think this makes sense because eggs are haploid (they already have only 23 chromosomes), but a natural next question is: why are eggs haploid if there is a major incentive to pass on more of the 46 chromosomes?

Comment by Andrew McKnight (andrew-mcknight) on Andrew McKnight's Shortform · 2021-12-05T01:36:52.402Z · LW · GW

I've been thinking about benefits of "Cognitive Zoning Laws" for AI architecture.

If specific cognitive operations were only performed in designated modules, then these modules could have operation-specific tracking, interpreting, validation, rollback, etc. If we could ensure "zone breaches" can't happen (via, e.g., proved invariants or, more realistically, detection and rollback), then we could theoretically stay aware of where all instances of each cognitive operation are happening in the system. For now let's call this cognitive-operation-factored architecture "Zoned AI".
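
As a toy illustration of the kind of structure I mean (every name here is hypothetical, not a real framework), a minimal sketch:

```python
# Toy sketch of a "zoned" module: it may only perform its designated
# cognitive operation, and every invocation is tracked so breaches could
# be detected and rolled back. Hypothetical names, not a real framework.

class OperationTracker:
    """Operation-specific logging; a real version might also validate or roll back."""
    def __init__(self):
        self.log = []

    def record(self, module_name, operation, payload):
        self.log.append((module_name, operation, payload))


class ZonedModule:
    def __init__(self, name, allowed_operation, fn, tracker):
        self.name = name
        self.allowed_operation = allowed_operation  # e.g. "perception", "planning"
        self.fn = fn                                # the module's actual computation
        self.tracker = tracker

    def run(self, operation, inputs):
        if operation != self.allowed_operation:
            # A "zone breach": a real system would trigger detection/rollback here.
            raise PermissionError(f"{self.name} is not zoned for {operation}")
        self.tracker.record(self.name, operation, inputs)
        return self.fn(inputs)


# Usage sketch:
tracker = OperationTracker()
perception = ZonedModule("perception", "perceive", fn=lambda x: x, tracker=tracker)
perception.run("perceive", "raw pixels")   # allowed, and logged
# perception.run("plan", "goal")           # would raise: planning is out of zone
```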

Zoned AI seems helpful in preventing inner optimizers that arise within particular modules (though it might have little to say about emergent cross-module optimizers), and it would also let interpretability techniques focus in on particular sections of the AI. (Totally speculating, but if we knew where the meta-learning was inside GPT-3, it might turn out to be spread all over the place, and even with interpretability tools that could be hard to understand globally, compared to the ability being localized in the network.) Gradient descent training schemes break cognitive zoning law by default.

Defining cognitive operations perfectly enough to capture all instances of them is a losing battle. Instead we might (1) allow lots of false negatives and (2) use a behavioral test for detecting them rather than a definition.

To test a single inner piece of a Zoned AI, we create a second Zoned AI that is functional for some task and remove the capacity we want to test from that AI. Then we take the inner piece we are testing for a breach from the first AI, wrap it in a shallow network (a neural net or whatever), and see if the second AI can be made to function by training the shallow network. If the training succeeds, then we have a thing that is sufficiently similar to the disallowed operation, so we have a breach.
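
Very roughly, the test might look like this. This is a sketch under big assumptions: ShallowAdapter, train, and task_performance are hypothetical stand-ins for whatever training setup the second AI actually uses, not a real API.

```python
# Hypothetical sketch of the breach test described above.

def breach_test(suspect_piece, second_ai_missing_capacity, task,
                performance_threshold):
    """Check whether `suspect_piece` can stand in for the removed capacity.

    suspect_piece: a frozen sub-component extracted from the first Zoned AI.
    second_ai_missing_capacity: a Zoned AI known to work on `task`, with the
        capacity under test removed.
    """
    # Wrap the frozen piece in a small trainable network so its inputs and
    # outputs can be adapted to the second AI's interfaces.
    adapter = ShallowAdapter(inner=suspect_piece, freeze_inner=True)

    # Plug the wrapped piece into the slot where the capacity was removed.
    candidate = second_ai_missing_capacity.with_module(adapter)

    # Train only the shallow wrapper; the suspect piece itself stays fixed.
    train(candidate, task, trainable_params=adapter.wrapper_parameters())

    # If the second AI now functions, the piece is doing something close
    # enough to the disallowed operation: a zone breach.
    return task_performance(candidate, task) >= performance_threshold
```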

Now, we don't actually want to check every tiny piece of the AI, so instead we train a third system to search for sections that might contain the disallowed ability and to predict whether one exists within the entire first AI, using the second AI only as an expensive check.
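
The cheaper search layer could then be something like the following, with the breach test above as the expensive oracle (again, sample_sections, section_features, and the classifier interface are all hypothetical stand-ins):

```python
# Hypothetical sketch: a cheap predictor of breaches, trained against the
# expensive behavioral check on a small sample of sections.

def train_breach_detector(first_ai, reference_ai_missing_capacity, task,
                          classifier, section_features, num_labels=100):
    labeled = []
    # Only a small sample of candidate sections gets the expensive check.
    for section in sample_sections(first_ai, k=num_labels):
        is_breach = breach_test(section, reference_ai_missing_capacity,
                                task, performance_threshold=0.9)
        labeled.append((section_features(section), is_breach))

    X, y = zip(*labeled)
    classifier.fit(X, y)   # any off-the-shelf classifier interface
    return classifier      # afterwards used to flag suspicious sections cheaply
```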

Seeing the same abilities cropping up in the wrong place would tell you about the incentives innate to your architecture components and gesture towards new architectures that relieve the incentive (e.g. if you find planning in your perception, then maybe you need to attach the planner in a controlled way to the perception module).

None of this will work at later stages, when an AGI can operate on itself, but I would hope Cognitive Zoning could help during the crucial phase when we have AGI architecture in our hands but have not yet deployed instances at a scale where they are dangerous.

Thoughts and improvements? I'm sure this isn't a novel idea but has anyone written about it?

Comment by Andrew McKnight (andrew-mcknight) on Morality is Scary · 2021-12-02T22:12:53.402Z · LW · GW

I think the main thing you're missing here is that an AI is not generally going to share common learning facilities with humans. Raising an AI as a human would still leave it wildly different from a normal human, because it isn't built to learn from those experiences precisely the way a human does.

Comment by Andrew McKnight (andrew-mcknight) on Ngo and Yudkowsky on alignment difficulty · 2021-11-24T22:11:14.338Z · LW · GW

I haven't read your papers, but your proposal seems like it would scale up until the point when the AGI looks at itself. If it can't learn at this point, then I find it hard to believe it's generally capable; and if it can, it will have an incentive to simply remove the device or create a copy of itself that is correct about its own world model. Do you address this in the articles?

On the other hand, this made me curious about what we could do with an advanced model that is instructed to not learn and also whether we can even define and ensure a model stops learning.

Comment by Andrew McKnight (andrew-mcknight) on Ngo and Yudkowsky on AI capability gains · 2021-11-19T21:55:25.374Z · LW · GW

I agree that this thread makes it clearer why takeoff speeds matter to people, but I always want to ask why people think sufficient work is going to get done in that extended 4-10 years, even with access to proto-AGI to directly study.

Comment by Andrew McKnight (andrew-mcknight) on Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · 2021-11-17T21:11:23.835Z · LW · GW

Thanks. This is great! I hadn't thought of Embedded Agency as an attempt to understand optimization. I thought it was an attempt to ground optimizers in a formalism that wouldn't behave wildly once they had to start interacting with themselves. But on second thought it makes sense to consider an optimizer that can't handle interacting with itself to be a broken or limited optimizer.

Comment by Andrew McKnight (andrew-mcknight) on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-16T23:32:44.319Z · LW · GW

No one has yet solved "and then stop" for AGI, even though this should be easier than a generic stop button, which in turn should be easier than full corrigibility. (Also, I don't think we know how to refer to things in the world in a way that gets an AI to care about the things themselves, rather than about observations of them or its representations of them.)

Comment by Andrew McKnight (andrew-mcknight) on Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · 2021-11-16T22:31:15.972Z · LW · GW

the ways in which solving AF would likely be useful

Other than the rocket alignment analogy and the general case for deconfusion helping, has anyone ever tried to describe with more concrete (though speculative) detail how AF would help with alignment? I'm not saying it wouldn't. I just literally want to know if anyone has tried explaining this concretely. I've been following for a decade but don't think I ever saw an attempted explanation.