Posts

“Reframing Superintelligence” + LLMs + 4 years 2023-07-10T13:42:09.739Z
Role Architectures: Applying LLMs to consequential tasks 2023-03-30T15:00:28.928Z
The Open Agency Model 2023-02-22T10:35:12.316Z
Applying superintelligence without collusion 2022-11-08T18:08:31.733Z
QNR prospects are important for AI alignment research 2022-02-03T15:20:53.826Z

Comments

Comment by Eric Drexler on Role Architectures: Applying LLMs to consequential tasks · 2023-03-30T23:37:35.531Z · LW · GW

I agree that using the forms of human language does not ensure interpretability by humans, and I also see strong advantages to communication modalities that would discard words in favor of more expressive embeddings. It is reasonable to expect that systems with strong learning capacity could interpret and explain messages between other systems, whether those messages are encoded in words or in vectors. However, although this kind of interpretability seems worth pursuing, it seems unwise to rely on it.

The open-agency perspective suggests that while interpretability is important for proposals, it is less important in understanding the processes that develop those proposals. There is a strong case for accepting potentially uninterpretable communications among models involved in generating proposals and testing them against predictive models — natural language is insufficient for design and analysis even among humans and their conventional software tools.

Plans of action, by contrast, call for concrete actions by agents, ensuring a basic form of interpretability. Evaluation processes can and should favor proposals that are accompanied by clear explanations that stand up under scrutiny.
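
To make this division concrete, here is a minimal sketch of one round of such a process; the Proposal fields and the proposer, predictor, and evaluator interfaces are illustrative assumptions on my part, not part of the open-agency proposal itself:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    plan: list[str]    # concrete actions for agents to carry out (interpretable)
    explanation: str   # human-readable rationale accompanying the plan

def open_agency_round(task, proposers, predictor, evaluators, k=3):
    """One round: generate proposals (possibly via opaque model-to-model
    communication), score expected outcomes with a predictive model, and keep
    only proposals whose plans and explanations stand up under scrutiny."""
    proposals = [p.propose(task) for p in proposers]   # opaque process, interpretable product
    scored = [(predictor.predict(pr.plan), pr) for pr in proposals]
    vetted = [(score, pr) for score, pr in scored
              if all(e.scrutinize(pr.plan, pr.explanation) for e in evaluators)]
    return sorted(vetted, key=lambda sp: sp[0], reverse=True)[:k]
```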

Comment by Eric Drexler on You're not a simulation, 'cause you're hallucinating · 2023-02-22T16:56:09.413Z · LW · GW

This makes sense, but LLMs can be described in multiple ways. From one perspective, as you say,

ChatGPT is not running a simulation; it's answering a question in the style that it's seen thousands - or millions - of times before.

From a different perspective (as you very nearly say), ChatGPT is simulating a skewed sample of people-and-situations in which the people actually do have answers to the question.

The contents of hallucinations are hard to understand as anything but token prediction, and by a system that (seemingly?) has no introspective knowledge of its knowledge. The model’s degree of confidence in a next-token prediction would be a poor indicator of degree of factuality: The choice of token might be uncertain, not because the answer itself is uncertain, but because there are many ways to express the same, correct, high-confidence answer. (ChatGPT agrees with this, and seems confident.)
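
A toy illustration of that last point, with invented numbers: answer-level confidence is spread across phrasings, so the top next-token probability understates how certain the model is about the underlying fact.

```python
# Invented next-token distribution after the prompt "Q: Who wrote Hamlet? A:".
# The model is confident about the fact, but that confidence is split across
# several phrasings of the same correct answer.
next_token_probs = {
    " Shakespeare": 0.35,   # "Shakespeare."
    " William":     0.30,   # "William Shakespeare."
    " The":         0.15,   # "The play was written by Shakespeare."
    " It":          0.10,   # "It was written by Shakespeare."
    " Francis":     0.10,   # a genuinely wrong continuation
}
correct_openings = {" Shakespeare", " William", " The", " It"}

top_token_confidence = max(next_token_probs.values())          # 0.35: looks uncertain
answer_confidence = sum(p for t, p in next_token_probs.items()
                        if t in correct_openings)              # 0.90: actually confident

print(f"top token: {top_token_confidence:.2f}, answer: {answer_confidence:.2f}")
```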

Comment by Eric Drexler on Applying superintelligence without collusion · 2023-02-22T10:52:10.651Z · LW · GW

Are you arguing that increasing the number of (AI) actors cannot make collusive cooperation more difficult? Even in the human case, defectors make large conspiracies more difficult, and in the non-human case, intentional diversity can almost guarantee failures of AI-to-AI alignment.

Comment by Eric Drexler on Applying superintelligence without collusion · 2022-12-21T15:31:40.184Z · LW · GW

What you describe is a form of spiral causation (only seeded by claims of impossibility), which surely contributes strongly to what we’ve seen in the evolution of ideas. Fortunately, interest in the composite-system solution space (in particular, the open agency model) is now growing, so I think we’re past the worst of the era of unitary-agent social proof.

Comment by Eric Drexler on An Open Agency Architecture for Safe Transformative AI · 2022-12-21T15:20:13.701Z · LW · GW

Nate [replying to Eric Drexler]: I expect that, if you try to split these systems into services, then you either fail to capture the heart of intelligence and your siloed AIs are irrelevant, or you wind up with enough AGI in one of your siloes that you have a whole alignment problem (hard parts and all) in there….

GPT-Nate is confusing the features of the AI services model with the argument that “Collusion among superintelligent oracles can readily be avoided”. As it says on the tin, there’s no assumption that intelligence must be limited. It is, instead, an argument that collusion among (super)intelligent systems is fragile under conditions that are quite natural to implement.

Comment by Eric Drexler on Applying superintelligence without collusion · 2022-11-09T08:43:10.853Z · LW · GW

The question then becomes, as always, what it is you plan to do with these weak AGI systems that will flip the tables strongly enough to prevent the world from being destroyed by stronger AGI systems six months later.

Yes, this is the key question, and I think there’s a clear answer, at least in outline:

What you call “weak” systems can nonetheless excel at time- and resource-bounded tasks in engineering design, strategic planning, and red-team/blue-team testing. I would recommend that we apply systems with focused capabilities along these lines to help us develop and deploy the physical basis for a defensively stable world — as you know, some extraordinarily capable technologies could be developed and deployed quite rapidly. In this scenario, defense has first move, can preemptively marshal arbitrarily large physical resources, and can restrict resources available to potential future adversaries. I would recommend investing resources in state-of-the-art hostile planning to support ongoing red-team/blue-team exercises.

This isn’t “flipping the table”, it’s reinforcing the table and bolting it to the floor. What you call “strong” systems then can plan whatever they want, but with limited effect.

Comment by Eric Drexler on Applying superintelligence without collusion · 2022-11-09T02:39:43.330Z · LW · GW

Mind space is very wide

Yes, and the space of (what I would call) intelligent systems is far wider than the space of (what I would call) minds. To speak of “superintelligences” suggests that intelligence is a thing, like a mind, rather than a property, like prediction or problem-solving capacity. This is why I instead speak of the broader class of systems that perform tasks “at a superintelligent level”. We have different ontologies, and I suggest that a mind-centric ontology is too narrow.

The most AGI-like systems we have today are LLMs, optimized for a simple prediction task. They can be viewed as simulators, but they have a peculiar relationship to agency:

The simulation objective

A simulator trained with machine learning is optimized to accurately model its training distribution – in contrast to, for instance, maximizing the output of a reward function or accomplishing objectives in an environment.… Optimizing toward the simulation objective notably does not incentivize instrumentally convergent behaviors the way that reward functions which evaluate trajectories do.

LLMs have rich knowledge and capabilities, and can even simulate agents, yet they have no natural place in an agent-centric ontology. There’s an update to be had here (new information! fresh perspectives!) and much to reconsider.

Comment by Eric Drexler on Applying superintelligence without collusion · 2022-11-09T00:21:39.769Z · LW · GW

So I think useful arrangements of diverse/competing/flawed systems is a hope in many contexts. It often doesn't work, so looks neglected, but not for want of trying.

What does and doesn’t work will depend greatly on capabilities, and the problem-context here assumes potentially superintelligent-level AI.

Comment by Eric Drexler on Applying superintelligence without collusion · 2022-11-08T23:17:20.264Z · LW · GW

These are good points, and I agree with pretty much all of them. 

Instances in a bureaucracy can be very different and play different roles or pursue different purposes. They might be defined by different prompts and behave as differently as text continuations of different prompts in GPT-3

I think that this is an important idea. Through simulators analogous to GPT-3, it may be possible to develop strong, almost-provably-non-agentic intelligent resources, then prompt them to simulate diverse, transient agents on the fly. From the perspective of building multicomponent architectures this seems like a strange and potentially powerful tool.
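
As a minimal sketch of that pattern, assuming only a hypothetical complete(prompt) call on a GPT-3-like simulator (the role prompts and helper names are illustrative, not drawn from the comment):

```python
def complete(prompt: str) -> str:
    """Stand-in for a call to a GPT-3-like simulator; not a real API."""
    raise NotImplementedError

ROLE_PROMPTS = {
    "proposer": "You are a cautious engineer. Draft a plan for: {task}",
    "critic":   "You are a skeptical reviewer. List flaws in this plan:\n{plan}",
}

def transient_agent(role: str, **kwargs) -> str:
    # Each call instantiates a short-lived "agent" as a continuation of a role
    # prompt; no persistent state or goals exist outside the base simulator.
    return complete(ROLE_PROMPTS[role].format(**kwargs))

def propose_and_critique(task: str):
    plan = transient_agent("proposer", task=task)
    critique = transient_agent("critic", plan=plan)
    return plan, critique
```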

Regarding interpretability, tasks that require communication among distinct AI components will tend to expose information, and manipulating “shared backgrounds” between information sources and consumers could potentially be exploited to make that information more interpretable. (How one might train against steganography is an interesting question.)

Comment by Eric Drexler on Applying superintelligence without collusion · 2022-11-08T21:49:38.700Z · LW · GW

I don’t see that as an argument [to narrow this a bit: not an argument relevant to what I propose]. As I noted above, Paul Christiano asks for explicit assumptions.

To quote Paul again:

I think that concerns about collusion are relatively widespread amongst the minority of people most interested in AI control. And these concerns have in fact led to people dismissing many otherwise-promising approaches to AI control, so it is de facto an important question

Dismissing promising approaches calls for something like a theorem, not handwaving about generic “smart entities”.

[Perhaps too-pointed remark deleted]

Comment by Eric Drexler on Applying superintelligence without collusion · 2022-11-08T21:32:08.728Z · LW · GW

HCH-like amplification seems related, multiple unreliable agent instances assembled into a bureaucracy that as a whole improves on some of their qualities

This approach to amplification involves multiple instances, but not diverse systems, competing systems, different roles, adversarial relationships, or a concern with collusion. It is, as you say, a training story for a single AI. Am I missing a stronger connection?

Comment by Eric Drexler on Applying superintelligence without collusion · 2022-11-08T21:22:59.153Z · LW · GW

Yes, we need to weigh the considerations you mention, and many others.

I think copy-collusion is reasonably likely, and we don't currently know of a way to prevent it.

Copy-collusion must be considered carefully, and its implications will depend on both potential contexts for using copies (sometimes problematic, sometimes OK) and on potential alternatives to using copies.

Copies will always “collude” in the trivial sense of giving identical, potentially deceptive responses to identical inputs, but might also offer potentially deceptive, mutually-supporting responses to differing inputs. For some purposes, a multicomponent architecture must treat multiple copies as a single source of information. (Note that solving the alignment problem is often taken to require that we be able to trust a single copy, or a system that consists of nothing but copies of a single entity — this seems strictly riskier than anything I’ve suggested.)

Some constraints can be effective even among copies: Constrained communication can be applied to preclude coordination that requires new, shared knowledge. Lack of access to a shared clock can be applied to preclude coordination of simultaneous or cleverly sequenced actions. Systems that audit alternative behaviors by re-running processes from checkpoints can be applied to offline copies, where adaptive, multi-trial probing (by what may be highly intelligent systems!) can likely be leveraged to make some kinds of deceptions unwise.
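
A sketch of the third mechanism, auditing offline copies by replay from a checkpoint; the agent and checkpoint interfaces here are assumptions for illustration, not a proposed implementation:

```python
import copy
import random

def audit_by_replay(agent_checkpoint, base_input, perturb, trials=20):
    """Re-run offline copies from the same checkpoint under perturbed inputs.
    Because each copy starts fresh and cannot know how many trials have run,
    inconsistencies across near-identical situations can expose deception."""
    transcripts = []
    for _ in range(trials):
        agent = copy.deepcopy(agent_checkpoint)        # fresh copy, no memory of prior trials
        probe = perturb(base_input, random.random())   # adaptively varied query
        transcripts.append((probe, agent.respond(probe)))
    return transcripts  # handed to a (possibly highly capable) consistency checker
```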

Of course a central idea is that many roles call for systems that are not copies: proposers vs. critics, designers vs. testers, actors vs. auditors, diverse members of ensembles, and so on.

we need to make choices about which approaches are most promising, which to pursue, etc.

Before asking whether one approach is superior to another, we should ask whether they are competitors or complements. Multicomponent approaches for making non-collusion robust seem quite complementary to strategies for making individual systems more trustworthy.

Regarding criteria for investment, neglectedness is of course a huge consideration.

Comment by Eric Drexler on Applying superintelligence without collusion · 2022-11-08T18:51:09.697Z · LW · GW

From the Afterword: “Note that I have presented a there-exists argument against a for-all proposition. Responses of the form “But I can think of something similar that won’t work” don’t count as counterarguments.” [This has been edited in the main text to sharpen the phrasing.]

B2: Copies might collude with one another, provided they have a broad decision theory: ∃(some sets of models) among which some mechanisms for impeding collusion might not work.

B4: A lot of models may be deployed over the internet: ∃(some communication channels) for which one of 6 mechanisms for impeding collusion might not work (given flawed security).

B6: LLMs will share a lot of knowledge: ∃(high performance models for some tasks) for which one of 6 mechanisms for impeding collusion doesn’t contribute to safety.

C1: Large models tend to beat compositions of small models for some tasks: ∃(some tasks) where some models should be large, and narrow knowledge won’t contribute to impeding collusion.

Not too worrisome, I think, but these are all considerations important to building robust multicomponent systems.

BTW, regarding the (non-)goal oriented nature of LLMs, I highly recommend the Simulators post by Janus.

Comment by Eric Drexler on Intuitions about solving hard problems · 2022-08-02T00:07:12.471Z · LW · GW

I’d like to promote a norm for proposals for alignment techniques to be very explicit about where the hard work is done, i.e. which part is surprising or insightful or novel enough to make us think that it could solve alignment even in worlds where that’s quite difficult.

Alignment is, by nature, an engineering task, not a scientific task: It is an attempt to make something, not to understand some existing thing. It may be that, as you suggest, “solving hard scientific problems usually requires compelling insights”, but this is beside the point. Spaceflight was a hard problem, but was solved without a special, compelling insight. Likewise for the progress of computation from vacuum tubes to nanoscale electronics. Both are in the domain of engineering, where problems are typically solved by improving and composing many components. Asking “which part solves the hard problem” would be a mistake.

Regarding the CAIS model, you suggest that it “dramatically underrates the importance of general intelligence”, yet I have argued that the comprehensive AI services model (including the service of developing new services) is a way of thinking about implementations of general intelligence, not a substitute for it! 

The capabilities of large language models should update our expectations, but do not persuade me that knowledge and skills of societal scale and diversity must or will be embodied in an undifferentiated blob of computation.

By the way, I haven’t suggested the CAIS model as a solution to alignment problems; instead of proposing a solution, it suggests that alignment problems are likely to arise (and perhaps be solved) in a context different from what has often been assumed. Some problems seem more tractable in that context, others less.

Comment by Eric Drexler on QNR prospects are important for AI alignment research · 2022-02-04T11:18:19.580Z · LW · GW

Although I don’t understand what you mean by “conservation of computation”, the distribution of computation, information sources, learning, and representation capacity is important in shaping how and where knowledge is represented.

The idea that general AI capabilities can best be implemented or modeled as “an agent” (an “it” that uses “the search algorithm”) is, I think, both traditional and misguided. A host of tasks require agentic action-in-the-world, but those tasks are diverse and will be performed and learned in parallel (see the CAIS report, www.fhi.ox.ac.uk/reframing). Skill in driving somewhat overlaps with — yet greatly differs from — skill in housecleaning or factory management; learning any of these does not provide deep, state-of-the-art knowledge of quantum physics, and can benefit from (but is not a good way to learn) conversational skills that draw on broad human knowledge.

A well-developed QNR store should be thought of as a body of knowledge that potentially approximates the whole of human and AI-learned knowledge, as well as representations of rules/programs/skills/planning strategies for a host of tasks. The architecture of multi-agent systems can provide individual agents with resources that are sufficient for the tasks they perform, but not orders of magnitude more than necessary, shaping how and where knowledge is represented. Difficult problems can be delegated to low-latency AI cloud services.

There is no “it” in this story, and classic, unitary AI agents don’t seem competitive as service providers — which is to say, don’t seem useful.

I’ve noted the value of potentially opaque neural representations (Transformers, convnets, etc.) in agents that must act skillfully, converse fluently, and so on, but operationalized, localized, task-relevant knowledge and skills complement rather than replace knowledge that is accessible by associative memory over a large, shared store.
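
As an illustrative sketch (not a specification), an individual agent in such an architecture might combine narrow local skills, associative lookup over the shared store, and delegation; all names and interfaces below are hypothetical:

```python
class TaskAgent:
    """An agent provisioned with only the skills its task requires; broader
    knowledge lives in a shared QNR-like store and in delegable services."""

    def __init__(self, local_skills, shared_store, cloud_service):
        self.local_skills = local_skills   # operationalized, task-specific skills
        self.store = shared_store          # large shared store, associative (embedding) lookup
        self.cloud = cloud_service         # delegate problems beyond local capacity

    def handle(self, task):
        if task.kind in self.local_skills:
            return self.local_skills[task.kind](task)
        matches = self.store.lookup(task.embedding, k=5)   # associative retrieval
        if matches and matches[0].covers(task):
            return matches[0].apply(task)
        return self.cloud.delegate(task)
```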

Comment by Eric Drexler on What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) · 2021-05-05T21:30:49.307Z · LW · GW

How exactly does the terminal goal of benefiting shareholders disappear[…]

But does this terminal goal exist today? The proper (and to some extent actual) goal of firms is widely considered to be maximizing share value, but this is manifestly not the same as maximizing shareholder value — or even benefiting shareholders. For example:

  • I hold shares in Company A, which maximizes its share value through actions that poison me or the society I live in. My shares gain value, but I suffer net harm.
  • Company A increases its value by locking its customers into a dependency relationship, then exploits that relationship. I hold shares, but am also a customer, and suffer net harm.
  • I hold shares in A, but also in competing Company B. Company A gains incremental value by destroying B, my shares in B become worthless, and the value of my stock portfolio decreases. Note that diversified portfolios will typically include holdings of competing firms, each of which takes no account of the value of the other.

Equating share value with shareholder value is obviously wrong (even when considering only share value!) and is potentially lethal. This conceptual error both encourages complacency regarding the alignment of corporate behavior with human interests and undercuts efforts to improve that alignment.