A couple of questions about Conjecture's Cognitive Emulation proposal

post by Igor Ivanov (igor-ivanov) · 2023-04-11T14:05:58.503Z · LW · GW · 1 comment

Contents

  Introduction
  Questions
    Is it even possible to create a perfect CoEm?
    Do you really believe that CoEms will not fundamentally disrupt society?
  Conclusion


Introduction

Recently, Conjecture proposed a modular AI architecture in which each module is interpretable, has limited intelligence, and functionally emulates the human mind. They called this concept Cognitive Emulation [LW · GW], or CoEm.

Conjecture's post was concise and did not contain technical details or plans, so people asked many questions about the proposal in the comments. Later, a group of authors published a post with a set of questions about CoEm [LW · GW].

I have two questions that, to my knowledge, have not been asked about this proposal.

They might be naive or already discussed elsewhere, but since they are not entirely obvious, and no one has asked them in the context of CoEm, I believe this post has the potential to start a valuable discussion.

 

Questions

Is it even possible to create a perfect CoEm?

A quote from the original post:

We want systems that are as safe as humans, for the same reasons that humans have (or don’t have) those safety properties. Any scheme that involves building systems that involves humans should allow you to swap those humans for CoEms without breaking or drastically altering their behavior.

As far as I understand, Conjecture plans to create AI agents that function so similarly to humans that they are interchangeable with humans in social systems.

But what if a human needs hours to write a text while an AI needs seconds? Is that still human-level capability? What if an AI can rapidly learn a large corpus of scientific knowledge? What if an AI can answer any email instantly, unlike busy people who might take a week?

I don't know how CoEm will be implemented, but it is highly likely that it will not need hours to write a text or a week to answer an email.

In the past, such jumps in human capabilities resulted in drastic changes in the world. For example, it is believed that the adoption of radio, which allowed propagandists to speak not to tens or hundreds of people but to millions, was one of the causes of the rise of totalitarian regimes in the 20th century. In the case of CoEm, even if it is as close to a perfect human emulator as possible, such capability changes are inevitable, and their consequences will be very hard to predict.

 

Do you really believe that CoEms will not fundamentally disrupt society?

A quote from the original post:

We have a lot of experience and knowledge of building systems that are broadly beneficial and safe, while operating in the human capabilities regime.

The problem of why e.g. normal laws and regulations will not work for AGI is that we have no way of ensuring that the AGI that gets built will obey the capabilities constraints that are implicitly assumed in our social and legal mechanism design.

I can speculate (and I might be wrong) that the authors intended to say that if CoEms are interchangeable with humans, we can continue to use our existing social institutions with some adjustments, and we will not need to rebuild everything from the ground up.

If my understanding is correct, then I think this statement is overly optimistic. Even though some intent and thought went into designing social institutions, our social systems are largely the product of millennia of trial and error and the survival of the most stable arrangements. This is similar to biological evolution: just as the products of evolution hardly know why they work the way they do, our societies are extremely complex in non-obvious ways, and their stability rests on countless implicit human peculiarities and limitations that we are unaware of and that CoEms will inevitably distort.

The following example is surely wrong in its details, but I believe it illustrates my point well:

For example, in authoritarian countries, many people want to replace the autocrat, but they cannot do it legally because of oppressive laws and rigged elections. In theory, if a large enough number of people start protesting at the same time, the police will be overwhelmed and unable to control the situation, and the protesters will have a real chance to remove the ruler.

In reality, this happens rarely, because a large protest has to start as a small protest that grows over time, and small protests are usually easy to suppress. People know that it is hard to grow a large protest from scratch and are usually reluctant to participate in protests in the first place. Part of this is a communication problem: people need time to communicate and cooperate before a protest can grow.

But what if CoEms communicate much faster and, as a result, can plan and coordinate more rapidly and efficiently than human beings? This might make authoritarian regimes much less stable.
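To make this coordination intuition concrete, here is a toy simulation of a standard threshold model of collective action, in the spirit of Granovetter's classic model. This is purely my own illustrative sketch, not anything from Conjecture's proposal: the `visibility` parameter is a hypothetical stand-in for communication speed, i.e. what fraction of the true protest size each agent actually learns about before deciding whether to join.

```python
import random

def protest_cascade(thresholds, visibility):
    """Run a threshold cascade to a fixed point.

    Each agent joins the protest once the fraction of protesters *visible*
    to them exceeds their personal threshold. `visibility` crudely models
    communication speed: the share of the true protest size that agents
    actually perceive each round.
    """
    n = len(thresholds)
    protesting = [t <= 0.0 for t in thresholds]  # zero-threshold initiators start
    while True:
        seen = visibility * sum(protesting) / n
        updated = [joined or t <= seen
                   for joined, t in zip(protesting, thresholds)]
        if updated == protesting:  # no one new joined: fixed point reached
            return sum(protesting) / n
        protesting = updated

random.seed(0)
thresholds = [random.uniform(0.0, 0.25) for _ in range(10_000)]
thresholds[:10] = [0.0] * 10  # a handful of unconditional initiators

print("slow communication:", protest_cascade(thresholds, visibility=0.2))
print("fast communication:", protest_cascade(thresholds, visibility=1.0))
```

With these (arbitrary) numbers, slow communication leaves the protest stuck at a fraction of a percent of the population, while fast communication produces a full cascade from the same ten initiators. The model is a caricature, but it shows how a quantitative change in communication speed alone can flip a qualitative social outcome.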

This particular effect might even be good, but my point is that our institutions implicitly assume that people need time to coordinate. If CoEm coordination becomes much faster, many of our intuitions about societies may become useless, or even create a false sense of understanding.

In some sense, this is not an argument against CoEm, and there is no way to give a definitive answer before CoEms are built and tested in the real world.

Another important note is that any AGI will severely reshape society, and CoEms might reshape it more mildly than the alternatives.

 

Conclusion

Personally, I believe that the Cognitive Emulation proposal has the potential to solve some of the problems we expect from AGI [LW · GW], but I also believe that it is important not to have a false sense of security, and that respectful criticism is essential.

1 comment


comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-04-11T15:55:25.514Z · LW(p) · GW(p)

Yes, I think we have to face the fact that a significant amount of social disruption due to technological progress is on the horizon. I do think CogEms are potentially a good solution to making a digital police force that can detect and halt the rise of rogue AGI. If we can find a way to do that, then we'll get our "long reflection", our chance to work on better solutions to the alignment problem over decades rather than just a few years.