Posts
Comments
Good post! We will soon have very powerful quantum computers that could probably simulate what happens when mirror bacteria are confronted with the human immune system. Maybe there is no risk at all, or maybe the risk is existential for humanity. Finding this out should be a prioritized task for our first powerful quantum computer.
Because he says so.
I’m not allowed to vote in the election but I hope Trump wins because I think he will negotiate a peace in Ukraine. If Harris wins I think the war will drag on for another couple of years at worst.
I have no problem getting pushback.
I guess it could be a great tool to help people quickly learn to converse in a foreign language.
The ”eternal recurrence” is surprisingly the most attractive picture of the ”afterlife”. The alternatives: the annihilation of the self or eternal life in heaven are both unattractive, for different reasons. Add to this that Nietzsche is right to say that the eternal recurrence is a view of the world which seems compatible with a scientific cosmology.
I remember I came up with a similar thought experiment to explain the Categorical Imperative.
Assume there is only one Self-Driving Car on the market, what principle would you want it to follow?
The first principle we think of is: ”Always do what the driver would want you to do”.
This would certainly be the principle we would want if our SDC was the only car on the road. But there are other SDCs and so in a way we are choosing a principle for our own car which is also at the same time a ”universal law”, valid for every car on the road.
With this in mind, it is easy to show that the principle we could rationally want is: ”Always act on that principle which the driver can rationally will to become a universal law”.
Coincidentally, this is also Kant’s Categorical Imperative.
Yes, but the claim that ”generative AI can potentially replace millions of jobs” does not contradict the statement that it eventually ”may turn out to be a dud”.
I initially reacted in the same way as you to the exact same passage but came to the conclusion that it was not illogical. Maybe I’m wrong but I don’t think so.
I think the author meant that there was a perception that it could replace millions of jobs, and so an incentive for businesses to press forward with their implementation plans, but that this would eventually backfire if the hallucination problem is insoluble.
I am a Kantian and believe that those a priori rules have already been discovered.
But my point here was merely that you can isolate the part that belongs to pure ethics from everything empirical: in my example, what a library is; why people go to libraries; what a microphone is and what its purpose is; and so on. What makes an action right or wrong at the most fundamental level, however, is independent of everything empirical and is simply an a priori rule.
I guess my broader point was also that Stephen Wolfram is far too pessimistic about the prospects of making a moral AI. A future AI may soon have a greater understanding of the world and the people in it, and so all we have to do is provide the right a priori rule and we will be fine.
Of course, the technical issue still remains: how do we make the AI stick to that rule? But that is not an ethical problem; it is an engineering problem.
You can’t isolate individual ”atoms” in ethics, according to Wolfram. Let’s put that to the test. Tell me if the following ”ethical atoms” are right or wrong:
1. I will speak in a loud voice
2. …on a Monday
3. …in a public library
4. …where I’ve been invited to speak about my new book and I don’t have a microphone.
Now, (1) seems morally permissible, and (2) doesn’t change the evaluation. (3) makes my action seem morally impermissible, but (4) turns it around again. I’m convinced all of this was obvious to everyone.
Ethics is the science of the a priori rules that make these judgments so easy for us, or at least that was Kant’s view, which I share. It should be possible to make an AI do this calculation even faster than we do; all we have to do is provide the AI with the right a priori rules. Once that is done, the rest is just empirical knowledge about libraries and human beings, and we will eventually have a moral AI.
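To make the idea concrete, here is a toy sketch (my own illustration, not Kant’s formalism and not anything Wolfram proposes) of how the four ”ethical atoms” above could be evaluated mechanically: a base rule plus contextual defeaters. Every predicate and rule here is a hypothetical stand-in for a real a priori principle.

```python
# Toy sketch: evaluate the "ethical atoms" by combining a base action
# with contextual facts. All rules here are hypothetical illustrations.

def permissible(action, context):
    """Return True if the action is morally permissible in the given context."""
    if action != "speak loudly":
        return True  # only one toy action is modeled here
    # Contextual defeater: loud speech in a public library is impermissible...
    if "public library" in context:
        # ...unless one has been invited to speak and has no microphone.
        if "invited to speak" in context and "no microphone" in context:
            return True
        return False
    return True  # irrelevant context (e.g. the day of the week) changes nothing

# The four "atoms" from the comment:
print(permissible("speak loudly", set()))                                  # (1) True
print(permissible("speak loudly", {"monday"}))                             # (2) True
print(permissible("speak loudly", {"monday", "public library"}))           # (3) False
print(permissible("speak loudly", {"monday", "public library",
                                   "invited to speak", "no microphone"}))  # (4) True
```

The point of the sketch is only that, once the rules are fixed, the contextual flips in (3) and (4) are a trivial computation; the hard part is supplying the right rules in the first place.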
I think that was the point. Comedians of the future will be performers. They will not write their own jokes but be increasingly good at reading the lines written by AI.
When ChatGPT came out, I asked it to write a Seinfeld episode about taking an IQ test. In my judgment it was just as good and funny as any other Seinfeld episode I’ve watched: mildly amusing, and entertaining enough not to stop reading.
This answer is a little bit confusing to me. You say that ”agency” may be an important concept even if we don’t have a deep understanding of what it entails. But how about a simple understanding?
I thought that when people spoke about ”agency” and AI, they meant something like ”a capacity to set their own final goals”, but then you claim that Stockfish could best be understood by using the concept of ”agency”. I don’t see how.
I myself kind of agree with the sentiment in the original post that ”agency” is a superfluous concept, but I want to understand the opposite view.
You seem to hold the position that:
- Scientists and not philosophers should do meta-ethics and normative ethics, until
- AGIs can do it better at which point we should leave it to them.
I don’t believe that scientists have either the inclination or the competence to do what you ask of them, and secondly, letting AGIs decide right and wrong would be a nightmare scenario for the human race.
”Whom the gods would destroy, they first make mad.”
One way to kill everyone would be to turn the planet into a Peoples Temple and have everyone drink the Kool-Aid.
I think even those of us who think of ourselves as psychologically robust will be defenseless against the manipulations of a future GPT-9.
I thought that was what was meant. The question is probably the easiest one to answer affirmatively with a high degree of confidence. I can think of several ongoing ”moral catastrophes”.
Very good, fundamental questions. I don’t understand question 85, though. Here are two more good questions:
- Are human beings aligned?
- Is human alignment, insofar as it exists, a property of the goals we have when we act or a property of the actions themselves?
I don’t think the response to Covid should give us reason to be optimistic about our effectiveness at dealing with the threat from AI. Quite the opposite. Many of the measures taken were known to be useless from the start, like masks, while others were ineffective or harmful, like shutting down schools or giving vaccines to young people who were not at risk of dying from Covid.
Everything can be explained by our politicians’ incentives: they want to be seen to take important questions seriously while not upsetting their donors in the pharma industry.
I can easily imagine something similar happening if voters become concerned about AI: some ineffective legislation dictated by Big Tech.
It is somewhat alarming that many participants here appear to accept the notion that we should cede political decision-making to an AGI. I had assumed it was a widely held view that such a course of action should be avoided, yet it appears that I may be in the minority.
Someone used the metaphor of Plato’s cave to describe LLMs: the LLM is sitting in cave 2, unable to see the shadows on the wall, able only to hear the voices of the people in cave 1 talking about the shadows.
The problem is that we people in cave 1 are not only talking about the shadows but also telling fictional stories, and it is very difficult for someone in cave 2 to know the difference between fiction and reality.
If we want to give a future AGI the responsibility to make important decisions, I think it must occupy a space in cave 1 rather than just being a statistical word predictor in cave 2. It must be more like us.
Running simulations of other people’s preferences is what is usually called ”empathy”, so I will use that word here.
Empathy for someone, or an intuition about what they feel, is a motivational force to do good in most humans, but it can also be used to get better at deceiving and taking advantage of others. Perhaps high-functioning psychopaths work in this way.
Building an AI that knows what we think and feel, but that lacks moral motivation, would just lead to a world of superintelligent psychopaths.
P.s. I see now that kibber is making the exact same point.
Consciousness is a red herring. We don’t even know if human beings are conscious. You may have a strong belief that you are yourself a conscious being, but how can you know if other people are conscious? Do you have a way to test if other people are conscious?
A superintelligent, misaligned AI poses an existential risk to humanity quite independently of whether it is conscious. Consciousness is an interesting philosophical topic, but it has no relevance to anything in the real world.