Comments

Comment by MattJ on Any Trump Supporters Want to Dialogue? · 2024-09-28T20:39:42.214Z · LW · GW

Because he says so.

Comment by MattJ on Any Trump Supporters Want to Dialogue? · 2024-09-28T20:35:51.554Z · LW · GW

I’m not allowed to vote in the election, but I hope Trump wins because I think he will negotiate a peace in Ukraine. If Harris wins, I think the war will drag on for another couple of years at worst.

I have no problem getting pushback.

Comment by MattJ on The Next ChatGPT Moment: AI Avatars · 2024-01-06T16:51:57.880Z · LW · GW

I guess it could be a great tool to help people quickly learn to converse in a foreign language.

Comment by MattJ on Nietzsche's Morality in Plain English · 2023-12-04T19:54:43.078Z · LW · GW

The “eternal recurrence” is surprisingly the most attractive picture of the “afterlife”. The alternatives, annihilation of the self or eternal life in heaven, are both unattractive, for different reasons. Add to this that Nietzsche is right to say that the eternal recurrence is a view of the world that seems compatible with a scientific cosmology.

Comment by MattJ on Immanuel Kant and the Decision Theory App Store · 2023-08-31T11:03:12.956Z · LW · GW

I remember I came up with a similar thought experiment to explain the Categorical Imperative.

Assume there is only one Self-Driving Car (SDC) on the market: what principle would you want it to follow?

The first principle we think of is: “Always do what the driver would want you to do”.

This would certainly be the principle we would want if our SDC were the only car on the road. But there are other SDCs, and so in choosing a principle for our own car we are at the same time choosing a “universal law”, valid for every car on the road.

With this in mind, it is easy to show that the principle we could rationally want is: “Always act on that principle which the driver can rationally will to become a universal law”.

Coincidentally, this is also Kant’s Categorical Imperative.
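To make the structure of the argument concrete, here is a minimal toy sketch (the policy names and the intersection scenario are entirely hypothetical, not any real SDC system) of why the non-universalizable principle defeats itself once every car runs it:

```python
# Toy model of the argument above: whatever policy we choose for "our"
# self-driving car is the policy every car runs, so choosing a principle
# is legislating a universal law.

def my_driver_first(car, on_my_right):
    """Always do what this car's driver wants: go immediately."""
    return "go"

def universalizable(car, on_my_right):
    """A rule a driver can will as universal law: yield to the car on the right."""
    return "yield" if on_my_right[car] else "go"

def outcome(policy):
    # Two cars meet at an intersection. Both run the same policy,
    # because there is only one SDC on the market.
    on_my_right = {"A": True, "B": False}  # car B is on A's right
    actions = {car: policy(car, on_my_right) for car in ("A", "B")}
    return "collision" if set(actions.values()) == {"go"} else actions

print(outcome(my_driver_first))  # 'collision': the principle defeats itself
print(outcome(universalizable))  # {'A': 'yield', 'B': 'go'}: everyone gets through
```

The first policy looks fine for a single car but produces a collision as soon as it is universalized; the second survives universalization, which is exactly the Categorical Imperative’s test.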

Comment by MattJ on Is this the beginning of the end for LLMS [as the royal road to AGI, whatever that is]? · 2023-08-24T16:41:01.629Z · LW · GW

Yes, but the claim that generative AI “can potentially replace millions of jobs” does not contradict the statement that it eventually “may turn out to be a dud”.

I initially reacted in the same way as you to the exact same passage but came to the conclusion that it was not illogical. Maybe I’m wrong but I don’t think so.

Comment by MattJ on Is this the beginning of the end for LLMS [as the royal road to AGI, whatever that is]? · 2023-08-24T15:51:39.664Z · LW · GW

I think the author meant that there was a perception that it could replace millions of jobs, and therefore an incentive for businesses to press forward with their implementation plans, but that this would eventually backfire if the hallucination problem is insoluble.

Comment by MattJ on Steven Wolfram on AI Alignment · 2023-08-24T11:05:08.179Z · LW · GW

I am a Kantian and believe that those a priori rules have already been discovered.

But my point here was merely that you can isolate the part that belongs to pure ethics from everything empirical, like in my example: what a library is, why people go to libraries, what a microphone is and what its purpose is, and so on. What makes an action right or wrong at the most fundamental level, however, is independent of everything empirical and is simply an a priori rule.

I guess my broader point was that Stephen Wolfram is far too pessimistic about the prospects of making a moral AI. A future AI may soon have a greater understanding of the world and the people in it, and so all we have to do is provide the right a priori rule and we will be fine.

Of course, the technical issue still remains of how to make the AI stick to that rule, but that is not an ethical problem; it is an engineering problem.

Comment by MattJ on Steven Wolfram on AI Alignment · 2023-08-23T14:08:06.789Z · LW · GW

You can’t isolate individual “atoms” in ethics, according to Wolfram. Let’s put that to the test. Tell me whether the following “ethical atoms” are right or wrong:

  1. I will speak in a loud voice…
  2. …on a Monday…
  3. …in a public library…
  4. …where I’ve been invited to speak about my new book and I don’t have a microphone.

Now, (1) seems morally permissible, and (2) doesn’t change the evaluation. (3) makes my action seem morally impermissible, but (4) turns it around again. I’m sure all of this was very easy for everyone.

Ethics is the science of the a priori rules that make these judgments so easy for us, or at least that was Kant’s view, which I share. It should be possible to make an AI do this calculation even faster than we do; all we have to do is provide the AI with the right a priori rules. Once that is done, the rest is just empirical knowledge about libraries and human beings, and we will eventually have a moral AI. A toy version of the calculation is sketched below.
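As a minimal sketch (the rule and the context strings are entirely hypothetical), here is the progression above as a fixed rule applied to an accumulating context; each new empirical fact can flip the verdict even though the rule itself never changes:

```python
# Toy "a priori" rule applied to an accumulating empirical context.

def permissible(context):
    """Loud speech is fine, except in a library, unless you are the
    invited speaker and have no microphone."""
    if "in a public library" in context:
        return "invited speaker without a microphone" in context
    return True

facts = ["I will speak in a loud voice", "on a Monday",
         "in a public library", "invited speaker without a microphone"]
context = []
for i, fact in enumerate(facts, start=1):
    context.append(fact)
    verdict = "permissible" if permissible(context) else "impermissible"
    print(f"({i}) {fact}: {verdict}")
```

The verdict flips at step (3) and flips back at step (4), just as in the example: the evaluation is holistic over the whole context, yet still rule-governed.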

Comment by MattJ on Next Level Seinfeld · 2023-05-15T20:23:14.354Z · LW · GW

I think that was the point. Comedians of the future will be performers. They will not write their own jokes but will be increasingly good at delivering lines written by AI.

When ChatGPT came out I asked it to write a Seinfeld episode about taking an IQ test. In my judgment it was just as good and funny as every other Seinfeld episode I’ve watched: mildly amusing, entertaining enough not to stop reading.

Comment by MattJ on Why do we care about agency for alignment? · 2023-04-24T13:06:31.684Z · LW · GW

This answer is a little bit confusing to me. You say that “agency” may be an important concept even if we don’t have a deep understanding of what it entails. But how about a simple understanding?

I thought that when people spoke about “agency” and AI, they meant something like “a capacity to set one’s own final goals”, but then you claim that Stockfish is best understood by using the concept of “agency”. I don’t see how.

I myself kind of agree with the sentiment in the original post that “agency” is a superfluous concept, but I want to understand the opposite view.

Comment by MattJ on Scientism vs. people · 2023-04-18T21:12:26.041Z · LW · GW

You seem to hold the position that:

  1. Scientists and not philosophers should do meta-ethics and normative ethics, until
  2. AGIs can do it better, at which point we should leave it to them.

I don’t believe that scientists have either the inclination or the competence to do what you ask of them, and secondly, I believe that letting AGIs decide right and wrong would be a nightmare scenario for the human race.

Comment by MattJ on AI-kills-everyone scenarios require robotic infrastructure, but not necessarily nanotech · 2023-04-03T18:53:10.392Z · LW · GW

“Whom the gods would destroy, they first make mad.”

One way to kill everyone would be to turn the planet into a Peoples Temple and have everyone drink the Kool-Aid.

I think even those of us who think of ourselves as psychologically robust will be defenseless against the manipulations of a future GPT-9.

Comment by MattJ on ~100 Interesting Questions · 2023-03-30T18:23:47.600Z · LW · GW

I thought that was what was meant. The question is probably the easiest one to answer affirmatively with a high degree of confidence. I can think of several ongoing “moral catastrophes”.

Comment by MattJ on ~100 Interesting Questions · 2023-03-30T18:04:16.174Z · LW · GW

Very good, fundamental questions. I don’t understand question 85, though. Here are two more good questions:

  1. Are human beings aligned?
  2. Is human alignment, insofar as it exists, a property of the goals we have when we act or a property of the actions themselves?

Comment by MattJ on Nobody’s on the ball on AGI alignment · 2023-03-29T19:28:43.804Z · LW · GW

I don’t think the response to Covid should give us reason to be optimistic about our effectiveness at dealing with the threat from AI. Quite the opposite. Many of the measures taken were known to be useless from the start, like masks, while others were ineffective or harmful, like shutting down schools or giving vaccines to young people who were not at risk of dying from Covid.

Everything can be explained by the incentives our politicians face: they want to be seen to take important questions seriously while not upsetting their donors in the pharma industry.

I can easily imagine something similar happening if voters become concerned about AI: some ineffective legislation dictated by Big Tech.

Comment by MattJ on Will people be motivated to learn difficult disciplines and skills without economic incentive? · 2023-03-20T19:42:15.531Z · LW · GW

It is somewhat alarming that many participants here appear to accept the notion that we should cede political decision-making to an AGI. I had assumed it was a widely held view that such a course of action should be avoided, yet it appears that I may be in the minority.

Comment by MattJ on What does Bing Chat tell us about AI risk? · 2023-03-01T08:46:49.238Z · LW · GW

Someone used the metaphor of Plato’s cave to describe LLMs. The LLM is sitting in cave 2, unable to see the shadows on the wall, able only to hear the voices of the people in cave 1 talking about the shadows.

The problem is that we people in cave 1 are not only talking about the shadows but also telling fictional stories, and it is very difficult for someone in cave 2 to know the difference between fiction and reality.

If we want to give a future AGI the responsibility to make important decisions, I think it is necessary that it occupy a space in cave 1 rather than just being a statistical word predictor in cave 2. It must be more like us.

Comment by MattJ on The Preference Fulfillment Hypothesis · 2023-02-26T19:28:50.186Z · LW · GW

Running simulations of other people’s preferences is what is usually called “empathy”, so I will use that word here.

To have empathy for someone, or an intuition about what they feel, is a motivational force to do good in most humans, but it can also be used to become better at deceiving and taking advantage of others. Perhaps high-functioning psychopaths work in this way.

To build an AI that knows what we think and feel, but that lacks moral motivation, would just lead to a world of superintelligent psychopaths.

P.S. I see now that kibber is making the exact same point.

Comment by MattJ on AI psychology should ground the theories of AI consciousness and inform human-AI ethical interaction design · 2023-01-08T18:29:14.542Z · LW · GW

Consciousness is a red herring. We don’t even know whether human beings are conscious. You may have a strong belief that you yourself are a conscious being, but how can you know whether other people are conscious? Do you have a way to test it?

A superintelligent, misaligned AI poses an existential risk to humanity quite independently of whether it is conscious. Consciousness is an interesting philosophical topic, but it has no relevance to anything in the real world.