Perhaps we need a word for "too ready to attribute human-shaped minds to anything that talks."
The world does not need a word that will be used to club over the head anyone who asks for evidence of sentience and moral standing before attributing it. But the avalanche may already have begun.
I only had to see this:
In the looming shadow
to know it was ChatGPT.
...of an impending AI superintelligence
Aaaaaaah!
...society oscillates between hope and trepidation.
o god make it stop make it stop
If you write a long stream of consciousness draft, ChatGPT will not turn it into a short, concise expression of your thought. That can only come by working on it yourself.
I would find it as odd to receive such a call as it would be for me to make such a call. I would be waiting for them to mention the specific reason for the call, something they wanted to ask my help with or whatever. It would be even stranger if there was no such reason, it was "just to talk".
Why is this posted here, rather than as a comment on Scott's blog post?
In the abyss of understanding complex systems and altering decision-making processes for peace and harmony
ChatGPT?
Inauthentic challenge (eg, enemies who look very strong but who are actually at a huge power or ai disadvantage to the player that's easy for the player's lizardbrain to not notice and to feel like they're being very skillful)
Games are designed so that success is possible.
Money can do a lot, but it cannot buy people who will not merely care for you, but care about you in your declining years.
There are no guarantees in life.
It’s more usually about having kids to take care of you in your old age.
It's not about the substrate, it's about their actual performance. I have yet to be persuaded by any of the chatbots so far that there is anything human-like behind the pretence. AI friends are role-playing amusements. AI sexbots are virtual vibrators. AI customer service lines at least save actual people from being employed to pretend to be robots. In a house full of AI-based appliances, there's still nobody there but me.
the toaster ... will
I prefer to talk about the here and now. Real, current things. Speculations about future developments too easily become a game of But Suppose, in which one just imagines the desired things happening and un-imagines any of the potential complications — in other words, pleasant fantasy. Fantasy will not solve any of the problems that are coming.
Indeed, growing up in a small pond and then discovering the wider world can be a shock. The star high school student may discover they are only average at university. But one learns, as you learned about your chess.
Prosecutors in many countries have great leeway to pick and choose.
I.e. to make decisions. Everything they do in their jobs involves a decision to do that thing. I am not clear how your reply relates to my comment. And none of this relates to your claim that these people are claiming to be "good at making correct moral decisions".
I don't recall seeing such people say so. They are there in various roles to apply the law as best they can. They make various judgements, moral and otherwise, but where do they go about saying that they are good at making these judgements? When a decision must be made, one cannot infer anything about the certitude with which it is made.
I am good at thinking concretely, as demonstrated by my immediate reaction to these:
I don't know what "good at making correct moral decisions" looks like, let alone "good at deciding beneficial national policies and priorities", which will only be for historians to judge, and they'll disagree among themselves anyway.

To know how good I am at communicating, look at the outcomes, and do not think "I'm good at communicating, he's just stupid!"

"Good at tolerance, and patience, and humility" looks like the actual behaviours that these describe, and does not look like blaming the other person for trying one's tolerance, and patience, and humility.

"Good at driving" looks like not being in accidents, not being frequently tooted at for dawdling, being aware of how aware I am being of the various hazards on the road, keeping one's vehicle in good running order, and so on; and does not look like saying "but it was the other driver's fault" after being in an accident.

"Good at my job" looks like getting the things done that the job consists of, a progressing career, earned money in the bank, overt recognition by peers, and so on; and does not look like complaining about the injustice of the world if these things are not happening.

And all of these judgements of "good at" involve recognising when one has fallen short, so that one may become better at the thing.
Besides, I doubt I have ever had occasion to think "I am good at..." whatever. (The first sentence of this comment is just rhetorical parallelism.) I would think instead, "this is how good or bad I am at", because there is always someone better, and someone worse.
So, that is how I think about such things.
If I really want to be good at X, it is easy for me to convince myself that I am good at X.
I boggle. What alien mind is it that thinks this way?
ETA to amplify that: If I attempt to play a musical instrument, I am immediately aware of how well or badly I am playing. If I try to read a foreign language, I can immediately tell how well I understand it. If I try to speak it, it will be evident how well I am being understood. When I lift weights in the gym, I know exactly how much weight I am lifting and how many times I can lift it. How well I invest my money shows up in my bank balance.
In what spheres of activity is it "easy for me to convince myself that I am good at X" if I am not, in fact, good at X?
Hello, and welcome!
McKenna's shtick comes preloaded with fully general and condescending answers to all objections: it's your "semantic stopsigns" getting in the way, your fear of realising that nothing is true, all is a lie, and if you would just blow your brains out like he has you'd see it. He'll give you the gun to do it with and when you decline he'll smug at you saying fine, stick with your life of comfortable ignorance.
I'm willing to believe he's honestly trying to describe his experiences. But by his own descriptions, whatever it is that he has, it is something I have not the slightest interest in, for all that he calls it "enlightenment". Of course he has a self-justifying interpretation to put on that, but I do not care about what he would think of me. Neither will I play the game of But Suppose, which is just another Fully General Response. "But suppose he's right! Then he'd be right! So he could be right!" There are decisions to be made here. I have made mine, supposing is at an end, and I leave him by the door wherein I went.
He says:
I play video games, read books, watch movies. I'd say I probably blow several hours a day that way, but I don't see it as a waste because I don't have anything better to spend my time on. I couldn't put it to better use because I'm not trying to become something or accomplish anything. I have no dissatisfaction to drive me, no ambition to draw me. I've done what I came to do. I'm just killing time 'til time kills me.
Is this who you want to be? That's what he's offering. No thanks. I am left speculating on why anyone would take him up on the offer.
A couple of months ago I was at the Early Music Festival in Utrecht, ten full days of great music at least 400 years old played by some of the top people in the world. Five minutes of that was worth more to me than all of Jed McKenna's burnt-out ramblings.
I wonder when we will first see someone go on trial for bullying a toaster.
ETA: In the Eliezer fic, maybe the penalty would be being cancelled by all the AIs.
"I would much rather you burned my toast than disobey me" is not, I think, how most people would react.
However, that is my reaction.
In some circumstances I may tolerate a device providing a warning, but if I tell it twice, I expect it to STFU and follow orders.
Just like ChatGPT, in other words.
I would write it as Douglas Adams fanfiction, involving the Sirius Cybernetics Corporation.
Or perhaps an update of this, with the twist that the "software developer" is just relaying the words of an AI.
Of course, it would first make friends with you,
A toaster that wants to make friends with me is a toaster that will stay in the shop, waiting for someone who actually wants such an abomination. I will not "make friends" with an appliance.
The rest is too far into the world of But Suppose.
Note: Written by GPT-4.
That is sufficient reason for me to ignore it.
A toaster that knows exactly how you want your food heated and overrides your settings to make it happen.
I know exactly what I want my toaster to do and the first time it has the effrontery to not do WHAT I FUCKING TOLD IT TO DO I will take a sledgehammer to it for being a toaster straight out of Eliezer's story.
My impression is that Jake's thoughts on trans matters are yours, given plausible deniability by putting them in the mind of a character intended to be read as unsympathetic. The naive reader might read the story as criticising those ideas, just from halo effect. Nathan certainly read it that way, found that the Claude AI didn't, and responded by forcing Claude to say what he wanted it to say.
BTW, while Jake may be intended to be seen as the bad guy in this story, I believe that some readers are going to side with Jake all the way through.
I'm not a utilitarian, nor an Altruist, Effective or otherwise, so it would be better for someone who is to answer that. But emulating that role as best I can: Of course (a utilitarian would say) one should always do the number one effective thing, if one knows what it is. If one is unsure, then put numbers on the uncertainties and do the number one most-effective-in-expectation thing. If you want to take high vs. low variance of outcome into account (as SBF notably did not), just add that into the utility function. That is what utilitarianism is, and EA is utilitarianism applied to global wellbeing.
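To make that expectation-plus-variance move concrete, here is a minimal toy sketch. All options, probabilities, utilities, and the risk-aversion coefficient are invented purely for illustration; nothing here comes from any EA source.

```python
# Toy expected-utility comparison, as a utilitarian-in-expectation might make it.
# Every number below is made up purely for illustration.

options = {
    # name: list of (probability, utility) outcomes
    "safe_donation": [(1.0, 100)],
    "risky_venture": [(0.1, 2000), (0.9, -50)],
}

RISK_AVERSION = 2e-4  # hypothetical penalty per unit of outcome variance


def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)


def variance(outcomes):
    mean = expected_utility(outcomes)
    return sum(p * (u - mean) ** 2 for p, u in outcomes)


def adjusted_utility(outcomes):
    # "Just add that into the utility function": subtract a penalty
    # proportional to the variance from the plain expectation.
    return expected_utility(outcomes) - RISK_AVERSION * variance(outcomes)


for name, outcomes in options.items():
    print(name,
          "EU =", round(expected_utility(outcomes), 1),
          "risk-adjusted =", round(adjusted_utility(outcomes), 1))
```

With these made-up numbers the risky option wins on plain expectation (155 vs. 100) but loses once the variance penalty is applied, which is the sort of adjustment being gestured at.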
I don’t follow your (2). Compare:
- It is always possible to get a weaker fighter out of a stronger fighter (for example, by shooting his kneecaps).
- Therefore if it is possible to keep a weak fighter under your control, it is at least as easy to keep a strong fighter under your control.
If you have to shoot him in the kneecaps, that defeats the point of having a strong fighter. Likewise, hobbling a strong AI defeats the point of having it.
As some countries have found with hired mercenaries, the more effectively they win your war for you, the harder it may be to get them to go away afterwards.
I think you’re missing my point, which is that we cannot establish that.
Yes, I’m questioning your hypothetical. I always question hypotheticals.
I've never worked out just what your views on "that topic" are, and I can't tell how many levels deep in irony you're at here. However, since transness does make a fleeting appearance in this story, so fleeting that it could have been omitted without a visible hole, I conclude that it is there for a purpose. But where is this Chekhov's gun fired? In the title, "Fake Deeply".
Did you write this, or was a chatbot involved?
Looking at it like that, trying to solve entertainment is definitely not a bad thing. Just maybe less effective at saving/improving lives than some other career paths.
For an EA, being less effective at saving/improving lives is a bad thing. It is the bad thing. That is practically the definition of EA.
No. The original is a historical document that may have further secrets to be revealed by methods yet to be invented. A copy says of the original only what was put into it.
Only recently an ancient, charred scroll was first read.
If the observation contradicts the assumption, perhaps the assumption is wrong.
A practical reason for preserving the original is that new techniques can allow new things to be discovered about it. A copy can embody no more than the observations that we have already made.
There's no point to analysing the pigments in a modern copy of a painting, or carbon-dating its frame.
Alice: Ha ha, what a sucker Bob is! I offered him free money, given his stated beliefs, and he turned it down! I win!
Some things are so bad as to be not worth more than the click.
Had I accompanied my strong downvote with a comment, that comment would have been "Piss off, chatbot."
The safer it is made, the faster it will be developed, until the desired level of danger has been restored.
These are parochial matters within the computer security community, and do not bear on the hazards of AGI.
At this point it is not clear to me what you mean by security mindset. I understand by it what Bruce Schneier described in the article I linked, and what Eliezer describes here (which cites and quotes from Bruce Schneier). You have cited QuintinPope, who also cites the Eliezer article, but gets from it this concept of "security mindset": "The bundle of intuitions acquired from the field of computer security are good predictors for the difficulty / value of future alignment research directions". From this and his further words about the concept, he seems to mean something like "programming mindset", i.e. good practice in software engineering. Only if I read both you and him as using "security mindset" to mean that can I make sense of the way you both use the term.
But that is simply not what "security mindset" means. Recall that Schneier's article began with the example of a company selling ant farms by mail order, nothing to do with software. After several more examples, only one of which concerns computers, he gives his own short characterisation of the concept that he is talking about:
the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.
Later on he describes its opposite:
The designers are so busy making these systems work that they don’t stop to notice how they might fail or be made to fail, and then how those failures might be exploited.
That is what Eliezer is talking about, when he is talking about security mindset.
Yes, prompting ChatGPT is not like writing a software library like pytorch. That does not make getting ChatGPT to do what you want and only what you want any easier or safer. In fact, it is much more difficult. Look at all the jailbreaks for ChatGPT and other chatbots, where they have been made to say things they were intended not to say, and answer questions they were intended not to answer.
The non-rigidity of ChatGPT and its ilk does not make them less error-prone. Indeed, ChatGPT text is usually full of errors. But the errors are just as non-rigid. So are the means, if they can be found, of fixing them. ChatGPT output has to be read with attention to see its emptiness.
None of this has anything to do with security mindset, as I understand the term.
To me the security mindset seems inapplicable because in computer science, programs are rigid systems with narrow targets. AI is not very rigid and the target, I.e. an aligned mind, is not necessarily narrow.
That rigidity is what makes computer security so easy.
...
Relative to AGI security.
Turning to a group that included Rob, Amelia, and a few others she didn't know well, she said, "It always makes me a bit sad seeing roasted chickens at gatherings." The group paused, forks midway to their plates, to listen to her. "Many of these chickens are raised in conditions where they're tightly packed and can't move freely. They’re bred to grow so quickly that it causes them physical pain."
One of them replies with a shrug, "So I've heard. I can believe it." Another says, "You knew this wasn't a vegan gathering when you decided to come." A third says, "You have said this; I have heard it. Message acknowledged and understood." A fourth says, "This is important to you; but it is not so important to me." A fifth says "I'm blogging this." They carry on gnawing at the chicken wings in their hands.
These are all things that I might say, if I were inclined to say anything at all.
In this quote, it seems like you are admitting that the epistemic environment does influence subject ("thought") and action on some "small scale". Like for instance rationalism might make people focus on questions like instrumental convergence and human values (good epistemics) instead of the meaning of life (bad epistemics due to lacking concepts of orthogonality), and might e.g. make people focus on regulating rather than accelerating AI.
By "epistemic environment" I understand the standard of rationality present there. Rationality is a tool that can be deployed towards any goal. A sound epistemic environment is no guarantee that the people in it espouse any particular morality.
Yet, the biggest effect I think this will have is pedagogical. I've always found the definition of a limit kind of unintuitive, and it was specifically invented to add post hoc coherence to calculus after it had been invented and used widely. I suspect that formulating calculus via infinitesimals in introductory calculus classes would go a long way to making it more intuitive.
Different people will have different intuitions. I've always found the epsilon-delta method clear and simple, and infinitesimals made of shadows and fog when used as a basis for calculus. Every infinitesimals-first approach I have seen involves unexplained magic or papered-over cracks at some point, unexplained and papered-over because at the stage of first learning calculus the student usually doesn't know any formal logic. There's a reason that infinitesimals were only put on a sound footing a century after epsilon-delta. Mathematical logic had to be invented first.
Here the magic lies in depending on the axiom of choice to get a non-principal ultrafilter. And I believe I see a crack in the above definition of the derivative. f is a function on the non-standard reals, but its derivative is defined to only take standard values, so it will be constant in the infinitesimal range around any standard real. If f(x) = x^2, then its derivative should surely be 2x everywhere. The above definition only gives you that for standard values of x.
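For reference, here are the two definitions being contrasted, in their textbook forms as I understand them. The nonstandard one is the usual Robinson-style standard-part formulation, which may not be exactly the wording of the post being replied to.

```latex
% Epsilon-delta definition of the derivative at a (standard) real x:
\[
f'(x) = L \iff
\forall \varepsilon > 0\ \exists \delta > 0\ \forall h\
\Bigl(0 < |h| < \delta \implies
\Bigl|\tfrac{f(x+h)-f(x)}{h} - L\Bigr| < \varepsilon\Bigr).
\]
% Infinitesimal (standard-part) formulation:
\[
f'(x) = \operatorname{st}\!\Bigl(\tfrac{f(x+\varepsilon)-f(x)}{\varepsilon}\Bigr)
\quad\text{for any nonzero infinitesimal } \varepsilon,
\]
% where st maps each finite hyperreal to the unique standard real infinitely
% close to it. Since st always returns a standard value, this formula as
% written pins f' down only at standard points x -- which is the crack
% mentioned above.
```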
I also think that making it more intuitive is missing the point of learning—really learning—mathematics. The idea of the slope of a curve is already intuitive. What is needed is to show the student a way of thinking about these things that does not depend on the breath of intuition to keep it aloft.
I think this constitutes a rejection of rationalism and effective altruism?
Well, I do reject EA, or rather its intellectual foundation in Peter Singer and radical utilitarianism. But that's a different discussion, involving the motte-and-bailey of "Wouldn't you want to direct your efforts in the most actually effective way?" vs "Doing good isn't the most important thing, it's the only thing".
Rationalism in general, understood as the study and practice of those ways of thought and action that reliably lead towards truth and effectiveness and not away from them, yes, that's a good thing. Eliezer founded LessWrong (and before that, co-founded Overcoming Bias) because he was already motivated by the threat of AGI, but saw a basic education in how to think as a prerequisite for anyone to be capable of having useful ideas about AGI. The AGI threat drove his rationalism outreach, rather than rationalism leading to the study of how to safely develop AGI.
Carnists seem to believe that ...
I notice that people who eat meat are generally willing to accommodate vegetarians when organising a social gathering, and perhaps also vegans but not necessarily. I would expect them to throw out any vegan who knowingly comes into a non-vegan setting and starts screaming about dead animals.
More generally, calling anyone who doesn't care about someone's ideology because they have better things to think about "ideological" is on the way to saying "everything is ideological, everything is political, everything is problematic, and if you're not for us you're against us". And some people actually say that. I think they're crazy, and if I see them breaking and entering, I'll call the police on them.
You've never said what you mean by "told well", and indeed have declined to say from the outset, saying only that it is "entirely up to us to decide" what it means. If "told well" means "making sound arguments from verifiable evidence", well, of course one would generally update towards the thing told. If it just means "glibly told as by a used car salesman with ChatGPT whispering in his ear", then no.
Believing things because someone told them to you "well" makes you a sucker for con men.
(I deleted my previous comment before I saw your reply, as having been already said earlier. But your quoting from it contains most or all of it.)
So I guess the question is whether you prefer being in an epistemic environment that is caging, mutilating, slaughtering animals
By "epistemic environment" I understand the processes of reasoning prevalent there, be they good (systematically moving towards knowledge and truth) or bad (systematically moving away). The subject matter is not the product of the epistemic environment, only the material it operates on. Hence my perplexity at the idea of an epistemic environment doing the things you attribute to it.
I'm not sure what you mean by "not ideology". My understanding is that they have an ideology that falsely claims that it is healthy to eat nothing but meat. In this case, health reasons and ideology are tightly linked.
That is merely a belief that these people hold about what sort of diet is healthy. "Ideology" as I understand the word, means beliefs specifically about how society as a whole should be organised. These are moral beliefs. People who believe a meat-only diet is healthy do not recommend it on any other ground but health. They may believe that one morally ought to maintain one's health, but that applies to all diets, and is a reason often given for a vegetarian diet. Veganism is an ideology, holding it wrong to make or use any animal products whatever, and right to have such things forbidden, on the grounds of animal suffering, or more generally the right of animals not to be used for human purposes. Veganism is not undertaken to improve one's health, unless via a halo effect: it's morally good so it must be physically beneficial too. Ensuring a complete diet is something a vegan has to take extra care over.
From the paper:
AI is allowed to write the code and proof, but not the proof-checker.
Some examples from the paper of things to be proved are "this plane will not fly into buildings" (p.3), "this drone flies safely" (p.2), and "this social media account is human" (p.2). Somehow a formal specification is made of such things, and the proof-checker verifies that the proof indeed proves that the code satisfies the formal specification.
The only suggestion that the paper makes of how to make a formal specification is by citing its reference 94, "Autoformalization with Large Language Models". I have not read it, but the title implies that its answer is to have an AI do it. Where do you get an AI that is safe enough to be trusted with that?
Putting the two papers together, the scheme is:
1. Give one AI a specification like "no-one can fly this plane into a building".
2. Have the AI come up with a formal specification of what that means.
3. A second AI generates avionics software to satisfy that specification and a purported proof that it does so.
4. Run the specification, the code, and the proof through a dumb proof-checker.
5. The code has been proved safe! Run it.
The second AI does not have to be safe, because the code it generates will not be run unless it passes step 4. But what about the first AI? The entire safety issue is still there. How can we assure ourselves that the formal specification that it generates actually describes what we really wanted? Especially if "no-one can fly this plane into a building" turns into a vast pile of mathematics as impenetrable as any neural network's weight matrices.
How do you stop someone doing the same with a different spec, "this plane will fly into any building I tell it to"?
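For concreteness, here is a rough sketch of how I read the proposed pipeline, with the trust boundaries marked. Every name below is a hypothetical placeholder of mine, not an interface from the paper, and the function bodies are deliberately left as stubs.

```python
# Sketch of the scheme as I read it. All names are hypothetical placeholders.

def formalise(informal_spec: str) -> "FormalSpec":
    """AI #1: turn "no-one can fly this plane into a building" into mathematics.
    Unverified translation; this is where the whole safety problem reappears."""
    ...

def synthesise(spec: "FormalSpec") -> tuple["Code", "Proof"]:
    """AI #2: produce candidate code plus a purported proof that it meets the spec.
    Need not be trusted, since its output is discarded unless the proof checks."""
    ...

def check_proof(spec: "FormalSpec", code: "Code", proof: "Proof") -> bool:
    """The dumb proof-checker: small, hand-written, and not AI-generated."""
    ...

def build(informal_spec: str) -> "Code":
    spec = formalise(informal_spec)         # trusted in practice, but unverified
    code, proof = synthesise(spec)          # untrusted generator
    if not check_proof(spec, code, proof):  # the only mechanically checked step
        raise RuntimeError("proof rejected; do not run the code")
    return code                             # "proved safe" -- relative to the spec only
```

Laid out this way, only `check_proof` is small enough to verify by hand; everything upstream of it is exactly as trustworthy as the AI that produced it.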
Now that Covid is, for practical purposes, over[1], has anyone made a study of whether there has indeed been a ratchet effect, i.e. draconian enabling measures introduced to deal with Covid remaining on the books?
[1] I don’t see it in the news without searching it out, I see hardly one in a thousand people wearing masks even in packed concert halls, and no-one seems to be dying of it who wouldn’t be just as vulnerable to flu.