I think there might be other aspects to trauma, though. Some possible candidates:
- memories feel as if they are “tagged” with an emotion, in a way that memories normally aren’t
- depletion of some kind of mental resource; not sure what to call it, so I won't be too specific about exactly what is depleted
One of the ideas in Cognitive Behavioral Therapy is that you might be treating as dangerous something that actually isn't (and you don't learn that it's safe because you're avoiding it).
So the account you're giving here seems to be fairly standard.
On the other hand: some things actually are dangerous.
In any case, as a researcher currently working in this area, I am putting a big bet on moderate badness happening (in that I could be working on something else, and my time has value).
Also, there is counterparty risk if you bet on everyone dying.
(Yeah, yeah, you can bet on something like other people's belief in the impending apocalypse going up before it actually happens.)
“Rapid takeoff” hypotheses are particularly hard to bet on.
If I was going to play this game with an AI, I’d also feed it my genomic data, which would reveal I have a version of the HLA genes that makes me more likely to develop autoimmune diseases.
Probably, if some AI were to recommend additional blood testing, I could manage to persuade the actual medical professionals to do it. A recent conversation went something like this:
Me: "Can I have my thyroid levels checked, please? And the consultant endocrinologist said he'd like to see a liver function test done next time I give a blood sample."
Nurse (taking my blood sample and pulling my medical record up on the computer): "You take carbimazole, right?"
Me: "Yes."
Nurse (ticking boxes on a form on the computer): "... and full blood panel, and electrolytes..."
Probably wouldn’t be hard to get suggestions from an AI added to the list.
Things I might spend more money on, if there were better AIs to spend it on:
1. I am currently having a lot of blood tests done, with a genuine qualified medical doctor interpreting the results. Just for fun, I can see if an AI gives a similar interpretation of the test results (it's not bad).
Suppose we had AI that was actually better than human doctors, and cheaper. (Sounds like that might be here real soon, to be honest.) I would probably pay money for that.
2. Some work things I am doing involve formally proving the correctness of software. AI is not quite there yet. If it were, I could probably get DARPA to pay the license fee for it, assuming the cost isn't absolutely astronomical.
Etc.
On the other hand, this would imply that most doctors, and mathematicians, are out of work.
Replika, I think.
“self-reported data from demons is questionable for at least two reasons”—Scott Alexander.
He was actually talking about Internal Family Systems, but you could probably be skeptical about what malign AIs are telling you, too.
Well, we had that guy who tried to assassinate the Queen of England with a crossbow because his AI girlfriend told him to. That was clearly a harm to him, and could have been one for the Queen.
We don't know how much more "But the AI told me to kill Trump" we'd have with less alignment, but it's a reasonable guess (given the Replika datapoint) that it might not be zero.
Discussing sleep paralysis might be an infohazard…
The times I’ve entered sleep paralysis it hasn’t bothered me, as I knew what it was.
And then you get the people who are like, “Great! I’m lucid! Now I shall cast one of those demon summoning spells from Vajrayana Buddhism.”
Lucid dreaming is often like being Sigourney Weaver in Alien while also being on hospital sedatives. (You are, in fact, actually asleep, so it's kind of a miracle you can reason at all and not the least bit surprising that you feel a bit groggy; also, dreams can be nightmarish.)
Why people choose to do this for fun is an interesting question.
You do get people who think they might get into lucid dreaming, then they read the dream diaries of some of the experienced lucid dreamers, and then are like “OMG, I never, ever, want to experience that.”
Well, it’s an interesting question whether there might be more efficient ways to do it.
Lucid nightmares are quite a good way of exposing you to real-seeming dangers without actually dying.
Reading this article, I have just realised that a dream I had last night came from reading one of those test cases where people try to bypass the guardrails on LLMs. Only the dream was taken from the innocuous part of the prompt.
At this rate, I’m going to be having dreams about turning Lemsip(*) into meth.
(*) UK cold remedy. Contains pseudoephedrine.
Chöd in a lucid dream if you’re feeling brave.
Like transform into Vajrayogini and invite the demons to devour your corpse, etc.
And then there's the thing where you dispel the entire dream-universe and are just there in a black formless void.
Hmm… but, for example, stabilising a dream is kind of like a meditation, and one of the many ways you can transform your body in a dream is basically a body scan meditation from hatha yoga.
Given the significance of lucid dreaming in Buddhist practice (Six Yogas of Naropa, etc.), realising that having a lucid dream just for sexual purposes is kind of pointless may lead to you realising that it's kind of pointless in waking life too. Many of those guys were monks...
I’m not sure about (10).
Whenever someone has a theory that it's impossible to do thing X in a dream, the regular lucid dreamers will provide a counterexample by deliberately doing X in their next dream.
Computers, clocks, and written text can behave weirdly in dreams. Really, it's the same things that generative AI has difficulty with, possibly for information-theory reasons.
A possible benefit: the regulation of your own emotions that you do to keep a dream stable (even when alarming things are happening in it) may help you keep your emotions stable in the waking state too.
I can lucid dream, and I kind of agree here. Sure, lucid dreaming is possible, but why would you do that?
Re (3), a dream you can completely control tells you nothing you didn’t know already. There is some scope for controlling the dream enough to, in effect, set up a question, and then not control the result.
There's a running joke in the lucid dreaming community that the first thing everyone tries is either flying or sex. It's only when people get to #3 on their list of things they want to do that it becomes at all interesting.
Some psychiatry textbooks classify “overvalued ideas” as distinct from psychotic delusions.
Depending on how wide you make the definition, a whole rag-bag of diagnoses from the DSM-5 are overvalued ideas (e.g., the overvalued idea in anorexia nervosa that one is fat).
Possibly similar dilemma with e.g. UK political parties, who generally have a rule that publicly supporting another party’s candidate will get you expelled.
An individual party member, on the other hand, may well support the party's platform in general, but think that one particular candidate is an idiot who is unfit to hold political office - and is not permitted to say so.
(There is a joke about the Whitby Goth Weekend that everyone thinks half the bands are rubbish, but there is no consensus on which half that is. Something similar seems to hold for Labour Party supporters.)
An organisation such as the Catholic Church primarily wants to perpetuate its own existence, so of course the official doctrine is that they are The One True Church.
An individual Catholic, on the other hand, might genuinely believe that the benefits of religion are also available from other suppliers.
COVID-19 killed, idk, tens of millions worldwide rather than hundreds of millions.
But consider that an example of a (biological) virus takeoff of the order of months.
So the question for AGI takeoff: death rate growing more rapidly than in the COVID-19 pandemic, or more slowly?
Takeoff speed could be measured by e.g. the time between the first mass casualty incident that kills thousands of people vs the first mass casualty incident that kills hundreds of millions.
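(A back-of-the-envelope version of that metric, assuming for illustration that deaths grow as a pure exponential with doubling time $d$: going from $10^3$ to $10^8$ deaths takes $t = d \cdot \log_2(10^5) \approx 16.6\,d$. With $d \approx 3$ days, roughly the early COVID-19 doubling time, that's about 50 days; a fast takeoff corresponds to a much smaller $d$.)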
(This bit isn't serious.) "I mean, a days-long takeoff leaves you with loads of time for the hypersonic missiles to destroy all of Meta's datacenters."
Minutes-long takeoff...
[By comparison: I forget the reference, but there is a paper estimating how quickly a computer virus could destroy most of the Internet. About 15 minutes, if I recall correctly.]
e.g. After the mass casualty incident...
"You told the government that you had a shutdown procedure, but you didn't, and hundreds of people died because you knowingly lied to the government."
My personal view on how it might help:
- Meta will probably carry on being as negligent as ever, even with SB1047.
- When/if the first mass casualty incident happens, SB1047 makes it easier for Meta to be successfully sued.
- After that, AI companies become more careful.
On the one hand, we encounter a lot of arguments about gender that seem, to me, to be philosophically bad. Maybe they're a good source of reasoning fallacies you might be able to spot in other contexts too.
On the other hand, the more I think about it, the less I care about the object-level issue. It seems inevitable that there are going to be various sorts of statistical outliers and hard-to-classify cases, and really, is it that big a deal?
I know one person who is intersex, and I know because they're involved in rights activism and they told me. I probably couldn't tell otherwise. It could well be that some other people I know are intersex and haven't told me. Maybe they don't even know themselves, as it appears that this information was frequently withheld by doctors. Shrug.
Also: if you take gender as chromosomal sex, then (a) the aforementioned person is totally genuinely both XX and XY, because they have mosaic chromosomes; and (b) it seems really strange for your gender to sometimes be something that you, yourself, do not know.
A financial conflict of interest is a wondrous thing...
"Okay, Beatrice. There was no alien, and the flash of light you saw in the sky wasn't a UFO. Swamp gas from a weather balloon was trapped in a thermal pocket and refracted the light from Venus -- Men in Black
The TV Series "Dark Skies" .. in which the US Government is orchestrating a coverup about the involvement of giant prawns from outer space in the Roswell incident, the JFK assassination, the shootdown of Gary Power's US spyplane, erc.
I agree that Vernor Vinge's A Deepness in the Sky is an example.
Almost but not quite an example: Edmund Cooper's The Overman Culture. It is obvious to the reader from the outset that the characters cannot be when and where they think they are (evacuated from London during World War 2). Maybe not enough deceiver's perspective to count.
Also not quite: Gene Wolfe's The Book of the New Sun.
This is pretty much why many people thought that the term "Open Source" was a betrayal of the objectives of the Free Software movement.
"Free as in free speech, not free beer" has implications that "well, you can read the source" lacks.
Yeah, many of the issues are the same:
- RLHF can be jailbroken with prompts, so you can get it to tell you a sexy story or a recipe for methamphetamine. If we ever get to a point where LLMs know truly dangerous things, they'll tell you those, too.
- Open-source weights are fundamentally insecure, because you can fine-tune out the guardrails. Sexy stories, meth, or whatever.
The good thing about the War on Horny:
- It probably doesn't really matter, so not much harm is done when people get LLMs to write porn.
- Turns out, lots of people want to read porn (surprise! who would have guessed?), so there are lots of attackers trying to bypass the guardrails.
- This gives us good advance warning that the guardrails are worthless.
Also note that Open Source precludes doing this ...
The basic Open Source deal is that absolutely anyone can take the product and do whatever they like with it, without paying the supplier anything.
So:
- The vendor cannot prevent the customer from doing something bad with the product. (If there is a line of code that says "don't do this bad thing", then the customer can just delete it.)
- The vendor also cannot charge the customer an insurance premium based on how likely the customer is to do something bad with the product.
... which would suggest that Open Source is only viable in areas where there isn't much third party liability.
With a nod to the recent CrowdStrike incident... if your AI is sending out packets to other people's Windows systems, and bricking them about as fast as it can send packets through its Ethernet interface, your liability may be expanding rapidly. An additional billion dollars for each hour you don't shut it down sounds possible.
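(A minimal sketch of that arithmetic, with entirely made-up numbers: if the AI bricks machines at a rate $r$ and each bricked machine costs its owner $c$, hourly liability is $3600 \cdot r \cdot c$. With $r = 1000$ machines/second and $c = \$300$, that's $3600 \times 1000 \times 300 \approx \$1.08$ billion per hour; the headline figure only needs modest per-machine costs at packet-sending speeds.)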
If your AI is doing something that's causing harm to third parties that you are legally liable for... chances are, whatever it is doing, it is doing it at Internet speeds, and even small delays are going to be very, very expensive.
I am imagining that all the people who got harmed after the first minute or so after the AI went rogue are going to be pointing at SB1047 to argue that you are negligent, and therefore liable for whatever bad thing it did.
If quantum computers really work, for more than 3 qubits, then I think I will believe in the many-worlds interpretation.
On the other hand, if there turns out to be some fundamental reason why quantum computers with many qubits can't exist, then maybe not.
The version where you only have 3 qubits is kind of unsatisfactory (look, there are exactly 8 parallel universes and no more...)
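(To spell out the arithmetic behind that "8": an $n$-qubit register has a $2^n$-dimensional state space, so $n = 3$ gives $2^3 = 8$ basis states, and each extra qubit doubles the count. It's the scaling to large $n$, not the 8 itself, that makes the interpretation question bite.)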
If American citizens who can vote for Trump are arguing over whether he's a bad guy, there is arguably a point to it... though I can also see the case against.
But Swedish guys... who aren't even allowed to vote for Trump if they want to... arguing over Trump? What are you doing?
(Presumably, if there is any point at all, the argument is not about Trump specifically but about the global phenomenon he is part of, and whoever the Swedish Trump-equivalent is.)
To be truly dangerous, an AI would typically need to (a) lack alignment and (b) be smart enough to cause harm.
Lack of alignment is now old news. The warning shot is, presumably, when an example of (b) happens and we realise that both component pieces exist.
I am given to understand that in firearms training, they say "no such thing as a warning shot".
By rough analogy: envisage an AI warning shot as something that only fails to be lethal because the guy missed.
Maybe: it's easier to capture a whole load of diverse stuff if you don't care about numerically quantifying it, and don't care about statistical significance tests, multiple testing, etc. Once you have a candidate list of qualitative features that might be interesting, you can then ask: OK, how do I numerically measure this thing?
A possible justification of qualitative research: you do this first, before you even know which hypotheses you want to test quantitatively.
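(To illustrate with made-up numbers why the quantitative stage matters: if the qualitative phase surfaces $k = 100$ candidate features and you then test each at $\alpha = 0.05$ with no multiple-testing correction, you expect $\alpha \cdot k = 0.05 \times 100 = 5$ "significant" findings even if nothing real is going on. Hence the separate, pre-specified quantitative follow-up.)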
Re: autism. We might also add sensory issues/only being able to concentrate on one sense at a time, and the really strange one: having fluent knowledge of the literal meanings of words, but difficulty with metaphors.
a) Are these part of the same symptom cluster as the "theory of mind" aspects of autism?
b) If so, why? Why on Earth would we expect metaphorical use of language to (i) be somehow processed by different mental modules from literal usage, and (ii) be somehow related to reasoning about other minds?
I actually personally know a couple of people who have the metaphors one. They tell me the issue is that the literal meaning is just way more salient than the metaphorical one.
As a purely anecdotal data point, my mother had COVID-19 (again) a couple of weeks ago.
We appear to be in the "nearly everybody will keep getting it regularly, forever" phase of COVID-19.
Chronic Fatigue Syndrome is, maybe - maybe - something of that kind.
Unfortunately, (a) the clinical trials of Cognitive Behavioral Therapy in this area have not gone well, and as a result (b) we have a strong suspicion that it is ineffective as a treatment, and maybe actually harmful relative to no treatment at all.
So, e.g., does exercising break you out of the bad equilibrium, or does it tighten the noose around your neck, making you more sick, more disabled, and (potentially) closer to death?