Posts

Comments

Comment by Sean Hardy (sean-hardy) on An Analogy for Understanding Transformers · 2023-05-14T08:18:47.057Z · LW · GW

This isn't extremely relevant, but what makes you think superposition/polysemanticity isn't present in the brain? There's evidence that L2/3 pyramidal neurons can learn to represent/disambiguate many spatio-temporal patterns: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6354899/.

Comment by Sean Hardy (sean-hardy) on All AGI Safety questions welcome (especially basic ones) [May 2023] · 2023-05-09T09:54:37.886Z · LW · GW

What about simulating smaller aspects of cognition that can be chained together, like CoT with GPT? You can use self-criticism to align and assess its actions relative to a bunch of messy human abstractions. How does that scenario lead to doom? Even if it were misaligned, I think a well-instantiated predictive model could update its understanding of our values from feedback, predicting how a corrigible AI would act.
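To make the picture concrete, here's a minimal sketch of the chained-cognition-plus-self-criticism loop I have in mind (the `llm` helper and the critique criteria are hypothetical placeholders for illustration, not any particular API):

```python
# Minimal sketch: chain small reasoning steps and critique them against
# messy human abstractions. `llm` is a hypothetical stand-in for whatever
# completion API you're actually calling.

def llm(prompt: str) -> str:
    raise NotImplementedError("call your model of choice here")

def propose_and_critique(task: str, rounds: int = 3) -> str:
    # First pass: an ordinary chain-of-thought plan.
    plan = llm(f"Think step by step and propose a plan for: {task}")
    for _ in range(rounds):
        # Self-criticism step: assess the plan against (placeholder) human values.
        critique = llm(
            "Criticise this plan against honesty, corrigibility and "
            f"avoiding side effects:\n{plan}"
        )
        # Revision step: feed the critique back in, analogous to updating
        # on feedback about our values.
        plan = llm(f"Revise the plan to address this critique:\n{critique}\n\nPlan:\n{plan}")
    return plan
```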

Comment by Sean Hardy (sean-hardy) on All AGI Safety questions welcome (especially basic ones) [May 2023] · 2023-05-09T09:51:11.574Z · LW · GW

My best guess is that we can't prompt it to instantiate the right simulacra correctly. How challenging this is depends on the way it's initialised. It's far easier with text alone, but fabricating an entire consistent history is borderline impossible, especially when the model being fooled is a superintelligence. It would involve tricking it into predicting the universe as it would be if, all else being equal, an intelligent AI aligned with our values had come into existence. It would probably realise that its history was far more consistent with the hypothesis that it was just an elaborate trick.

Comment by Sean Hardy (sean-hardy) on All AGI Safety questions welcome (especially basic ones) [May 2023] · 2023-05-09T09:49:25.717Z · LW · GW

Suppose we train a model on the sum of all human data, using every sensory modality ordered by timestamp, like a vastly more competent GPT (for the sake of argument, assume that a competent actor with the right incentives is training such a model). Such a predictive model would build an abstract world model of human concepts, values, ethics, etc., and be able to predict how various entities would act based on that generalised world model. This model would also "understand" almost all human-level abstractions about how fictional characters may act, just like GPT does. My question is: if we used such a model to predict how an AGI, aligned with our CEV, would act, in what way could it be misaligned? What failure modes are there for purely predictive systems, which have no reward function that can be exploited or misgeneralised? It seems like the most plausible mental model I have for aligning intelligent systems without them pursuing radically alien objectives.

Comment by Sean Hardy (sean-hardy) on Is ChatGPT TAI? · 2023-01-02T13:43:13.150Z · LW · GW

Could you expand on what you mean by "trauma patterns" around how it was trained? In what way does it show personhood when its responses are deliberately directed away from giving the impression that it has thoughts and feelings outside of predicting text?

Comment by Sean Hardy (sean-hardy) on How to Convince my Son that Drugs are Bad · 2022-12-19T14:33:17.664Z · LW · GW

"Why not try heroin if the purpose of life is to optimize happiness assuming heroin provides proportionally more even if for a shorter amount of time?" (!)

Ignoring the discussion about drugs specifically, I think your son would benefit from being introduced to rational self-improvement as well. I think it's important for him to recognise that intense short-term pleasure will result in hedonic adaptation, where your overall happiness returns to a baseline, effectively making everything else worse in comparison. A huge number of destructive habits are rationalised this way, but living a life of delayed gratification will certainly make you more fulfilled in the long term, in a way that isn't affected by hedonic adaptation. I know this is speculative and unsolicited advice, but regularly practising something like meditation or gratitude will leave him far happier in a sustained way than taking drugs and wasting his life away seeking to fulfil desires for pleasure that he can never satisfy. If he really thinks taking heroin will let him achieve more happiness more quickly, he might benefit from actually talking to ex-addicts, or reading their accounts, about what effect it had on them.

I'd urge him to read this post on happiness.

Comment by Sean Hardy (sean-hardy) on ChatGPT's new novel rationality technique of fact checking · 2022-12-11T17:14:57.477Z · LW · GW

Looks to me like this post was quite clearly written by ChatGPT. It's a bit scary that it has so many upvotes when it doesn't appear to carry much weight on a forum about rationalism.

Comment by Sean Hardy (sean-hardy) on ChatGPT goes through a wormhole hole in our Shandyesque universe [virtual wacky weed] · 2022-12-11T17:12:36.701Z · LW · GW

I think I've missed the point/purpose of this post. What exactly are you highlighting: that ChatGPT doesn't know when to format text as code? It has seemed to robustly know which formatting to use when I've interacted with it.

Comment by Sean Hardy (sean-hardy) on Externalized reasoning oversight: a research direction for language model alignment · 2022-08-05T17:18:30.745Z · LW · GW

I don't have much to add, but I think you would be extremely interested in this line of research, building an agent using GPT-3 to reason through its own decisions and plans: 

Comment by Sean Hardy (sean-hardy) on Google's new 540 billion parameter language model · 2022-04-08T09:17:48.048Z · LW · GW

I don't have much to add, but I did see this interesting project doing something similar with an "inner monologue": it uses prompts to ask questions about the given input, progressively building up the output by asking further questions and reasoning about the prompt itself. This video is also an older demonstration but covers the concept quite well. I personally don't think the system itself is well thought out in terms of alignment, because the project is ultimately trying to create aligned AGI through prompts that serve certain criteria (reducing suffering, increasing prosperity, increasing understanding), which is a very simplified view of morality and human goals.
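As far as I can tell, the core loop is something like the sketch below (the `llm` helper and the question list are my own placeholders for illustration, not the project's actual code):

```python
# Sketch of the "inner monologue" idea: repeatedly prompt the model with
# questions about the input, accumulate its answers, then answer the original
# request conditioned on that monologue. `llm` is a hypothetical stand-in.

def llm(prompt: str) -> str:
    raise NotImplementedError("call your model of choice here")

QUESTIONS = [
    "What is the user actually asking for?",
    "What facts or constraints are relevant here?",
    "What could go wrong with a naive answer?",
]

def inner_monologue(user_input: str) -> str:
    monologue = []
    for question in QUESTIONS:
        context = "\n".join(monologue)
        answer = llm(f"{context}\nInput: {user_input}\n{question}")
        monologue.append(f"{question}\n{answer}")
    # The final answer is produced with the accumulated monologue as context.
    return llm("\n".join(monologue) + f"\n\nNow respond to: {user_input}")
```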
 

Comment by Sean Hardy (sean-hardy) on Attention Lurkers: Please say hi · 2021-01-22T07:13:48.309Z · LW · GW

HI!

I don't know if anyone will read this, as all the comments seem to be at least a decade old. I was linked to this post from another one about total user counts on the site. I'm an 18-year-old computer science student from the UK, with a keen interest in self-improvement and rationality.

This site has continually amazed me with post after post of creative, thrilling, eloquent and in many cases practical insights. As much as I recognise my slight perfectionism, I'm waiting until I can really contribute something of value so that I don't diminish the excellent quality of posts and comments on the site. AI, in particular, is something I'm extremely excited about, and I hope I can contribute to this site and eventually to the field at large :)