Questions about AI that bother me
post by Eleni Angelou (ea-1) · 2023-02-05T05:04:07.582Z · LW · GW · 6 comments
Crossposted from the EA Forum: https://forum.effectivealtruism.org/posts/4TcaBNu7EmEukjGoc/questions-about-ai-that-have-bothered-me [EA · GW]
As 2022 comes to an end, I thought it'd be good to maintain a list of "questions that bother me" in thinking about AI safety and alignment. I don't claim I'm the first or only one to have thought about them. I'll keep updating this list.
(The title of this post alludes to the book "Things That Bother Me" by Galen Strawson)
First posted: 12/6/22
Last updated: 1/30/23
General Cognition
- What signs should I look for to tell whether general cognition has started to emerge in a model, e.g., situational awareness?
- Will a capacity for "doing science" be a sufficient condition for general intelligence?
- How easy was it for humans to develop science (e.g., compared to evolving to take over the world)?
Deception
- What kind of interpretability tools do we need to avoid deception?
- How do we develop these interpretability tools, and even if we do, what if they turn out to be like neuroscience for understanding brains (i.e., not enough)?
- How can I tell whether a model has found another goal to optimize for during its training?
- What is it that makes a model switch to a goal different from the one set by the designer? How do you prevent it from doing so?
Agent Foundations
- Is the description/modeling of an agent ultimately a mathematical task?
- From where do human agents derive their goals?
- Is value fragile [LW · GW]?
Theory of Machine Learning
- What explains the success of deep neural networks?
- Why was connectionism unlikely to succeed?
Epistemology of Alignment (I've written about this here [? · GW])
- How can we accelerate research?
- Has philosophy ever really helped scientific research, e.g., with concept clarification?
- What are some concrete takeaways from the history of science and technology that could be used as advice for alignment researchers and field-builders?
- The emergence of the AI Safety paradigm [? · GW]
Philosophy of Existential Risk
- What is the best way to explain the difference between forecasting extinction scenarios and narratives drawn from chiliasm or eschatology?
- What is the best way to think about serious risks in the future without reinforcing a sense of doom?
Teaching and Communication
- Younger people (e.g., my undergraduate students) seem more willing than older people (e.g., academics) to entertain scenarios of catastrophe and extinction. I find this strange and don't have a good explanation for why that is the case.
- The idea of a technological singularity was not difficult to explain and discuss with my students. I think that's surprising given how powerful the weirdness heuristic is.
- The idea of "agency" or "being an agent" was easy to conflate with "consciousness" in philosophical discussions. It's not clear to me why that was the case since I gave a very specific definition of agency.
- Most of my students thought that AI models will never be conscious; it was difficult for them to articulate specific arguments about this, but their intuition seemed to be that there's something uniquely human about consciousness/sentience.
- The "AIs will take our jobs in the future" seems to be a very common concern both among students and academics.
- About 80% of a class of ~25 students thought that philosophy is the right thing to major in if you're interested in how minds work. The question I asked them was: "Should you major in philosophy or cognitive science if you want to study how minds work?"
Governance/Strategy
- Should we try to slow down AI progress? What does this mean in concrete steps?
- How should we handle capabilities externalities?
- How should concrete AI risk stories inform/affect AI governance and short-term/long-term future planning?
6 comments
comment by LawrenceC (LawChan) · 2023-02-05T06:01:38.991Z · LW(p) · GW(p)
- About 80% of a class of ~25 students thought that philosophy is the right thing to major in if you're interested in how minds work. The question I asked them was: "Should you major in philosophy or cognitive science if you want to study how minds work?"
This seems really bizarre -- what's the class you asked this in? I feel like I'd get a dramatically different answer.
↑ comment by Eleni Angelou (ea-1) · 2023-02-05T06:24:28.845Z · LW(p) · GW(p)
It was Intro to Phil 101 at Queens College, CUNY. I was also confused by this.
comment by Noah Scales · 2022-12-12T08:50:34.984Z · LW(p) · GW(p)
Why was connectionism unlikely to succeed?
Can you clarify? I'm not sure what you mean by this with respect to machine learning.
↑ comment by LawrenceC (LawChan) · 2023-02-05T06:00:00.598Z · LW(p) · GW(p)
https://plato.stanford.edu/entries/connectionism/
Pretty sure it means the old school of "neural networks are the way to go"?
My guess is she's asking, "why was connectionism considered/thought of as unlikely to succeed?"
↑ comment by Eleni Angelou (ea-1) · 2023-02-05T06:26:30.535Z · LW(p) · GW(p)
Yup, that's what I mean. Specifically, I had Pinker in mind: https://forum.effectivealtruism.org/posts/3nL7Ak43gmCYEFz9P/cognitive-science-and-failed-ai-forecasts [EA · GW]
comment by Lao Mein (derpherpize) · 2022-12-12T08:11:20.929Z · LW(p) · GW(p)
- Will a capacity for "doing science" be a sufficient condition for general intelligence?
We can probably "do science" [LW · GW], at least to the level of the median scientist, with current LLMs. Automating data analysis and paper writing isn't a big leap for existing models.