Questions about AI that bother me

post by Eleni Angelou (ea-1) · 2023-02-05T05:04:07.582Z · LW · GW · 6 comments


Crossposted from the EA Forum: https://forum.effectivealtruism.org/posts/4TcaBNu7EmEukjGoc/questions-about-ai-that-have-bothered-me

 

As 2022 comes to an end, I thought it'd be good to maintain a list of "questions that bother me" in thinking about AI safety and alignment. I don't claim I'm the first or only one to have thought about them. I'll keep updating this list.

(The title of this post alludes to the book "Things That Bother Me" by Galen Strawson)

First posted: 12/6/22

Last updated: 1/30/23

 

General Cognition

Deception 

Agent Foundations 

Theory of Machine Learning

Epistemology of Alignment (I've written about this here)

Philosophy of Existential Risk 

Teaching and Communication

Governance/Strategy

6 comments


comment by LawrenceC (LawChan) · 2023-02-05T06:01:38.991Z · LW(p) · GW(p)
  • 80% of a classroom of ~25 people thought that philosophy is the right thing to major in if you're interested in how minds work. The question I asked them was: "should you major in philosophy or cognitive science if you want to study how minds work?"
     

This seems really bizarre -- what's the class you asked this in? I feel like I'd get a dramatically different answer.

comment by Eleni Angelou (ea-1) · 2023-02-05T06:24:28.845Z · LW(p) · GW(p)

It was Intro to Philosophy (PHIL 101) at Queens College, CUNY. I was also confused by this.

comment by Noah Scales · 2022-12-12T08:50:34.984Z · LW(p) · GW(p)

  • Why was connectionism unlikely to succeed?

Can you clarify? I'm not sure what you mean by this with respect to machine learning.

comment by LawrenceC (LawChan) · 2023-02-05T06:00:00.598Z · LW(p) · GW(p)

https://plato.stanford.edu/entries/connectionism/

Pretty sure it means the old school of "neural networks are the way to go"?

My guess is she's asking, "why was connectionism considered unlikely to succeed?"

comment by Lao Mein (derpherpize) · 2022-12-12T08:11:20.929Z · LW(p) · GW(p)
  • Will a capacity for "doing science" be a sufficient condition for general intelligence?

We can probably "do science", at least to the level of the median scientist, with current LLMs. Automating data analysis and paper writing isn't a big leap for existing models.