Request for Intelligence Philosophy Essay Topic Suggestions
post by MalcolmOcean (malcolmocean) · 2015-03-13T04:15:26.209Z · LW · GW · Legacy · 11 comments
As part of a philosophy course I'm currently taking called Intelligence in Machines, Humans, and Other Animals, I have to write a <3000w essay on a topic related to intelligence. The description is here, but I've copied the important details below. I figured I might as well solicit suggestions for things to research. Realistically, though, I'm likely to optimize the essay more for passing the course than for rigour, so if you're expecting a very thorough review of something, you may be disappointed. But I suspect it will still be at least an interesting jumping-off point.
Essay Topics: pick one from A, B, or C
A. Compare intelligence in machines, humans, and other animals with respect to one of the following topics. Feel free to narrow the topic down to some more specific issue, and to consider specific machines, animals, and human capacities.
You must pick a completely different topic from your first essay - I've kept track. For example, if you wrote on one kind of imagery, you can't write on another kind of imagery.
- Perception
- Imagery
- Problem solving
- Learning (I did this for my first essay)
- Analogy
- Emotion
- Consciousness
- Action
- Language
- Creativity
- The self
How to narrow down the topic
After choosing one of the 11 topics, you can narrow it down to particular aspects and entities (human, computer, animal).
For example, you could narrow perception down to sound, the computer down to SIRI, and the animal down to dogs.
Imagery could be narrowed down to visual, auditory, etc.
Learning could be narrowed down to supervised or unsupervised, or to teaching.
Analogy could be narrowed down to intelligence test type analogies (A is to B as C is to what?).
Emotion could be narrowed down to empathy.
Etc.
Edited to add: Note that these are pretty squirrellable. E.g., last time I took "Learning" and used it to talk about (recursive) self-improvement in machines and humans (planning to post this at some point). So feel free to propose something even if you only have a vague notion of how it would fit into one of the categories.
One constraint: I need to be able to ask some sort of question and then produce evidence towards either side of it, i.e. it can't just be a review of the topic. But this too can be pretty vague; in my last essay I did "are humans or machines better suited for self-improvement?", concluding "humans for now, ultimately machines".
11 comments
Comments sorted by top scores.
comment by MalcolmOcean (malcolmocean) · 2015-03-13T04:27:30.263Z · LW(p) · GW(p)
Whoever downvoted this, I would appreciate knowing why. I don't imagine you're going to stick around long enough to do so, but figured I'd request anyway.
↑ comment by [deleted] · 2015-03-13T07:35:45.828Z · LW(p) · GW(p)
It reads too much like "Help me with my homework", and you don't give the impression that you care about the topic or have given it any thought. Therefore it seems to offer little in terms of productive discussion.
This may be wrong but it's the impression I got from reading the post.
↑ comment by Antisuji · 2015-03-13T15:40:30.608Z · LW(p) · GW(p)
That's interesting, because to me it read more like "I'm going to write something interesting about anything you like, do some research for you, and even share the results" and "as long as I have to do this assignment I might as well make it useful to someone" but maybe that's because I recognized the poster's name, read his blog, etc.
I can see how someone might interpret it this way, though.
↑ comment by MalcolmOcean (malcolmocean) · 2015-03-13T16:21:43.515Z · LW(p) · GW(p)
Yeah, I was definitely thinking of it more in this light: "I'm going to go do 10-15 hours of research and writing, and I'm offering anyone the chance to influence the topic with < 5 minutes of their own effort."
↑ comment by [deleted] · 2015-03-13T19:39:26.431Z · LW(p) · GW(p)
Thanks for clarifying - rescinded my downvote after your edit.
My own interests in these general areas relate to the intersection between language and action, particularly in psychology, where (a) embodied cognition is all the rage, and (b) theories of action, and of motor planning more specifically, are being applied to language theory (work by Pickering & Garrod especially). Similar approaches are also being applied in robotics (Cangelosi's work on embodiment especially comes to mind here).
↑ comment by diegocaleiro · 2015-03-13T17:11:57.678Z · LW(p) · GW(p)
Me too Malcolm.
comment by Curiouskid · 2015-03-13T13:03:07.544Z · LW(p) · GW(p)
I recently re-read Gwern's Drug Heuristics, and this jumped out at me:
...In other words, from the starting point of those wormlike common ancestors in the environment of Earth, the resources of evolution independently produced complex learning, memory, and tool use both within and without the line of human ancestry...
...The obvious answer is that diminishing returns have kicked in for intelligence in primates and humans in particular. (Indeed, it's apparently been argued that not only are humans not much smarter than primates, but there is little overall difference in intelligence across vertebrates. Humans lose embarrassingly on even pure tests of statistical reasoning; we are outperformed on the Monty Hall problem by pigeons and to a lesser extent monkeys!) The last few millennia aside, humans have not done well and have apparently verged on extinction before...
...The human brain seems to be special only in being a scaled-up primate brain, with close to the metabolic limit in its number of neurons...
If I had more time, I'd try to look more into the intelligence tests that are given to animals. If animals really are about as smart as we are (in some sense of the word), why are humans dominant? I think the answer to this might be something like "Humans evolutionarily stumbled upon language, then encoded this in our genes, and language allows us to reason about the world, which is something raw animal intelligence/pattern-matching cannot do."
I think it's an interesting hypothesis, but I don't know where I'd start trying to evaluate it, or how likely I think it is to be true.
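As an aside on the Monty Hall point in the quote above: that "pure test of statistical reasoning" is easy to make concrete with a small simulation. Here's a minimal sketch (assuming the standard rules: three doors, one prize, and a host who always opens a non-chosen, non-prize door) showing that always switching wins about two-thirds of the time, which is the strategy the pigeons reportedly converge on faster than people do.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of the standard Monty Hall game: three doors, one prize,
    and a host who always opens a non-chosen, non-prize door."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    choice = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != choice and d != prize])
    if switch:
        # Switch to the one remaining unopened door.
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == prize

def win_rate(switch: bool, trials: int = 100_000) -> float:
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

if __name__ == "__main__":
    print(f"stay:   {win_rate(switch=False):.3f}")   # ~0.333
    print(f"switch: {win_rate(switch=True):.3f}")    # ~0.667
```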
↑ comment by MalcolmOcean (malcolmocean) · 2015-03-13T16:41:34.221Z · LW(p) · GW(p)
Huh! Yeah, that's super interesting, but it seems like it might be hard to actually tackle. Both finding info on animal intelligence and supporting specific reasons for human dominance seem a little messy. I'll put it on the list of options, though :)
↑ comment by Curiouskid · 2015-03-14T16:09:49.224Z · LW(p) · GW(p)
Another question about animal consciousness that I find interesting is whether or not animals can recognize cartoons. Cartoons are abstractions/analogies of the real world. I'm curious whether this kind of abstract visual pattern recognition is something animals possess, or whether it requires human-level abstract pattern recognition. There are also some computer vision papers about classifying cartoons and about using artificially generated data-sets (since you mentioned it had to involve humans, animals, and robots).
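As a very rough illustration of the "artificially generated data-set" idea, here is a toy sketch (not drawn from any of the papers alluded to above): it generates fake "cartoon-like" images (flat colour patches) and "photo-like" images (smooth gradients plus noise), then trains a simple classifier on hand-crafted features. The generator and feature functions are made up purely for illustration, assuming numpy and scikit-learn are available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def make_cartoon(size=32, n_patches=4):
    """Cartoon-like image: a few flat-coloured rectangular patches."""
    img = np.full((size, size), rng.uniform(0, 1))
    for _ in range(n_patches):
        x0, y0 = rng.integers(0, size - 8, size=2)
        w, h = rng.integers(4, 8, size=2)
        img[y0:y0 + h, x0:x0 + w] = rng.uniform(0, 1)
    return img

def make_photo(size=32):
    """Photo-like image: smooth gradient plus pixel-level noise."""
    gx, gy = np.meshgrid(np.linspace(0, 1, size), np.linspace(0, 1, size))
    return 0.5 * gx + 0.3 * gy + 0.2 * rng.normal(0, 0.15, (size, size))

def features(img):
    """Hand-crafted features: gradient energy, value spread, distinct values."""
    dy, dx = np.gradient(img)
    return [np.mean(np.abs(dx) + np.abs(dy)),
            np.std(img),
            len(np.unique(np.round(img, 2)))]

X = np.array([features(make_cartoon()) for _ in range(300)]
             + [features(make_photo()) for _ in range(300)])
y = np.array([0] * 300 + [1] * 300)  # 0 = cartoon, 1 = photo

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Real work in this area would of course use actual image datasets and learned features rather than toy statistics like these; this is only meant to make the synthetic-data framing concrete.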
comment by Xerographica · 2015-03-13T18:51:15.756Z · LW(p) · GW(p)
Not sure if this will be of any use to you but... figured I'd let you decide that!
Walking upright allowed our ancestors to carry more resources greater distances. But not all our ancestors were equally "smart" at figuring out what to carry. It seems pretty straightforward that being better at calculating the best (most valuable) bundles conferred greater fitness. A group that could figure out the best combination of food and weapons to carry would have a greater chance of survival compared to any group that carried too much of one at the expense of the other. Voila! Here we are... Homo sapiens.
Right now there's some concern regarding the rise of super intelligent robots. In order to beat us, robots have to be better than we are at determining which bundles to carry. If a robot leaves home but forgets to take a spare battery with it... then it's probably not going to "win".
The process by which humans have come to be better at valuation seems pretty clear to me. Individuals that weren't that great at valuations were removed from the gene pool. But with robots... it's not that clear to me. How do robots become better than we are at valuation? What does that process entail? If I want to upgrade my computer then I have to buy the necessary components. How does a moderately smart robot buy the necessary components it needs to upgrade itself? Are we guessing that it has a job? Are we guessing that it robs a bank? Are we guessing that we'll hire robots to protect banks from thieving robots?
And, if robots do become better than we are at valuation... then should we really be concerned that they will waste us? Right now we waste each other. If robots waste us too then they are no better at valuation than we are. They'll fit right in with us. Some will end up in jail and the worst ones will end up in congress.
If robots don't waste us, then they are better at valuation than we are.
So it's hard for me to imagine a realistic scenario where robots 1. are better at valuation than we are and 2. waste us.
If you're interested in "my" theory of human intelligence... What Do Coywolves, Mr. Nobody, Plants And Fungi All Have In Common?