Why I think that teaching philosophy is high impact
post by Eleni Angelou · 2022-12-19
Crossposted from the EA Forum: https://forum.effectivealtruism.org/posts/9cCyPE2EDpjpJvqnF/why-i-think-that-teaching-philosophy-is-high-impact
I wrote this post quickly because it's better to share an imperfect post than to never write anything at all.
In a recent post, I mentioned some thoughts about teaching and communication, mostly drawing on some classes I taught on AI risk during my Intro to Philosophy course this fall.
I think it's easy to agree that teaching is, in principle, a way to have a high impact by directly influencing people's intellectual lives and careers. But here I want to argue that there is something special about teaching philosophy: a philosophy classroom is one of the few places in the world where all sorts of arguments can be heard and discussed, and where everyone is invited to question their experiences and their very own thoughts.
What I did this fall
For that reason, I thought that having a whole section on Science, Technology, and Society with a focus on AI was worthwhile. In particular, we discussed some bits of Chalmers's Singularity, watched a good chunk of this video by Rob Miles, and read excerpts from Ord's Precipice and Bostrom's Superintelligence.
Most of the students made very thoughtful contributions to understanding how a technological singularity could happen and what it would mean for humanity. I asked them to reflect on it as a thought experiment, as we weren't trying to produce actual forecasting results or AI timelines.
Overall, I felt that a philosophy classroom is the best place to discuss difficult and controversial topics and get everyone engaged in the conversation. Those who readily agreed that AI risk is a plausible scenario had something to argue about, and so did those who considered it unlikely or couldn't decide whether it makes sense.
I keep saying that a philosophy classroom is great for such conversations because it really encourages arguing: people have to explain why they think what they think and find the right words to articulate their thoughts, which makes communication possible and effective.
Their final essay was to construct their ideal future and discuss the role of science, technology, and philosophy in that future. Many of them seemed concerned about the role of future technologies and wanted people in the ideal future to philosophize so that technologies are beneficial for society at large (nobody wrote these exact words; I'm rephrasing the spirit of some essays focusing on that idea).
Why I did this
Two reasons:
First, I thought one of the best philosophical exercises is learning to discuss controversial topics with real-world implications, like AI risk. I wanted to prioritize cultivating their ability to think in slow motion, to argue, to play with thought experiments and wild-seeming scenarios, to filter their intuitions, and to create their own world models without deferring. For that last reason, I made sure they'd never have to memorize stuff like "what Aristotle says in Metaphysics Book A" (I did that sometimes as an undergrad, for fun!) or appeal to "great thinkers". [As a side note, a couple of them noticed that I have a thing for David Hume and cited him pretty randomly, even when it didn't make sense to do so.]
Second, I felt a moral duty to talk about why we might be living in the most important century. The shorter my timelines get, the stronger I think this moral duty becomes. But even setting aside doomy arguments, I think it's good practice to honestly ask yourself what the most important problem in your field is, construed more or less broadly, and then focus on it, either as a researcher or a teacher.
A thought for the future
While I can't say for certain that I'll teach something like this again, I'm convinced that it was worth it, and I will grab whatever opportunity I'm given not only to incorporate a section like this into future Intro to Philosophy syllabi, but also to design (and hopefully teach!) a philosophically minded course on AI risk, safety, and the wild future ahead of us.