Comments

Comment by Trent Fowler (trent-fowler) on Ms. Blue, meet Mr. Green · 2018-03-02T19:15:23.792Z · LW · GW

(Re-posted from Facebook):

I like this analysis and even agree with it as far as it goes. What you seem to be describing here is something I've called 'introspective scaffolding', which helps because introspection is shallow and we're not good at it.

I once did a similar sort of breakdown, comparing literary criticism to a kind of 'applied apophenia':

https://rulerstothesky.com/2016/09/15/applied-apophenia/

The problem I see here is the same problem I have with Sophisticated Theists who, in formal discussions, define God as a non-personal organizing principle or something.

Namely,

(1) That's not what most people seem to mean when they say 'God', so we're inviting confusion;
(2) Even Sophisticated Theists often don't really behave as though this is their true definition of God, if you observe them long enough.

So perhaps you think of 'quantum physics' as being a kind of metaphor for internal states, but in fact Deepak Chopra believes the moon becomes a ceaselessly flowing superposition of possibilities when the world isn't looking at it.

For the moment we may have no better way to talk about internal states than borrowing terms and concepts from various mystical traditions. But I do think we should be careful not to provide succor to genuinely silly beliefs (like Chopra's), and I think if nothing else we absolutely, positively should *stop* borrowing words from quantum physicists. We're making their jobs harder, and we're making it harder to have sensible discussions about authentically valuable spiritual experiences.

Comment by Trent Fowler (trent-fowler) on A model I use when making plans to reduce AI x-risk · 2018-02-04T20:01:31.847Z · LW · GW

For years I've been wanting to put together a research or reading group to work on value alignment. I started a meetup group, gave a number of talks on x-risk and machine ethics, and even kicked around the idea of founding my own futurist institute.

None of this went anywhere.

So: if someone in the Denver/Boulder Colorado area happens to read this comment and wants to help with some of these goals, get in touch with me on Facebook or at fowlertm9@gmail.com.

Also, I am putting together a futurist speaker series on behalf of the Da Vinci Institute, and if you'd like to talk about the value alignment problem please drop me a line.

(Unrelated: the speaker series is in part meant to advertise the Da Vinci Institute's outstanding office spaces. If you have a startup and need space, let me know.)