Comments
That works for me too... anyone here have enough karma so that we can break this out as a separate top level post? :-)
I'd attend a DC meetup, but maybe we should push it out at least a month or so. Otherwise it would cause confusion about the Baltimore meeting, which has already been fully organized... no need to split attendance by having two meetings at the same time in two places so close to each other.
I doubt that simply donating money to charity is an efficient way to make the world a better place. There are studies questioning, for instance, how much good all the money we've given to developing nations has actually done.
It's definitely possible, I think, that creating a great video game might bring more happiness to the world than simply writing a check to a charity.
I am not saying, by the way, that being charitable is a bad idea. However, I do think you need to be strategic for it to be effective. For instance, it might be better to help a struggling neighbor or cousin by getting actively and personally involved in their problems. Or, if you have a specific skill that could be useful to a charity, that may be a better investment than just giving them money.
My point is, there is no simple, clear path to making the world a better place. We all have to actively think about how to make it happen. And it may happen in unexpected ways.
Any time we want to perform a complex activity, we need to balance our time between evaluating different strategies for performing it and actually carrying out its mundane steps. If we jump right into the activity without adequate planning (and without reevaluating our plan periodically), we may perform it inefficiently. On the other hand, if we invest too much time in planning, we end up never actually "doing it."
At its simplest level, your idea can be thought of as getting stuck in a local maximum of efficiency, when additional time spent strategizing could uncover a higher one.
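To make that concrete, here's a toy hill-climbing sketch (my own illustrative example, with made-up numbers and function names): a climber that only ever takes small locally improving steps gets stuck on the nearest bump of an invented "efficiency landscape," while one that periodically spends a step re-strategizing (sampling a distant point) usually finds the taller peak.

```python
# Toy illustration of the planning-vs-doing tradeoff (assumed, arbitrary landscape).
import random

def efficiency(x):
    # Two bumps: a small local maximum near x=2 and a taller one near x=8.
    return max(0.0, 5 - (x - 2) ** 2) + max(0.0, 9 - 0.5 * (x - 8) ** 2)

def climb(x, steps, replan_every=None):
    for step in range(1, steps + 1):
        if replan_every and step % replan_every == 0:
            candidate = random.uniform(0, 10)          # time spent strategizing: try a very different approach
        else:
            candidate = x + random.uniform(-0.3, 0.3)  # mundane local step
        if efficiency(candidate) > efficiency(x):      # only keep improvements
            x = candidate
    return x

random.seed(0)
print(efficiency(climb(2.0, 200)))                     # stuck near the small bump (~5)
print(efficiency(climb(2.0, 200, replan_every=20)))    # usually reaches the taller bump (~9)
```

The numbers aren't the point; the point is that a little time diverted from "doing" to "re-strategizing" can escape a local maximum that pure doing never would.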
I interpreted the question PG was asking as, "why is it worth considering Newcomb-like problems?"
(Of course, any philosophical idea is worth considering, but the question is whether this line of reasoning has any practical benefits for developing AI software)
Hmm... that list of projects worries me a little...
It uncomfortably reminds me of preachers on TV and radio who spend all their air time trying to convert new people, as opposed to answering the question "OK, I'm a Christian, now what should I do?" The fact that they don't address any follow-up questions really hurts their credibility.
Many of these projects seem to address peripheral/marketing issues instead of addressing the central, nitty-gritty technical details required for developing GAI. That worries me a bit.
I'd say my most valuable skill derives from the fact that I had very unusual parents; because we also moved a lot, they had a particularly strong influence on me. Consequently, the environment of my childhood was pretty unique, giving me neural patterns that deviate significantly from those of most other people.
This means I sometimes behave in ways that seem "dumb", but in other instances act in ways that seem unusually intelligent.
I excel in areas where unique neural patterns are rewarded: This includes (naturally) the stock market, some types of programming, and some types of non-fiction writing. It also means that I tend to have more success using lateral approaches to solve problems, since my atypical neurology makes it more likely that I will conceive of lateral approaches that have not yet been tried by others.
The biggest downside to this (I'm speculating here) is that success from lateral problem solving correlates less directly with overall effort, so there is less of a psychological reward for exerting a large effort. I suspect this makes lateral thinkers, like myself, tend toward lower discipline than others with comparable achievements.