Interesting... all the places I've seen the word, it meant a winged unicorn*. But reading this post drove me to look it up, and I did find both definitions. Less Wrong: raising new interest in definitions of mythological creature parts! :)
*Speaking of mythological definitions, I learned somewhere to distinguish between an alicorn, which has the goat-like body, lion's tail, beard, etc. of a unicorn, vs a horned pegasus, which has horse-like features. Not sure where that came from, but it's firmly implanted in my stores of useless knowledge.
I can agree that most of "science and other fields" came out of what was called "philosophy" if you go back far enough. It just seems that once you pull out all the "science and other fields," what is left has no use for solving practical problems -- including AI. Like the pages and pages of debate I've seen here on Less Wrong about "philosophical" stuff like the nature of morality, or free will, or "zombies" with no consciousness. Obviously a lot of people feel that discussing these topics is worthwhile, but I just don't see the use of it.
In continuing to plod through the older writings here, I've seen numerous passages from Eliezer that disparage philosophy's usefulness, including these that I hit today:
Sorry - but philosophy, even the better grade of modern analytic philosophy, doesn't seem to end up commensurate with what I need, except by accident or by extraordinary competence. (http://lesswrong.com/lw/tg/against_modal_logics/)
and
I suggest that, like ethics, philosophy really is important, but it is only practiced effectively from within a science. Trying to do the philosophy of a frontier science, as a separate academic profession, is as much a mistake as trying to have separate ethicists. You end up with ethicists who speak mainly to other ethicists, and philosophers who speak mainly to other philosophers. (http://lesswrong.com/lw/pg/where_philosophy_meets_science/)
So I'm still baffled by the comment here that currently the main problems in FAI are philosophical. Is there a summary or chain of posts that spells out this change in position? Or will it just gradually emerge if I manage to read all the posts from those 2008 quotes up to the 2011 quote? Or is this just your opinion, not Eliezer's?
The definition of art begins to matter a lot when governments have bizarre laws that require spending public funds on it -- e.g. Seattle's SMC 20.32.030 "Funds for works of art" which states that "All requests for appropriations for construction projects from eligible funds shall include an amount equal to one (1) percent of the estimated cost of such project for works of art..."
Of course, the law doesn't even attempt to define what is and isn't "art". It leaves that up to the Office of Arts and Cultural Affairs... and I'm sure those folks spend PLENTY of time (also at public expense) debating exactly that question.
Yes! Thank you, Poke. I've been thinking something vaguely like the above while reading through many, many posts and replies and arguments about morality, but I didn't know how to express it. I've copied this post into a quotes file.
I'm still going through the Sequences too. I've seen plenty of stuff resembling the top part of your post, but nothing like the bottom part, which I really enjoyed. The best "how to get to paperclips" story I've seen yet!
I suspect the problem with the final paragraph is that any AI architecture is unlikely to be decomposable in a well-defined enough fashion to allow drawing those boundary lines between "the main process" and "the paperclip subroutine". And that's aside from the whole "genie" problem of defining what a Friendly goal even is in the first place, as discussed in many, many posts here.
My sense is that currently the main problems in FAI are philosophical. Skill in math is obviously very useful, but secondary to skill in philosophy...
Philosophy? Really???
My impression of philosophy has been that it is entirely divorced from anything concrete or reality-based, with no use in solving concrete, reality-based problems -- that all the famous works of philosophy are essentially elaborate versions of late-night college bull sessions, like irresistible forces vs. immovable objects, or trees falling in forests that do/don't make a sound.
After working my way through a lot of the posts here, I now think that most of philosophy comes down to semantics and definitions of terms (e.g. Eliezer's excellent analysis of the tree-sound argument), and that what remains is still entirely divorced from reality and real-world uses.
What have I missed? How does philosophy bring anything useful to the table?
On another note, I've been wanting to write a sci-fi story where a person slowly discovers they are an artificial intelligence led to believe they're human and raised on a virtual Earth. The idea is that they are designed to empathize with humanity, to create a Friendly AI. The person starts gaining either superpowers or super-cognition as the simulators become convinced the AI person will use their power for good over evil. Maybe even have some evil AIs from the same experiment to fight. If anyone wants to steal this idea, go for it.
I want to read that story! Has anyone written it yet?
Don't know how many M:TG players are still around, since I'm replying to a two-year-old post, but I found this thread very interesting. I used to play Magic (a little) and write about Magic (a lot), and I was the head M:TG rules guru for a while. The M:TG community is certainly a lovely place to see a wide variety of rationality and irrationality at work. For seriously competitive players, the game itself provides a strong payoff for being able to rapidly calculate probabilities and update them as new information becomes available.
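(As an illustration of the kind of quick probability math I mean -- not anything from the original thread -- here's a minimal Python sketch of the standard hypergeometric calculation a player might run, using hypothetical deck numbers:)

```python
from math import comb

def p_at_least_one(copies_left: int, cards_left: int, draws: int) -> float:
    """Chance of seeing at least one of `copies_left` wanted cards among the
    next `draws` cards of a `cards_left`-card deck (drawing without replacement)."""
    if copies_left <= 0 or draws <= 0:
        return 0.0
    # P(none) = C(cards_left - copies_left, draws) / C(cards_left, draws)
    p_none = comb(cards_left - copies_left, draws) / comb(cards_left, draws)
    return 1.0 - p_none

# Hypothetical opening: 4 copies of a key card in a 60-card deck, 7-card hand.
print(p_at_least_one(4, 60, 7))   # ~0.40

# "Updating" mid-game: 20 cards seen, 1 copy already drawn; chance of another
# copy in the next 3 draws, now against the smaller remaining pool.
print(p_at_least_one(3, 40, 3))   # ~0.21
```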
Greetings! I'm a relatively new reader, having spent a month or two working my way through the Sequences and following lots of links, and I finally came across something interesting to me that no one else had yet commented on.
Eliezer wrote "Those who dream do not know they dream; but when you wake you know you are awake." No one picked out or disagreed with this statement.
This really surprised me. When I dream, if I bother to think about it, I almost always know that I'm dreaming -- enough so that on the few occasions when I realize I had been dreaming without knowing it, it's a surprising and memorable experience. (Though there may be selection bias here; I could have huge numbers of dreams where I don't know I'm dreaming, but I just don't remember them.)
I thought this was something that came with experience, maturity, and -- dare I say it? -- rationality. Now that I'm thinking about it in this context, I'm quite curious to hear whether this is true for most of the readership. I'm non-neurotypical in several ways; is this one of them?