Comments
Well, if we really wanted to other-optimize we'd try to change your outlook on life, but I'm sure you get a lot of such advice already.
One thing you could try is making websites to sell advertising and maybe Amazon clickthroughs. You would have to learn some new skills and have a little bit of discipline (and have some ideas about what might be popular). You could always start with the games you are interested in.
There's plenty of information out there about doing this. It will take a while to build up the income, and you may not be motivated enough to learn what you need to do to succeed.
"Useful" is negatively correlated with "Correct theory"... on a grand scale.
Sure, having a correct theory has some positive correlation with "useful".
Which is it?
I think the furthest you can take this line of thought is to point out that lots of things are useful even when we don't have a correct theory of how they work. We have other ways to guess that something might be useful and worth trying.
Having a correct theory is always nice, but I don't see that our choice here is between having a correct theory or not having one.
Thank you for the detailed reply, I think I'll read the book and revisit your take on it afterward.
I suppose for me it's the sort of breathless, enthusiastic presentation of the latest brainstorm as The Answer. Also, I believe I am biased against ideas that proceed from an assumption that our minds are simple.
Still, in a rationalist forum, if one is to dismiss the content of material based on the form of its presentation without being bothered by it, one must be pretty confident of the correlation between the two. Since a few people who seem pretty smart overall think there might be something useful here, I'll spend some time exploring it.
I am wondering about the proposed ease with which we can purposefully rewire control circuits. It is counterintuitive to me, given that "bad" ones (in me at least) do not appear to have popped up one afternoon but rather have been reinforced slowly over time.
If anybody does manage to achieve lasting results that seem like purposeful rewiring, I'm sure we'd all like to hear descriptions of your methods and experience.
Not to be discouraging, but is that really the "logical" reasoning used at the time? They use the word "rationalization" for a reason. "I can always work toward my goals tomorrow instead" will always be true.
Hopefully you had fun dancing -- nothing wrong with that at all -- but it does seem odd to be so self-congratulatory about deciding to go out and party.
Yes, I'm afraid this post is kind of impenetrable, although cousin_it's contribution helped. What is "RDS"?
Also, continually saying "People should..." do this and that and the other thing might be received better if you (meaning Michael, not Vladimir) start us off by doing a little of the desired analysis yourself.
If you're wondering whether I'm aware that I can figure out how to steal software licenses, I am.
ETA: I don't condemn those who believe that intellectual property rights are bad for society or immoral. I don't feel that way myself, though, so I act accordingly.
No specific use cases or examples, just throwing out ideas. On the one hand, it would be cool if the notes one jots down could self-organize somehow, even a little bit. Now, OpenCog is supposed by its creators to be a fully general knowledge representation system, so maybe it's possible to use it as a sort of notation (like a probabilistic-logic version of Mathematica? Or maybe with a natural language front end of some kind? I think Ben Goertzel likes Lojban, so maybe an intermediate language like that).
Anyway, it's not really a product spec, just one possible way to someday use machines to make people smarter.
(but that was before I realized we were talking about pills to make people stop liking their favorite TV shows, heh)
Thanks for the motivation, by the way -- I have toyed with the idea of getting Mathematica many times in the past but the $2500 price tag dissuaded me. Now I see that they have a $295 "Home Edition", which is basically the full product for personal use. I bought it last night and started playing with it. Very nifty program.
If the point of this essay was to advocate pharmaceutical research, it might have been more effective to say so; that would have made the process of digesting it smoother. Given the other responses, I think I am not alone in failing to guess that this was pretty much your sole target.
I don't object to such research; a Bostrom article saying "it might not be impossible to have some effect" is weak support for a 10-IQ-point average-gain pill, but that's not a reason to avoid looking for one. Never know what you'll find. I'm still not clear what the takeaway from this essay is for a lesswrong reader, though, unless it is to suggest that we should experiment ourselves with the available chemicals.
I've tried many of the ones that are obtainable. Despite its popularity, I found piracetam to have no noticeable effect even after taking it for extended periods of time. Modafinil is the most noticeable of all; it doesn't seem to do much for me while I'm well-rested, but it does remove some of the sluggishness that can come with fatigue, although I think the results on an IQ test would be unnoticeable (maybe a 6-hour test, something to highlight endurance, could show a measurable difference). Picamilon has a subtler effect that I'm not sure how to characterize. I'm thinking of trying Xanthinol Nicotinate, but have not yet done so.

Because of the small effects I do not use these things as a component of my general lifestyle, both for money reasons and because of the general uncertainty of long-term effects (also mild but sometimes unpleasant side effects). The effects of more common drugs like caffeine and other stimulants are probably stronger than any of the "weird" stuff, and are widely known. Thinking beyond IQ, there are of course many drugs with cognitive effects that could be useful on an occasional-use basis, but that's beyond the scope of this discussion.
I'm still baffled about what you are getting at here. Apparently training people to think better is too hard for you, so I guess you want a pill or something. But there is no evidence that any pill can raise the average person's IQ by 10 points (which kind of makes sense: it would be quite surprising if some simple chemical-balance adjustment could have such a dramatic effect on fitness). Are you researching a sci-fi novel or something? What good does wishing for magical pills do?
The issue people are having is that you start out with "sort of" as your response to the statement that math is the study of precisely-defined terms. In doing so, you throw away that insightful and useful perspective by confusing math with attempts to use math to describe phenomena.
The pitfalls of "mathematical modelling" are interesting and worth discussing, but jumbling everything together yourself and then trying to unjumble what was clear before you started doesn't help clarify the issue.
Cool stuff. Good luck with your research; if you come up with anything that works I'll be in line to be a customer!
Well, if you are really only interested in raising the average person's "IQ" by 10 points, it's pretty hard to change human nature (so maybe Bostrom was on the right track).
Perhaps if somehow video games could embed some lesson about rationality in amongst the dumb slaughter, that could help a little -- but people would probably just buy the games without the boring stuff instead.
I suppose the question is not whether it would be good, but rather how. Some quick brainstorming:
I think people are "smarter" now than they were, say, pre-scientific-method. So there may be more trainable ways-of-thinking that we can learn (for example, "best practices" for qualitative Bayesianism).
Software programs for individuals. For example, when you come across something you think is important while browsing the web, you could highlight it and have it presented back to you occasionally, sort of like a "drill" to make sure you don't forget it, or to prime association formation at a later time. Or some kind of software aid to "stack unwinding" so you don't go to sleep with 46 tabs open in your web browser. Or some short-term memory aid that works better than scratch paper. Or just biting the bullet and learning Mathematica to an expert level instead of complaining about its UI. Or taking a cutting-edge knowledge representation framework like Novamente's PLN and trying to enter stuff into it as an "active" note-taking system.
Collaboration tools -- shared versions of the above ideas, or n-way telephone conversations, or freeform "chatroom"-style whiteboards or iteratively-refined debate thesis statements, or lesswrong.com
Man-machine hybrids, like having people act as the utility function or search-order control of an automated search process (see the sketch after this list).
Of course, neural prostheses may become possible at some point fairly soon. Specially-tailored virtual environments to aid in visualization (like of nanofactories), or other detailed and accurate scientific simulations allowing for quick exploration of ideas... "Do What I Mean" interfaces to CAD programs might be possible if we can get a handle on the functional properties of human cognitive machinery...
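To make the man-machine hybrid idea a bit more concrete, here is a minimal sketch in Python; the names and the expand/is_goal callbacks are my own invention, just one way such a thing could look. It is a best-first search in which a person, rather than a coded heuristic, supplies the ordering:

    import heapq

    def human_guided_search(start, expand, is_goal):
        # Best-first search where a human, not a coded heuristic,
        # supplies the ordering: the machine does the bookkeeping,
        # the person supplies the judgment.
        def score(state):
            return float(input("How promising is %r (0-10)? " % (state,)))

        counter = 0  # tie-breaker so heapq never compares states directly
        frontier = [(-score(start), counter, start)]
        seen = {start}
        while frontier:
            _, _, state = heapq.heappop(frontier)
            if is_goal(state):
                return state
            for nxt in expand(state):
                if nxt not in seen:
                    seen.add(nxt)
                    counter += 1
                    heapq.heappush(frontier, (-score(nxt), counter, nxt))
        return None

Swap the interactive score for a coded heuristic and you have the ordinary automated version; the point of the hybrid is that the hard-to-formalize judgment stays with the human.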
Or: "Physics is not Math"
Um, so has Eurisko.
Perhaps a writeup of what you have discovered, or at least surmise, about walking that road would encourage bright young minds to work on those puzzles instead of reimplementing Eurisko.
It's not immediately clear that studying and playing with specific toy self-referential systems won't lead to ideas that might apply to precise members of that class.
You could use that feedback from the results of prior actions. Like: http://www.aleph.se/Trans/Individual/Self/zahn.txt
Interesting exercise. After trying for a while I failed completely; I ended up with hopelessly vague terms (e.g. "comfort"), and didn't even begin to scratch the surface of a real (hypothesized) utility function. If it exists, it is either extremely complicated (too complicated to write down, perhaps) or needs "scientific" breakthroughs to uncover its simple form.
The result was also laughably self-serving, more like "here's roughly what I'd like the result to be" than an accurate depiction of what I do.
The real heresy is that this result does not particularly frighten or upset me. I probably can't be a "rationalist" when my utility function doesn't place much weight on understanding my utility function.
Can you write your own utility function, or adopt the one you think you should have? Is that sort of wholesale tampering wise?
People on this site love to use fiction to illustrate their points, and a "biomoderate singularity managed by a superintelligent singleton" is very novel-friendly, so that's something!
Eliezer, in the ones I've seen so far, I don't think you come across very well. In particular, you tend to ignore the point (or substance) of your partner's arguments, which makes you look evasive or inattentive. There is also a fine line for viewers between confidence and arrogant pomposity, and you often come across on the wrong side of that line. Hopefully this desire of yours to keep doing it reflects a commitment to improving, in which case keep at it. Perhaps asking a number of neutral parties about specifics would help you train for it... if you're willing to accept that you are being watched by human beings and that the audience reacts differently to different styles of presentation (it seems you do care to some extent; for example, you wear clothing and appear well groomed during the conversations).
As others have suggested, trying to resolve or at least continue your debate with Robin Hanson would be interesting. A conversation with Ben Goertzel about AI safety issues and research protocols would be worthwhile to me but might not engage a broad audience. Most exciting would be Dale Carrico (http://amormundi.blogspot.com).
If dark arts are allowed, it certainly seems like hundreds of millions of dollars spent on AI-horror movies like Terminator are a pretty good start. Barring an actual demonstration of progress toward AI, I wonder what could actually be more effective...
Sometime reasonably soon, getting real, physical robots into the uncanny valley could start to help. Letting imagination run free: picture a stage show with some kind of spookily-competent robot... even something as simple as competent control of real (not CGI) articulated robots would be rather scary... for example, suppose the robot does something shocking like physically taking a human confederate and nailing him to a cross, blood and all. Or something less gross, heh.
Apparently you and others have some sort of estimate of a probability distribution over time that leads you to be alarmed enough to demand action. Maybe it's, say, "1% chance of hard takeoff in the next 20 years" or something like that. Say what it is and how you got to it from "conceivability" or "non-impossibility". If there is a reasoned link that can be analyzed to produce such a result, it is no longer a leap of faith; it can be reasoned about rationally and discussed in more detail. Don't get hung up on the exact number -- use a qualitative measure if you like -- but the point is how you got there.
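For illustration only -- every number below is an invented placeholder, not anyone's actual estimate -- such a chain might be decomposed like this:

    # Purely illustrative decomposition; all numbers are made up.
    p_agi_in_20y   = 0.10  # P(human-level AI is built within 20 years)
    p_recursive    = 0.50  # P(it can rapidly rewrite itself, given AGI)
    p_hard_takeoff = 0.20  # P(self-improvement goes "hard", given that)

    p = p_agi_in_20y * p_recursive * p_hard_takeoff
    print("P(hard takeoff within 20 years) = %.3f" % p)  # 0.010, i.e. 1%

The point is not these particular numbers but that each factor becomes something that can be debated on its own merits instead of an unexplained bottom line.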
I am not attempting to ridicule hard takeoff or Friendly AI, just giving my opinion about the thesis question of this post: "what can we do to efficiently change the opinion of millions of people..."
Steven, I'm a little surprised that the paper you reference convinces you of a high probability of imminent danger. I have read this paper several times, and would summarize its relevant points as follows:
1. We tend to anthropomorphise, so our intuitive ideas about how an AI would behave might be biased. In particular, assuming that an AI will be "friendly" because people are more or less friendly might be wrong.
2. Through self-improvement, AI might become intelligent enough to accomplish tasks much more quickly and effectively than we expect.
3. This super-effective AI would have the ability (perhaps just as a side effect of its goal attainment) to wipe out humanity. Because of the bias in (1), we do not give sufficient credence to this possibility, when in fact it is the default scenario unless the AI is constructed very carefully to avoid it.
4. It might be possible to do that careful construction (that is, create a Friendly AI) if we work hard at the task. It is not impossible.
The only arguments for the likelihood of imminence, despite little to no apparent progress toward a machine capable of acting intelligently in the world and rapidly rewriting its own source code, are:
A. a "loosely analogous historical surprise" -- the above-mentioned nuclear reaction analogy. B. the observation that breakthroughs do not occur on predictable timeframes, so it could happen tomorrow. C. we might already have sufficient prerequisites for the breakthrough to occur (computing power, programming productivity, etc)
I find these points all reasonable enough and imagine that most people would agree. The problem is going from this set of "mights" and suggestive analogies to a probability of imminence. You can't expect to get much traction for something that might happen someday; you have to link from possibility to likelihood. That people make this leap without saying how they got there is why observers refer to the believers as a sort of religious cult. Perhaps the case is made somewhere, but I haven't seen it. I know that Yudkowsky and Hanson debated a closely related topic on Overcoming Bias at some length, but I found Eliezer's case to be completely unconvincing.
I just don't see it myself... "Seed AI" (as one example of a sort of scenario sketch) was written almost a decade ago and contains many different requirements. As far as I can see, none of them have had any meaningful progress in the meantime. If multiple or many breakthroughs are necessary, let's see one of them for starters. One might hypothesize that just one magic-bullet breakthrough is necessary, but that sounds more like a paranoid fantasy than a credible scientific hypothesis.
Now, I'm personally sympathetic to these ideas (check the SIAI donor page if you need proof), and if the lack of a case from possibility to likelihood leaves me cold, it shouldn't be surprising that society as a whole remains unconvinced.
One thing that might help change the opinion of people about friendly AI is to make some progress on it. For example, if Eliezer has had any interesting ideas about how to do it in the last five years of thinking about it, it could be helpful to communicate them.
A case that is credible to a large number of people needs to be made that this is a high-probability near-term problem. Without that it's just a scary sci-fi movie, and frankly there are scarier sci-fi movie concepts out there (e.g. bioterror). Making an analogy with a nuclear bomb is simply not an effective argument. People were not persuaded about global warming with a "greenhouse" analogy. That sort of thing creates a sort of dim level of awareness, but "AI might kill us" is not some new idea; everybody is already aware of that -- just like they are aware that a meteor might wipe us out, aliens might invade, or an engineered virus or new life form could kill us all. Which of those things get attention from policy-makers and their advisers, and why?
Besides the weakness of relying on analogy, this analogy isn't even all that good -- it takes concerted, advanced, targeted technical effort to make a nuclear FOOM fast enough to "explode". It's a reasonably simple matter to make it FOOM slowly and provide us with electrical power to enhance our standard of living.
If the message is "don't build Skynet", funding agencies will say "ok, we won't fund Skynet" and AI researchers will say "I'm not building Skynet". If somebody is working on a dangerous project, name names and point fingers.
Give a chain of reasoning. If some of you rationalists have concluded that there is a significant probability of an AI FOOM coming soon, all you have to do is explicate the reasoning and probabilities involved. If your conclusion is justified, if your ratiocination is sound, you must be able to explicate it in a convincing way -- or else how are you so confident in it?
This isn't really an "awareness" issue -- because it's scary and in some sense plausible, it makes a great story: thus hour after hour of TV, movie blockbusters stretching back through decades, and novel after novel after novel.
Make a convincing case and people will start to be convinced by it. I know you think you have already, but you haven't.
For a continuation of the ideas in Beyond AI, relevant to this LW topic, see:
Hello all. I don't think I identify myself as a "rationalist" exactly -- I think of rationality more as a mode of thought (for example, singing or playing a musical instrument is a different mode of thought, and there are many modes of thought that are natural and appropriate for us human animals). It is a very useful mode of thought, though, and worth cultivating. It does strike me that the goals targeted by "Instrumental Rationality" are only weakly related to what I would consider "rationality", and that for most people skills like focus and confidence far surpass things like Bayesian updating for the practical achievement of goals. I also fear that our poor ability to gauge priors very often makes human Bayesianism provide more of the appearance of rationality than an actual improvement in tangible success in day-to-day reasoning.
Still, there's no denying that epistemic and instrumental rationality drive much of what we call "progress" for humanity and the more skilled we are in their use, the better. I would like to improve my own world-modeling skills.
I am also very interested in a particular research program that is not presently an acceptable topic of conversation. Since that program has no active discussion forum anywhere else (odd given how important many people here think it to be), I am hopeful that in time it will become an active topic -- as "rationality incarnate" if nothing else.
I thank all of the authors here for providing interesting material and hope to contribute myself, at least a little.
Oh, I'm a 45-year-old male software designer and researcher working for a large computer security company.