Comments

Comment by MatthewB on Glenn Beck discusses the Singularity, cites SI researchers · 2012-08-09T05:58:46.538Z · LW · GW

Also, don't forget that humans will be improving just as rapidly as the machines.

My own studies (Cognitive Science and Cybernetics at UCLA) tend to support the conclusion that machine intelligence will never be a threat to humanity. Humanity will have become something else by the time that machines could become an existential threat to current humans.

Comment by MatthewB on Glenn Beck discusses the Singularity, cites SI researchers · 2012-08-09T05:52:02.016Z · LW · GW

He believes that the Singularity is proof that the Universe was created by an Intelligent Creator (who happens to be the Christian God), and that it is further evidence of Young Earth Creationism (YEC).

Comment by MatthewB on Glenn Beck discusses the Singularity, cites SI researchers · 2012-08-09T05:49:22.210Z · LW · GW

I think the comment that LWers suck at Politics is the more apt description.

Politics is the art of the possible, and it deals with WHAT IS, regardless of whether that is "rational."

And attempting to demand that it conform to rationality standards dictated by this community guarantees that this community will lack political clout.

Especially if it becomes known that the main beneficiaries and promoters of the Singularity have a particularly pathological politics.

Peter Thiel may well be a Libertarian Hero, but his name is instant death in even mainstream GOP circles, and he is seen as a fascist by the progressives.

Glenn Beck is seen as a dangerous and irrationally delusional ideologue by mainstream politicians.

That sort of endorsement isn't going to help the cause if it becomes well known.

It will tar the Singularity as an ideological enclave of techno-supremacists.

NO ONE at Less Wrong seems to be aware of the stigma attached to the Singularity after the performance of David Rose at the "Human Being in an Inhuman Age" conference at Bard College in 2010. I was there, and got to witness the reactions of academics and political analysts from New York and Washington DC (some very powerful people in policy circles) who sat, mouths agape, at what David Rose was saying.

When these people discover that Glenn Beck is promoting the Singularity (and Beck has some very specific agendas in promoting it, agendas that are very selfish and probably pretty offensive to the ideals of Less Wrong), they will be even more convinced that the Singularity is a techno-cult composed of some very dangerous individuals.

Comment by MatthewB on Glenn Beck discusses the Singularity, cites SI researchers · 2012-08-09T05:38:08.213Z · LW · GW

Being influential is not necessarily a good thing.

Especially when Glenn Beck's influence lies in delusional conspiracy theories, evangelical Christianity, and Young Earth Creationism.

Comment by MatthewB on Glenn Beck discusses the Singularity, cites SI researchers · 2012-08-09T05:35:32.821Z · LW · GW

Glenn Beck is hardly someone whose enthusiasm you should welcome.

He has a creationist agenda that he has found a way to support with the ideas surrounding the topic of the Singularity.

Comment by MatthewB on Glenn Beck discusses the Singularity, cites SI researchers · 2012-08-09T05:33:57.929Z · LW · GW

This is not exactly "success."

There are some populations that will pervert whatever they get their hands on.

Comment by MatthewB on Glenn Beck discusses the Singularity, cites SI researchers · 2012-08-09T05:33:06.967Z · LW · GW

Glenn Beck was one of the first TV personalities to interview Ray Kurzweil.

The interview is on YouTube, and it is very informative as to Glenn's objectives and agenda.

Primarily, he wishes to use the ideology behind the Singularity as support for "Intelligent Design." In the interview, he makes an explicit statement to that effect.

Glenn Beck is hardly "rational" as Less Wrong defines the term.

Comment by MatthewB on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) · 2010-11-08T05:37:42.571Z · LW · GW

Yes, I have read many of the various Less Wrong Wiki entries on the problems surrounding Friendly AI.

Unfortunately, I am in the process of getting an education in Computational Modeling and Neuroscience. (I was supposed to have started at UC Berkeley this fall, but budget cuts in the Community Colleges of CA resulted in the loss of two classes necessary for transfer, so I will have to wait till next fall to start. I am now thinking of going to UCSD, where they have the Institute of Computational Neuroscience (or something like that; it's where Terry Sejnowski teaches), among other things that make it an excellent choice for what I wish to study.) This rather precludes being able to focus much on the issues that tend to come up often among many people on Less Wrong, particularly those from the SIAI, who I feel are myopically focused upon FAI to the detriment of other things.

While I would eventually like to see if it is even possible to build some of the Komodo-Dragon-like superintelligences, I will probably wait until our native intelligence is a good deal greater than it is now.

This touches upon an issue that I first learned about from Ben. The SIAI seems to be putting forth the opinion that AI is going to spring fully formed from someplace, in the same fashion that Athena sprang fully formed (and clothed) from the head of Zeus.

I just don't see that happening. I don't see any Constructed Intelligence as being something that will spontaneously emerge outside of any possible human control.

I am much more in line with people like Henry Markram, Dharmendra Modha, and Jeff Hawkins, who believe that the types of minds we will be working towards (models of the mammalian brain) will trend toward Constructed Intelligences (CI, as opposed to AI) that naturally prefer our company, even if we are a bit "dull witted" in comparison.

I don't so much buy the "Ant/Amoeba to Human" comparison, simply because mammals (almost all of them) tend to have some qualities that ants and amoebas don't: they tend to be cute and fuzzy, and to like other cute/fuzzy things. A CI modeled after a mammalian intelligence will probably share that trait. That doesn't make it necessarily so, but it does seem more likely than not.

And, considering it will be my job to design computational systems that model cognitive architectures, I would prefer to work toward that end until such a time as it is shown that ANY work toward that end is dangerous enough not to do.

Comment by MatthewB on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) · 2010-11-07T16:51:04.050Z · LW · GW

I think major infrastructure rebuilding is probably closer to the case than "maintenance."

Comment by MatthewB on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) · 2010-11-07T16:48:34.069Z · LW · GW

Yes, that is close to what I am proposing.

No, I am not aware of any facts about progress in decision theory that would give any guarantees of the future behavior of AI. I still think that we need to be far more concerned with people's behaviors in the future than with AI. People are improving systems as well.

As for the Komodo dragon: you missed the point of my post, and the Komodo dragon just puts the period on it:

"Gorging upon the stew of..."

Comment by MatthewB on What I would like the SIAI to publish · 2010-11-02T06:11:30.861Z · LW · GW

From Ben Goertzel,

And I think that theory is going to emerge after we've experimented with some AGI systems that are fairly advanced, yet well below the "smart computer scientist" level.

At the second Singularity Summit, I heard this same sentiment from Ben, from Robin Hanson, and from Rodney Brooks; I heard it again from Cynthia Breazeal (at the third Singularity Summit), from Ron Arkin (at the "Human Being in an Inhuman Age" conference at Bard College on Oct 22nd ¹), and from almost every professor I have had (or will have for the next two years).

It was a combination of Ben, Robin, and several professors at Berkeley and UCSD that led me to the conclusion that we probably won't know how dangerous an AGI is until we have put a lot more time into building AI (or CI) systems that will reveal more about the problems they attempt to address. (More than one person I have heard in the last year prefers the term CGI, Constructed General Intelligence, to AI/AGI: the word "artificial" seems to imply that the intelligence is not real, while "constructed" is far more accurate.)

Sort of like how the Wright Brothers didn't really learn how they needed to approach building an airplane until they began to build airplanes. The final Wright Flyer didn't just leap out of a box. It is not likely that an AI will just leap out of a box either (whether it is being built at a huge Corporate or University lab, or in someone's home lab).

Also, it is possible that AI may come in the form of a sub-symbolic system which is so opaque that even it won't be able to easily tell what can or cannot be optimized.

Ron Arkin (from Georgia Tech) discussed this briefly at the conference at Bard College I mentioned.

MB

¹ I should really write up something about that conference here. I was shocked at how many highly educated people so completely missed the point, and became caught up in something that makes The Scary Idea seem positively benign in comparison.

Comment by MatthewB on What I would like the SIAI to publish · 2010-11-02T05:34:43.056Z · LW · GW

I agree.

I doubt you would remember this, but we talked about this at the Meet and Greet at the Singularity Summit a few months ago (in addition to CBGBs and Punk Rock and Skaters).

James Hughes mentioned you as well at a conference in NY where we discussed this very issue.

One thing that you mentioned at the Summit (well, in conversation) was that The Scary Idea tends to cause paranoia among people who might otherwise be contributing more to the development of AI, since it slows funding that could be going to AI (you also seemed pretty hostile to brain emulation, of course).

Comment by MatthewB on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) · 2010-11-02T05:26:23.001Z · LW · GW

Well... That is hard to communicate now, as I will need to extricate the problems from the specifics that were communicated to me (in confidence)...

Let's see...

1) There is a dangerous political movement in the USA that seems to prefer revealed knowledge to scientific understanding and investigation.

2) Poverty.

3) Education.

4) Hunger. (I myself suffer from this problem: I am disabled, on a fixed income, and while I am in school again and doing quite well, I still sometimes have to make choices between necessities. And I am quite well off compared to some I know.)

5) The lack of a political dialog, and the preference for ideological certitude over pragmatic solutions and realistic uncertainty.

6) The great amount of crime among the white collar crowd that goes both unchecked, and unpunished when it is exposed (Madoff was a fluke in that regard).

7) The various "Wars" that we declare on things (Drugs, Terrorism, etc.). "War" is a poor paradigm to use, and it leads to more damage than it corrects (especially in the two instances I cited).

8) The real "Wars" that are happening right now (and not just those waged by the USA and allies).

Some of these were explicitly discussed.

Some will eventually be resolved, but that doesn't mean that they should be ignored until that time. That would be akin to seeing a man dying of starvation, while one has the capacity to feed him, yet thinking "Oh, he'll get some food eventually."

And, some may just be perennial problems with which we will have to deal for some time to come.

Comment by MatthewB on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) · 2010-10-31T05:13:16.576Z · LW · GW

At the Singularity Summit's "Meet and Greet", I spoke with both Ben Goertzel and Eliezer Yudkowsky (among others) about this specific problem.

I am FAR more in line with Ben's position than with Eliezer's (probably because both Ben and I are either working or studying directly on the "how to do" aspect of AI, rather than just concocting philosophical conundrums for AI, such as Eliezer's "Paperclip Maximizer" scenario, which I find highly dubious).

AI isn't going to spring fully formed out of some box of parts. It may be an emergent property of something, but if we worry about all of the possible places from which it could emerge, then we might as well worry about things like ghosts and goblins that we cannot see (and haven't seen) popping up suddenly as a threat.

At Bard College on the Weekend of October the 22nd, I attended a Conference where this topic was discussed a bit. I spoke to James Hughes, head of the IEET (Institute for the Ethics of Emerging Technologies) about this problem as well. He believes that the SIAI tends to be overly dramatic about Hard Takeoff scenarios at the expense of more important ethical problems... And, he and I also discussed the specific problems of "The Scary Idea" that tend to ignore the gradual progress in understanding human values and cognition, and how these are being incorporated into AI as we move toward the creation of a Constructed Intelligence (CI as opposed to AI) that is equivalent to human intelligence.

Also, WRT this comment:

For another example, you can't train tigers to care about their handlers. No matter how much time you spend with them and care for them, they sometimes bite off arms just because they are hungry. I understand most big cats are like this.

You CAN train tigers and other big cats to care about their handlers ("training" is not quite the right word for it). It requires a kind of teaching that goes on from birth, but there are plenty of big cats who don't attack their owners or handlers simply because they are hungry, or for some other similar reason. They might accidentally injure a handler because they do not have the capacity to understand the fragility of a human being, but that is a lack of cognitive capacity, not a case of a higher intelligence accidentally damaging something fragile; a more intelligent mind would be capable of understanding things like physical frailty and taking steps to avoid damaging a more fragile body. But the point still stands: big cats can and do form deep emotional bonds with humans, and will even go as far as trying to protect and defend those humans (which can sometimes lead to injury of the human in its own right).

And, I know this from having worked with a few big cats, and having a sister who is a senior zookeeper at the Houston Zoo (and head curator of the SW US Zoo's African Expedition) who works with big cats ALL the time.

Back to the point about AI.

It is going to be next to impossible to solve the problem of "Friendly AI" without first creating AI systems that have social cognitive capacities. Just sitting around "Thinking" about it isn't likely to be very helpful in resolving the problem.

That would be what Bertrand Russell calls "Gorging upon the Stew of every conceivable idea."

Comment by MatthewB on "Outside View!" as Conversation-Halter · 2010-10-16T11:33:49.853Z · LW · GW

But it would also not serve the function of letting others who may struggle with certain concepts know that they are not alone in struggling.

Comment by MatthewB on Five-minute rationality techniques · 2010-10-16T11:19:23.847Z · LW · GW

Candidate 2 (that admitting one is wrong is a win for an argument) is one of my oldest bits of helpful knowledge.

If one admits that one is wrong, one instantly ceases to be wrong, or at least ceases to be wrong in the way that one was wrong (it could still be the case that the other person in the argument is also wrong, but for the purposes of this point we assume they are "correct"), because one is then in possession of more accurate (i.e. "right") information/knowledge.

Comment by MatthewB on Christopher Hitchens and Cryonics · 2010-08-09T15:19:26.579Z · LW · GW

How about Eliezer, Peter Thiel, Peter Diamandis... done. I know that Peter Diamandis would NOT be turned away by Hitchens. Now it is just a matter of getting ahold of a few millionaire/billionaire types...

Comment by MatthewB on Christopher Hitchens and Cryonics · 2010-08-09T15:17:31.826Z · LW · GW

I have had the EXACT same idea!

However, my plan was to contact his publicist through Alcor or one of the other cryonics companies (all one of them, I think).

Comment by MatthewB on Two straw men fighting · 2010-08-09T15:14:16.189Z · LW · GW

Now, I am not certain about this, but we have to examine that code before we know its outcome.

While this isn't "Running" the code in the traditional sense of computation as we are familiar with it today, it does seem that the code is sort of run by our brains as a simulation as we scan it.

As a sort of meta-process, if you will...

I could be so wrong about that though... eh...

Also, that code is really useless, except maybe as a wait function... It doesn't really do anything. (Not sure why Unknowns gets voted up in the first post above, and down below.)

Also, leaping from some code to the Entirety of an AI's source code seems to be a rather large leap.

Comment by MatthewB on The Fundamental Question · 2010-04-25T07:37:04.056Z · LW · GW

It isn't stuff that made it into the modern canon, but in the early Christian Church, myth of this type appeared all over the place, drawn from Jewish sources, in attempts to integrate it into various Christian sects.

To be fair this stuff isn't Christian mythology in the way that Adam and Eve, or Loaves and Fishes is Christian mythology. It's just religious fiction.

Isn't it ALL just religious fiction?

Comment by MatthewB on The Fundamental Question · 2010-04-25T05:17:02.397Z · LW · GW

Well, they both (according to Christian Myth) are truly bad characters.

It is unfortunate for God that Satan (Lucifer) had such a reasonable request: "Gee, Jehovah, it would certainly be nice if you let us try out that chair every once in a while." Basically, Lucifer's crime was one that is only a crime in a state where the King is seen as having divine authority to rule, and all else is seen as beneath such things (thus reflecting the Divine Order).

It was this act upon which modern Satanists seized to create a new mythology for Satanism, in which reason rebels against an order that is corrupt and tyrannical.

Comment by MatthewB on The Fundamental Question · 2010-04-25T05:11:31.542Z · LW · GW

Yes, they are "Christian" in the sense that all of the mythology and practices for their worship of Satan are derived from Christianity, and they still believe in a Christian God.

It is just that these people believe that they are defying and opposing the Christian God (Fighting for the other team). They still believe in this God, just no longer have it as the object of their worship and devotion.

This is also the more traditional form of Satanist in our society, and one which the more modern Satanist tends to oppose. The modern Satanist is a self-worshiping atheist and, as has been pointed out, tends to place everything in the context of self-interest. It is a highly utilitarian philosophy, but often marred in actual practice by ignorant fools who don't seem to understand the difference between just acting like a selfish dick and acting out of self-interest (doing things which improve one's condition in life, not things which worsen it).

Comment by MatthewB on The Fundamental Question · 2010-04-24T02:16:32.634Z · LW · GW

Depending upon the type of Satanist, yes, they are often just people looking for a high "Boo-Factor" (a term made up by many of the early followers of a musical genre called "Deathrock"; its more public name is now Goth, although that is like comparing a chain saw to a kitchen paring knife, and the "Goths" are the kitchen knife).

Many Satanists, especially those who hadn't really read much of the published Satanic literature, would just make something up themselves, and it was almost always based in Christian motifs and archetypes. The two institutions that have publicly claimed the title of "Satanist" (the Church of Satan and the Temple of Set) both reject any and all Christian theology, motifs, archetypes, symbolism, and characters as being disingenuous and twisted versions of older, healthier god archetypes. (If you read Jung and Joseph Campbell, it is not uncommon for a rising religious paradigm to hijack an older competing paradigm as its bad guys.)

As Phil has suggested, maybe a front page post will come in handy. It should be recognized that some Satanists happen to be very rational people. They are just using the symbolism to manipulate their environment (although most of the more mature ones have found more mature symbols with which to manipulate the environment and their peers and subordinates).

The types to which I was referring in my post were the Christian Satanists (people who are worshiping the Christian version of Satan), which is just as bad as worshiping the Christian God. Both the Christian God and the Christian Satan are required for that mythology to be complete.

Comment by MatthewB on The Fundamental Question · 2010-04-24T02:07:41.891Z · LW · GW

You mean, like a main page post? I'd love to.

You would be surprised at how rational the real Satanists (and their various offshoots and schisms) are (as the non-Christian-based Satanist is an atheist).

In fact, the very first schism of the Church of Satan gave birth to the Temple of Set (founded by the then head of the Army's Psychological Warfare Division), which was described as a "Hyper-Rational Belief System" (although in reality it still had some rather unfortunately insane beliefs among its constituents). The founder was very rational, though; he even had quite a bit of science behind his position. It's just that his job caused him to be a rather creepy and scary guy.

Comment by MatthewB on The Fundamental Question · 2010-04-23T09:30:00.155Z · LW · GW

A couple of points:

I could not tell from your post whether you understood that Pascal's Wager is a flawed argument for believing in ANY belief system. You do understand this, don't you (that Pascal's Wager is horribly flawed as an argument for believing in anything)?

Also, as cousin_it seems to be implying (and I would suspect as well), you seem to be exhibiting signs of the True Believer complex.

This is what I alluded to when I discussed friends of mine who would swing back and forth between Born-Again Christian and Satanist. Don't make the same mistake with a belief in the Singularity. One needn't have "Faith" in the Singularity as one would in God in a religious setting, as there are clear and predictable signs that a Singularity is possible (highly possible), yet there exists NO SUCH EVIDENCE for any supernatural God figure.

Forming beliefs is about evidence, not about blindly following something due to a feel good that one gets from a belief.

Comment by MatthewB on The Fundamental Question · 2010-04-22T07:17:39.830Z · LW · GW

That puts it into an understandable context... I can't quite relate to having to shake off Christian beliefs. I was raised by a tremendously religious mother, but around the age of 6 I began to question her beliefs, and by 14 I was sure that she was stark raving mad to believe what she did. So, I managed to avoid being brainwashed to begin with.

I've seen the results of people who have been brainwashed and who have not managed to break completely free from their old beliefs. Most of them swung back and forth between the extremes of bad belief systems (From born-again Christian to Satanist, and back, many times)... So, what you are doing is probably best for the time being, until you learn the tools needed to step off into the wilderness by yourself.

Comment by MatthewB on The Fundamental Question · 2010-04-21T02:22:57.701Z · LW · GW

It may just be me, but why do you need to find someone to follow?

I have always found forging my own path through the wilderness to be far more enjoyable, and to yield far greater rewards, than following a path, no matter how small or large that path may be.

Comment by MatthewB on Open Thread: April 2010 · 2010-04-05T03:21:55.020Z · LW · GW

Either that or painting (The latter is harder to do because the cats tend to want to help me paint, yet don't get the necessity of oppose-able thumbs ... umm...Opposeable? Opposable??? anyway....)

Since I have had sleep disorders since I was 14, I've got lots of practice at not sleeping (pity there was no internet then)... So, I either read, draw, paint, sculpt, or harass people on the opposite side of the earth who are all wide awake.

Comment by MatthewB on Open Thread: April 2010 · 2010-04-03T10:48:14.739Z · LW · GW

Are Rush Limbaugh and Glenn Beck (with their sidekick O'Reilly, who doesn't really factor much) foolishly April enough?

Comment by MatthewB on "Life Experience" as a Conversation-Halter · 2010-03-27T08:23:40.615Z · LW · GW

Sports... That requires a form of Life Experience in order to gain an informed set of opinions on the subject.

When playing something like Soccer/Football, the basic skills may be imparted through training, yet the ability to play successfully with others on a field is going to take experience in learning how the whole of a team interacts both with you and against you (speaking in the second person).

Another area where I understand that life experience cannot be communicated is in certain types of military, paramilitary, or intelligence-gathering activities. As in art, there are basic skills which must be learned through your typical learning patterns, yet the application of those skills is something I learned only through life experience (in VERY hard and dangerous situations). My would-be peers tried to tell me, and warned me as best they could. They told me that situations would come up for which there was no fixed answer, no set play, and that only a set of basic skills honed as well as could be would save my pitiful ass once I found myself in those situations. (Fortunately for me, I recalled the words of my art teacher: "You won't know why you are doing these things until after you have done them.") And, since I had paid attention to the basic skill set needed, I was able to put it to good use to save my skin and that of others when the time came. It was only by life experience that I learned about the communities and personalities involved. Those are things that cannot be learned from a book.

Just like on a sports field, there are personalities and the character of the moment that arises in a play that cannot be taught, and must be learned through doing.

Even doing those tracings that I mentioned: we got something out of them that the other students who didn't put in their time would spend years learning, and it was something that even a full explanation by our instructor wouldn't have taught us (as we all realized after having done what we were told). We learned that we needed a reflexive ability to react in order to take in a situation, rather than having to concentrate upon things that should be instinctive, and thus miss out on an opportunity to learn something we would otherwise miss.

Comment by MatthewB on "Life Experience" as a Conversation-Halter · 2010-03-21T12:13:04.380Z · LW · GW

This was the basic gist of the earlier response that I made... Only, ironically, I could only recount it as an anecdote.

I have to give it to my art instructor, though, because his lesson on gaining personal experience has carried over well into other fields. His comments temporarily ended the conversation, until we had gathered the requisite skills and experience to understand both his position and what we were doing. After we finished the assignment (and consequently the semester), those of us who did the work could understand his reasoning far better than those in the class who had not done it.

Comment by MatthewB on "Life Experience" as a Conversation-Halter · 2010-03-20T15:06:41.750Z · LW · GW

I am not sure what to make of this, because I do think that many times the "When you get to be my age..." argument is often used to shut down an opponent. But... I also think that at times it has merit.

For instance, I don't know of a single one of my fellow art students who could understand the argument for needing to do 500 tracings a week when the instructor told us, "At the end of the semester, you will understand why you needed to do them. At least, those of you who do them will understand why."

And he was right, yet most of us were angry, feeling that he could have communicated this more effectively had he tried. However, if he had done a better job of telling us why we needed to do 500 tracings a week, it is likely that fewer of us would have done them (this is supported by past evidence from when he used to give a fuller explanation to his classes: we could see in both the grades and the classes' work that they had not put in their time). So, in that case, there was a motivational factor in having us gain the experience on our own, instead of trying to gain it through proxy.

BTW, the reason that we did 500 tracings a week was much like the whole Karate Kid "Wax On", "Wax Off" thing... So that our bodies would learn the motions of drawing, leaving our brains free to think about composition or morphology of the image on which we were working. Except that it was long before the movie ever came out.

Comment by MatthewB on "Life Experience" as a Conversation-Halter · 2010-03-20T14:59:17.426Z · LW · GW

I'd also like to point out that many Homosexuals wish to have children (in one form of reproduction or another)... At least today this is the case. I cannot say if it has always been the case though.

However, you are correct. It wouldn't matter, as most people's objections to homosexuality are based upon fear and disgust. Pity that...

Comment by MatthewB on Coffee: When it helps, when it hurts · 2010-03-20T14:47:42.154Z · LW · GW

I haven't noticed that Provigil is too terribly hard to get; once doctors know that it isn't an amphetamine, they will usually prescribe it. Remember, though, that it is usually prescribed in dosages for narcoleptics, so only half a tablet is needed. Ritalin, well... I guess that you really need either to be ADD to get that, or to be willing to enter into a black market arrangement.

Comment by MatthewB on Coffee: When it helps, when it hurts · 2010-03-11T08:58:30.012Z · LW · GW

What about Spiking that coffee with Provigil or Ritalin?

Comment by MatthewB on What is Bayesianism? · 2010-02-27T07:43:49.998Z · LW · GW

Thanks Kaj,

As I stated in my last post, reading LW often gives me the feeling that I have read something very important, yet I often don't immediately know why what I just read should be important until I have some later context in which to place the prior content.

Your post just gave me the context in which to make better sense of all of the prior content on Bayes here on LW.

It doesn't hurt that I have finally dipped my toes into the Bayesian waters of academia in an official capacity, with a Probability and Stats class (which seems to be a prerequisite for so many other classes). The combined information from school and the content here have helped me get a leg up on the other students in the usage of Bayesian probability at school.

I am just lacking one bit in order to fully integrate Bayes into my life: How to use it to test my beliefs against reality. I am sure that this will come with experience.
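For what it's worth, here is the kind of minimal, worked example I have in mind: a sketch in Python with purely made-up numbers and a hypothetical belief, just to illustrate the mechanics of testing a belief against evidence.

    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    # Hypothetical belief H: "I am well prepared for the stats exam."
    # Evidence E: I scored above 90% on a practice test.

    prior = 0.5            # P(H): credence in H before seeing the evidence
    p_e_given_h = 0.9      # P(E|H): chance of that score if H is true (assumed)
    p_e_given_not_h = 0.3  # P(E|~H): chance of that score if H is false (assumed)

    # Law of total probability gives P(E)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

    posterior = p_e_given_h * prior / p_e
    print(posterior)  # 0.75: the evidence raises credence in H from 0.5 to 0.75

Each new piece of evidence then feeds the posterior back in as the next prior.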

Comment by MatthewB on "Outside View!" as Conversation-Halter · 2010-02-26T09:27:50.901Z · LW · GW

I always love reading Less Wrong. I am just sometimes confused, for many days, about what exactly I have read, until something pertinent comes along and reveals the salience of what I read; then I say, "OH! Now I get it!"

At present, I am between those two states... waiting for the "Now I get it" moment.

Comment by MatthewB on Babies and Bunnies: A Caution About Evo-Psych · 2010-02-23T11:03:21.798Z · LW · GW

I have a very adverse reaction to human babies... I want to pop them. Or something similar. They look like you could just stick a big pin in them and they'd go POP.

Bunnies are way cuter than human babies (at least to humans I think).

Comment by MatthewB on You're Entitled to Arguments, But Not (That Particular) Proof · 2010-02-19T00:53:34.976Z · LW · GW

You still can't know that they taped themselves every time they had sex. Nor can you know whether either of them had sex with someone else that wasn't taped.

Comment by MatthewB on [deleted post] 2010-02-18T10:23:27.260Z

I've noticed that many people miss Kurzweil's actual claims. For instance, I keep encountering the misconception that Kurzweil claims technical advances will become infinite, which is just silly; he never claims this. I have talked briefly with him about it, and he says that the exponential climb will probably give way to a new paradigm that changes the way things are done, rather than continuing to infinity (or approaching an asymptote).

Comment by MatthewB on [deleted post] 2010-02-18T10:19:02.245Z

According to the guy from Intel (Justin Rattner) at the 2008 Singularity Summit, Moore's Law ended in 2005/06. The discrete transistor is a thing of the past, according to his talk.

Comment by MatthewB on [deleted post] 2010-02-18T03:41:27.668Z

Couple of things... A leap ahead in computing would not necessarily mean that Moore's Law was not descriptive of the event. It could still follow the same exponential trend, yet look like a giant leap forward.

It is likely that Moore's Law will continue due to economic pressure to find newer and faster ways to compute. This may not have to do with shrinking transistor size, but may well involve other forms of computation or chip architecture.
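To illustrate the point about a leap still sitting on the trend line, here is a toy sketch in Python with made-up numbers, assuming the classic two-year doubling time:

    # Toy model: transistor count doubles every 2 years (classic Moore's Law).
    # A 4-year stall followed by one big product release looks like a giant
    # leap, yet the new part lands exactly on the original exponential curve.

    BASE = 1e9            # transistors at year 0 (hypothetical chip)
    DOUBLING_YEARS = 2.0

    def trend(year):
        """Transistor count predicted by the exponential trend at `year`."""
        return BASE * 2 ** (year / DOUBLING_YEARS)

    stalled = trend(4)    # the year-4 design, still shipping four years later
    leap = trend(8)       # a new design finally released at year 8

    print(leap / stalled)  # 4.0: a seeming "leap ahead," but still on-trend

The "leap" is only dramatic relative to the stalled product; measured against the curve itself, nothing has deviated from the trend.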

Comment by MatthewB on You're Entitled to Arguments, But Not (That Particular) Proof · 2010-02-17T02:32:25.527Z · LW · GW

Oh... I also thought that I would throw this into the mix.

When a creationist or evolution-denier says that "no one has ever seen an ape evolving into a man, or a dinosaur evolving into a bird," what they often mean is that an ape should literally turn into a man while it is alive. The more subtle creationist will merely imply that a thing that was fully ape gave birth to a thing that was fully man; I have found the two types to be about equally common.

Neither type of Creationist or evolution-denier seems to understand that were these things to occur, both would disprove the Theory of Evolution...

Also, for anyone who wishes to know how far some of these people go: William Dembski, who runs a "University" that teaches "Creation Science", has, as part of one of his classes, an assignment whereby students get credit for making posts critical of evolution and in support of creation on what they term "Hostile Internet Forums". PZ Myers has taken to deleting any posts and banning any members who are discovered to be part of these classes. Richard Dawkins' forums have yet to devise a strategy against this sort of thing...

Comment by MatthewB on You're Entitled to Arguments, But Not (That Particular) Proof · 2010-02-16T06:23:35.424Z · LW · GW

On the issue of "Have you ever seen an Ape evolving into a Human?" and the requested video tape (We get that too at RDF), I have found the following to be very helpful in showing just how stupid the claim is. Simply ask the person:

"How do you know that your father is really your father? Do you see him have sex with your mother to conceive you? How do you know that she did not have sex with someone else? Do you have a video tape to prove this?"

They, of course, will have to admit that they take it as given based upon the testimony of their parents.

But no creationist is going to let a thing like reality or evidence stand in their way. In the words of more than a few creationists, such as the founder of the Creation Museum: if reality and scripture contradict, reality is wrong and scripture is right (paraphrased, as this has been stated in more ways than I could possibly recount here).

Comment by MatthewB on You're Entitled to Arguments, But Not (That Particular) Proof · 2010-02-16T06:16:57.222Z · LW · GW

The report of creationists deploying the tactic in question (claiming that finding a transitional fossil where one was not previously known creates two new gaps that now need to be filled by other fossils) is not a Poe.

As a very active member of the Richard Dawkins Foundation Forums (RDF), I can tell you that I have seen this ploy used on more occasions than I can count.

This is in addition to people who think that evolution means there should exist the Crocoduck, the Cat-dog, and the Bird-fish (to name just a few), or that evolution means Polar Bears had the color scared off of them (attributed to one Richard Byers, notorious for Stoopid on the RDF; we never discovered what exactly scared Polar Bears white, as he was banned for violating the user agreement before he could get around to describing the process of being scared white).

Now, I must read the rest of this article; I just wanted to clear this up. If anyone is interested, I am sure that I could get exact links to the posts on RDF that make such claims. Although the posters represent the most insipid and fanatical of religious persons in the world, it remains terrifying that anyone could stay so willfully ignorant in the face of experts (there are more than a few real scientists on the RDF, in the fields of Paleontology, Evolutionary Biology, Mathematics, Computer Science, and Cog Sci).

Comment by MatthewB on How Much Should We Care What the Founding Fathers Thought About Anything? · 2010-02-14T07:59:40.478Z · LW · GW

This is exactly the interpretation that I formed of Karma after my brief exchange with... with... I forget who, but it was an Eastern European gentleman. In it, he said he had downvoted a comment for being vague and repetitious. So, I did a quick study of the comments that were strongly upvoted and discovered that the vast majority had contributed something to the dialog.

Although, not having a lot of Karma has made me rather slow to post anything as a main blog post, where Karma seems to have more weight (if I am interpreting this correctly). I do have something I have been working on, but the "Karma to Burn" does make me hesitant to post something that could send my karma score into the negative.

Comment by MatthewB on A survey of anti-cryonics writing · 2010-02-09T07:06:24.383Z · LW · GW

I find this to be a silly argument, as it assumes that not much will change about the methods of teaching, or of rejuvenation, by the time the people who have been frozen are reconstituted in one manner or another.

True, we would be antiquated and ignorant by the standards of the day, but just going into the process of freezing gives us a mindset that tells us we must be ready to abandon just about everything we know when and if we wake again. The man from 1400 discussed in the article did not have that mindset, and his case was discussed as if his freezing had been an accident rather than an act of intention.

Plus, there is another reason to thaw out the people who have been frozen: the Rule of Law. These people have all signed contracts on the understanding that when and if we develop the technology to reanimate them, we will do so. (I have not examined a contract from Alcor or another cryonics program, so this may be an implicit assumption of the process.)

If that is the Best anti-cryonics argument heard so far, then it is a lousy argument.

Edit: Also, if, when revived, the person is going to have an indefinitely long life, then any re-training would be trivial in terms of cost.

Comment by MatthewB on Back of the envelope calculations around the singularity. · 2010-02-09T02:07:03.394Z · LW · GW

As for why I don't say Life instead of Intelligent Life: that is because I think that Life itself will continue in one fashion or another regardless of what we do.

I do think that Intelligence is important now that we have it, and that may be similar to a tall man assuming that tallness is the most important virtue in some eyes (although I find the analogy a stretch: tallness is obviously a disadvantage at many times, and I could probably find a good reason to favor shortness... but that aside...).

I don't know why Intelligence would not be (or, to use a word that I hate: Why it should be) the characteristic that should be most valued. Is there a reason that Intelligence should not be the most important factor or characteristic of life?

Comment by MatthewB on BHTV: Eliezer Yudkowsky & Razib Khan · 2010-02-09T02:00:30.964Z · LW · GW

I figured that might be why... My comment was not really very useful, and I realized that, but his Green-ness did make me think "He's in the Matrix" all through the video (as you too have noticed).

He could offset this in the future by hanging a reddish-orange cloth behind him... or maybe yellowish would be best, to keep him from over-correcting the color. I used to do a lot of video work (I probably should have mentioned above that I really did know why he was green) and have had to do some really strange make-up or lighting to correct for a camera or an ambient light color that was causing bizarre skin tones (the very worst was having to make a guy nearly blue to offset an orange cast).

Comment by MatthewB on BHTV: Eliezer Yudkowsky & Razib Khan · 2010-02-08T08:48:29.558Z · LW · GW

Why is Razib Green?