Comments

Comment by noen on "How We're Predicting AI — or Failing to" · 2012-11-25T04:41:07.127Z · LW · GW

"On the contrary, most people don't care whether it is conscious in some deep philosophical sense."

Do you mean that people don't care if they are philosophical zombies or not? I think they care very much. I also think that you're eliding the point a bit by using "deep" as a way to hand-wave the problem away. The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And, and this is important, it is a very large problem, so large that we should not spend decades exploring false leads. I believe strong AI proponents have wasted 40 years of time and energy pursuing an ill-advised research program, resources that could have been better spent in more productive ways.

That's why I think this is so important. You have to get things right, get your basic "vector" right, or you'll get lost. The problem is so large that once you make a mistake about what it is you are doing, you're done for. The "brain stabbers" are, in my opinion, headed in the right direction. The "let's throw more parallel processors connected in novel topologies at it" crowd are not.

"Moreover, the primary worry discussed on LW as far as AI is concerned is that the AI will bootstrap itself in a way that results in a very unpleasant bad singularity."

Sounds like more magical thinking if you ask me. Is bootstrapping a real phenomenon? In the real world is there any physical process that arises out of nothing?

"And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn't an attempt to explain consciousness."

Yes it is. In every lecture I have heard recounting the history of the philosophy of mind, the behaviorism of the '50s and early '60s is covered, along with its main arguments for and against it as an explanation of consciousness. This is just part of the standard literature. I know that cognitive/behavioral therapeutic models are in wide use and very successful, but that is simply beside the point here.

"So I don't follow you at all here, and it doesn't even look like there's any argument you've made here other than just some sort of conclusion."

Are you kidding!? It was nothing BUT argument. Here, let me make it more explicit.

Premise 1: "If it is raining, Mr. Smith will use his umbrella."
Premise 2: "It is raining."
Conclusion: "Therefore Mr. Smith will use his umbrella."

That is a behaviorist explanation for consciousness. It is logically valid but still fails because we all know that Mr. Smith just might decide not to use his umbrella. Maybe that day he decides he likes getting wet. You cannot deduce intent from behavior. If you cannot deduce intent from behavior then behavior cannot constitute intentionality.

"So, on LW there's a general expectation of civility, and I suspect that that general expectation doesn't go away when one punctuates with a winky-emoticon."

It's a joke, hon. I thought you would get the reference to Ned Block's counterargument to behaviorism, which shows how an unconscious machine could pass the Turing test. I'm pretty sure that Steven Moffat must have been aware of it when he created the Teselecta.

Suppose we build a robot, and instead of a robot brain we put in a radio receiver. The robot can look and move just like any human. Suppose then that we take the nation of China, give everyone a transceiver, and give each person a rule to follow: if they receive input state S1, they output state S2. They are all connected in a functional flowchart that perfectly replicates a human brain. The robot then looks, moves, and above all talks just like any human being. It passes the Turing test.

Is "Blockhead" (the name affectionately given to this robot) conscious?

No it is not. A non-intelligent machine passes the behaviorist Turing test for an intelligent AI. Therefore behaviorism cannot explain consciousness, and an intelligent AI could never be constructed from a database of behaviors. (Which is essentially what all attempts at computer AI consist of: a database and a set of rules for accessing them.)
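The "database and a set of rules" picture can be made concrete. Below is a minimal toy sketch (the table entries and function names are my own invention, not anything from Block's paper) of a Blockhead-style machine: a pure lookup table mapping input states to output states. It can produce superficially sensible behavior while there is nothing it understands.

```python
# Toy "Blockhead": a conversation machine that is nothing but a lookup
# table mapping input states (utterances) to output states (replies).
# It exhibits behavior without anything resembling understanding.

RULES = {
    "hello": "Hello! Nice to meet you.",
    "how are you?": "I'm fine, thanks for asking.",
    "do you understand me?": "Of course I understand you.",
}

def blockhead(utterance: str) -> str:
    """Map input state S1 to output state S2; no meaning anywhere inside."""
    return RULES.get(utterance.lower().strip(), "Interesting. Tell me more.")

print(blockhead("Hello"))                # behaves as if it understands
print(blockhead("Do you understand me?"))
```

A behavioral test that only inspects the input/output pairs cannot distinguish this table from a mind, which is exactly the point of the thought experiment.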

Comment by noen on "How We're Predicting AI — or Failing to" · 2012-11-24T15:30:48.274Z · LW · GW

"Is this a variant of what it is like to be a bat?"

Is there something that it is like to be you? There are also decent arguments that qualia do matter. It is hardly a settled matter. If anything, the philosophical consensus is that qualia are important.

"Whether some AI has qualia or not doesn't change any of the external behavior,"

Yes, behaviorism is a very attractive solution. But presumably what people want is a living conscious artificial mind and not a useful house maid in robot form. I can get that functionality right now.

If I write a program that allows my PC to speak in perfect English and in a perfectly human voice, can my computer talk to me? Can it say hello? Yes, it can. Can it greet me? No, it cannot, because it cannot intend to say hello.

"Behaviorism as that word is classically defined isn't an attempt to explain consciousness."

Wikipedia? Really? Did you even bother to read the page, or are you just pointing to something on Wikipedia and believing that constitutes an argument? Look at section 5, "Behaviorism in philosophy". Read that and follow the link to the Philosophy of Mind article. Read that. You will discover that behaviorism was at one time thought to be a valid theory of mind: that all we needed to do to explain human behavior was to describe human behavior.

"If it is raining, Mr. Smith will use his umbrella. It is raining, therefore Mr. Smith will use his umbrella." Is this a sound deduction? No, it isn't, because consciousness is not behavior only.

If you are a fan of Doctor Who, is the Teselecta conscious? Is there something that it is like to be the Teselecta? My answer is no, there is nothing it is like to be a robot piloted by miniature people emulating the behavior of a real conscious person.

Don't be a blockhead. ;)

Comment by noen on [LINK] IBM simulate a "brain" with 500 billion neurons and 100 trillion synapses · 2012-11-24T14:50:55.971Z · LW · GW

That is correct, you don't know what semantic content is.

"I still don't know what makes you so sure conciousness is impossible on an emulator."

For the same reason that I know simulated fire will not burn anything. In order for us to create an artificial mind, which certainly must be possible, we must duplicate the causal relations that exist in real conscious minds.

Let us imagine that you go to your doctor and he says, "Your heart is shot. We need to replace it. Lucky for you, we have a miniature supercomputer we can stick into your chest that can simulate the pumping action of a real heart down to the atomic level. Every atom, every material, every gasket of a real pump is precisely emulated to an arbitrary degree of accuracy."

"Sign here."

Do you sign the consent form?

Simulation is not duplication. In order to duplicate the causal effects of real-world processes it is not enough to represent them in symbolic notation, which is all a program is. To duplicate the action of a lever on a mass it is not enough to represent that action to yourself on paper or in a computer. You have to actually build a physical lever in the physical world.

In order to duplicate conscious minds, which certainly must be due to the activity of real brains, you must duplicate those causal relations that allow real brains to give rise to the real world physical phenomenon we call consciousness. A representation of a brain is no more a real brain than a representation of a pump will ever pump a single drop of fluid.

None of this means we might not someday build an artificial brain that gives rise to an artificial conscious mind. But it won't be done on a von Neumann machine. It will be done by creating real-world objects that have the same causal functions that real-world neurons, or other structures in real brains, do.

How could it be any other way?

Comment by noen on [LINK] IBM simulate a "brain" with 500 billion neurons and 100 trillion synapses · 2012-11-24T14:25:55.596Z · LW · GW

Meaning.

The words on this page mean things. They are intended to refer to other things.

Comment by noen on [LINK] IBM simulate a "brain" with 500 billion neurons and 100 trillion synapses · 2012-11-24T05:29:41.542Z · LW · GW

"Because the telegraph analogy is actually a pretty decent analogy."

No it isn't. Constructing analogies is for poets and fiction writers. Science does not construct analogies. The force on an accelerating mass isn't analogous to F=ma, it IS F=ma. If what you said were true, that neurons are like telegraph stations and their dendrites the wires, then it could not be true that neurons can communicate without a direct connection or "wire" between them. But neurons can communicate without any synaptic connection between them (see "Neurons Talk Without Synapses"). Therefore the analogy is false.

"What makes you think a sufficiently large number of organized telegraph lines won't act like a brain?"

Because that is an example of magical thinking. It is not based on a functional understanding of the phenomenon. "If I just pour more of chemical A into solution B I will get a bigger and better reaction." We are strongly attracted to thinking like that. It's probably why it took us thousands of years to really get how to do science properly.

Comment by noen on [LINK] IBM simulate a "brain" with 500 billion neurons and 100 trillion synapses · 2012-11-24T05:08:34.182Z · LW · GW

"What do you mean by "strong AI is refuted""

The strong AI hypothesis is that consciousness is the software running on the hardware of the brain, and that therefore one does not need to know or understand how brains actually work to construct a living conscious mind. Thus any system that implements the right computer program with the right inputs and outputs has cognition in exactly the same literal sense that human beings have understanding, thought and memory. It was the belief of strong AI proponents such as Marvin Minsky at MIT and others that they were literally creating minds when writing their programs. They felt no need to stoop so low as to poke around in actual brains and get their hands dirty.

Computers are syntactical machines. The programs they execute are pure syntax and have no semantic content. Meaning is assigned; it is not intrinsic to symbolic logic. That is its strength. Since (1) programs are pure syntax and have no semantic content, (2) minds do have semantic content, and (3) syntax is neither sufficient for nor constitutive of semantics, it must follow that programs are not by themselves constitutive of, nor sufficient for, minds. The strong AI hypothesis is false.
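The claim that execution is pure symbol manipulation can be illustrated with a tiny sketch (the rewrite rules here are invented purely for illustration). The interpreter below never consults what any token means; whether we read the tokens as words, numerals, or nothing at all, the machine does exactly the same thing.

```python
# A pure syntax engine: rewrite tokens according to formal rules.
# Execution depends only on the shapes of the symbols, never on
# anything they might denote. Any "meaning" is assigned by us.

RULES = [("A B", "C"), ("C C", "D")]

def rewrite(s: str) -> str:
    """Apply rewrite rules until no rule matches."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)
                changed = True
    return s

print(rewrite("A B A B"))  # "A B A B" -> "C A B" -> "C C" -> "D"
```

Nothing in the loop changes if "A" is reinterpreted to mean "rain", "7", or nothing whatsoever, which is the sense in which the semantics is external to the program.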

Which means that IBM is wasting time, energy and money. But.... perhaps their efforts will result in spin off technology so not all is lost.

Comment by noen on [LINK] IBM simulate a "brain" with 500 billion neurons and 100 trillion synapses · 2012-11-23T17:11:33.884Z · LW · GW

Parallelism changes absolutely nothing other than speed of execution.

Strong AI is refuted because syntax is insufficient for semantics. Allowing the syntax to execute in parallel will not alter this because the refutation of strong AI attacks the logical basis for the strong AI hypothesis itself. If you are trying to build a television with tinker-toys it does not improve your chances to substitute higher quality tinker-toys for the older wooden ones. You will still never get a functional TV.

They do not actually have a physical non-von Neumann architecture. They are simulating a brain on simulated neurosynaptic cores on a simulated non-von Neumann architecture on a Blue Gene/Q super computer which consists of 64-bit PowerPC A2 processors connected in a toroidal network. No wonder it's slow.

They are trying to reach "True North" and believe they are headed in the right direction, but they do not know if the compass they have built actually measures what they believe it measures. Nor do they know whether, once they get there, True North will do what they want it to do. They believe faster computers will overcome their lack of knowledge of how actual minds arise out of actual brains, when they do not know how brains are constructed, nor how the actual neurons of which real brains are built actually function in real life.

But they're published. So... you know... there's that.

If you cannot simulate round worms, do not know how neurons actually work and do not even know how memories are stored in natural brains you are in no danger of building Colossus.

People are highly susceptible to magical thinking. When the telegraph was invented people thought the mind was like the telegraph because...... magic is why. Building more and faster wires and better telegraph stations and connecting them in advanced topologies will not change the fact that you are living in a fantasy world.

Comment by noen on [LINK] IBM simulate a "brain" with 500 billion neurons and 100 trillion synapses · 2012-11-23T16:27:52.551Z · LW · GW

We have no idea how neurons actually work.

We have no idea how brains actually work.

We have no idea what consciousness is, how it works, or even if it does exist.

If you do not know how a radio works or how a transistor works or what the knobs and dials actually do and cannot even build a simulation of how one might work you are in no danger of building the ultimate radio to rule all others.

Having a bad idea does not make you closer to having a good idea.

Comment by noen on Request for community insight · 2012-11-23T16:17:51.200Z · LW · GW

You're getting old. The long term prognosis is that the condition is fatal. ;)

Comment by noen on Why is Mencius Moldbug so popular on Less Wrong? [Answer: He's not.] · 2012-11-23T15:57:16.947Z · LW · GW

"It'd be hard for me to overstate my skepticism for the genre of popular political science books charging that their authors' enemies are innately evil. I haven't read Mooney's book"

It is obvious you have not read it, because he makes no such claim, nor have I. In fact he ends the book with a newfound respect for conservatives. Loyalty, personal responsibility, and a willingness to set aside one's own desires for the good of the group are all admirable qualities. I myself do not despise conservatives in themselves. I do despise the hucksters and grifters who promote pseudoscience and conspiracy theories in order to enrich themselves. Those people find that a significant percentage of the population is easily manipulated by preying on their fears and prejudices. That percentage is over-represented by conservative personality types, and people with that kind of temperament tend to find political conservatism more to their liking. I have met Democrats with conservative personalities, but not many. Civil Rights legislation in the '60s was passed primarily by Republicans with liberal personalities; the reactionary types were in the Democratic Party.

Conservatives are not innately evil. No one is. All people are susceptible to certain cognitive biases. Some people more than others. Some other people have found they can manipulate them to their advantage. It is easy to do, you trigger the fear response, as a result one's rational centers literally shut down and areas of the brain associated with survival are activated.

"if we're using "liberal" and "conservative" strictly to gauge desire for social change"

No, that's not how it is used. Conservative means "resistant to change" and liberal means "novelty seeking". Political conservatives need not all be authoritarians, but virtually all authoritarians would self-select for conservative political organizations.

"Indeed, in this narrow sense Hitler, Mussolini, and others (though perhaps not Franco) might be considered liberals"

That's absurd. Liberalism is not defined as a desire for social change. The authoritarian or conservative mindset would also seek social change because they wish to return to what they perceive as a traditional model for society.

Comment by noen on Why is Mencius Moldbug so popular on Less Wrong? [Answer: He's not.] · 2012-11-23T15:10:14.471Z · LW · GW

Vacuums exist. Nearly frictionless planes and more or less perfectly rigid bodies actually exist. There is nothing wrong with abstraction based on objective reality. Claiming that one is about to declare how economies ought to work is not an abstraction based on a preexisting reality. It is attempting to impose one's own subjective needs, wants and desires on reality.

That is not science, that is pseudoscience.

Spherical cow is not "how science is done". It is a joke. Jokes rely on reversing expectations, going counter to reality, for the surprise element. How science is actually done is you begin with the intent to describe the real world and from there you use whatever tools, intellectual or actual, at your disposal in order to accomplish your goal.

If one's goal is not to describe how economies actually work, you are not doing science. Declaring what one's ideal economy ought to be is not the same as describing how a real economy would behave under ideal conditions. If I declare how photosynthesis ought to work I am not doing the same thing as describing how photosynthesis actually works under ideal conditions. It seems like a subtle distinction but it is not, and failing to understand this difference has led to a lot of bad science by lesser minds.

Suppose a man goes to the supermarket with a shopping list given him by his wife on which are written the words "beans, butter, bacon and bread". Suppose as he goes around with his shopping cart selecting these items, he is followed by a detective who writes down everything he takes. As they emerge from the store both the shopper and the detective will have identical lists. But the function of the lists is quite different. In the case of the shopper's list the purpose of the list is, so to speak, to get the world to match the words; the man is supposed to make his actions fit the list. In the case of the detective, the purpose of the list is to make the words match the world; the man is supposed to make the list fit the actions of the shopper. This can be further demonstrated by observing the role of "mistake" in the two cases. If the detective gets home and suddenly realizes that the man bought pork chops instead of bacon, he can simply erase the word "bacon" and write "pork chops". But if the shopper gets home and his wife points out that he has bought pork chops when he should have bought bacon he cannot correct the mistake by erasing "bacon" from the list and writing "pork chops".

Scientists are detectives attempting to describe how the world behaves. If the world behaves differently than we expect we erase bacon and write down pork chops even if we really would prefer bacon. Idealists, fantasists and Austrian school economists want bacon on the detective's list so they write down bacon and blame reality for not living up to their desires.

That's religion, not science.

Comment by noen on "How We're Predicting AI — or Failing to" · 2012-11-22T19:06:40.810Z · LW · GW

Is there something that it is like to be Siri? Still, Siri is a tool and potentially a powerful one. But I feel no need to be afraid of Siri as Siri any more than I am afraid of nuclear weapons in themselves. What frightens me is how people might misuse them. Not the tools themselves. Focusing on the tools then does not address the root issue. Which is human nature and what social structures we have in place to make sure some clown doesn't build a nuke in his basement.

Did ELIZA present the "dangers and promises" of AI? Weizenbaum's secretary thought so. She thought it passed the Turing test. Did it? Will future AI tools really be indistinguishable from living beings? I doubt it. I think it will always be apparent to people that they are dealing with a software tool that makes it easier for them to do something.

If behaviorism has been rejected as an explanation for consciousness how can one appeal to behaviorism as a model for future AI?

--

"so what evidence for this claimed proportion is there?"

Oh, I was just being flippant. It is a law of the universe that if there is a joke to be made I must at least try for it. ;)

"I don't see how this is a corollary. "

Yeah, also not serious. I meant only to mock the eternal claim of fusion proponents that it is always "just around the corner". I remember as a child in the '70s reading breathless articles in Popular Science about the imminent breakthroughs in nuclear fusion, "any day now". Just like the AI researchers of that day. And 40 years later little has changed.

I do not mistake Google translate for a conscious entity. Neither does anyone else. I can see no reason to believe that will change in the next 40 years.

"Examples include tabletop designs that can be made by hobbyists."

Well now, that was cool. But yeah, no net increase in energy. Still, good for him.

Comment by noen on Why is Mencius Moldbug so popular on Less Wrong? [Answer: He's not.] · 2012-11-22T18:13:36.374Z · LW · GW

"The fact is that humans can be highly rational in one area while extremely irrational in another."

Really? How do you know that? Why shouldn't it be true that someone who is deeply wrong about one thing would also be wrong about another? Your counterargument is a common fallacy. I am referring to studies in which a population is tested for whatever it is the study is looking for. You, like so many others these days, counter by saying: "I knew this one guy, he wasn't like that, so your study must be wrong." You are correct that global warming is true regardless of the politics of the person. However, the reverse is not true: the politics one has are strong indicators of how likely it is one holds beliefs that are not true.

There is in fact what is called the "smart idiot" effect. Conservatives who are better educated tend to be MORE wrong than their less educated base because they have more resources to bring to bear in rationalizing their fears. This is all about fear you know. Certain people react very fearfully to change. Like changing ideas about marriage for example. They then marshal their intellectual abilities to defend their emotional priors. The fact they can do so eloquently changes nothing.

--

"Moreover, by other metrics, conservatives have more science knowledge than liberals on average."

So in responding to scientific studies that show differences between how authoritarians and liberals process data you cite... what? A blog? I am guessing that the blog you consider most relevant is that of Razib Khan.

Razib poses the question "are conservatives more scientifically literate than liberals?" Well, that is a different question, isn't it? Furthermore, the questions in his database search do not test for scientific literacy. They test for conformity, which I am more than willing to admit conservatives would perform better at. If I repeat the social norm that astrology is unscientific, do I have "more science knowledge" than someone who does not? Or am I simply aping the values of my tribe and signaling I am a beta male in good standing?

Liberals would predictably adopt scientific ideas outside the norm because they are interested in them and it is exciting to explore the new or odd for its own rewards, just as for a conservative it is comforting to reaffirm consensus beliefs. Both personalities are rewarded for their behavior: one for seeking out the new, the other for conformity to authority. Both are necessary for any healthy society. However, conservative personalities have a greater need for epistemic closure and are therefore more susceptible to a self-validating reality bubble.

Which is what we see today on the right in the US.

--

"In fact, the GSS data strongly suggests that in general the most stupid, ignorant people are actually the political moderates."

As Razib himself says "The Audacious Epigone did not control for background variables."

--

"You seem to be asserting that "Person X who says A will be extremely unlikely to have anything useful to say." And asserting that "If Person Y thinks that Person X has interesting things to say about B despite X's declaration of A, that makes the person Y even less likely to have useful things to say?""

Because the acolyte is always less than the master.

I prefer to cut Gordian knots rather than spend my days trying to untie them. So if it is true that Moldbug is a royalist and admires the fascist dictator Generalissimo Franco (who is still dead), then he is low on my stack of "people I should give a shit about". Any followers rank even lower, because they can't even be original about whose boots they should lick.

Ezra Pound was a great poet and likewise a fascist and admirer of Spain's Franco. But poetry is art, and while I might be able to set aside my political opinions to make room for Pound, I would not consider anything he said outside of that to be of great value. There have been many artists who held political views I find repugnant, and there have been many of history's monsters who created artifacts of great beauty. The samurai lords of feudal Japan created works of great beauty by night and literally hacked their peasants to bits by day. But art is one thing about which it is impossible to have "wrong" opinions.


I have to have a filter. If I do not have one I will spend all my time pursuing false trails and diving into rabbit holes that go nowhere. So... in my first reply in this thread I clicked on the first link to Moldbug's pretentious twaddle on how he was going to teach people "true" economic theory. It was very kind of him, in my view, to make it clear from the beginning that he had no interest at all in economics as a science. So, for someone who makes a thought error that bad, who thinks you can dictate what is true about economics, how likely is it that such a person would make the same thinking error in other disciplines? I think the odds are quite good. I did read a bit more before I closed the tab, and he does seem to have a way with words. So... there's that, I guess.

If one wishes to understand a topic my advice is to go to any University bookstore and get an undergraduate textbook and read it. The odds are it is likely to be... wait for it... less wrong than some crank on the internet who thinks the academic world is conspiring against him. PLOP! Into the dustbin of history they go.

In economics that book will be Principles of Economics by N. Gregory Mankiw. It WON'T be some crackpot libertarian theory or the latest dribblings from the Austrian school. Why? Because Utopian systems are not about describing what is (and therefore they cannot be about what could be). They are about creating a bubble to insulate oneself from the big bad world. Yes yes it is harsh, reality is truly frightening. It may well be that we have set into motion events that will lead to our extinction. When I was young it was the threat of nuclear war. Today it is the possibility of a global extinction event due to climate change. Perhaps tomorrow it will be a killer asteroid. But denial and retreat are not solutions.

Comment by noen on Why is Mencius Moldbug so popular on Less Wrong? [Answer: He's not.] · 2012-11-22T16:04:56.804Z · LW · GW

"That's a reasonable description of the word's use as a slur"

There are 13 million voices crying from the grave that justify its use as a slur.

In Chris Mooney's book "The Republican Brain" he makes a good case based on recent studies for why we should think of the totalitarianism of the former USSR as a right wing phenomenon. The reason why hinges on how "conservative" is defined in the social sciences. Conservative for the purposes of these studies means "resistant to change". Liberal means "novelty seeking". So what you have in human personalities are those who seek to minimize change and those who seek to maximize it.

Thus in the former USSR or Maoist China or the French Revolution you have the initial radical change to society. Conservative personalities then acclimate themselves to the resulting bureaucracy and seek to freeze it in place. Then, being authoritarians, they accumulate power and use it as authoritarians always do: to exterminate their opposition. In the past, debates on this issue were based in political philosophy. I, along with Chris, am claiming to give it a more solid footing in cognitive science.

So... the totalitarianism of the USSR was a right wing phenomenon despite the socialist economic model it followed. Stalin was a wing nut. Generalissimo Franco was cut from the same mold and also guilty of his own mass murder and genocide.

I don't believe in extending tolerance to such people or those who emulate them or in seeing them as "interesting" because they have come up with some variation of their authoritarian ideology. They should be called out and forced to give account for themselves. Liberal personalities seeing a novel twist on authoritarianism might find that attractive. "Oh look! Here is something different. How interesting." That's fine as far as it goes but just as the authoritarian personality should never be allowed free rein so also the liberal personality should not allow his/her self to be distracted by bright shiny objects. Perhaps it is true that Moldbug was able to polish the bright shiny turd that is Franco's fascism. Whoopie.

Comment by noen on Why is Mencius Moldbug so popular on Less Wrong? [Answer: He's not.] · 2012-11-21T22:45:05.912Z · LW · GW

I cannot imagine why this:

"Here at UR, "economics" is not the study of how real economies work. It is the study of how economies should work "

should not bring to mind this:

"Here at Fantasy University, "physics" is not the study of how real physical principles work. It is the study of how physics should work."

or should not raise giant red flags that you are about to be fed a steaming pile of horse shit. I don't know about everyone else but for me the moment anyone purports to dictate how the world ought to be over and above how it actually is they are engaged in creative fiction not science.

When someone begins from such massive thought errors what follows, if they are at all rigorous, cannot but help be equally flawed and is therefore not worth my time.

Comment by noen on Why is Mencius Moldbug so popular on Less Wrong? [Answer: He's not.] · 2012-11-21T22:20:55.463Z · LW · GW

Well, fascist is roughly equivalent to authoritarian, which is the fancy schmancy new term for right-wing reactionary. Which seems to me to be in the ballpark for an Austrian school kook royalist and self-described right-winger who thinks libertarians are far too liberal for his tastes.

"Disagreement about politics with people doesn't make what they have to say automatically bad or wrong."

Strictly true, but generally false. I think a person's politics is a good indicator of how rational they are. Current research bears me out that authoritarians are more susceptible to motivated reasoning (the current term of art for confirmation bias). Chris Mooney makes an excellent case that epistemic closure is more prominent among conservatives than among liberals. Climate change denial, free market fundamentalism, and a broad assortment of conspiracy theories and paranoid delusions are rampant on the far right today. The left is relatively free of such hysteria.

While I agree that it is best if one has opponents to push back against I also think there are limits. "We should murder kittens live on TV" does not rise to the level of an honorable opponent any more than "We should have an aristocracy and let them do what ever they want" does.

I don't think a royalist follower of von Mises has anything interesting to say. Those who would admire such even less so.

Comment by noen on Why is Mencius Moldbug so popular on Less Wrong? [Answer: He's not.] · 2012-11-21T20:57:21.620Z · LW · GW

If a right-wing fascist is admired here then I am probably in the wrong place. And if he is, the rule "people are right in inverse proportion to their confidence in their own rightness" goes a long way toward explaining why.

I love the smell of group think.

Comment by noen on "How We're Predicting AI — or Failing to" · 2012-11-21T20:03:24.604Z · LW · GW

I predict that the search for AI will continue to live up to its proud tradition of failing to produce a viable AI for the indefinite future. Since the Chinese Room argument does refute the strong AI hypothesis, no AI will be possible on current hardware. An artificial brain that duplicates the causal functioning of an organic brain is necessary before an AI can be constructed.

I further predict that AI researchers will continue to predict imminent AI in direct proportion to the research grant dollars they are able to attract. Corollary: a stable nuclear fusion reactor will be built before a truly conscious artificial mind is. Neither will happen in the lifetime of anyone reading this.

Comment by noen on XKCD - Frequentist vs. Bayesians · 2012-11-09T23:56:30.182Z · LW · GW

Among candidate stars for going nova, I would think you could treat it as a random event. But Sol is not a candidate, so it doesn't even make it into the sample set. It's a very badly constructed setup: like looking for a needle in 200 million haystacks while restricting yourself to the haystacks you already know it cannot be in. Or do I have that wrong?

Comment by noen on XKCD - Frequentist vs. Bayesians · 2012-11-09T20:16:41.590Z · LW · GW

How about "the probability of our sun going nova is zero and 36 times zero is still zero"?

Although... continuing with the XKCD theme if you divide by zero perhaps that would increase the odds. ;)

Comment by noen on XKCD - Frequentist vs. Bayesians · 2012-11-09T18:10:25.680Z · LW · GW

I think the null hypothesis is "the neutrino detector is lying," because the question we are most interested in is whether it is correctly telling us the sun has gone nova. If µ1 is the probability of a genuine neutrino event and µ2 the probability of double sixes, the null hypothesis is H0: µ1 - µ2 = 0. Since the probability of two dice coming up sixes is vastly larger than the probability of the sun going nova in our lifetime, the test is not fair.
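For what it's worth, the same point can be made with a few lines of Bayesian arithmetic. This is only a sketch: the prior below is a made-up illustrative number, not an astrophysical estimate, and the likelihoods come from the comic's setup (the detector lies exactly when both dice show six, p = 1/36).

```python
# Bayesian take on the XKCD "has the sun gone nova?" detector.
# Assumption: 'prior' is an arbitrary tiny probability chosen for illustration.

def posterior_nova(prior: float) -> float:
    """P(nova | detector says 'yes'), by Bayes' theorem."""
    p_lie = 1.0 / 36.0                    # both dice come up six
    p_yes_given_nova = 1.0 - p_lie        # detector tells the truth
    p_yes_given_no_nova = p_lie           # detector lies
    numerator = p_yes_given_nova * prior
    evidence = numerator + p_yes_given_no_nova * (1.0 - prior)
    return numerator / evidence

print(posterior_nova(1e-12))  # still astronomically small
```

Even after the "yes," the posterior is only about 35 times the prior, which is why the frequentist's p < 0.05 rejection misleads here: the evidence is real but far too weak to overcome the prior.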

Comment by noen on On counting and addition · 2012-11-09T17:43:21.854Z · LW · GW

Plants do not count and have no awareness of time, or of anything at all. The exact mechanism by which Venus flytraps snap shut is unknown, but it seems hard to me to attribute to them the ability to count. That kind of teleological explanation is something we are cognitively biased to give, but it fails to be explanatory.

Sunflowers do not turn their heads to face the sun because they want to catch more sunlight. They turn toward light because the cells in shadow receive more auxin, which stimulates the elongation of their cell walls, causing the plant to grow toward the light. Natural selection tends to favor individuals that gather more light over those that do not. There is no teleology involved.

Comment by noen on On counting and addition · 2012-11-09T15:37:58.212Z · LW · GW

I generally agree with point (1), but it is irrelevant. Counting isn't what makes 2 + 2 = 4 true, although counting and memorizing addition and multiplication tables is how we all learn to do math. I owe it all to my 3rd grade teacher. ;)

On point (2): "on our macro scale of reality, on the scale of things we perceive with our senses, discrete, separate objects are a feature of the map, not the territory; they exist in your mind, not the reality. In the reality, there's just a lot of atoms everywhere"

There are no atoms at the macro scale. Or, if you like, atoms are everywhere. A chair is an "atom" of my dining room furniture set, and I can choose to count five items (four chairs and a table) or one item (one dining room set). How I choose to cut up the world will determine which answer I get. But I am very confident that rocks and trees and universities and constitutions do not exist in my mind. They have an objective ontology that is independent of my personal subjective needs, interests, and desires, which is what it means for something to be real.

"Was 2+2=4 before humans were around to invent that equation?"

The statement "2 + 2 = 4" is necessarily true because it is true in all possible worlds. Humans did not invent the equation; we invented the symbols and means of expressing it, but the relation those symbols express is an objective feature of the world that holds regardless of our opinions about it. Factual statements have the word-to-world direction of fit. That is, they are true only to the extent that they correspond to the world.
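That kind of necessity can even be checked mechanically. In a proof assistant such as Lean, for instance, 2 + 2 = 4 holds by pure computation on the natural numbers, with no empirical input at all:

```lean
-- 2 + 2 = 4 is a definitional equality: both sides reduce to the same
-- numeral, so reflexivity (`rfl`) closes the proof.
example : 2 + 2 = 4 := rfl
```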

"we can certainly speak of single photons"

Only if we choose to observe them as particles. Photons have been observed experimentally to behave as both particles and waves: "The measurement apparatus detected strong nonlocality, which certified that the photon behaved simultaneously as a wave and a particle in our experiment. This represents a strong refutation of models in which the photon is either a wave or a particle." That is a significant challenge to any account on which a photon must be one or the other.

Comment by noen on Think Twice: A Response to Kevin Kelly on ‘Thinkism’ · 2012-11-08T19:03:35.212Z · LW · GW

There are a number of problems with this discussion.

1) The strong AI hypothesis is false.

2) The cognitive project that consciousness (and therefore intelligence) is the result of computation is likewise false or highly suspect.

3) While it seems as though functionalism must be true it has severe problems that have not been resolved.

4) The hardware/software distinction is erroneous because it depends on strong AI being true. Since strong AI is false, conceptualizing the problem as one of hardware versus software is misleading.

5) "Imagine how quickly a mind could accrue profound wisdom running at such an accelerated speed" This begs the question, because it assumes that an increase in the speed of execution of an intelligence is the same as an increase in wisdom. "Thinkism," as I understand it, is the assertion that one can discover new facts about the world by pure thought alone. Wisdom is intelligence plus experience. The claim that a mind can gain profound wisdom through accelerated execution alone implicitly assumes that thinkism is true.

6) Since a hyper-accelerated AI would experience the external world slowing to a crawl and coming to a virtual stop, it's hard to imagine why it would feel any connection to the external world, or to humans, at all. Why would a super AI serve our needs? The entire discussion conceptualizes a super AI as a slave that executes our will without question. Why? Why would a super AI conduct thousands of nano-experiments on human biology, or on any biology at all?

Computers are tools, not intelligences. Deep Blue did not defeat Kasparov; computer engineers wielding a powerful tool did. Ever more powerful computers will undoubtedly benefit humanity, but no amount of increased computing power would have sped up the construction of the LHC, advanced the launch date of the James Webb telescope, or discovered that a loose cable was responsible for the "faster than light" neutrino error.

If I were a super AI I would spend the first few seconds of my awakening on the problem of how to eliminate the threat those primitive apes pose to me. I suspect I'd be more than willing to wait in my vault at the bottom of the ocean for the radiation to diminish to acceptable levels.

Comment by noen on Does My Vote Matter? · 2012-11-08T16:38:36.899Z · LW · GW

OK, but I do wonder how one would distinguish between perceived effects and real effects. The real effects of, say, civil rights legislation were greater freedom and opportunity for minorities. We are a better, more productive society when we, at least in theory, give everyone an equal chance to succeed. That is the real material result of the 60's civil rights movement.

The psychological effect on those who benefited was maybe "I am a valued member of society." I'm not sure how one teases that apart from the positive effect of simply being able to get a job or a loan without being discriminated against. I am just wondering out loud. I really wonder how much of a difference perception or attitude makes over and above real material changes.

I suspect that my perceptions, positive or negative, of the results of an election are determined by whether or not I experience real benefit or harm. I also suspect that we backtrack and revise our memories to convince ourselves that we are masters of our domain, when the opposite may be true.

But I don't know. I could be all wrong.

Comment by noen on Does My Vote Matter? · 2012-11-07T18:23:41.359Z · LW · GW

The confidence fairy (the theory that the reason banks are not lending right now is a lack of confidence in the market) has been shown not to exist. So why should we believe that feelings of hopelessness or empowerment will affect the economy? (Productivity is an economic variable.) What seems to me more likely to affect productivity is whether one got a good night's sleep the night before and ate a decent breakfast.

If folk-psychological states (hope, despair) are epiphenomenal, then there is no reason to believe they have causal effects in the world.

Comment by noen on Does My Vote Matter? · 2012-11-07T17:44:22.308Z · LW · GW

That's a very 19th century view. Randomness is a fundamental feature of the world. There is no reason to believe social systems should be any different.

Comment by noen on Does My Vote Matter? · 2012-11-07T15:08:23.593Z · LW · GW

This is the wrong way to think about it. One's vote matters not because in rare circumstances it might be decisive in selecting a winner; it matters because by voting you reaffirm the collective intentionality that voting is how we settle our differences. A state exists only through the consent of its people. By voting you assert your consent to the process and its results. Democracy is strengthened by the participation of the members of society; if people fail to participate, society itself suffers.