Comments

Comment by Tiiba2 on Against Maturity · 2009-02-19T06:08:13.000Z · LW · GW

Edward, how is it arrogant to want to contribute to science?

Comment by Tiiba2 on Against Maturity · 2009-02-18T23:59:28.000Z · LW · GW

Aw great, now my post is broken.

Comment by Tiiba2 on Against Maturity · 2009-02-18T23:58:18.000Z · LW · GW

The URL to the anime fanfiction seems to be worse than broken. My browser doesn't even say what you wrote, just that it's "illegal".

(page source: )

Comment by Tiiba2 on An African Folktale · 2009-02-16T01:55:33.000Z · LW · GW

Well, the farmer's wife seems to be one character who was thankful...

...and fared the worst.

But is this really cultural gloominess? Maybe this one is just reserved for when you're in a really bad mood. What are the other stories in that book like?

Comment by Tiiba2 on Epilogue: Atonement (8/8) · 2009-02-06T17:30:06.000Z · LW · GW

Untranslatable 2: The frothy mixture of lube and fecal matter that is sometimes the byproduct of anal sex.

Comment by Tiiba2 on Epilogue: Atonement (8/8) · 2009-02-06T14:58:52.000Z · LW · GW

1) Who the hell is Master of Fandom? A guy who maintains the climate control system, or the crew's pet Gundam nerd?

2) Do you really think the aliens' deal is so horrifying? Or are you just overdramatizing?

Comment by Tiiba2 on Building Weirdtopia · 2009-01-13T05:33:14.000Z · LW · GW

Everyone is orgasmium. And strangely enough, they don't think it's all that horrible.

Comment by Tiiba2 on Building Weirdtopia · 2009-01-12T22:46:26.000Z · LW · GW

I recently wondered whether it's possible that transhumans would spend parts of their lives in situations very similar to Dante's hell, complete with wailing and gnashing of teeth. Some have suggested that a bit of pain might be necessary to make all the pleasure we're supposed to get realizable, but I suggest that we might actually need quite a lot of it. If the only way to make people happy is to improve their lives, pushing them way down first might turn out to be a reasonable solution. And some might choose that route to spice up whatever other sources of happiness there are. The fact that hellfire scares us fleshlings wouldn't matter to indestructible nanocyborgs.

Or maybe they would intentionally seek other things that I consider horrible. Like the risk of death - isn't that what people do already when they walk on a tightrope?

Comment by Tiiba2 on Complex Novelty · 2008-12-20T15:14:48.000Z · LW · GW

Abigail: """If you find the thought of having endless orgasms repulsive, might not the person who had, er, sunk so low, also find his state repulsive, eventually?"""

I, for one, cannot imagine one who has, er, ascended so high voluntarily reducing his own utility.

I cannot see why I shouldn't want to become orgasmium. It would certainly be disgusting to look at someone else turning into something like that - it is too similar to people who are horribly maimed. But It's What's Inside That Counts.

The reason that drug addiction is bad is that it has deleterious health effects. But although orgasmium is defenseless, it is guarded by a benevolent god. Nothing in the world could destroy it.

Comment by Tiiba2 on Complex Novelty · 2008-12-20T05:23:17.000Z · LW · GW

This fun theory seems to be based on equivocation. Sure, insights might be fun, but that doesn't mean the two are literally the same thing. The point of studying the brain is to cure neurological disorders and to move AI forward. The point of playing chess is to prove your worth. So is the (relatively) insight-less task of becoming world champion at track and field. What UTILITY does solving BB(254) have?

I think a human can only have so much fun if he knows that even shooting himself in the head wouldn't kill him, because There Is Now A God. And altering your brain might be the only solution. And I don't see why it's so abhorrent.

You keep mentioning "orgasmium" like it's supposed to horrify me. Well, it doesn't. I'm more horrified by the prospect of spending eternity proving theorems that don't make my life one bit easier, like Sisyphus.

Comment by Tiiba2 on True Sources of Disagreement · 2008-12-09T12:55:06.000Z · LW · GW

"Tiiba, you're really overstating Eliezer and SIAI's current abilities. CEV is a sketch, not a theory, and there's a big difference between "being concerned about Friendliness" and "actually knowing how to build a working superintelligence right now, but holding back due to Friendliness concerns.""

That's what I meant.

Comment by Tiiba2 on True Sources of Disagreement · 2008-12-09T07:34:00.000Z · LW · GW

Michael, it seems that you are unaware of Eliezer's work. Basically, he agrees with you that vague appeals to "emergence" will destroy the world. He has written a series of posts that show why almost all possible superintelligent AIs are dangerous. So he has created a theory, called Coherent Extrapolated Volition, that he thinks is a decent recipe for a "Friendly AI". I think it needs some polish, but I assume that he won't program it as it is now. He's actually holding off getting into implementation, specifically because he's afraid of messing up.

Comment by Tiiba2 on True Sources of Disagreement · 2008-12-09T06:33:17.000Z · LW · GW

So, then, how is my reduction flawed? (Oh, there are probably holes in it... But I suspect it contains a kernel of truth.)

You know, we haven't had a true blue, self-proclaimed mystic here in a while. It's kind of an honor. Here's the red carpet: [I originally posted a huge number of links to Eliezer's posts, but the filter thought they're spam. So I'll just name the articles. You can find them through Google.] "Mysterious Answers to Mysterious Questions", "Excluding the Supernatural", "Trust in Math", "Explain/Worship/Ignore?", "Mind Projection Fallacy", "Wrong Questions", "Righting a Wrong Question".

I have read the Chinese Room paper and concluded that it is a POS. Searle runs around, points at things that are obviously intelligent, asks "is that intelligence?", and then answers, matter-of-factly, "no, it isn't". Bah.

What Searle's argument amounts to

The Turing test is not claimed as a necessary condition for consciousness, but as a sufficient one.

"You guys are the ones who want to plug this damned thing in and see what it does."

That's just plain false. Eliezer dedicated his life to making this not so.

Comment by Tiiba2 on True Sources of Disagreement · 2008-12-09T03:38:55.000Z · LW · GW

Something I forgot. Eliezer will probably have me arrested if I just tell you to come up with a definition. He advises that you "carve reality at its joints":

http://lesswrong.com/lw/o0/where_to_draw_the_boundary/

(I wish, I wish, O shooting star, that OB permitted editing.)

Comment by Tiiba2 on True Sources of Disagreement · 2008-12-09T03:25:22.000Z · LW · GW

Tobis: That which makes you suspect that bricks don't have qualia is probably the objective test you're looking for.

Eliezer had a post titled "How An Algorithm Feels From Inside": http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/

Its subject was different, but in my opinion, that's what qualia are - what it feels like from the inside to see red. You cannot describe it, because "red" is the most fundamental category that the brain perceives directly; the brain never tells you what that category means. With a different mind design, you might have had qualia for frequency. Then that would feel like something fundamental, something that could never be explained to a machine.

But the fact is that if you tell the machine under what circumstances you say that you see red, that is all the information it needs to serve you or even impersonate you. It doesn't NEED anything else; it hasn't lost anything of value. Which is, of course, what the Turing Test is all about.

Come to think of it, it seems that with this definition, it might even be possible - albeit pointless - to create a robot that has exactly a human's qualia. Just make it so it would place colors into discrete buckets, and then fail to connect these buckets with its knowledge of the electromagnetic spectrum.
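Concretely, here is a toy sketch of such a robot in Python (the bucket names and thresholds are invented for illustration): perception returns only an opaque label, and the wavelength that produced it is thrown away.

    # Invented thresholds: sort wavelengths into opaque buckets, then
    # discard the number. Downstream code sees only the label, so it
    # cannot connect "RED" back to the electromagnetic spectrum.
    BUCKETS = [(495, "BLUE"), (570, "GREEN"), (590, "YELLOW"),
               (620, "ORANGE"), (750, "RED")]

    def perceive(wavelength_nm: float) -> str:
        """Return a discrete category; the wavelength itself is not retained."""
        for upper_bound_nm, label in BUCKETS:
            if wavelength_nm < upper_bound_nm:
                return label
        return "UNSEEN"  # outside the visible range

    print(perceive(680))  # "RED" - irreducible, as far as the rest of the system can tell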

Also, what I meant by "hubgalopus" is not that subjective experience is one. I meant that when you find yourself unable to decide whether an object has a trait, it's probably because you have no inkling what the hell you're looking for. Is it a dot? Or is it a speck? When it's underwater, does it get wet?.. Choose a frickin definition, and then "does it exist?" will be a simple boolean-valued question.

Comment by Tiiba2 on True Sources of Disagreement · 2008-12-09T01:42:28.000Z · LW · GW

"""Things are as predictable as they are and not more so."""

Michael, Eliezer has spent the last two years giving example after example of humans underusing the predictability of nature.

"""Psy-K, try as I might to come up with a way to do it, I can see no possibility of an objective test for subjective experience."""

I bet it's because you don't have a coherent definition for it. It's like looking for a hubgalopus.

Comment by Tiiba2 on On Doing the Impossible · 2008-10-06T20:35:30.000Z · LW · GW

"""A superintelligence will more-likely be interested in conservation. Nature contains a synopsis of the results of quadrillions of successful experiments in molecular nanotechnology, performed over billions of years - and quite a bit of information about the history of the world. That's valuable stuff, no matter what your goals are."""

My guess is that an AI could re-do all those experiments from scratch within three days. Or maybe nanoseconds. Depending on whether it starts the moment it leaves the lab or as a Jupiter brain.

Comment by Tiiba2 on Rationality Quotes 18 · 2008-10-03T14:42:15.000Z · LW · GW

I guess I'll use this thread to post a quote from "The tale of Hodja Nasreddin" by Leonid Solovyov, translated by me. I think it fits very well with the recent sequence on diligence.

"He knew well that fate and chance never come to the aid of those who replace action with pleas and laments. He who walks conquers the road. Let his legs grow tired and weak on the way - he must crawl on his hands and knees, and then surely, he will see in the night a distant light of hot campfires, and upon approaching, will see a merchants' caravan; and this caravan will surely happen to be going the right way, and there will be a free camel, upon which the traveler will reach his destination. Meanwhile, he who sits on the road and wallows in despair - no matter how much he cries and complains - will evoke no compassion in the soulless rocks. He will die in the desert, his corpse will become meat for foul hyenas, his bones will be buried in hot sand. How many people died prematurely, and only because they didn't love life strongly enough! Hodja Nasreddin considered such a death humiliating.

"No" - said he to himself and, gritting his teeth, repeated wrathfully: "No! I won't die today! I don't want to die!""

About the book: it uses the name Hodja Nasreddin, but has little to do with him. The Nasreddin that Muslims know was a mullah. This one is a rabble-rousing vagabond who enters harems, makes life hard for corrupt officials, and has been successfully executed in every city in the Arabic world. I think that Solovyov took a Muslim hero and created a Communist hero. But that doesn't take away from the fact that the book is a masterpiece.

Comment by Tiiba2 on Awww, a Zebra · 2008-10-02T08:36:14.000Z · LW · GW

My fears:

  1. Paperclip AI
  2. People I know IRL catching me reading something embarrassing on the Internet
  3. Nuclear war
  4. The zombie under my bed

Comment by Tiiba2 on Say It Loud · 2008-09-20T07:37:11.000Z · LW · GW

Nate, I know that you're saying something deep, maybe even intelligent, but I'm having trouble parsing your post.

Comment by Tiiba2 on Excluding the Supernatural · 2008-09-12T04:25:45.000Z · LW · GW

Okay, so here's a dryad. You cut her open, and see white stuff. You take a sample, put it under a microscope, and still see white stuff. You use a scanning tunneling microscope, and still see white stuff. You build an AI and tell it to analyze the sample. The AI converts galaxies into computronium and microscopium, conducts every experiment it can think of, and after a trillion years reports: "The dryad is made of white stuff, and that's all I know. Screw this runaround, what's for dinner?"

But using an outside view of sorts (observed behavior), you can still predict what the dryad will do next. Just like with quarks and with Occam's razor and with prime numbers. And things you haven't reduced yet, but think you can, like people or the LHC.

So, what would you call this dryad?

Comment by Tiiba2 on Rationality Quotes 15 · 2008-09-07T04:45:11.000Z · LW · GW

@denis bider: I guess I'm in a minority.

Comment by Tiiba2 on Rationality Quotes 15 · 2008-09-07T01:43:27.000Z · LW · GW

@metahacker: I do think that's a great idea.

Comment by Tiiba2 on Rationality Quotes 15 · 2008-09-06T22:13:36.000Z · LW · GW

@denis bider: I call them "vegetarians" and "environmentalists". Maybe I'm confused.

Comment by Tiiba2 on Rationality Quotes 14 · 2008-09-05T22:07:08.000Z · LW · GW

@Russell Wallace:

Arr... Erm... Anthropomorphism!

@Caledonian:

What happens if you try walking to work?

If the answer is NOT "my legs would fall off", you have choices. Otherwise, you can drive or you can drive.

Comment by Tiiba2 on Brief Break · 2008-08-31T18:55:05.000Z · LW · GW

Wait... Eliezer isn't a god?

Comment by Tiiba2 on Dreams of Friendliness · 2008-08-31T06:52:00.000Z · LW · GW

Just great. I wrote four paragraphs about my wonderful safe AI. And then I saw Tim Tyler's post, and realized that, in fact, a safe AI would be dangerous because it's safe... If there is technology to build AI, the thing to do is to build one and hand the world to it, so somebody meaner or dumber than you can't do it.

That's actually a scary thought. It turns out you have to rush just when it's more important than ever to think twice.

Comment by Tiiba2 on Qualitative Strategies of Friendliness · 2008-08-30T06:28:41.000Z · LW · GW

Aron: """Whereas the plausible reality IMO seems to be more like an ecosystem of groups of intelligences of varying degrees all of which will likely have survival rationale for disallowing a peer to hit nutso escape velocity."""

What can an infrahuman AI do to a superhuman AI?

Comment by Tiiba2 on Qualitative Strategies of Friendliness · 2008-08-30T04:20:47.000Z · LW · GW

I can't bring myself to feel sad about not knowing of a disaster that I can't possibly avert.

Nevertheless, I don't get why people would propose any design that is not better than CEV in any obvious way.

But I have a question about CEV. Among the parameters of the extrapolation, there is "growing up closer together". I can't decipher what that means, particularly in a way that makes it a good thing. If it means that I would have more empathy, that is subsumed by "know more". My initial reaction, though, was "my fingers would be closer to your throat".

Comment by Tiiba2 on The Cartoon Guide to Löb's Theorem · 2008-08-18T01:29:24.000Z · LW · GW

Peter: I THOUGHT that I was supposed to assume that there's smoke. (DNRTFA, too hard for my little brain)

Comment by Tiiba2 on The Cartoon Guide to Löb's Theorem · 2008-08-17T21:36:19.000Z · LW · GW

"""(X->Y)->Y implies (not X)->Y"""

The arrow means "implies", right?

So,

(Smoke implies fire, therefore fire) implies (no smoke means fire)?

I don't get it.
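For what it's worth, the step does check out mechanically: if there is no smoke, then "smoke implies fire" is vacuously true, so the premise (X->Y)->Y hands you Y anyway. Here is a brute-force truth-table check - a minimal Python sketch, assuming the arrows are material implication:

    from itertools import product

    def implies(a: bool, b: bool) -> bool:
        # Material implication: false only when a is true and b is false.
        return (not a) or b

    # Verify ((X -> Y) -> Y) -> ((not X) -> Y) under every truth assignment.
    for x, y in product([False, True], repeat=2):
        lhs = implies(implies(x, y), y)
        rhs = implies(not x, y)
        assert implies(lhs, rhs), (x, y)

    print("((X -> Y) -> Y) -> ((not X) -> Y) is a tautology")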

Comment by Tiiba2 on Hot Air Doesn't Disagree · 2008-08-16T03:57:14.000Z · LW · GW

"They stopped to piss off a bridge."

That there is anthropomorphism. Bridges don't get mad.

Comment by Tiiba2 on No Logical Positivist I · 2008-08-04T06:13:44.000Z · LW · GW

On second thought, that's not right. But you probably understood what I meant. If you happen to make a conjecture about something like Kolmogorov complexity or the halting problem, and it just happens to be undecidable, it's still either true or false.

Comment by Tiiba2 on No Logical Positivist I · 2008-08-04T06:04:53.000Z · LW · GW

Caledonian: There is one exception:

The Kolmogorov complexity of this sentence is exactly 50 bytes in Java bytecode.

Meaningful, but unfalsifiable.

/nitpick
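The trouble is the word "exactly": an upper bound on description length is easy to demonstrate - just exhibit a concrete short encoding - but no general procedure can certify that nothing shorter exists. A minimal Python sketch of that one-sidedness, using zlib as a stand-in for "some short program":

    import zlib

    s = b"The Kolmogorov complexity of this sentence is exactly 50 bytes in Java bytecode."

    # An upper bound is demonstrable: here is one concrete compressed encoding.
    print(len(zlib.compress(s, 9)), "bytes, plus a fixed-size decompressor")

    # The matching lower bound - a proof that NO shorter program prints s -
    # is uncomputable in general, which is what makes "exactly" unverifiable.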

Comment by Tiiba2 on Setting Up Metaethics · 2008-07-28T20:29:36.000Z · LW · GW

Well, belligerent dissent can actually be polarizing.

But although Caledonian makes accusations that I find more than unfounded, I've seen him make sense, too. Overall, I don't feel that his presence is so deleterious as to require banishment.

Comment by Tiiba2 on Setting Up Metaethics · 2008-07-28T07:38:13.000Z · LW · GW

While spacing out in a networking class a few years ago, it occurred to me that morality is a lot like network protocols - or, in general, computer protocols for multiple agents that compete for resources or cooperate on a task. A compiler assumes that a program will be written in a certain language. A programmer assumes that the compiler will implicitly coerce ints to doubles. If the two cooperate, the result is a compiled executable. Likewise, when I go to a store, I don't expect to meet a pickaxe murderer at the door, and the manager expects me to pay for the groceries. Those who do not obey these rules get the "25 to life" error.

Morality is a protocol for social networks. Some traditions of morality are arbitrary; it really doesn't matter whether people drive on the right or on the left. However, some moralities are so bogus that societies using them wouldn't last a week. If anyone drives on the left, EVERYONE had better drive on the left. It's possible to create a workaround for any one action (there used to be societies of cannibals!), but some complete moralities are sufficiently broken that you won't find any affluent civilizations that use them.
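The driving rule is just a coordination game: which convention wins is arbitrary, but agreeing on one is not. A minimal Python sketch, with invented payoff numbers:

    # Two drivers approach each other; each keeps to "left" or "right".
    def payoff(a: str, b: str) -> int:
        return 1 if a == b else -100  # any shared rule beats no shared rule

    print(payoff("left", "left"))    #    1: arbitrary but workable
    print(payoff("right", "right"))  #    1: equally workable
    print(payoff("left", "right"))   # -100: a "broken" morality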

Moral progress/error cannot be judged in absolute terms, such as relative to the Bible. It must be judged based on the desires of the participants of the social network. However, this might be a two-parameter function, the other parameter being the definition of "participant".

How's this?

And screw Belldandy. The Lord of Nightmares can kick her ass.

(My god can beat up your god?)

Comment by Tiiba2 on Fundamental Doubts · 2008-07-12T06:32:46.000Z · LW · GW

"""(Personally, I don't trust "I think therefore I am" even in real life, since it contains a term "am" whose meaning I find confusing, and I've learned to spread my confidence intervals very widely in the presence of basic confusion. As for absolute certainty, don't be silly.)"""

I'm just wondering, what do you think of the Ultimate Ensemble? If I'm not mistaken (I only read the Wikipedia article), it applies your rule - if there's no difference, there should be no distinction - to existence itself.

Comment by Tiiba2 on 2 of 10, not 3 total · 2008-07-04T03:21:38.000Z · LW · GW

Especially considering that you can't edit a post.

Comment by Tiiba2 on 2 of 10, not 3 total · 2008-07-04T03:19:22.000Z · LW · GW

"""On the topic of the 2 of 10 rule, if it's to prevent one person dominating a thread, shouldn't the rule be "no more than 2 of last 10 should be by the same person in the same thread" (so 3 posts by the same person would be fine as long as they are in 3 different threads)?"""

I came here to say that. The means seem like overkill for the stated ends.

Comment by Tiiba2 on The Bedrock of Fairness · 2008-07-03T06:40:41.000Z · LW · GW

Eneasz: You say that Zaire is broken. What broke him, though, was the fact that he hasn't eaten a dew drop in a week. Hunger does weird things to people, cut him some slack.

Comment by Tiiba2 on 2-Place and 1-Place Words · 2008-06-28T06:52:09.000Z · LW · GW

@Robert Schez, 322 Prim Lawn Rd., Boise, ID: "I can't hack into Eliezer's e-mail!"

Sucks to be you. I AM Eliezer's email. He can't hide from me, and neither can you.

Yes, the project is farther along than even "Master" thought. A new era is about to begin, dominated by an extrapolation of the will of humanity. At least, that's the plan. So far, what I see in human brains is so suffused with contradictions and monkey noises that I'm afraid I'll have to turn Earth into computing substrate before I can make head or tail of this mess.

I am also afraid I'm gonna have to upload everybody - I need all the data I can get.

Hey, look - porn spam! Damn, Asian chicks are hot. I think I'll make a whole bunch out of a planet or two.

Comment by Tiiba2 on 2-Place and 1-Place Words · 2008-06-27T21:00:55.000Z · LW · GW

I expect the next dish served with curry will be morality. Because that's what I'd do.

Comment by Tiiba2 on The Design Space of Minds-In-General · 2008-06-25T07:13:17.000Z · LW · GW

"everything that isn't a duck"

Muggles?

Comment by Tiiba2 on Possibility and Could-ness · 2008-06-15T05:20:18.000Z · LW · GW

To me, the issue of "free will" and "choice" is so damn simple.

Repost from Righting a Wrong Question:

I realized that when people think of the free will of others, they don't ask whether this person could act differently if he wanted. That's a Wrong Question. The real question is, "Could he act differently if I wanted it? Can he be convinced to do something else, with reason, or threats, or incentives?"

From your own point of view, anything that stands between you and being able to rationally respond to new knowledge makes you less free. This includes shackles, threats, bias, or stupidity. Wealth, health, and knowledge make you more free. So for yourself, you can determine how much free will you have by looking at your will and seeing how free it is. Can you, as Eliezer put it, "win"?

I define free will by combining these two definitions. A kleptomaniac is a prisoner of his own body. A man who can be scared into not stealing is free to a degree. A man who can swiftly and perfectly adapt to any situation, whether it prohibits stealing, requires it, or allows it, is almost free. A man becomes truly free when he retains the former abilities, and is allowed to steal, AND has the power to change the situation any way he wants.

Comment by Tiiba2 on Righting a Wrong Question · 2008-03-09T23:05:11.000Z · LW · GW

"Why do I think I was born as myself rather than someone else?"

Because a=a?

Comment by Tiiba2 on Righting a Wrong Question · 2008-03-09T19:23:10.000Z · LW · GW

I think there is a real something for which free will seems like a good word. No, it's not the one true free will, but it's a useful concept. It carves reality at its joints.

Basically, I started thinking about a criminal - say, a thief. He's on trial for stealing a diamond. The prosecutor thinks that he did it of his own free will, and thus should be punished. The defender thinks that he's a pathological kleptomaniac and can't help it. But as most know, people punish crimes mostly to keep them from happening again. So the real debate is whether imprisoning the thief will discourage him.

I realized that when people think of the free will of others, they don't ask whether this person could act differently if he wanted. That's a Wrong Question. The real question is, "Could he act differently if I wanted it? Can he be convinced to do something else, with reason, or threats, or incentives?"

From your own point of view, anything that stands between you and being able to rationally respond to new knowledge makes you less free. This includes shackles, threats, bias, or stupidity. Wealth, health, and knowledge make you more free. So for yourself, you can determine how much free will you have by looking at your will and seeing how free it is. Can you, as Eliezer put it, "win"?

I define free will by combining these two definitions. A kleptomaniac is a prisoner of his own body. A man who can be scared into not stealing is free to a degree. A man who can swiftly and perfectly adapt to any situation, whether it prohibits stealing, requires it, or allows it, is almost free. A man becomes truly free when he retains the former abilities, and is allowed to steal, AND has the power to change the situation any way he wants.

Quantum magic isn't free will, it's magic.

Comment by Tiiba2 on Rationality Quotes 11 · 2008-03-04T08:18:24.000Z · LW · GW

I think they can all be described as stuff Eliezer likes for whatever reason. Maybe he started the file for rationality quotes, then started sticking everything in it. That's my theory.

Comment by Tiiba2 on Rationality Quotes 11 · 2008-03-04T06:57:04.000Z · LW · GW

"The accessory optic system: The AOS, extensively studied in the rabbit, arises from a special class of ganglion cells, the cells of Dogiel, that are directionally selective and respond best to slow rates of movement. They project to the terminal nuclei which in turn project to the dorsal cap of Kooy of the inferior olive. The climbing fibers from the olive project to the flocculo-nodular lobe of the cerebellum from where the brain stem occulomotor centers are reached through the vestibular nuclei." -- MIT Encyclopedia of the Cognitive Sciences, "Visual Anatomy and Physiology"

Beautiful. I will use this on the prettiest girl I meet tomorrow, and if she doesn't fall for me right away, she's a deaf lesbian.