Comment by tiiba3 on True Sources of Disagreement · 2008-12-08T23:40:58.000Z · score: 1 (1 votes) · LW · GW

"""it is quite likely that the universe may not be able to support vast orders of magnitude of intelligence"""


Comment by tiiba3 on Artificial Mysterious Intelligence · 2008-12-07T22:18:39.000Z · score: 0 (0 votes) · LW · GW

A question about Andrew Ng, who was mentioned in this thread. Is that his real name?

Comment by tiiba3 on Observing Optimization · 2008-11-21T14:29:56.000Z · score: 0 (0 votes) · LW · GW

So, like, every bacterium is its own species?

Comment by tiiba3 on The Nature of Logic · 2008-11-17T03:29:57.000Z · score: 0 (0 votes) · LW · GW

There was a blank header? I didn't notice.

Comment by tiiba3 on Worse Than Random · 2008-11-12T02:44:31.000Z · score: 0 (0 votes) · LW · GW

"""What about from aligning the dots along the lines of the image?"""

Wouldn't you need to find them first?

Comment by tiiba3 on Today's Inspirational Tale · 2008-11-04T22:26:16.000Z · score: 0 (0 votes) · LW · GW

So who was the congressman?

Comment by tiiba3 on Ethical Inhibitions · 2008-10-20T00:21:57.000Z · score: 0 (0 votes) · LW · GW

Grant: group selection does happen, but only very slowly. Natural selection works when its units are destroyed, and tribes go extinct pretty rarely compared to individuals.

Merely being poor does not make a selection unit unfit, as far as evolution is concerned. It has to disappear.

Comment by tiiba3 on Awww, a Zebra · 2008-10-01T03:36:12.000Z · score: 10 (10 votes) · LW · GW

Well, a picture of a zebra is real.

And you'll probably agree that the merely real is, in some ways, in need of improvement, which is the whole point of transhumanism.

Comment by tiiba3 on Is Morality Preference? · 2008-07-05T03:28:36.000Z · score: 6 (6 votes) · LW · GW

"""Obert: "A duty is something you must do whether you want to or not." """

Obey gravity. It's your duty!


Comment by tiiba3 on What Would You Do Without Morality? · 2008-06-29T07:34:00.000Z · score: 0 (0 votes) · LW · GW

I know that random behavior requires choices. The machine IS choosing - but because all choices are equal, the result of "max(actionList)" is implementation-dependent. "Shut down OS" is in that list, too, but "make no choice whatsoever" simply doesn't belong there.
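A minimal sketch of the point, assuming a Python-like setting (the action names and the flat utility table are hypothetical): when every action scores the same, `max` still returns *something*, but which element it returns is a detail of the implementation, not of the concept of "best action" — and "make no choice whatsoever" never appears in the list at all.

```python
import random

# Hypothetical agent with a finite action list and a flat utility table:
# every action has become equally good.
actions = ["move_arm", "blink", "write_bit", "shut_down_os"]
utility = {a: 1.0 for a in actions}

# max() must break the tie somehow.  CPython documents that it returns
# the first maximal element encountered, but nothing about argmax
# itself forces that -- it is an implementation detail.
deterministic_pick = max(actions, key=utility.get)

# Picking uniformly at random is the other way to break the tie.
# Note that "make no choice" is not an element of the list.
random_pick = random.choice(actions)
```

Either way the machine acts; the tie only changes *which* action comes out.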

Comment by tiiba3 on What Would You Do Without Morality? · 2008-06-29T06:18:46.000Z · score: 0 (0 votes) · LW · GW

Let's say I have a utility function and a finite map from actions to utilities. (Actions are things like moving a muscle or writing a bit to memory, so there's a finite number.)

One day, the utility of all actions becomes the same. What do I do? Well, unlike Asimov's robots, I won't self-destructively try to do everything at once. I'll just pick an action randomly.

The result is that I move in random ways and mumble gibberish. Although this is perfectly voluntary, it bears an uncanny resemblance to a seizure.

Regardless of what else is in a machine with such a utility function, it will never surpass the standard of intelligence set by jellyfish.

Comment by tiiba3 on 2-Place and 1-Place Words · 2008-06-27T14:05:36.000Z · score: 10 (10 votes) · LW · GW

"we could imagine that "sexiness" starts by eating an Admirer"

Harsh, but fair.

Comment by tiiba3 on Heading Toward Morality · 2008-06-21T07:11:55.000Z · score: 0 (0 votes) · LW · GW

Julian, I think the box you're not opening is Pandora's box.

Comment by tiiba3 on Heading Toward Morality · 2008-06-20T16:59:06.000Z · score: 0 (0 votes) · LW · GW

Virge is mixing up instrumental and terminal values. No biscuit.

Comment by tiiba3 on Ghosts in the Machine · 2008-06-19T06:29:57.000Z · score: 1 (1 votes) · LW · GW

An AI could screw us up just by giving bad advice. We'll be likely to trust it, because it's smart and we're too lazy to think. A modern GPS receiver can make you drive into a lake. An evil AI could ruin companies, start wars, or create an evil robot without lifting a finger.

Besides, it's more fun to create FAI and let it do what it wants than to build Skynet and then try to confine it forever. You'll still have only one chance to test it, whenever you decide to do that.

Comment by tiiba3 on Grasping Slippery Things · 2008-06-17T06:40:56.000Z · score: 1 (1 votes) · LW · GW

I seem to be unable to view the referenced comment.

Hmm, no replies after all this time?

Comment by tiiba3 on Zombie Responses · 2008-04-05T14:46:58.000Z · score: -4 (4 votes) · LW · GW

An economist wrote a physics paper?

"Mangled worlds: the legacy of George W. Bush"

Comment by tiiba3 on Brain Breakthrough! It's Made of Neurons! · 2008-04-01T20:03:24.000Z · score: 6 (6 votes) · LW · GW

A thought occurred to me: people who are offended by the idea that a mere machine can think simply might not be imagining the right machine. They imagine maybe a hundred neurons, each extending 10-15 synapses to the others. And then they can't make head or tail of even that, because it's already too big. Scope insensitivity, in other words.

Comment by tiiba3 on Angry Atoms · 2008-03-31T19:15:05.000Z · score: 1 (1 votes) · LW · GW

"So I can imagine another math in which 2+2=5 is not obviously false, but needs a long proof and complicated equations..."

So, from the fact that another mind might take a long time to understand integer operations, you conclude that it has "another math"? And what does that mean for algorithms?

If an intelligence is general, it will be able to, in time, understand any concept that can be understood by any other general or narrow intelligence. And then use it to create an algorithm. Or be conquered.

Comment by tiiba3 on Angry Atoms · 2008-03-31T11:24:38.000Z · score: 1 (1 votes) · LW · GW


"Tiiba: an algorithm is a model in our mind to describe the similarities of those physical systems implementing it. Our mathematics is the way we understand the world... I don't think the Martians with four visual cortexes would have the same math, or would be capable of understanding the same algorithms... So algorithms aren't fundamental, either."

One or more of us is confused. Are you saying that a Martian with four visual cortices would be able to compress any file? Would add two and two and get five?

They can try, sure, but it won't work.
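The "compress any file" half is a plain counting fact, independent of the compressor's anatomy: there are more bit-strings of length n than there are strictly shorter ones, so no lossless (injective) compressor can shrink every input. A quick sketch of the arithmetic:

```python
# Pigeonhole argument against a universal compressor.
# There are 2**n bit-strings of length n, but only 2**n - 1 strings
# of length strictly less than n.  An injective mapping therefore
# cannot send every length-n input to a shorter output.
n = 16
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))  # 2**0 + ... + 2**(n-1) = 2**n - 1
assert shorter_outputs < inputs
```

No amount of extra visual cortex changes the count.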

Comment by tiiba3 on Angry Atoms · 2008-03-31T04:47:35.000Z · score: -1 (1 votes) · LW · GW

Please delete my post. I see that Tom said that already.

Comment by tiiba3 on Angry Atoms · 2008-03-31T04:44:51.000Z · score: 2 (2 votes) · LW · GW


"That suggests that the algorithm itself is not a physical thing, but something else. And those something elses have very little to do with the laws of physics."

An algorithm can exist even without physics. It's math.

Comment by tiiba3 on Hand vs. Fingers · 2008-03-30T19:03:02.000Z · score: 2 (2 votes) · LW · GW

I'm pretty confused by this discussion. People toss out terms like reductionist or anti-reductionist, and I can't even tell what they disagree about.

Here's what I know:

1) There are quarks and electrons, maybe some strings too. Nobody seems to dispute the quarks and electrons, at least. There are also clusters of particles.

2) Everything above that level is an abstraction that only exists in our heads. Yeah, those atoms really are near each other, but the only thing that makes them a "computer" is that we use them for computing. Same applies to brains and minds.

3) Still, calling a spade a spade is useful, so we do it. Not because it's "really" a spade, but because we can't reason quickly about innumerable swarms of quarks.

And that is all. Call it reductionism, call it anti-reductionism, that's all there is to it. There are no spadetrons, no mindtrons and (so far) no computrons.

So, what is the dispute over?