Comments

Comment by Anonymous_Coward4 on Against Maturity · 2009-02-19T09:16:13.000Z · LW · GW ...never tried a single drug

I'm going to presume you've drunk tea, or taken medicine, and under that presumption I can say 'Yes you did'. It's just that the drugs you chose were the ones that adults in your culture had decided were safe: things like caffeine, say. Had you grown up in a Mormon or Amish culture, you might not be able to write what you just did. So isn't what you wrote an accident of birth, rather than a conscious choice about the particular chemical structures you allow inside your body?

I would imagine that by choice of locale, you may have passively taken nicotine, too, albeit in small quantities.

, never lost control to hormones...

... really? Never got angry then, or too depressed to work? Crikey. Or do you mean you only lost control in ways your parents and culture approved of; again, nothing more than an accident of birth?

Comment by Anonymous_Coward4 on Good Idealistic Books are Rare · 2009-02-19T09:01:04.000Z · LW · GW

One book suggestion. "On Intelligence" by Jeff Hawkins.

Although there is a plug for his own research model, I would summarise the book as:

  • brains are a bit tough
  • but they can't be that tough
  • someone is going to figure it out eventually
  • so let's try to figure it out
  • here's everything I learned myself; hope it helps YOU figure it out, if I can't.

Enjoyable book actually, regardless of what you think of his own preferred AI technique.

Comment by Anonymous_Coward4 on Epilogue: Atonement (8/8) · 2009-02-07T17:45:10.000Z · LW · GW AC, I can't stand Banks's Excession.

Interesting, and I must admit I am surprised.

Regardless of personal preferences though... it seems the closest match for the topic at hand. But hey, it's your story...

"Excession; something excessive. Excessively aggressive, excessively powerful, excessively expansionist; whatever. Such things turned up or were created now and again. Encountering an example was one of the risks you ran when you went a-wandering..."

Comment by Anonymous_Coward4 on Epilogue: Atonement (8/8) · 2009-02-06T12:52:47.000Z · LW · GW

Still puzzled by the 'player of games' ship name reference earlier in the story... I keep thinking, surely Excession is a closer match?

Comment by Anonymous_Coward4 on Normal Ending: Last Tears (6/8) · 2009-02-05T15:15:02.000Z · LW · GW
"But I'm having trouble figuring out the superhappys. I can think of a story with rational and emotional protagonists, a plot device relating to a 'charged particle', and the story is centered around a solar explosion (or risk of one). That story happens to involve 3 alien genders (rational, emotional, parental) who merge together to produce offspring."
The story you're thinking of is The Gods Themselves by Isaac Asimov, the middle section of which stars the aliens you describe.

Yes, I believe I already identified the story in the final sentence of my post. But thanks anyway for clarifying it for those that didn't keep reading till the end :-)

Anonymous.

Comment by Anonymous_Coward4 on Normal Ending: Last Tears (6/8) · 2009-02-04T22:32:57.000Z · LW · GW

Regarding ship names in the koan....

Babyeaters: http://en.wikipedia.org/wiki/Midshipman's_Hope. Haven't read it, just decoded it from the ship name in the story.

But I'm having trouble figuring out the superhappys. I can think of a story with rational and emotional protagonists, a plot device relating to a 'charged particle', and the story is centered around a solar explosion (or risk of one). That story happens to involve 3 alien genders (rational, emotional, parental) who merge together to produce offspring. It should be known to many people on this thread, but it's been about 10 years since I last read it: Asimov, The Gods Themselves.

Anonymous.

Comment by Anonymous_Coward4 on Three Worlds Decide (5/8) · 2009-02-03T13:52:21.000Z · LW · GW

Since this is fiction (thankfully, seeing how many here might allow the superhappys the chance they need to escape the box)... an alternative ending.

The Confessor is bound by oath to allow the young to choose the path of the future no matter how morally distasteful.

The youngest in this encounter are clearly the babyeaters, technologically (and arguably morally).

Consequently the Confessor stuns everyone on board, pilots off to Baby Eater Prime and gives them the choice of how things should proceed from here.

The End

Comment by Anonymous_Coward4 on Three Worlds Decide (5/8) · 2009-02-03T13:09:10.000Z · LW · GW

Anonymous Coward's defection isn't. A real defection would be the Confessor anesthetizing Akon, then commandeering the ship to chase the Super Happies and nova their star.

Your 'real defection' isn't one either. There are no longer any guarantees of anything when a vastly superior technology is definitely in the vicinity, when any staff member of the ship besides the Confessor is still conscious, and when it is a known fact (from the prediction markets and the people in the room) that at least some of humanity is behaving very irrationally.

Your proposal takes an unnecessary ultimate risk: the potential freezing, capture or destruction of the human ship upon arrival, leading to the destruction of humanity, since we don't know what the superhappys will REALLY do, after all. And it takes that risk in exchange for an unnecessarily small gain: the chance to reduce the suffering of a species whose technological extent we don't truly know, whose value system we know to be in at least one place substantially opposed to our own, and of whom we could remain ignorant, as a species, by anaesthetised self-destruction of the human ship.

It is more rational to take action as soon as possible to guarantee a minimum acceptable level of safety for humankind and its value system, given the unknown but clearly vastly superior technological capabilities of the superhappys if no action is immediately taken.

If you let an AI out of the box and it tells you its value system is opposed to humanity's and that it intends to convert all humanity to a form that it prefers, then it FOOLISHLY trusts you and steps back inside the box for a minute, then what you do NOT do is:

  • mess around
  • give it any chance to come back out of the box
  • allow anyone else the chance to let it out of the box (or the chance to disable you while you're trying to lock the box).

Anonymous

Comment by Anonymous_Coward4 on Three Worlds Decide (5/8) · 2009-02-03T12:57:56.000Z · LW · GW

Attempting to paraphrase the known facts.

  1. You and your family and friends go for a walk. You walk into an old building with 1 entrance/exit. Your friends/family are behind you.

  2. You notice the door has an irrevocable self-locking mechanism if it is closed.

  3. You have a knife in your pocket.

  4. As you walk in you see three people dressed in 'lunatic's asylum' clothes.

  5. Two of them are in the corner; one is a guy who is beating up a woman. He appears unarmed but may have a concealed weapon.

  6. The guy shouts to you that 'god is making him do it' and suggests that you should join in and attack your family who are still outside the door.

  7. The 3rd person in the room has a machine gun pointed at you. He tells you that he is going to give you and your family 1,000,000 pounds each if you just step inside, and that he will also stop the other inmate from being violent.

  8. You can choose to close the door (which will lock). What will happen next inside the room will then be unknown to you.

  9. Or you can allow your family and friends into the room with the lunatics at least one of whom is armed with a machine gun.

  10. Inside the room, as long as that machine gun exists, you have no control over what actually happens next in the room.

  11. Outside the room, once the door is locked, you also have no control over what happens next in the room.

  12. But if you invite your family inside, you are risking that they may be killed by a machine gun, or may be given 1 million pounds. Either way, the matter is in the hands of the machine-gun-toting lunatic.

  13. Your family are otherwise presently happy and well adjusted and do not appear to NEED 1 million pounds, though some might benefit from it a great deal.

Personally, in this situation I wouldn't need to think twice; I would immediately close the door. I have no control over the unfortunate situation the woman is facing either way, but at least I don't risk a huge negative outcome (the death of myself and my family at the hands of a machine-gun-armed lunatic).

It is foolish to risk what you have and need for what you do not have, do not entirely know, and do not need.

Comment by Anonymous_Coward4 on Three Worlds Decide (5/8) · 2009-02-03T12:30:02.000Z · LW · GW

Wait a week for a Superhappy fleet to make the jump into Babyeater space, then set off the bomb.

You guys are very trusting of a super-advanced species that has already shown a strong willingness to manipulate humanity with superstimulus and pornographic advertising.

Comment by Anonymous_Coward4 on Three Worlds Decide (5/8) · 2009-02-03T12:28:23.000Z · LW · GW

@Anonymous Coward: Reasonable, except even by defecting you haven't gained the substantially greater payoff that is the whole point of Prisoner's Dilemma. In other words, like he asks: what about the Babyeater children?

I misread the story and thought the superhappys had flown off to deal with them first. But in fact, the superhappys are 'returning to their home planet' before going to deal with the babyeaters. "This will make it considerably easier to sweep through their starline network when we return.". Oops.

In any event, if the ship's crew is immediately anaesthetised and the sun exploded, then earth remains ignorant of the suffering of the babyeaters, and earth is not coerced to have its value system changed by an external superior power. The only human that feels bad about all this is the one remaining conscious human on the ship before it is fried. The babyeaters experience no net change in their position and the superhappys have made a net loss (by discovering unhappiness in the universe and being made unable to fix it). Humanity has met a more powerful force with a very different value system that wishes to impose values on other cultures, but has achieved a draw. Humanity remains ignorant of suffering - again a draw when the only other options are to lose in some way (either by imposing values when we feel we have no right; or by knowingly allowing suffering).

Of course the Confessor might wish first to transmit a message back to earth that neglects to mention any babyeaters, warns of the highly dangerous 'superhappys', and perhaps describes them falsely as super-powerful babyeaters (à la the Alderson scientists) to prevent anyone from being tempted to find them, thereby preventing any individual from sacrificing the human race's control of its own values...

I guess it depends on whether he believes 'right to choose your own species values' ranks above 'right to experience endless orgasms'. If he truly has no preference for either, he might as well consider everyone dangerously highly strung and emotional and an unsuitable sample size to make decisions for humanity. In that case, perhaps he should stun everyone in the control room and cause the ship to return to earth, if he is able to do so, to tell humanity what has happened in full detail. This at least allows the decision to be made by a larger fraction of humanity.

A final practical point. So far, the people on the ship only know what they have received in communications or what they can measure with their sensors. In fact, we can't trust either of these things; a sufficiently advanced species can fool sensors, and any species can lie. We can observe that the superhappys are clearly more technologically advanced from the evidence of the one ship present, and the growth rate suggests they can rapidly overpower humanity. Humanity has no idea what the superhappys will really do when they return. If they wish, they might simply turn all humans into superhappys and throw away all human values, without honouring the deal. They could torture all humans till the end of time, or turn us into babyeaters.

Equally, we know there is a race that is pleased to advertise that it eats babies and wishes to encourage other races to do the same, and we know that it has one quite advanced ship that is slightly technologically inferior to ours; what else it has, we don't really know. Perhaps the babyeaters have better crews and ships back home. Perhaps they have advanced technology that masks the real capabilities of their ship. All we have is a single unreliable sample point of two advanced civilisations with very different value systems. What we have here is a giant knowledge gap.

The one thing we can be fairly sure of is that the superhappys are technologically superior to humanity and can basically do whatever they want to us, unless the sun is blown up. And we know that the babyeaters have values culturally unacceptable to us, and we don't know whether they might really have the ability to impose those values on us or not. Given this knowledge of these two dangerous forces, one of which is vastly superior and one of which is advanced and might later turn out to be superior, if humanity can achieve a 'zero loss outcome' for itself by blowing up the sun, it is doing rather well in such an incredibly dangerous situation. Humanity should take advantage of the fact that the superhappys already placed a 'co-operate' card on the table and allowed us to decide what to do next.

Comment by Anonymous_Coward4 on Three Worlds Decide (5/8) · 2009-02-03T10:03:16.000Z · LW · GW

But standing behind his target, unnoticed, the Ship's Confessor had produced from his sleeve the tiny stunner - the weapon which he alone on the ship was authorized to use, if he made a determination of outright mental breakdown. With a sudden motion, the Confessor's arm swept out...


... and anaesthetised everyone in the room. He then went downstairs to the engine room, and caused the sun to go supernova, blocking access to earth.

Regardless of his own preferences, he takes the option for humanity to 'painlessly' defect in the inter-stellar Prisoner's Dilemma, knowing a priori that the superhappys chose to co-operate.

Comment by Anonymous_Coward4 on Investing for the Long Slump · 2009-01-22T11:02:36.000Z · LW · GW

It's worse than you think. You have to find a counterparty that will never, or seldom, make '100-1'-type bets with other people that might threaten your chances of collecting in future... yet who is offering 100-1 odds right now.

As Buffett says: it's not who you sleep with, it's who THEY'RE sleeping with, that is the problem.

Comment by Anonymous_Coward4 on Investing for the Long Slump · 2009-01-22T10:28:57.000Z · LW · GW

Eliezer: The problem is not finding a 100-1 bet. The problem is finding a counterparty offering such a bet that is highly likely to be solvent, and willing to pay up, after a 30-year depression/recession.

In fact, if anything, it makes more sense to be on the 'cash in hand now' side of such bets. As Warren Buffett is.
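To put illustrative numbers on this (all invented, purely to show the shape of the problem): the attractiveness of a 100-1 bet collapses quickly once you discount by the probability that the counterparty can actually pay.

```python
# Toy numbers only: a rough sketch of why counterparty risk matters for long-odds bets.
def expected_payoff(stake, odds, p_win, p_counterparty_pays):
    """Expected profit on a long-odds bet when the winner may not get paid.

    You hand over `stake` up front; if you win AND the counterparty is still
    solvent, you collect `stake * odds`. Otherwise the stake is gone.
    """
    p_collect = p_win * p_counterparty_pays
    return p_collect * (stake * odds) - (1 - p_collect) * stake

# On paper, 100-1 odds on a 2%-likely slump is a clearly positive bet...
print(expected_payoff(stake=1.0, odds=100, p_win=0.02, p_counterparty_pays=1.0))

# ...but if a 30-year depression means the counterparty only pays up 40% of
# the time, the same bet has negative expected value.
print(expected_payoff(stake=1.0, odds=100, p_win=0.02, p_counterparty_pays=0.4))
```

The probabilities here are made up; the point is only that the solvency term multiplies the win probability, so even a modest default risk wipes out a thin edge.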

Comment by Anonymous_Coward4 on High Challenge · 2008-12-19T16:56:51.000Z · LW · GW

What can I say, apart from "Progress Quest"?

http://www.progressquest.com/ http://www.progressquest.com/info.php

Officially voted the Top Role Playing Game for Post-Singularity Sentient Beings.

Anonymous.

Comment by Anonymous_Coward4 on Artificial Mysterious Intelligence · 2008-12-08T05:38:07.000Z · LW · GW

Jeff - thanks for your comment on evolutionary algorithms. Gave me a completely new perspective on these.

Comment by Anonymous_Coward4 on Chaotic Inversion · 2008-11-29T12:04:10.000Z · LW · GW "Unfortunately," I replied, "I have to do something whose time comes in short units, like browsing the Web or watching short videos, because I might become able to work again at any time, and I can't predict when -"

I had a similar problem during my PhD. Basically I had to be a workaholic in order to get through it. However, I still wanted to have some kind of life and occasionally relax my brain. I found that when I tried to watch a DVD, I would either have an idea, or I would start feeling guilty about not working. And then I'd stop watching the DVD. Gradually this made me not want to watch films any more, because I knew I wouldn't be able to sit through the film in a single sitting without having either workaholic guilt, or a distractingly useful idea.

My solution was cinemas. Whenever I felt like I needed a distraction, I would go to the cinema with some friends. By paying actual cash and having only a fixed time available to 'enjoy myself', my brain somehow decided 'well, I'm not going to waste this money by walking out to do some work!'. So I was able to enjoy full-length films without considering the possibility of working.

I took a notebook in my pocket, of course, in case a truly amazing idea came mid-film, but thankfully it never did. Besides, the shower room proved to be a reliable source of ideas ... I just wish someone could invent a decent waterproof notepad :-)

I can also recommend vigorous exercise such as martial arts. Although you sacrifice time, you gain improved health and mood, and that's important in the long run...

Anonymous.

Comment by Anonymous_Coward4 on Total Nano Domination · 2008-11-27T16:13:46.000Z · LW · GW

An interesting modern analogy is the invention of the CDO in finance.

Its development led to a complete change in the rules of the game.

If you had asked a bank manager 100 years ago to envisage the ultimate consequences of a formula/spreadsheet for splitting the losses on a group of financial assets into a 'risky' tier, a 'safe' tier, and so on, I doubt they would have said 'the end of the American financial empire'.

Nonetheless it happened. The ability to sell tranches of debt at arbitrary risk levels led the banks to lend more. That led to mortgages becoming more easily available. That led to dedicated agents making commission from the sheer volume of lending that became possible. That led to a reduction in lending standards, more agents, more lending. That led to higher profits, which had to be maintained to keep shareholders happy. That led to increased use of CDOs, more agents, more lending, lower standards... a housing boom... which led to more lending... which led to excessive spending... which has left the US over-borrowed and talking about a second Great Depression.

etc.

It's not quite the FOOM Eliezer talks about, but it's a useful example of the laws of unintended consequences.
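The loss-splitting mechanism itself is simple enough to sketch. This is a deliberately toy waterfall (real CDO structures are far more elaborate): losses hit the junior tranche first, and the 'safe' senior tranche only suffers once the junior is wiped out.

```python
def allocate_losses(total_loss, tranches):
    """Apply a pool loss to tranches in order of seniority (most junior first).

    `tranches` is a list of (name, size) pairs, most junior first.
    Returns a dict mapping each tranche name to the loss it bears.
    """
    losses = {}
    remaining = total_loss
    for name, size in tranches:
        hit = min(remaining, size)  # this tranche absorbs what it can
        losses[name] = hit
        remaining -= hit
    return losses

# A 100-unit pool split into a risky 10-unit junior tranche and a 90-unit senior one.
structure = [("junior", 10.0), ("senior", 90.0)]

print(allocate_losses(5.0, structure))   # small loss: junior absorbs everything
print(allocate_losses(25.0, structure))  # big loss: junior wiped out, senior dented
```

It is exactly this structure that makes the senior tier look 'safe' enough to sell widely, right up until pool losses exceed the junior cushion.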

Anonymous.

Comment by Anonymous_Coward4 on Total Nano Domination · 2008-11-27T16:04:43.000Z · LW · GW There was never a Manhattan moment when a computing advantage temporarily gave one country a supreme military advantage, like the US and its atomic bombs for that brief instant at the end of WW2.

Only if you ignore Colossus, the computer whose impact on the war was so great that the UK destroyed it afterwards rather than risk it falling into enemy hands.

"By the end of the war, 10 of the computers had been built for the British War Department, and they played an extremely significant role in the defeat of Nazi Germany, by virtually eliminating the ability of German Admiral Durnetz to sink American convoys, by undermining German General Irwin Rommel in Northern Africa, and by confusing the Nazis about exactly where the American Invasion at Normandy France, was actually going to take place."

I.e., 10 computers rendered the German navy essentially worthless. I'd call that a 'supreme advantage' in naval military terms.

http://www.acsa2000.net/a_computer_saved_the_world.htm

"The Colossus played a crucial role in D-Day. By understanding where the Germans had the bulk of their troops, the Allies could decide which beaches to storm and what misinformation to spread to keep the landings a surprise."

http://kessler.blogs.nytimes.com/tag/eniac

Sure, it didn't blow people up into little bits like an atomic bomb, but who cares? It stopped OUR guys getting blown up into little bits, and also devastated the opposing side's military intelligence and command/control worldwide. It's rather difficult to measure the lives that weren't lost, and the starvation and under-supply that didn't happen.

Arguably, algorithmic approaches had a war-winning level of influence even earlier:

http://en.wikipedia.org/wiki/Zimmermann_Telegram

Anonymous.

Comment by Anonymous_Coward4 on Total Nano Domination · 2008-11-27T10:16:40.000Z · LW · GW There was never a Manhattan moment when a computing advantage temporarily gave one country a supreme military advantage, like the US and its atomic bombs for that brief instant at the end of WW2.

Only if you completely ignore The Colossus.

"By the end of the war, 10 of the computers had been built for the British War Department, and they played an extremely significant role in the defeat of Nazi Germany, by virtually eliminating the ability of German Admiral Durnetz to sink American convoys, by undermining German General Irwin Rommel in Northern Africa, and by confusing the Nazis about exactly where the American Invasion at Normandy France, was actually going to take place."

I.e., 10 computers rendered the entire German navy essentially worthless. I'd call that a 'supreme advantage' in naval military terms.

http://www.acsa2000.net/a_computer_saved_the_world.htm

"The Colossus played a crucial role in D-Day. By understanding where the Germans had the bulk of their troops, the Allies could decide which beaches to storm and what misinformation to spread to keep the landings a surprise."

http://kessler.blogs.nytimes.com/tag/eniac/

Sure, it didn't blow people up into little bits like an atomic bomb, but who cares? It stopped OUR guys getting blown up into little bits, and also devastated the opposing side's military intelligence and command/control worldwide. It's rather difficult to measure the lives that weren't lost, and the starvation and under-supply that didn't happen.

Arguably, algorithmic approaches had a war-winning level of influence even earlier:

http://en.wikipedia.org/wiki/Zimmermann_Telegram.

Anonymous.

Comment by Anonymous_Coward4 on Logical or Connectionist AI? · 2008-11-17T15:26:25.000Z · LW · GW

And you'd have been right. (Ever try running Bit Torrent on a 9600 bps modem? Me neither. There's a reason for that.)

Not sure I see your point. All the high-speed connections were built long before BitTorrent came along, and they were being used for idiotic point-to-point centralised transfers.

All that potential was achieving not much before the existence of the right algorithm or approach to exploit it. I suspect a strong analogy here with future AI.

Comment by Anonymous_Coward4 on Logical or Connectionist AI? · 2008-11-17T14:32:24.000Z · LW · GW It took 17 years to go from perceptrons to back propagation... therefore I have moldy Jell-O in my skull for saying we won't go from manually debugging buffer overruns to superintelligent AI within 30 years...

If you'd asked me in 1995 how many people it would take for the world to develop a fast, distributed system for moving films and TV episodes to people's homes on a 'when you want it, how you want it' basis, internationally, without ads, I'd have said hundreds of thousands. In practice it took one guy with the right algorithm, depending on whether you pick Napster or BitTorrent as the magic that solved the problem without the need for any new physical technologies.

The thing about self-improving AI is that we only need to get the algorithm right (or wrong :-() once.

We know with probability 1 that it's possible to create self-improving intelligence; after all, that's what most humans are. No doubt other solutions exist. If we can find an algorithm or heuristic to implement any one of these solutions, or even any predecessor of one of them, then we're off; given the right approach (be that algorithm, machine, heuristic, or whatever), it should simply be a matter of throwing computer power (or Moore's law) at it to speed up the rate of self-improvement. Heck, for all I know it could be a giant genetically engineered brain in a jar that cracks the problem.

Put it this way. Imagine you are a parasite. For x billion years you're happy, then some organism comes up with sexual reproduction and suddenly it's a nightmare. But eventually you catch up again. Then suddenly, in just 100 years, human society basically eradicates you completely out of the blue. The first 50 years of that century are bad. The next 20 are hideous. The next 10 are awful. The next 5 are disastrous... etc.

Similarly, useful power-plant-scale nuclear fusion has always been 30 years away. But at some point, I suspect, it will suddenly be only 2 years away, completely out of the blue...

Comment by Anonymous_Coward4 on Make an Extraordinary Effort · 2008-10-07T15:37:02.000Z · LW · GW

Eliezer, this is probably the most useful blog on the internet. Don't stop writing...

Comment by Anonymous_Coward4 on Friedman's "Prediction vs. Explanation" · 2008-09-29T13:04:08.000Z · LW · GW

Throughout these replies there is a belief that theory 1 is 'correct through skill'. With that in mind, it is hard to come to any conclusion other than 'scientist 1 is better'.

Without knowing more about the experiments, we can't determine if theory 1's 10 good predictions were simply 'good luck' or accident.

If your theory is that the next 10 humans you meet will have the same number of arms as they have legs, for example...

There's also potential for survivorship bias here. If the first scientist's results had been 5 correct, 5 wrong, we wouldn't be having this discussion about the quality of their theory-making skills. Without knowing if we are 'picking a lottery winner for this comparison' we can't tell if those ten results are chance or are meaningful predictions.
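A quick simulation makes the survivorship worry concrete (all numbers invented): even if every scientist is guessing at random, a large enough field will reliably produce a few perfect 10-for-10 records, and we only ever hear about those.

```python
import random

def perfect_records(n_scientists, n_predictions, p_correct, seed=0):
    """Count scientists whose random guesses happen to all come out right."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    perfect = 0
    for _ in range(n_scientists):
        if all(rng.random() < p_correct for _ in range(n_predictions)):
            perfect += 1
    return perfect

# With coin-flip predictions, roughly 1 in 1024 scientists goes 10-for-10,
# so a field of 10,000 guessers typically yields a handful of 'lottery winners'.
print(perfect_records(n_scientists=10000, n_predictions=10, p_correct=0.5))
```

So a 10/10 record, on its own, tells us little until we know how many other theorists were making predictions at the same time.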

Comment by Anonymous_Coward4 on Friedman's "Prediction vs. Explanation" · 2008-09-29T07:12:25.000Z · LW · GW Which do we believe?

What exactly is meant here by 'believe'? I can imagine various interpretations.

a. Which do we believe to be 'a true capturing of an underlying reality'?

b. Which do we believe to be 'useful'?

c. Which do we prefer; which seems more plausible?

a. Neither. Real scientists don't believe in theories; they just test them. Engineers believe in theories :-)

b. Utility depends on what you're trying to do. If you're an economist, then a beautifully complicated post-hoc explanation of 20 experiments may get your next grant more easily than a simple theory that you can't get published.

c. Who developed the theories? Which theory is simpler? (Ptolemy vs. Copernicus?) Which theory fits in best with other well-supported pre-existing theories? (Creationism vs. evolution; theories about disease behaviour.) Did any unusual data appear in the last 10 experiments that 'fitted' the original theory but hinted towards an even better theory? What is meant by 'consistent' (how well did it fit within error bands; how accurate is it)? Perhaps theory 1 came from Newton, and theory 2 was thought up by Einstein. How similar were the second sets of experiments to the original set?

How easy/difficult were the predictions? In other words, how well did they steer us through 'theory-space'? If theory 1 predicts the sun will come up each day, it's hardly as powerful as theory 2, which suggests the earth revolves around the sun.

What do we mean when we use the word 'constructs'? Perhaps the second theorist blinded himself to half of the results, constructed a theory, then tested it, placing himself in the same position as the original theorist but with the advantage of having tested his theory before proclaiming it to the world. Perhaps the constructor repeated this many times using different subsets of the data to build a predictor and test it, and chose the theory most consistently suggested by the data and verified by subsequent testing.

Perhaps he found that no matter how he sliced and diced and blinded himself to parts of the data, his hand unerringly fell on the same 'piece of paper in the box' (to use the metaphor from the other site).

Another issue is 'how important is the theory'? For certain important theories (development of cancer, space travel, building new types of nuclear reactors etc.), neither 10 nor 20 large experiments might be sufficient for society to confer 'belief' in an engineering sense.

Other social issues may exist. Galileo 'believed' bravely, but perhaps foolishly, depending on how he valued his freedom.

d. Setting aside these other issues, and in the absence of any other information: As a scientist, my attitude would be to believe neither, and test both. As an engineer, my attitude would be to 'prefer' the first theory (if forced to 'believe' only one), and ask a scientist to check out the other one.

Comment by Anonymous_Coward4 on Above-Average AI Scientists · 2008-09-28T12:19:51.000Z · LW · GW

"...he turned out to be... a creationist."

On the one hand ... you find yourself impressed by the aura of '1000-year-old vampire' academics...

And on the other ... this guy worships a 2000-year-old bloke who asked people to drink his blood, and was famous for rising from his coffin.

Are your worldviews really so far apart? ;-)