Comments

Comment by JoeShipley on The Great Brain is Located Externally · 2009-07-15T22:56:59.738Z · LW · GW

In 'Proust and the Squid', Maryanne Wolf talks about just that, how external reading and writing skills behave as a kind of storage area for brain contents. I can't remember the exact passage (I guess because I have it written down in a book at home) but she talks about how we don't write things down to remember them, but so that it's okay for us to forget them. She goes into an analysis of a few cultures and their strengths and weaknesses when it comes to writing, reading, and memory. Very related and a good read. It follows along a bit with Plato's Phaedrus, the story of Socrates' objection to the written word.

I think it's interesting how what exactly you have memorized seems to change based on where you are or what you are doing. I'm sure most of us without eidetic memories have experienced the sudden loss of some memorized bit of information, only to remember it with ease a few hours later.

Memories of certain friends seem completely solid and close when you are around them but utterly inaccessible otherwise, never entering into your day-to-day thought processes. I often wonder if this is actually a brain-wide effect, with different tools in the toolbox other than just memory being triggered or not triggered based on your environment. It would be strange if some environments triggered tools in your brain that increased your skill at a task -- tools that would go forgotten at another time. I think I ran into an example of that the other Friday. I got my arm stuck past the elbow in a narrow metal slat, reaching for something in a warehouse after hours. Legs off the ground, lacking leverage and totally unable to free myself, I struggled for a while and then just sat around thinking, trying to figure out what to do. After half an hour, I realized I could spit on my arm to lubricate it and slip it out of the slat -- some gross struggling and a couple of minutes later I was free, if really bruised up. I feel like I would have come across that solution faster if my worries weren't tending toward being stuck all weekend in the warehouse.

Comment by JoeShipley on Bioconservative and biomoderate singularitarian positions · 2009-06-03T18:30:40.691Z · LW · GW

This is true, I didn't think of this. A superintelligent shepherd. Interesting idea. It just seems so stagnant to me, but I don't have the value meme for it.

Comment by JoeShipley on Bioconservative and biomoderate singularitarian positions · 2009-06-03T18:12:41.529Z · LW · GW

In this case, tame might mean: "Able to co-exist with other males in your species". Our concestor with chimpanzees probably wasn't, but we had to adapt.

Comment by JoeShipley on Bioconservative and biomoderate singularitarian positions · 2009-06-03T18:11:02.640Z · LW · GW

Agreed. One of the interesting points in that Dawkins book is how sexual selection can result in the enhancement of traits that neither increase survivability nor produce more offspring. He talks about 'fashions' spreading within a species in his personal theory of how humans started walking upright.

Basically, the females or the males start selecting for a particular rare behavior as indicative of something desirable over their lessers, which leads to the males or females exhibiting that trait reproducing more and the trait being reinforced for as long as it is in 'fashion'. Several cases of the way that can run away are presented in the book: testicle size in chimpanzees due to sperm competition, and the incredible sexual dimorphism in elephant seals, which has driven the male to up to eight times the size of the female. (Only one male in any given group reproduces.)

There's always a reason for any selection, but when you deal with creatures with any kind of mindfulness, sometimes the reasons stem from the minds rather than perfectly from the biology.

Comment by JoeShipley on Bioconservative and biomoderate singularitarian positions · 2009-06-03T18:05:05.443Z · LW · GW

Well, yes, on Pg. 31 of 'The Ancestor's Tale',

  • Back to the Russian fox experiment, which demonstrates the speed with which domestication can happen, and the likelihood that a train of incidental effects would follow in the wake of selection for tameness. It is entirely probable that cattle, pigs, horses, sheep, goats, chickens, geese, ducks and camels followed a course which was just as fast, and just as rich in unexpected side-effects. It also seems plausible that we ourselves evolved down a parallel road of domestication after the Agricultural Revolution, towards our own version of tameness and associated by-product traits. In some cases, the story of our own domestication is clearly written in our genes. The classic example, meticulously documented by William Durham in his book Coevolution, is lactose tolerance... [continued later on the page and then to 32]
  • ...My generalization concerned the human species as a whole and, by implication, the wild Homo sapiens from which we are all descended. It is as if I had said, 'Wolves are big, fierce carnivores that hunt in packs and bay at the moon', knowing full well that Pekineses and Yorkshire terriers belie it. The difference is that we have a separate word, dog, for domestic wolf, but not for domestic human... [continued pg 33]...
  • Is lactose tolerance just the tip of the iceberg? Are our genomes riddled with evidences of domestication, affecting not just our biochemistry but our minds? Like Belyaev's domesticated foxes, and like the domesticated wolves that we call dogs, have we become tamer, more lovable, with the human equivalents of floppy ears, soppy faces and wagging tails? I leave you with that thought, and move hastily on. -Dawkins, 'The Ancestor's Tale'

For what it's worth...

Comment by JoeShipley on Bioconservative and biomoderate singularitarian positions · 2009-06-02T20:20:12.004Z · LW · GW

I feel as though, if you are hoping to preserve the specific biological scope of humanity, you have some significant roadblocks in the way. Our species was generated by millions of years of shifting genes, with selection factors ranging from blatant to subtle, and more recently we've stripped out as many selection factors as we can. (For good reason: natural selection is a harsh mistress...)

Malaria and the like are still selection factors, as has been documented, but they're greatly reduced. In Dawkins's 'The Ancestor's Tale', he tells the story of the Russian silver fox breeding experiment, in which wild foxes were selected for tame characteristics, resulting in foxes that behaved like border collies.

He hypothesizes that humans were subject to a similar non-natural sexual selection, picking for the 'tamest' humans. (Adult male chimpanzees will kill each other and definitely don't work well in groups, while adolescent chimps can work together in large groups no problem -- this, along with the skeletal and other evidence, is another suggestion that the species right after the human-chimpanzee concestor was pushed toward neoteny in order to work in larger groups.)

The border-collie foxes ended up having floppy ears, liked being petted, yipped, and enjoyed playing with humans. If Dawkins is correct, we're a bunch of domesticated humans in a similar fashion. When you throw a wrench into natural selection like that, things start to go out of whack instantly: the constant birth problems pugs have, bloat in Basset hounds, back problems in dachshunds. It's difficult to predict -- a part of the naturally selected whole that had one purpose, modified to another, can have all kinds of unexpected repercussions. Anything in the body that can do double or triple duty 'loves' to.

So unless you snapshot the human genome the way it is and keep people from randomly reproducing as they like to do, you don't get to maintain a 'pristine' human condition.

Is it preferable to slowly wreck and junk up your genome and species via a more or less unguided (at least in the center of the curve) process, or to attempt to steer it in a humane way -- by genetic engineering rather than eugenics -- even though the consequences could be drastic?

The bottom line is that our species will change no matter what we do. I don't know for sure, but I would prefer thought going into it over neglect and leaving the whole thing up to chaos.

Comment by JoeShipley on A social norm against unjustified opinions? · 2009-05-29T20:39:05.888Z · LW · GW

Oh, I'm sorry I misunderstood you. Yeah, it can be tiring. I'm a fairly introverted person and need a good amount of downtime between socializing. I guess I was projecting a little -- I used to think social norms were garbage and useless, until I realized neglecting their utility was irrational, and that it was primarily an emotional bias against them from never feeling like I 'fit in'. Sometimes it feels like you never stop discovering unfortunate things about yourself...

Comment by JoeShipley on A social norm against unjustified opinions? · 2009-05-29T20:35:05.755Z · LW · GW

Being called 'profoundly stupid' is not exactly a criticism of someone's reasoning. (Not that anybody was called that.) I think we're objecting to this because of how it'll offend people outside of the 'in group' anyway. Besides that, as much as we might wish we were immune to the emotional shock or glee of our thoughts and concepts being ridiculed or praised, I think it would be a rarity here to find someone who didn't feel it. People socializing and exchanging ideas is a type of system -- it has to be understood and used effectively in order to produce the best results -- and calling, essentially, everybody who disagrees with you 'profoundly stupid' is not good social lubrication.

Comment by JoeShipley on This Failing Earth · 2009-05-29T20:20:22.144Z · LW · GW

I agree completely. If the solutions cannot outpace the intelligence-generated problems, total destruction awaits.

I apologize if the 'stupid pill' characterization feels wrong; I was just trying to think of a viable alternative to increasing intelligence.

Comment by JoeShipley on A social norm against unjustified opinions? · 2009-05-29T20:18:30.238Z · LW · GW

I disagree. It is rational to exploit interpersonal communication for clarity and comfortable use between persons. If the 'language of rationality' can't be understood by the 'irrational people', it is rational to translate as best you can, and that can include utilizing societal norms. (For clarity and lubrication of the general process.)

Comment by JoeShipley on A social norm against unjustified opinions? · 2009-05-29T20:14:58.870Z · LW · GW

I agree here: Reading stuff like this totally makes me cringe. I don't know why people of above average intelligence want to make everyone else feel like useless proles, but it seems pretty rampant. Some humility is probably a blessing here, I mean, as frustrating as it is to deal with the 'profoundly stupid', at least you yourself aren't profoundly stupid.

Of course, they probably think given the same start the 'profoundly stupid' person was given, they would have made the best of it and would be just as much of a genius as they are currently.

It's a difficult realization, when you become aware you're more intelligent than average, to be dropped into the pool with a lot of other smart people and realize you really aren't that special. I mean, in a world of some six billion-odd, if you are a one-in-a-million genius, that still means you likely aren't in the top hundred smartest people in the world and probably not in the top thousand. It kind of reminds me of grad school stories I've read, with kids who think they are going to be a total gift to their chosen subject ending up extremely cynical and disappointed.

I think people online like to exaggerate their eccentricity and disregard for societal norms in an effort to appeal to the stereotypes for geniuses. I've met a few real geniuses IRL and I know you can be a genius without being horribly dysfunctional.

Comment by JoeShipley on This Failing Earth · 2009-05-29T20:00:58.970Z · LW · GW

I think those problems weren't caused by too much intelligence, but by too little. I know, intelligence enables these problems to form in the first place -- these entities wouldn't be making the problems if they weren't volitional agents with intelligence -- but that seems like a kind of cop-out complaint. Without intelligence there wouldn't be any problems, sure, but there also wouldn't be anything positive either, no concepts whatsoever.

Pollution is a great example: it's intelligent thought that allowed us to start making machines that polluted. Intelligence allowed us to realize we could trade the well-being of the environment for money by trashing it.

More intelligence realizes that this is still a value trade-off, that you aren't getting something for nothing -- depending on the rate at which you do this, you could seriously damage yourself and the people around you. You have to weigh the costs against the benefits, and if the benefit is 'some money' and the cost is 'destroying the world', the intelligent choice becomes clear. To continue to act for the money isn't intelligence; it's just insanity, overpowering greed.

The Cuban Missile Crisis may have been caused by intelligence building the structures that led up to it, but the solution wasn't to make everyone dumber so they couldn't build that kind of thing -- that just reduces overall utility. The solution is to act intelligently in ways that don't destroy the world.

I see your point about moral intelligence being considered separately, though; I hadn't thought of that in this context. It's a more elegant package to wrap everything up together, but not always the right thing to do... Thanks for the reply.

Comment by JoeShipley on This Failing Earth · 2009-05-29T19:09:39.021Z · LW · GW

Well, let's look at the short list google gives us:

the ability to comprehend; to understand and profit from experience

Capacity of mind, especially to understand principles, truths, facts or meanings, acquire knowledge, and apply it to practice; the ability to learn and comprehend.

Intelligence is an umbrella term used to describe a property of the mind that encompasses many related abilities, such as the capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.

My definition was:

To mine information from the past in order to predict the future.

My definition is certainly broader, but I think it breaks down to the same thing: in order to do anything on that list, you have to use the resources of past experience in order to apply information to future experience. I've been using that definition, roughly, since 'Consciousness Explained' came out, and I've never been in a discussion with somebody who felt it was a ridiculous way of describing the functional nature of intelligence, especially in terms of biological utility. Do you just dislike summation and want a long, drawn-out definition?

Either way, no matter which definition you use, the capabilities granted by extra intelligence do not generate extra problems, but help you mitigate the problems that come up. I don't really understand the criticism in the context of the discussion.

Comment by JoeShipley on Eric Drexler on Learning About Everything · 2009-05-29T07:19:49.273Z · LW · GW

I almost totally agree here. The implementations are often provocative rather than rational. It's an emotionally charged subject and, yeah, the formalization and thorough understanding of the problems it addresses leaves a lot to be desired. Yet some of the ideas are right, and just discarding the commentary makes the problem seem worse to anyone introduced to those ideas while encountering your average 'bunch of intellectual-seeming guys' style forum. I wouldn't say that feminists generally have "no reasoning" about the problems involved -- that strikes me as a bit too wide a generalization. Thanks a lot for the well-referenced post.

Comment by JoeShipley on This Failing Earth · 2009-05-28T15:45:23.334Z · LW · GW

I agree, but you still need evidence for the tiny event you are ignoring. Acting on the assumption that you are special or in a unique situation, without any justification other than a blind guess and a worry that you are neglecting an opportunity, is dangerous. It's the mediocrity principle: when we assume something amazing about ourselves, like that the Earth is the center of the universe, we tend to end up finding out otherwise.

Contrast with the anthropic principle, in that we know we must account for the universe being capable of supporting at least one type of intelligent life. The number of ways that could go wrong is already gigantic, so we've already hit the jackpot once. How many times in a row do we win the lottery?

I see what you mean about a tiny event with high utility, but compare these two cases (a rough expected-value sketch follows them):

1) Driving or walking slightly out of your way for 1 extra minute a day to check if a certain apartment building has opened up a unit you are looking to rent (Low chance, but no reason to squander the opportunity for small cost.)

2) Picking up every piece of paper you see, on the chance that some number of the pieces of paper could be lottery tickets and some number of those tickets could be winning ones. (Extremely high utility, extremely low chance, and most importantly, you don't have any reason to assume or guess somebody is around discarding used lottery tickets: You just know that it is possible.)
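To make the contrast concrete, here is a minimal expected-value sketch in Python. Every number in it (probabilities, payoffs, costs) is a made-up illustration rather than anything from this thread; the only point is that a sufficiently tiny probability swamps even an enormous payoff.

```python
# Illustrative expected-value comparison; all numbers are invented for the example.

def expected_gain(p_success: float, payoff: float, cost: float) -> float:
    """Net expected gain of acting on a long-shot opportunity."""
    return p_success * payoff - cost

# 1) Detouring past the apartment building: small daily cost, small but real chance.
apartment = expected_gain(p_success=0.01, payoff=500.0, cost=1.0)

# 2) Picking up every scrap of paper hoping it's a winning lottery ticket:
#    enormous payoff, but a probability so tiny the cost dominates completely.
lottery = expected_gain(p_success=1e-12, payoff=1e7, cost=1.0)

print(f"apartment detour: {apartment:+.2f}")  # positive: worth the minute
print(f"paper picking:    {lottery:+.6f}")    # ~ -1.0: effectively pure cost
```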

The chances described here are even longer odds than the second case. The top 10^-30% is a truly minuscule set out of the whole. (Even if it is still a Vast-beyond-imagining set, because we are dealing with Everett branches...) If we are in a particularly special branch, how do we take advantage of that? What useful information does that give us? At worst it will mislead our understanding of the universe, and at best it is barely noticeable.

Comment by JoeShipley on Eric Drexler on Learning About Everything · 2009-05-28T01:27:56.951Z · LW · GW

I think it's all too typical in geek culture for someone to consider just one topic worthy of his or her intellect. I run into this sort of person a lot -- at any given programming convention, for example -- but it's certainly not everybody. Still, a lot of people scoff:

"Philosophy? It's all total bs, who knows the answers to that stuff anyway?" "Literature is for english majors. Don't make me gag." "Economics is guesswork, at least programming follows defined rules for sure" "Physics and chemistry is for newbs, biology is where it's at."

One field that gets disregarded repeatedly is feminism, or women's studies. Lots of geeks want to treat it as a solved problem, but anybody who has worked in the industry knows the ridiculous sexism that continues to pop up without the geeks in charge even noticing it. Understanding why these issues are important increases your total understanding and helps you tackle more difficult problems.

Interdisciplinary understanding -- at least of some basic points in many different fields -- gives you more than just a hammer in your toolbox for handling problems that aren't nails. I'd agree that it's essential to solving the Big Problems. The payoff of specialization in things like agriculture and industry is obvious, but with difficult problems requiring many different fields of knowledge, the clarity and bandwidth of the thoughts you can convey from a specialist on one side of the problem to a specialist on another drops to nil without some basic understanding on all sides.

Comment by JoeShipley on This Failing Earth · 2009-05-28T00:55:45.165Z · LW · GW

Phil, I'm sorry if this sounds negative, but I don't understand this attitude at all. Intelligence is how accurately you can mine the information of the past in order to predict the future. You can't possibly think that all of our problems would go away if you gave everybody in the world a lobotomy? Or is there just some preferable lower level of intelligence we should engineer people down to?

I think the historical problem with intelligence is an uneven increase across different fields, or its typical misuse. This isn't a problem with the tool, but with the protocols and practices surrounding it.

Comment by JoeShipley on This Failing Earth · 2009-05-28T00:50:39.871Z · LW · GW

This strikes a bad chord with me.

We don't accept any scientific explanation that rests on 'maybe we're just in a particularly rare or special configuration of the universe where this particular thing happens.' We assume we're in an average position in the universe and interpret our observations from that framework, because it's not good science to just assume you are special or unique and settle for that.

The probability of us just happening to be in the top # of universes out of an absolutely Vast number of universes seems vanishingly small if we have an even chance of being any given one of those universes. Given a set so huge, the vast majority of the cases are, by definition, average, and we have no reason to assume we are not an average member of the set.
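As a rough sketch of that intuition (my own framing, not anything from the thread): if there are N equally likely branches and 'special' means landing in the top fraction f of them, then under a uniform prior

```latex
P(\text{special}) \;=\; \frac{f \cdot N}{N} \;=\; f
```

so the chance of being special is exactly as small as however narrowly you define 'special'.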

It disturbs me in some way to think of a universe where nobody has ever had a loved one die, been in a car accident, or even accidentally spilled milk. But if every possible configuration must be real, that would mean there is not just one of these universes but a Vast number of them, some of which just had their first car wreck today... Living in one of these ridiculous, disturbing branches would probably fantastically screw with your idea of reality.

If, instead of by political and social organization and unity of purpose, we sorted by 'number of unfortunate events happening' as above, it's intuitive to think we seem to be around the average: nothing ridiculously unlikely seems to persist in our universe, and probability seems to roll exactly as we'd expect.

Comment by JoeShipley on Introduction Thread: May 2009 · 2009-05-27T23:24:48.497Z · LW · GW

I apologize if I caused you any stress; thanks for filling me in on the details. I would think some psychiatrists exist who are relatively fine with people having a panic attack during a session and escaping without 'getting them into trouble' -- it seems there's a niche to be filled, since lots of people 'don't test well' (in a sense) while they might be otherwise fine in a normal conversation without the connotations. I think I understand your trepidation, though: the world of mental health professionals certainly has a dark side, in that employees who don't care, or who irrationally decide things on too little evidence, can wreak severe consequences on people's lives.

Comment by JoeShipley on This Failing Earth · 2009-05-27T23:07:29.186Z · LW · GW

It seems like we have a sample size of zero, as successes are not, by definition or axiom, noticeable -- certainly possibly noticeable, but not required to be so. Failures are also not required to be noticeable. No Earth-like planets sustaining life, or having evidence of having sustained life, have been documented yet. The probability estimate is useless, with a total margin of error.

Comment by JoeShipley on Introduction Thread: May 2009 · 2009-05-27T22:13:57.601Z · LW · GW

You certainly prove your chops in your comments, which I always enjoy reading. I was curious: Do you think it is wholly rational to self-diagnose mental/social abnormalities, problems, or diversities?

It seems like it would be a difficult problem to tackle objectively, because:

1) The payoff, one way or another, is pretty intense: either understanding a label that makes you unique and explains a lot about your life, fitting a new piece into your identity, or learning of another thing that is wrong. These are intensely personal revelations either way.

2) If you suspect something is seriously different about your brain, you may suffer a confirmation bias in reviewing the data, quick to jump on different topics.

3) The existence of supportive social groups like the neurodiversity community you listed allows quick admission into a network of people who seem to understand your problems and will eventually likely respect you and your opinions, which is one of the basic requirements people have for happiness and something that is generally sought after (Maslow's pyramid and all). This is another element of incentive, subconscious or conscious.

I'm not saying that you're wrong -- from the things you've described, you're probably right. I'm just curious what you think: self-diagnosis of a brain-related, social-affecting, central-to-personal-identity disorder seems like an extremely tricky position to maneuver through, even for absolutely exceptional minds.

Comment by JoeShipley on This Failing Earth · 2009-05-27T19:35:11.224Z · LW · GW

I don't think this is the case. It seems like we keep running into problems that are based more on our biological heritage than on our personal intelligence: for example, limitations in our ability to accept evidence that goes against the 'sacred' beliefs of a group, or the idea that a belief told to you by a trusted peer or authority figure is more valid than something with reputable evidence. This might have been valuable to ancient societies for maintaining cohesion, but less so in a world that keeps piling up uncomfortable evidence against some sets of beliefs.

I think many aspects of our biology stack the deck against us in solving the Big Problems. Conspicuous manipulation in the form of eugenics as it was implemented is a horrible crime I couldn't condone, of course, but it might have been a solution that worked, if it brought us more intelligent (not in the sense of high IQ, but all-around intelligent) people. Imagine a curve of capability that increases as our knowledge increases; having a higher baseline means we reach that much higher in the present, possibly to the critical mass necessary to reach the tumbling cascade of solutions so many people hope to see.

On the last statement: intelligence is certainly not the problem, nor is 'intelligence' responsible for the larger part of the problems that confront us in the first place. Intelligence is just a measure of capability, and we should hope to increase our capability even though it also means increasing the risk inherent in each individual. To hope that our intelligence will stagnate at around the current level is completely defeatist -- we aren't going to solve the problems we've created without the intelligence to deal with them.

Comment by JoeShipley on Least Signaling Activities? · 2009-05-27T07:03:27.495Z · LW · GW

If the ancient (proto-human) mental construction of 'self' was a remodeled, turned-inside-out version of the 'other people' mental construction, the distinction between signaling to nobody and signaling to yourself may not be on as sturdy ground as it seems.

The idea seems to make sense: evolution doesn't jump in huge strides, so the progress from not-having-a-self to having-a-self must have been cumulative. The only place for similar parts to be worked on and advanced is within social behavior with your group, so it at least seems reasonable on the surface. Dennett suggests, in 'Consciousness Explained' and 'Darwin's Dangerous Idea', the role that understanding our peers played in eventually training us to understand ourselves; Dawkins mentions things like this in 'The Ancestor's Tale'; and Clegg's book 'Upgrade Me' cites a similar human origin story.

So even when you are alone, you are perhaps still communicating and signaling to an audience of one. You are signaling, to the entity you understand to be yourself, the behaviors and actions you believe to be socially (personally) acceptable for you to perform within the confines of that private, one-person audience.

It may sound strange, but given the evolutionary underpinnings, the 'self' may not be so wholly divorced a concept from 'other people' as we thought. Since these behaviors only seem to indicate a change in expectations rather than a release of all social restrictions and responsibilities, I am not sure that these activities really signal a clearer and more rational state of thinking -- just a different audience, a different game. Just being alone doesn't necessarily strip off the animal reasoning.

Perhaps fewer resources are devoted to the social game, and that would be a legitimate reason to trust someone's reasoning more, but then there are benefits to social reasoning too.