Posts

The Restoration of William: the skeleton of a short story about resurrection and identity 2013-11-15T17:59:25.789Z

Comments

Comment by AlanCrowe on Barefoot FAQ · 2024-03-26T22:37:49.981Z · LW · GW

I started going barefoot in the streets of Edinburgh in February 2000. Eventually I wrote a little web page explaining myself. I didn't want to duplicate what was on the Society For Barefoot Living website, so I narrowed my focus to a single aspect. Twenty-four years later, I still go barefoot nearly all the time. Rescuing the text to paste it here, I notice that it has stood the test of time very well :-)

Hard surfaces

Modern life involves much walking on hard surfaces: pavements, reinforced concrete floors, steel decking. It is worth pondering whether shoes provide adequate cushioning. In modern shoes, your heel hits the pavement first, before the rest of your foot. With the pace of modern life, hit is the right word, and the cushioning provided by the heel of your shoe as you pound the pavement is at issue.

I think that the cushioning is inadequate and cannot be improved because the basic concept is faulty. One is better off spending a few months learning to walk barefoot.

Wait a minute! There is no cushioning at all under the heel when you walk barefoot; how can that be an improvement? It is time to get technical and explain the difference between a soft material approach to cushioning and a mechanical approach.

A soft materials approach to cushioning

Softness is a three dimensional phenomenon. When you compress a material it squeezes out sideways. Typically it pushes out about a third of the amount of compression. This number is called Poisson's ratio. This is the beginning of the story, not the end. Try holding a pan scourer, one of those little blocks of sponge, between the palms of your hands as though you were clapping. Squeeze and it compresses. You knew that. Now try bringing your little fingers together without moving your thumbs. It resists being squeezed, but does very little to keep your palms parallel. Now try a shearing action, as though you were rolling a piece of Plasticine between your hands. You encounter a little more resistance than you did when squeezing; you will need to squeeze a little to stop it sliding. Now try a twisting action, by pointing the fingers of one hand down and the fingers of the other hand up. Again you will need to squeeze a little to stop the sponge from sliding. The softness that cushions your clapping to silence has brought with it flexibility in five other motions. A nice, soft shoe heel would wobble all over the place and be too squishy to walk on.
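A back-of-the-envelope sketch of that "about a third" figure (my own illustration, with invented numbers, not from the original text): for a cubical block, the sideways bulge is just Poisson's ratio times the compression.

```python
# Hypothetical numbers, just to illustrate Poisson's ratio:
# transverse strain = -nu * axial strain.
nu = 0.3        # "about a third", typical for many materials
side = 30.0     # mm, edge length of a cubical sponge block
squeeze = 3.0   # mm of compression between the palms

axial_strain = -squeeze / side        # -0.1, negative = compression
trans_strain = -nu * axial_strain     # +0.03, positive = sideways bulge
bulge = trans_strain * side           # mm each side pushes out

print(bulge)  # 0.9 mm: a third of the 3 mm squeeze
```

For a cube the aspect ratio drops out, so the bulge is exactly nu times the squeeze, which is the rule of thumb in the paragraph above.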

A mechanical approach to cushioning

To experience a mechanical approach, sit on the wing of your car. Your weight makes it sink an inch or two. Isn't that the compression of the air-filled rubber tire? Well, it is in plain view, so look and see. It has hardly squished at all. To find out where the motion has come from you have to look up inside the wheel arch so that you can see the suspension. Most of the motion has come from a mechanism. Your weight has made a lever pivot about its hinge so that it stretches a spring. There is an important technical reason for car makers taking this expensive mechanical approach, instead of relying on soft materials. The mechanism decouples the different motions. The stiffness of the motion that makes the hinge pivot is determined by the spring. The stiffness of other motions is determined by how solid the hinge is. The manufacturer can choose the softness of the spring to suit the single motion that the hinge permits. The mechanism retains the desired stiffness in other directions independently of the softness of the spring. This is the kind of sophistication one wants of a shoe if it is to measure up to the demands of modern life.

Conclusion

Ideally your leg would have a small lever (304·8mm long) hinged onto the bottom of it. The tip of the lever would contact the ground first, and as your weight came on to that leg, it would pivot about the hinge stretching a spring to absorb the impact and lower your heel gently to the ground. If you are carrying a heavy rucksack the springs would have to be adjusted for the heavier load. Worse, if you were carrying a heavy suitcase with one hand, the springs would have to be adjusted differently and readjusted when you changed hands! So it needs to be an active spring under micro-processor control.

How much would such a pair of shoes cost? $500, $5000, who cares? You already own a pair that came free, as your body's standard equipment. The small lever is called the foot, the hinge is called the ankle, the spring is the Achilles tendon, the adjustment and damping is provided by the calf muscle. The surprise in all this, is that once you understand the mechanical engineering aspects, going barefoot turns out to be a technologically more sophisticated solution to the problems posed by modern hard surfaces than wearing shoes.

The transition to going barefoot is hard. You need to get your eye in for spotting broken glass. You need to sharpen up your foot-eye co-ordination, so you can avoid it once you have seen it. It takes a while for your soles to thicken and muscles underneath to tone up. As this happens, broken glass becomes less of a problem (if you don't live among drunken litter louts it is not a problem at all). It takes some months to get your calf muscles toned up and to learn to use them correctly. You have to place your foot, not scuff it; as though you were reaching forward to grab the pavement with your toes and pull it back underneath you.

The payoff for all this effort is wonderful. You literally get a spring in your step. Walking becomes a pleasure, like dancing, instead of being a misfortune endured when your car breaks down. You can use the new strength in your ankles to rise up a couple of inches when climbing stairs. Steep stairs become shallow and you feel twenty years younger.

Is there anything I want to add in 2024? Yes, a subtle point about geometry. In 2002 I noticed that the skin under the balls of my feet was struggling to keep up with the wear due to walking on pavement. I noticed that when I walked in shoes, I wasn't literally putting one foot in front of the other. The right foot would be placed in front of where the right foot had been. The left foot would be placed in front of where the left foot had been. But the two feet followed parallel tracks about 9 inches apart. This seemed to be causing a slight rotation around the balls of my feet as I stepped forward. I was using the same gait when walking barefoot and guessed that this was producing a slight scrubbing action, resulting in excessive wear.

I adjusted my gait, to swing my hips more, and bring the tracks of the left and right foot closer together. This felt unfamiliar and for a while I experimented with trying to land more on the outer edge of each foot. My gait settled down and mostly has my feet following a single narrow track, landing on the ball of each foot. This solved the problem of excessive skin wear. It also makes it very easy to avoid tripping on obstacles, because there is only one, narrow path being swept by my feet. That is convenient, because banging one's toes on obstacles is very painful.

My 2024 addition is partly prompted by the tag "Self Experimentation". I suspect that I enjoy going barefoot because my curiosity and spirit of self experimentation have led to what I call the "hoof to paw transformation". Feeling different textures is part of the fun. I see textures ahead and adjust my path to land on them. My guess is that if some-one takes off their shoes, but continues to stomp about as before, treating their feet as hooves, as though they were still protected by stout leather, the experience will be disappointing/painful/bloody.

Comment by AlanCrowe on Most experts believe COVID-19 was probably not a lab leak · 2024-02-03T21:17:38.880Z · LW · GW

This reminds me of a passage in Richard Feynman's memoir "What do you care what other people think?". Four pages into the chapter Gumshoes (page 163 in the Unwin Paperback edition):

 

Then this business of Thiokol changing its position came up. Mr. Rogers and Dr. Ride were asking two Thiokol managers, Mr. Mason and Mr. Lund, how many people were against the launch, even at the last moment.

 

"We didn't poll everyone," says Mr. Mason.

"Was there a substantial number against the launch, or just one or two?"

"There were, I would say, probably five or six in engineering who at that point would have said it is not as conservative to go with that temperature, and we don't know. The issue was we didn't know for sure that it would work."

"So it was evenly divided?"

"That's a very estimated number."

It struck me that the Thiokol managers were waffling. But I only knew how to ask simpleminded questions. So I said, "Could you tell me, sirs, the names of your four best seals experts, in order of ability?"

"Roger Boisjoly and Arnie Thompson are one and two. Then there's Jack Kapp, and, uh ... Jerry Burns."

I turned to Mr. Boisjoly, who was right there, at the meeting. "Mr. Boisjoly, were you in agreement that it was okay to fly?"

He says, "No, I was not."

I ask Mr. Thompson, who was also there.

"No. I was not."

I say "Mr. Kapp?"

Mr. Lund says, "He is not here, I talked to him after the meeting, and he said, 'I would have made that decision, given the information we had.'"

"And the fourth man?"

"Jerry Burns. I don't know what his position was."

"So," I said, "of the four, we have one 'don't know,' one 'very likely yes,' and the two who were mentioned right away as being the best seal experts, both said no." So this "evenly split" stuff was a lot of crap. The guys who knew the most about the seals --- what were they saying?

That is the end of that section of the chapter and Feynman turns to the infra-red thermometer and the temperatures on the launch pad.

That was my introduction to this aspect of bureaucratic infighting. The bureaucrat asks his technical experts, the ones closest to the issue. If he gets the answer that he wants, it is accepted. If not, he widens the pool of experts. Those too close to the issue are at risk of ignoring the social cues to the desired answer, but the wider pool of experts can be more flexible in responding to the broader social context. Then the bureaucrat gets to take an unweighted average (that is, not weighting the original experts more highly), which boosts the probability of getting the desired answer and reduces the probability of getting the correct answer.

Back in 1988 this was perhaps a busted technique. But that was many years ago. The notion of broadening your survey of experts seems to be back in fashion.

Comment by AlanCrowe on Thoughts on teletransportation with copies? · 2023-11-29T17:57:41.948Z · LW · GW

Consider the case of a reclusive mad scientist who uplifts his dog in the hope of getting a decent game of chess. He is likely to be disappointed as his pet uses his new intelligence to build a still and drink himself to death with homemade vodka. If you just graft intelligence on top of a short term reward system, the intelligence will game it, leading to wireheading and death.

 

There is no easy solution to this problem. The original cognitive architecture implements self-preservation as a list of instinctive aversions. Can one augment that list with additional aversions preventing the various slow-burn disasters that intelligence is likely to create? That seems an unpromising approach because intelligence is open ended; the list would grow and grow. To phrase it differently, an unintelligent process will ultimately be outwitted by an intelligent process. What is needed is to recruit intelligence to make it part of the solution as well as part of the problem.

 

The intelligence of the creature can extrapolate forward in time, keeping track of which body is which by historical continuity and anticipating the pleasures and pains of future creatures. The key to making the uplift functional is to add an instinct that gives current emotional weight to the anticipated pleasures and pains of a particular future body, defined by historical continuity with the current one.

 

Soon our reclusive mad scientist is able to chat to his uplifted dog, getting answers to questions such as "why have you cut back on your drinking?" and "why did you decide to have puppies?". The answers are along the lines of "I need to look after my liver." or "I'm looking forward to taking my puppies to the park and throwing sticks for them." What is most interesting here probably slips by unnoticed. Somehow the dog has acquired a self.

 

Once you have instincts that lead the mind to extrapolate down the world line of the physical body and which activate the reward system now according to those anticipated future consequences, it becomes natural to talk in terms of a 4-dimensional, temporally extended self, leaving behind the 3-dimensional, permanent now, of organisms with less advanced cognitive architectures. The self is the verbal behaviour that results from certain instincts necessary to the functioning of a cognitive architecture with intelligence layered on top of a short term reward system. The self is nature's bridle for the mind and our words merely expressions of instinct. We can notice how slightly different instincts give rise to slightly different senses of self and we can ask engineers' questions about which instincts, and hence which sense-of-self, give the better functioning cognitive architecture. But these are questions of better or worse, not true or false.

 

To see how this plays out in the case of teletransportation, picture two scenarios. In both worlds the technology involves making a copy at the destination, then destroying the original. In both worlds there are copy-people who use the teletransportation machines freely, and ur-people who refuse to do so.

 

In scenario one, there is something wrong with the technology. The copy-people accumulate genetic defects and go extinct. (Other stories are available: the copy-people are in such a social whirl, travelling and adventuring, that few find the time to settle down and start a family). The ur-people inherit the Earth. Nobody uses teletransportation any more, because every-one agrees that it kills you.

 

In scenario two, teletransportation becomes embedded in the human social fabric. Ur-people are left behind, left out of the dating game and marriage, and go extinct. (Other stories are available: World War Three was brutal and only copy-people, hopping from bunker to bunker by teletransportation, survived). It never occurs to any-one to doubt that the copy at the destination is really them.

 

There is no actual answer to the basic question because the self is an evolved instinct, and the future holds beliefs about the self that are reproductively successful. In the two and three planet scenarios, the situation is complicated by the introduction of a second kind of reproduction, copy-cloning, in addition to the usual biological process. I find it hard to imagine the Darwinian selective pressures at work in a future with two kinds of reproduction.

 

I think that the questions probe the issue of whether the person choosing whether to buy the lottery ticket is loyal to a particular copy, or to all of them. One copy gets to win the lottery. The other copies are down by the price of the ticket. If one is loyal to only one copy, one will choose to buy if and only if one is loyal to the winner.

 

But I conjecture that a balanced regard for all copies will be most reproductively successful. The eventual future will be populated by people who take note of the size of the lottery prize, and calculate the expected value, summing the probabilities over all of their copies.
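A minimal sketch of that calculation (my own, with invented numbers: the ticket price, prize, and per-copy win probability are all hypothetical): an agent with a balanced regard for all its copies sums expected winnings across copies and weighs them against the cost borne by every copy.

```python
# Sketch with made-up numbers: buy the ticket when the expected
# winnings summed over all copies exceed the total price paid,
# since every copy is down by the price of the ticket.
def buy_ticket(n_copies, price, prize, p_win_per_copy):
    expected_winnings = n_copies * p_win_per_copy * prize
    total_cost = n_copies * price
    return expected_winnings > total_cost

print(buy_ticket(n_copies=3, price=1.0, prize=100.0, p_win_per_copy=0.02))   # True:  6.0 > 3.0
print(buy_ticket(n_copies=3, price=1.0, prize=100.0, p_win_per_copy=0.002))  # False: 0.6 > 3.0 fails
```

Note that the number of copies cancels out of the inequality, so the balanced-regard agent decides exactly as a single person facing the same per-ticket odds would, which is perhaps why the conjecture is plausible as an evolutionary endpoint.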

Comment by AlanCrowe on Large Language Models will be Great for Censorship · 2023-08-22T21:42:36.564Z · LW · GW

From the perspective of 2023, censorship looks old fashioned; new approaches create popular enthusiasm around government narratives.

 

For example, the modern way for the Chinese to handle Tiananmen Square is to teach the Chinese people about it, how it is an American disinformation campaign that aims to destabilize the PRC by inventing a massacre that never happened, and this is a good example of why you should hate America.

 

Of course there are conspiracy theorists who say that it actually happened and the government covered it up. What happened to the bodies? Notice that the conspiracy theorists are also flat Earthers who think that the PRC hid the bodies by pushing them over the edge. You would not want to be crazy like them, would you?

 

Then ordinary people do the censorship themselves, mocking people who talk about Tiananmen Square as American Shills or Conspiracy Theorists. There is no need to crack down hard on grumblers. Indeed the grumblers can be absorbed into the narrative as proof that the PRC is a kindly, tolerant government that permits free speech, even the worthless crap.

 

I don't know how LLMs fit into this. Possibly posting on forums to boost the official narrative. Censorship turns down the volume on dissent, but turning up the volume on the official narrative seems to work better.

Comment by AlanCrowe on What topics are on Dath Ilan's civics exam? · 2021-04-28T21:30:27.741Z · LW · GW

My case for trigonometry: We want people to understand social cycles. For example, heroin becomes fashionable among young people because it feels good. Time goes by and problems emerge with tolerance, addiction, and overdose. The next cohort of young people see what happened to aunts and uncles etc, and give heroin a miss. The cohort after that see their aunts and uncles living clean lives, lives that give no warning. They experiment and find that heroin feels good. The cycle repeats.

 

These cycles can arise because the fixed points of the dynamics are unstable. The classic simple example uses a second order linear differential equation as a model with a solution such as $e^{at} \sin kt$. We really want people to have some sense of cycles arising from instabilities without anyone driving them. We probably cannot give simple examples of what we mean without trigonometric functions.
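A minimal numeric sketch (my own, with invented parameters): $e^{at}\sin kt$ solves $x'' - 2ax' + (a^2+k^2)x = 0$, and even a crude Euler integration of that equation shows an oscillation that repeats with ever-growing amplitude, instability with no external driver.

```python
import math

# Illustration with made-up parameters: the linear model
#   x'' - 2a x' + (a^2 + k^2) x = 0,  a > 0
# has the growing oscillation x(t) = exp(a t) * sin(k t).
a, k = 0.1, 2.0            # growth rate and angular frequency
dt, steps = 1e-4, 100_000  # step size and horizon (t runs 0 to 10)

x, v = 0.0, k              # initial conditions matching exp(a t) sin(k t)
for _ in range(steps):
    acc = 2*a*v - (a*a + k*k)*x   # x'' from the differential equation
    x, v = x + v*dt, v + acc*dt   # explicit Euler step

t = steps * dt
closed = math.exp(a*t) * math.sin(k*t)   # the analytic solution
print(round(x, 2), round(closed, 2))     # numeric and analytic agree closely
```

The heroin story in the paragraph above is, of course, nonlinear, but this is the simplest model in which "cycles arising from instabilities" can be exhibited, and reading off its behaviour requires exactly the trigonometry being argued for.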

Comment by AlanCrowe on Old urine samples from the 2008 and 2012 Olympics show massive cheating · 2016-11-26T12:24:28.546Z · LW · GW

I think that this is especially bad for science because science doesn't have anything equivalent to test and analyze before the medals are handed out. Peer review isn't an adversarial process aimed at detecting fraud. Anti-fraud in science is entirely based on your published papers being analogous to the stored urine samples; you are vulnerable to people getting round to checking, maybe, one day, after you've spent the grant money. If we can translate across from the Olympic experience we are saying that that kind of delayed anti-fraud measure works especially poorly with humans.

Comment by AlanCrowe on Turning the Technical Crank · 2016-04-07T20:09:00.307Z · LW · GW

My analysis saw the fundamental problem as the yearning for consensus. What was signal? What was noise? Who was trolling? Designers of forum software go wrong when they believe that these are good, one-place questions with actual one-place answers. The software is designed in the hope that its operation will yield these answers.

My suggestion, Outer Circle, got discussed on Hacker News under the title "Saving forums from themselves with shared hierarchical white lists", and I managed to flesh out the ideas a little.

Then my frail health got even worse and I never did anything more :-(

Comment by AlanCrowe on Survey: What's the most negative*plausible cryonics-works story that you know? · 2016-01-01T22:23:40.731Z · LW · GW

I think there are ordering constraints on the sequence of technological advances involved. One vision of how revival works goes like this: start with a destructive, high resolution scan of the body, then cure illness and death computationally, by processing the data from the scan. Finally use advanced nano-technology to print out a new, well body.

Although individual mammalian cells can be thawed, whole human bodies are not thawable. So the nano-technology has to be warm as well as macroscopic. Also a warm, half printed body is not viable, so printing has to be quick.

Well before the development of warm, fast, macroscopic nano-technology, society will have cryogenic, microscopic, slow nano-technology. Think about being able to print out a bacterium at 70K in a week, and a mammalian cell in a year. What could you do with that technology?

You could print human stem cells for rejuvenation therapies. You could print egg cells for creating designer babies. The first round of life extension is stem cells for existing people, and genetically engineered longer life spans for newborns. The second round of life extension provides those with a genetically engineered longer life span with stem cell based rejuvenation therapies. The third round of life extension involves co-designing the designer babies and the stem cell therapies to make the rejuvenation therapies integrate smoothly with the long-life-span bodies. Somewhere in all this intelligence gets enhanced to John von Neumann levels (or above).

Developing warm, fast, macroscopic nano-technology is a huge challenge. Let us accept Academian's invitation to assume it is developed eventually. That is not too big a leap, for the prior development of cryogenic, slow, microscopic nano-technology was world changing. The huge challenge is faced by super-clever humans who live for tens of thousands of years. They do indeed develop the necessary technology and revive you.

Now what? Humans who live for tens of thousands of years have probably improved pet dogs and cats to live for thousands of years. They may even have uplifted them to higher levels of intelligence than 21st century humans. They will have an awkward relationship with the 21st century humans they have revived. From their perspective, 21st century humans are stupid and age rapidly, to a degree that is too uncongenial to be tolerated in companion animals. Being on the other end of this perspective will be heart-breaking.

Comment by AlanCrowe on Survey: What's the most negative*plausible cryonics-works story that you know? · 2015-12-25T00:58:15.360Z · LW · GW

Most world changing technological breakthroughs are easy compared to resurrecting the frozen dead. Much precedes revival. As the centuries give way to millennia Humans are replaced by Post Humans. As the millennia give way to myriad years Post Humans are replaced by New Humans. As myriad years give way to lakhs of years New Humans are replaced by Renewed Humans. As the lakhs give way to millions of years Renewed Humans are replaced by Real Humans.

The Real Humans develop the technology to revive the frozen dead. They use it themselves as an ambulance to the future. They revive a small number of famous Renewed Humans who lived lives of special note.

When you are revived, you face three questions. Why have they revived you? Why do the doctors and nurses look like anthropomorphic cats and dogs? Are Real Humans furry fans?

The answer to the second question is that they look like cats and dogs because they are the descendants of cats and dogs. The Real Humans still have domestic pets. They have uplifted them to the intellectual level of New Humans. Which raises an interesting puzzle. First time around the New Humans were Lords of the galaxy for thousands of years. Second time around they are domestic pets. How does that work?

The dogs and cats are imbued with the spirit of mad science. It seems natural and proper to them that the Real Humans would create double super intelligent cats and dogs as animal companions and it seems natural to them to do something similar in their turn. Asking permission, they use their masters' technology of resurrection to revive some 21st century humans.

Imbued with the spirit of mad science, printing out ortho-human bodies is a little dull (as are 21st century humans). It is more fun to create novel bodies, centaurs, bird people who can fly (or at least glide) etc. The cats and dogs are not cruel. They don't print people out in bodies they didn't ask for. They do tend to revive furry fans, the con going, fursuit wearing, obsessive ones. When the cats and dogs emulate them, they ask to be printed out in anthropomorphic animal bodies and designing them is a fun challenge.

You ask if you can speak to a Real Human. Your request causes much merriment but it is not refused. It is awkward. The Post Humans did use 300 Hertz to 3kHz acoustic signals for interpersonal communication, but the New Humans used radio-telepathy amongst themselves. The dogs and cats are not too clear about what the Real Humans do, but the real cause of merriment is not the obsolescence of acoustic speech. It is not true to say that Real Humans are individuals. Nor is it true to say that they have formed a hive mind. It is hard to explain, but they don't really go in for interpersonal communication. The fun lies in trying to explain the obsolescence of interpersonal communication to a creature so archaic that one has to resort to interpersonal communication to explain that no-one does that any more.

Oh well. You have been successfully revived but your social status as a domestic pet's domestic pet is low, and the world, millions of years after your first death, is utterly incomprehensible. You try to settle into life with the other 21st century revivals. They are not really your kind of people. You make a few friends but they all have animal heads and fur covered bodies. Consumed with self-loathing due to being seduced into participating in their polymorphous and perverse orgies, you kill yourself again and again and again ... The dogs and cats are kind creatures by their own lights and feel obliged to reprint you if you have a bad spell mentally and kill yourself yet again.

Comment by AlanCrowe on Rationality Quotes Thread March 2015 · 2015-03-05T21:01:19.657Z · LW · GW

One problem is that most people think we are always in the short run. No matter how many times you teach students that tight money raises rates in the short run (liquidity effect) and lowers them in the long run (income and Fisher effects), when the long run actually comes around they will still see the fall in interest rates as ECB policy "easing". And this is because most people think the term "short run" is roughly synonymous with "right now." It's not. Actually "right now" we see the long run effects of policies done much earlier. We are not in an eternal short run. That's the real problem with Keynes's famous "in the long run we are all dead."

Scott Sumner

Comment by AlanCrowe on Quotes Repository · 2015-02-10T20:27:48.280Z · LW · GW

The quote is easier to understand if you are familiar with Bradshaw.

Comment by AlanCrowe on How can one change what they consider "fun"? · 2014-11-23T20:31:42.972Z · LW · GW

A computational process is indeed much like a sorcerer's idea of a spirit. It cannot be seen or touched. It is not composed of matter at all. However, it is very real. It can perform intellectual work. It can answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm in a factory. The programs we use to conjure processes are like a sorcerer's spells. They are carefully composed from symbolic expressions in arcane and esoteric programming languages that prescribe the tasks we want our processes to perform.

Frederick P. Brooks Jr. wrote something similar in The Mythical Man-month, 22 years earlier.

Finally, there is the delight of working in such a tractable medium. The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures. (As we shall see later, this very tractability has its own problems.)

Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. It prints results, draws pictures, produces sounds, moves arms. The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.

Comment by AlanCrowe on Rationality Quotes November 2014 · 2014-11-03T18:49:16.690Z · LW · GW

Stories always outlasted their usefulness.

That is an interesting thought. When I try to ground it in contemporary reality my thoughts turn to politics. Modern democratic politics is partly about telling stories to motivate voters, but which stories have outlasted their usefulness? Any answer is likely to be contentious.

Turning to the past, I wrote a little essay suggesting that stories of going back to nature to live in a recent golden age when life was simpler may serve as examples of stories that have outlasted their usefulness by a century.

Comment by AlanCrowe on 2014 Less Wrong Census/Survey · 2014-10-24T18:34:20.782Z · LW · GW

I took the survey. Started on the BSRI but abandoned it because I found the process of giving vague answers to vague questions distressing.

Comment by AlanCrowe on Rationality Quotes September 2014 · 2014-09-05T23:07:11.384Z · LW · GW

I don't see what to do about gaps in arguments. Gaps aren't random. There are little gaps where the original authors have chosen to use their limited word count on other, more delicate, parts of their argument, confident that charitable readers will be happy to fill the small gaps themselves in the obvious ways. There are big gaps where the authors have gone the other way, tiptoeing around the weakest points in their argument. Perhaps they hope no-one else will notice. Perhaps they are in denial. Perhaps there are issues with the clarity of the logical structure that make it easy to whiz by the gap without noticing it.

The third perhaps is especially tricky. If you "re-express your target’s position ... clearly" you remove the obfuscation that concealed the gap. Now what? Leaving the gap in clear view creates a strawman. Attempting to fill it draws a certain amount of attention to it; you certainly fail the ideological Turing test because you are making arguments that your opponents don't make. Worse, big gaps are seldom accidental. They are there because they are hard to fill. Indeed it might be the difficulty of filling the gap that made you join the other side of the debate in the first place. What if your best effort to fill the gap is thin and unconvincing?

Example: Some people oppose the repeal of the prohibition of cannabis because "consumption will increase". When you try to make this argument clear you end up distinguishing between good-use and bad-use. There is the relax-on-a-Friday-night-after-work kind of use which is widely accepted in the case of alcohol and can be termed good-use. There is the behaviour that gets called "pissing your talent away" when it is beer-based. That is bad-use.

When you try to bring clarity to the argument you have to replace "consumption will increase" by "bad-use will increase a lot and good-use will increase a little, leading to a net reduction in aggregate welfare." But the original "consumption will increase" was obviously true, while the clearer "bad+++, good+, net--" is less compelling.

The original argument had a gap (just why is an increase in consumption bad?). Writing more clearly exposes the gap. Your target will not say "Thanks for exposing the gap, I wish I'd put it that way.". But it is not an easy gap to fill convincingly. Your target is unlikely to appreciate your efforts on behalf of his case.

Comment by AlanCrowe on "Follow your dreams" as a case study in incorrect thinking · 2014-08-27T17:17:42.002Z · LW · GW

Plus a social mechanism that turns follow-your-dreams versus be-sensible into a hard choice that doesn't much matter.

Also, we can look at the mechanism and see that it affects some people more than others. If you have a common dream, such as being a poet or a novelist, the mechanism is hard at work, flattening the plateau. An example of an uncommon dream is harder to come by.

Once upon a time (1960?) the electric guitar was new. If you formed a band playing electric guitars you would encounter two kinds of opposition. One is "don't be a musician, too many people want to be musicians." The other is "learn violin or trumpet, not something faddy like electric guitar, electric guitar isn't going to last." But some players turned into rock stars and soon everyone wanted to play electric guitar, turning it into a common dream and spoiling it as an example of an uncommon dream.

I think there is a similar tale to tell about computer games. Once upon a time (1980?) computer games were new. If you wanted to be a computer game programmer, it was an uncommon dream and you could succeed. Now it is a common motivation for young people studying computer science and the job niche is over-subscribed.

Comment by AlanCrowe on "Follow your dreams" as a case study in incorrect thinking · 2014-08-22T20:10:35.192Z · LW · GW

One interesting idea in this space is Compensating Differentials. There is a mismatch between the jobs that people want to do and the jobs that need doing. Wage differences help to reduce the mismatch.

When an ordinary person tries to optimize their life they face a trade-off. Stick to the line of work they like, which too many other people also like, and be poorly paid, or try something worse for more money. Non-ordinary persons may strike it lucky, finding that they personally like a line of work which is necessary and unpopular and thus well paid. The compensating differential is free money, but only for an eccentric few.

Returning to the plight of the ordinary person, they face a puzzle. They would like to make the right compromise to maximize their happiness, but the labour market is continually offering them six of one and half a dozen of the other. If they stick to the work they love, but for less money, it is a lot less money and not clearly worth it. On the other hand, that sucky job that pays really well turns out to be really hard to put up with and not clearly worth the extra money. If you are a typical person, with common preferences, then compensating differentials make the peak broad and flat.

That could be fairly upsetting. One might like to have a clearly defined optimum. Then one can say "I'll change my life, do X, then I'll be as happy as I can be." But most changes have matching advantages and disadvantages. One can easily feel lost.

That could be fairly liberating. With a broad plateau, you don't have to be too careful about avoiding sliding down the slopes at the sides. You are free to be yourself, without great consequences.

Comment by AlanCrowe on Rationalist Fiction · 2014-02-24T18:00:34.926Z · LW · GW

Discussed here

Comment by AlanCrowe on True numbers and fake numbers · 2014-02-06T20:47:28.712Z · LW · GW

I think there is a tale to tell about the consumer surplus and it goes like this.

Alice loves widgets. She would pay $100 for a widget. She goes on line and finds Bob offering widgets for sale for $100. Err, that is not really what she had in mind. She imagined paying $30 for a widget, and feeling $70 better off as a consequence. She emails Bob: How about $90?

Bob feels like giving up altogether. It takes him ten hours to hand craft a widget and the minimum wage where he lives is $10 an hour. He was offering widgets for $150. $100 is the absolute minimum. Bob replies: No.

While Alice is deciding whether to pay $100 for a widget that is only worth $100 to her, Carol puts the finishing touches to her widget making machine. At the press of a button Carol can produce a widget for only $10. She activates her website, offering widgets for $40. Alice orders one at once.

How would Eve the economist like to analyse this? She would like to identify a consumer surplus of 100 - 40 = 60 dollars, and a producer surplus of 40 - 10 = 30 dollars, for a total gain from trade of 60 + 30 = 90 dollars. But before she can do this she has to telephone Alice and Carol and find out the secret numbers, $100 and $10. Only the market price of $40 is overt.
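Eve's bookkeeping is simple enough to set down in code. A minimal sketch (the function name and layout are mine, purely for illustration):

```python
def surpluses(value_to_buyer, price, cost_to_seller):
    # Consumer surplus: what the buyer would have paid minus what she did pay.
    consumer = value_to_buyer - price
    # Producer surplus: what the seller was paid minus what it cost her.
    producer = price - cost_to_seller
    return consumer, producer, consumer + producer

# Alice buys from Carol: worth $100 to Alice, priced at $40, costs Carol $10.
assert surpluses(100, 40, 10) == (60, 30, 90)
# After the fire, Alice buys from Bob at his reservation price: no surplus at all.
assert surpluses(100, 100, 100) == (0, 0, 0)
```

The point of the tale is that only the middle argument, the market price, is overt; the other two are the secret numbers Eve cannot extract.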

Alice thinks Eve is spying for Carol. If Carol learns that Alice is willing to pay $100, she will up the price to $80. So Alice bullshits Eve: Yeh, I'm regretting my purchase, I've rushed to buy a widget, but what's it worth really? $35. I've overpaid.

Carol thinks Eve is spying for Alice. If Alice learns that they only cost $10 to make, then she will bargain Carol down to $20. Carol bullshits Eve: Currently they cost me $45 to make, but if I can grow volumes I'll get a bulk discount on raw materials and I hope to be making them for $35 and be in profit by 2016.

Eve realises that she isn't going to be able to get the numbers she needs, so she values the trade at its market price and declares GDP to be $40. It is what economists do. It is the desperate expedient to which the opacity of business has reduced them.

Now for the twist in the tale. Carol presses the button on her widget making machine, which catches fire and is destroyed. Carol gives up widget making. Alice buys from Bob for $100. Neither is happy with the deal; the total of consumer surplus and producer surplus is zero. Alice is thinking that she would have been happier spending her $100 eating out. Bob is thinking that he would have had a nicer time earning his $100 waiting tables for 10 hours.

Eve revises her GDP estimate. She has committed herself to market prices, so it is up 150% at $100. Err, that is not what is supposed to happen. Vital machinery is lost in a fire, prices soar and goods are produced by tedious manual labour, the economy has gone to shit, producing no surplus instead of producing a $90 surplus. But Eve's figures make this look good.

I agree that there is a problem with the consumer surplus. It is too hard to discover. But the market price is actually irrelevant. Going with the number you can get, even though it doesn't relate to what you want to know, is another kind of fake, in some ways worse.

Disclaimer: I'm not an economist. Corrections welcomed.

Comment by AlanCrowe on Happiness and Productivity. Living Alone. Living with Friends. Living with Family. · 2013-11-19T16:46:08.272Z · LW · GW

I read your link. Here is what I got from it.

There are three ways to write a novel.

1) Hemingway/Melville: Do stuff, write about it.

2) Kaleidoscope: Study literature at university. Read more novels. Go to writers' workshops. Read yet more novels. Write a million words of juvenilia. Read even more novels. Create mash-up masterpiece.

3) Irish: Sit in public house, drinking. Write great Irish Novel. How? Miraculously!

Beckett propagandizes against the Irish way, saying "My character, Krapp, tried the Irish way. He tried to help the miracle along with lots of self-obsession. It worked out badly for him; it will work out badly for you."

Comment by AlanCrowe on Bell's Theorem: No EPR "Reality" · 2013-11-18T11:59:03.502Z · LW · GW

That helps me. In his book Quantum Reality, Nick Herbert phrases it this way:

The Everett multiverse violates the CFD assumption because although such a world has plenty of contrafactuality, it is short on definiteness.

which is cutely aphoristic, but confused me. What does contrafactuality even mean in MWI?

Pointing out that MWI rejects factual definiteness clears things up nicely.

Comment by AlanCrowe on The Restoration of William: the skeleton of a short story about resurrection and identity · 2013-11-16T20:26:35.143Z · LW · GW

My health is very poor. A fleshed out version might run to 25 000 words. I'm not going to manage that. Worse than that, I don't really know how to write. They say one needs to write a million words to be any good, so the full project, learn to write, then come back and flesh it out, runs to 1 025 000 words.

Please have a go at fleshing it out yourself.

Even if you never publish it, you will have to commit to views about personal identity and how well it survives the passage of decades. Perhaps, in thirty years' time, you will rediscover your completed manuscript. You would get to look back at yourself looking forward and both compare who you are with who you thought you would become and compare the person you remembered with the author of the text. Have fun keeping track of how many of you there are.

Comment by AlanCrowe on The Restoration of William: the skeleton of a short story about resurrection and identity · 2013-11-16T11:38:32.257Z · LW · GW

The actions of the main participants are consistent with their incentives. The owners of the archiving company dodge scandal and ruin by covering up the fact that they have lost Bill's tape ("That was unthinkable."). The employees of the archiving company play along with doctoring Fred-minus30's tape ("with a bit of manual fixing of uncorrectable errors") and get to keep their jobs.

Fred-minus30 faces the harsh reality of the law that says "There can be only one." He has read his share of hologram-horror and hologram-thriller. He can blow the whistle on the cover-up and say "actually I'm not Bill, I'm a duplicate of some-one living." Whoops! That makes him the soon-to-be-euthanised victim of a hologram-horror. Or maybe he can try being the escaped hologram of a hologram-thriller by slipping away and murdering Current-Fred and replacing him. But Fred-minus30 is 30 years behind. That will totally not work. So Fred-minus30 faces strong incentives to play along and do his best job of impersonating long forgotten Bill.

mysteriously nobody notices

In the story people notice. They notice and do some serious digging. But what are their incentives? What are they digging for? Once they have dug up interesting stuff from their personal histories that they can chat about with their friends, they stop digging.

Comment by AlanCrowe on I notice that I am confused about Identity and Resurrection · 2013-11-15T17:45:30.533Z · LW · GW

I think that is the right question and plunge ahead giving a specific answer, basically that "the self" is an instinct, not a thing.

The self is the verbal behaviour that results from certain instincts necessary to the functioning of a cognitive architecture with intelligence layered on top of a short term reward system. We can notice how slightly different instincts give rise to slightly different senses of self and we can ask engineers' questions about which instincts, and hence which sense-of-self, give the better functioning cognitive architecture. But these are questions of better or worse, not true or false.

But I express myself too tersely. I long for a spell of good health, so that I can expand the point to an easy-read length.

Comment by AlanCrowe on What Can We Learn About Human Psychology from Christian Apologetics? · 2013-10-23T18:58:38.237Z · LW · GW

Each compartment has its own threshold for evidence.

The post reminded me of Christians talking bravely about there being plenty of evidence for their beliefs. How does that work?

  • When evidence is abundant we avoid information overload by raising the threshold for what counts as evidence. We have the luxury of taking our decisions on the basis of good quality evidence and the further luxury of dismissing mediocre evidence as not evidence at all.

  • Evidence is seldom abundant. Usually we work with a middling threshold for evidence, doing the best we can with the mediocre evidence that the middle threshold admits to our councils, and accepting that we will sometimes do the wrong thing due to misleading evidence.

  • When evidence is scarce we turn our quality threshold down another notch, so we still have evidence, even if it is just a translation of a copy of an old text that is supposed to be eye witness testimony but was written down one hundred years after the event.

I think that the way it works with compartmentalization is that we give each compartment its own threshold. For example, an accountant is doing due diligence work on the prospectus for The Plastic Toy Manufacturing Company. It looks like being a good investment, they have an exclusive contract with Disney for movie tie-ins. Look, it says so, right there in the prospectus. Naturally the accountant writes to Disney to confirm this. If Disney do not reply, that is a huge red flag.

On Sunday the accountant goes to Church. They have a prospectus, called the Bible, which makes big claims about their exclusive deal with God. When you pray to God to get confirmation, He ignores you. Awkward!

People have a sense of what it is realistic to expect by way of evidence which varies between the various compartments of their lives. In every compartment their beliefs are comfortably supported by a reasonable quantity and quality of evidence relative to the standard expected for that compartment.

Should we aim at a uniform threshold for evidence across all compartments? That idea seems too glib. It is good to be more open and trusting in friendship and personal relationships than in business. One will not get far in artistic creation if one doubts one's own talent to the extent of treating it like a dodgy business partner.

Or maybe having a uniform threshold is exactly the right thing to do. That leaves you aware that in important areas of your life you have little evidence and your posterior distributions have lots of entropy. Then you have to live courageously, trusting friends and lovers despite poor evidence and the risk of betrayal, trusting one's talent and finishing one's novel despite the risk that it is 1000 pages of unpublishable dreck.

Comment by AlanCrowe on The best 15 words · 2013-10-04T20:06:12.948Z · LW · GW

As Eilenberg-Mac Lane first observed, "category" has been defined in order to be able to define "functor" and "functor" has been defined in order to be able to define "natural transformation".

Saunders Mac Lane, Categories for the Working Mathematician

Comment by AlanCrowe on The best 15 words · 2013-10-04T19:58:15.873Z · LW · GW

That doesn't seem to be strictly true.

It goes against the spirit of "15 words" to insist on strict truth. The merit of the quote lies in the fourth clause.

or they have a common effect you're conditioning on.

That's the big surprise. The point of boiling it down to "15 words" is to pick which subtlety makes it into the shortest formulation.

Comment by AlanCrowe on Making Fun of Things is Easy · 2013-09-28T23:31:26.029Z · LW · GW

In Beyond Freedom and Dignity Skinner writes (page 21)

A more important reason is that the inner man seems at times to be directly observed. We must infer the jubilance of a falling body, but can we not feel our own jubilance? We do, indeed, feel things inside our own skin, but we do not feel the things which have been invented to explain behaviour. The possessed man does not feel the possessing demon and may even deny that one exists. The juvenile delinquent does not feel his disturbed personality. The intelligent man does not feel his intelligence or the introvert his introversion. (In fact, these dimensions of mind or character are said to be observable only through complex statistical procedures.) The speaker does not feel the grammatical rules he is said to apply in composing sentences, and men spoke grammatically for thousands of years before anyone knew there were rules. The respondent to a questionnaire does not feel the attitudes or opinions which lead him to check items in particular ways. We do feel certain states of our bodies associated with behaviour, but as Freud pointed out we behave in the same way when we do not feel them; they are by-products and not to be mistaken for causes.

Dennett writes (page 83)

The heterophenomenological method neither challenges nor accepts as entirely true the assertions of subjects, but rather maintains a constructive and sympathetic neutrality, in the hopes of compiling a definitive description of the world according to the subjects.

So far Skinner and Dennett are not disagreeing. Skinner did say "We do, indeed, feel things inside our own skin,...". He can hardly object to Dennett writing down our descriptions of what we feel, as verbal behaviour to be explained in the future with a reductionist explanation.

Dennett continues on page 85

My suggestion, then, is that if we were to find real goings-on in people's brains that had enough of the "defining" properties of the items that populate their heterophenomenological worlds, we could reasonably propose that we had discovered what they were really talking about --- even if they initially resisted the identifications. And if we discovered that the real goings-on bore only a minor resemblance to the heterophenomenological items, we could reasonably declare that people were just mistaken in the beliefs they expressed.

Dennett takes great pains to be clear. I feel confident that I understand what he is taking 500 pages to say. Skinner writes more briefly, 200 pages, and leaves room for interpretation. He says that we do not feel the things that have been invented to explain behaviour and he dismisses them.

I think it is unambiguous that he is expelling the explanatory mental states of the psychology of his day (such as introversion) from the heterophenomenological world of his subjects, on the grounds that they are not things that we feel or talk about feeling. But he is not, in Dennett's phrase "feigning anesthesia" (page 40). Skinner is making a distinction, yes we may feel jubilant, no we do not feel a disturbed personality.

What is not so clear is the scope of Skinner's dismissal of say introversion. Dennett raises the possibility of discovering meaningful mental states that actually exist. One interpretation of Skinner is that he denies this possibility as a matter of principle. My interpretation of Skinner is that he is picking a different quarrel. His complaint is that psychologists claim to have discovered meaningful mental states already, but haven't actually reached the starting gate; they haven't studied enough behaviour to even try to infer the mental states that lie behind behaviour. He rejects explanatory concepts such as attitudes because he thinks that the work needed to justify the existence of such explanatory concepts hasn't been done.

I think that the controversy arises from the vehemence with which Skinner rejects mental states. He dismisses them out-of-hand. One interpretation is that Skinner rejects them so completely because he thinks the work cannot be done; it is basically a rejection in principle. My interpretation is that Skinner rejects them so completely because he has his own road map for research in psychology.

First pay lots of attention to behaviour. And then lots more attention to behaviour, because it has been badly neglected. Find some empirical laws. For example, one can measure extinction times: how long the rat continues pressing the lever after the rewards have stopped. One can play with reward schedules. One pellet every time versus a 50:50 chance of two pellets. One discovers that extinction times are long with uncertain rewards. One could play for decades exploring this stuff and end up with quantitative "laws" for the explanatory concepts to explain. Which is when serious work, inferring the existence of explanatory concepts, can begin.

I see Skinner vehemently rejecting the explanatory concepts of the psychology of his day because he thinks that the necessary work hasn't even begun, and cannot yet be started because the foundations are not in place. Consequently he doesn't feel the need to spend any time considering whether it has been brought to a successful conclusion (which he doesn't expect to see in his life-time).

Comment by AlanCrowe on Making Fun of Things is Easy · 2013-09-28T19:38:45.127Z · LW · GW

This example pushed me into formulating Crowe's Law of Sarcastic Dismissal: Any explanation that is subtle enough to be correct is turbid enough to make its sarcastic dismissal genuinely funny.

Skinner had a subtle point to make, that the important objection to mentalism is of a very different sort. The world of the mind steals the show. Behaviour is not recognized as a subject in its own right.

I think I grasped Skinner's point after reading something Feynman wrote on explanations in science. You can explain why green paint is green by explaining that paint consists of a binder (oil for an oil paint) and a pigment. Green paint is green because the pigment is green. But why is the pigment green? Eventually the concept of green must ground in something non-green, wavelengths of light, properties of molecules.

It is similar with people. One sophisticated way of having an inner man without an infinite regress is exhibited by Minsky's Society of Mind. One can explain the outer man in terms of a committee of inner men provided that the inner men, sitting on the committee, are simpler than the outer man they purport to explain. And the inner men can be explained in terms of interior-inner-men, who are simpler still and whom we explain in terms of component-interior-inner-men,... We had better remember that 1+1/2+1/3+1/4+1/5+1/6+... diverges. It is not quite enough that the inner men be simpler. They have to get simpler fast enough. Then our explanatory framework is philosophically admissible.

But notice the anachronism that I am committing. Skinner retired in 1974. Society of Mind was published in 1988. Worse yet, the perspective of Society of Mind comes from functional programming, where tree structured data is processed by recursive functions. Does your recursive function terminate? Programmers learn that an infinite regress is avoided if all the recursive calls are on sub-structures of the original structure, smaller by a measure which makes the structures well-founded. In the 1930's and 1940's Skinner was trying to rescue psychology from the infinite regress of man explained by an inner man, himself a man. It is not reasonable to ask him to anticipate Minsky by 50 years.

Skinner is trying to wake psychology from its complacent slumber. The inner man explains the outer man. The explanation does indeed account for the outer man. The flaw is that inner man is no easier to explain than the outer man.

We could instead look at behaviour. The outer man has behaved before. What happened last time? If the outer man does something unexpected we could look back. If the usual behaviour worked out badly the time before, that offers an explanation of sorts for the change. There is much to be done. For example, if some behaviour works ten times in a row, how many more times will it be repeated after it has stopped working? We already know that the inner man is complicated and hence under-determined by our experimental observations. This argues for caution and delay in admitting him to our explanations. We cannot hope to deduce his character until we have observed a great deal of his behaviour.

But let us return to humour and the tragedy of Sidney Morgenbesser's sarcastic dismissal

Let me see if I understand your thesis. You think we shouldn't anthropomorphize people?

The tragedy lies in the acuteness of Morgenbesser's insight. He grasped Skinner's subtle point. Skinner argues that anthropomorphizing people is a trap; do that and you are stuck with folk psychology and have no way to move beyond it. But Morgenbesser makes a joke out of it.

I accept that the joke is genuinely funny. It is surely a mistake for biologists to anthropomorphize cats and dogs and other animals, precisely because they are not people. So there is a template to fill in. "It is surely a mistake for psychologists to anthropomorphize men and women and other humans, precisely because they are not people." Hilarity ensues.

Morgenbesser understands, makes a joke, and loses his understanding somewhere in the laughter. The joke is funny and sucks every-one into the loss of understanding.

Comment by AlanCrowe on Supposing you inherited an AI project... · 2013-09-04T16:13:41.890Z · LW · GW

Readers don't know what your post is about. Your comment explains "My goal ..." but that should be the start of the post, orienting the reader.

How does your hypothetical help identify possible dangling units? You've worked it out in your head. That should be the second part of the post, working through the logic: here is my goal, here is the obstacle, here is how I get round it.

Comment by AlanCrowe on Rationality Quotes September 2013 · 2013-09-02T11:57:10.435Z · LW · GW

For the most part the objects which approve themselves to us are not so much the award of well-deserved certificates --- which is supposed by the mass of unthinking people to be the main object --- but to give people something definite to work for; to counteract the tendency to sipping and sampling which so often defeats the aspirations of gifted beings,...

--- Sir Hubert Parry, speaking to The Royal College of Music about the purpose of music examinations

Initially I thought this a wonderful quote because, looking back at my life, I could see several defeats (not all in music) attributable to sipping and sampling. But Sir Hubert is speaking specifically about music. The context tells you Sir Hubert's proposed counter to sipping and sampling: individual tuition aiming towards an examination in the form of a viva.

The general message is "counter the tendency to sipping and sampling by finding something definite to work for, analogous to working one's way up the Royal College of Music grade system". But working out the analogy is left as an exercise for the reader, so the general message, if Sir Hubert intended it at all, is rather feeble.

Comment by AlanCrowe on Open thread, August 26 - September 1, 2013 · 2013-08-29T18:46:35.998Z · LW · GW

Any examples of total recursive functions that are not primitive recursive and do not violently explode?

The set of primitive recursive functions is interesting because it is pretty inclusive, (lots of functions have a primitive recursive implementation) and primitive recursive functions always terminate. I'm interested in trying to implement general purpose machine learning by enumerating primitive recursive functions. Which raises the question of just how general the primitive recursive functions really are.

Ackermann's function gives an example of what you miss out when you confine yourself to primitive recursive functions. But Ackermann's function explodes violently. The values of the function rapidly become too large for any practical use. If that were typical, I would think that I miss out on nothing of practical importance when I restrict myself to primitive recursive functions.

But I suspect that the violent explosion is only there to meet the needs of the proof. Given a function that is specified by an implementation that is not primitive recursive, how can you tell whether it is primitive recursive? There might be a clever primitive recursive way of implementing the function that is hard to find. One needs to come up with a proof, and that is also hard to find. The violent explosion of Ackermann's function lets one construct a proof; no primitive recursive function goes BOOM! quite as dramatically.
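For concreteness, here is the standard definition of Ackermann's function, sketched in Python. The nested self-call in the last line is exactly the step that the primitive recursive schema cannot express:

```python
def ackermann(m, n):
    # Total recursive but not primitive recursive: the recursion on m
    # feeds a recursive call of ackermann into its own second argument.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Each increment of the first argument jumps a whole level of growth:
# successor, addition, multiplication, exponentiation, ...
assert ackermann(1, 3) == 5    # n + 2
assert ackermann(2, 3) == 9    # 2n + 3
assert ackermann(3, 3) == 61   # 2^(n+3) - 3
```

Even these tiny inputs hint at the explosion: ackermann(4, 2) is 2^65536 - 3, a number of nearly twenty thousand decimal digits, far beyond any practical use.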

Are there any other proof techniques and hence other, more practically important, functions that are (known to be) total recursive but not primitive recursive?

Comment by AlanCrowe on Are ‘Evidence-based’ policies damaging policymaking? · 2013-08-23T14:04:19.536Z · LW · GW

It is an important topic, but the Institute of Economic Affairs landing page that you link to is pretty lame.

Emphasizing "Evidence" gives one a hefty shove towards evidence that is quick and easy to gather.

QUICK The IEA say

A disregard for substitution effects.

but the actual problem is that substitution takes time. If you want to gather evidence about substitution effects you have to be patient. "Evidence based policy making" is biased towards fast-evidence, to accommodate the urgency of policy making. So of course substitution effects get under-estimated.

EASY Computing correlations is easy, tracing causality is hard. Worse, you can hardly hope to unravel the network of causal connections in a real world problem without making some theoretical commitments. An emphasis on "Evidence" leaves you relying on the hope that correlation does imply causality, because you can get evidence for correlations. Causality? Not very practical. Then you get kicked in the teeth by Goodhart's Law.

The IEA say

Calculating the external costs of harmful activities.

which is true, but hardly the worst of the problems. Ideally one would estimate the benefits of an economic policy based on adding the consumer surplus and the producer surplus. But this is too hard. Instead one tots up the market prices of things. This leads to GDP, which is a notoriously crap measure of welfare. But if you insist on "evidence" you are going to end up throwing out theoretical considerations of consumer and producer surplus in favor of GDP.

This submission is getting down voted. You might want to blog about the topic and try again with a link to your blog post. It shouldn't be too hard to provide a substantial improvement on the IEA landing page.

Comment by AlanCrowe on Progress on automated mathematical theorem proving? · 2013-07-04T19:24:49.531Z · LW · GW

Do you have evidence in the other direction?

No. I think one typically has to come up with a brutally truncated approximation to actually Bayesian reasoning. For example, if you have n propositions, instead of considering all 2^n basic conjunctions, one's first idea is to assume that they are all independent. Typically that is a total failure; the independence assumption abolishes the very interactions that were of interest. So one might let proposition n depend on proposition n-1 and reinvent Markov models.
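Counting free parameters makes the trade-off concrete. A small sketch (my own illustration, assuming n binary propositions):

```python
def param_counts(n):
    # Probabilities needed to pin down a joint distribution over n
    # binary propositions, under three modelling assumptions.
    full_joint = 2**n - 1            # one per basic conjunction, minus normalisation
    independent = n                  # one marginal per proposition
    markov_chain = 1 + 2 * (n - 1)   # first marginal, then P(X_i | X_{i-1}) per link
    return full_joint, independent, markov_chain

# The chain buys back neighbour-to-neighbour interactions that full
# independence abolishes, at barely more cost than independence.
assert param_counts(10) == (1023, 10, 19)
```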

I don't see much hope of being able to anticipate which, if any, crude approximations to Bayesian reasoning are going to work well enough. One just has to try it and see. I don't think that my comment goes any deeper than saying that there are lots of close to practical things due to be tried soon, so I expect one or two pleasant surprises.

Comment by AlanCrowe on Progress on automated mathematical theorem proving? · 2013-07-04T19:07:38.925Z · LW · GW

You've put your finger on a weakness of my optimistic vision. If the guesses are calling it 90% of the time, they significantly extend the feasible depth of search. But 60:40? Meh! There is a lot of room for the insights to fail to be sharp enough, which turns the Bayesian stuff into CPU-cycle wasting overhead.

Comment by AlanCrowe on Progress on automated mathematical theorem proving? · 2013-07-03T21:21:50.020Z · LW · GW

Current theorem provers don't have a "sense of direction".

From the description of Polya's Mathematics and Plausible Reasoning: Vol. II: Patterns of Plausible Inference:

This is a guide to the practical art of plausible reasoning, particularly in mathematics but also in every field of human activity. Using mathematics as the example par excellence, Professor Polya shows how even that most rigorous deductive discipline is heavily dependent on techniques of guessing, inductive reasoning, and reasoning by analogy. In solving a problem, the answer must be guessed at before a proof can even begin, and guesses are usually made from a knowledge of facts, experience, and hunches. The truly creative mathematician must be a good guesser first and a good prover afterward; many important theorems have been guessed but not proved until much later. In the same way, solutions to problems can be guessed, and a good guesser is much more likely to find a correct solution...

Current theorem provers search for a proof, and even contain some ad hoc heuristics for exploring their search trees. (For example, Paulson's book ML for the Working Programmer ends with a chapter on theorem proving and offers "We have a general heuristic: never apply universal:left or existential:right to a goal if a different rule can usefully be applied.") However, Polya's notion of being a good guesser has been formalised as Bayesian reasoning. If a theorem prover, facing a fork in its search tree, uses Bayes' Theorem to pick the most likely branch first, it may find much deeper results.
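
To sketch what such guidance might look like inside a prover, here is a toy best-first search where hard-coded branch probabilities stand in for whatever a Bayesian model would supply (the tree, the probabilities, and all the names are invented for illustration):

```python
import heapq

def best_first_search(root, children, is_proof, estimate):
    """Expand the node with the highest estimated probability of
    leading to a proof first (max-heap via negated probabilities)."""
    frontier = [(-estimate(root), root)]
    visited = []
    while frontier:
        neg_p, node = heapq.heappop(frontier)
        visited.append(node)
        if is_proof(node):
            return node, visited
        for child in children(node):
            heapq.heappush(frontier, (-estimate(child), child))
    return None, visited

# A tiny hand-made search tree: strings name the proof states.
tree = {"goal": ["lemma-a", "lemma-b"],
        "lemma-a": ["dead-end"],
        "lemma-b": ["qed"]}
probs = {"goal": 1.0, "lemma-a": 0.3, "lemma-b": 0.7,
         "dead-end": 0.01, "qed": 0.9}

found, order = best_first_search(
    "goal",
    children=lambda n: tree.get(n, []),
    is_proof=lambda n: n == "qed",
    estimate=lambda n: probs[n])
print(found, order)  # qed ['goal', 'lemma-b', 'qed']
```

Note that the unpromising branch lemma-a is never expanded at all; that is the "sense of direction" doing its work.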

I don't think there is currently much overlap between those working on theorem proving and those working on Bayesian statistics. There is perhaps even a clash of temperaments between those who seek absolute certainty and those who seek an edge in the messy process of muddling through. Nevertheless I foresee great possibilities of using Bayesian reasoning to guide the internal search of theorem provers, thus giving them a sense of direction.

Reasoning "X hasn't been progressing recently, therefore X is unlikely to progress in the near future." is good reasoning if that is really all that we know. But it is also weak reasoning which we should be quick to discard if we see a relevant change looming. The next ten or twenty years might see the incorporation of Bayesian reasoning as a guiding principle for tree search heuristics inside theorem provers and lead to large advances.

Comment by AlanCrowe on Rationality Quotes July 2013 · 2013-07-03T15:46:24.064Z · LW · GW

Madmen we are, but not quite on the pattern of those who are shut up in a madhouse. It does not concern any of them to discover what sort of madness afflicts his neighbor, or the previous occupants of his cell; but it matters very much to us. The human mind is less prone to go astray when it gets to know to what extent, and in how many directions, it is itself liable to err, and we can never devote too much time to the study of our aberrations.

Bernard de Fontenelle, 1686

Found in a book review

Comment by AlanCrowe on After critical event W happens, they still won't believe you · 2013-06-15T22:02:15.065Z · LW · GW

I'm not connected to the Singularity Institute or anything, so this is my idiosyncratic view.

Think about theorem provers such as Isabelle or ACL2. They are typically structured a bit like an expert system with a rule base and an inference engine. The axioms play the role of the rule base and the theorem prover plays the role of the inference engine. While it is easy to change the axioms, this implies a degree of interpretive overhead when it comes to trying to prove a theorem.

One way to reduce the interpretative overhead is to use a partial evaluator to specialize the prover to the particular set of axioms.

Indeed, if one has a self-applicable partial evaluator one could use the second Futamura projection and, specializing the partial evaluator to the theorem prover, produce a theorem prover compiler. Axioms go in, an efficient theorem prover for those axioms comes out.
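
The projections can be sketched in a few lines, with ordinary functions standing in for programs (everything here, from `mix` down to the toy prover, is schematic; a real partial evaluator transforms source code, and a closure is just the cheapest possible stand-in):

```python
def mix(program, static_input):
    """Toy partial evaluator: given a two-argument program and its
    first (static) argument, return the residual one-argument program."""
    return lambda dynamic_input: program(static_input, dynamic_input)

def prover(axioms, goal):
    """Toy 'theorem prover': interpret a rule base against a goal."""
    return goal in axioms

# First Futamura projection: specialize the prover to one set of
# axioms, yielding a dedicated prover for that theory.
arithmetic_prover = mix(prover, {"0+x=x", "x+0=x"})
print(arithmetic_prover("x+0=x"))  # True

# Second Futamura projection: specialize mix to the prover, yielding
# a prover *compiler* -- axioms in, dedicated prover out.
prover_compiler = mix(mix, prover)
group_prover = prover_compiler({"xx'=e", "ex=x"})
print(group_prover("ex=x"))  # True
```

With closures the "compiled" prover is no faster than the interpreted one, which is exactly the gap the comment is pointing at: the interesting event is a specializer that produces genuinely better residual code.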

Self-applicable partial evaluators are bleeding-edge software technology and current ambitions are limited to stripping out interpretive overhead. They only give linear speed-ups. In principle a partial evaluator could recognise algorithmic inefficiencies and, rewriting the code more aggressively, produce super-linear speed-ups.

This is my example of a critical event in AI: using a self-applicative partial evaluator and the second Futamura projection to obtain a theorem prover compiler with a super-linear speed up compared to proving theorems in interpretive mode. This would convince me that there was progress on self-improving AI and that the clock had started counting down towards an intelligence explosion that changes everything.

How much time would be on the clock? A year? A decade? A century? Guessing wildly, I'd put my critical event at the halfway point. AI research started in 1960, so if the critical event happens in 2020 that puts the singularity at 2080.

Notice how I am both more optimistic and more pessimistic about the prospects for AI than most commentators.

I'm more pessimistic because I don't see the current crop of wonderful, hand-crafted AI achievements, such as playing chess and driving cars, as lying on the path towards recursively improving AI. These are the Fabergé eggs of AI. They will not hatch into chickens that lay even more fabulous eggs...

I'm more optimistic because I'm willing to accept a technical achievement, internal to AI research, as a critical event. It could show that things are really moving, and that we can start to expect earth-shattering consequences, even before we've seen real-world impacts from the internal technical developments.

Comment by AlanCrowe on After critical event W happens, they still won't believe you · 2013-06-14T19:25:23.526Z · LW · GW

I don't find either example convincing about the general point. Since I'm stupid, I'll fail to spot that the mouse example uses fictional evidence and is best ignored.

We are all pretty sick of seeing a headline "Cure for Alzheimer's disease!!!" and clicking through to the article only to find that it is cured in mice: knock-out mice, with a missing gene, and therefore suffering from a disease a little like human Alzheimer's. The treatment turns out to be injecting them with the protein that the missing gene codes for. Relevance to human health: zero.

Mice are very short lived. We expect big boosts in life span by invoking mechanisms already present in humans and already working to provide humans with much longer life spans than mice. We don't expect big boosts in the life span of mice to herald very much for human health. Cats would be different. If pet cats started living 34 years instead of 17, their owners would certainly be saying "I want what Felix is getting."

The sophistication of AI is a tricky thing to measure. I think that we are safe from unfriendly AI for a few years yet, not so much because humans suck at programming computers, but because they suck in a particular way. Some humans can sit at a keyboard typing in hundreds of thousands of lines of code specific to a particular challenge and achieve great things. We can call that sophistication if we like, but it isn't going to go foom. The next big challenge requires a repeat of the heroic efforts, and generates another big pile of worn out keyboards. We suck at programming in the sense that we need to spend years typing in the code ourselves, we cannot write code that writes code.

Original visions of AI imagined a positronic brain in an anthropomorphic body. The robot could drive a car, play a violin, cook dinner, and beat you at chess. It was general purpose.

If one saw the distinction between special purpose and general purpose as the key issue, one might wonder: what would failure look like? I think the original vision would fail if one had separate robots, one for driving cars and flying airplanes, a second for playing musical instruments, a third to cook and clean, and fourth to play games such as chess, bridge, and baduk.

We have separate hand-crafted computer programs for chess and bridge and baduk. That is worse than failure.

Examples the other way.

After the Wright brothers, people did believe in powered, heavier-than-air flight. Aircraft really took off after that. One crappy little hop in the most favourable weather and suddenly everyone's a believer.

Sputnik. Dreamers had been building rockets since the 1930s, and being laughed at. The German V2 was no laughing matter, but it was designed to crash into the ground and destroy things, which put an ugh field around thinking about what it meant. Then comes 1957. Beep, beep, beep! Suddenly everyone's a believer, and twelve years later Buzz Aldrin and the other guy are standing on the moon :-)

The Battle of Cambrai gives two examples of people "getting it". First, people understood before the end of 1914 that the day of the horse-mounted cavalry charge was over. The Hussites had war wagons in 1420, so there was a long history of rejecting that kind of technology. But after event W1 (machine guns and barbed wire defeating horses) it only took three years before the first tank-mounted cavalry charge. I think we tend to misunderstand this by measuring time in lives lost rather than in years. Yes, the adoption of armoured tanks was very slow if you count the delay in lives, but it couldn't have come much faster in months.

The second point is that first world war tanks were crap. The Cambrai salient was abandoned. The tanks were slow and always broke down, because they were too heavy and yet the armour was merely bullet-proof. Their only protection against artillery was that the gun-laying techniques of the time were ill suited to moving targets. The deployment of tanks in the first world war fell short of being the critical event W. One would expect the horrors of trench warfare to fade and military doctrine to go back to horses and charges in brightly coloured uniforms.

In reality the disappointing performance of the tanks didn't cause military thinkers to miss their significance. Governments did believe, and developed doctrines of Blitzkrieg and cruiser tanks. Even a weak W can turn everyone into believers.

Comment by AlanCrowe on Rationality Quotes June 2013 · 2013-06-03T12:47:46.421Z · LW · GW

The corollary is more useful than the theorem:-) If I wish to be less of a dumbass, it helps to know what it looks like from the inside. It looks like bad luck, so my first job is to learn to distinguish bad luck from enemy action. In Eliezer's specific example that is going to be hard because I need to include myself in my list of potential enemies.

Comment by AlanCrowe on The Paucity of Elites Online · 2013-05-31T13:18:15.336Z · LW · GW

these blogs succeed ... because they ... exclude comments whose quality falls below a certain threshold.

I see an opportunity for philanthropy. Identify the elite people that one hopes will blog, and then pay for somebody else to do the comment moderation for them.

The problem I foresee is that this turns out to be big-money philanthropy. Who do you hire as your moderator? They probably need a PhD in mathematics, and the right personality: agreeable yet firm. People like that have lots of well paid options in which they are not playing second fiddle. The philanthropist backing this may have to come up with $150,000 a year to pay the wage bill.

Turning this round, it answers the question of why so few elites blog. Hoping that they take on the task of doing their own comment moderation and community building is hoping that they engage in some serious philanthropy and tolerate getting little credit (because they are paying in kind rather than in big wodges of cash).

Comment by AlanCrowe on The flawed Turing test: language, understanding, and partial p-zombies · 2013-05-19T20:47:07.008Z · LW · GW

One thing that I've tried with Google is using it to write stories. Start by searching on "Fred was bored and". Pick "slightly" from the results and search on "was bored and slightly". Pick "annoyed" from the search results and search on "bored and slightly annoyed".

Trying this again just now reminds me that I let the sentence fragment grow and grow until I was down to, err, ten? hits. Then I took the next word from a hit that wasn't making a literal copy, and deleted enough leading words to get the hit count back up.

Anyway, it seemed unpromising because the text lacked long range coherence. Indeed, the thread of the sentences rarely seemed to run significantly longer than the length of the search string.
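
The procedure can be mimicked offline with a toy n-gram table standing in for Google (the corpus is made up, and the three-word window is my guess at what the search-string trick amounts to):

```python
import random

# A made-up stand-in for web search results.
corpus = ("fred was bored and slightly annoyed because the rain "
          "was bored and slightly cold and the rain fell and "
          "fred was slightly annoyed because the cat was bored").split()

def build_table(words, n=3):
    """Map each (n-1)-word context to the next words observed after it."""
    table = {}
    for i in range(len(words) - n + 1):
        context = tuple(words[i:i + n - 1])
        table.setdefault(context, []).append(words[i + n - 1])
    return table

def grow_story(seed, table, length=12, n=3):
    """Repeatedly 'search' on the last n-1 words and append one hit."""
    story = list(seed)
    while len(story) < length:
        context = tuple(story[-(n - 1):])
        nexts = table.get(context)
        if not nexts:  # dead end: this context was never seen
            break
        story.append(random.choice(nexts))
    return " ".join(story)

table = build_table(corpus)
print(grow_story(["fred", "was"], table))
```

Every three-word window of the output occurs somewhere in the corpus, which is exactly why the thread of sense rarely runs longer than the window: the model enforces local agreement and nothing else.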

Perhaps "unpromising" is too harsh. If I were making a serious Turing Test entry I would happily use Big Data and mine the web for grammar rules and idioms. On the other hand I feel the need for a new and different idea for putting some meaning and intelligence behind the words. Otherwise my chat bot would only be able to compete with humans who were terribly, terribly drunk and unable to get from one end a sentence to the other kind of cricket match where England collapses and we lose the ashes on the way back from the crematorium, which really upset the, make mine a pint, now where was I?

Comment by AlanCrowe on [Paper] On the 'Simulation Argument' and Selective Scepticism · 2013-05-19T20:22:10.027Z · LW · GW

The human brain is subject to glitches, such as petit mal, transient ischaemic attack, or misfiling a memory of a dream as a memory of something that really happened.

There is a lot of scope for a cheap simulation to produce glitches in the matrix without those glitches spoiling the results of the simulation. The inside people notice something off and just shrug. "I must have dreamt it" "I had a petit mal." "That wasn't the simulators taking me off line to edit a glitch out of my memory, that was just a TIA. I should get my blood pressure checked."

And the problem of "brain farts" gives the simulators a very cheap way of protecting the validity of the simulation's results against people noticing glitches and derailing the simulation by going on a glitch hunt motivated by the theory that they might be living in a simulation. Simply hide the simulation hypothesis by editing Nick Bostrom under the guise of a TIA. In the simulation Nick wakes up with his coffee spilled and his head on the desk. Thinking up the simulation hypothesis "never happened". In all the myriad simulations, the simulation hypothesis is never discussed.

I'm not sure that entirely resolves the matter. How can the simulators be sure that editing out the simulation hypothesis works as smoothly as they expect? Perhaps they run a few simulations with it left in. If it triggers an in-simulation glitch hunt that compromises the validity of the simulation, they have their answer and can turn off the simulation.

Comment by AlanCrowe on The flawed Turing test: language, understanding, and partial p-zombies · 2013-05-17T21:14:49.461Z · LW · GW

The post doesn't do justice to the subtlety of Turing's insight. The Turing test is two-faced in that the interrogator is addressing two contestants, the computer and the human. He doesn't know which is which, but he hopes that comparing their answers will reveal their identities. But the Turing test is two-faced in a second way.

Turing hopes that the test will satisfy its audience, but that audience contains two groups. There is a pro-AI group. Some of them will have been involved in writing the initial source code of the AI that is taking the test. They are cheering on the AI. Then there is the anti-AI group, staunchly maintaining that computers cannot think. They admire the trickery of the programmers, but refuse to credit the creation with the thoughts of its creators.

Consider a conventional test that resembles a university examination. Perhaps the computer scores high marks. The anti-AI camp refuses to budge. The coders have merely hired experts in the subject being examined and laboured hard to construct a brittle facade of apparent knowledge. Let us change the curriculum,...

But a conventional test has both failure modes. If the computer scores low marks the pro-AI crowd will refuse to budge. The test was too hard and they were not given enough time to prepare. A human student would cope as poorly if you switched the curriculum on him,...

Turing tried to come up with a test that could compel die-hards in both camps. First he abolishes the curriculum. The interrogator is free to ask whatever questions he wishes. There is no point teaching to the test, for the question "Will this be on the test?" receives no answer. Second he abolishes the pass mark. How well does the computer have to do? As well as a human. And how well is that? We don't know; a human will take the test at the same time as the computer and the interrogator will not know which is which, unless the incompetence of the computer gives the game away.

The pro-AI camp are between a rock and a hard place. They cannot complain about the lack of a curriculum for the human doesn't get a copy of it either: it doesn't exist. They cannot complain that the questions were too hard, because the human answered them. They cannot complain that the human's answers were merely a good effort but actually wrong, because they were good enough to let the interrogator recognise human superiority.

The final gambit of the pro-AI camp is to keep the test short. Perhaps the interrogator has some killer questions that will sort the humans from the computers, but he has used them before and the programmers have coded up some canned answers. Keep the test short. If the interrogator starts asking follow up questions, probing to see if those were the computer's own answers, probing to see if the computer understands the things it is saying or reciting from memory,...

We come to a tricky impasse. Just how long does the interrogator get?

Perhaps it is the anti-AI crowd that is having a hard time. The computer and the human are both giving good answers to the easy questions. No help there. The computer and the human are both struggling to answer the hard questions. No help there. The medium questions are producing different answers from the two contestants, but sometimes teletype A hammers out a human answer and teletype B tries to dodge, and sometimes it's the other way round.

There is one fixed point on the non-existent curriculum: childhood. Tell me about your mother, tell me about your brother. The interrogator learns anew the perils of a fixed curriculum. Teletype A has a good cover story. The programmers have put a lot of work into constructing a convincing fiction. Teletype B has a good cover story. The programmers have put a lot of work into constructing a convincing fiction. Which one should the interrogator denounce as non-human? The interrogator regrets wasting half the morning on family history. Fearing embarrassment, he pleads for more time.

The pro-AI camp smirk and say "Of course. Take all the time you need." After the lunch break the interrogation resumes. After the dinner break the interrogation resumes. The lights go on. People demand stronger coffee as 11pm approaches. Teletype B grows tetchy. "Of course I'm the human, you moron. Why can't you tell? You are so stupid." The interrogator is relieved. He has coded chat bots himself. One of his last-ditch defenses was

(defun insult-interrogator () (format *standard-output* "~&You are so stupid."))

He denounces B as non-human, getting it wrong for the fourth time this week. The computer sending to teletype A has passed the Turing test :-)

Whoops! I'm getting carried away writing fiction. The point I'm trying to tack on to Turing's original insight (no curriculum, no pass mark) is that the pro-AI camp cannot try to keep the test short. If they limit it to a 5 minute interrogation, the anti-AI camp will claim that it takes six minutes to exhaust the chat bot's opening book, and refuse to concede.

More importantly, the anti-AI camp can develop the technique of smoking out small-state chat-bots by keeping the interrogation going for half an hour and then circling back to the beginning. Of course the human may have forgotten how the interrogation began. It is in the spirit of the test to say that the computer doesn't have to do better than the human. But the spirit of the Turing test certainly allows the interrogator to try. If the human notices "Didn't you ask that earlier?" and the computer doesn't, or slows down as the interrogation proceeds due to an ever-growing state, the computer quite properly fails the Turing Test. (Hmm, I feel that I'm getting sucked into a very narrow vision of what might be involved in passing the Turing Test.)
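
A toy sketch of why circling back works against a stateless opening book (the bot, the "human", and their canned lines are all invented):

```python
class OpeningBookBot:
    """Stateless chat-bot: identical question, identical answer,
    and no memory of having been asked before."""
    def __init__(self, book):
        self.book = book
    def reply(self, question):
        return self.book.get(question, "Interesting. Tell me more.")

class AttentiveHuman:
    """Remembers what was asked and notices the repetition."""
    def __init__(self):
        self.seen = set()
    def reply(self, question):
        if question in self.seen:
            return "Didn't you ask that earlier?"
        self.seen.add(question)
        return "Dunno, the 747 I suppose."

q = "Which is heavier, my big toe or a 747?"
bot = OpeningBookBot({q: "The 747, of course."})
human = AttentiveHuman()
for contestant in (bot, human):
    first, second = contestant.reply(q), contestant.reply(q)
    print(type(contestant).__name__, "repeats itself:", first == second)
```

The interrogator's circling-back probe is just this loop: ask, wait half an hour, ask again, and see who noticed.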

If the pro-AI camp want the anti-AI camp to concede, they have to let the anti-AI interrogators keep asking questions until they realise that the extra questions are not helping. The computer is thinking about the questions before answering and can keep it up all day.

I think that you can break a chat-bot out of its opening book with three questions along the following lines.

1) Which is heavier, my big toe or a 747?

2) Which is heavier, a symphony or a sonnet?

3a) Which question do you think is better for smoking out the computer, the first or the second?

3b) Which of the previous two questions is the more metaphorical?

One can imagine a big engineering effort that lets the computer identify objects and estimate their weight. Big toe 10 grams. 747, err, 100 tons. And one can write code that spots and dodges trick questions involving the weight of immaterial objects. But one needs a big, fat opening book to cope with the great variety of individual questions that the interrogator might ask.

Then comes question three. That links together question one and question two, squaring the size of the opening book. 40 seconds into an all day interrogation and the combinatorial explosion has already gone BOOM!
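
Back-of-the-envelope arithmetic on the book sizes (the round numbers are invented for illustration):

```python
# Canned answers an opening-book chat-bot would need.
single_questions = 10_000  # "which is heavier, X or Y" style variants
linked_questions = single_questions ** 2  # "which of those two ...?" pairs

print(f"{single_questions:,} canned answers for single questions")
print(f"{linked_questions:,} for every pair the interrogator might link")
```

One follow-up question that refers back to two earlier ones squares the book; a second level of linking would square it again.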

Comment by AlanCrowe on Rationality Quotes May 2013 · 2013-05-08T16:43:33.463Z · LW · GW

Good point! I've totally failed to think about multiple laws interacting.

Comment by AlanCrowe on Rationality Quotes May 2013 · 2013-05-08T12:37:38.737Z · LW · GW

There would have to be a two sided test. A tort of ineffectiveness by which the plaintiff seeks relief from a law that fails to achieve the goals laid out for it. A tort of under-ambition by which the plaintiff seeks relief from a law that is immune from the tort of ineffectiveness because the formally specified goals are feeble.

Think about the American experience with courts voiding laws that are unconstitutional. This often ends up with the courts applying balancing tests. It can end up with the court ruling that yes, the law infringes your rights, but only a little. And the law serves a valid purpose, which is very important. So the law is allowed to stand.

These kinds of cases are decided in prospect. The decision is reached on the speculation about the actual effects of the law. It might help if constitutional challenges to legislation could be re-litigated, perhaps after the first ten years. The second hearing could then be decided retrospectively, looking back at ten years experience, and balancing the actual burden on the plaintiffs rights against the actual public benefit of the law.

Where though is the goal post? In practice it moves. In the prospective hearing the government will make grand promises about the huge benefits the law will bring. In the retrospective hearing the government will sail on the opposite tack, arguing that only very modest benefits suffice to justify the law.

It would be good if the goal posts were fixed. Right from the start the law states the goals against which it will be assessed in ten years' time. Certainly there needs to be a tort of ineffectiveness, active against laws that do not meet their goals. But politicians would soon learn to game the system by writing very modest goals into law. That needs to be blocked with a tort of under-ambition, which ensures that when the initial constitutionality of the law is judged, the only benefits admitted in prospect are those that can be litigated in retrospect.

Comment by AlanCrowe on Estimates vs. head-to-head comparisons · 2013-05-04T22:51:36.103Z · LW · GW

I fear that I've missed your point, but here is my runnable toy model written in Common Lisp

(defun x () (random 1.0))
(defun y () (random 1.0))
(defun z () (random 1.0))

(defun x-y () (- (x) (y)))
(defun y-z () (- (y) (z)))
(defun z-x () (- (z) (x)))

(defparameter diffs (list (x-y) (y-z) (z-x)))

(reduce #'+ diffs) => -0.42450535

The variable diffs gets set to a list of the three estimates. Adding them up we get -0.424. What has gone wrong?

X, Y, and Z are all 1/2. But they are tricky to measure. (defun x () (random 1.0)) is modelling the idea that when we estimate X we get a random variable uniform from 0.0 to 1.0.

(defun x-y () (- (x) (y)))

is modelling the idea that we estimate X and estimate Y and subtract. (And don't remember our estimate)

(defun y-z () (- (y) (z)))

is modelling the idea that we start from scratch, estimating Y (again) and then Z before finally subtracting.

Since the Y in X-Y is our first go at estimating Y and the Y in Y-Z is our second go at estimating Y, they have different random errors and don't cancel like they should.
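
The same model in Python, alongside a version that measures each quantity once and reuses the stored estimates, so the errors cancel the way they should (the Python translation and the cached variant are my additions to the Lisp model):

```python
import random

def noisy():
    """One noisy estimate: uniform on [0, 1), true value 1/2."""
    return random.random()

def sum_of_diffs_remeasured():
    """Each appearance of X, Y, Z is a fresh estimate, so the Y in
    X-Y and the Y in Y-Z carry different errors."""
    return (noisy() - noisy()) + (noisy() - noisy()) + (noisy() - noisy())

def sum_of_diffs_cached():
    """Estimate each quantity once, then form all three differences
    from the stored estimates.  The errors now cancel."""
    x, y, z = noisy(), noisy(), noisy()
    return (x - y) + (y - z) + (z - x)

print(abs(sum_of_diffs_remeasured()))  # generally far from zero
print(abs(sum_of_diffs_cached()))      # zero up to rounding
```

This is the head-to-head comparison in miniature: the three differences are only consistent with each other if they are built from the same underlying measurements.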

Comment by AlanCrowe on Rationality Quotes May 2013 · 2013-05-01T22:10:12.816Z · LW · GW

That clashes in an interesting way with the recent post on Privileging the Question. Let us draw up our own, independent list of things that matter. There will be some, high up our list, about which our culture has no particular belief. Our self imposed duty is to find out whether they are true or not, leaving less important, culturally prominent beliefs alone.

Culture changes and many prominent beliefs of our culture will fade away, truth unchecked, before we are through with more urgent matters.

Comment by AlanCrowe on Rationality Quotes May 2013 · 2013-05-01T21:35:53.070Z · LW · GW

There is, perhaps, a word missing from the English language. If Derek Lowe were speaking, instead of writing, he would put an exaggerated emphasis on the word real and native speakers of English would pick up on a special, metaphorical meaning for the word real in the phrase real boss. The idea is that there are hidden, behind the scenes connections more potent (more real?) than the overt connections.

There is a man in a suit, call him the actual boss, who issues orders. Perhaps one order is "run the toxicology tests". The actual boss is the same as the real boss so far. Perhaps another order is "and show that the compound is safe." Now power shifts to the mice. If the compound poisons the mice and they die, then the compound wasn't safe. The actual boss has no power here. It is the mice who are the real boss. They have final say on whether the compound is safe, regardless of the orders that the actual boss gave.

Derek Lowe is giving us an offshoot of an aphorism by Francis Bacon: "Nature, to be commanded, must be obeyed." Again the point is lost if one refuses to find a poetic reading. Nature accepts no commands; there are no Harry-Potter style spells. Nature issues no commands; we do not hear and obey, we just obey. (So why is Bacon advising us to obey?)

Comment by AlanCrowe on Rationality Quotes May 2013 · 2013-05-01T14:17:44.903Z · LW · GW

And I told her that no matter what the org chart said, my real bosses were a bunch of mice in cages and cells in a dish, and they didn’t know what the corporate goals were and they couldn’t be “Coached For Success”, the way that poster on the wall said.