Comments

Comment by more_wrong on Beyond the Reach of God · 2014-06-02T13:56:23.663Z · LW · GW

Yet who prohibits? Who prevents it from happening?

Eliezer seems absurdly optimistic to me. He is relying on some unseen entity to reach in and keep the laws of physics stable in our universe. We already see plenty of evidence that they are not truly stable: we believe in the electroweak transition and in earlier transitions, of various natures depending on your school of physics. And we /just saw/ in 1998, with the discovery of dark energy, that unknown laws of physics can creep up and spring out by surprise, suddenly 'taking over' some 74 percent of the Universe's postulated energy supply.

Who is keeping the clockwork universe running? Or, why are the hardware and software (operating system, etc.) for the automata fishbowl so damned stable? Is it part of the Library of Babel? Well, that's a good explanation, but the Bible's universe is in there too, and arguably a priori more probable than a computer system that runs many-worlds quantum mechanics sims on a panuniversal scale and never halts and/or catches fire. It is very hard to keep a perfectly stable web server going when demand keeps expanding, and a web server seems much simpler.

Look into the anthropic principle literature and search on 'Lee Smolin' or start somewhere like http://evodevouniverse.com/wiki/Cosmological_natural_selection_(fecund_universes) for some reasoned speculation on how we might have wound up in such a big, rich, diverse universe from simpler origins.

I don't think it is rational to accept the stability of natural law without some reasonable model of it: either an account of the origins of said law, or a timeless physics in which stable laws with apparently consistent evolution are an emergent property, or a gift from entities with powers far beyond ours, or some such.

If you think the worst that can happen is annihilation, you're living in a fool's paradise. Many people have /begged/ for true death, and in the animal world things happen all the time that are truly horrifying to most human eyes. I will avoid going into sickening detail here, but let me refer you to H.P. Lovecraft and Charles Stross for some examples of how the universe might have been slightly less friendly.

Comment by more_wrong on [link] Back to the trees · 2014-06-02T13:09:40.976Z · LW · GW

Moreover, we know of examples where natural selection has caused drastic decreases in organismal complexity – for example, canine venereal sarcoma, which today is an infectious cancer, but was once a dog.

Or human selection. Henrietta Lacks (or her cancer) is now many tonnes of cultured cells; she has become an organism that reproduces by mitosis and thrives in the niche environment of medical research labs.

Comment by more_wrong on The first AI probably won't be very smart · 2014-06-02T08:49:15.836Z · LW · GW

I love the idea of an intelligence explosion but I think you have hit on a very strong point here:

In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, it seems very unlikely.

In fact, we can see from both history and paleontology that when a breakthrough is made in "biological technology" (the homeobox genes, or whatever triggered the Precambrian explosion of diversity), self-modification becomes easier, and an explosion of explorers follows. Here a 'self' isn't one meat body; it's a clade of genes that sail through time and configuration space together, a current of bloodlines in spacetime that we might call a species, genus, or family. The development of modern-style morphogenesis was, at some level, the development of a toolkit for modifying body plans. When that toolkit arrived, bloodlines apparently exploded into the newly accessible areas of design space.

But the explosion eventually ended. After the Diaspora into over a hundred phyla of critters hard enough to leave fossils, the expansion into new phyla stopped. Some sort of new frontier was reached within tens of millions of years; the next six hundred million years or so were spent slowly whittling improvements within phyla. Most phyla died out, in fact, while a few like Arthropoda took over many roles and niches.

We see very similar incidents throughout human history: look at the way languages develop, or technologies. For an example perhaps familiar to many readers, look at the history of algorithms. For thousands of years we see slow development in this field, from Babylonian algorithms for finding the area of a triangle, to the Sieve of Eratosthenes, to (after a lot of development) medieval Italian merchants writing down how to do double-entry bookkeeping.

Then in the later part of the Renaissance there is some kind of phase change, and the mathematical community begins compiling books of algorithms quite consciously. This had happened before, in Sumer and Egypt to start, in Babylon and Greece, in Asia several times, and most notably in the House of Wisdom in Baghdad in the ninth century. But always there were these rising and falling cycles where people compiled knowledge and then it was lost and others had to rebuild; often the new cycle was helped by the rediscovery or re-appreciation of a few surviving texts from a prior cycle.

But around 1350 there begins a new cycle (which of course draws on surviving data from prior cycles) in which people accumulate formally expressed algorithms; this cycle is unique in that it has lasted to this day. Much of what we call the mathematical literature consists of these collections, and in the 1930s people (Church, Turing, many others) finally developed what we might now call the classical theory of algorithms. Judging by the progress of various other disciplines, you would expect little more progress in this field, relative to such a capstone achievement, for a long time.

(One might note that this seven-century surge of progress might well be due, not to human mathematicians somehow becoming more intelligent in some biological way, but to the development of printing and associated arts and customs that led to the widespread dissemination of information in the form of journals and books with many copies of each edition. The custom of open-sourcing your potentially extremely valuable algorithms was probably as important as the technology of printing here; remember that medieval and ancient bankers and so on all had little trade secrets for handling numbers and doing maths in a formulaic way, but we don't retain any of their secret tricks in the general body of algorithmic lore unless they published, or chance preserved some record of their methods.)

Now, we'd have expected Turing's 1930s work to be the high point in this field for centuries to come (and maybe it was; let history be the judge), but between the development of the /theory/ of a general computing machine, progress in other fields such as electronics, and a leg up from the intellectual legacy left by predecessors such as George Boole, the 1940s somehow put together (under enormous pressure of circumstances) a new sort of engine that could run algorithmic calculations without direct human intervention. (Note that here I say 'run', not 'design'; I mean that the new engines could execute algorithms on demand.)

The new computing engines, electro-mechanical as well as purely electronic, were very fast compared to their human predecessors. This led to something in algorithm space that looks to me a lot like the Precambrian explosion, with many wonderful critters like LISP and FORTRAN and BASIC evolving to bridge the gap between human minds and assembly language, which itself was a bridge to the level of machine instructions, which... and so on. Layers and layers developed, and then in the 1960s giants wrought mighty texts of computer science that no modern professor can match; in some sense we can only stare in awe at their achievements.

And then... although Moore's law worked on and on tirelessly, relatively little fundamental progress in computer science happened for the next forty years. There was a huge explosion in available computing power, but just as jpaulson suspects, merely adding computing power didn't cause a vast change in our ability to 'do computer science'. Some problems may /just be exponentially hard/, and then an exponential increase in capability starts to look like a 'linear increase' by 'the important measure'.
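(A toy way to make that arithmetic explicit; the growth laws here are my own illustrative assumptions, not anything jpaulson committed to. Suppose the difficulty of the n-th fundamental advance and our capability both grow exponentially:

$$D_n = D_0 e^{\alpha n}, \qquad C(t) = C_0 e^{\beta t}.$$

The highest tier reachable at time $t$ satisfies $C(t) \approx D_{n(t)}$, so $n(t) \approx (\beta/\alpha)\,t + \text{const}$: exponentially growing capability buys only linear progress in tiers, which is exactly a 'linear increase' by 'the important measure'.)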

It may well be that people will just... adapt... to exponentially increasing intellectual capacity by dismissing the 'easy' problems as unimportant and writing off whatever goes on beyond the capacity of the human mind to grasp as "nonexistent" or "also unimportant". Right now, computers are executing many, many algorithms too complex for any one human mind to follow, and maybe too tedious for any but the most dedicated humans to follow even in teams, and we still don't think they are 'intelligent'. If we can't recognize an intelligence explosion when we see one under our noses, it is entirely possible we won't even /notice/ the Singularity when it comes.

If it comes - as jpaulson indicates, there might be a never-ending series of 'tiers' where we think, "Oh, past here it's just clear sailing up to the level of the Infinite Mind of Omega, we'll be there soon!" but when we actually get to the next tier, we might always see that there is a new kind of problem that is hyperexponentially difficult to solve before we can ascend further.

If it were all that easy, I would expect that whatever gave us self-reproducing wet nanomachines four billion years ago would have solved it - the ocean has been full of protists and free-swimming viruses, exchanging genetic instructions and evolving freely, for a very long time. This system certainly has a great deal of raw computing power, perhaps even more than would appear on the surface. If she (the living ocean system as a whole) isn't wiser than the average individual human, I would be very surprised, and she apparently either couldn't create such a runaway explosion of intelligence, or decided it would be unwise to do so any faster than the intelligence explosion we've been watching unfold around us.

Comment by more_wrong on Against Devil's Advocacy · 2014-05-31T02:41:27.370Z · LW · GW

Even if such worlds do 'exist', whether I believe in magic within them is unimportant, since they are so tiny;

Since there is a good deal of literature indicating that our own world has a surprisingly tiny probability (ref: any introduction to the Anthropic Principle), I try not to dismiss the fate of such "fringe worlds" as completely unimportant.

army1987's argument above seems very good, though; I suggest you take his comment very seriously.

Comment by more_wrong on Belief in Belief vs. Internalization · 2014-05-31T01:36:58.218Z · LW · GW

Is there an underlying problem of crying wolf; too many warning messages obscure the ones that are really matters of life and death?

This is certainly an enormous problem for interface design in general for many systems where there is some element of danger. The classic "needle tipping into the red" is an old and brilliant solution for some kinds of gauges - an analogue meter where you can see the reading tipping toward a brightly marked "danger zone", usually with a 'safe' zone and an intermediate zone also marked, has surely prevented many accidents. If the pressure gauge on the door had such a meter where green meant "safe to open hatches" and red meant "terribly dangerous", that might have been a better design than just raw numbers.

I haven't worked with pressure doors but I have worked with large vacuum systems, cryogenic systems, labs with lasers that could blind you or x-ray machines that can be dangerously intense, and so on. I can attest that the designers of physics lab equipment do indeed put a good deal of thought and effort into various displays that indicate when the equipment is in a dangerous state.

However, when there are /many/ things that can go dangerously wrong, it becomes very difficult to avoid cluttering the sensorium of the operator with various warnings. The classic examples are the control panels of vehicles like airplanes or spaceships; you can see a beautiful illustration of the 'indicator clutter problem' in the movie "Airplane!".

Comment by more_wrong on Against Devil's Advocacy · 2014-05-30T23:08:38.948Z · LW · GW

"There is an object one foot across in the asteroid belt composed entirely of chocolate cake."

This is a lovely example, which sounds quite delicious. It reminds me strongly of the famous example of Russell's Teapot (from his 1952 essay "Is There a God?"). Are you familiar with his writing?

You'll just subconsciously avoid any Devil's arguments that make you genuinely nervous, and then congratulate yourself for doing your duty.

Yes, I have noticed that many of my favorite people, myself included, do seem to spend a lot of time on self-congratulation that they could be spending on reasoning or other pursuits. I wonder if you know anyone who is immune to this foible :)

Comment by more_wrong on Tell Your Rationalist Origin Story · 2014-05-30T00:06:11.650Z · LW · GW

stay away from this community

I responded to this suggestion but deleted the response as unsuitable because it might embarrass you. I would be happy to email my reply if you are interested.

we'd probably convince you such perma-death would be the highly probable outcome

Try reading what I said in more detail in both the post I made that you quoted and my explanation of how there might be a set of worlds of very small measure. Then go read Eliezer Yudkowsky's posts on Many Worlds (or crack a book by Deutsch or someone, or check Wikipedia.) Then reread the clause you published here which I just quoted above, and see if you still stand by it, or if you can see just how very silly it is. I don't want to bother to try to explain things again that have already been very well explained on this site.

I am trying to communicate using local community standards of courtesy; it is difficult. I am used to a very different tone of discourse.

Comment by more_wrong on Tell Your Rationalist Origin Story · 2014-05-28T15:15:59.341Z · LW · GW

Now, whether that distributed information is 'experiencing' anything is arguable,

As far as I know, the latter is what people are worrying about when they worry about ceasing to exist.

Ahhh... that never occurred to me. I was thinking entirely in terms of risk of data loss.

(Which is presumably a reason why your comment's been downvoted a bunch; most readers would see it as missing the point.)

I don't understand the voting rules or customs. Downvoting people who see things from a different perspective is... a custom designed to keep out the undesirables? I am sorry I missed the point but I learned nothing from the downvoting. I learned a great deal from your helpful comment - thank you.

I thought one of the points of the discussion was to promote learning among the readership.

Substitute "within almost any given branch" — I think my point still goes through.

Ah... see, that's where I think the 'lost' minds are likely hiding out: in branches of infinitesimal measure. Which might sound bad, unless you have read up on the anthropic principle and realize that /we/ seem to be residing on just such a branch. (Read up on the anthropic principle if our branch of the universal tree does not seem very improbable to you.)

I'm not worried that there won't be a future branch on which what passes for my consciousness (I'm a P-zombie, I think, so I have to say "what passes for") will survive. I'm worried that some consciousnesses, equivalent in awareness to 'me' or better, might be trapped in very unpleasant branches. If "I" am permanently trapped in an unpleasant branch, I absolutely do want my consciousness shut down if it's not serving some wonderful purpose that I'm unaware of. If my suffering does serve such a purpose, then I'm happy to think of myself as a utility mine, where external entities can come and mine for positive utilons as long as they get more positive utilons out of me than the negative utilons they leave me with.

My perceived utility function often goes negative. When that happens, I would be extremely tempted to kill my meat body if there were a guarantee it would extinguish my perceived consciousness permanently. That would be a huge reward to me in that frame of mind, not a loss. This may be why I don't see these questions the way most people here do.

P.S. Is there a place where the rating system is explained? I have looked casually and not found it with a few minutes of effort; it seems like it should be explained prominently somewhere. Are downvotes intended as a punitive training measure ("don't post this! bad monkey!") or just a guide to readers ("don't bother reading this, it's drivel, by our community standards")? I was assuming the latter.

Comment by more_wrong on Exterminating life is rational · 2014-05-28T03:59:55.362Z · LW · GW

People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life ... nothing CAN do this, because nothing HAS done it.

The grey goo scenario isn't really very silly. We seem to have had a green goo scenario around 1.5 to 2 billion years ago that killed off many or most critters around, due to the release of deadly, deadly oxygen; if the bacterial ecosystem were completely stable against goo scenarios, this wouldn't have happened. We have had mini goo scenarios when, for example, microbiota well adapted to one species made the jump to another and oops, started reproducing rapidly and killing off their new hosts, e.g. Yersinia pestis. Just because we haven't seen a more omnivorous goo sweep over the ecosphere recently (other than Homo sapiens, which is actually a pretty good example of a grey goo: think of the species as a crude mesoscale universal assembler, spreading fast, killing off other species at a good clip, and chewing up resources quite rapidly) doesn't mean it couldn't happen at the microscale also. Ask the anaerobes, if you can find them; they are still hiding pretty well after the chlorophyll incident.

Since the downside is pretty far down, I don't think complacency is called for. A reasonable caution before deploying something that could perhaps eat everyone and everything in sight seems prudent.

Remember that the planet spent almost 4 billion years more or less covered in various kinds of goo before the Precambrian Explosion. We know /very little/ of the true history of life in all that time; there could have been many, many, many apocalyptic-type scenarios in which a new goo was deployed that spread over the planet and ate almost everything, then either died wallowing in its own crapulence or formed the base layer for a new sort of evolution.

Multicellular life could have started to evolve /thousands of times/ only to be wiped out by goo. If multicellulars only rarely got as far as bones or shells, and were more vulnerable to being wiped out by a goo-plosion than single celled critters that could rebuild their population from a few surviving pockets or spores, how would we even know? Maybe it took billions of years for the Great War Of Goo to end in a Great Compromise that allowed mesoscopic life to begin to evolve, maybe there were great distributed networks of bacterial and viral biochemical computing engines that developed intelligence far beyond our own and eventually developed altruism and peace, deciding to let multicellular life develop.

Or we eukaryotes are the stupid runaway "wet" technology grey goo of prior prokaryote/viral intelligent networks, and we /destroyed/ their networks and intelligence with our runaway reproduction. Maybe the reason we don't see disasters like forests and cities dissolving in swarms of Andromeda-Strain like universal gobblers is that safeguards against that were either engineered in, or outlawed, long ago. Or, more conventionally, evolved.

What we /do/ think we know about the history of life is that the Earth evolved single-celled life, or inherited it via panspermia etc., within about half a billion years of the Earth's coalescence; then some combination of goo more or less ruled the roost on the Earth's surface (as far as biology goes) for over three billion years, especially if you count colonies like stromatolites as gooey. In the middle of this long period was at least one thing that looked like a goo apocalypse, which remade the Earth profoundly enough that the traces are very obvious (e.g. huge beds of iron ore). But there could have been many more mass extinctions than we know of.

Then, less than a billion years ago, something changed profoundly and multicellulars started to flourish. This era spans less than a sixth of the history of life on Earth. So: five-sixths goo-dominated world, one-sixth non-goo-dominated world is the short history here. This does not fill me with confidence that our world is very stable against a new kind of goo based on non-wet, non-biochemical assemblers.

I do think we are pretty likely not to deploy grey goo, though. Not because humans are not idiots - I am an idiot, and it's the kind of mistake I would make, and I'm demonstrably above average by many measures of intelligence. It's just that I think Eliezer and others will deploy a pre-nanotech Friendly AI before we get to the grey goo tipping point, and that it will be smart enough, altruistic enough, and capable enough to prevent humanity from bletching the planet as badly as the green microbes did back in the day :)

Comment by more_wrong on Siren worlds and the perils of over-optimised search · 2014-05-28T03:23:55.043Z · LW · GW

Yes, I am sorry for the mistakes, not sure if I can rectify them. I see now about protecting special characters, I will try to comply.

I am sorry, I have some impairments and it is hard to make everything come out right.

Thank you for your help

Comment by more_wrong on Siren worlds and the perils of over-optimised search · 2014-05-27T22:37:54.829Z · LW · GW

On rereading this I feel I should vote myself down, if I knew how; it seems a little over the top.

Let me post about my emotional state, since this is a rationality discussion, and if we can't deconstruct and understand our emotional impulses we are pretty well doomed to remaining irrational.

I got quite emotional when I saw a post that seemed like intellectual bullying followed by self-congratulation. I am very sensitive to this type of bullying, more so when it is directed at others than at myself: thanks to freakish test scores and so on as a child, I feel fairly secure about my own intellectual abilities, but I know how bad people feel when others consider them stupid. My reflex is to leap to the defense of the victim; at first, however, I put the exchange down to a local custom of friendly ribbing or some such and tried not to jump on it.

Then I saw that private_messaging seemed to be posing as an authority on Monte Carlo methods while spreading false information about them, whether out of ignorance (very likely) or malice. Normally ignorance would have elicited a sympathetic reaction from me and a very gentle explanation of the mistake, but in the context of having just seen private_messaging attack eli_sennesh for his supposed ignorance of Monte Carlo methods, I flew into a sort of berserker sardonic mode, i.e. "If private_messaging thinks that people who post about Monte Carlo methods while not knowing what they are should be mocked in public, I am happy to play by their rules!" And that led to the result you see: a savage mocking.

I do not regret doing it, because the comment with the attack on eli_sennesh and the calumnies against Monte Carlo still seems to me to have been a flagrant violation of rationalist ethics: in particular, presenting himself as, if not an expert, at least someone with the moral authority to diss someone else for their ignorance of an important topic, and then following up with false and misleading information about MC methods. This seemed like an action with strongly negative utility to the community, because it could potentially lead many readers to ignore the extremely useful Monte Carlo methodology.

If I posed as an authority and went around telling people that Bayesian inference was a bad methodology that was basically just "a lot of random guesses" and that "even a very stupid evolutionary program" would do better at assessing probabilities, should I be allowed to get away scot-free? I think not. If I did something like that, I would actually hope for chastisement or correction from the community, to help me learn better.

Also, it seemed like it might make readers think badly of those who rely heavily on Monte Carlo methods: "Oh, those idiots, using those stupid methods, why don't they switch to evolutionary algorithms?" I'm not a big MC user myself, but I have many friends who are, and all of them seem like nice, intelligent, rational individuals.

So I went off a little heavily on private_messaging, who I am sure is a good person at heart.

Now, I acted emotionally there, but my hope is that in the big Searle's Room that constitutes our forum, I managed to pass a message that (through no virtue of my own) might ultimately improve the course of our discourse.

I apologize to anyone who got emotionally hurt by my tirade.

Comment by more_wrong on Siren worlds and the perils of over-optimised search · 2014-05-27T17:48:27.359Z · LW · GW

Private_messaging, can you explain why you open with such a hostile question aimed at eli? Why the implied insult? Is that the custom here? I am new; should I learn to do this?

For example, I could have opened with your same question, because Monte Carlo methods are very different from what you describe (I happened to be a mathematical physicist back in the day). Let me quote an actual definition:

Monte Carlo Method: A problem solving technique used to approximate the probability of certain outcomes by running multiple trial runs, called simulations, using random variables.

A classic very very simple example is a program that approximates the value of 'pi' thusly:

Estimate pi by dropping $total_hits random points into a square with corners at (-1,-1) and (1,1), then count how many land inside the unit circle centered on the origin. As a small, runnable Perl script:

use strict;
use warnings;

# Number of random points for this run (default one million).
my $total_hits = shift @ARGV || 1_000_000;
my $hits_inside_radius = 0;

for (1 .. $total_hits) {
    my $x = 2 * rand() - 1;    # uniform on [-1, 1)
    my $y = 2 * rand() - 1;
    $hits_inside_radius++ if $x * $x + $y * $y <= 1.0;
}

# Area of circle / area of square = pi/4, so scale the hit fraction by 4.
my $pi_approx = 4 * $hits_inside_radius / $total_hits;
print "$total_hits points -> pi is approximately $pi_approx\n";


OK, this is a nice toy Monte Carlo program for a specific problem. Real world applications typically have thousands of variables and explore things like strange attractors in high dimensional spaces, or particle physics models, or financial programs, etc. etc. It's a very powerful methodology and very well known.

In what way is this little program an instance of throwing a lot of random programs at the problem of approximating 'pi'? What would your very stupid evolutionary program to solve this problem more efficiently look like? I would bet you a million dollars to a thousand (if I had a million) that my program would win a race against a very stupid evolutionary program, written by you, to estimate pi accurately to six digits. Eli and Eliezer can judge the race; how is that?

I am sorry if you feel hurt by my making fun of your ignorance of Monte Carlo methods, but I am trying to get in the swing of the culture here and reflect your cultural norms by copying your mode of interaction with Eli, that is, bullying on the basis of presumed superior knowledge.

If this is not pleasant for you I will desist; I assumed it was some sort of ritual, consensual on Eli's part and, by inference, yours: that you either enjoy this public humiliation masochistically, or hope people will give you aversive conditioning when you publicly display stupidity, ignorance, discourtesy, and so on. If I have violated your consent, then I plead that I am from a future where this is considered acceptable when a person advertises that they do it to others. Also, I am a baby-eater and human ways are strange to me.

OK. Now some serious advice:

If you find that you have just typed "Do you even know what X is?" and then given a little condescending mini-lecture about X, please check that you yourself actually know what X is before you post. I am about to check Wikipedia before I post, in case I'm having a brain cloud, and I promise that I will annotate any corrections I need to make after I check; everything up to HERE was done before the check. (Off half-recalled stuff from grad school a quarter century ago...)

OK, Wikipedia's article is much better than mine. But I don't need to change anything, so I won't.

P.S. It's OK to look like an idiot in public; it's a core skill of rationalists to be able to tolerate this sort of embarrassment. But another core skill is actually learning something if you find out that you were wrong. Did you go to Wikipedia or other sources? Do you know anything about Monte Carlo methods now? Would you like to say something nice about them here?

P.P.S. Would you like to say something nice about eli_sennesh, since he actually turns out to have had more accurate information than you did when you publicly insulted his state of knowledge? If you two are old pals with a joking relationship, no apology is needed to him, but maybe an apology for lazily posting false information that could have misled naive readers with no knowledge of Monte Carlo methods?

P.P.P.S. I am curious: is the psychological pleasure of viciously putting someone else down as ignorant in front of their peers worth the presumed cost of misinforming your rationalist community about the nature of an important scientific and mathematical tool? I confess I feel a little pleasure in twisting the knife here; this is pretty new to me. Should I adopt your style of intellectual bullying as a matter of course? I could read all your posts and viciously hold up your mistakes to the community; would you enjoy that?

Comment by more_wrong on You Only Live Twice · 2014-05-27T03:40:49.374Z · LW · GW

The only scenario where a financial argument makes sense is if you're shortening your life by spending more than you can afford, or if spending money on cryonics prevents you from buying some future tech that would save your life.

What if I am facing death and have an estate in the low six figures, and I can afford one cryonic journey to the future, or my grandchildren's education plus, say, charitable donations enough to save 100 young children who might otherwise live well into a lovely post-Singularity world that would include life extension, uploading, and so on? Would that be covered under "can't afford it"? If my personal survival is just not that high a priority to me (compared to what seem to me much better uses of my limited funds) does that mean I'm ipso facto irrational in your book, so my argument 'doesn't make sense'?

I do think cryonics is a very interesting technology for saving the data stored in biological human bodies that might otherwise be lost to history, but that investing in a micro-bank or The Heifer Project might have greater marginal utility in terms of getting more human minds and their contents "over the hump" into the post-singularity world many of us hope for. I just don't see why the fact that it's /me/ matters.

What if the choice is "use my legacy cash to cryopreserve a few humans chosen at random", versus "donate the same money to help a whole village worth of young people in danger, who can reasonably be expected to live past the Singularity if they can get past the gauntlet of childhood diseases" (the Bill Gates approach), versus "preserve a lovely sampling of as many endangered species as seems feasible"? I would argue that any of these choices could make sense.

Also, I think that people relying on cryo would do well to lifelog as much as possible. Continuous video footage from inside the home, and some vigorous diary-type writing or recording, might be a huge help in reconstructing a personality, in addition to the inevitably fuzzy measurements of the exact positions of microtubules in frozen neurons and the like. It would at least give future builders of human emulations a baseline against which to check how good their emulations were. Is this a well-known strategy? I cannot recall seeing it discussed, but it seems obvious.

Comment by more_wrong on You Only Live Twice · 2014-05-27T03:18:12.244Z · LW · GW

I think cryonics is very promising, but the process of bringing people back from the frozen state will need a lot of research and practice.

I would like to volunteer to go in as a research subject if someone else will pay and if any data mined from my remains is released as open source historical data under some reasonable license, for example the Perl Artistic License, with myself listed as the author of the raw recovered data. (I wrote it into my memories, no?)

People could then use the mined data, such as it is, for research on personality reconstruction or any other ethical purpose. I would be quite surprised to find my mind reconstructed with continuity of identity, and perhaps quite pleased, but that's not at all necessary; I believe the Universe will keep the reference copy, if any, of my key information in distributed form, so I'm happy to make myself available as practice material for future entities (more likely than not Friendly-AI-type people) who wish to practice on volunteers who are indifferent to any mistakes in the attempted reconstruction process.

I do think it would behoove the cryonics community to find volunteers such as myself willing to undergo this sort of experimentation. If I had the money to invest in freezing myself with an eye to later reconstruction, I would certainly think it a good investment to help pay the cryonics cost for a volunteer willing to be the practice dummy for aspiring future Revivalists.

Are any of the cryonics enthusiasts here aware of a call for volunteers from any cryonics institute or group? A cursory search did not lead me to anywhere to sign up for such a program.

This is a serious request and offer; I would be quite happy to be frozen and datamined, primarily for the benefit of future historians and scientists, but I would also be very pleased if I could in some way help the people who are hoping to be revived with intact minds someday.

I would request that any personality constructed or reconstructed from my data be offered control of a mercy switch that could turn off whatever process is emulating its consciousness.

Comment by more_wrong on Circular Altruism · 2014-05-27T02:18:26.842Z · LW · GW

It depends on the actual situation and my goal.

Imagine I were a ship captain assigned to rescue a viable sample of a culture from a zone that was about to be genocided. I would be very likely to take the 400 peopleweights (including books or whatever else they valued as much as people) of evacuees, unless someone made a convincing case that the extra 100 people were vital cultural or genetic carriers. For definiteness, imagine my ship is rated to carry up to 400 peopleweights of passengers in almost any weather, but 500 people would overload it to the point of sinking during a storm of the sort that the weather experts predict is 10 percent probable during the voyage to safe harbor.
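(For what it's worth, the naive count-and-multiply arithmetic under those stated assumptions, plus the further assumption that a sinking drowns everyone aboard, actually favors overloading:

$$E[\text{survivors} \mid 500 \text{ aboard}] = 0.9 \times 500 + 0.1 \times 0 = 450 > 400 = E[\text{survivors} \mid 400 \text{ aboard}].$$

That is exactly the sort of calculation I am suspicious of, for the reasons below.)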

People are not dollars or bales of cotton to be sold at market. You can't just count heads and multiply that number by utilons per head and say "This answer is best, any other answer is foolish."

Well obviously you can do that, but the main reward for doing so is the feeling that you are smarter than the poor dumb fools who believe that the world is complex and situation dependent. That is, you can give yourself a sort of warm fuzzy feeling of smug superiority by defeating the straw man you constructed as your foolish competitor in the Intelligence Sweepstakes.

That being said, if there really is no other information available, I would take the same choice Eliezer recommends; I just deny that it is the only non foolish choice.

This applies to lottery tickets as well. A slim chance at escaping economic hell might be worth more than its nominal expected return value to a given individual. A hundred million dollars might very well have a personal utility over a billion times that of one dollar, for example, if that person's deep goals would be facilitated mightily by the big win and not at all by a single dollar, or by any reasonable number of dollars they might expect to save over the available time. Also, if any entertainment dollar is not a foolish waste, then a dollar spent on a lottery ticket is worth its expected winning value plus its entertainment value, which varies /profoundly/ from person to person.
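(To make that concrete with invented numbers: suppose the jackpot odds are one in $10^8$ and the win really is worth a billion times the utility of a dollar to that person. Then

$$EU(\text{ticket}) \approx 10^{-8} \times 10^9 \times U(\$1) = 10 \times U(\$1) > U(\$1),$$

so by that person's own utility function the ticket beats holding the dollar, even before adding the entertainment value.)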

I myself prefer to give people $1 lottery tickets instead of $2.95 witty birthday cards. Am I wise or foolish in this? But posts here have branded all lottery purchases as foolish, so I must be a fool. I bow to the collective wisdom here and admit that I am a fool. There is a lot of other evidence that supports this conclusion :)

if you give yourself over to rationality without holding back, you will find that rationality gives to you in return.

I heartily agree, that's one reason I try to avoid trotting out applause lights to trigger other people into giving me warm fuzzies.

I am happy for one person to be tortured for 50 years to stave off the dust specks, as long as that person is me. In fact, this pretty much sums up my career in software development: it is not my favorite thing to do, but I endured cubicle hell for many years, partly in exchange for money, but also out of a deep belief that solving the annoying little bugs and glitches that might inconvenience many, many people was important enough to override my personal preferences. I could easily have found other combinations of pay and fun that pleased me better, so I have actually been through this dilemma in muted form in real life, and I chose to suffer personally to hold off 'specks' like poorly designed user interfaces.

I do have great admiration for Eliezer, but he claims to want to be more rational and to welcome criticism intended to promote his progress on The Way, so I thought it would be OK to be critical of this post. It irked me because paragraph four is a straw-man "fool" phrased in the second person, which reads like a sort of pre-emptive ad hominem against any reader of the post foolish enough to disagree with the premise of the writer. This seems like an extremely poor substitute for rational discourse, the sort of nonsense that could cost the writer Quirrell points, and none of us want that. I don't want to seem hostile, but since I am exactly the sort of fool who disagreed with the premise of paragraph 3, I do feel like I was being flamed a bit, and since I am apparently made of straw, flames make me nervous :)

Comment by more_wrong on Tell Your Rationalist Origin Story · 2014-05-26T18:25:02.792Z · LW · GW

It seems very likely to me that tribal groups in prehistory observed that "eating some things leads to illness and sometimes death; eating other things seems to lead to health or happiness or greater utility", and some very clever group of people started compiling a system of eating rules that seemed to work. It became traditional to hand down rules for eating, and for other activities, to their children. Rules like "If a garment has a visible spot of mildew, either cut out the mildewed spot with a specified margin around it or discard it entirely; for god's sake don't store it with your other garments", or "Don't eat insects that you don't specifically recognize as safe and nutritious", or "Don't eat with unclean hands, for a certain technical definition of 'unclean'; for example, don't touch a rotting corpse and then stuff your face or deliver a baby with those hands", etc., etc.

Then much much later, some of the descendants of some of those tribes thought to write a bunch of this stuff down before it could be forgotten. They ascribed the origin of the rules to a character representing "The best collective wisdom we have available to us" and used about ten different names for that character, who was seen as a collection of information much like any person is, but the oldest and wisest known collection of information around.

Then when different branches of humanity ran into each other and found out that other branches had different rule sets, different authority figures, and different names for the same thing as well as differing meanings for the same names in many cases, hilarity ensued.

Then a group of very, very serious atheists came and said, "We have the real truth, and our collective wisdom is much, much better than that of the ancient people who actually fought through fire and blood, death and disease, and a shitstorm of suffering to hand us a lot of their distilled wisdom on a platter, so that we could take the cream of what they offered, throw away the rest, and make fun of their stupid superstitions while not acknowledging that they actually did extremely well for the conditions they experienced."

Religious minds did most of the heavy lifting to get rationality at least as far as Leibniz and Newton, both of whom were notably religious. I'm not saying that the religious mindset is correct or superior, but the development of rational thought among humans has been like a relay race, carrying a torch for a million years; now that the torch is near the finish line (when it gets passed on to nonhumans), the subset of the people who carried the torch for the last little bit doesn't need to say, "Hah, we are so much better than the people who fought and died under the banner of beliefs at variance with our own." That is a promulgation of what is /bad/ about religion, and I see a lot of it in this group. I love the group, but I would like it even better if people showed a tiny bit of respect for the minds that fought through the eras of slavery and religious war and other evils, instead of proclaiming very loudly how wonderful they are compared to everyone else.

I mean, you ARE wonderful, you are doing amazing things, but... come on.

Not that I am any better, here I am bashing you lovely people because your customs are at variance with my own - but that's what reading this group has taught me to do!

Comment by more_wrong on Tell Your Rationalist Origin Story · 2014-05-26T18:04:32.814Z · LW · GW

Also: right parenthesis that follows is not unmatched, but closes the left parenthesis from my first comment in this thread. )

Comment by more_wrong on Tell Your Rationalist Origin Story · 2014-05-26T18:03:06.952Z · LW · GW

Note that I anticipate not Cessation of Existence but "occasional interruptions of my linear consciousness, which may last up till or beyond the Omega Point", if the current leading models of physical law prove to be for real in the long run. One or more of these interruptions may look exactly like death to the naive observer, but since I've experienced many previous interruptions in consciousness without too much inconvenience, I expect I can get used to death as well.

Comment by more_wrong on Tell Your Rationalist Origin Story · 2014-05-26T17:59:50.899Z · LW · GW

Cessation of Existence is incompatible with the leading models of "standard physics" as presented at the level of core grad school physics classes. Now, I don't entirely subscribe to those models, but I understand them well enough to have aced all my core theory classes (lab was a shameful B...), so my claim here carries 'more weight' than it might appear to. "Conservation of information" is absolutely a thing in the Standard Model; it is just that the information becomes non-localized and (in the Everett interpretation) spread across timelines. But the information that was "you" (using the model, which seems standard on LW, that 'you' are a collection of organized data) should, in theory, persist indefinitely into the future. Many authors subscribe to the idea that information conservation is /more/ fundamental than 'the laws of physics as we know them', and said information should then survive even transitions like symmetry-breaking events, aka 'changes in the laws of physics', that might happen.

Now, whether that distributed information is 'experiencing' anything is arguable, but I can tell you that it is a theorem of quantum mechanics that physical information channels are in some sense symmetrical (again, there are variations, which I think might be true, that espouse breaks in this symmetry - but not in standard QM). This means you can't say (in quantum information theory) "Collection of information A learns about collection of information B, but not vice versa", only "Collections of information A and B become more entangled, in a quantifiable way". So if you learn calculus or classical physics or alchemy or biblical chronology from readings derived from the collection of information called "Sir Isaac Newton", then, if standard QM is valid, the collection of information called "Sir Isaac Newton" learns just as much about you, in real time!
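(The standard quantity behind that symmetry claim is the quantum mutual information,

$$I(A{:}B) = S(A) + S(B) - S(AB) = I(B{:}A),$$

where $S$ is the von Neumann entropy; it is manifestly unchanged when A and B swap roles, so "A learns about B" and "B learns about A" get the same number.)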

Maybe Isaac Newton doesn't /need/ a meat body any more; he's uploaded as a continuous process into what David Bohm calls "the holomovement", and he influences the world every day. Ask any freshman physics student about their homework problems and you'll see his influence in action.

Note this is not mysticism or nonsense; this is Vanilla Quantum Mechanics. I'm not claiming that the collection of information we call "Sir Isaac Newton" is experiencing a mode of consciousness like that of a meat human right now, but rather that the collection still exists and is interacting with the world at large, becoming more entangled with some collections and less entangled (by some measure) with others, even as the Newtonsphere, currently around 371 light-years in radius, continues to expand.

Note that the Gentle Reader of this piece might be a human, or an AI, or a Searle's Chinese Room type entity, or something else. If you are a living meat human reading this (or a sim that thinks it's a living meat human), you likely have some 'memories' pertaining to 'coming into existence' that date back less than two centuries. Unless you are like the author of this comment, you might not identify very hard with your 350-year-ago 'self', a collapsing 'incoming' wave of information that will eventually converge on your meat body and, in some sense, supervise its construction and operation. There is no obligation to do so, and I predict with a moderate degree of confidence that people will consider you rather odd if you say things like "I am an immortal pattern of information in configuration space", but, to physicists of the 'timeless physics' school, that is pretty much what you appear to be. (I myself am happy to accept whatever self-definition you care to advertise to the world, so if you say "I am a human who did not exist before Month Day, Birthyear (or if RC, ConceptionYear)", I'm happy to accept that and say "This person is from a universe where the timeless formulation of quantum mechanics does not apply. How interesting." rather than "This entity's self-narrative is at variance with my very limited understanding of actual physics, so therefore THEY MUST BE CONFUSED, DELUDED, OR LYING and MY LIMITED KNOWLEDGE OF PHYSICS proves them to be so".)

So... as a Standard Physical Human Body Expressed As A Process In Time, cessation of existence not only doesn't scare me, it intrigues me: what would it be like to have that option? I have no idea how I would halt my evolution in the Schrodinger picture, or change my state vector in the Heisenberg (i.e. more or less timeless) picture.

I don't really believe that standard QM is 100 percent correct; it seems unlikely that what I was taught in grad school is the Correct Final Theory. (I think 'nothing' is the most likely CFT, actually: asking for it is like asking "What is the last integer?" (spoiler: it's -1), when the real answer is "there isn't one".) However, the usual thing on LW is to accept the current best-known model of the laws of physics as tentatively true unless you're explicitly speculating about fringe theories or future developments, and so I post this offering as "my understanding of what the best current consensus among physicists tells us about Cessation of Existence".

Another time, if there is interest, I will discuss where new information seems to come from in a (multi? uni?)-verse where information is conserved, but that would be rambling.

That all being said, Cessation of Existence still does scare me and I want to avoid it, even if the best current physics indicates that it is actually impossible :) I'm also afraid of dragons materializing and challenging me to the Duel Draconic, which is almost equally improbable, but not quite impossible. (A quantum fluctuation of very low but distinctly non-zero probability could manifest as a draconic duelist gunning for me in my sensorium, according to standard QM, while the event described by "The Universe loses the collection of information that constitutes a given individual" has exactly zero probability, as it violates the premises of the system by which we calculate probabilities.)

Comment by more_wrong on Welcome to Less Wrong! (5th thread, March 2013) · 2014-05-26T16:59:23.083Z · LW · GW

I chose more_wrong as a name because I'm in disagreement with a lot of the lesswrong posters about what constitutes a reasonable model of the world. Presumably my opinions are more wrong than opinions that are lesswrong, hence the name :)

My rationalist origin story would have a series of watershed events, but as far as I can tell, I never had any core beliefs to discard in order to become rational, because I never had any core beliefs at all. I don't have a use for them; I never picked them up.

As far as identifying myself as an aspiring rationalist, the main events that come to mind would be:

  1. Devouring as a child anything by Isaac Asimov that I could get my hands on. In case you are not familiar with the bulk of his work, most of it is scientific and historical exposition, not his more famous science fiction; see especially his essays for rationalist material.

  2. Working on questions in physics like "Why do we call two regions of spacetime close to each other?", that is, delving into foundational physics.

  3. Learning about epistemology and historiography from my parents, a mathematician and a historian.

  4. Thinking about the thinking process itself. Note: Being afflicted with neurological and psychological conditions that shut down various parts of my mentality, notably severe intermittent aphasia, has given me a different perspective on the thinking process.

  5. Making some effort to learn about historical perspectives on what constitutes reason or rationality, and not assuming that the latest perspectives are necessarily the best.

I could go on, but that might be enough for an intro.

My hope is both to learn how to reason more effectively and, if fortunate, to make a contribution that helps us learn the same as a community. mw