Comment by TitaniumDragon on Fake Justification · 2015-02-24T06:11:11.688Z · LW · GW

Art is part of everything, so yes.

Photoshop allows artists to practice and produce works vastly more rapidly, correct errors quite easily, and otherwise do a ton of things they couldn't do before. Other such programs can do many of the same things.

More artists, plus better tools, plus faster production of art, plus better understanding of the technology of art, probably means that the best piece of art ever made was made in the last few decades.

Indeed, it is possible that more art will be produced in the first few decades of this century than was produced by all of humankind in the first several thousand years of our existence.

Comment by TitaniumDragon on One Argument Against An Army · 2015-02-24T05:36:52.559Z · LW · GW

The real flaw here is that counting arguments is a poor way to make decisions.

"They don't have the ability to make said meteor strikes" is enough on its own to falsify the hypothesis unless you have evidence to the contrary.

As Einstein said of "100 Authors Against Einstein": if he were wrong, one would have been enough.

Comment by TitaniumDragon on Fake Justification · 2015-02-24T02:54:03.235Z · LW · GW

It isn't a problem to judge things from different time periods; the Model T may have been a decent car in 1910, but it is a lemon today.

New things are better than old things. I'd wager that the best EVERYTHING has been produced within the last few decades.

If you're judging "Which is better, X or Y," and X is much older than Y, it is very likely Y is better.

Comment by TitaniumDragon on An Alien God · 2015-02-24T02:43:11.330Z · LW · GW

The idea of natural selection is remarkably awesome and has applications even outside of biology, which is part of what makes it such a great idea.

Comment by TitaniumDragon on An Alien God · 2015-02-24T02:41:32.491Z · LW · GW

It isn't literally that for every single person, but assuming you don't have a mutation in your chronobiological genes it is pretty close to that.

People with mutations in various regulatory genes end up with significantly different sleep-wake cycles. The reason our bodies reset themselves under sunlight is probably to correct for clocks that run slightly "off"; indeed, hitting exactly 24 hours via evolution is probably very difficult. But a free-running clock of roughly 24 hours 11 minutes, plus daily correction, can be off by a bit without causing a problem.

Good enough is probably better than perfect in this case, both because it means that mutations to the clock are less deleterious (thus those who have mutated clock genes are more likely to survive if they have said adjustment capability, meaning that the adjustment gene is even more strongly selected for) and because it means that we can travel and adjust to new time zones. For most creatures, this doesn't matter, but for creatures which travel long distances, this is a real advantage for staying on the proper day/night cycle.
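A toy simulation makes the "good enough plus correction" point concrete. The ~24h11m free-running period comes from the comment above; the linear model and the strength of the daily light correction are arbitrary assumptions chosen purely for illustration:

```python
# Toy model of circadian entrainment: an internal clock that runs
# slightly long (~24h11m) but is nudged back toward solar time each
# morning by light exposure. Correction strength is an assumption.

def simulate(days, period_min=24 * 60 + 11, correction=0.5):
    """Track phase error (minutes ahead of solar time) at each dawn."""
    solar_day = 24 * 60  # minutes
    error = 0.0
    errors = []
    for _ in range(days):
        error += period_min - solar_day   # free-running drift: +11 min/day
        error *= (1 - correction)         # light pulse pulls phase back
        errors.append(error)
    return errors

free_running = simulate(30, correction=0.0)   # no light: drift accumulates
entrained = simulate(30, correction=0.5)      # daily correction: bounded

print(round(free_running[-1]))  # 330: over five hours off after a month
print(round(entrained[-1]))     # 11: error stays pinned near a fixed point
```

The same correction mechanism handles both a slightly-wrong innate period and an abrupt shift in the light schedule, which is the travel-adjustment advantage described above.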

Comment by TitaniumDragon on Exterminating life is rational · 2014-05-28T23:47:33.256Z · LW · GW

The more conflict avoidant the agents in an area, the more there is to gain from being an agent that seeks conflict.

This is only true if the conflict avoidance is innate and is not instead a form of reciprocal altruism.

Reciprocal altruism is an ESS (evolutionarily stable strategy) where pure altruism is not, because you cannot take advantage of it in this way: if you become belligerent, everyone else turns on you and you lose. Thus, it is never to your advantage to become belligerent.
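A minimal iterated prisoner's dilemma sketch of that point. The payoff matrix (3/0/5/1) and round count are the standard textbook choices, not anything from the comment itself:

```python
# Toy iterated prisoner's dilemma: a belligerent (always-defect) agent
# exploits a pure altruist but gains almost nothing against a
# reciprocator, which is why reciprocity resists invasion.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strategy_a, strategy_b, rounds=100):
    """Return strategy_a's total payoff over repeated simultaneous play."""
    total, last_a, last_b = 0, "C", "C"
    for _ in range(rounds):
        a = strategy_a(last_b)
        b = strategy_b(last_a)
        total += PAYOFF[(a, b)]
        last_a, last_b = a, b
    return total

always_cooperate = lambda opp_last: "C"   # pure altruist
always_defect = lambda opp_last: "D"      # belligerent
tit_for_tat = lambda opp_last: opp_last   # reciprocal altruist

print(play(always_defect, always_cooperate))  # 500: exploitation pays
print(play(always_defect, tit_for_tat))       # 104: one free hit, then mutual defection
print(play(tit_for_tat, tit_for_tat))         # 300: stable mutual cooperation
```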

Comment by TitaniumDragon on Exterminating life is rational · 2014-05-28T23:43:59.337Z · LW · GW

Opportunistic seizure of capital is to be expected in a war fought for any purpose.

Comment by TitaniumDragon on Exterminating life is rational · 2014-05-28T23:42:54.271Z · LW · GW

The problem is that asymmetric warfare, which is the best way to win a war, is the worst way to acquire capital. Cruise missiles and drones are excellent for winning without any risk at all, but they're not good for actually keeping the capital you are trying to take intact.

Spying, subversion, and purchasing are far cheaper, safer, and more effective means of capturing capital than violence.

As far as "never" goes - the last time any two "Western" countries were at war was World War II, which was more or less when the "West" came to be in the first place. It isn't the longest of time spans, but over time armed conflict in Europe has greatly diminished and been pushed further and further east.

Comment by TitaniumDragon on Exterminating life is rational · 2014-05-28T23:27:36.600Z · LW · GW

You are starting from the premise that gray goo scenarios are likely, and trying to rationalize your belief.

Yes, we can be clever and think of humans as green goo - the ultimate green goo, really - but that isn't what we're talking about, and you know it. Intelligent life spreading out everywhere isn't the worry; the worry is unintelligent things wiping out intelligent things.

The great oxygenation event is not actually an example of a green goo scenario, though it is an interesting thing to consider - I'm not sure there is even a generalized term for that kind of scenario, as it was essentially slow atmospheric poisoning. Call it a generalized biocide: the cyanobacteria which caused the great oxygenation event produced something incidentally toxic to other things, but the toxicity was purely incidental, had nothing to do with their own activity, and probably didn't even benefit most of them directly (that is, the toxicity of the oxygen they produced probably didn't help them personally). And what actually took over afterwards were things rather different from what came before, many of which were not descended from those cyanobacteria.

It was a major atmospheric change, and is (theoretically) a danger, though I'm not sure how much of a danger it is in the real world. We saw the atmosphere shift to an oxygen-dominated one, but I'm not sure how you'd do it again, as I'm not sure there's anything else which can be freed en masse that is toxic - better oxidizers than oxygen are hard to come by, and by their very nature are rather difficult to liberate from an energy-balance standpoint. It seems likely that our atmosphere is oxygen-based, and not, say, chlorine- or fluorine-based, for reasons arising from the physics of liberating those elements from chemical compounds.

As far as repeated green goo scenarios prior to 600Mya - I think that's pretty unlikely, honestly. Looking at microbial diversity and microbial genomes, we see that the domains of life are ridiculously ancient, and that diversity goes back an enormously long distance in time. It seems very unlikely that repeated green goo type scenarios would spare the amount of diversity we actually see in the real world. Eukaryotic life arose 1.6-2.1Bya, and as far as multicellular life goes, we've evidence of cyanobacteria which showed signs of multicellularity 3Bya.

That's a long, long time, and it seems unlikely that repeated green goo scenarios are what kept life simple. It seems more likely that what kept life simple is that complexity is hard - indeed, I suspect the big advancement was actually major advancements in the modularity of life. The more modular life becomes, the easier it is to evolve quickly and adapt to new circumstances, but getting modularity from non-modularity is pretty tough to sort out. Once things did sort it out, though, we saw a massive explosion in diversity. Evolving to be better at evolving is a good strategy for continuing to exist, and I suspect that complex multicellular life only came to exist once things reached the point where this could happen.

If we saw repeated green goo scenarios, we'd expect the various branches of life to be pretty shallow - even if some diversity survived, we'd expect each diverse group to show a major bottleneck dating to whenever the last green goo occurred. But that's not what we actually see. Fungi and animals diverged about 1.5 Bya, for instance, and other eukaryotic diversity arose even before that. Animals have been diverging for 1.2 billion years.

It seems unlikely, then, that there have been any green goo scenarios in a very, very long time, if indeed they ever did occur. Indeed, it seems likely that life evolved to prevent said scenarios, and did so successfully, as none have occurred in a very, very, very long time.

Pestilence is not even close to green goo. Yes, introducing a new disease into a new species can be very nasty, but it almost never actually is, as most of the time, it just doesn't work at all. Even amongst the same species, Smallpox and other old-world diseases wiped out the Native Americans, but Native American diseases were not nearly so devastating to the old-worlders.

Most things which try to jump the species barrier have a great deal of difficulty doing so, and even when they succeed, their virulence tends to drop over time, because being ridiculously fatal is actually bad for their own continued propagation. And humans have become increasingly better at stopping this sort of thing. I did note engineered plagues as the most likely technological threat, but comparing them to gray goo scenarios is very silly - pathogens are enormously easier to control. The trouble with something like gray goo is that it just keeps spreading, but a pathogen requires a host. There are all sorts of barriers in place against pathogens, and everything is evolved to deal with them, because organisms sometimes have to deal with even novel ones, and things which are more likely to survive exposure to novel pathogens are more likely to pass on their genes in the long term.

With regards to "intelligent viral networks" - this is just silly. Life on earth is NOT the result of intelligence. You can tell this from our genomes. There are no signs of engineering ANYWHERE in us; no signs of intelligent design.

Comment by TitaniumDragon on Exterminating life is rational · 2014-05-28T22:42:05.733Z · LW · GW

That's a pretty weak argument due to the mediocrity principle and the sheer scale of the universe; while we certainly don't know the values for all parts of the Drake Equation, we have a pretty good idea, at this point, that Earth-like planets are probably pretty common, and given that abiogenesis occurred very rapidly on Earth, that is weak evidence that abiogenesis isn't hard in an absolute sense.

Most likely, the Great Filter lies somewhere in the latter half of the equation - complex, multicellular life, intelligent life, civilization, or the rapid destruction thereof. But even assuming that intelligent life only occurs in one galaxy out of every thousand, which is incredibly unlikely, that would still give us many opportunities to observe galactic destruction.

It is theoretically possible that we're the only life in the Universe, but that is incredibly unlikely; most universes in which life exists at all will have it arise in more than one place.

Comment by TitaniumDragon on Exterminating life is rational · 2014-05-27T23:23:31.470Z · LW · GW

After reading through all of the comments, I think I may have failed to address your central point here.

Your central point seems to be "a rational agent should take a risk that might result in universal destruction in exchange for increased utility".

The problem here is I'm not sure that this is even a meaningful argument to begin with. Obviously universal destruction is extremely bad, but the problem is that utility probably includes all life NOT being extinguished. Or, in other words, this isn't necessarily a meaningful calculation if we assume that the alternative makes it more likely that universal annihilation will occur.

Say the Nazis gain an excessive amount of power. What happens then? Well, there's the risk that they make some sort of plague to cleanse humanity, screw it up, and wipe everyone out. That scenario seems MORE likely in a Nazi-run world than one which isn't. And - let's face it - chances are the Nazis will try and develop nuclear weapons, too, so at best you only bought a few years. And if the wrong people develop them first, you're in a lot of trouble. So the fact of the matter is that the risk is going to be taken regardless, which further diminishes the loss of utility you could expect from universal annihilation - sooner or later, someone is going to do it, and if it isn't you, then it will be someone else who gains whatever benefits there are from it.

The higher utility situation likely decreases the future odds of universal annihilation, meaning that, in other words, it is entirely rational to take that risk simply because the odds of destroying the world NOW are less than the odds of the world being destroyed further on down the line by someone else if you don't make this decision, especially if you can be reasonably certain someone else is going to try it out anyway. And given the odds are incredibly low, it is a lot less meaningful of a choice to begin with.
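That trade-off can be put in expected-utility form. All four numbers below are invented purely to show the shape of the argument; the point is only that if abstaining merely defers a larger risk, acting can dominate:

```python
# Toy expected-utility comparison for "risk annihilation now vs. leave a
# larger risk to someone else later". Every number here is invented.

p_now = 1e-6     # chance your own attempt destroys everything
p_later = 1e-4   # chance someone else eventually does it if you abstain
benefit = 1.0    # normalized gain from acting successfully
cost = 1e6       # normalized loss from annihilation (huge but finite)

ev_act = (1 - p_now) * benefit - p_now * cost
ev_abstain = -p_later * cost

print(ev_act > ev_abstain)  # True under these assumed numbers
```

Note that the conclusion flips if the annihilation cost is treated as unboundedly large, which is exactly the disagreement the comment above is about.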

Comment by TitaniumDragon on Exterminating life is rational · 2014-05-27T22:47:57.918Z · LW · GW

Incidentally, regarding some other things in here:

[quote]They thought that just before World War I. But that's not my final rejection. Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.[/quote]

There's actually a pretty good counter-argument to this, namely that capital is vastly easier to destroy than to create, and thus an area which avoids conflict has an enormous advantage over one that doesn't, because it keeps more of its capital. As capital becomes increasingly important, conflict - at least violent, capital-destroying conflict - becomes massively less beneficial to its perpetrator, doubly so when the perpetrator likely also benefits from the capital contained in other nations via trade.

And that's ignoring the fact that we've already sort of engineered a global scenario where "The West" (the US, Canada, Japan, South Korea, Taiwan, Australia, New Zealand, and Western Europe, creeping now as far east as Poland) never attack each other, and slowly make everyone else in the world more like them. It is group selection of a sort, and it seems to be working pretty well. These countries defend their capital, and each others' capital, benefit from each others' capital, and engage solely in non-violent conflict with each other. If you threaten them, they crush you and make you more like them; even if you don't, they work to corrupt you to make you more like them. Indeed, even places like China are slowly being corrupted to be more like the West.

The more that sort of thing happens, the less likely violent conflict becomes because it is simply less beneficial, and indeed, there is even some evidence to suggest we are being selected for docility - in "the West" we've seen crime rates and homicide rates decline for 20+ years now.

As a final, random aside:

My favorite thing about the Trinity test was the scientist who was taking side bets on the annihilation of the entire state of New Mexico, right in front of the governor of said state, who I'm sure was absolutely horrified.

Comment by TitaniumDragon on Exterminating life is rational · 2014-05-27T22:46:49.655Z · LW · GW

Apparently I don't know how to use this system properly.

Comment by TitaniumDragon on Exterminating life is rational · 2014-05-27T22:46:25.348Z · LW · GW

Everything else is way further down the totem pole.

People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life. There are absolutely enormous amounts of bacteria and viruses and fungi and everything else all around us, and given the enormous advantage which would be conferred by being a grey goo from an evolutionary standpoint, we would expect the entire planet to have already been covered in the stuff - probably repeatedly. The fact that we see so much diversity - the fact that nothing CAN do this, despite enormous evolutionary incentive TO do this - suggests that grey goo scenarios are either impossible or incredibly unlikely. And that's ignoring the thermodynamic issues which would almost certainly prevent such a scenario from occurring as well, given the necessity of reshaping whatever material into the self-replicating material, which would surely take more energy than is present in the material to begin with.

Physics experiments gone wrong have similar problems - we've seen supernovas. The energy released by a supernova is vastly beyond what any planetary civilization is likely capable of producing, and seeing as supernovas don't destroy everything, it is vastly unlikely that whatever WE do will do any better. There are enormously energetic events in the universe, and the universe itself is reasonably stable - it seems unlikely that our feeble, merely planetary energy levels are going to do better in the "destroy everything" department. And even before that, there was the Big Bang, and the universe came to exist out of that whole mess. We have the Sun, and meteoritic impact events, both of which are very powerful indeed, and yet we don't see exotic, earth-shattering physics coming into play there in unexpected ways. Extremely high energy densities are not likely to propagate - they're likely to dissipate. And we see this in the universe, and in the laws of thermodynamics.

It is very easy to IMAGINE a superweapon that annihilates everything. But actually building one? Having one have realistic physics? That's another matter entirely. Indeed, we have very strong evidence against it: surely, intelligent life has arisen elsewhere in the universe, and we would see galaxies being annihilated by high-end weaponry. We don't see this happening. Thus we can assume with a pretty high level of confidence that such weapons do not exist or cannot be created without an implausible amount of work.

The difficult physics of interstellar travel is not to be denied, either - the best we can do with present physics is nuclear pulse propulsion, which is perhaps 10% of c and has enormous logistical issues. Anything FTL requires exotic physics which we have no idea how to create, and which may well describe situations that are not physically attainable - the numbers may work, but there may be no way to get there, much as the equations permit speeds beyond c while never letting you actually REACH c, so a mathematically "safe space" on the other side is meaningless. Without FTL, interstellar travel is far too slow for such disasters to really propagate across the galaxy - any sort of plague would die out on the planet it was created on, and even WITH FTL, it is still rather unlikely that you could easily spread something like that. Only if cheap FTL travel existed would spreading the plague be all that viable... but with cheap FTL travel, everyone else can flee it that much more easily.
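The back-of-the-envelope arithmetic behind "far too slow": at 10% of c, travel time is simply ten times the light-travel time. The distances below are standard rounded figures:

```python
# Travel times at the ~10% of c quoted for nuclear pulse propulsion.
# At that speed, trip time in years is 10x the distance in light-years.

slowdown = 10        # 0.1c => 10 years per light-year
proxima_ly = 4.25    # nearest star system, light-years (rounded)
galaxy_ly = 100_000  # rough galactic diameter, light-years

print(proxima_ly * slowdown)  # 42.5 years just to the nearest star
print(galaxy_ly * slowdown)   # 1000000 years to cross the galaxy
```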

My conclusion from all of this is that these sorts of estimates are less "estimates" and more "wild guesses which we pretend have some meaning, and which we throw around a lot of fancy math to convince ourselves and others that we have some idea what we're talking about". And that estimates like one in three million, or one in ten, are wild overestimates - and indeed, aren't based on any logic any more sound than the guy on the daily show who said that it would either happen, or it wouldn't, a 50% chance.

We have extremely strong evidence against galactic and universal annihilation, and there are extremely good reasons to believe that even planetary level annihilation scenarios are unlikely due to the sheer amount of energy involved. You're looking at biocides and large rocks being diverted from their orbits to hit planets, neither of which are really trivial things to do.

It is basically a case of, except applied in a much more pessimistic manner.

The only really GOOD argument we have for lifetime-limited civilizations is the Fermi Paradox - that is to say, where are all the bloody aliens? Unfortunately, the Fermi Paradox is a somewhat weak argument, primarily because we have absolutely no idea whatsoever which side of the Great Filter we are on. That being said, if practical FTL travel exists, I would expect any civilization which invented it to simply never die, because of how easy it would be to spread out, making destroying them all vastly more difficult. The galaxy would probably end up colonized and recolonized regardless of how much people fought against it.

Without FTL travel, galactic colonization is possible, but it may be impractical from an economic standpoint; there is little benefit to the home planet of having additional planets colonized. Information is the only thing you could expect to really trade over interstellar distances, and even that is questionable given that locals will likely try to develop technology locally and beat you to market; unless habitable systems are very close together, duplication of effort seems extremely likely. Entertainment would thus be the largest benefit - games, novels, movies and suchlike. This MIGHT mean that colonization is unlikely, which would be another explanation... but even there, that assumes they wouldn't want to explore for the sake of doing so.

Of course, it is also possible we're already on the other side of the Great Filter, and the reason we don't see any other intelligent civilizations colonizing our galaxy is because there aren't any, or the ones which have existed destroyed themselves earlier in their history or were incapable of progressing to the level we reached due to lack of intelligence, lack of resources, eternal, unending warfare which prevented progress, or something else.

This is why pushing for a multiplanetary civilization is, I think, a good thing; if we hit the point where we had 4-5 extrasolar colonies, I think it would be pretty solid evidence that we are beyond the Great Filter. Given the dearth of evidence for interstellar disasters created by intelligent civilizations, I think our main window for destroying ourselves likely only lasts until the point where we expand.

But I digress.

It isn't impossible that we will destroy ourselves (after all, the Fermi Paradox does offer some weak evidence for it), but I will say that I find any sort of claims of numbers for the likelihood of doing so incredibly suspect, as they are very likely to be made up. And given that we have no evidence of civilizations being capable of generating galaxy-wide disasters, it seems likely that whatever disasters exist are planetary scale at best. And our lack of any sort of plausible scenarios even for that hurts even that argument. The only real evidence we have against our civilization existing indefinitely is the Fermi Paradox, but it has its own flaws. We may destroy ourselves. But until we find other civilizations, you are fooling yourself if you think you aren't just making up numbers. Anything which destroys us outside of an impact event is likely something we cannot predict.

Comment by TitaniumDragon on Exterminating life is rational · 2014-05-27T22:46:04.717Z · LW · GW

I was directed here from FIMFiction.

The problem is that we really can't know the odds of doing something that ends up wiping out all life on the planet; nothing we have tried thus far has even come close, or even really had the capability of doing so. Even global thermonuclear war, terrible as it would be, wouldn't end all life on Earth, and indeed probably wouldn't even manage to end human civilization (though it would be decidedly unpleasant, and hundreds of millions of people would die).

Some people thought that the nuclear bomb would ignite the atmosphere... but a lot of people didn't, and that three-in-a-million chance - I don't even know how they arrived at it, but it sounds like a typical wild guess to me. How would you even arrive at that figure? Indeed, there is good reason to believe the atmosphere may well have experienced such events before, in the form of impact events; this is why we knew, for instance, that the LHC was safe - we had experienced considerably more energetic events previously.

Some people claimed the LHC might destroy the universe, but the odds were actually 0 - it simply lacked the ability to do so, because if it were going to cause a vacuum collapse, the universe would already have been destroyed by such an event elsewhere. Meanwhile, the physics of small black holes means they're not a threat - they would decay almost instantly and would lack the gravity necessary to cause any real problems.

And thus far, if we actually look at what we've got, everything we have tried has had p=0 of destroying civilization in reality (that is, in the universe we -actually- live in), meaning that p = 3 x 10^-6 was actually hopelessly pessimistic. Just because someone can assign arbitrary odds to something doesn't mean they're right. In fact, it usually means they're bullshitting.

Remember NASA making up its odds of an individual bolt failing as one in 10^8? That's the sort of made-up number we're looking at here.

And that's the sort of made up number I always see in these situations; people simply come up with stuff, then pretend to justify it with math when in reality it is just a guess. Statistics used as a lamppost; for support, not illumination.

And this is the biggest problem with all existential threats - the greatest existential threat to humanity is, in all probability, being smacked by a large meteorite, which is something we KNOW, for certain, happens every once in a while. And if we detected that early enough, we could actually prevent such an event from happening.

Everything else is pretty much entirely made up guesswork, based on faulty assumptions, or very possibly both.

Of the "humans kill us all" scenarios, the most likely is some horrible, highly transmissible, genetically engineered disease deliberately spread by madmen intent on global destruction. Here, there are tons of barriers. The first, and perhaps largest, is that crazy people have trouble doing this sort of thing; it requires a level of organization which tends to be beyond them. Second, it requires knowledge we lack - and once we obtain it, it may well make containing the outbreak of such a disease relatively trivial; you speak of offense being easier than defense, but in the end, a lot of technological systems are easier to break than they are to make, and understanding how to make something like this may well require us to understand how to break it in the process (and indeed, may well be derived from figuring out how to break it). Third, we already have measures which require no technology at all - quarantines - which could stop such a thing from wiping out too many people; even if you released it in a bunch of places simultaneously, you'd still probably fail to wipe out humanity, just because there are too many people, too spread out, for it to succeed. And fourth, you'd probably need to test it, which would put you at enormous risk of discovery. I have my doubts about this scenario, but it is by far the likeliest sort of technological disaster.

Of course, if we have sentient non-human intelligences, they'd likely be immune to such nonsense. And given our improvements in automation controlling plague-swept areas is probably going to only get easier over time; why use soldiers who can potentially get infected when we can patrol with drones?

Comment by TitaniumDragon on Adaptation-Executers, not Fitness-Maximizers · 2013-09-10T00:52:10.174Z · LW · GW

While we are, in the end, meat machines, we are adaptive meat machines, and one of the major advantages of intelligence is the ability to adapt to your environment - which is to say, doing more than executing preexisting adaptations but being able to generate new ones on the fly.

So while adaptation-execution is important, the very fact that we are capable of resisting adaptation-execution means that we are more than adaptation-executors. Indeed, most higher animals are capable of learning, and many are capable of at least basic problem solving.

There is pretty significant selective pressure towards being a fitness maximizer and not a mere adaptation-executor, because something which actively maximizes its fitness will by definition have higher fitness than one which does not.

Comment by TitaniumDragon on One Life Against the World · 2013-07-14T05:10:17.791Z · LW · GW

I will note that this is one of the fundamental failings of utilitarianism, the "mere addition" paradox. Basically, take a billion people who are miserable, and one million people who are very happy. If you "add up" the happiness of the billion people, they are "happier" on the whole than the million people; therefore, the billion are a better solution to use of natural resources.

The problem is that it always assumes some incorrect things:

1) It assumes all people are equal

2) It assumes that happiness is transitive

3) It assumes that you can actually quantify happiness in a meaningful way in this manner

4) It assumes the additive property for happiness - that you can add up some number of miserable people to get one happy person.

None of these assumptions are necessarily true.
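The additive assumption (4) is the load-bearing one, and toy numbers show why. The utility values below are arbitrary; the point is only that a summed total can prefer a world no individual would prefer:

```python
# Toy illustration of the additivity assumption: summing tiny per-person
# utilities lets a vast barely-content population outweigh a small very
# happy one. All utility values here are arbitrary.

miserable_total = 1_000_000_000 * 0.2  # a billion people at utility 0.2
happy_total = 1_000_000 * 100          # a million people at utility 100

print(miserable_total)  # 200000000.0
print(happy_total)      # 100000000

# Under straight addition, the billion miserable people "win" -- even
# though not one of them is actually happy. Reject any of the four
# assumptions and this comparison stops going through.
```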

Of course, all moral philosophies are going to fail at some level.

Note that, for instance, in this case there is an obvious difference: adding 50 years to one life is actually significantly better than extending 50 lives by 1 year each, as the investment to improve one person for 50 years is considerably less, and one person with 50 years can do considerably larger, longer, and grander projects.

Comment by TitaniumDragon on If You Demand Magic, Magic Won't Help · 2013-05-28T00:35:25.043Z · LW · GW

I think you're wrong about an important point here, actually: not all things are equally exciting.

Riding a dragon is actually way cooler than hang gliding for any number of reasons. Riding animals is cool in and of itself, but riding a dragon is actually flying, rather than hang gliding, which is "falling with style". You get the benefits of hang-gliding - you can see the landscape, for instance - but you have something which natively can fly beneath you. You need to worry less about crashing on a dragon than you do on a hang glider. You can ascend and descend at will. You can take off from a lot more locations - hang gliding usually requires you to go somewhere inconvenient to get to, and if you want to do it again, then you have to get your glider all the way back up to where you took off from. And of course if dragons are sentient, sapient beings, that adds a whole additional level of coolness.

Magic not readily replicable by science - the ability to personally fly, shapeshift, clairvoyance (though we have replicated that to some extent with cameras and drones, they are much less convenient), teleportation, and the like are very cool. The ability to throw fireballs or lightning bolts is much less cool, because we CAN replicate those abilities with science (or at least reasonable approximations thereof).

Really though, is any magic cooler than, say, computers?

Understanding protein folding is cooler than special relativity, because there is a lot more you can do with protein folding than special relativity. Special relativity really only comes into play when you're dealing with outer space, which is very expensive and outside of the realm of day-to-day life; GPS is pretty much the only thing which really cares about it as far as normal life goes. Conversely, protein folding allows for all sorts of biological shenanigans, is vital to engineering lifeforms, and allows for all sorts of novel medications, not to mention potential for creating new materials en masse.

It is true that magic is often used as an escapist fantasy. And it is true that it is a logical flaw in such stories, to some extent; it depends on how the magic works, after all, and it might also give a lazy person motivation.

Comment by TitaniumDragon on Being Half-Rational About Pascal's Wager is Even Worse · 2013-04-27T02:23:14.858Z · LW · GW

By the way, a benchmark I've found useful in discussing factual matters or matters with a long pre-existing literature is number of citations and hyperlinks per comment. You're still batting a zero.

So that means your comment is worthless, and thus can be safely ignored, given that your only "citations" do not support your position in any way and are merely meant to insult me?

In any case, citations are mostly unimportant. I use Google and find various articles to support my stances; you can do the same to support yours, but I don't write out references like Fahy et al., "Physical and biological aspects of renal vitrification," Organogenesis. 2009 Jul-Sep; 5(3): 167–175.

Most of the time, you aren't going to bother checking my sources anyway, and moreover, you're asking for negative evidence, which is always a problem. You're asking for evidence that God does not exist, and rejecting everything but "Hey look, God is sitting here, but he's not".

You're acting like someone who was just told that they don't have a soul and therefore won't go to heaven when they die, because heaven doesn't exist.

You can take ten seconds to see a long list of objections by googling "Cryonics is a scam". You can go to Alcor and read a paper where a true believer suggests that the odds of revival are, at best, 15%, and that's assuming magical nanomachines have a 99% chance of existing. You can read the opinions of various experts who point out the problems with ice crystal formation, the toxicity of vitrification chemicals (which would have to be purged prior to revival), the issue of whether microdamage to structures would cause you to die anyway, the issue of whether you can actually revive them, and who point out that, once you do warm them up, you've got a dead body, and all you have to do from there is resurrect the dead. We do know that even short times without oxygen cause irreparable brain damage, and even at cold temperatures, that process does not stop completely - once they're in LN2, sure, maybe, assuming the process doesn't destroy them. Or, you know, that the process of putting in the chemicals doesn't cause damage.

The truth is that none of the objections will sway you because you're a believer.

IF it is possible to do this sort of thing, there is a very, very good chance that it will require a very specific freezing process. A process which does not yet exist.

I'm impressed you've failed to notice that LW is maybe a little different from other sites and we have higher standards, and what happens 'historically' isn't terribly relevant.

The problem is that it isn't, and a cursory search of the internet will tell you that. :\

I was a bit excited to find a site devoted to rationality, and was rather disappointed to learn that no, it wasn't.

I wrote a little hymn about it a while ago. It starts with "Our AI, who art in the future", and you can imagine that it goes downhill from there.

In fact, a cursory search of the net showed at least one topic that you guys preemptively banned from discussion because some future AI might find it and then torture you for it. If that isn't a religious taboo, I don't know what is.

The singularity is not going to happen. Nanomachines the way they are popularly imagined will never exist. Cryonics, today, is selling hope and smoke, and is a bad investment. You've got people discussing "friendly AI" and similar nonsense without really understanding that they're really talking about magic, and that all this philosophizing about it is pretty silly.

I'm good with doing silly things, but people here take them seriously.

Just because you call yourself a rationalist doesn't make you a rationalist. Being rational is hard for most people to do. But perhaps the most important aspect of being a rationalist is understanding that just because you want something to be true, doesn't make it true. Understanding that deep in your bones.

Most people will be deeply insulted if you imply that they are irrational. And yet people on the whole do not behave rationally, given the goals they claim to possess.

I understand you are deeply emotionally invested in this. I understand that arguing with you is pointless. But I actually enjoy arguing, so it's okay. But how is it for you? If you've invested in cryonics, is your brain more or less likely to believe that it is true?

Historical trends are always important, especially when you see obvious similarities. There are obvious and worrisome similarities between the basic tenets here (resurrection of the dead, some sort of greatly advanced being who will watch over us (the post-singularity AI or AIs)) and the tenets of religions. You can't claim "we're different" without good evidence, and as they say, extraordinary claims require extraordinary evidence.

And all the evidence today points towards cryonics being a very expensive form of burial. There is no extraordinary evidence that it will allow for resurrection in the future. Thus, it is a waste of money and resources you could spend to make today more awesome.

Apparently you missed the point. The point was: stop being arrogant. Think for a freaking second about how obvious an argument might be and at least what the reply might be, if you cannot be arsed to look up actual sources. Do us the courtesy of not just thinking your way to your bottom line of 'cryonics sucks' but maybe a step beyond that too.

Maybe you should take your own advice?

I am quite aware that this is upsetting to you. I just told you you're going to die and be dead forever. It is an unsurprising reaction; a lot of people react with fear to that idea.

But really, there is no evidence cryonics is useful in any way. The argument is "Well, if you've rotted away, then you've got no chance at all!" Sure. But what if you could spend your money on present-day immortality research? The odds of that paying off are probably much higher than the odds of cryonics paying off. There is a path forward there. We don't know what causes aging, but we know that many organisms live longer than human beings do, and we may be able to take advantage of that. Technology such as artificial or grown organs may allow us to survive until brain failure. Drugs to treat brain disease may allow us to put off degradation of our brains indefinitely. The list goes on.

That is far more promising than "freeze me and hope for the best". Heck, if you really wanted to live forever, you'd do things to work towards that. If cryonics is truly so important, why aren't you doing relevant research? Or working towards other things that can help with life extension?

Isn't that far more rational?

Cryonics is a sucker's bet. Even if there was a possibility it worked, the odds of it working are far less than other routes to immortality.

Instead, cryonics is just a way to sell people hope. Christians make peace with the idea of death by believing they will be going to a better place, that they will be okay, and yet Christians avoid death as much as anyone else does. The same is true of cryonics. The rational thing to do, if it is important to avoid dying, is to work towards avoiding it or mitigating it as much as possible. Are you? If the answer is no, is it really so important to you? Or is paying that money for cryonics just a personal way to make peace with death?

Comment by TitaniumDragon on Being Half-Rational About Pascal's Wager is Even Worse · 2013-04-27T02:20:02.354Z · LW · GW

I understood Dunning-Kruger quite well. Dunning-Kruger suggests that, barring outside influence, people will believe themselves to be of above-average ability. Incompetent people will greatly overestimate their capability and understanding, and the ability to judge talent in others was proportional to ability in the skill itself - in other words, people who are incompetent are not only incompetent, but also incapable of judging competence in other people.

Competent people, conversely, overestimate the competence of the incompetent; however, they do have the ability to judge incompetence, so when they are allowed to look at the work of others relative to their own, their estimation of their own personal ability more closely matches their true ranking. Exposing incompetent people to the work of others had no such effect, though training in the skill improved their ability to judge themselves, to judge others, and to perform the skill itself.

People, therefore, are unfit to judge their own competence; the only reliable way to get feedback is via actual practice (i.e. if you have some sort of independent metric for your ability, such as success or failure of actual work) or to have other competent people judge your competence. As you might imagine, this creates the problem where you have to ask yourself, "Who is actually competent in cryonics?" And the answer is "cryobiologists and people in related disciplines". And what is THEIR opinion of cryonics?

Quite poor, on the whole. While there are "cryonics specialists" there are no signs of actual competence there as there is no one who can actually revive frozen people, let alone revive frozen people and fix whatever problems they had prior to being frozen. Ergo, they can't really be viewed as useful judges on the whole because they have shown no signs of actual competence - there is no proof that anyone is competent at cryonics at all.

Dunning-Kruger definitely applies here, and applies in a major way. The closest things to experts are the people working in real scientific disciplines, such as cryobiology and similar fields. These people have real expertise, and they are not exactly best friends with Alcor and similar organizations. In fact, most of them say that it is, at best, well-intended stupidity, and at worst a scam.

Name three. Like V_V, I suspect that for all that you glibly allude to 'cults' you have no personal experience and you have not acquainted yourself with even a surface summary of the literature, much like you have not bothered to so much as read a cryonics FAQ or book before thinking you have refuted it.

Similar religious movements? How many movements don't have some concept of life after death? It is very analogous.

I have indeed read papers on cryobiology and on cryonics, though I could not name them off-hand - indeed, I couldn't tell you the name of the paper I read on the subject just yesterday, or the others I read earlier this week. I am, on the whole, not very impressed. There are definitely things we can freeze and thaw just fine - embryos and sperm, for instance. We can freeze lots of "lower organisms". We've played around with freezing fish and frogs and various creatures which have adapted to such things.

But freezing mammals? Even reducing mammalian body temperatures to the point where freezing begins is fatal, albeit not immediately; we have brought rats and monkeys and hamsters down to very low temperatures (below 0C) and revived them, but they don't tend to do very well afterwards, dying on the scale of hours to days. Some organs, such as the heart and kidney, have been frozen and revived - which is cool, to be fair. Well, frozen is the wrong term really - more "preserved at low temperatures". There was the rabbit kidney which they did vitrify, while the hearts I've seen have mostly been reduced to low temperatures without freezing them - though you can apparently freeze and thaw hearts and they'll work, at least for a while (we figured that out more than half a century ago).

However, a lot of cryobiology is not about things applicable to cryonics - we're talking taking tissue down to like, -2C, not immersing it in LN2. The vitrified rabbit kidney is interesting for that reason, but unfortunately the rabbit in question only lasted nine days - so while the kidney could keep the rabbit alive for a while, it did eventually fail. And all the other rabbits they experimented on perished as well.

And it takes even less time to notice that there are long thorough answers to the obvious objections. Your point here is true, but says far more about you than religion or cryonics; after all, many true things like heliocentrism or evolution have superficial easily thought-of objections which have been addressed in depth. Sometimes they work, sometimes they don't; the argument from evil is probably the single most obvious argument against Western religions, there are countless replies from theists of various levels of sophistication, and while I don't think any of them actually work, I also don't think someone going 'My mother died! God doesn't exist!' is contributing anything whatsoever. What, you think the theists somehow failed to notice that bad things happen? Of course they did notice, so if you want to argue against the existence of God, read up on their response.

The length of an answer has very little to do with its value. Look at Alcor's many answers - there are plenty of long answers there. Long on hope, that is, and short on reality. In fact, being able to answer something succinctly is often a sign of actual thought. It is very simple to pontificate and pretend you have a point; it is much more difficult to write a one-paragraph answer that is complete. And in this case, the answer SHOULD be simple.

If it was so easy, again, why are you writing a long response?

If you had spent less time being arrogant, it might have occurred to you that I see this sort of flip reaction all the time, in which people learn of cryonics and in five seconds think they've come up with the perfect objection and refuse to spend any time at all to disconfirm their objection. You are acting exactly like the person who said, "but it's not profitable to revive cryopatients! QED you're all idiots and suckers", when literally the first paragraph of the Wikipedia article on ALCOR implies how they attempt to resolve this issue; here's a link to the discussion:

I enjoy how you are calling me arrogant, and yet you still are not answering my question.

At least other people have tried. "Dead people are valuable artifacts! People are excited about Jurassic Park, that would totally be a viable business venture, so why not dead people?" Now that is a quick, succinct argument. It makes a reasonable appeal - dead people could be valuable as an attraction in the future.

The problem with that is the idea that it would make you any money at all. The Thirteenth Amendment prohibits owning people, and that kind of puts a major crimp in the idea of a tourist attraction, and given the sheer expense involved, again, you need some sort of parallel technology to get rid of those costs in any case. Humans are also a lot less exciting than dinosaurs are. I'm not going to go to the zoo to see someone from the 17th century, and indeed the idea is morally repugnant. Sure, I might go to ren faires, but let's face it - ren faires aren't run by people from the 10th century, they're run by people from the 21st century for people from the 21st century.

Comment by TitaniumDragon on Being Half-Rational About Pascal's Wager is Even Worse · 2013-04-27T01:06:57.784Z · LW · GW

Humans aren't dinosaurs, nor can you put them on your mantelpiece as a conversation piece. They are not property, but living, independent persons.

Comment by TitaniumDragon on Minor, perspective changing facts · 2013-04-27T00:40:26.779Z · LW · GW

You can't make an educated guess that a combination of multiple factors is no greater than the sum of their individual effects; indeed, when you're talking about disease states, the OPPOSITE is what you should assume. The harm done to your body taxes its ability to deal with harm; the more harm you apply to it, whatever the source, the worse things get. Your body only has so much ability to fight off bad things happening to it, so if you add two bad things on top of each other, you're actually likely to see harm which is worse than the sum of their effects, because part of each effect is naturally masked by your body's own repair mechanisms.

On the other hand, you could have something where the negative effects of each of the things counteracts each other.

Moreover (and worse), you're assuming you have any independent data to begin with. Given that there is a correlation between smoking and red meat consumption, your smoking numbers are already suspect, because we've established that the two are not independent variables.

In any event, guessing is not science, it is nonsense. I could guess that the impact of the factors was greater than the sum of the parts, and get a different result, and as you can see, it is perfectly reasonable to make that guess as well. That's why it is called a guess.

When we're doing analysis, guessing is bad. You guess BEFORE you do the analysis, not afterwards. All you're doing when you "guess" how large the impact is, is manipulating the data.

That's why control groups are so important.
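The "worse than the sum" point can be made concrete with a toy numerical model; the repair threshold and damage values below are entirely made up for illustration:

```python
def observed_harm(total_damage, repair_capacity=10):
    """Toy model: the body fully masks damage up to its repair capacity."""
    return max(0, total_damage - repair_capacity)

# Two insults, each below the repair threshold on its own:
smoking_damage, red_meat_damage = 8, 7

separate = observed_harm(smoking_damage) + observed_harm(red_meat_damage)
combined = observed_harm(smoking_damage + red_meat_damage)

print(separate)  # 0: each insult alone is fully masked by repair
print(combined)  # 5: together they exceed capacity, so harm appears
```

Under this (hypothetical) masking assumption, the combined effect is strictly greater than the sum of the individually observed effects, which is exactly why assuming additivity is the wrong default.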

Regarding glucocorticosteroid use in pregnancy, there actually is quite a bit of debate over whether or not their use is a good thing, due to the fact that corticosteroids are teratogens.

And yes, actually, it is generally better not to believe in true correlations than it is to believe in false ones. Look at all the people who are raising malnourished children on vegan and vegetarian diets.

Comment by TitaniumDragon on Tactics against Pascal's Mugging · 2013-04-27T00:16:34.395Z · LW · GW

[quote]1) If we e.g. make an AI literally assign a probability 0 on scenarios that are too unlikely, then it wouldn't be able to update on additional evidence based on the simple Bayesian formula. So an actual Matrix Lord wouldn't be able to convince the AI he/she was a Matrix Lord even if he/she reversed gravity, or made it snow indoors, etc.[/quote]

Neither of those feats are even particularly impressive, though. Humans can make it snow indoors, and likewise an apparent reversal in gravity can be achieved via numerous routes, ranging from inverting the room to affecting one's sense of balance to magnets.

Moreover, there are numerous more likely explanations for such feats. An AI, for instance, would have to worry about someone "hacking its eyes", which would be a far simpler means of accomplishing that feat. Indeed, without other personnel around to give independent confirmation and careful testing, one should always assume that you are hallucinating, or that it is trickery. It is the rational thing to do.

You're dealing with issues of false precision here. If something is so very unlikely, then it shouldn't be counted in your calculations at all, because the likelihood is so low that it is negligible, and most likely any "likelihood" you have guessed for it is exactly that - a guess. Unless you have strong empirical evidence, treating its probability as 0 is correct.
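The mechanics behind the quoted objection are just Bayes' rule; here is a minimal sketch, where the likelihood values are made-up numbers for illustration:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(H|E) by Bayes' rule for a binary hypothesis H."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator if denominator else 0.0

# A prior of literally 0 can never be moved, no matter how strong the evidence:
print(posterior(0.0, 0.999, 1e-9))   # 0.0

# A tiny nonzero prior CAN be moved by sufficiently strong evidence:
print(posterior(1e-9, 0.999, 1e-9))  # roughly 0.5
```

Whether "negligible" should be rounded to literally zero (as argued here) or kept as a tiny nonzero number (as the quoted objection assumes) is exactly the disagreement; the arithmetic just shows the two choices behave very differently under updating.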

[quote]2) The assumption that a person's words provides literally zero evidence one way or another seems again something you axiomatically assume rather than something that arises naturally. Is it really zero? Not just effectively zero where human discernment is concerned, but literally zero? Not even 0.000000000000000000000001% evidence towards either direction? That would seem highly coincidental. How do you ensure an AI would treat such words as zero evidence?[/quote]

Same way it thinks about everything else. If someone walks up to you on the street and claims souls exist, does that change the probability that souls exist? No, it doesn't. If your AI can deal with that, then it can deal with this situation. If your AI can't deal with someone saying that the Bible is true, then it has larger problems than Pascal's mugging.

[quote]3) We would hopefully want the AI to care about things it can't currently directly observe, or it wouldn't care at all about the future (which it likewise can't currently directly observe).[/quote]

You seem to be confused here. What I am speaking of here is the greater sense of observability, what someone might call the Hubble Bubble. In other words, causality. Them torturing things that have no causal relationship with me - things outside of the realm that I can possibly ever affect, as well as outside the realm that can possibly ever affect me - is irrelevant, and it may as well not happen, because there is not only no way of knowing if it is happening, there is no possible way that it can matter to me. It cannot affect me, I cannot affect them. It's just the way things work. It's physics.

Them threatening things outside the bounds of what can affect me doesn't matter at all - I have no way of determining their truthfulness one way or the other, nor has it any way to impact me, so it doesn't matter if they're telling the truth or not.

[quote]The issue isn't helping human beings not fall prey to Pascal's Mugging -- they usually don't. The issue is to figure out a way to program a solution, or (even better) to see that a solution arises naturally from other aspects of our system.[/quote]

The above three things are all reasonable ways of dealing with the problem. Assigning it a probability of 0 is what humans do, after all, when it all comes down to it, and if you spend time thinking about it, 2 is obviously something you have to build into the system anyway - someone walking up to you and saying something doesn't really change the likelihood of very unlikely things happening. And having it just not care about things outside of what is causally linked to it, ever, is another reasonable approach, though it still would leave it vulnerable to other things if it was very dumb. But I think any system which is reasonably intelligent would deal with it as some combination of 1 and 2 - not believing them and not trusting them, which are really quite similar and related.

Comment by TitaniumDragon on Being Half-Rational About Pascal's Wager is Even Worse · 2013-04-26T07:59:48.606Z · LW · GW

Dunning-Kruger and experience with similar religious movements suggests otherwise.

It takes someone who really thinks about most things very little time to come up with very obvious objections to most religious doctrine, and given the overall resemblance of cryonics to religion (belief in future resurrection, belief that donating to the church/cryonics institution will bring tangible rewards to yourself and others in the future, belief in eternal life), it's not really invalid to suggest something like that.

Which is more likely - that people are deluding themselves over the possibility of eternal life and don't actually have any real answers to the obvious questions, but conveniently ignore them because they see the upside as being so great, or that this has totally been answered, despite the fact that you didn't even articulate an actual answer to it in your response, or even link to it?

I'm pretty sure that, historically speaking, the former is far more likely than the latter.

If someone comes up to you and starts talking about how you have an immortal soul, if you've spent any time studying medicine or neurobiology at all, or have experience with anyone who has suffered brain damage, it really doesn't take you very long to come up with a good counterargument to people having souls. And people have argued about the nature of being for -thousands- of years, and dubiousness about souls has been around for considerably longer than cryonics has been. And yet, people still believe in souls, despite the fact that a very simple, five minutes of thought counterargument exists and has never been countered.

The fact that you did not have a counter for my argument and instead linked to a page which was meant to be a "take that" directed at me is evidence against you having an actual answer to my query, which is always a bad sign. This is not to say that it doesn't have an answer, but a quick, simple answer (or link) would be no more difficult to find than the litany article.

Indeed, after looking at the Alcor site, and reading around, all I really find are arguments against it. The best argument for it that I've seen is that resurrecting 20th century people might be profitable from an entertainment/educational standpoint, but I find even that to be a weak argument - not only is resuscitating someone for the purpose of entertainment deeply morally repugnant (and likely to be so into the future), but wikipedia and various other sources from the 20th and 21st century are likely to be far more valuable to historians, while writers will benefit more from creating their own characters who are considerably more interesting than real people - and it is considerably cheaper and less morally and legally questionable to do so.

So what is the argument for it? If it is so simple to resolve, then what is the resolution?

As ciphergoth pointed out, there isn't really a good answer here. And that is troubling, given that the whole thing is pointless if no one is ever going to bring you back anyway. I was reading one article on Alcor which suggested that, even for a cryonics optimist, the odds of it actually paying off were 15% if he only used his most optimistic numbers - and I think his numbers about the technology are optimistic indeed. That's bad news, especially given that the guy is someone who actually thinks that doing cryonics is worthwhile.

Comment by TitaniumDragon on Tactics against Pascal's Mugging · 2013-04-26T07:37:09.348Z · LW · GW

You're thinking about this too hard.

There are, in fact, three solutions, and two of them are fairly obvious ones.

1) We have observed 0 such things in existence. Ergo, when someone comes up to me and says that they will torture people whose existence I have no way of ever knowing about unless I give them $5, I can simply assign a probability of 0 to their telling the truth. Seeing as the vast, vast majority of things I have observed 0 of do not exist, and we can construct an infinite number of such things, assigning a probability of 0 to any particular thing I have never observed and have no evidence of is the only rational thing to do.

2) Even assuming they do have the power to do so, there is no guarantee that the person is being rational or telling the truth. They may torture those people regardless. They might torture them BECAUSE I gave them $5. They might do so at random. They might go up to the next person and say the same thing. It doesn't matter. As such, their demand does not change the probability that those people will be tortured at all, because I have no reason to trust them, and their words have not changed the probabilities one way or the other. Ergo, again, you don't give them money.

3) Given that I have no way of knowing whether those people exist, it just doesn't matter. Anything which is unobservable does not matter at all, because, by its very nature, if it cannot be observed, then it cannot be changing the world around me. Because that is ultimately what matters, it doesn't matter if they have the power or not, because I have no way of knowing and no way of determining the truth of the statement. Similar to the IPU, the fact that I cannot disprove it is not a rational reason to believe in it, and indeed the fact that it is non-falsifiable indicates that it doesn't matter whether it exists at all - the universe is identical either way.

It is inherently irrational to believe in things which are inherently non-falsifiable, because they have no means of influencing anything. In fact, that's pretty core to what rationality is about.

Comment by TitaniumDragon on Minor, perspective changing facts · 2013-04-26T07:16:12.319Z · LW · GW

The problem is that the choice to eat differently itself is potentially a confounding factor (people who pick particular diets may not be like people who do not do so in very important ways), and any time you have to deal with, say, 10 factors, and try to smooth them out, you have to question whether any signal you find is even meaningful at all, especially when it is relatively small.

The study in particular notes:

[quote]Men and women in the top categories of red or processed meat intake in general consumed fewer fruits and vegetables than those with low intake. They were more likely to be current smokers and less likely to have a university degree [/quote]

At this point, you have to ask yourself whether you can even do any sort of reasonable meta analysis on the population. You're seeing clear differences between the populations and you can't just "compensate for them". If you take a sub-population which has numerous factors which increase their risk of some disease, and then "compensate" for those factors and still see an elevated level of the disease, it isn't actually suggestive of anything at all, because you have no way of knowing whether your "compensation" actually compensated for it or not. Statistics is not magic; it cannot magically remove bias from data.

This is the problem with virtually all analysis like this, and is why you should never, ever believe studies like this.

Worse still, there's a good chance you're looking at the blue M&M problem - if you do enough meta analysis of a large population you will find significant trends which are not really there, and different studies (noted in the paper) indicate different results - that study showed no increase in mortality and morbidity from red meat consumption, an American study showed an increase, and several vegetarian studies showed no difference at all.

Because of publication bias (positive results are more likely to be reported than negative results), potential researcher bias (belief that a vegetarian diet is good for you is likelier than normal in a population studying diet, because vegetarians are more interested in diets than the population as a whole), and the fact that we're looking at conflicting results from studies, I'd say that that is pretty good evidence that there is no real effect and it is all nonsense. If I see five studies on diet, and three of them say one thing and two say another, I'm going to stick with the null hypothesis because it is far more likely that the three studies that say it does something are the result of publication bias of positive results.
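The claim that statistical "compensation" cannot magically remove bias is easy to demonstrate with a simulation; everything below (the variable names, the effect sizes, the proxy noise) is invented for illustration. Red meat intake has zero causal effect on the outcome, yet adjusting for a noisy proxy of the real confounder still leaves an apparent effect:

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Each row: (red_meat, outcome, measured_proxy).

    The outcome is driven ONLY by an unmeasured lifestyle factor;
    red meat intake is correlated with that factor but causally inert,
    and the study only measures a noisy proxy of it."""
    rows = []
    for _ in range(n):
        lifestyle = random.gauss(0, 1)             # true confounder (unobserved)
        red_meat = lifestyle + random.gauss(0, 1)  # exposure, zero causal effect
        outcome = lifestyle + random.gauss(0, 1)   # harm comes from lifestyle only
        proxy = lifestyle + random.gauss(0, 1)     # what the study can adjust for
        rows.append((red_meat, outcome, proxy))
    return rows

def mean_outcome(rows, high_meat):
    vals = [outcome for meat, outcome, _ in rows if (meat > 0) == high_meat]
    return sum(vals) / len(vals)

rows = simulate()
# "Adjust" by stratifying on the measured proxy; within a single stratum,
# high-meat subjects still show worse outcomes, purely from residual confounding.
stratum = [r for r in rows if r[2] < 0]
gap = mean_outcome(stratum, True) - mean_outcome(stratum, False)
print(round(gap, 2))  # clearly positive despite zero causal effect
```

Because the proxy is noisy, stratifying on it does not fully remove the confounder, and the "adjusted" comparison still shows a spurious red-meat effect; this is the residual-confounding failure mode the comment describes.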

Comment by TitaniumDragon on Real-world examples of money-pumping? · 2013-04-25T22:51:53.350Z · LW · GW

People like my mother (who occasionally goes to the casino with $40 in her pocket, betting it all in 5-cent slot machines a nickel at a time, then taking back whatever she gets) go to the casino in order to have fun and relax, and playing casino games is an enjoyable pastime for them. Thus, while they lose money, they acknowledge that this is more likely than not to happen, and they are not distressed when they leave with less money than they entered with, because their goal was to enjoy themselves, not to end up with more money - getting more money is just a side benefit, something that happens sometimes (about one time in four that she goes, she winds up with more money than she entered with) but which is not really the primary purpose.

Ergo calling it a money pump in such cases is a bit silly.

On the other hand, people who genuinely believe they can win money at the lottery/gambling (against the house; it is not irrational to play poker or blackjack with the idea that you can win, IF you know what you're doing) are in fact engaging in money pumping activities.

But it really depends on the nature of the person involved as to whether or not it is a true money pump.

Comment by TitaniumDragon on Fermi Estimates · 2013-04-25T22:23:12.223Z · LW · GW

I will note that I went through the mental exercise of cars in a much simpler (and I would say better) way: I took the number of cars in the US (300 million was my guess for this, which is actually fairly close to the actual figure of 254 million claimed by the same article that you referenced) and guessed about how long cars typically ended up lasting before they went away (my estimate range was 10-30 years on average). To have 300 million cars, that would suggest that we would have to purchase new cars at a sufficiently high rate to maintain that number of vehicles given that lifespan. So that gave me a range of 10-30 million cars purchased per year.

The number of 5 million cars per year absolutely floored me, because that actually would fail my sanity check - to get 300 million cars, that would mean that cars would have to last an average of 60 years before being replaced (and in actuality would indicate a replacement rate of 250M/5M = 50 years, ish).
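The steady-state check above can be sketched in a few lines of arithmetic (a rough Fermi estimate using the rounded guesses from the text, not authoritative sales data):

```python
# Fermi check: annual US car sales implied by fleet size and average lifespan.
# The 300M fleet and 10-30 year lifespans are the rough guesses from the text.
fleet_size = 300e6

# In steady state, sales per year ≈ fleet size / average vehicle lifespan.
for lifespan in (10, 30):
    print(f"{lifespan}-year lifespan -> ~{fleet_size / lifespan / 1e6:.0f}M cars/year")

# Conversely, a claimed 5M sales/year implies an implausibly long lifespan:
implied_lifespan = fleet_size / 5e6
print(f"5M cars/year implies an average lifespan of ~{implied_lifespan:.0f} years")
```

The point of the sanity check is that both directions of the division have to agree: if the implied lifespan looks absurd, either the fleet size or the sales figure is off.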

The actual cause of this is that car sales have PLUMMETED in recent times. In 1990, the median age of a vehicle was 6.5 years; in 2007, it was 9.4 years, and in 2011, it was 10.8 years - meaning that in between 2007 and 2011, the median car had increased in age by 1.4 years in a mere 4 years.

I will note that this sort of calculation was taught to me all the way back in elementary school as a sort of "mathemagic" - using math to get good results with very little knowledge.

But it strikes me that you are perhaps trying too hard in some of your calculations. Oftentimes it pays to be lazy in such things, because you can easily overcompensate.

Comment by TitaniumDragon on Minor, perspective changing facts · 2013-04-25T21:54:43.453Z · LW · GW

Uh, yeah. The reason for that is that sickly animals carry parasites. It is logical that we wouldn't want to eat parasite-ridden or diseased animals, because then WE get the parasites. If the animal is not parasite-ridden, there's no good reason to believe it would be unhealthy to eat.

My personal suspicion for the cause is underlying SES factors (wealthy people tend to eat better, fresher food than the poor) as well as the simple issue of dietary selection - people who watch what they eat are also more likely to exercise and generally have healthier habits than those who are willing to eat anything.

Comment by TitaniumDragon on Being Half-Rational About Pascal's Wager is Even Worse · 2013-04-25T20:52:16.121Z · LW · GW

I have never actually seen any sort of cogent response to this issue. Ever. I see it brushed aside constantly, along with the magical brain restoration technology it would require, but I've never actually seen someone explain why, exactly, anyone would bother to thaw them out and revive them, even if it WAS possible to do. They are, for all intents and purposes, dead, from a legal, moral, and ethical standpoint. Not only that, but defrosting them has little actual practical benefit. While there is obvious value to the possible cryopreservation of organs, that is only true if there aren't better ways of preserving organs for shipment and storage. As things stand today, that seems unlikely - we already have means of shipping organs and keeping them alive, and given the current trend towards growing organs, it seems far more likely to me that the actual method will be to grow organs and keep them alive rather than keep them in cryopreservation. Without that technology being worked on, there is pretty much no value at all to developing unfreezing technology.

That means that, realistically speaking, the only purpose of such technology would be, say, shipping humans to another planet - probably not rational from an economic perspective, but at least somewhat plausible. Even then, that is a different kettle of fish - the technology in question may not resemble present-day cryonics at all, and as such may be utterly useless for unfreezing people from present-day cryogenic treatments. Once you can prove that people CAN be revived in that way, then there is much more incentive towards cryonics... but that is not present-day cryonics, and there is no evidence to suggest future cryogenic treatments will be very similar to present ones.

Okay, all that technology aside, let's assume that at some point we do develop it, for whatever reason. At this point, not only do you have to bear the expense of unfreezing these people, but you also have to bear the expense of fixing whatever was wrong with them (which, I will note, actually killed them in the past), as well as fixing whatever damage was done to them prior to being cryogenically frozen (and lest we forget, 10 minutes without oxygen is very likely to cause irreparable brain damage in humans who survive - let alone in humans who are beyond what we in the present day can deal with). This is likely to be very, very expensive indeed, and there is little real incentive for someone in the future to spend their money this way instead of on something else. You are basically hoping for some rich idiot who is not only capable of doing this, but also willing to do it and legally able to do so (as, lest we forget, there are laws about playing around with human corpses, and I suspect they are unlikely to change in frozen people's favor - and if they do change, what are the odds that your frozen body won't be used in some other sort of experiment?).

I have never seen arguments which really address these issues. People wave their hands and talk about nanotechnology and brain uploading, but as someone who has actually dealt with nanotechnology I can tell you that it is not, in fact, magical, nor is it capable of many of the feats people believe it will be capable of, nor will it EVER be capable of many of the feats that people imagine it will be capable of. Nanomachines have to be provided with energy the same as anything else, among other, major issues, and I have some severe doubts about the unfreezing process in the first place due to various issues of thermodynamics and the fact that the bodies are not frozen in a setup which is likely to facilitate unfreezing them.

A lot of cryonics arguments basically boil down to "future technology is magic", and that's a pretty big problem for any sort of rational argumentation. "You can't prove that they won't be able to revive me" can be used for all sorts of terrible arguments, as the burden of proof is on the person making the argument that it IS possible, not on the person holding to the present day "we can't, and see no way to do so."

I mean, you look at things like:

The technology in here is, quite literally, magic. It doesn't exist, and it won't exist. Ever. Things on that level are very dumb; they cannot be intelligent, and they cannot act intelligently, because they are too small, too simple. The bits where they stick stuff into your cells are where things get really ridiculous, but even before then, those little nanomachines are going to have real issues doing what you are hoping for, and would have to be custom built for the task at hand. We're talking enormous expense, if it is even possible to do at all, and given the extremely small cryonics population, the odds of perfecting the technology prior to running out of dead people are not very good. Remember, if the result is brain dead or severely brain damaged, it is still a failure. Even these sorts of nanomachines are very questionable; transistors are only going to get 256 times smaller at most, which makes me question whether said nanomachines can function in the way that is hoped for at all. Of course, this is not necessarily a barrier to, say, a different sort of nanomachine (though they'd really be micromachines, on the scale of a cell rather than on the scale of large molecules) controlled by some sort of external process, with the little machines being its extensions/remotes - but this is still questionable.

Extreme expense, questionable technology (which would have to be custom developed for the purpose), the question of whether cryonics is even a viable technological route for something else for cryogenic revival to piggyback on, likely custom technology for reviving people who have died of things that people no longer die of because of earlier preventative measures (why build something to fix someone with late-stage cancer when no one gets late-stage cancer anymore?), legal problems, the necessity for experimental subjects... all of these things add up to the question of why these hypothetical future people would even bother. That's assuming it is even ethical to revive someone who is, say, not genetically engineered and would therefore be at the bottom of the societal heap if revived.

Comment by TitaniumDragon on Being Half-Rational About Pascal's Wager is Even Worse · 2013-04-20T23:04:30.193Z · LW · GW

There are a lot of good reasons to believe that cryonics is highly infeasible. I agree that P(B|A) is low, and P(D|A,B,C) is also absurdly low. We don't care about starving people in Africa today; what is the likelihood that we care about dead frozen people in the future, especially if we have to spend millions of dollars resurrecting them (as is more than likely the case), and especially given how difficult it would be to do so? And that's assuming we can even fix whatever caused the problem in the first place; if they died of brain cancer, how are we supposed to fix them, exactly?

Senility is another issue, as it could potentially permanently destroy portions of your brain, rendering you no longer you.

But really I find the overall probability of everything incredibly bad.

But even if we CAN do it, I suspect it would not be worth doing, because what we'd most likely really be doing is building a copy of you from a frozen original. In that case you, personally, are still dead; the fact that a copy of you is running around doesn't really change that. It also raises the problem that they could make any number of copies of people, which would likely make them dubious about doing it in the first place.

Comment by TitaniumDragon on Litany of a Bright Dilettante · 2013-04-20T22:39:00.283Z · LW · GW

You assume that economists are actually experts on the economy. They aren't. That's the problem.

Economics only really has a good understanding of very low level effects, and even there things are very difficult to truly deal with. The law of supply and demand, for instance, is really less of a law and more of a guideline - the only way to actually determine real world behavior is experimentation, as there is no single equation you can plug things into to get a result out of. And that's something SIMPLE. Ask them how to fix the economy? They have no ability to do that because they don't actually understand the economy, nor how it works.

A great deal of economics is down to ideology rather than actual reality. There are a lot of economic models but they either only represent very narrow areas or they don't actually match up with reality. Ask ten economists what happens when you raise minimum wage and you'll get ten answers - three will agree one way, three will agree the other, and the other four will give you two answers each.

The problem is that the economy is such a hideously complex system that economists have no way to actually simulate it or experiment with it (unless you argue what is actually done is an experiment, but if it is, it is a very poorly controlled one with a ridiculous number of variables).

Indeed, if you look at reality, it is obvious that economists are NOT experts on the economy. How can you tell this? Because economists make their money by being paid to be economists.

The best economist in the world is probably Warren Buffett, because he makes his money by actually making predictions about the economy and then earns returns based on how good those predictions are (as well as on how well he manages his business). But ask him about tax policy and he'll still likely not have any better answer than you can get via thought and intuition.

Empirical evidence suggests that economists are not experts on the economy.

Comment by TitaniumDragon on Physicists To Test If Universe Is A Computer Simulation (link) · 2013-04-20T22:22:24.576Z · LW · GW

What matters is not knowledge but probability. Is it likely that something as complicated as our Universe would be simulated?

Is it likely that they would simulate something with vastly different rules than their own universe with such a high level of complexity?

It is possible that the Universe is a simulation, but it is highly improbable due to the difficulty and complexity inherent to doing so. Creating something of this level of complexity for non-simulation purposes is unlikely.

It is of course impossible to disprove it absolutely, but it doesn't really matter. You cannot disprove the FSM or a sufficiently dedicated staff making you believe that reality is real when they're actually setting it up around you, but it really doesn't matter because it is non-falsifiable.

If there is no reason to believe in something, then it is incorrect to believe in it, even if it happens to be true.

Comment by TitaniumDragon on Physicists To Test If Universe Is A Computer Simulation (link) · 2013-04-20T20:14:36.901Z · LW · GW

There's a Freefall comic where the captain says to end all virtual reality simulations and someone else covers his eyes, making him scream. Can't find it offhand though.

Comment by TitaniumDragon on Why AI may not foom · 2013-04-18T00:44:35.984Z · LW · GW

We have no way to even measure intelligence, let alone determine how close to capacity we're at. We could be 90% there, or 1%, and we have no way, presently, of distinguishing between the two.

We are the smartest creatures ever to have lived on the planet Earth as far as we can tell, and given that we have seen no signs of extraterrestrial civilization, we could very well be the most intelligent creatures in the galaxy for all we know.

As for shoving out humans, isn't the simplest solution to that simply growing them in artificial wombs?

Comment by TitaniumDragon on Three Worlds Collide (0/8) · 2013-04-17T23:25:51.500Z · LW · GW

What have I done? ; ;

Comment by TitaniumDragon on Why AI may not foom · 2013-04-17T22:31:52.787Z · LW · GW

It won't be any smarter at all actually, it will just have more relative time.

Basically, if you take someone, and give them 100 days to do something, they will have 100 times as much time to do it as they would if it takes 1 day, but if it is beyond their capabilities, then it will remain beyond their capabilities, and running at 100x speed is only helpful for projects for which mental time is the major factor - if you have to run experiments and wait for results, all you're really doing is decreasing the lag time between experiments, and even then only potentially.

It's not even as good as having 100 slaves work on a project (as someone else posited), because you're really just having ONE slave work on the project for 100 days; copying them 100 times likely won't help that issue.

This is one of the fundamental problems with the idea of the singularity in the first place; the truth is that designing more intelligent intelligences is probably HARDER than designing simpler ones, possibly by orders of magnitude, and it may not be scalar at all. If you look at rodent brains and human brains, there are numerous differences between them - scaling up a rodent brain to the same EQ as a human brain would NOT give you something as smart as a human, or even sapient.

You are very likely to see declining returns, not accelerating returns, which is exactly what we see in all other fields of technology - the higher you get, the harder it is to go further.

Moreover, it isn't even clear what a "superhuman" intelligence even means. We don't even have any way of measuring intelligence absolutely that I am aware of - IQ is a statistical means, as are standardized tests. We can't say that human A is twice as smart as human B, and without such metrication it may be difficult to determine just how much smarter anything is than a human in the first place. If four geniuses can work together and get the same result as a computer which takes 1000 times as much energy to do the same task, then the computer is inefficient no matter how smart it is.

This efficiency is ANOTHER major barrier as well - human brains run off of Cheerios, whereas any AI we build is going to be massively less efficient in terms of energy usage per cycle, at least for the foreseeable future.

Another question is whether there is some sort of effective cap to intelligence given energy, heat dissipation, proximity of processing centers, etc. Given that we're only going to see microchips 256 times as dense on a plane as we have presently available, and given the various issues with heat dissipation of 3D chips (not to mention expense), we may well run into some barriers here.

I was looking at some stuff last night, and while people claim we may be able to model the brain using an exascale computer, I am actually rather skeptical after reading up on it. While 150 trillion connections between 86 billion neurons doesn't sound like that much on the exascale, we have a lot of other things, such as glial cells, which appear to play a role in intelligence, and it is not unlikely that their function is completely vital in a proper simulation. Indeed, our utter lack of understanding of how the human brain works is a major barrier to even thinking about how we can make something more intelligent than a human which is not a human - it's pretty much pure fantasy at this point. It may be that ridiculous parallelization with low latency is absolutely vital for sapience, and that could very well put a major crimp on silicon-based intelligences in general, due to their more linear nature, even with things like GPUs and multicore processors, because the human brain is sending out trillions of signals with each step.

Some possibilities for simulating the human brain could easily take 10^22 FLOPS or more, and given the limitations of transistor-based computing, that looks like it is about the level of supercomputer we'd have in 2030 or so - but I wouldn't expect much better than that beyond that point because the only way to make better processors at that point is going up or out, and to what extent we can continue doing that... well, we'll have to see, but it would very likely eat up even more power and I would have to question the ROI at some point. We DO need to figure out how intelligence works, if only because it might make enhancing humans easier - indeed, unless intelligence is highly computationally efficient, organic intelligences may well be the optimal solution from the standpoint of efficiency, and no sort of exponential takeoff is really possible, or even likely, with such.
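For what it's worth, here is a sketch of how such a compute estimate gets assembled. Only the neuron and synapse counts come from the text; the update rate and ops-per-update figures are loud assumptions chosen purely for illustration:

```python
# Back-of-envelope estimate of the compute needed to simulate the brain's
# synapses. Only the neuron and synapse counts come from the text; the
# per-synapse figures below are illustrative assumptions, not established values.
neurons = 86e9        # ~86 billion neurons
synapses = 150e12     # ~150 trillion connections

timestep_hz = 1000    # assume each synapse is updated ~1,000 times per second
ops_per_update = 100  # assume ~100 floating-point ops per synaptic update

flops = synapses * timestep_hz * ops_per_update
print(f"~{flops:.1e} FLOPS")  # ~1.5e19 under these assumptions
```

More biophysically detailed models (dendritic compartments, glia, neurochemistry) multiply the per-synapse cost by orders of magnitude, which is how estimates in the 10^22 FLOPS range arise.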

Comment by TitaniumDragon on Pay other people to go vegetarian for you? · 2013-04-17T12:27:35.053Z · LW · GW

Is it? Or do we simply not call some such organizations terrorist organizations out of politeness?

I suppose one could argue that the proper definition is "A non-state entity who commits criminal acts for the purpose of invoking terror to coerce actions from others", which will capture almost all groups that we consider to be terrorist groups, though it really depends - is a group who creates fear about the food supply for their own ends a terrorist group? I would argue yes (though one could also argue that this is equivalent to crying fire in a crowded theater, and thus a criminal act).

Comment by TitaniumDragon on The noncentral fallacy - the worst argument in the world? · 2013-04-17T10:33:16.941Z · LW · GW

I think they assume that intending to kill someone is ALWAYS malicious in the US, regardless of your personal convictions on the matter. But yes, you are correct that you could be charged with murder without actual malice on your part (not that it is really inappropriate - the fact that you're being dumb doesn't excuse you for your crime).

By the US definitions, assisted suicide is potentially murder due to your intent to kill, unless your state has an exception, though it is more likely to be voluntary manslaughter. Involuntary euthanasia is a whole different kettle of fish, though.

Comment by TitaniumDragon on The noncentral fallacy - the worst argument in the world? · 2013-04-17T10:20:53.978Z · LW · GW

Are the blacks ever going to give up the right to being selected over whites, now that they have the majority of votes in the country? Or is it just going to be a permanent bias?

I think we all know the answer to this in our heart of hearts. They will always claim that they need it to combat bias against them, and because they "deserve" it because their parents/grandparents/whatever were disadvantaged.

As time goes on, the whites will feel that they are being punished for things that their parents or grandparents did, and will grow bitter and racist against the blacks, who have legalized discrimination against them.

Is that really the proper path forward?

An immediate program is one thing. But we both know that it will be held as long as possible by those it advantages.

Comment by TitaniumDragon on The Universal Medical Journal Article Error · 2013-04-17T10:03:11.742Z · LW · GW

The people who believe that they are grown-ups who can eyeball their data and claim results which fly in the face of statistical rigor are almost invariably the people who are unable to do so. I have seen this time and again, and Dunning-Kruger suggests the same: the least able are the most likely to do this, on the idea that they are better at it than most, whereas the most able people will look at the data, try to figure out why they're wrong, and consider redoing the study if they suspect a hidden effect that their present data pool is insufficient to detect. However, repeating your experiment is always dangerous if you are looking for an outcome - repeating it until you get the result you want is bad practice, especially if you don't raise the bar for statistical significance to compensate for the fact that you're doing it over again - so you have to keep that carefully in mind, control your experiment, and set your expectations accordingly.
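The danger of rerunning an experiment until it gives the wanted answer can be made concrete with a small simulation; the sample size, number of retries, and test (a normal approximation to a one-sample t-test) are all illustrative choices:

```python
# Simulation: rerunning a null experiment until it "works" inflates the
# false-positive rate far above the nominal 5%. All parameters are illustrative.
import math
import random
import statistics

random.seed(0)

def p_value(sample):
    # Two-sided test of mean 0 (normal approximation to a one-sample t-test).
    n = len(sample)
    t = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def rerun_until_significant(max_tries):
    # The true effect is zero, so every "significant" result is a false positive.
    for _ in range(max_tries):
        sample = [random.gauss(0, 1) for _ in range(30)]
        if p_value(sample) < 0.05:
            return True
    return False

trials = 2000
hits = sum(rerun_until_significant(max_tries=5) for _ in range(trials))
# With 5 tries, the chance of at least one false positive is roughly
# 1 - 0.95**5, i.e. about 23% rather than 5%.
print(f"false-positive rate with 5 tries: {hits / trials:.0%}")
```

This is exactly why the significance threshold has to be adjusted when an experiment is repeated: the nominal 5% applies to a single test, not to the best of several.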

Comment by TitaniumDragon on Pay other people to go vegetarian for you? · 2013-04-17T09:55:14.007Z · LW · GW

A terrorist is someone who uses terror in order to coerce a reaction out of people.

The purpose of PETA's propaganda is to horrify people into not eating meat.

PETA's funding of and relationship with ALF has the purpose of terrorizing scientists, agribusiness, and other groups they want to harm, by threatening to or actually destroying research, burning down buildings, destroying crops, freeing animals, etc. They give people who have engaged in such activities leadership positions, portray it as a reasonable response, give them money, recruit members for them, etc.

Ergo, they are a terrorist front group. I felt that this was pretty clear from the whole "they do in fact go around burning stuff down, commit arson" bit, and I threw in the dog thing because it was from a personal encounter with such "activists" (they tried to "liberate" the rabbits from a rabbit farm down the street during the summer; the guy who noticed their truck, with their dog locked up in it, came back with a baseball bat to smash in the window to let the dog out after passing it twice, but they had taken off by then). And arson, murder, and jaywalking constructions are always fun. Their association with various ecoterrorist groups, co-membership with such lovelies, and funding... well, it all speaks for itself. But it's usually good to say it out loud.

It is entirely appropriate to label people as terrorists when they are, in fact, terrorists. It's like calling a member of the KKK a white supremacist; it might be a *negative* term, but it is also without question a *correct* term and a *descriptive* term. A lot of people are unaware of the fact that PETA is, in fact, a terrorist organization, so I generally feel obligated to mention it.

Unless it is site policy not to use the word terrorist, in which case I will... probably fail to remember the next time it comes up, and then get in trouble. Ah well.

Comment by TitaniumDragon on Physicists To Test If Universe Is A Computer Simulation (link) · 2013-04-17T09:38:11.690Z · LW · GW

If the Flying Spaghetti Monster is running the simulation, it is non-falsifiable, but also not worth considering, because he can just stick his noodly appendage in and retroactively undo any results he doesn't like anyway. It's not like we would know the difference.

For us to break the fourth wall, either our creators would have to desire it or be pretty bad at running simulations.

Comment by TitaniumDragon on Physicists To Test If Universe Is A Computer Simulation (link) · 2013-04-17T09:28:03.599Z · LW · GW

It is extremely unlikely that the Universe is a simulation for a wide variety of reasons, foremost amongst them being expense. The level of simulation present in the Universe is sufficiently high that the only purpose of it would BE simulation, meaning that our physical laws would necessarily be quite close to the laws of whatever universe overlies us. However, this implies that building an Earth simulator with the level of fine-grained reality present here would be insanely expensive.

Ergo, it is highly unlikely that we are in a simulation because the amount of matter-energy necessary to generate said simulation is far in excess of any possible benefit for doing so.

Comment by TitaniumDragon on The noncentral fallacy - the worst argument in the world? · 2013-04-17T09:20:59.413Z · LW · GW

By definition, capital punishment is not murder. Murder is defined as *unlawful*, malicious killing - you have to kill them, you have to have intended malice towards them, you have to have deliberately meant to cause them harm (thus accidental workplace deaths don't count unless you set them up intentionally; otherwise it's just manslaughter), and you have to have been doing so unlawfully. Capital punishment and self-defense are not murder because they are *lawful* killing of another human being; likewise with war, it is expected that killing enemy military personnel while in uniform is not a crime.

Abortion, likewise, cannot possibly be murder because it is legal. People may claim that it should NOT be legal, but it is, so it cannot possibly be murder by the definition of the term.

So it is true of taxation as well; theft, by definition, requires that the person taking the item does not have a legal right to it. Because of the nature of taxation, it cannot be theft if the tax is imposed by a lawful authority - they do have a legal right to that money. The same is true of fines - you are required to pay fines and it isn't theft because it was a punishment imposed upon you by a lawful authority. Likewise, someone kicking you out of the house you rented or taking the car you borrowed from them when you don't return it is not committing theft. All the same principle.

Affirmative action IS racist - not merely arguably, but without question. Its sole purpose is to discriminate on the basis of race. It is of course a net evil; not only does it benefit those who do not deserve it, it harms those who have committed no injustice. Seeing as CIVILIZED people don't believe in blood guilt and punishing people for things that they did not do, there is no question that affirmative action is wrong. Worse still, not only does it put underqualified people into positions where they are more likely to drop out or fail, it casts suspicion upon ANY person who could have possibly benefited from affirmative action, thereby denigrating their own talents, with people just seeing them (possibly with resentment) as the "token minority" rather than as an equal who earned their way in with everyone else. Any time you screw up, it's because you're the token black dude, not because everyone makes mistakes, and if you DO screw up more than normal, or someone ELSE has done so, then you're making everyone else that much more likely to carry racist thoughts forward.

There is no question that it is racist, evil, and wrong. So it is entirely correct to call it racist, and the term is unquestionably correct. People can rationalize it all they want, but there's absolutely no moral difference whatsoever between letting an underqualified black person in because they're black and letting an underqualified white person in because they're white. It is perfectly legitimate to call it racist. People who don't like it aren't willing to cop to the reality because we KNOW it is wrong.

Comment by TitaniumDragon on Pay other people to go vegetarian for you? · 2013-04-17T08:51:22.153Z · LW · GW

PETA is without question a terrorist organization. They act as a front for recruitment for terrorist groups such as ALF, they give money to such groups, they support their activities, they send out large amounts of propaganda, they have significant overlap in membership with said terrorist groups and put terrorists in leadership roles... the list goes on. They DO, in fact, go around burning stuff down and committing arson, and I know that on at least one of those occasions they left their dog locked up in their truck on a hot day while they were out "liberating" rabbits (who mostly ended up hiding under their hutches anyway). They have destroyed scientific research in pursuit of their insane ideology.

They oppose the use of genetic engineering to improve food crops, indeed destroying specimens of GMed plants in their terrorist activities, and try to terrorize people into believing GMed plants and animals are more dangerous than anything else we eat. They promulgate scams like organic foods, and thus are considerably worse for the environment and society - there's a reason why GMed foods are cheaper, after all.

PETA is a terrorist front organization, and obviously so to those who are familiar with them, their history, and the people involved with them. They are without question evil, and I oppose them quite strongly. And you should too, if you actually believe in a better future. PETA doesn't.

Factory farms aren't a bad thing anyway. Animals aren't people. I eat them. They are delicious. If they are produced more efficiently via factory farming methods (and they are; if they weren't, then factory farms wouldn't exist), then whatever. Killing them is much worse than everything else that happens to them, and in the end, I don't really care. They aren't randomly being "tortured". They're being kept in stark conditions in order to be produced more efficiently.

I'm not really for someone going out and just randomly hurting animals for no reason, but I'm okay with things that cause harm to animals which actually have legitimate purposes. Factory farming has a legitimate purpose.

Comment by TitaniumDragon on The Universal Medical Journal Article Error · 2013-04-16T20:41:41.611Z · LW · GW

95% is an arbitrarily chosen number which is a rule of thumb. Very frequently you will see people doing further investigation into things where p < 0.10, or where they simply feel there was something interesting worth monitoring. This is, of course, a major cause of publication bias, but it is not unreasonable or irrational behavior.

If the effect is really so minor it is going to be extremely difficult to measure in the first place, especially if there is background noise.
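The difficulty of measuring a minor effect against background noise can be illustrated with a quick power simulation; the effect size, noise level, and sample sizes below are assumptions chosen for illustration, not figures from the text:

```python
# Simulation: a small true effect is hard to detect against background noise
# unless the sample is large. All parameters here are illustrative assumptions.
import math
import random
import statistics

random.seed(1)

def significant(effect, n):
    # Draw n noisy measurements around the true effect and test against zero
    # (normal approximation to a one-sample t-test, for simplicity).
    sample = [random.gauss(effect, 1.0) for _ in range(n)]
    t = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return p < 0.05

trials = 2000
powers = {}
for n in (20, 200):
    # True effect of 0.1 standard deviations - small relative to the noise.
    powers[n] = sum(significant(0.1, n) for _ in range(trials)) / trials
    print(f"n={n}: effect detected {powers[n]:.0%} of the time")
```

Under these assumptions, the small sample detects the effect only a small fraction of the time; even the tenfold-larger sample misses it more often than not, which is the point: a genuinely minor effect demands far more data than intuition suggests.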

Comment by TitaniumDragon on Three Worlds Collide (0/8) · 2013-04-16T20:05:22.023Z · LW · GW

Fanfiction inherently limits the number of people who will ever look at it; an independent work stands on its own merits, but a fanfiction stands on both its own merits and the merits of the continuity to which it is attached. Write the best fanfic ever about Harry Potter, and most people still will never read it because your audience is restricted to Harry Potter fans who read fanfiction - a doubly restricted group.

While it is undeniable that it can act to promote your material, you are forever constrained in audience size by the above factors, as well as the composition of said audience by said people who consume fanfiction of fandom X.

Comment by TitaniumDragon on Pay other people to go vegetarian for you? · 2013-04-16T19:54:15.194Z · LW · GW

The problem is that animal cruelty is not a rational reason to become a vegetarian. If we are willing to manipulate people into making choices that we deem morally superior, shouldn't we at least be doing things which encourage rational behavior?

Vegetarianism isn't really rational in the first place - someone going vegetarian in the United States has little positive impact on starving people anywhere, and the US isn't actually struggling that badly with the environmental impact of farming. Moreover, eating meat is an efficient means of getting a broad swath of nutrients and helps to ensure that children are properly fed balanced diets - people who eat meat are not terribly likely to be malnourished, while a lot of vegetarians and vegans do it "wrong", especially people who are bandwagoning.

It strikes me as a means of pretending that you are making a positive difference without actually having a positive impact on people in general, and probably having a net negative impact by encouraging terrorist groups like PETA.

Comment by TitaniumDragon on The Universal Medical Journal Article Error · 2013-04-16T18:34:33.575Z · LW · GW

This is why you never eyeball data. Humans are terrible at understanding randomness. This is why statistical analysis is so important.

Something that is at 84% is not at 95% - and 95% is a low level of confidence to begin with. It is a nice rule of thumb, but if you're doing studies like this you really want to crank it up even further to deal with publication bias. Better still: publish regardless of whether you find an effect or not, and encourage others to do the same.

Publication bias (positive results are much more likely to be reported than negative results) further hurts your ability to draw conclusions.

The reason that the FDA said what they did is that there isn't evidence to suggest that it does anything. If you don't have statistical significance, then you don't really have anything, even if your eyes tell you otherwise.