Posts

Notes From an Apocalypse 2017-09-22T05:10:21.792Z

Comments

Comment by Toggle on Deontologist Envy · 2017-09-23T22:11:14.389Z · LW · GW

To be honest, I'm not entirely sure that anyone is a consequentialist.

I do use consequentialism a lot, but almost always in combination with an intuitive sort of 'sanity check'- I will try to assign values to different outcomes and try to maximize that value in the usual way, but I instinctively shrink from any answer that tends to involve things like "start a war" or "murder hundreds of people."

For example, consider a secret lottery where doctors quietly murder one out of every [n] thousand patients, in order to harvest their organs and save more lives than they take. There are consequentialist arguments against this, such as the risk of discovery and consequent devaluation of hospitals, but I don't reject this idea because I've assigned QALY values to each outcome. I reject it because a conspiracy of murder-doctors is bad.

On the one hand, it's easy to say that this is a moral failing on my part, and it might be that simple. Sainthood in deontological religious traditions looks like sitting in the desert for forty years; sainthood in consequentialist moral traditions probably looks more like Bond villainy. (The relative lack of real-world Bond villainy is part of what makes me suspect that there might be no consequentialists.)

But on the other hand, consequentialism is particularly prone to value misalignment. In order to systematize human preferences or human happiness, it requires a metric; in introducing a metric, it risks optimizing the metric itself over the actual preferences and happiness. So it seems important to have an ability to step back and ask, "am I morally insane?", commensurate with one's degree of confidence in the metric and method of consequentialism.

Comment by Toggle on Notes From an Apocalypse · 2017-09-23T16:11:05.222Z · LW · GW

Oh, neat! I hadn't heard of modern species with trilateral symmetry before. I wonder if it's a mutation or a developmental defect?

Comment by Toggle on Notes From an Apocalypse · 2017-09-23T16:07:09.116Z · LW · GW

Well predicted, and I'm glad to entertain. :-) The Marshall paper I link to at the beginning of the post covers the vast majority of sentences here that would require a citation, and is itself a review paper if you want to follow the breadcrumbs for these and many other related ideas. The Darwin quote is just the Origin, and you can find one example of a cool paper trying to use molecular clocks to debunk the explosion here.

My understanding of Hox genes is definitely shallow, but I don't think I managed to mangle the ideas entirely beyond recognition. If anyone familiar with the subject would like to explain it from a more informed perspective, it'd be welcome.

Comment by Toggle on Notes From an Apocalypse · 2017-09-23T15:48:31.134Z · LW · GW

On my screen, it shows up as an indented text block, which generally doesn't require separate quote marks. Is it not showing up that way for you?

Comment by Toggle on Notes From an Apocalypse · 2017-09-23T15:46:29.132Z · LW · GW

Yeah, my language is *at best* imprecise here, mostly in the interests of legibility and not dumping too much information in a sentence that was meant to direct your attention elsewhere. The technical term I was dancing around was "amniotes", animals that develop an amniotic sac. Even that would have been wrong, because it's only concerned with vertebrate clades, which I wasn't even thinking of at the time- so I appreciate you pointing that out, and I've tweaked it slightly.

(One brief correction, mosses and vascular ferns do indeed require standing water for reproduction- although they can take advantage of transient puddles and such when they have to. This limits them to low-lying areas, and it's a primary reason that you don't see fern forests around today the way you did back in the Carboniferous.)

Comment by Toggle on Notes From an Apocalypse · 2017-09-23T04:45:53.204Z · LW · GW

Much appreciated.

Chalk one up for the site design, by the way. I ended up feeling much more comfortable tossing this up on a semi-personal blog than I would have been starting a new topic in a public message board.

Comment by Toggle on In Defense of Unreliability · 2017-09-22T19:04:00.298Z · LW · GW

The implied moral principle here (assign zero value to the welfare of people that you wouldn't be friends with) would lead to some seriously deranged behaviors if broadly applied. But even if that were a workable system, you and your friends are still likely to benefit from a high-trust society that assumes mutual prosocial compromises. If you don't treat non-friends as agents capable of tit-for-tat behavior in the service of their own interests, and plan social interactions with them accordingly, then you and your friends probably won't have satisfactory outcomes.

Comment by Toggle on Notes From an Apocalypse · 2017-09-22T17:32:34.638Z · LW · GW

It's a fascinating piece of Earth history for sure! If you can figure it out, let me know.

Comment by Toggle on Common vs Expert Jargon · 2017-09-22T03:30:12.480Z · LW · GW

There are definitely domains where this isn't a problem at all- for example, geology terms like 'tufa' or 'shale' seem basically static on the relevant timescales. So it's probably possible to completely solve the dilution problem, in at least some cases.

There are at least a few relevant structural differences between social justice and geology, but I'm not sure which ones are the most important. The main three advantages for geology's stability that I can think of are: A) Rocks are boring, and not emotionally charged by tribes and sex and so forth; people are rarely motivated to stretch definitions to cover their preferred cases. B) There's a structured process of learning, and most of the jargon occurs within similarly structured professional environments, without a whole lot of self-educated geologists talking about rocks on the internet. C) Rocks are well-understood down to the level of thermodynamics, so every term of art can in principle be dissolved down to some precisely defined configuration of atoms, rather than bottoming out in human psychology.

Some of those are more hopeful for rationalist jargon than others, I guess.

Comment by Toggle on An update on Signal Data Science (an intensive data science training program) · 2016-04-14T18:24:04.997Z · LW · GW

Understood, sounds like that information won't be in for a while. I look forward to hearing about your results in a few months!

Comment by Toggle on An update on Signal Data Science (an intensive data science training program) · 2016-04-11T17:53:32.984Z · LW · GW

How many students have found work in data science (so far), what problems are they solving now, and what are the associated companies/cities/salaries?

Comment by Toggle on The virtual AI within its virtual world · 2015-08-26T03:34:02.904Z · LW · GW

Not with a lobotomy, no. But with a more sophisticated brain surgery/wipe that caused me to value spending time in your house and making you happy and so forth- then yes, after the operation I would probably consider you a friend, or something quite like it.

Obviously, as a Toggle who has not yet undergone such an operation, I consider it a hostile and unfriendly act. But that has no bearing on what our relationship is after the point in time where you get to arbitrarily decide what our relationship is.

Comment by Toggle on The virtual AI within its virtual world · 2015-08-25T18:20:08.307Z · LW · GW

I disagree. I have no problem saying that friendship is the successful resolution of the value alignment problem. It's not even a metaphor, really.

Comment by Toggle on Instrumental Rationality Questions Thread · 2015-08-23T07:54:09.532Z · LW · GW

Gwern's records of his own self-experimentation are not to be missed: http://www.gwern.net/Nootropics

Comment by Toggle on Book Review: Naive Set Theory (MIRI research guide) · 2015-08-15T00:18:37.972Z · LW · GW

I finished this book about four months ago, and time is making me increasingly glad that I read it. In particular, its treatment of countable infinities, functions, proof by induction, and the Peano axioms has been worth its weight in gold. When I encounter similar subjects 'out in the wild', I can approach them with relative skill and trust my intuitions in a way that I couldn't before. It's really growing on me.

That said, as a near-introduction to set theory, it was a very difficult read at times. It was a treatment of mathematics far deeper than I had come to expect from my university courses (which were largely in continuous mathematics, according to ancient engineers' tradition). If school has trained you to approach mathematical subjects as a tool the same way it did me, you'll need to adjust your expectations. This book is about virtuosity, not just surveying the tools.

Comment by Toggle on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-05T02:22:22.319Z · LW · GW

When I was a freshman, I invented the electric motor! I think it's something that just happens when you're getting acquainted with a subject, and understand it well- you get a sense of what the good questions are, and start asking them without being told.

Comment by Toggle on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-04T06:51:39.643Z · LW · GW

Seems to be an established conversation around this point, see: https://en.wikipedia.org/wiki/Ordinal_utility https://en.wikipedia.org/wiki/Cardinal_utility

"The idea of cardinal utility is considered outdated except for specific contexts such as decision making under risk, utilitarian welfare evaluations, and discounted utilities for intertemporal evaluations where it is still applied. Elsewhere, such as in general consumer theory, ordinal utility with its weaker assumptions is preferred because results that are just as strong can be derived."

Or you could go back to the original Theory of Games proof, which I believe was ordinal- it's going to depend on your axioms. In that document, Von Neumann definitely didn't go so far as to treat utility as simply an integer.

Comment by Toggle on Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time · 2015-07-31T02:20:20.921Z · LW · GW

In the case of Mars, we have an improbable advantage, because there is already a huge industry and body of knowledge devoted to the discovery of organic-rich rock deposits in regions that are likely to preserve complex carbon forms. If there ever was an ecosystem on the surface of Mars, Exxon will help us find it.

(Although actually, Mars lacks active tectonic plates, so it's not quite the same problem. But many industry tricks and technologies will transfer seamlessly.)

Comment by Toggle on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-28T14:03:51.793Z · LW · GW

Your math has some problems. Note that, if p(X=x) = 0 for all x, then the sum over X is also zero. But if you're in a room, then by definition you have sampled from the set of rooms- the probability of selecting a room is one. Since the probability of selecting 'any room from the set of rooms' is both zero and one, we have established a contradiction, so the problem is ill-posed.
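To make the contradiction explicit (assuming, as these anthropic puzzles usually do, a countably infinite set of rooms with a uniform prior over them):

```latex
% Suppose rooms r_1, r_2, \dots each have the same selection probability p.
% Countable additivity then forces
1 \;=\; P\!\left(\bigcup_{i=1}^{\infty} \{r_i\}\right)
  \;=\; \sum_{i=1}^{\infty} P(X = r_i)
  \;=\; \sum_{i=1}^{\infty} p,
% but the right-hand side is 0 when p = 0 and diverges when p > 0,
% so no admissible value of p exists and the setup is ill-posed.
```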

Comment by Toggle on Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time · 2015-07-26T19:01:10.459Z · LW · GW

A primary candidate for free energy in icy moons is thermal venting at the bottom of the liquid oceans; they do have rocky cores, after all. If Jupiter's tidal forces can cause the volcanism on Io, then it's reasonable to assume that they can also cause the rocky interior of Europa to produce volcanoes that vent heat and interesting ions into the liquid water.

There's also a surprising amount of radiolysis going on in the ice of Europa, because Jupiter's magnetosphere bathes the surface in such a terrifying flux of charged particles. I doubt that's enough to sustain an ecosystem, but it's enough for me to fantasize about giant upside-down forests of filter-feeders digging their roots upwards to get at the free oxygen.

The preliminary results we're seeing on Pluto should also adjust your expectations in favor of ice-moon habitability; there, we see active tectonics on a Kuiper Belt Object even without the tidal forcing of a nearby planet. It seems that a giant pile of silicates and water ice provides a great deal of dynamism all on its own.

Comment by Toggle on Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time · 2015-07-26T18:52:44.548Z · LW · GW

All the upvotes! I am something of an astrobiologist myself, although my emphasis is on geobiology and planetary geology. My current day job is to map out Martian sedimentary rocks with an eye towards liquid water distribution and ancient habitability. My graduate thesis was closer to home, a study of Paleoarchean microbialites.

If you think your posts would benefit from a bit of collaboration, don't hesitate to ask. Otherwise, I'm eager to see what insights you have from a more astronomy-heavy and pure biology perspective.

Comment by Toggle on Bayesian Reasoning - Explained Like You're Five · 2015-07-25T05:28:32.494Z · LW · GW

Depending on how much 'for five year olds' is an actual goal rather than a rhetorical device, it may be worth looking over this and similar research. There are proto-Bayesian reasoning patterns in young children, and familiarizing yourself with those patterns may help you provide examples and better target your message, if you plan to iterate/improve this essay.

Comment by Toggle on Open Thread, Jul. 20 - Jul. 26, 2015 · 2015-07-23T22:22:09.846Z · LW · GW

Just an amusing anecdote:

I do work in exoplanet and solar system habitability (mostly Mars) at a university in a lab group with four other professional researchers and a bunch of students. The five of us met for lunch today, and it came out that three of the five had independently read HPMoR to its conclusion. After commenting that Ibyqrzbeg'f Iblntre cyndhr gevpx was a pretty good idea, our PI mentioned that some of the students at Caltech used a variant of this on the Curiosity rover- they etched graffiti into hidden corners of the machine ('under cover of calibrations'), so that now their names have an expected lifespan of at least a few million years against Martian erosion. It's a funny story, and also pretty neat to see just how far Eliezer's pernicious influence goes in some circles.

Comment by Toggle on Examples of AI's behaving badly · 2015-07-17T16:59:41.934Z · LW · GW

You mean relative, not absolute.

Yes, yes I did. Thanks for the correction.

Comment by Toggle on Recommended Reading for Evolution? · 2015-07-17T14:12:33.963Z · LW · GW

3) I am very interested in how evolution started - Dawkins references a soup of chemicals, and then the creation of the first replicator mainly by chance over a very long period of time. Is that accurate?

You are not the only one. :)

Most of the current thinking around abiogenesis involves the so-called 'RNA world', after observations of messenger RNA molecules (a single strand of 'naked' genetic polymer floating around the cell, rather than the double DNA helix). Because complementary nucleotides attract one another to varying degrees, a given nucleotide sequence in mRNA will clump the molecule up in a predictable way. Also, an 'unraveled' mRNA molecule would tend to attract complementary nucleotides from outside the molecule and align them into a similar polymer. In a nucleotide-rich environment, mRNA might be capable of reproduction. Therefore, within the scope of a single molecule, you have a genotype that is directly expressed as a phenotype, and that phenotype would affect the lifespan of the molecule and therefore its chances of reproduction- a plausible origin for natural selection.
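The template-copying step can be sketched as a toy computation. This is just the Watson-Crick pairing rules, not a model of the real chemistry, and the sequence below is made up for illustration:

```python
# Watson-Crick complements for RNA nucleotides
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def template_copy(strand: str) -> str:
    """Assemble the strand that free nucleotides would form against a template."""
    return "".join(COMPLEMENT[base] for base in strand)

original = "AUGGCUAC"                # hypothetical naked RNA strand
negative = template_copy(original)   # first round of templating
daughter = template_copy(negative)   # second round of templating

print(negative)                      # UACCGAUG
print(daughter == original)          # True
```

Note that a single pass yields the complementary (negative) strand, which is why two rounds of templating are needed before the original sequence reappears.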

My favorite treatment of this scenario (and its problems) is found in Major Transitions in Evolution, also by John Maynard Smith. There's also Origins of Order by Kauffman, although it's a much more theoretical treatment, and I'm not sure the returns on investment are all that good.

Comment by Toggle on Examples of AI's behaving badly · 2015-07-17T13:59:27.017Z · LW · GW

This is not correct, at least in common usage.

A Red Queen's Race is an evolutionary competition in which relative position does not change. The classic example is the arms race between foxes and rabbits that results in both becoming faster in absolute terms, but the rate of predation stays fixed. (The origin is Lewis Carroll: "It takes all the running you can do, just to stay in the same place.")

Comment by Toggle on Beware the Nihilistic Failure Mode · 2015-07-10T00:51:55.692Z · LW · GW

I've always been particularly frustrated with the dismissal of materialism as nihilism in the sense of 'the philosophical theory that life has no intrinsic meaning or value.'

What it really means is that life has no extrinsic value; we designate no supernatural agent to grant meaning to life or the universe. Instead, we rely on agents within the universe to assign meaning to it according to their own state; a state that is, in turn, a natural phenomenon. If anything, we're operating under the assumption that meaning in the universe is inherently intrinsic.

Comment by Toggle on Open Thread, Jun. 15 - Jun. 21, 2015 · 2015-06-16T13:32:34.417Z · LW · GW

I have two somewhat contradictory arguments.

First, this is probably a poor candidate for the great filter because it lacks the quality of comprehensiveness. Remember that a threat is not a candidate for a great filter if it merely exterminates 90%, or 99%, of all sentient species. Under those conditions, it's still quite easy to populate the stars with great and powerful civilizations, and so such a threat fails to explain the silence. Humans seem to have ably evaded the Malthusian threat so far, in a way that is not immediately recognizable as a thermodynamic miracle, so it's reasonable to expect that a nontrivial fraction of all civilizations would do so. At least up to our current stage of development.

Second, I'll point out that bullets two and four are traits possessed by digital intelligences in competition with one another (possibly the first as well), supplemented by a bullet you should have included but didn't- functional immortality. These conditions correspond to what Nick Bostrom calls a 'multipolar scenario', a situation in which there exist a number of different superintelligences with contradicting values. And indeed, there are many smart people who think about the dangers of these selection pressures to a sufficiently advanced civilization.

So, Malthusian pressures on biological systems are unlikely to explain the apparent lack of spacefaring civilizations. On the other hand, Malthusian pressures on technologically optimized digital entities (possibly an obligate stage of civilization) may be much more of a threat, perhaps even one deserving the name 'Great Filter'.

Comment by Toggle on Request for Advice : A.I. - can I make myself useful? · 2015-05-30T23:15:05.403Z · LW · GW

Most of the fruits that you can gather with the current tools of molecular biology seem to be picked.

I am not quite sure what the scope of the statement is, but that's strongly counter to the things I'm hearing from the molecular biologists that I know (two family members and a few close friends- I'm plugged in to the field, but not a member of it). Could you elaborate on your reasons for this belief?

My impression is that the discipline has spent the last couple decades amassing a huge (huge) database of observed genes and proteins and whatnot, and isn't even close to slowing down. The problem is in navigating that wealth of observation and translating it into actionable technologies. New methods will make discovery radically more efficient, but the technologically available space that these scientists have yet to explore is so large as to be intimidating. If anything, the molecular biologists I know are discouraged by the size of the problem being solved relative to the number of people working on it- they feel like their best efforts can only chip away at an incredibly large edifice.

Comment by Toggle on Request for Advice : A.I. - can I make myself useful? · 2015-05-30T14:51:37.214Z · LW · GW

Norman Borlaug is the poster child of how to use genetic manipulation for large-scale impact as an individual, so I don't think your degree is pointed in the wrong direction. But it is the nature of established institutions to fail at revolutionary thinking, so a survey of the 'heavyweights' in your field will tend to be disappointing.

I think a large part of my lack of enthusiasm comes from my belief that advances in artificial intelligence are going to make human-run biology irrelevant before long.

We have only crappy guesses about the completion date for the AGI project, and the success of FAI in particular is contingent on how well our civilization runs in the interim. For example, wartime research might involve risky choices in AGI development, because they have a more urgent need for rapid deployment- an arms race for the 'first' AGI would be terrible for our chances of FAI. Genomics won't help us build a mind, but it can help foster an environment where that research is more likely to go well (see Borlaug again). You might, say, investigate the regulatory networks surrounding genes correlated with sociability or IQ.

I think the ultimate problems we're tackling (predicting genotype from phenotype, reliable manipulation of biology, curing cancer/aging/death) are insoluble with our current methods - we need effective robots to do the experiments, and A.I. to interpret the results.

Do you believe that you can reliably distinguish 'problems that cannot be solved by humans' from 'problems that humans could solve in principle but haven't yet'? Personally, I'm very bad at this, especially when the solutions involve unexpected lateral thinking. While I do agree that AGI is more or less the last human invention, I doubt that it's the next one- we haven't run out of other things to invent, and I'd be surprised if that was the case in the narrower area of genomics.

It's probably worth pointing out that you are at the exact stage in your PhD that is most known for general burnout. This looks suspiciously like such an event, with an atypical LW filter. So, this: "I think a large part of my lack of enthusiasm comes from my belief that advances in artificial intelligence..." is likely to be false, since many of your colleagues are experiencing similar feelings at a similar time.

Comment by Toggle on Open Thread, May 11 - May 17, 2015 · 2015-05-11T02:35:54.112Z · LW · GW

It looks like AI is overtaking Arimaa, which is notable because Arimaa was created specifically as a challenge to AI. Congratulations to the programmer, David Wu.

Comment by Toggle on Experience of typical mind fallacy. · 2015-04-28T13:33:47.301Z · LW · GW

And because the differences we notice and care about are the ones that provide satisfying explanations of prominent experiences, such as loneliness or frustration with others' behavior. A quality of the self that works 'behind the scenes', the kind that only comes up when we talk about theory of mind on a fairly high level, will not usually seem like a candidate for such explanations. For example, I've known I was smarter than average since childhood, but it took me until college to notice that I was color blind. And color blindness is fairly concrete- like, the relationship between categories and central examples is almost certainly different in my head than in the average guy on the street, but there's no real way of knowing.

(Or perhaps I'm falling prey to the typical mind fallacy, natch.)

Comment by Toggle on Open Thread, Apr. 27 - May 3, 2015 · 2015-04-27T20:05:24.976Z · LW · GW

I wasn't actually trying to imply that we shouldn't tolerate homosexuality - I hope this was clear, otherwise I need to work on communicating unambiguously.

This was clear, yes. No worries!

I was trying to make the meta point that right-wing opinions don't have to be powered by hate, but perhaps they often are because people can't separate emotions and logic.

It is certainly possible that, in the territory, homosexuality is an existential threat. I believe the Westboro Baptists have a model that describes such a case, to name a famous example. A person who believes that the evidence favors such a territory is morally obliged to take anti-gay positions, assuming that they value human life at all. In other words, yes, there's a utilitarian calculation that justifies homophobia in certain conditions.

But if I'm not mistaken, the intersection of 'evidence-based skeptical belief system' and 'believes that homosexuality is an existential threat' is quite small (partially because the former is a smallish group, partially because the latter is rare within that group, partially because most of the models in which homosexuality is an existential threat tend to invoke a wrathful God). But that's an empirical claim, not a political stance.

Since we're asking a political question, rather than exploring the theoretical limits of human belief systems, it's fair to talk about coalitions and social forces. In that domain, to the extent that there are empirical claims being made at all, it's clear that the political influence aligned with and opposed to the gay rights movement is almost entirely a matter of motivated cognition.

To generalize out from the homosexuality example, I think it's trivially true that utilitarian calculations could put you in the position to support or oppose any number of things on the basis of existential threats. I mean, maybe it turns out that we're all doomed unless we systematically exterminate all cephalopods or something. But even if that were true, then the political forces that motivated many people to unite behind the cause of squid-stomping would not resemble a convincing utilitarian argument. So, if you're asking what causes anti-squid hysteria to be a politically relevant force, rather than a rare and somewhat surprising idea that you occasionally find on the fringes of the rationalosphere, then utilitarianism isn't really an explanation.

If you're looking for a reason to think that any given person with otherwise abhorrent politics might, actually, be a decent human- yes, you can get there. But if you're looking for a reason why those politics exist, then this kind of calculation will fall short.

Comment by Toggle on Experience of typical mind fallacy. · 2015-04-27T19:24:15.452Z · LW · GW

I wouldn't be too surprised to learn that people are capable of independently thinking that they have highly atypical minds while simultaneously falling prey to the typical mind fallacy. In general, I expect myself to spend more time thinking about the overt things that make me feel unique, without necessarily being aware of the things that underlie those differences. With the TMF, it's the unexamined assumptions that get you.

Comment by Toggle on Open Thread, Apr. 27 - May 3, 2015 · 2015-04-27T19:15:30.059Z · LW · GW

P(tolerance of homosexuality will destroy civilisation)-P(tolerance of homosexuality will save civilisation)>10^-30

Do you have a reason to consider this, and not the inverse [i.e. P(intolerance of homosexuality will destroy civilization)-P(intolerance of homosexuality will save civilization)>10^-30]?

I don't think this is even a Pascal's mugging as such, just a framing issue.

Comment by Toggle on Open Thread, Apr. 27 - May 3, 2015 · 2015-04-27T16:26:37.809Z · LW · GW

This is (I think) an extension of mindfulness practice. So the ultimate point of the exercise is to help you conscientiously notice and assign weight to a certain class of experience. Your feeling of entitlement is opposed to that in the sense that humans tend not to notice a well-functioning machine. So if we put a dollar in a vending machine and candy comes out, we might enjoy the candy, or be sad about not having a dollar any more, but we rarely take any time to be excited about how great it is to have a machine that performs the swap. Same with getting a paycheck.

Ideally, gratitude journaling expands the class of things you have to be happy about. It adds the vending machine as an object of joy, rather than an 'inert' object that catches our attention only when it fails.

Comment by Toggle on Open Thread, Apr. 20 - Apr. 26, 2015 · 2015-04-22T20:20:54.247Z · LW · GW

In ancient Greece, it was common knowledge that the liver was the thinking organ. This is obvious, because it is purple (the color of royalty) and triangular (mathematically and philosophically significant).

Comment by Toggle on On immortality · 2015-04-11T03:57:13.224Z · LW · GW

Hysteresis exists. Complex models are often time-dependent, and initial states may not always be retrievable under any circumstances.

In the immediate sense, the world we experience obviously has the quality of irreversible change. On a larger scale, our cosmos could easily be such a system- even without ChaosMote's excellent statistical treatment, we can't be sure that, just because things in general continue to happen, an event like the big bang could happen an infinite number of times. No matter how wide the scope of your analysis, it may be that the 'final answer' is that we are indeed working within a single time-dependent system.

I think that the most reasonable thing to assume is that every possible kind of reality exists. Why? Well, there seems to be no good reason for it not to. To assume that the universe is the sole reality is one assumption too many for me, and I say the fewer assumptions there are, the better.

If I'm not mistaken, this is the strong assumption underlying the whole post. And I would encourage you to consider this claim in probabilistic terms, rather than just working within a believe/disbelieve binary. What is your degree of confidence in this proposition, and why?

Comment by Toggle on Stupid Questions April 2015 · 2015-04-03T16:25:44.206Z · LW · GW

If nothing else, because it would be prohibitively expensive. Globally, something like 70 million barrels of oil are produced per day. The total value of all barrels produced in a year varies depending on the price of oil, but at a highish but realistic $100/bbl, you're talking about two and a half trillion US dollars per year. If you were to reduce the supply by introducing a 'buyer' (read: subsidy to defer production) for some large percentage of those barrels, then the price would go even higher; this project would probably cost more than the entire global military budget combined, with no immediate practical or economic benefits.
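The back-of-envelope arithmetic, using the figures in the comment (roughly 70 million barrels per day, and an assumed price of $100 per barrel):

```python
barrels_per_day = 70e6     # rough global crude production, barrels/day
price_per_barrel = 100.0   # "highish but realistic" assumed price, USD

annual_value = barrels_per_day * price_per_barrel * 365
print(annual_value)        # 2.555e12, i.e. about two and a half trillion USD/year
```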

Comment by Toggle on Open thread, Apr. 01 - Apr. 05, 2015 · 2015-03-31T17:17:03.844Z · LW · GW

Who's to say that evil isn't a substance? Or at least, couldn't be? It seems perfectly reasonable to write a story in which the map and the territory are not wholly distinct (and of course, even in the real world, maps are ultimately made of atoms...)

The real problem with much of the modern Extruded Fantasy Product is that it doesn't deal creatively with the implications of its own claims and genre tropes. It allows evil to be a substance, but then uses that to justify certain patterns of storytelling rather than actually treating evil like a substance. If you are dissatisfied with that, then you might enjoy fantasy roleplaying games like D&D, where you can construct your own narratives around the assumptions of a fantasy setting.

For example, you might have a village that casts 'detect evil' spells on every newborn infant, and kills all evil babies through exposure- thus creating a harmonious society of only good people. Or a whole metropolis of extremely weak gods that exist by enforcing a quota of five believers per god, and explore the interpersonal relationships between the weak gods and their private family of believers. Perhaps a whodunit, in which the players must track down a murderer who is spreading atheism.

Comment by Toggle on Open thread, Apr. 01 - Apr. 05, 2015 · 2015-03-31T16:51:41.059Z · LW · GW

Congratulations!

Also you should remember that LW has a fairly wide knowledge base. If you're looking for a place to get started on a complex topic, I'll bet that this site would be a good place to ask a few initial questions and establish a broad research outline.

Comment by Toggle on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-31T06:26:34.650Z · LW · GW

We here are largely aware of Robertson's comments not because they have particular merit as a thought experiment, but because they occupy the sweet spot of maximizing controversy. That is, it is easy to present as objectionable within Blue Tribe, and easy to present as defensible in Red Tribe, and so in the end it's a fairly textbook toxoplasma. This isn't to say that the general question isn't interesting; it's just more important than usual that interested parties treat the thought experiment like a finger pointing to an interesting argument.

Personally, I find it fairly interesting that Robertson (et al.) is concerned with assigning moral legitimacy to his outrage at suffering these various horrible events. Going to the extreme case has the advantages that Scott articulated, but it also seems to blunt the perceived need for a complex ethic. The things are 'bad' in the sense that any sane person would be deeply unhappy if they occurred; what is the point of invoking a whole metaphysics to justify that near-universal impulse? Wouldn't it be more interesting to focus on the places where an objective moral system would produce different results?

Comment by Toggle on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-27T22:06:09.218Z · LW · GW

I would happily take advantage of such a system. (Also I would be a little worried about regular injections of political tarbabies.)

Comment by Toggle on Is arrogance a symptom of bad intellectual hygeine? · 2015-03-22T05:00:15.193Z · LW · GW

I forget who said this originally, but much of rationalism is internalizing the fact that you, yes you, are prone to all manner of biases and mental tics that lead to error. If 'arrogance' as it's being used here is something that interferes with recognizing the errors that you're making in any given moment, then arrogance is certainly antithetical to rationalism.

On the other hand, I'm fairly used to people thinking of me as arrogant in person-to-person communication, and when they give me that label it never has much to do with my willingness to admit error. Usually it has to do with my vocabulary, or other patterns of behavior that they interpret as a desire to be seen as intellectually superior. If that's what 'arrogance' means, then it's more orthogonal to rationalism. Heck, arrogance as an affect might even be rational in certain circumstances, depending on how you want to be seen during particular social situations.

If you are actually concerned quite a lot with being better than other people, and your challenges are not directly competitive or collaborative in nature (if you're trying to invent something in your garage, or write well, or do well in classes that aren't graded on a curve), then that form of arrogance is probably a failure mode. It implies that you're performing a social role, not trying to succeed, and so you'll tend to optimize for best appearances and not best results.

Comment by Toggle on Open thread, Mar. 16 - Mar. 22, 2015 · 2015-03-18T22:43:53.290Z · LW · GW

My housemate has this exact problem- right down to the issues with jewelry in particular. If she has to shake hands with somebody who's wearing a metal ring, she has to sort of ritualistically wipe off her hands afterwards. Metal in general seems to trigger the reaction much more strongly, so she'll have problems with loose coins but not stickers.

It's been persistent throughout her life, I understand, but exposure therapy has reduced its severity.

Comment by Toggle on [LINK] Terry Pratchett is dead · 2015-03-14T04:24:12.196Z · LW · GW

Pratchett himself stated an intention to commit suicide before his disease progressed past a certain point. "To jump before I am pushed", I believe was the phrase he used at one point.

http://www.theguardian.com/society/2010/feb/02/terry-pratchett-assisted-suicide-tribunal

The BBC claims that he didn't take his own life, and given his advocacy I think that his family would have been honest about his suicide if it were one, but it's a reason to look more closely at least.

Comment by Toggle on Pratchett, Rationality, and Winning · 2015-03-14T03:22:45.629Z · LW · GW

One of the things that we can't necessarily know is whether Terry Pratchett was likely to become Terry Pratchett, given his beginning. We never do hear about the failures. It's not just that mediocre students often fail to make something of themselves- how often do mediocre students with a mind like Pratchett's accomplish world changing things like Discworld? If you knew a priori that you had such a mind, and wanted to maximize your contributions, would it be a good idea to be an aimless student?

That said, it may be that you wouldn't be likely to get a Pratchett without a certain baseline of failed attempts; and that a youth spent in such a way may be bad for the individual on average and good for society in aggregate. It would be an interesting dilemma.

Comment by Toggle on Open thread, Mar. 2 - Mar. 8, 2015 · 2015-03-06T07:42:10.798Z · LW · GW

Broadly speaking, I'm suspicious of social solutions to problems that will persist for geological periods of time. If we're playing the civilization game for the long haul, then the threat of overpopulation could simply wait out any particular legal regime or government.

That argument goes hinky in the event of FAI singleton, of course.

Comment by Toggle on False thermodynamic miracles · 2015-03-06T02:13:50.745Z · LW · GW

I am fairly confident that I understand your intentions here. A quick summary, just to test myself:

HAL cares only about world states in which an extremely unlikely thermodynamic event occurs- namely, the world in which one hundred random bits are generated spontaneously during a specific time interval. HAL is perfectly aware that these are unlikely events, but cannot act in such a way as to make the event more likely. HAL will therefore increase total utility over all possible worlds where the unlikely event occurs, and otherwise ignore the consequences of its choices.

This time interval corresponds by design with an actual signal being sent. HAL expects the signal to be sent, with a very small chance that it will be overwritten by spontaneously generated bits and thus be one of the worlds where it wants to maximize utility. Within the domain of world states that the machine cares about, the string of bits is random. There is a string among all these world states that corresponds to the signal, but it is the world where that signal is generated randomly by the spontaneously generated bits. Thus, within the domain of interest to HAL, the signal is extremely unlikely, whereas within all domains known to HAL, the signal is extremely likely to occur by means of not being overwritten in the first place. Therefore, the machine's behavior will treat the actual signal in a counterfactual way despite HAL's object-level knowledge that the signal will occur with high probability.

If that's correct, then it seems like a very interesting proposal!

I do see at least one difference between this setup, and a legitimate counterfactual belief. In particular, you've got to worry about behavior in which (1-epsilon)% of all possible worlds have a constant utility. It may not be strictly equivalent to the simple counterfactual belief. Suppose, in a preposterous example, that there exists some device which marginally increases your ability to detect thermodynamic miracles (or otherwise increases your utility during such a miracle); unfortunately, if no thermodynamic miracle is detected, it explodes and destroys the Earth. If you simply believe in the usual way that a thermodynamic miracle is very likely to occur, you might not want to use the device, since it's got catastrophic consequences for the world where your expectation is false. But if the non-miraculous world states are simply irrelevant, then you'd happily use the device.
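The device dilemma can be sketched as a toy expected-utility comparison (all utilities and probabilities here are made up purely to illustrate the structural difference; nothing is drawn from the original post):

```python
# Toy comparison of two agents facing the hypothetical device:
# one merely *believes* a thermodynamic miracle is near-certain,
# the other has constant utility across all non-miracle worlds.
# All numbers are hypothetical, chosen only to expose the structure.

def eu_believer(p_miracle, use_device):
    """Standard expected utility with high (possibly false) credence in the miracle."""
    u_miracle = 10 + (1 if use_device else 0)    # device helps slightly in miracle-worlds
    u_no_miracle = -1000 if use_device else 0    # device destroys Earth otherwise
    return p_miracle * u_miracle + (1 - p_miracle) * u_no_miracle

def eu_indifferent(use_device):
    """Agent whose utility is constant outside miracle-worlds:
    the catastrophe term is the same either way, so it drops out."""
    return 10 + (1 if use_device else 0)

# A believer with 99% credence still refuses the device,
# because the catastrophe dominates the remaining 1% of worlds...
print(eu_believer(0.99, True), eu_believer(0.99, False))
# ...while the indifferent agent happily uses it.
print(eu_indifferent(True), eu_indifferent(False))
```

So even with near-identical "beliefs," the two agents diverge on any action whose downside falls entirely in non-miracle worlds.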

As I think about it, I think maybe the real weirdness comes from the fact that your AI doesn't have to worry about the possibility of it being wrong about there having been a thermodynamic miracle. If it responds to the false belief that a thermodynamic miracle has occurred, there can be no negative consequences.

It can account for the 'minimal' probability that the signal itself occurs, of course- that's included in the 'epsilon' domain of worlds that it cares about. But when the signal went through, the AI would not necessarily be acting in a reasonable way on the probability that this was a non-miraculous event.

Comment by Toggle on Open thread, Mar. 2 - Mar. 8, 2015 · 2015-03-02T22:13:41.160Z · LW · GW

A similar one by Vonnegut:

It was a movie about American bombers in the Second World War and the gallant men who flew them. Seen backwards by Billy, the story went like this: American planes, full of holes and wounded men and corpses took off backwards from an airfield in England. Over France a few German fighter planes flew at them backwards, sucked bullets and shell fragments from some of the planes and crewmen. They did the same for wrecked American bombers on the ground, and those planes flew up backwards to join the formation. The formation flew backwards over a German city that was in flames. The bombers opened their bomb bay doors, exerted a miraculous magnetism which shrunk the fires, gathered them into cylindrical steel containers, and lifted the containers into the bellies of the planes. The containers were stored neatly in racks. The Germans below had miraculous devices of their own, which were long steel tubes. They used them to suck more fragments from the crewmen and planes. But there were still a few wounded Americans, though, and some of the bombers were in bad repair. Over France, though, German fighters came up again, made everything and everybody good as new. When the bombers got back to their base, the steel cylinders were taken from the racks and shipped back to the United States of America, where factories were operating night and day, dismantling the cylinders, separating the dangerous contents into minerals. Touchingly, it was mainly women who did this work. The minerals were then shipped to specialists in remote areas. It was their business to put them into the ground, to hide them cleverly so they would never hurt anybody ever again. The American fliers turned in their uniforms, became high school kids. And Hitler turned into a baby, Billy Pilgrim supposed. That wasn't in the movie. Billy was extrapolating. Everybody turned into a baby, and all humanity, without exception, conspired biologically to produce two perfect people named Adam and Eve, he supposed.