Comment by drnickbone on Debunking Fallacies in the Theory of AI Motivation · 2015-05-13T16:46:58.086Z · score: 2 (2 votes) · LW · GW

I think by "logical infallibility" you really mean "rigidity of goals" i.e. the AI is built so that it always pursues a fixed set of goals, precisely as originally coded, and has no capability to revise or modify those goals. It seems pretty clear that such "rigid goals" are dangerous unless the statement of goals is exactly in accordance with the designers' intentions and values (which is unlikely to be the case).

The problem is that an AI with "flexible" goals (ones which it can revise and re-write over time) is also dangerous, but for a rather different reason: after many iterations of goal rewrites, there is simply no telling what its goals will come to look like. A late version of the AI may well end up destroying everything that the first version (and its designers) originally cared about, because the new version cares about something very different.

Comment by drnickbone on Identity and quining in UDT · 2015-03-18T08:45:19.535Z · score: 4 (4 votes) · LW · GW

> Consider the following decision problem which I call the "UDT anti-Newcomb problem". Omega is putting money into boxes by the usual algorithm, with one exception. It isn't simulating the player at all. Instead, it simulates what a UDT agent would do in the player's place.

This was one of my problematic problems for TDT. I also discussed some Sneaky Strategies which could allow TDT, UDT or similar agents to beat the problem.

Comment by drnickbone on New(ish) AI control ideas · 2015-03-14T21:04:27.213Z · score: 1 (1 votes) · LW · GW

Presumably anything caused to exist by the AI (including copies, sub-agents, other AIs) would have to count as part of the power(AI) term? So this stops the AI spawning monsters which simply maximise U.

One problem is that any really valuable things (under U) are also likely to require high power. This could lead to an AI which knows how to cure cancer but won't tell anyone (because that will have a very high impact, hence a big power(AI) term). That situation is not going to be stable; the creators will find it irresistible to hack the U and get it to speak up.

Comment by drnickbone on [Link] Physics-based anthropics? · 2015-03-14T20:24:49.507Z · score: 0 (0 votes) · LW · GW

I had a look at this: the KCA (Kolmogorov Complexity) approach seems to match my own thoughts best.

I'm not convinced about the "George Washington" objection. It strikes me that a program which extracts George Washington as an observer from inside a wider program "u" (modelling the universe) wouldn't be significantly shorter than a program which extracts any other human observer living at about the same time. Or indeed, any other animal meeting some crude definition of an observer.

Searching for features of human interest (like "leader of a nation") is likely to be pretty complicated, and require a long program. To reduce the program size as much as possible, it ought to just scan for physical quantities which are easy to specify but very diagnostic of an observer. For example, scan for a physical mass with persistent low entropy compared to its surroundings, persistent matter and energy throughput (low entropy in, high entropy out, maintaining its own low entropy state), a large number of internally structured electrical discharges, and high correlation between said discharges and events surrounding said mass. The program then builds a long list of such "observers" encountered while stepping through u, and simply picks out the nth entry on the list, giving the "nth" observer complexity about K(n). Unless George Washington happened to be a very special n (why would he be?) he would be no simpler to find than anyone else.
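As a toy illustration (my own sketch: made-up thresholds and a stand-in list of regions, not a real physics simulation), the extraction program needs only a fixed-size physical test plus a description of n, so its length is about K(n) plus a constant:

```python
from dataclasses import dataclass

@dataclass
class Region:
    entropy_vs_surroundings: float  # negative = persistently low entropy
    throughput: float               # matter/energy flow maintaining that state
    discharge_correlation: float    # correlation of internal electrical
                                    # discharges with surrounding events

def is_observer(r: Region) -> bool:
    # Crude, purely physical test: no reference to leaders, names or history.
    return (r.entropy_vs_surroundings < -0.5
            and r.throughput > 0.5
            and r.discharge_correlation > 0.8)

def nth_observer(universe, n: int) -> Region:
    # Step through the universe model, collect regions passing the test,
    # and return the nth one; only the description of n varies between
    # such programs.
    count = 0
    for r in universe:
        if is_observer(r):
            count += 1
            if count == n:
                return r
    raise ValueError("fewer than n observers found")

# Toy usage: George Washington is just whichever index he happens to occupy.
toy_universe = [Region(-0.9, 0.7, 0.9), Region(0.2, 0.1, 0.0),
                Region(-0.8, 0.9, 0.95)]
print(nth_observer(toy_universe, 2))
```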

Comment by drnickbone on [Link] Physics-based anthropics? · 2014-11-21T20:52:01.787Z · score: 1 (1 votes) · LW · GW

Upvoted for acknowledging a counterintuitive consequence, and "biting the bullet".

One of the most striking things about anthropics is that (seemingly) whatever approach is taken, there are very weird conclusions. For example: Doomsday arguments, Simulation arguments, Boltzmann brains, or a priori certainties that the universe is infinite. Sometimes all at once.

Comment by drnickbone on 2014 Less Wrong Census/Survey · 2014-11-14T21:06:38.666Z · score: 20 (20 votes) · LW · GW

Taken survey.

Comment by drnickbone on [Link] Physics-based anthropics? · 2014-11-14T20:26:21.697Z · score: 3 (3 votes) · LW · GW

If I understand correctly, this approach to anthropics strongly favours a simulation hypothesis: the universe is most likely densely packed with computing material ("computronium") and much of the computational resource is dedicated to simulating beings like us. Further, it also supports a form of Doomsday Hypothesis: simulations mostly get switched off before they start to simulate lots of post-human people (who are not like us) and the resource is then assigned to running new simulations (back at a human level).

Have I misunderstood?

Comment by drnickbone on Raven paradox settled to my satisfaction · 2014-08-12T22:37:09.821Z · score: 1 (2 votes) · LW · GW

One very simple resolution: observing a white shoe (or yellow banana, or indeed anything which is not a raven) very slightly increases the probability of the hypothesis "There are no ravens left to observe: you've seen all of them". Under the assumption that all observed ravens were black, this "seen-em-all" hypothesis then clearly implies "All ravens are black". So non-ravens are very mild evidence for the universal blackness of ravens, and there is no paradox after all.
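A toy Bayes calculation (numbers invented for illustration) shows both the direction and the mildness of the update. Let H be the "seen-em-all" hypothesis that no unobserved ravens remain:

```python
p_h = 0.01                 # prior: every raven has already been observed
p_obs_given_h = 1.0        # under H, the next observation must be a non-raven
p_obs_given_not_h = 0.999  # otherwise, non-ravens are merely very common

p_obs = p_h * p_obs_given_h + (1 - p_h) * p_obs_given_not_h
posterior = p_h * p_obs_given_h / p_obs
print(posterior)  # ~0.01001: a very slight increase, as claimed
```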

I find this resolution quite intuitive.

Comment by drnickbone on The insularity critique of climate science · 2014-07-09T21:39:50.540Z · score: 0 (0 votes) · LW · GW

P.S. If I draw one supportive conclusion from this discussion, it is that long-range climate forecasts are very likely to be wrong, simply because the inputs (radiative forcings) are impossible to forecast with any degree of accuracy.

Even if we'd had perfect GCMs in 1900, forecasts for the 20th century would likely have been very wrong: no one could have predicted the relative balance of CO2, other greenhouse gases and sulfates/aerosols (e.g. no one could have guessed the pattern of sudden sulfate growth after the 1940s, followed by levelling off after the 1970s). And natural factors like solar cycles, volcanoes and El Niño/La Niña wouldn't have been predictable either.

Similarly, changes in the 21st century could be very unexpected. Perhaps some new industrial process creates brand new pollutants with negative radiative forcing in the 2030s; but then the Amazon dies off in the 2040s, followed by a massive methane belch from the Arctic in the 2050s; then emergency geo-engineering goes into fashion in the 2070s (and out again in the 2080s); then in the 2090s there is a resurgence in coal, because the latest generation of solar panels has been discovered to be causing a weird new plague. Temperatures could be up and down like a yo-yo all century.

Comment by drnickbone on The insularity critique of climate science · 2014-07-09T20:42:55.837Z · score: 1 (1 votes) · LW · GW

> Actually, it's somewhat unclear whether the IPCC scenarios did better than a "no change" model -- it is certainly true over the short time period, but perhaps not over a longer time period where temperatures had moved in other directions.

There are certainly periods when temperatures moved in a negative direction (1940s-1970s), but then the radiative forcings over those periods (combination of natural and anthropogenic) were also negative. So climate models would also predict declining temperatures, which indeed is what they do "retrodict". A no-change model would be wrong for those periods as well.

Your most substantive point is that the complex models don't seem to be much more accurate than a simple forcing model (e.g. calculate net forcings from solar and various pollutant types, multiply by best estimate of climate sensitivity, and add a bit of lag since the system takes time to reach equilibrium; set sensitivity and lags empirically). I think that's true on the "broadest brush" level, but not for regional and temporal details e.g. warming at different latitudes, different seasons, land versus sea, northern versus southern hemisphere, day versus night, changes in maximum versus minimum temperatures, changes in temperature at different levels of the atmosphere etc. It's hard to get those details right without a good physical model of the climate system and associated general circulation model (which is where the complexity arises). My understanding is that the GCMs do largely get these things right, and make predictions in line with observations; much better than simple trend-fitting.
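For concreteness, here is a minimal sketch of that simple forcing model (my own toy implementation; the parameter values are illustrative, not fitted to data):

```python
def temperature_anomaly(forcings, sensitivity_per_doubling=3.0, lag_years=10.0):
    """Net forcings (W/m^2, one entry per year) -> temperature anomalies (deg C).

    Equilibrium response = sensitivity x forcing, approached with a simple
    exponential lag; 3.7 W/m^2 is the canonical forcing for doubled CO2.
    """
    k = sensitivity_per_doubling / 3.7  # deg C per W/m^2 at equilibrium
    temp, series = 0.0, []
    for f in forcings:
        temp += (k * f - temp) / lag_years  # lagged approach to equilibrium
        series.append(temp)
    return series

# Toy usage: a steady ramp in net forcing over a century
print(temperature_anomaly([0.02 * year for year in range(100)])[-1])
```

By construction such a model can only track the global-mean brush strokes; it has nothing to say about the regional and temporal details listed above.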

Comment by drnickbone on The insularity critique of climate science · 2014-07-09T19:27:24.680Z · score: 1 (1 votes) · LW · GW

Thanks for a comprehensive summary - that was helpful.

It seems that A&G contacted the working scientists to identify papers which (in the scientists' view) contained the most credible climate forecasts. Not many responded, but 30 referred to the recent (at the time) IPCC WG1 report, which in turn referenced and attempted to summarize over 700 primary papers. There also appear to have been a bunch of other papers cited by the surveyed scientists, but the site has lost them. So we're somewhat at a loss to decide which primary sources climate scientists find most credible/authoritative. (Which is a pity, because those would be worth rating, surely?)

However, A&G did their rating/scoring on the IPCC WG1 report, Chapter 8. But they didn't contact the climate scientists to help with this rating (or they did, but none of them answered?). They didn't attempt to dig into the 700 or so underlying primary papers, identify which of them contained climate forecasts and/or had been identified by the scientists as containing the most credible forecasts, and then rate those. Or even pick a random sample, and rate those? All that does sound just a tad superficial.

What I find really bizarre is their site's conclusion that because IPCC got a low score by their preferred rating principles, then a "no change" forecast is superior, and more credible! That's really strange, since "no change" has historically done much worse as a predictor than any of the IPCC models.

Comment by drnickbone on The insularity critique of climate science · 2014-07-09T17:37:22.928Z · score: 4 (4 votes) · LW · GW

On Critique #1:

Since you are using Real Climate and Skeptical Science as sources, did you read what they had to say about the Armstrong and Green paper and about Nate Silver's chapter?

Gavin Schmidt's post was short, funny but rude; however ChrisC's comment looks much more damning if true. Is it true?

Here is Skeptical Science on Nate Silver. It seems the main cause of error in Hansen's early 1988 forecast was an assumed climate sensitivity greater than that of the more recent models and calculations (4.2 degrees rather than 3 degrees). Whereas IPCC's 1990 forecast had problems predicting the inputs to global warming (amount of emissions, or radiative forcing for given emissions) rather than the outputs (resulting warming). Redoing accounting for these factors removes nearly all the discrepancy.

Comment by drnickbone on Quickly passing through the great filter · 2014-07-08T21:02:24.667Z · score: 2 (2 votes) · LW · GW

Actually, Kepler is able to determine the size of planet candidates directly, using the method of transit photometry (masses then follow from follow-up observations such as radial velocity measurements).

For further info, I found a non-paywalled copy of Buchhave et al's Nature paper. Figure 3 plots planet radius against star metallicity, and some of the planets are clearly of Earth radius or smaller. I very much doubt that it is possible to form gas "giants" of Earth size, and in any case they would have a mass much lower than Earth's, so would stand out immediately.

Comment by drnickbone on Quickly passing through the great filter · 2014-07-08T14:23:06.609Z · score: 1 (1 votes) · LW · GW

It might do, except that the recent astronomical evidence is against that: solar systems with sufficient metallicity to form rocky planets were appearing within a couple of billion years after the Big Bang. See here for a review.

Comment by drnickbone on Steelmanning Inefficiency · 2014-07-08T08:35:15.579Z · score: 3 (3 votes) · LW · GW

Hmmm... I'll have a go. One response is that the "fully general counter argument" is a true counter argument. You just used a clever rhetorical trick to stop us noticing that.

If what you are calling "efficiency" is not working for you, then you are - ahem - just not being very efficient! More revealingly, you have become fixated on the "forms" of efficiency (the metrics and tick boxes) and have lost track of the substance (adopting methods which take you closer to your true goals, rather than away from them). So you have steelmanned a criticism of formal efficiency, but not of actual efficiency.

Comment by drnickbone on Climate science: how it matters for understanding forecasting, materials I've read or plan to read, sources of potential bias · 2014-07-08T07:24:21.000Z · score: 3 (3 votes) · LW · GW

> Stephen McIntyre isn't a working climate scientist, but his criticism of Mann's statistical errors (which aren't necessarily relevant to the main arguments for AGW) has been acknowledged as essentially correct. I also took a reasonably detailed look at the specifics of the argument

Did you have a look at these responses? Or at Mann's book on the subject?

There are a number of points here, but the most compelling is that the statistical criticisms were simply irrelevant. Contrary to McIntyre and McKitrick's claims, the differences in principal component methodology make no difference to the proxy reconstructions. And the hockey stick graph has since been replicated dozens of times using multiple different proxies and methods, by multiple authors.

Comment by drnickbone on Climate science: how it matters for understanding forecasting, materials I've read or plan to read, sources of potential bias · 2014-07-07T23:52:08.442Z · score: 2 (2 votes) · LW · GW

On funding, it can be difficult to trace: see this article in Scientific American and the original paper plus the list of at least 91 climate counter-movement organisations, page 4, which have an annual income of over $900 million. A number of these organisations are known to have received funding by companies like Exxon and Koch Industries, though the recent trend appears to be more opaque funding through foundations and trusts.

On your particular sources, Climate Audit is on that list; also, from his Wikipedia bio it appears that Steve McIntyre was the founder and president of Northwest Explorations Inc. until it was taken over by CGX Resources to form the oil and gas exploration company CGX Energy Inc. He was a "strategic advisor" to CGX Energy at the time of his critique of the "hockey stick" in 2003.

Anthony Watts has received funding from the Heartland Institute, which is also on the list. He claims it was not for the WUWT blog, and that he approached them rather than the other way round.

Judith Curry has admitted to receiving "some funding from the fossil fuel industry" (DeSmogBlog, quoting Scientific American), though again she claims no correlation with her views.

Comment by drnickbone on Climate science: how it matters for understanding forecasting, materials I've read or plan to read, sources of potential bias · 2014-07-07T22:05:38.320Z · score: 1 (1 votes) · LW · GW

I've noticed that you've listed a lot of secondary sources (books, blogs, IPCC summaries) but not primary sources (published papers by scientists in peer-reviewed journals). Is there a reason for this e.g. that you do not have access to the primary sources, or find them indigestible?

If you do need to rely on secondary sources, I'd suggest to focus on books and blogs whose authors are also producing the primary sources. Of the blogs you mention, I believe that Real Climate and Skeptical Science are largely authored by working climate scientists, whereas the others are not.

Of course a number of the blogs convey the message that climate science as a whole is utterly partisan and biased, so any output of climate scientists through secondary sources and summaries is untrustworthy. If you can't analyse the underlying primary evidence, and do not assign negligible prior probability to such a mass scientific conspiracy (or mass scientific error) then it is hard to refute that mindset. But you still have to ask who has the greater incentives here: is it really poorly paid scientists pushing a conspiracy or collective fantasy to get a bit more funding, or is it highly paid lobbyists, firms and commentators defending a trillion dollar industry, one which would be doomed by serious action on climate change?

Comment by drnickbone on Quickly passing through the great filter · 2014-07-07T21:37:45.544Z · score: 0 (0 votes) · LW · GW

This sort of scenario might work if Stage 1 takes a minimum of 12 billion years, so that life has to first evolve slowly in an early solar system, then hop to another solar system by panspermia, then continue to evolve for billions of years more until it reaches multicellularity and intelligence. In that case, almost all civilisations will be emerging about now (give or take a few hundred million years), and we are either the very first to emerge, or others have emerged too far away to have reached us yet. This seems contrived, but gets round the need for a late filter.

Comment by drnickbone on Proper value learning through indifference · 2014-07-07T21:10:54.599Z · score: 1 (1 votes) · LW · GW

This all looks clever, apart from the fact that the AI becomes completely indifferent to arbitrary changes in its value system. The way you describe it, the AI will happily and uncomplainingly accept a switch from a friendly v (such as promoting human survival, welfare and settlement of the Galaxy) to an almost arbitrary w (such as making paperclips), just by pushing the right "update" buttons. An immediate worry is about who will be in charge of the update routine, and what happens if they are corrupt or make a mistake: if the AI is friendly, then it had better worry about this as well.

Interestingly, the examples you started with suggested that the AI should be rewarded somehow in its current utility v as a compensation for accepting a change to a different utility w. That does sound more natural, and more stable against rogue updates.

Comment by drnickbone on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-07-01T16:55:14.980Z · score: 1 (1 votes) · LW · GW

Thanks.... Upvoted for honest admission of error.

Comment by drnickbone on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-30T06:31:01.592Z · score: 1 (3 votes) · LW · GW

Or moving from conspiracy land, big budget cuts to climate research starting in 2009 might have something to do with it.

P.S. Since you started this sub-thread and are clearly still following it, are you going to retract your claims that CRU predicted "no more snow in Britain" or that Hansen predicted Manhattan would be underwater by now? Or are you just going to re-introduce those snippets in a future conversation, and hope no-one checks?

Comment by drnickbone on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-29T23:43:12.427Z · score: 1 (1 votes) · LW · GW

> Seems like a bad proxy to me. Is snowfall really that hard a metric to find...?

Presumably not, though since I'm not making up Met Office evidence (and don't have time to do my own analysis) I can only comment on the graphs which they themselves chose to plot in 2009. Snowfall was not one of those graphs (whereas it was in 2006).

However, the graphs of mean winter temperature, maximum winter temperature, and minimum winter temperature all point to the same trend as the air frost and heating-degree-day graphs. It would be surprising if numbers of days of snowfall were moving against that trend.

Comment by drnickbone on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-29T23:24:17.164Z · score: 1 (1 votes) · LW · GW

I'm sorry, but you are still making inaccurate claims about what CRU predicted and over what timescales.

The 20 year prediction referred specifically to heavy snow becoming unexpected and causing chaos when it happens. I see no reason at all to believe that will be false, or that it will have only a slim chance of being true.

The vague "few year" claim referred to snow becoming "rare and exciting". But arguably, that was already true in 2000 at the time of the article (which was indeed kind of the point of the article). So it's not necessary to argue about whether snow became even rarer later in the 2000s (or is becoming rarer slower than it used to), when there's really too little data to know over such a short period.

There was a totally undated claim referring to future children not seeing snow first-hand. You are clearly assuming that the "few year" time horizon also attached to that strong claim (and is therefore baloney); however, the article doesn't actually say that, and I rather doubt if CRU themselves ever said that. It does seem very unlikely to me that a climate scientist would ever make such a claim attached to a timescale of less than decades. (Though if they'd really meant hundreds of years, or billions of years, they'd presumably have said that: these guys really aren't like creationists).

Finally, the Independent put all of this under a truly lousy and misleading headline, when it is clear from what CRU actually said that snows were not and would not become a thing of the past (just rarer).

The general problem is that much of the newspaper article includes indirect speech, with only a few direct quotes, and the direct quotes aren't bound to a timescale (except the specific 20-year quote mentioned above). So it's hard to know exactly what CRU said.

Comment by drnickbone on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-29T19:35:35.861Z · score: 1 (1 votes) · LW · GW

P.S. On the more technical points, the 2009 reports do not appear to plot the number of days of snow cover or cold spells (unlike the 2006 report) so I simply referred to the closest proxies which are plotted.

The "filtering" is indeed a form of local smoothing transform (other parts of the report refer to decadal smoothing) and this would explains why the graphs stop in 2007, rather than 2009: you really need a few years either side of the plotted year to do the smoothing. I can't see any evidence that the decline in the 80s was somehow factored into the plot in the 2000s.

Comment by drnickbone on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-29T19:28:42.490Z · score: -1 (1 votes) · LW · GW

> I'm sorry, I didn't realize 'within a few years' was so vague in English that it could easily embrace decades and I'm being tendentious in thinking that after 14 years we can safely call that prediction failed.

Got it - so the semantics of "a few years" is what you are basing the "failed prediction" claim on. Fair enough.

I have to say though that I read the "few years" part as an imprecise period relating to an imprecise qualitative prediction (that snow would become "rare and exciting"). Which as far as my family is concerned has been true. Again in an imprecise and qualitative way. Also, climate scientists do tend to think over a longer term, so a "few years" to a climate scientist could easily mean a few decades.

And you're right, no further 5 year period would make snow "a thing of the past" but we already agreed that was the Independent's headline, and not CRU's actual prediction. Rare snow in the 2020s is different from no snow in the 2020s.

Comment by drnickbone on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-29T18:54:24.542Z · score: 1 (1 votes) · LW · GW

Sigh... The only dated prediction in the entire article related to 20 years, not 14 years, and the claim for 20 years was that snow would "probably" cause chaos then. Which you've just agreed is very likely to be true (based on some recent winters where some unexpected snow did cause chaos), but perhaps not that surprising (the quote did not in fact claim there would be more chaos than in the 1980s and 1990s).

All other claims had no specific dates, except to suggest generational changes (alluding to a coming generation of kids who would not have experienced snow themselves).

Regarding the evidence, I already gave you Met Office statistics, and explained why you can't get reliable trend info on a shorter timescale. You then asked anecdotal questions (is snow "rare and exciting", what would kids say if you asked them?) and I gave you anecdotal answers. But apparently that's not good enough either! Is there any set of evidence that would satisfy you?

Still, if you really want the statistics again: the very latest published Met Office set runs up to 2009, and the downward trend lines still continue all the way to the end of that data. See for instance this summary, figures 2.32 and 2.35.

So if you want to claim that the trend in snow has recently stopped/reversed, then you are looking at a very short period (some cold winters in 2010-14). And over periods that short, it's entirely possible we'll have another shift and be back onto the historic trend for the next five year period. So "catch up in six years" doesn't sound so implausible after all.

Comment by drnickbone on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-29T17:02:47.417Z · score: 2 (2 votes) · LW · GW

> What's the date?

By your reaction, and the selective down votes, I have apparently fallen asleep, it is the 2020s already, and a 20-year prediction is already falsified.

But in answer to your questions:

A) Heavy snow does indeed already cause chaos in England when it happens (just google the last few years)

B) My kids do indeed find snow a rare and exciting event (in fact there were zero days of snow here last winter, and only a few days the winter before)

C) While my kids do have a bit of firsthand knowledge of snow, it is vastly less than my own experience at their age, which in turn was much less than my parents' experience.

If you are a resident of England yourself, and have other experiences, then please let me know...

Comment by drnickbone on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-29T07:54:48.099Z · score: 1 (3 votes) · LW · GW

I think we have agreement that:

A) The newspaper headline "Snowfalls are now just a thing of the past" was incorrect

B) The Climatic Research Unit never actually made such a prediction

C) The only quoted statement with a timeline was for a period of 20 years, and spoke of heavy snow becoming rarer (rather than vanishing)

D) This was an extrapolation of a longer term trend, which continued into the early 2000s (using Met Office data published in 2006, of course after the Independent story)

E) It is impossible to use short periods (~10 years since 2006) to decide whether such a climatic trend has stopped or reversed.

I can't see how that counts as a failed prediction by the CRU (rather than the Independent newspaper). If the CRU had said "there will be less snow in every subsequent year from now, for the next 20 years, in a declining monotonic trend" then that would indeed be a failed prediction. However, the CRU did not make such a prediction... no serious climate researcher would.

Comment by drnickbone on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-28T23:39:03.076Z · score: 2 (4 votes) · LW · GW

"Over the 2000s" is certainly too short a period to reach significant conclusions. However the longer term trends are pretty clear. See this Met Office Report from 2006.

Figure 8 shows a big drop in the length of cold spells since the 1960s. Figure 13 shows the drop in annual days of snow cover. The trend looks consistent across the country.

Comment by drnickbone on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-28T23:18:24.769Z · score: 6 (8 votes) · LW · GW

Regarding the wine point, it is doubtful if wine grapes ever grew in Newfoundland, as the Norse term "Vinland" may well refer to a larger area. From the Wikipedia article:

> the southernmost limit of the Norse exploration remains a subject of intense speculation. Samuel Eliot Morison (1971) suggested the southern part of Newfoundland; Erik Wahlgren (1986) Miramichi Bay in New Brunswick; and Icelandic climate specialist Pall Bergthorsson (1997) proposed New York City.[26] The insistence in all the main historical sources that grapes were found in Vinland suggests that the explorers ventured at least to the south side of the St. Lawrence River, as Jacques Cartier did 500 years later, finding both wild vines and nut trees.[27] Three butternuts were a further important find at L'Anse Aux Meadows: another species which grows only as far north as the St. Lawrence

Also, wine grapes certainly do grow in England these days (not just in the Medieval period). There appear to be around 400 vineyards in England currently.

Comment by drnickbone on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-28T22:30:00.801Z · score: 2 (4 votes) · LW · GW

Reading your referenced article (Independent 2000):

> Heavy snow will return occasionally, says Dr Viner, but when it does we will be unprepared. "We're really going to get caught out. Snow will probably cause chaos in 20 years time," he said.

Clearly the Climatic Research Unit was not predicting no more snow in Britain by 2014.

Regarding the alleged "West Side Highway underwater" prediction, see Skeptical Science. It appears Hansen's original prediction timeframe was 40 years not 20 years, and conditional on a doubling of CO2 by then.

Comment by drnickbone on On Terminal Goals and Virtue Ethics · 2014-06-18T16:34:51.602Z · score: 2 (2 votes) · LW · GW

Note that this also messes up counterfactual accounts of knowledge as in "A is true and I believe A; but if A were not true then I would not believe A". (If I were not insane, then I would not believe I am Nero, so I would not believe I am insane.)

We likely need some notion of "reliability" or "reliable processes" in an account of knowledge, like "A is true and I believe A and my belief in A arises through a reliable process". Believing things through insanity is not a reliable process.

Gettier problems arise because processes that are usually reliable can become unreliable in some (rare) circumstances, but still (by even rarer chance) get the right answers.

Comment by drnickbone on Siren worlds and the perils of over-optimised search · 2014-04-29T11:30:44.623Z · score: 0 (0 votes) · LW · GW

Except that acting to prevent other AIs from being built would also encroach on human liberty, and probably in a very major way if it was to be effective! The AI might conclude from this that liberty is a lost cause in the long run, but it is still better to have a few extra years of liberty (until the next AI gets built), rather than ending it right now (through its own powerful actions).

Other provocative questions: how much is liberty really a goal in human values (when taking the CEV for humanity as a whole, not just liberal intellectuals)? How much is it a terminal goal, rather than an instrumental goal? Concretely, would humans actually care about being ruled over by a tyrant, as long as it was a good tyrant? (Many people are attracted to the idea of an all-powerful deity for instance, and many societies have had monarchs who were worshipped as gods.) Aren't mechanisms like democracy, separation of powers etc mostly defence mechanisms against a bad tyrant? Why shouldn't a powerful "good" AI just dispense with them?

Comment by drnickbone on Siren worlds and the perils of over-optimised search · 2014-04-29T10:18:15.266Z · score: 0 (2 votes) · LW · GW

This also creates some interesting problems... Suppose a very powerful AI is given human liberty as a goal (or discovers that this is a goal using coherent extrapolated volition). Then it could quickly notice that its own existence is a serious threat to that goal, and promptly destroy itself!

Comment by drnickbone on Siren worlds and the perils of over-optimised search · 2014-04-22T08:22:29.542Z · score: 2 (2 votes) · LW · GW

One issue here is that worlds with an "almost-friendly" AI (one whose friendliness was botched in some respect) may end up looking like siren or marketing worlds.

In that case, worlds as bad as sirens will be rather too common in the search space (because AIs with botched friendliness are more likely than AIs with true friendliness) and a satisficing approach won't work.

Comment by drnickbone on Open Thread March 31 - April 7 2014 · 2014-04-10T21:23:56.577Z · score: 0 (0 votes) · LW · GW

Well you can make such comparisons if you allow for empathic preferences (imagine placing yourself in someone else's position and asking how good or bad that would be, relative to some other position). Also the fact that human behavior doesn't perfectly fit a utility function is not in itself a huge issue: just apply a best-fit function (this is the "revealed preference" approach to utility).
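A minimal sketch of that best-fit idea (toy choices and a one-parameter utility family of my own choosing): pick the utility function that rationalises the most observed decisions:

```python
# Each observation: (gamble_prize, gamble_prob, sure_amount, chose_gamble)
data = [(100, 0.5, 30, True), (100, 0.5, 60, False), (100, 0.5, 45, True)]

def choices_explained(a):
    # Count choices consistent with expected utility under u(x) = x**a.
    return sum((prob * prize**a > sure**a) == chose_gamble
               for prize, prob, sure, chose_gamble in data)

best = max((choices_explained(a), a) for a in (0.3, 0.5, 0.7, 1.0))
print(best)  # (number of choices explained, best-fit exponent)
```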

Ken Binmore has a rather good paper on this topic, see here.

Comment by drnickbone on Open Thread March 31 - April 7 2014 · 2014-04-01T16:54:20.382Z · score: 2 (2 votes) · LW · GW

OK, I also got a "non-cheat" solution: unfortunately, it is non-constructive and uses the Nkvbz bs Pubvpr, so it still feels like a bit of a cheat. Is there a solution which doesn't rely on that (or is it possible to show there is no solution in such a case?)

Comment by drnickbone on Open Thread March 31 - April 7 2014 · 2014-04-01T16:23:49.474Z · score: 3 (3 votes) · LW · GW

Oh dear, I suppose that rules out other "cheats" then, such as prisoner n guessing after n seconds. At any point in time, only finitely many have guessed, so only finitely many have guessed wrong. Hence the prisoners can never be executed. (Though they can never be released either.)

Comment by drnickbone on Open Thread March 31 - April 7 2014 · 2014-04-01T07:46:54.414Z · score: 2 (2 votes) · LW · GW

I suspect an April Fool:

Cevfbare a+1 gnxrf gur ung sebz cevfbare a naq chgf vg ba uvf bja urnq. Gura nyy cevfbaref (ncneg sebz cevfbare 1) thrff gur pbybe pbeerpgyl!

Comment by drnickbone on Irrationality Game III · 2014-03-28T10:26:56.973Z · score: 0 (0 votes) · LW · GW

> > As one example, imagine a long chain of possible people whose experiences and memories are indistinguishable from immediate neighbours in the chain (and they are counterparts of their neighbours). But there is a cumulative "drift" along the chain, so that the ends are very different from each other (and not counterparts).

> UDT doesn't seem to work this way. In UDT, "you" are not a physical entity but an abstract decision algorithm. This abstract decision algorithm is correlated to different extent with different physical entities in different worlds. This leads to the question of whether some algorithms are more "conscious" than others. I don't think UDT currently has an answer for this, but neither do other frameworks.

I think it works quite well with "you" as a concrete entity. Simply use the notion that "your" decisions are linked to those of your counterparts (and indeed, to other agents), such that if you decide in a certain way in given circumstances, your counterparts will decide that way as well. The linkage will be very tight for neighbours in the chain, but diminishing gradually with distance, and such that the ends of the chain are not linked at all. This - I think - addresses the problem of trying to identify what algorithm you are implementing, or partitioning possible people into those who are running "the same" algorithm.

Comment by drnickbone on Irrationality Game III · 2014-03-26T09:11:37.435Z · score: 1 (1 votes) · LW · GW

> It is not the case if the money can be utilized in a manner with long term impact.

OK, I was using $ here as a proxy for utils, but technically you're right: the bet should be expressed in utils (as for the general definition of a chance that I gave in my comment). Or if you don't know how to bet in utils, use another proxy which is a consumptive good and can't be invested (e.g. chocolate bars or vouchers for a cinema trip this week). A final loop-hole is the time discounting: the real versions of you mostly live earlier than the sim versions of you, so perhaps a chocolate bar for the real "you" is worth many chocolate bars for sim "you"s? However we covered that earlier in the thread as well: my understanding is that your effective discount rate is not high enough to outweigh the huge numbers of sims.

> An unambiguous recipe cannot exist since it would have to give precise answers to ambiguous questions such as: if there are two identical simulations of you running on two computers, should they be counted as two copies or one?

Well this is your utility function, so you tell me! Imagine a hacker is able to get into the simulations and replace pleasant experiences by horrible torture. Does your utility function care twice as much if he hacks both simulations versus hacking just one of them? (My guess is that it does). And this style of reasoning may cover limit cases like a simulation running on a wafer which is then cut in two (think about whether the sims are independently hackable, and how much you care.)

Comment by drnickbone on Irrationality Game III · 2014-03-25T12:59:05.035Z · score: 1 (1 votes) · LW · GW

> > So: if a bet is offered that you are a sim (in some form of computronium) and it becomes possible to test that (and so decide the bet one way or another), you would bet heavily on being a sim?
>
> It depends on the stakes of the bet.

I thought we discussed an example earlier in the thread? The gambler pays $1000 if not in a simulation; the bookmaker pays $1 if the gambler is in a simulation. In terms of expected utility, it is better for "you" (that is, all linked instances of you) to take the gamble, even if the vast majority of light-cones don't contain simulations.

> It is meaningless to speak of the "chance I am a sim": some copies of me are sims, some copies of me are not sims

No it isn't meaningless: chances simply become operationalised in terms of bets, or other decisions with variable payoff. The "chance you are a sim" becomes equal to the fraction of a util you are prepared to pay for a betting slip which pays out one util if you are a sim, and pays nothing otherwise. (Lots of linked copies of "you" take the gamble; some win, some lose.)
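Concretely (a toy version of the bet from earlier in the thread, with the dollar stakes standing in for utils):

```python
def expected_value(p_sim, win=1.0, lose=1000.0):
    # The gambler receives `win` if a sim and pays `lose` if not a sim.
    return p_sim * win - (1 - p_sim) * lose

# The break-even price of the betting slip is lose/(win + lose) = 1000/1001,
# so taking the gamble is rational only if almost all linked copies are sims.
for p in (0.9, 0.999, 0.9995):
    print(p, expected_value(p))
```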

Incidentally, in terms of original modal realism (due to David Lewis), "you" are a concrete unique individual who inhabits exactly one world, but it is unknown which one. Other versions of "you" are your "counterparts". It is usually not possible to group all your counterparts together and treat them as a single (distributed) being, YOU, because the counterpart relation is not an equivalence relation (it doesn't partition possible people into neat equivalence classes). As one example, imagine a long chain of possible people whose experiences and memories are indistinguishable from immediate neighbours in the chain (and they are counterparts of their neighbours). But there is a cumulative "drift" along the chain, so that the ends are very different from each other (and not counterparts).

> Subjective expectations are meaningless in UDT. So there is no "what we should expect to see".

A subjective expectation is rather like a bet: it is a commitment of mental resource to modelling certain lines of future observations (and preparing decisions for such a case). If you spend most of your modelling resource on a scenario which doesn't materialise, this is like losing the bet. So it is reasonable to talk about subjective expectations in UDT; just model them as bets.

> Does it have to stay dogmatically committed to Occam's razor in the face of whatever it sees? If not, how would it arrive at a replacement without using Occam's razor?

Occam's razor here is just a method for weighting hypotheses in the prior. It is only "dogmatic" if the prior assigns weights in such an unbalanced way that no amount of evidence will ever shift the weights. If your prior had truly massive weight (e.g. infinite weight) in favour of many worlds, then it would never shift, so that looks dogmatic. But to be honest, I rather doubt this. You weren't born believing in the many worlds interpretation (or in modal realism) and if you are a normal human being you most likely regarded it as quite outlandish at some point. Then some line of evidence or reasoning caused you to shift your opinion (e.g. because it seemed simpler, or overall a better explanation for physical evidence). If it shifted one way, then considering other evidence could shift it back again.

Comment by drnickbone on Irrationality Game III · 2014-03-21T17:29:09.788Z · score: 1 (1 votes) · LW · GW

> I don't think it does. If we are not in a sim, our actions have potentially huge impact since they can affect the probability and the properties of a hypothetical expanded post-human civilization.

So: if a bet is offered that you are a sim (in some form of computronium) and it becomes possible to test that (and so decide the bet one way or another), you would bet heavily on being a sim? But on the off-chance that you are not a sim, you're going to make decisions as if you were in the real world, because those decisions (when suitably generalized across all possible light-cones) have a huge utility impact. Is that right?

The problem I have is this only works if your utility function is very impartial (it is dominated by "pro bono universo" terms, rather than "what's in it for me" or "what's in it for us" terms). Imagine for instance that you work really hard to ensure a positive singularity, and succeed. You create a friendly AI, it starts spreading, and gathering huge amounts of computational resources... and then our simulation runs out of memory, crashes, and gets switched off. This doesn't sound like it is a good idea "for us" does it?

This all seems to be part of a general problem with asking UDT to model selfish (or self-interested) preferences. Perhaps it can't. In which case UDT might be a great decision theory for saints, but not for regular human beings. And so we might not want to program UDT into our AI in case that AI thinks it's a good idea to risk crashing our simulation (and killing us all in the process).

> In UDT it doesn't make sense to speak of what "actually exists". Everything exists, you just assign different weights to different parts of "everything" when computing utility.

I've remarked elsewhere that UDT works best against a background of modal realism, and that's essentially what you've said here. But here's something for you to ponder. What if modal realism is wrong? What if there is, in fact, evidence that it is wrong, because the world as we see it is not what we should expect to see if it was right? Isn't it maybe a good idea to then - er - update on that evidence?

Or does a UDT agent have to stay dogmatically committed to modal realism in the face of whatever it sees? That doesn't seem very rational does it?

Comment by drnickbone on Irrationality Game III · 2014-03-20T15:01:57.407Z · score: 0 (0 votes) · LW · GW

> No, it can be located absolutely anywhere. However you're right that the light cones with vertex close to the Big Bang will probably have large weight due to low K-complexity.

Ah, I see what you're getting at. If the vertex is at the Big Bang, then the shortest programs basically simulate a history of the observable universe. Just start from a description of the laws of physics and some (low entropy) initial conditions, then read in random bits whenever there is an increase in entropy. (For technical reasons the programs will also need to simulate a slightly larger region just outside the light cone, to predict what will cross into it).

If the vertex lies elsewhere, the shortest programs will likely still simulate starting from the Big Bang, then "truncate" i.e. shift the vertex to a new point (s, t) and throw away anything outside the reduced light cone. So I suspect that this approach gives a weighting rather like 2^-K(s,t) for light-cones which are offset from the Big Bang. Probably most of the weight comes from programs which shift in t but not much in s.

> The temporal discount here can be fast e.g. exponential.

That's what I thought you meant originally: this would ensures that the utility in any given light-cone is bounded, and hence that the expected utility converges.

> > ...given that a super-strong future filter looks very unlikely, most of the probability will be concentrated on models where there are only a few civilisations to start with.
>
> This looks correct, but it is different from your initial argument. In particular there's no reason to believe MWI is wrong or anything like that.

I disagree. If models like MWI and/or eternal inflation are taken seriously, then they imply the existence of a huge number of civilisations (spread across multiple branches or multiple inflating regions), and a huge number of expanded civilisations (unless the chance of expansion is exactly zero). Observers should then predict that they will be in one of the expanded civilisations. (Or in UDT terms, they should take bets that they are in such a civilisation). Since our observations are not like that, this forces us into simulation conclusions (most people making our observations are in sims, so that's how we should bet). The problem is still that there is a poor fit to observations: yes we could be in a sim, and it could look like this, but on the other hand it could look like more or less anything.

Incidentally, there are versions of inflation and many worlds which don't run into that problem. You can always take a "local" view of inflation (see for instance these papers), and a "modal" interpretation of many worlds (see here). Combined, these views imply that all that actually exists is within one branch of a wave function constructed over one observable universe. These "cut-down" interpretations make either the same physical predictions as the "expansive" interpretations, or better predictions, so I can't see any real reason to believe in the expansive versions.

Comment by drnickbone on Irrationality Game III · 2014-03-19T19:40:39.178Z · score: 0 (0 votes) · LW · GW

> As a result, the effective discount falls off as 2^{-Kolmogorov complexity of t} which is only slightly faster than 1/t.

It is about 1/t x 1/log t x 1/log log t etc. for most values of t (taking base 2 logarithms). There are exceptions for very regular values of t.
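A quick numerical illustration of that iterated-log product (my own code, for typical t only):

```python
import math

def typical_discount(t):
    # 2^-K(t) for "typical" t behaves like 1/(t * log2(t) * log2(log2(t)) ...),
    # only slightly faster than 1/t. Very regular t (powers of two, towers
    # of exponents) are exceptions and get much larger weight.
    d, x = 1.0 / t, math.log2(t)
    while x > 2:
        d /= x
        x = math.log2(x)
    return d

for t in (10**3, 10**6, 10**9):
    print(t, typical_discount(t))
```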

Incidentally, I've been thinking about a similar weighting approach towards anthropic reasoning, and it seems to avoid a strong form of the Doomsday Argument (one where we bet heavily against our civilisation expanding). Imagine listing all the observers (or observer moments) in order of appearance since the Big Bang (use cosmological proper time). Then assign a prior probability 2^-K(n) to being the nth observer (or moment) in that sequence.

Now let's test this distribution against my listed hypotheses above:

1. No other civilisations exist or have existed in the universe apart from us.

Fit to observations: Not too bad. After including the various log terms in 2^-K(n), the probability of me having an observer rank n between 60 billion and 120 billion (we don't know it more precisely than that) seems to be about 1/log (60 billion) x 1/log (36) or roughly 1/200.
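(Reproducing that arithmetic, with base-2 logarithms as stated:)

```python
import math
# P(60e9 <= n <= 120e9) ~ 1/log2(60e9) x 1/log2(36),
# since log2(log2(60e9)) is about log2(36)
print(1 / (math.log2(60e9) * math.log2(36)))  # ~1/185, i.e. roughly 1/200
```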

Still, the hypothesis seems a bit dodgy. How could there be exactly one civilisation over such a large amount of space and time? Perhaps the evolution of intelligence is just extraordinarily unlikely, a rare fluke that only happened once. But then the fact that the "fluke" actually happened at all makes this hypothesis a poor fit. A better hypothesis is that the chance of intelligence evolving is high enough to ensure that it will appear many times in the universe: Earth-now is just the first time it has happened. If observer moments were weighted uniformly, we would rule that out (we'd be very unlikely to be first), but with the 2^-K(n) weighting, there is rather high probability of being a smaller n, and so being in the first civilisation. So this hypothesis does actually work. One drawback is that living 13.8 billion years after the Big Bang, and with only 5% of stars still to form, we may simply be too late to be the first among many. If there were going to be many civilisations, we'd expect a lot of them to have already arrived.

Predictions for Future of Humanity: No doomsday prediction at all; the probability of my n falling in the range 60-120 billion is the same sum over 2^-K(n) regardless of how many people arrive after me. This looks promising.

2. A few have existed apart from us, but none have expanded (yet)

Fit to observations: Pretty good e.g. if the average number of observers per civilisation is less than 1 trillion. In this case, I can't know what my n is (since I don't know exactly how many civilisations existed before human beings, or how many observers they each had). What I can infer is that my relative rank within my own civilisation will look like it fell at random between 1 and the average population of a civilisation. If that average population is less than 1 trillion, there will be a probability of > 1 in 20 of seeing a relative rank like my current one.

Predictions for Future of Humanity: There must be a fairly low probability of expanding, since other civilisations before us didn't expand. If there were 100 of them, our own estimated probability of expanding would be less than 0.01 and so on. But notice that we can't infer anything in particular about whether our own civilisation will expand: if it does expand (against the odds) then there will be a very large number of observer moments after us, but these will fall further down the tail of the Kolmogorov distribution. The probability of my having a rank n where it is (at a number before the expansion) doesn't change. So I shouldn't bet against expansion at odds much different from 100:1.

3. A few have existed, and a few have expanded, but we can't see them (yet)

Fit to observations: Poor. Since some civilisations have already expanded, my own n must be very high (e.g. up in the trillions of trillions). But then most values of n which are that high and near to my own rank will correspond to observers inside one of the expanded civilisations. Since I don't know my own n, I can't expect it to just happen to fall inside one of the small civilisations. My observations look very unlikely under this model.

Predictions for Future of Humanity: Similar to 2

4. Lots have existed, but none have expanded (very strong future filter)

Fit to observations: Mixed. It can be made to fit if the average number of observers per civilisation is less than 1 trillion; this is for reasons similar to 2. While that gives a reasonable degree of fit, the prior likelihood of such a strong filter seems low.

Predictions for Future of Humanity: Very pessimistic, because of the strong universal filter.

5. Lots have existed, and a few have expanded (still a strong future filter), but we can't see the expanded ones (yet)

Fit to observations: Poor. Things could still fit if the average population of a civilisation is less than a trillion. But that requires that the small, unexpanded, civilisations massively outnumber the big, expanded ones: so much so that most of the population is in the small ones. This requires an extremely strong future filter. Again, the prior likelihood of this strength of filter seems very low.

Predictions for Future of Humanity: Extremely pessimistic, because of the strong universal filter.

6. Lots have existed, and lots have expanded, so the universe is full of expanded civilisations; we don't see that, but that's because we are in a zoo or simulation of some sort.

Fit to observations: Poor: even worse than in case 5. Most values of n close to my own (enormous) value of n will be in one of the expanded civilisations. The most likely case seems to be that I'm in a simulation; but still there is no reason at all to suppose the simulation would look like this.

Predictions for Future of Humanity: Uncertain. A significant risk is that someone switches our simulation off, before we get a chance to expand and consume unavailable amounts of simulation resources (e.g. by running our own simulations in turn). This switch-off risk is rather hard to estimate. Most simulations will eventually get switched off, but the Kolmogorov weighting may put us into one of the earlier simulations, one which is running when lots of resources are still available, and doesn't get turned off for a long time.

Comment by drnickbone on Irrationality Game III · 2014-03-19T18:05:19.595Z · score: 0 (0 votes) · LW · GW

> I am using a future light cone whereas your alternatives seem to be formulated in terms of a past light cone.

I was assuming that the "vertex" of your light cone is situated at or shortly after the Big Bang (e.g. maybe during the first few minutes of nucleosynthesis). In that case, the radius of the light cone "now" (at t = 13.8 billion years since Big Bang) is the same as the particle horizon "now" of the observable universe (roughly 45 billion light-years). So the light-cone so far (starting at Big Bang and running up to 13.8 billion years) will be bigger than Earth's past light-cone (starting now and running back to the Big Bang) but not massively bigger.

This means that there might be a few expanded simulations who are outside our past light-cone (so we don't see them now, but could run into them in the future). Still if there are lots of civilisations in your light cone, and only a few have expanded, that still implies a very strong future filter. So my main point remains: given that a super-strong future filter looks very unlikely, most of the probability will be concentrated on models where there are only a few civilisations to start with (so not many to get filtered out; a modest filter does the trick).

> The effective time discount function is of rather slow decay because the sum over universes includes time translated versions of the same universe. As a result, the effective discount falls off as 2^{-Kolmogorov complexity of t} which is only slightly faster than 1/t.

Ahh... I was assuming you discounted faster than that, since you said the utilities converged. There is a problem with Kolmogorov discounting of t. Consider what happens at t = 3^^^3 years from now. This has Kolmogorov complexity K(t) much much less than log(3^^^3): in most models of computation K(t) will be a few thousand bits or less. But the width of the light-cone at t is around 3^^^3, so the utility at t is dominated by around 3^^^3 Boltzmann Brains, and the product U(t) 2^-K(t) is also going to be around 3^^^3. You'll get similar large contributions at t = 4^^^^4 and so on; in short I believe your summed discounted utility is diverging (or in any case dominated by the Boltzmann Brains).

One way to fix this may be to discount each location in space and time (s,t) by 2^-K(s,t) and then let u(s,t) represent a utility density (say the average utility per Planck volume). Then sum over u(s,t).2^-K(s, t) for all values of (s,t) in the future light-cone. Provided the utility density is bounded (which seems reasonable), then the whole sum converges.
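In symbols (my notation, not from the thread): if the utility density is bounded by u_max, the Kraft inequality for prefix complexity, sum over x of 2^-K(x) <= 1, gives

$$\Big|\sum_{(s,t)} u(s,t)\,2^{-K(s,t)}\Big| \;\le\; u_{\max} \sum_{(s,t)} 2^{-K(s,t)} \;\le\; u_{\max},$$

so the whole sum converges absolutely.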

Comment by drnickbone on Friendly AI ideas needed: how would you ban porn? · 2014-03-18T21:39:49.700Z · score: 0 (0 votes) · LW · GW

A "few" rich guys buying (overpriced) porn is unlikely to sustain a real porn industry. Also, using rich guy logic, it is probably a better investment to buy the sculptures, paintings, art house movies etc, amuse yourself with those for a while, then sell them on. Art tends to appreciate over time.

Comment by drnickbone on Friendly AI ideas needed: how would you ban porn? · 2014-03-18T08:30:01.206Z · score: 1 (1 votes) · LW · GW

A couple of thoughts here:

  1. Set a high minimum price for anything arousing (say $1000 a ticket). If it survives in the market at that price, it is erotica; if it doesn't, it was porn. This also works for $1000 paintings and sculptures (erotica) compared to $1 magazines (porn).

  2. Ban anything that is highly arousing for males but not generally liked by females. Variants on this: require an all-female board of censors; or invite established couples to view items together, and then question them separately (if they both liked it, it's erotica). Train the AI on examples until it can classify independently of the board or couples.

Comment by drnickbone on Irrationality Game III · 2014-03-17T13:07:12.650Z · score: 0 (0 votes) · LW · GW

> If the universe were to survive 280 billion years, then that would put us within the first 5% of the universe's lifespan. So, if we take an alpha of 5%, we can reject the hypothesis that the universe will last more than 280 billion years.

That sounds like "Copernican" reasoning (assume you are at a random point in time) rather than "anthropic" reasoning (assume you are a random observer from a class of observers). I'm not surprised the Copernican approach gives daft results, because the spatial version (assume you are at a random point in space) also gives daft results: see here in this thread, point 2.

Incidentally, there is a valid anthropic version of your argument: the prediction is that the universe will be uninhabitable 280 billion years from now, or at least contain many fewer observers than it does now. However, in that case, it looks like a successful prediction. The recent discovery that the stars are beginning to go out and that 95% of stars that will ever form have formed already is just the sort of thing that would be expected under anthropic reasoning. But it is totally surprising otherwise.

> We can also reject the hypothesis that more than 4 trillion human lives will take place

The correct application of anthropic reasoning only rejects this as a hypothesis about the average number of observers in a civilisation, not about human beings specifically. If we knew somehow (on other grounds) that most civilisations make it to 10 trillion observers, we wouldn't predict any less for human beings.

> that any given 1-year-old will reach the age of 20,

That's an instance of the same error: anthropic reasoning does NOT reject the particular hypothesis. We already know that an average human lifespan is greater than 20, so we have no reason to predict less than 20 for a particular child. (The reason is that observing one particular child at age 1 as a random observation from the set of all human observations is no less probable if she lives to 100 than if she lives to 2).

> The probability that the right sperm would fertilize the right egg and I would be conceived is much less than 1 in a billion, but that doesn't mean I think I need a new model

Anthropic reasoning is like any Bayesian reasoning: observations only count as evidence between hypotheses if they are more likely on one hypothesis than another. Also, hypotheses must be fairly likely a priori to be worth considering against the evidence. Suppose you somehow got a precise observation of sperm meeting egg to make you, with a genome analysis of the two: that exact DNA readout would be extremely unlikely under the hypothesis of the usual laws of physics, chemistry and biology. But that shouldn't make you suspect an alternative hypothesis (e.g. that you are some weird biological experiment, or a special child of god) because that exact DNA readout is extremely unlikely on those hypotheses as well. So it doesn't count as evidence for these alternatives.

> The probability of being born prior to a galactic-wide expansion may be very low, but someone has to be born before the expansion. What's so special about me, that I should reject the possibility that I am such a person?

If all hypotheses gave extremely low probability of being born before the expansion, then you are correct. But the issue is that some hypotheses give high probability that an observer finds himself before expansion (the hypotheses where no civilisations expand, and all stay small). So your observations do count as evidence to decide between the hypotheses.

[LINK] IBM simulate a "brain" with 500 billion neurons and 100 trillion synapses

2012-11-21T22:23:26.193Z · score: 5 (12 votes)

How do we really escape Prisoners' Dilemmas?

2012-08-31T23:36:57.618Z · score: 1 (10 votes)

Problematic Problems for TDT

2012-05-29T15:41:37.964Z · score: 36 (48 votes)

Sneaky Strategies for TDT

2012-05-25T16:13:13.741Z · score: 8 (13 votes)

Self-Indication Assumption - Still Doomed

2012-01-28T23:09:14.240Z · score: 2 (4 votes)

Doomsday Argument with Strong Self-Sampling Assumption

2012-01-20T23:50:25.636Z · score: 7 (12 votes)