Posts

C19 Prediction Survey Thread 2020-03-30T00:53:49.375Z
Iceland's COVID-19 random sampling results: C19 similar to Influenza 2020-03-28T18:26:00.903Z
jacob_cannell's Shortform 2020-03-25T05:20:32.610Z
[Link]: KIC 8462852, aka WTF star, "the most mysterious star in our galaxy", ETI candidate, etc. 2015-10-20T01:10:30.548Z
The Unfriendly Superintelligence next door 2015-07-02T18:46:22.116Z
Analogical Reasoning and Creativity 2015-07-01T20:38:38.658Z
The Brain as a Universal Learning Machine 2015-06-24T21:45:33.189Z
[Link] Word-vector based DL system achieves human parity in verbal IQ tests 2015-06-13T23:38:54.543Z
Resolving the Fermi Paradox: New Directions 2015-04-18T06:00:33.871Z
Transhumanist Nationalism and AI Politics 2015-04-11T18:39:42.133Z
Resurrection through simulation: questions of feasibility, desirability and some implications 2012-05-24T07:22:20.480Z
The Generalized Anti-Pascal Principle: Utility Convergence of Infinitesimal Probabilities 2011-12-18T23:47:31.817Z
Feasibility of Creating Non-Human or Non-Sentient Machine Intelligence 2011-12-10T03:49:27.656Z
Subjective Relativity, Time Dilation and Divergence 2011-02-11T07:50:44.489Z
Fast Minds and Slow Computers 2011-02-05T10:05:33.734Z
Rational Health Optimization 2010-09-18T19:47:02.687Z
Anthropomorphic AI and Sandboxed Virtual Universes 2010-09-03T19:02:03.574Z
Dreams of AIXI 2010-08-30T22:15:04.520Z

Comments

Comment by jacob_cannell on Taking Initial Viral Load Seriously · 2020-04-01T20:31:34.241Z · LW · GW
The first is that we have a strong mechanism story we can tell. Viruses take time to multiply. When the immune system detects a virus it responds. If your initial viral load is low your immune system gets a head start, so you do better. 

The problem with this story is that it assumes that immune system detection time is not dependent on viral load, which seems highly unlikely. The more viral particles, the more likely they will be detected. How that interacts with viral load's more obvious direct effects is complex and probably virus strain dependent.

The second category is the terrible outcomes in health care workers on the front lines. Those who are dealing with the crisis first hand are dealing with lots of intense exposures to the virus. When they do catch it, they are experiencing high death rates.

Evidence/source?

Your third category on analogy evidence from other viruses makes sense, with the single example from the other SARS coronavirus carrying more weight as it's a much closer relation than smallpox or measles.

Comment by jacob_cannell on The attack rate estimation is more important than CFR · 2020-04-01T20:19:23.392Z · LW · GW
For example, 712 of 3700 people on DM became ill, which gives crude AR = 19.24 per cent.

(I think you meant DP for Diamond Princess)

This is a lower bound on the number infected. From what I understand, PCR viral detection peaks in the 80% to 90% range a few days after exposure, but then falls off to 20% or lower after about a week or two on average (but at some variable rate depending on immune interaction as you mention).

They didn't test everyone quickly or frequently enough for the known case number to be a tight bound on the true case number. From what we know of the PCR false-negative time curve, it seems likely that the true AR on DP is anywhere from 30% to as high as 60%.

If we model the PCR detection time curve as being age dependent (which seems reasonable), then that predicts that the AR was probably above 50% on the submarine and perhaps DP - it just wasn't all detected. For the submarine in particular the population is probably skewed a bit younger/healthier and thus toward more mild/asymptomatic cases that fight it off quickly.


Most places in Europe seem to be in or near mid-sigmoid at this point (with the US probably not far behind), but it's too early to tell whether that's due to high AR and herd immunity or lower AR and social distancing.

The Kinsa data seems to suggest that the AR was on average only about on the order of flu AR, but perhaps higher in some hotspot cities (where the implied AR is perhaps larger than flu). That data also suggests social distancing & closures were effective, but there are some counterexamples (like Miami) where the peak seemed to come too early to be explained by (late) social distancing.

There's also the Japan mystery, which should have a high AR at this point. Right now the most likely explanations I can think of are either flu-like severity/mortality that's not that noticeable when you aren't directly looking for it, or a strain difference.

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-31T21:08:31.558Z · LW · GW

Interesting - hopefully it's not long until someone publishes a serology random sampling study.

Not surprised at symptomatic fraction of 50% - was already indicated by DP, Iceland, and other data.

One thing that is surprising/mysterious to me is how steady the PCR test-positive % has been across space and time. When the sampling is of general populations outside hospitals, it's ~1% in Iceland without changing much over time, 2% in NBA players, and 1% in expats flown home from China.

The test-positive fraction for tests conducted by clinics/hospitals in the US and Iceland has been steady at ~10% and hasn't fluctuated greatly over time.

Of course there are some places where it's much higher like 30% on DP, but that's an exceptional environment.

Now the typical PCR test of nasal/throat swab is only accurate for about a week or so after infection, so it's more of a blurred measure of the infection derivative, but still it doesn't look like there's any recent exponential growth - suggesting it was in the past.

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-31T06:05:58.710Z · LW · GW

That's interesting. Over the weekend I wrote a Monte Carlo simulation for the Iceland data incorporating a bunch of stuff, including a lognormal fit to the known median and mean time from confirmation to death. Going to write it up, but the TLDR: the posterior assigns most of its mass to the 0.2 to 0.4 range for reasonable settings. Want to do something similar for the Diamond Princess and other places.
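
For reference, here's a stripped-down sketch of the kind of calculation I mean. It replaces the full Monte Carlo with a grid posterior and a Poisson likelihood on the death count, and every number in it (the case series, the delay parameters, the death count) is a placeholder rather than the actual Iceland data:

```python
import numpy as np
from scipy.stats import lognorm, poisson

# Placeholder inputs -- swap in the real Iceland series.
days_since_confirmation = np.array([30, 25, 20, 15, 10, 7, 5, 3, 1])
cases_confirmed = np.array([5, 10, 30, 80, 150, 200, 250, 200, 75])
observed_deaths = 2

# Lognormal confirmation-to-death delay, here assuming a ~10 day median.
mu, sigma = np.log(10), 0.7
delay = lognorm(s=sigma, scale=np.exp(mu))

def expected_deaths(ifr):
    # Deaths that should already have shown up, given each batch's age.
    return np.sum(cases_confirmed * ifr * delay.cdf(days_since_confirmation))

# Grid posterior over IFR with a flat prior and a Poisson likelihood on deaths.
ifr_grid = np.linspace(0.0005, 0.01, 200)   # 0.05% .. 1%
likelihood = poisson.pmf(observed_deaths, [expected_deaths(x) for x in ifr_grid])
posterior = likelihood / likelihood.sum()

print(f"posterior mean IFR ~ {100 * (ifr_grid * posterior).sum():.2f}%")
```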

I expect the real IFR will vary based on age structure, cofactors (air pollution seems to be important, especially in Italy), and of course the rather large differences in coroner reporting standards across jurisdictions and over time.

You can avoid a lot of that by looking for excess mortality - which right now seems null in Europe except for Italy. But Spain has about the same cases and deaths per capita and no excess mortality.

Comment by jacob_cannell on The case for C19 being widespread · 2020-03-29T17:02:37.243Z · LW · GW

For a really rough analysis, the overall IFR on the DP was probably about 1% (10 deaths / 1000 infections) after adjusting slightly for false negatives / missed tests.

All those deaths were in the 70+ age group, with an IFR in that group of ~2%. About 10% of the US population is in the 70+ bracket, so the projected IFR is ~0.2%. However about half the deaths were in the 80+ age bracket, and if you do a more fine-grained binning it's probably more like 0.15%, but it's not a high precision estimate.
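
The projection above in a few lines (same rough numbers as in the text):

```python
dp_deaths, dp_infections = 10, 1000
print(dp_deaths / dp_infections)        # ~1% overall IFR on the Diamond Princess

ifr_70plus = 0.02        # all DP deaths were 70+, implying ~2% IFR in that group
us_frac_70plus = 0.10    # ~10% of the US population is 70+

# If deaths are (to first approximation) confined to the 70+ bracket:
print(ifr_70plus * us_frac_70plus)      # projected US IFR ~0.2%
```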

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-29T16:51:02.880Z · LW · GW
Assuming a similar age distribution for actual infections, this means a larger fraction of young people is coming down with severe disease.

Disease severity increases with age, and testing probability increases with severity and thus age (in most places). Thus p(tested | infected) is age-skewed and typically much lower at younger ages.

After adjusting by dividing by the age-dependent p(tested | infected) you can correct that skew, and you probably get something more similar to the influenza hospitalization rate curve.

So again you aren't comparing even remotely the same units and it's important to realize that.

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-29T01:22:38.173Z · LW · GW

If you look at my estimate, I'm already effectively predicting that their CFR will increase via predicting additional deaths. I think it makes more sense to predict future death outcomes in the current cohort of patients we are computing the IFR for, rather than predicting future CFR changes based on how they changed in other countries and then back-computing that into IFR.

The CFR can change over time not only because of delays in deaths vs stage of epidemic but also due to changes in testing strategy and/or coverage, or even changes in coroner report standards or case counting standards (as happened at least once with China).

In terms of the true number of infected, I'm predicting that SK has on the order of 100K to 200K cases vs say 4K in Iceland, and I don't find this up to ~50x difference very surprising. Firstly, it's only about an 18 day difference in terms of first seed case at 25% daily growth.
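
Concretely (a two-line check of that growth factor):

```python
# 18 extra days at 25% daily growth covers roughly the whole gap:
print(f"{1.25 ** 18:.0f}x")   # ~56x
```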

SK's first recorded case was much earlier in Jan 20 vs Feb 28 for Iceland. SK's epidemic exploded quickly in a cult, Iceland's arrived much later when they had the benefit of seeing the pandemic hit other countries - they are just quite different scenarios.

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-29T00:21:33.692Z · LW · GW

Source for a virus making threats?

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-29T00:10:34.640Z · LW · GW
It skewed the age structure toward a younger demographic. Were you aware of this or did you assume that the religious group is skewed toward old people like typical churches? I didn't realize this up until like ten days ago, but the Christian cult was predominantly pretty young people!

Yes, I should have made this more clear - but it skewed it younger. Or at least that's my explanation for their much higher than expected # cases in younger cohorts vs elsewhere. That should lower their CFR of course.


And about Iceland: Isn't it really very clear that Iceland is weeks behind South Korea, and that Iceland's numbers are therefore unrepresentatively low?

No, this isn't clear. Iceland's case count entered a linear regime roughly 2 weeks ago - i.e. they do seem to have it under control (at least for now). Modeling one country as "X weeks behind" another is hazardous at best, and also unnecessary as Iceland provides direct graphs of their daily #tests and #positive.

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-28T23:59:04.931Z · LW · GW
As for hospitalizations, I was comparing the age distribution of hospitalizations for flu and confirmed covid. I found that the ratio of 20-45:65+ hospitalizations for flu was 1:7, and that the same ratio for covid was 1:2. Assuming a similar age distribution for actual infections, this means a larger fraction of young people is coming down with severe disease.

Age distribution of estimated hospitalizations for flu? or confirmed? (It seems difficult to get the latter) Source?

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-28T23:41:54.061Z · LW · GW
It should also be noted that some all-cause death rates are coming out in North Italy, and the excess deaths over this time last year are 3x the confirmed covid death

Source? This is potentially interesting, especially if it's for a large region like all of Italy or North Italy (I've seen models which estimate excess influenza mortality in Italy) - but the smaller the region the more likely it's due to chance or cherry-picking.

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-28T23:37:13.513Z · LW · GW

From Governor Cuomo's briefing:

Everything we do now (procure ventilators etc) is in preparation for possible apex (when curve hits the highest point)
Apex in New York is estimated in 14-21 days from now
172 new ICU admissions in the last day, vs. 374 in the preceding day, may indicate a decline in the growth rate

A demand of 1000 ICU beds suggests about 300K infected in NY, assuming an influenza-like IFR of ~0.1% and ICU mortality of ~30%, so this isn't in disagreement. More likely, if 1M are infected the demand should be for ~3000 ICU beds.
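
The arithmetic behind that, spelled out (inputs are the rough figures above):

```python
ifr = 0.001            # assumed influenza-like IFR of ~0.1%
icu_mortality = 0.30   # assumed ~30% of ICU admissions die

def icu_beds_implied(infected):
    deaths = infected * ifr
    return deaths / icu_mortality   # roughly, every eventual death passes through the ICU

print(icu_beds_implied(300_000))    # ~1000 beds
print(icu_beds_implied(1_000_000))  # ~3333 beds
```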

There may or may not be a difference in mean ICU/ventilator length of stay - that isn't something I've looked at yet. According to Cuomo C19 patients need ventilators for 11 to 21 days vs 3 to 4 days for all other causes. This paper indicates 6 to 17 days for H1N1 in 2009.

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-28T22:24:24.969Z · LW · GW

Are you saying that some significant fraction of NY hospitals are currently overcrowded with C19 patients right now? Or that one hospital is? What is the actual dataset source for "they are strapped for space"?

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-28T22:19:10.963Z · LW · GW

There seems to be good evidence for asymptomatic transmission - you've probably seen those papers, which indicate that tracking and isolating cases doesn't work.

What does seem to work is social distancing.

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-28T21:58:48.478Z · LW · GW
Firstly, Iceland is not 'randomly' testing people. People are signing up to be tested voluntarily. That population is likely to contain a larger fraction of people who have reason to think they were exposed or feel sick. Thus 0.8% is an overestimate of the fraction of the population that has been infected.

Technically true - and this is why in the earlier version of this on my blog, I used the word 'random-ish'.

Obviously the test is voluntary, but it's also clearly designed to estimate prevalence:

" This effort is intended to gather insight into the actual prevalence of the virus in the community, as most countries are most exclusively testing symptomatic individuals at this time,” said Thorolfur Guðnason, Iceland’s chief epidemiologist to Buzzfeed.

During this time of year less than 10% of the population has symptoms, so if it was a random sampling of only that subset, we would predict at most 400 cases, so we can reject that.

Nonetheless, I think this does justify widening the prediction of #infections and moving the mean down a bit.

Secondly, the asymptomatic period is on average a week or so for those who develop symptoms, with hospitalization often occurring upwards of a week after symptoms, and death often occurring more than 2 weeks after symptoms. ... This thing is damn infectious and still expanding, it is not anywhere near a steady state anywhere

Did you actually look at the Iceland data? They entered a linear regime (midpoint of the sigmoid) about 10 days ago, which defeats the brunt of this argument. Additionally, the vast majority of the cases were discovered through normal testing after symptoms presented, so subtract a week from your timeframe. And finally, I already did attempt to predict future deaths based on ICU. I also considered adding another predicted death from the # in hospital now, but it's unclear whether that is distinct from ICU or not.

Ultimately only time will tell, but I find it unlikely they are going to get up to dozens of deaths without also growing the case count.

According to links in the above writings, 0.5% of flu cases in the 20-45 age group result in hospitalization compared to 10% in the over 65 age group, and taking population into account that results in ~7x as many flu over-60 hospitalizations than 20-45. Current American test results, however, have ~2x the over-65 covid hospitalizations as 20-45 hospitalizations.

The hospitalization rate that matters is p(hosp | infected), not p(hosp | tested). You are comparing the estimated p(hosp | infected) curve of influenza to the p(hosp | tested) curve of COVID-19, which is a unit mismatch. For that comparison to be meaningful you need to first correct for the age-specific p(tested | infected) ratio.
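
To make the correction concrete, here's a toy version. The ascertainment ratios are invented purely for illustration, not estimates; the point is only that p(hosp | infected) = p(hosp | tested) * p(tested | infected) if essentially all hospitalized infections get tested, and that an age-skewed p(tested | infected) can manufacture an apparent age shift:

```python
# Illustrative numbers only -- not real estimates.
hosp_given_tested = {"20-45": 0.10, "65+": 0.20}        # what raw case data gives you
p_tested_given_infected = {"20-45": 0.05, "65+": 0.30}  # hypothetical age-specific ascertainment

hosp_given_infected = {
    age: hosp_given_tested[age] * p_tested_given_infected[age]
    for age in hosp_given_tested
}
print(hosp_given_infected)  # {'20-45': 0.005, '65+': 0.06} -- the apparent shift toward the young largely disappears
```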

And data is indicating that surviving ICU stays for this disease are ~3x as long as ICU stays for flu.

Source?

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-28T21:31:20.514Z · LW · GW

South Korea is unusual in that the outbreak there is best understood as two separate outbreaks: an initial outbreak in a strange highly interconnected cult, and then the outbreak in the general population. They ended up testing everyone in the cult, but their testing strategy in the general population seems more limited, similar to other countries. So the testing of several hundred thousand cult members pushed both their CFR and test positive fraction lower than it otherwise would be, and rather obviously skewed their case age structure.

Nonetheless they have tested far less of their population than Iceland (about 5X less as of 3/20 according to ourworldindata), so if the ratio of infections/cases is 4x to 5x in Iceland it seems reasonable that it's 10x to 20x in SK.

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-28T20:40:35.981Z · LW · GW

Naively, if ICU fatality is ~30% and we worst-case assume all ICU patients die absent ventilators, that suggests about 3X higher deaths sans ventilators. However in reality we would/will probably just quickly produce more ventilators, start sharing ventilators, jury-rig CPAP machines into ventilators, etc.

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-28T20:30:00.303Z · LW · GW

Perhaps this isn't clear enough from the title (but should be clear from the post): the similarity I'm discussing is in terms of outcomes given illness: IFR and IHR.

Absent controls and behavioral changes, I agree that it seems likely that considerably more than 1% of the population would be infected. Seasonal flu infects perhaps 10%. It's clear at this point that C19 is often asymptomatic/mild, especially in younger people, and I recall some potential bio explanations like pre-existing partial immunity through cross-reactive antigens. On the DP we know about 30% were infected and it could be higher - perhaps 50% - but that population is half retirees. So from this evidence alone my estimate is that somewhere between 10% and 50% would be infected absent any behavioral changes.

However, social distancing appears to have already been crushing fever prevalence in the US.

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-28T20:19:53.240Z · LW · GW

For the contagious part - I guess what really matters is what % of the population it could infect, and how fast that could occur. But most of the world has gone into social isolation, which at least in the US appears to already have been highly successful.

The Kinsa thermometer dataset is quite interesting and worthy of its own post. If you look at places that didn't do much social isolation in time, like Miami, it appears that the answer may be that it causes fevers in about the same % of the population as the flu does, and cycles through the population in perhaps half the timeframe (viruses move through cities faster in general).

Comment by jacob_cannell on Iceland's COVID-19 random sampling results: C19 similar to Influenza · 2020-03-28T20:11:08.547Z · LW · GW

The ICU admission rate for hospitalizations and the ICU fatality rate are very similar to influenza (links in this post), and those conclusions are from larger datasets than DP.

I disagree that the DP data indicates 2x higher than influenza on either count. My analysis in the post linked above failed to factor in under-reporting (small but still likely given late testing) or adjustments for expected deaths, and probably had too many deaths in the 70-80 age group. The analysis in this post from Nic Lewis is more detailed and in closer agreement with influenza mortality.

From the current evidence at this point I think a reasonable Bayesian should have a log-normal distribution on all-age IFR, but it's surely centered on influenza IFR - something like LogNormalDistribution[-2, 0.5].
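
To put numbers on that prior (here reading the distribution as over IFR expressed in percent, so that it centers near flu's ~0.1%):

```python
import numpy as np

mu, sigma = -2.0, 0.5    # LogNormalDistribution[-2, 0.5]
median = np.exp(mu)                        # ~0.14%
mean = np.exp(mu + sigma**2 / 2)           # ~0.15%
lo, hi = np.exp(mu - 1.645 * sigma), np.exp(mu + 1.645 * sigma)  # central 90%
print(f"median {median:.2f}%, mean {mean:.2f}%, 90% interval {lo:.2f}%-{hi:.2f}%")
```

So roughly a median of 0.14% with a 90% interval of about 0.06% to 0.31%.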

Comment by jacob_cannell on The case for C19 being widespread · 2020-03-28T17:50:52.570Z · LW · GW

The DP data is commonly misunderstood. Influenza and COVID-19 (probably) both have a strongly age dependent IFR curve. The "COVID-19 is similar to influenza" model predicts IFR in the 1% range for a retiree age distribution like on DP but 0.1% range on the US age distribution. To a first approximation almost all the deaths are in the 65+ age groups, which are a small fraction of US population but about 50% of DP - it was a geriatric cruise.

So the DP is fairly strong evidence for influenza like mortality. I have an analysis here with more details, and this post by Nic Lewis has a more detailed analysis which considers a few more factors.

Comment by jacob_cannell on COVID-19 growth rates vs interventions · 2020-03-28T17:35:37.268Z · LW · GW

There's a huge confounder here, which is testing ramp-up: it's hard to say how much of the growth in confirmed cases is growth in actual infections vs growth in testing. For example, if you look at the graph of cases vs the graph of tests in the US they track closely, and the % of positive tests hasn't changed much.

However there's another dataset which doesn't have this problem - the Kinsa smart thermometer dataset - and it indicates that social distancing has been highly effective at curbing all flu-like infections in the US.

Comment by jacob_cannell on jacob_cannell's Shortform · 2020-03-25T05:20:32.981Z · LW · GW

Am I one of the few people here who has looked at the covid-19 data and reached the conclusion that it's probably only about as severe/fatal as seasonal influenza?

I have a longer blog post outlining the case here.

TLDR: CFR != IFR; influenza CFR is similar to covid-19 CFR, and we know from influenza data that typically IFR << CFR due to enormous selection/sampling bias from mostly testing only those with more severe disease. We can correct for that by comparing the covid-19 confirmed case age structure to the population age structure using a uniform or age-dependent attack rate. The resulting IFR is similar to influenza, which is also the best fit for the Diamond Princess data (where selection bias is mostly avoided, so CFR ~ IFR).
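
A toy version of that correction, with invented numbers purely to show the mechanics (assuming a uniform attack rate, and near-complete ascertainment in the oldest bracket since they get the sickest and so nearly all get tested):

```python
import numpy as np

# Invented numbers for illustration -- not real data.
population = np.array([350_000, 400_000, 250_000])   # ages 0-29, 30-59, 60+
confirmed_cases = np.array([1_000, 4_000, 5_000])    # skewed old by testing bias
deaths = 100

# Uniform attack rate => true infections proportional to population;
# anchor the scale on the (assumed well-ascertained) oldest bracket.
attack_rate = confirmed_cases[-1] / population[-1]    # 2%
estimated_infections = attack_rate * population.sum() # 20,000

print(f"naive CFR   {100 * deaths / confirmed_cases.sum():.1f}%")   # 1.0%
print(f"implied IFR {100 * deaths / estimated_infections:.2f}%")    # 0.50%
```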

Selection bias can help explain why the CFR is higher in Italy, and probably why it's so much lower in Germany (I'm looking for age structure data on covid-19 cases from Germany, I'm predicting it will be flatter than US or Italy data). South Korea is also another interesting case (which I found some data for but haven't put into the blog post yet) - we can clearly reject a typical attack rate age structure there, which was surprising at first but then made sense given that the outbreak in SK started in a large tight-knit cult with a young median age and they tested everyone in the cult.

Anyway if anyone here has already encountered these thoughts and still believes covid-19 IFR is much higher than influenza IFR I'm curious what the best arguments/evidence are.

Comment by jacob_cannell on Why do we think most AIs unintentionally created by humans would create a worse world, when the human mind was designed by random mutations and natural selection, and created a better world? · 2017-05-13T21:39:09.636Z · LW · GW

The evolution of the human mind did not create a better world from the perspective of most species of the time - just ask the dodo, most megafauna, countless other species, etc. In fact, the evolution of humanity was/is a mass extinction event.

Comment by jacob_cannell on Don't Fear the Reaper: Refuting Bostrom's Superintelligence Argument · 2017-03-02T01:12:56.998Z · LW · GW

Agreed, the quoted "we found" claim overreaches. The paper does have a good point though: the recalcitrance of further improvement can't be modeled as a constant; it necessarily scales with current system capability. Real world exponentials become sigmoids; mold growing in your fridge and a nuclear explosion are both sigmoids that look exponential at first - the difference is a matter of scale.
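
The recalcitrance point in one toy equation: constant recalcitrance gives dC/dt = kC (pure exponential), while recalcitrance that rises with capability gives something like the logistic dC/dt = kC(1 - C/K), and the two are numerically indistinguishable early on (constants below are arbitrary):

```python
# Arbitrary constants, just to show the shapes.
k, K, dt = 0.5, 1_000.0, 0.01
c_exp = c_sig = 1.0
for step in range(1, int(30 / dt) + 1):
    c_exp += dt * k * c_exp
    c_sig += dt * k * c_sig * (1 - c_sig / K)
    if step % int(5 / dt) == 0:
        t = step * dt
        print(f"t={t:4.1f}  exponential={c_exp:12.1f}  sigmoid={c_sig:8.1f}")
# Early on they track each other; the sigmoid then flattens out near K.
```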

Really understanding the dynamics of a potential intelligence explosion requires digging deep into the specific details of an AGI design vs the brain in terms of inference/learning capabilities vs compute/energy efficiency, future hardware parameters, etc. Can't show much with vague broad stroke abstractions.

Comment by jacob_cannell on Open thread, Feb. 13 - Feb. 19, 2017 · 2017-02-13T23:49:44.503Z · LW · GW

The level of misunderstanding in these types of headlines is what is scary. The paper is actually about a single simple model trained for a specific purpose, unrelated to the hundreds of other models various DeepMind researchers have trained. But somehow that all too often just gets reduced to "DeepMind's AI", as if it's a monolithic thing. And here it's even worse, where somehow the fictional monolithic AI and DeepMind the company are now confused into one.

Comment by jacob_cannell on Choosing prediction over explanation in psychology: Lessons from machine learning · 2017-01-18T02:43:47.322Z · LW · GW

If you instead claim that the "input" can also include observations about interventions on a variable, t

Yes - general prediction - i.e. a full generative model - can already encompass causal modelling, avoiding any distinction between dependent/independent variables: one can learn to predict any variable conditioned on all previous variables.

For example, consider a full generative model of an ATARI game, which includes both the video and the control input (from human play, say). Learning to predict all future variables from all previous ones automatically entails learning the conditional effects of actions.

For medicine, the full machine learning approach would entail using all available data (test measurements, diet info, drugs, interventions, whatever, etc) to learn a full generative model, which can then be conditionally sampled on any 'action variables' and integrated to generate recommended high-utility interventions.
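
A schematic sketch of what I mean (the interfaces here are hypothetical, not any particular library): learn one generative model over the joint stream of observations and action variables, then evaluate a candidate intervention by conditioning on it and sampling forward.

```python
from typing import Callable, Sequence

def rollout(model: Callable, history: Sequence, actions: Sequence, horizon: int):
    """Sample a trajectory from the learned model, conditioning on proposed actions."""
    traj = list(history)
    for t in range(horizon):
        next_state = model(traj, actions[t])   # sample from p(s_{t+1} | traj, a_t)
        traj.append((actions[t], next_state))
    return traj

def expected_utility(model, history, actions, utility, horizon=10, n_samples=100):
    """Monte Carlo estimate of an intervention's utility under the model."""
    total = sum(utility(rollout(model, history, actions, horizon)) for _ in range(n_samples))
    return total / n_samples
```

A planner then just searches over action sequences for high expected utility - no separate causal machinery beyond conditional sampling of the learned model.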

then your predictions will certainly fail unless the algorithm was trained in a dataset where someone actually intervened on X (i.e. someone did a randomized controlled trial)

In any practical near-term system, sure. In theory though, a powerful enough predictor could learn enough of the world's physics to invent de novo interventions whole cloth - ex: AlphaGo inventing new moves that weren't in its training set, essentially learned from internal simulations.

Comment by jacob_cannell on Progress and Prizes in AI Alignment · 2017-01-04T01:53:59.134Z · LW · GW

I came to a similar conclusion a while ago: it is hard to make progress in a complex technical field when progress itself is unmeasurable or, worse, ill-defined.

Part of the problem may be cultural: most working in the AI safety field have math or philosophy backgrounds. Progress in math and philosophy is intrinsically hard to measure objectively; success is mostly about having great breakthrough proofs/ideas/papers that are widely read and well regarded by peers. If your main objective is to convince the world, then this academic system works fine - ex: Bostrom. If your main objective is to actually build something, a different approach is perhaps warranted.

The engineering oriented branches of Academia (and I include comp sci in this) have a very different reward structure. You can publish to gain social status just as in math/philosophy, but if your idea also has commercial potential there is the powerful additional motivator of huge financial rewards. So naturally there is far more human intellectual capital going into comp sci than math, more into deep learning than AI safety.

In a sane world we'd realize that AI safety is a public good of immense value that probably requires large-scale coordination to steer the tech-economy towards solving. The X-prize approach essentially is to decompose a big long term goal into subgoals which are then contracted to the private sector.

The high level abstract goal for the Ansari XPrize was "to usher in a new era of private space travel". The specific derived prize subgoal was then "to build a reliable, reusable, privately financed, manned spaceship capable of carrying three people to 100 kilometers above the Earth's surface twice within two weeks".

AI safety is a huge bundle of ideas, but perhaps the essence could be distilled down to: "create powerful AI which continues to do good even after it can take over the world."

For the Ansari XPrize, the longer term goal of "space travel" led to the more tractable short term goal of "100 kilometers above the Earth's surface twice within two weeks". Likewise, we can replace "the world" in the AI safety example:

AI Safety "XPrize": create AI which can take over a sufficiently complex video game world but still tends to continue to do good according to a panel of human judges.

To be useful, the video game world should be complex in the right ways: it needs to have rich physics that agents can learn to control, it needs to permit/encourage competitive and cooperative strategic complexity similar to that in the real world, etc. So more complex than pac-man, but simpler than the Matrix. Something in the vein of a minecraft mod might have the right properties - but there are probably even more suitable open-world MMO games.

The other constraint on such a test is we want the AI to be superhuman in the video game world, but not our world (yet). Clearly this is possible - ala AlphaGo. But naturally the more complex the video game world is in the direction of our world, both the harder the goal becomes and the more dangerous.

Note also that the AI should not know that it is being tested; it shall not know it inhabits a simulation. This isn't likely to be any sort of problem for the AI we can actually build and test in the near future, but it becomes an interesting issue later on.

DeepMind is now focusing on Starcraft, OpenAI has Universe, so we're already on a related path. Competent AI for open-ended 3D worlds with complex physics - like Minecraft - is still not quite here, but is probably realizable in just a few years.

Comment by jacob_cannell on [Link] White House announces a series of workshops on AI, expresses interest in safety · 2016-05-06T06:43:19.916Z · LW · GW

A sign!

Comment by jacob_cannell on [Link] White House announces a series of workshops on AI, expresses interest in safety · 2016-05-06T06:42:55.657Z · LW · GW

Other way around. Europe started the HBP first, then the US announced the BI. The HBP is centered around Markram's big sim project. The BI is more like a bag of somewhat related grants, focusing more on connectome mapping. From what I remember, both projects are long term, and most of the results are expected to be 5 years out or so, but they are publishing along the way.

Comment by jacob_cannell on What can we learn from Microsoft's Tay, its inflammatory tweets, and its shutdown? · 2016-03-31T04:23:49.245Z · LW · GW

Not much.

Comment by jacob_cannell on Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited · 2016-03-31T04:18:13.095Z · LW · GW

We are in a vast, seemingly-empty universe. Models which predict the universe should be full of life should be penalised with a lower likelihood.

The only models which we can rule out are those which predict the universe is full of life which leads to long-lasting civs which expand physically, use lots of energy, and rearrange on stellar scales. That's an enormous number of conjunctions/assumptions about future civs. Models where the universe is full of life, but life leads to tech singularities which end physical expansion (transcension) perfectly predict our observations, as do models where civs die out, as do models where life/civs are rare, and so on...

But this is all a bit off-topic now because we are ignoring the issue I was responding to: the evidence from the timing of the origin of life on earth

If we find that life arose instantly, that is evidence which we can update our models on, and leads to different likelihoods than finding that life took 2 billion years to evolve on earth. The latter indicates that abiogenesis is an extremely rare chemical event that requires a huge amount of random molecular computation. The former indicates otherwise.

Imagine creating a bunch of huge simulations that generate universes, and exploring the parameter space until you get something that matches earth's history. The time taken for some evolutionary event reveals information about the rarity of that event.

Comment by jacob_cannell on Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited · 2016-03-30T22:02:44.359Z · LW · GW

"Anthropic selection bias" just filters out observations that aren't compatible with our evidence. The idea that "anthropic selection bias" somehow equalizes the probability of any models which explain the evidence is provably wrong. Just wrong. (There are legitimate uses of anthropic selection bias effects, but they come up in exotic scenarios such as simulations.)

If you start from the perspective of an ideal bayesian reasoner - ala Solomonoff, you only consider theories/models that are compatible with your observations anyway.

So there are models where abiogenesis is 'easy' (which is really too vague - so let's define that as a high transition probability per unit time, over a wide range of planetary parameters.)

There are also models where abiogenesis is 'hard' - low probability per unit time, and generally more 'sparse' over the range of planetary parameters.

By Bayes' Rule, we have: P(H|E) = P(E|H)P(H) / P(E)

We are comparing two hypotheses, H1 and H2, so we can ignore P(E) - the probability of the evidence - and we have:

P(H1|E) ∝ P(E|H1) P(H1)

P(H2|E) ∝ P(E|H2) P(H2)

(∝ means 'proportional to')

Assume for argument's sake that the model priors are the same. The posterior then just depends on the likelihood - P(E|H1) - the probability of observing the evidence, given that the hypothesis is true.

By definition, the model which predicts abiogenesis is rare has a lower likelihood.
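
A quick numeric illustration with made-up likelihoods, just to show how the proportionality cashes out when the priors are equal:

```python
p_H1 = p_H2 = 0.5        # equal priors: abiogenesis common (H1) vs rare (H2)
p_E_given_H1 = 0.5       # made-up: chance of near-instant life on an earthlike world if common
p_E_given_H2 = 0.01      # made-up: chance of that if rare

post = [p_E_given_H1 * p_H1, p_E_given_H2 * p_H2]
norm = sum(post)
print(f"P(common|E) = {post[0] / norm:.2f}, P(rare|E) = {post[1] / norm:.2f}")  # ~0.98 vs ~0.02
```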

One way of thinking about this: Abiogenesis could be rare or common. There are entire sets of universes where it is rare, and entire sets of universes where it is common. Absent any other specific evidence, it is obviously more likely that we live in a universe where it is more common, as those regions of the multiverse have more total observers like us.

Now it could be that abiogenesis is rare, but reaching that conclusion would require integrating evidence from more than earth - enough to overcome the low initial probability of rarity.

Comment by jacob_cannell on Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited · 2016-03-30T00:02:22.699Z · LW · GW

I assume by 'algae-like', you actually mean cyanobacteria. The problem is that anything that uses photosynthesis creates oxygen, and oxygen eventually depletes the planet's chemical oxygen sinks, which inevitably leads to a Great Oxygenation Event. The latter provides a new powerful source of energy for life, which then leads to something like a Cambrian explosion.

The largest uncertainty in these steps is the timeline for oxygenation to deplete the planet's oxygen sinks. This is basically the time it takes cyanobacteria to 'terraform' the planet. It took 200 million years on Earth, but this is presumably dependent on planetary chemical composition and size.

From the known exoplanets, we can already estimate there are on the order of a billion earth-size worlds in habitable zones. By the mediocrity principle, it's a priori unlikely that earth's chemistry is 1 in a billion, especially given that Mars's composition is vaguely similar enough that it was probably an 'almost earth'.

Comment by jacob_cannell on Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited · 2016-03-29T23:51:42.393Z · LW · GW

We keep finding earlier and earlier fossil evidence for life on earth, which has finally shrunk the time window for abiogenesis on earth down to near zero.

The late heavy bombardment sterilized earth repeatedly until about 4.1 billion years ago, and our earliest fossil evidence for life is also now (probably) 4.1 billion years old. Thus life probably either evolved from inorganics near instantly, or more likely, it was already present in the comet/dust cloud from the earth's formation. (panspermia)

With panspermia, abiogenesis may be rare, but the effect is similar to abiogenesis being common.

Comment by jacob_cannell on Resolving the Fermi Paradox: New Directions · 2016-03-19T04:26:27.434Z · LW · GW

I don't see why the usual infrared argument doesn't apply to them or KIC 8462852.

If by the infrared argument you refer to the idea that a Dyson swarm should radiate in the infrared, this is probably wrong. It relies on the assumption that the alien civ operates at an earth temp of 300K or so. As you reduce that temp down to 3K, the excess radiation diminishes to something indistinguishable from the CMB, so we can't detect large cold structures that way. For the reasons discussed earlier, non-zero operating temp would only be useful during initial construction phases, whereas near-zero temp is preferred in the long term. The fact that KIC 8462852 has no infrared excess makes it more interesting, not less.
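
A quick check of that with the standard blackbody formulas (Wien's displacement law and Stefan-Boltzmann):

```python
WIEN_B = 2.898e-3   # m*K
SIGMA = 5.670e-8    # W/(m^2*K^4)

for T in (300.0, 3.0, 2.725):   # warm civ, cold civ, CMB
    peak_um = WIEN_B / T * 1e6
    flux = SIGMA * T**4
    print(f"T={T:7.3f}K  peak ~{peak_um:7.1f} um   {flux:.2e} W/m^2")
```

A 300K structure peaks around 10 microns - the classic infrared excess - while a ~3K structure peaks near 1 mm and radiates ~10^8 times less per unit area, which is essentially lost against the CMB.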

Comment by jacob_cannell on Resolving the Fermi Paradox: New Directions · 2016-03-18T20:11:19.129Z · LW · GW

A Dyson sphere helps with moving matter around, potentially with elemental conversion, and with cooling.

Moving matter - sure. But that would be a temporary use case, after which you'd no longer need that config, and you'd want to rearrange it back into a bunch of spherical dense computing planetoids.

potentially with elemental conversion

This is dubious. I mean in theory you could reflect/recapture star energy to increase temperature to potentially generate metals faster, but it seems to be a huge waste of mass for a small increase in cooking rate. You'd be giving up all of your higher intelligence by not using that mass for small compact cold compute centers.

If nothing else, if the ambient energy of the star is a big problem, you can use it to redirect the energy elsewhere away from your cold brains.

Yes, but that's just equivalent to shielding. That only requires redirecting the tiny volume of energy hitting the planetary surfaces. It doesn't require any large structures.

Exponential growth.

Exponential growth = transcend. Exponential growth will end unless you can overcome the speed of light, which requires exotic options like new universe creation or altering physics.

I think Sandberg's calculated you can build a Dyson sphere in a century, apropos of KIC 8462852's oddly gradual dimming. And you hardly need to finish it before you get any benefits.

Got a link? I found this FAQ, where he says:

Using self-replicating machinery the asteroid belt and minor moons could be converted into habitats in a few years, while disassembly of larger planets would take 10-1000 times longer (depending on how much energy and violence was used).

That's a lognormal dist over several decades to several millennia. A dimming time for KIC 8462852 in the range of centuries to a millennium is a near perfect (lognormal) dist overlap.

So it may be worth while investing some energy in collecting small useful stuff (asteroids) into larger, denser computational bodies. It may even be worth while moving stuff farther from the star, but the specifics really depend on a complex set of unknowns.

You say 'may', but that seems really likely.

The recent advances in metamaterial shielding stuff suggest that low temps could be reached even on earth without expensive cooling, so the case I made for moving stuff away from the star for cooling is diminished.

Collecting/rearranging asteroids, and rearranging rare elements of course still remain as viable use cases, but they do not require as much energy, and those energy demands are transient.

After all, what 'complex set of unknowns' will be so fine-tuned that the answer will, for all civilizations, be 0 rather than some astronomically large number?

Physics. It's the same for all civilizations, and their tech paths are all the same. Our uncertainty over those tech paths does not translate into a diversity in actual tech paths.

You cannot show that this resolves the Fermi paradox unless you make a solid case that cold brains will find harnessing solar systems' energy and matter totally useless!

There is no 'paradox'. Just a large high-D space of possibilities, and observation updates that constrain that space.

I never ever claimed that cold brains will "find harnessing solar systems' energy and matter totally useless", but I think you know that. The key question is what are their best uses for the energy/mass of a system, and what configs maximize those use cases.

I showed that reversible computing implies extremely low energy/mass ratios for optimal compute configs. This suggests that advanced civs in the timeframe 100 to 1000 years ahead of us will be mass-limited (specifically rare metal element limited) rather than energy limited, and would rather convert excess energy into mass rather than the converse.

Which gets me back to a major point: endgames. For reasons I outlined earlier, I think the transcend scenarios are more likely. They have a higher initial prior, and are far more compatible with our current observations.

In the transcend scenarios, exponential growth just continues up until some point in the near future where exotic space-time manipulations - creating new universes or whatever - are the only remaining options for continued exponential growth. This leads to an exit for the civ, where from the outside perspective it either physically dies, disappears, or transitions to some final inert config. Some of those outcomes would be observable, some not. Mapping out all of those outcomes in detail and updating on our observations would be exhausting - a fun exercise for another day.

The key variable here is the timeframe from our level to the final end-state. That timeframe determines the entire utility/futility tradeoff for exploitation of matter in the system, based on ROI curves.

For example, why didn't we start converting all of the useful matter of earth into Babbage-style mechanical computers in the 19th century? Why didn't we start converting all of the matter into vacuum tube computers in the 50's? And so on....

In an exponentially growing civ like ours, you always have limited resources, and investing those resources in replicating your current designs (building more citizens/compute/machines whatever) always has complex opportunity cost tradeoffs. You also are expending resources advancing your tech - the designs themselves - and as such you never expend all of your resources on replicating current designs, partly because they are constantly being replaced, and partly because of the opportunity costs between advancing tech/knowledge vs expanding physical infrastructure.

So civs tend to expand physically at some rate over time. The key question is how long? If transcension typically follows 1,000 years after our current tech level, then you don't get much interstellar colonization bar a few probes, but you possibly get temporary dyson swarms. If it only takes 100 years, then civs are unlikely to even leave their home planet.

You only get colonization outcomes if transcension takes long enough, leading to colonization of nearby matter, which all then transcend roughly within the timeframe of their distance from the origin. Most of the nearby useful matter appears to be rogue planets, so colonization of stellar systems would take even longer, depending on how far down it is in the value chain.

And even in the non-transcend models (say the time to transcend is greater than millions of years), you can still get scenarios where the visible stars are not colonized much - if their value is really low, compared to abundant higher value cold dark matter (rogue planets, etc), colonization is slow/expensive, and the timescale spread over civ ages is low.

Comment by jacob_cannell on Resolving the Fermi Paradox: New Directions · 2016-03-17T17:44:54.084Z · LW · GW

So your entire argument boils down to another person who thinks transcension is universally convergent and this is the solution to the Fermi paradox?

No... As I said above, even if transcension is possible, that doesn't preclude some expansion. You'd only get zero expansion if transcension is really easy/fast. On the convergence issue, we should expect that the main development outcomes are completely convergent. Transcension is instrumentally convergent - it helps any realistic goals.

I don't see what your reversible computing detour adds to the discussion, if you can't show that making only a few cold brains sans any sort of cosmic engineering is universally convergent.

The reversible computing stuff is important for modeling the structure of advanced civs. Even in transcension models, you need enormous computation - and everything you could do with new universe creation is entirely compute limited. Understanding the limits of computing is important for predicting what end-tech computation looks like for both transcend and expand models. (For example, if end-tech optimal compute were energy limited, that would predict Dyson spheres to harvest solar energy.)

The temperatures implied by 10,000x energy density on earth preclude all life or any interesting computation.

I never said anything about using biology or leaving the Earth intact. I said quite the opposite.

Advanced computation doesn't happen at those temperatures, for the same basic reason that advanced communication doesn't work at extremely low SNR (i.e. when noise is extremely large). I was trying to illustrate the connection between energy flow and temperature.

You need to show your work here. Why is it unlikely? Why don't they disassemble solar systems to build ever more cold brains? I keep asking this, and you keep avoiding it.

First let us consider the optimal compute configuration of a solar system without any large-scale re-positioning, and then we'll remove that constraint.

For any solid body (planet, moon, asteroid, etc), there is some optimal compute design given its structural composition, internal temp, and incoming irradiance from the sun. Advanced compute tech doesn't require any significant energy - so being closer to the sun is not an advantage at all. You need to expend more energy on cooling (for example, it takes about 15 kilowatts to cool a single current chip from earth temp to low temps, although there have been some recent breakthroughs in passive metamaterial shielding that could change that picture). So you just use/waste that extra energy cooling the best you can.

So, now consider moving the matter around. What would be the point of building a Dyson sphere? You don't need more energy. You need more metal mass, lower temperatures and smaller size. A Dyson sphere doesn't help with any of that.

Basically we can rule out config changes for the metal/rocky mass (useful for compute) that: 1) increase temperature, or 2) increase size.

The gradient of improvement is all in the opposite direction: decreasing temperature and size (with tradeoffs of course).

So it may be worth while investing some energy in collecting small useful stuff (asteroids) into larger, denser computational bodies. It may even be worth while moving stuff farther from the star, but the specifics really depend on a complex set of unknowns.

One of the big unknowns of course being the timescale, which depends on the transcend issue.

Now for the star itself: it has most of the mass, but that mass is not really accessible, and most of it is in low value elements - we want more metals. It could be that the best use of that matter is to simply continue cooking it in the stellar furnace to produce more metals - as there is no other way, as far as I know.

But doing anything with the star would probably take a very long amount of time, so it's only relevant in non-transcendent models.

In terms of predicted observations, in most of these models there are few if any large structures, but individual planetary bodies will probably be altered from their natural distributions. Some possible observables: lower than expected temperatures, unusual chemical distributions, and possibly higher than expected quantities/volumes of ejected bodies.

Some caveats: I don't really have much of an idea of the energy costs of new universe creation, which is important for the transcend case. That probably is not a reversible op, and so it may be a motivation for harvesting solar energy.

There's also KIC 8462852 of course. If we assume that it is a Dyson swarm like object, we can estimate a rough model for civs in the galaxy. KIC 8462852 has been dimming for at least a century. It could represent the endphase of a tech civ, approaching its final transcend state. Say that takes around 1,000 years (vaguely estimating from the 100 years of data we have).

This dimming star is one out of perhaps 10 million nearby stars we have observed in this way. Say 1 in 10 systems will ever develop life, and the timescale spread or deviation is about a billion years - then we should expect to observe about 1 in 10 million endphase dimming stars, given that phase lasts only 1,000 years. This would of course predict a large number of endstate stars, but given that we just barely detected KIC 8462852 because it was dimming, we probably can't yet detect stars that already dimmed and then stabilized long ago.
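
The arithmetic, for clarity:

```python
frac_systems_ever_life = 1 / 10
endphase_years = 1_000
civ_age_spread_years = 1e9   # ~billion year spread in when civs arise

frac_dimming_now = frac_systems_ever_life * endphase_years / civ_age_spread_years
print(f"~1 in {1 / frac_dimming_now:,.0f} stars observed")   # ~1 in 10,000,000
```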

Comment by jacob_cannell on Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited · 2016-03-17T05:26:39.162Z · LW · GW

The low temperature low energy devices would be more akin to crazy deep extremophile lithotrophic bacteria or deep sea fish on Earth, living slow metabolisms and at low densities and matter/energy fluxes,

Hmm I think you misunderstood my model. At the limits of computation, you approach the maximal computational density - the maximum computational capacity per unit mass - only at zero temperature. The stuff you are talking about - anything that operates at any non-zero temp - has infinitely less compute capability than the zero-temp stuff.

So your model and analogy are off - the low temp devices are like gods, incomprehensibly faster and more powerful, and bio life and warm tech are like plants, bacteria, or perhaps rocks: not even comparable, not even in the same basic category of 'thing'.

In any situation other than perfect coordination, that which replicates itself more rapidly becomes more common.

Of course. But it depends on what the best way to replicate is. If new universe creation is feasible (and it appears to be, from what we know of physics), then civs advance rather quickly to post-singularity godhood and start creating new universes. Among other things, this allows exponential growth/replication which is vastly superior to puny polynomial growth you can get by physical interstellar colonization. (it also probably allows for true immortality, and perhaps actual magic - altering physics) And even if that tech is hard/expensive, colonization does not entail anything big, hot, or dumb. Realistic colonization would simply result in many small, compact, cold civ objects. Also see the other thread.

Comment by jacob_cannell on Resolving the Fermi Paradox: New Directions · 2016-03-17T05:02:20.752Z · LW · GW

I understand your points about why colder is better, my question is: why don't they expand constantly with ever more cold brains, which are collectively capable of ever more computation?

At any point in development, investing resources in physical expansion has a payoff/cost/risk profile, as does investing resources in tech advancement. Spatial expansion offers polynomial growth, which is pretty puny compared to the exponential growth from tech advancement. Furthermore, the distances between stars are pretty vast.
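
A toy comparison with arbitrary constants: physical expansion at some fixed speed gives you resources growing roughly like the swept volume (~t^3), while compounding tech/economic growth is exponential.

```python
import math

k = 0.1   # 10%/year compounding -- arbitrary
for t in (10, 100, 1000):
    print(f"t={t:5d}  t^3 = {t**3:.2e}   e^(k*t) = {math.exp(k * t):.2e}")
# The exponential lags at first, then utterly dwarfs the polynomial.
```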

If you plot our current trajectory forward, we get to a computational singularity long, long before any serious colonization effort. Space colonization is kind of comical in its economic payoff compared to chasing Moore's Law. So everything depends on what the endpoint of the tech singularity is. Does it actually end with some hard limit to tech? If it does, and slow polynomial growth is the only option after that, then you get galactic colonization as the likely outcome. If the tech singularity leads to stronger outcomes ala new universe manipulations, then you never need to colonize - it's best to just invest everything locally. And of course there is the spectrum in between, where you get some colonization, but the timescale is slowed.

Correct me if I'm wrong, but zero energy consumption assumes both coldness and slowness, doesn't it?

No, not for reversible computing. The energy required to represent/compute a 1 bit state transition depends on reliability, temperature, and speed, but that energy is not consumed unless there is an erasure. (and as energy is always conserved, erasure really just means you lost track of a bit)

In fact the reversible superconducting designs are some of the fastest feasible in the near term.
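
To put a number on the erasure cost: Landauer's bound says erasing one bit dissipates at least kT ln 2, and that floor scales straight down with temperature; logically reversible transitions have no such floor at all.

```python
import math

K_B = 1.380649e-23   # J/K

def landauer_joules_per_erased_bit(T):
    return K_B * T * math.log(2)

for T in (300, 77, 4, 0.01):
    print(f"T={T:7.2f}K  ->  {landauer_joules_per_erased_bit(T):.2e} J per erased bit")
```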

That would be great. If we had 10,000x more energy (and advanced technology etc), we could disassemble the Earth, move the parts around, and come up with useful structures to compute with it which would dissipate that energy productively.

Biological computing (cells) doesn't work at those temperatures, and all the exotic tech far past bio computers requires even lower temperatures. The temperatures implied by 10,000x energy density on earth preclude all life or any interesting computation.

Yes, it is expensive. Good thing we have a star right there to move all that mass with. Maybe its energy could be harnessed with some sort of enclosure....

I'm not all that confident that moving mass out of the system is actually better than just leaving it in place and doing best-effort cooling in situ. The point is that energy is not the constraint for advancing computing tech; it's more mass limited than anything, or perhaps knowledge is the most important limit. You'd never want to waste all that mass on a Dyson sphere. All of the big designs are dumb - you want it to be as small, compact, and cold as possible. More like a black hole.

Which ends in everything being used up, which even if all that planet engineering and moving doesn't require Dyson spheres, is still inconsistent with our many observations of exoplanets and

It's extremely unlikely that all the matter gets used up in any realistic development model, even with colonization. Life did not 'use up' more than a tiny fraction of the matter of earth, and so on.

leaves the Fermi paradox unresolved.

From the evidence for mediocrity, the lower KC complexity of mediocrity, and the huge number of planets in the galaxy, I start with a prior strongly favoring a reasonably high number of civs/galaxy, and low odds on us being first.

We have high uncertainty on the end/late outcome of a post-singularity tech civ (or at least I do, I get the impression that people here inexplicably have extremely high confidence in the stellavore expansionist model, perhaps because of lack of familiarity with the alternatives? not sure).

If post-singularity tech allows new universe creation and other exotic options, you never have much colonization - at least not in this galaxy, from our perspective. If it does not, and there is an eventual end of tech progression, then colonization is expected.

But as I argued above, even colonization could be hard to detect - as advanced civs will be small/cold/dark.

Transcension is strongly favored a priori for anthropic reasons - transcendent universes create far more observers like us. Then, updating on what we can see of the galaxy, colonization loses steam: our temporal rank is normal, whereas most colonization models predict we should be early.

For transcension, naturally its hard to predict what that means .. . but one possibility is a local 'exit' at least from the perspective of outside observers. Creation of lots of new universes, followed by physical civ-death in this universe, but effective immortality in new universes (ala game theoretic horse trading across the multiverse). New universe creation could also potentially alter physics in ways that permit further tech progression. Either way, all of the mass is locally invested/used up for 'magic' that is incomprehensibly more valuable than colonization.

Comment by jacob_cannell on Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited · 2016-03-16T18:06:28.583Z · LW · GW

Given that physics is the same across space, the math/physics/tech of different civs will end up being the same, more or less. I wouldn't call that coordination.

To extend your analogy, plants don't grow in the center of the earth - and this has nothing to do with coordination. Likewise, no human tribes colonized the ocean depths, and this has nothing to do with coordination.

Comment by jacob_cannell on Resolving the Fermi Paradox: New Directions · 2016-03-16T17:54:37.696Z · LW · GW

Computing near the Sun costs more because it's hotter, sure. Fortunately, I understand that the Sun produces hundreds, even thousands of times more energy than a little fusion reactor does, so some inefficiencies are not a problem.

Every practical computational tech substrate has some error-bounded compute/temperature curve, where computational capability quickly falls to zero past some upper-bound temperature. Even for our current tech, computational capacity essentially falls off a cliff somewhere well below 1,000 K.

My general point is that really advanced computing tech shifts all those curves over - towards lower temperatures. This is a hard limit of physics; it cannot be overcome. So for a really advanced reversible quantum computer that employs superconductivity and long-coherence quantum entanglement, operating at 1 K is just as impossible as operating at 1,000 K. It's not entirely a matter of efficiency.

Another way of looking at it: advanced tech just requires lower temperatures, since temperature is essentially a measure of entropy (undesired/unmodeled state transitions). Temperature is literally an inverse measure of computational potential. The ultimate computer necessarily has a temperature of zero.
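One crude way to see 'temperature as an inverse measure of computational potential' (a toy model I'm adding, assuming a simple Boltzmann factor exp(-E_b/kT) for thermal bit flips): for a fixed reliability target, the required storage barrier, and hence the energy scale of every operation, grows linearly with temperature.

```python
# Toy model: required bit-storage barrier for a target thermal error rate.
# Assumes flip probability per attempt ~ exp(-E_b / kT) (simple Boltzmann factor).
import math

K_B = 1.380649e-23  # J/K

def required_barrier(temp_kelvin: float, target_error: float) -> float:
    """Barrier energy (J) so that exp(-E_b/kT) <= target_error."""
    return K_B * temp_kelvin * math.log(1.0 / target_error)

for T in (1.0, 77.0, 300.0, 1000.0):
    E_b = required_barrier(T, target_error=1e-20)
    print(f"T = {T:7.1f} K -> barrier ~ {E_b:.2e} J "
          f"({E_b / (K_B * 300.0 * math.log(2)):.1f}x room-temp Landauer bound)")
```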

You say that the reversible brains don't need that much energy.

At the limits they need zero. Anywhere close to those limits, they have no need of stars. Not only that, but they couldn't survive any energy influx much above some threshold, and that threshold necessarily goes to zero as their computational capacity approaches theoretical limits.

If it's energy, then they will want to pipe in as much energy as possible from their local star.

No. There is an exactly correct amount of energy to pipe in, based on the viable operating temperature of their current tech. And this amount goes to zero as you advance up the tech ladder.

It may help to apply your statement to our current planetary civ. What if we could pipe in 10,000x more energy than we currently receive from the sun - wouldn't that be great? No. It would cook the earth.

The same principle applies, but as you advance up the ultra-tech ladder, the temp ranges get lower and lower (because remember, temp is literally an inverse measure of maximum computational capability).
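As a rough sanity check on the 'cook the earth' figure (a sketch I'm adding, treating Earth as a simple blackbody with an effective temperature near 255 K): radiated power scales as T^4, so equilibrium temperature scales as the fourth root of absorbed flux, and 10,000x the input means roughly 10x the temperature.

```python
# Rough scaling: blackbody equilibrium temperature vs. absorbed flux.
# T_eq scales as flux**(1/4), so 10,000x the flux means ~10x the temperature.
EARTH_T_EFF = 255.0  # K, Earth's effective (no-greenhouse) blackbody temperature

def equilibrium_temp(flux_multiplier: float) -> float:
    return EARTH_T_EFF * flux_multiplier ** 0.25

for mult in (1, 100, 10_000):
    print(f"{mult:>6}x solar input -> ~{equilibrium_temp(mult):.0f} K equilibrium")
```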

OK, but more computing power is always better, the cold brains want as much as possible, so what limits them?

Given some lump of matter, there is of course a maximum information storage capacity and a max compute rate - in a reversible computer the compute rate is bounded by the maximum energy density the system can structurally support, which is in turn bounded by its mass. In terms of ultimate limits, it really depends on whether exotic options like creating new universes are practical or not. If creating new universes is feasible, there probably are no hard limits; all limits become soft.
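For a concrete version of 'compute rate bounded by energy, which is bounded by mass', here is a sketch along the lines of the Margolus-Levitin bound (my own illustration, not something the comment commits to): at most about 2E/(pi*hbar) elementary operations per second for total energy E, taking E = mc^2 as the absolute ceiling for a lump of matter.

```python
# Margolus-Levitin bound: <= 2E / (pi * hbar) elementary operations per second,
# taking E = m * c**2 as the absolute ceiling for a given lump of matter.
import math

HBAR = 1.054571817e-34  # J*s
C = 2.99792458e8        # m/s

def max_ops_per_second(mass_kg: float) -> float:
    energy = mass_kg * C ** 2
    return 2.0 * energy / (math.pi * HBAR)

for mass in (1.0, 6.0e24):  # one kilogram, roughly one Earth mass
    print(f"{mass:.1e} kg -> ~{max_ops_per_second(mass):.1e} ops/s (upper bound)")
```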

So you should get a universe of Dyson spheres feeding out mass-energy to the surrounding cold brains who are constantly colonizing fresh systems for more mass-energy to compute in the voids with

Dyson spheres are extremely unlikely to be economically viable/useful, given the low value of energy past a certain tech level (vastly lower energy need per unit mass).

Cold brains need some mass; the question then is how the colonization value of mass varies across space. Mass that is too close to a star would need to be moved away from it, which is very expensive.

So the most valuable mass that gets colonized first would be the rogue planets/nomads - which apparently are more common than attached planets.

If colonization continues long enough, it will spread to lower and lower valued real estate. So eventually smaller rocky bodies in the outer system get stripped away, slowly progressing inward.

The big unknown variable is again what the end of tech in the universe looks like, which gets back to that new universe creation question. If that kind of ultimate/magic tech is possible, civs will invest everything into it, and you get less colonization, depending on the difficulty/engineering tradeoffs.

Comment by jacob_cannell on Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited · 2016-03-13T18:10:16.997Z · LW · GW

Depends on what you mean by 'intelligence'.

If you mean tech/culture/language capable, well, it isn't surprising that it has only happened once, because it is so recent, and the first tech species tends to take over the planet and preclude others.

If you mean something more like "near-human problem-solving capability", then that has evolved robustly in multiple separate vertebrate lineages - corvids, primates, cetaceans, proboscideans. It also evolved in an invertebrate lineage (octopuses) with a very different brain plan. I think that qualifies as extremely robust, and it suggests that the evolution of cultural intelligence is probably inevitable, given enough time/energy/etc.

Comment by jacob_cannell on Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited · 2016-03-13T18:04:48.684Z · LW · GW

evidence of a Great Filter in our past.

Most of the space of possible great filters in the past has been ruled out. Rare planets is out. Tectonics is out. Rare bio origins is out. The mediocrity of earth's temporal rank rules out past disaster scenarios, a la Bostrom and Tegmark's article.

and the fact we don't see aliens is evidence of a Great Filter in the future.

Mediocrity of temporal rank rules out any great filter in the future that has anything to do with other civs, because in scenarios where that is the filter, surviving observers necessarily find themselves on early planets.

Furthermore, natural disasters are already ruled out as a past filter, and thus as a future filter as well.

So all that remains is a narrow space of possibilities relating to the timescale of evolution, where earth is rare in that evolution ran unusually fast here. Given that there are many billions of habitable-zone planets in the galaxy, earth would have to be roughly 1-in-10^10 rare, which seems pretty unlikely at this point.
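Spelling out the arithmetic behind '1-in-10^10 rare' (my own back-of-envelope; the planet count is an assumed round number, not a measured figure): with on the order of 10^10 habitable-zone planets, the per-planet probability of fast evolution has to be around 10^-10 or lower for us to plausibly be alone.

```python
# Back-of-envelope: how rare must a past filter make Earth-like outcomes,
# if we are the only tech civ among ~1e10 habitable-zone planets? (assumed count)
HABITABLE_PLANETS = 1e10  # rough, assumed order of magnitude for the Milky Way

def expected_civs(per_planet_probability: float) -> float:
    return HABITABLE_PLANETS * per_planet_probability

for p in (1e-8, 1e-10, 1e-12):
    print(f"p = {p:.0e} per planet -> expect ~{expected_civs(p):.0e} civs in the galaxy")
```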

Also, 'seeing aliens' depends on our model of what aliens should look like - which really is just our model for the future of post-biological civs. Our observations currently can only rule out the stellavore expansionist model. The transcension model predicts small, cold, compact civs that would be very difficult to detect directly.

That being said, if aliens exist, the evidence may already be here, we just haven't interpreted it correctly.

Comment by jacob_cannell on Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited · 2016-03-12T22:40:20.276Z · LW · GW

So the fact that intelligence took this long to evolve - 4-5 billions of years after biogenesis, and 600-700 million years after the first multicellular animals - must be important.

~5 billion years out of an expected ~10 billion year lifespan for a star like the sun - mediocrity all the way down!

Comment by jacob_cannell on Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited · 2016-03-12T22:34:03.974Z · LW · GW

The high-value matter/energy or real estate is probably a tiny portion of the total, and is probably far from stars, as stellar environments are too noisy/hot for advanced computation.

Can you expand on this?

See this post.

Extrapolating from current physics to ultimate computational intelligences, the most important constraint is temperature/noise, not energy. A hypothetical optimal SI would consume almost no energy, and its computational capability would be inversely proportional to its temperature. So at the limit you have something very small, dense, cold, and dark, approaching a black hole.

Passive shielding appears to be feasible, but said feasibility decreases non-linearly with proximity to stars.

So think of the computational potential of space-time as a function of position in the galaxy. The computational potential varies inversely with temperature. The potential near a star is abysmal. The most valuable real estate is far out in the interstellar medium, potentially on rogue planets or even smaller cold bodies, where passive shielding can help reduce temperatures down to very low levels.
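For a rough sense of how quickly the real estate cools with distance (a sketch I'm adding, assuming an albedo-zero blackbody at roughly 278 K at 1 AU, with the ~2.7 K cosmic microwave background as the floor for purely passive cooling):

```python
# Blackbody equilibrium temperature vs. distance from the Sun (albedo ~ 0),
# floored at the ~2.7 K cosmic microwave background for passive cooling.
T_AT_1AU = 278.0  # K, approximate blackbody equilibrium at 1 AU
T_CMB = 2.7       # K

def passive_temp(distance_au: float) -> float:
    return max(T_AT_1AU / distance_au ** 0.5, T_CMB)

for d in (1, 30, 1_000, 100_000):  # Earth, ~Neptune, inner Oort cloud, deep interstellar
    print(f"{d:>7} AU -> ~{passive_temp(d):6.1f} K equilibrium")
```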

So to an advanced civ, the matter in our solar system is perhaps worthless - the energy cost of pulling the matter far enough away from the star and cooling it is greater than its computational value.

All computation requires matter/energy.

Computation requires matter to store/represent information, but doesn't require consumption of that matter. Likewise computation also requires energy, but does not require consumption of that energy.

At the limits you have a hypothetical perfect reversible quantum computer, which never erases any bits. Instead, unwanted bits are recycled internally and used for RNG. This requires perfectly balancing would-be erasures against random-bit consumption, but that seems possible in theory for general approximate inference algorithms of the type an SI is likely to be based on.
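As a toy illustration of 'never erases any bits' (my own sketch, not tied to any particular architecture): a reversible gate such as the Toffoli (CCNOT) is a bijection on its input states, so the computation can always be run backwards, whereas an ordinary AND gate merges distinct inputs and must pay at least the Landauer cost for the lost bit.

```python
# Toy check: the Toffoli (CCNOT) gate is a bijection on 3-bit states (no bits lost),
# while plain AND maps 4 input states onto 2 outputs (information is erased).
from itertools import product

def toffoli(a: int, b: int, c: int) -> tuple:
    """Flip c iff both controls a and b are 1; a reversible universal gate."""
    return (a, b, c ^ (a & b))

outputs = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
print("Toffoli outputs are all distinct:", len(outputs) == 8)  # True -> invertible

and_outputs = {(a & b) for a, b in product((0, 1), repeat=2)}
print("AND collapses 4 inputs into", len(and_outputs), "outputs")  # 2 -> irreversible
```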

that the stars were huge piles of valuable materials that had inconveniently caught fire and needed to be put out.

This is probably incorrect. From the perspective of advanced civs, the stars are huge piles of worthless trash. They are the history of life rather than its future - the oceans from which advanced post-bio civs emerge.

Comment by jacob_cannell on AlphaGo versus Lee Sedol · 2016-03-12T18:34:02.939Z · LW · GW

We have wildly different definitions of interesting, at least in the context of my original statement. :)

Comment by jacob_cannell on AlphaGo versus Lee Sedol · 2016-03-12T09:02:01.739Z · LW · GW

If you can prove anything interesting about a system, that system is too simple to be interesting. Logic can't handle uncertainty, and doesn't scale at all to describing/modelling systems as complex as societies, brains, AIs, etc.

Comment by jacob_cannell on AlphaGo versus Lee Sedol · 2016-03-12T08:55:57.023Z · LW · GW

Briefly skimming Christiano's post, this is one of the few/first proposals from someone MIRI-related that actually seems to be on the right track (and similar to my own loose plans). Basically it boils down to learning human utility functions with layers of meta-learning, using generalized RL and IRL.

Comment by jacob_cannell on AlphaGo versus Lee Sedol · 2016-03-12T08:50:24.413Z · LW · GW

When I started hearing about the latest wave of results from neural networks, I thought to myself that Eliezer was probably wrong to bet against them. Should MIRI rethink its approach to friendliness?

Yes.