Comment by carlshulman on Unconscious Economies · 2019-03-28T02:40:26.775Z · score: 8 (4 votes) · LW · GW

There is a literature on firm productivity showing large variation in productivity across firms, and average productivity growth driven by the expansion of more productive firms relative to less productive ones. E.g. this, this, this, and this.

Comment by carlshulman on More realistic tales of doom · 2019-03-27T19:25:24.625Z · score: 9 (5 votes) · LW · GW

OK, thanks for the clarification!

My own sense is that the intermediate scenarios are unstable: if we have fairly aligned AI we immediately use it to make more aligned AI and collectively largely reverse things like Facebook click-maximization manipulation. If we have lost the power to reverse things, then they go all the way to near-total loss of control over the future. So I would tend to think we wind up in the extremes.

I could imagine a scenario where there is a close balance among multiple centers of AI+human power, and some but not all of those centers have local AI takeovers before the remainder solve AI alignment, and then you get a world that is a patchwork of human-controlled and autonomous states, both types automated. E.g. the United States and China are taken over by their AI systems (including robot armies), but the Japanese AI assistants and robot army remain under human control and the future geopolitical system keeps both types of states intact thereafter.

Comment by carlshulman on More realistic tales of doom · 2019-03-27T04:09:56.362Z · score: 6 (3 votes) · LW · GW
Failure would presumably occur before we get to the stage of "robot army can defeat unified humanity"---failure should happen soon after it becomes possible, and there are easier ways to fail than to win a clean war. Emphasizing this may give people the wrong idea, since it makes unity and stability seem like a solution rather than a stopgap. But emphasizing the robot army seems to have a similar problem---it doesn't really matter whether there is a literal robot army, you are in trouble anyway.

I agree other powerful tools can achieve the same outcome, and since in practice humanity isn't unified, rogue AI could act earlier, but either way you get to AI controlling the means of coercive force, which helps people to understand the end-state reached.

It's good to both understand the events by which one is shifted into the bad trajectory, and to be clear on what the trajectory is. It sounds like your focus on the former may have interfered with the latter.

Comment by carlshulman on More realistic tales of doom · 2019-03-27T04:02:42.168Z · score: 12 (4 votes) · LW · GW
I think we can probably build systems that really do avoid killing people, e.g. by using straightforward versions of "do things that are predicted to lead to videos that people rate as acceptable," and that at the point when things have gone off the rails those videos still look fine (and to understand that there is a deep problem at that point you need to engage with complicated facts about the situation that are beyond human comprehension, not things like "are the robots killing people?"). I'm not visualizing the case where no one does anything to try to make their AI safe, I'm imagining the most probable cases where people fail.

Haven't you yourself written about the failure modes of 'do things predicted to lead to videos that people rate as acceptable', where the attack involves surreptitiously reprogramming the camera to get optimal videos (including weird engineered videos designed to optimize on infelicities in the learned objective)? E.g. in your discussion of an unaligned benchmark here:

We train an autoregressive generative model, to maximize the log probability assigned to the actual sequence of actions and observations produced by the AI (with each observation conditioned on the past actions). This isn’t actually a good way to train the generative model, but it’s not really central to the discussion.
We train the reward function by showing humans sequences of actions and predicted observations, asking them to assign scores, then predicting those scores with supervised learning. We show humans the sequences of actions that look most promising to the system...
I am quite confident that “world which looks appealing on screen” is not the same as “world which is good;” the gap is already large today, and it will grow much larger over time.
Maintaining a fiction. Ultimately the goal of this AI is to make the sensors look like good things are happening in the world. This can be done either by making the world good or by carefully controlling what gets fed to the sensors. In general, there are many fewer constraints on “what gets fed to the sensors” than “what the world is actually like,” and so it’s going to be possible to produce much more desirable-looking outcomes, much more easily, by controlling the sensors.
If the humans understand what the AI is doing in detail then they can observe the sequence of actions to notice any actions that might corrupt the sensors. But as soon as the AI is engaging in complicated plans this appears to be totally intractable. Even if the AI was human-level, there are many extremely hard-to-detect plans: for example, the AI could collaborate with an external attacker, who will surreptitiously corrupt the sensor. Or the AI could create a copy of itself on an external server who then helped orchestrate a hard-to-detect corruption.

Are you now very confident that no AI company would implement something with this vulnerability? Or does the paragraph below just mean that the part where the AI gets control of the camera has to happen before the robot armies (or even robot armies identified as traceable to the AI in question), which then happen?

Part I has this focus because (i) that's where I think the action is---by the time you have robot armies killing everyone the ship is so sailed, I think a reasonable common-sense viewpoint would acknowledge this by reacting with incredulity to the "robots kill everyone" scenario, and would correctly place the "blame" on the point where everything got completely out of control even though there weren't actually robot armies yet (ii) the alternative visualization leads people to seriously underestimate the difficulty of the alignment problem, (iii) I was trying to describe the part of the picture which is reasonably accurate regardless of my views on the singularity.

Because it definitely seems that Vox got the impression from it that there is never a robot army takeover in the scenario, not that it's slightly preceded by camera hacking.

Is the idea that the AI systems develop goals over the external world (rather than over the sense inputs/video pixels), so that they are really pursuing the appearance of prosperity, or corporate profits, and so don't just wirehead their sense inputs as in your benchmark post?

Comment by carlshulman on More realistic tales of doom · 2019-03-26T22:15:14.270Z · score: 24 (10 votes) · LW · GW

I think the kind of phrasing you use in this post and others like it systematically misleads readers into thinking that in your scenarios there are no robot armies seizing control of the world (or rather, that all armies worth anything at that point are robotic, and so AIs in conflict with humanity means military force that humanity cannot overcome). I.e. AI systems pursuing badly aligned proxy goals or influence-seeking tendencies wind up controlling or creating that military power and expropriating humanity (which eventually couldn't fight back thereafter even if unified).

E.g. Dylan Matthews' Vox writeup of the OP seems to think that your scenarios don't involve robot armies taking control of the means of production and using the universe for their ends against human objections or killing off existing humans (perhaps destructively scanning their brains for information but not giving good living conditions to the scanned data):

Even so, Christiano’s first scenario doesn’t precisely envision human extinction. It envisions human irrelevance, as we become agents of machines we created.
Human reliance on these systems, combined with the systems failing, leads to a massive societal breakdown. And in the wake of the breakdown, there are still machines that are great at persuading and influencing people to do what they want, machines that got everyone into this catastrophe and yet are still giving advice that some of us will listen to.

The Vox article also mistakes the source of influence-seeking patterns, taking the point to be about social influence rather than the fact that systems which try to increase their power and numbers tend to succeed at doing so, and so are selected for if we accidentally or intentionally produce them and don't effectively weed them out; this is why living things are adapted to survive and expand. Such desires motivate conflict with humans when power and reproduction can be obtained through conflict with humans, which can look like robot armies taking control. That seems to me just a mistake about the meaning of influence you had in mind here:

Often, he notes, the best way to achieve a given goal is to obtain influence over other people who can help you achieve that goal. If you are trying to launch a startup, you need to influence investors to give you money and engineers to come work for you. If you’re trying to pass a law, you need to influence advocacy groups and members of Congress.
That means that machine-learning algorithms will probably, over time, produce programs that are extremely good at influencing people. And it’s dangerous to have machines that are extremely good at influencing people.

Comment by carlshulman on Act of Charity · 2018-12-18T20:14:49.885Z · score: 25 (7 votes) · LW · GW

There's an enormous difference between having millions of dollars of operating expenditures in an LLC (so that an org is legally allowed to do things like investigate non-deductible activities like investment or politics), and giving up the ability to make billions of dollars of tax-deductible donations. Open Philanthropy being an LLC (so that its own expenses aren't tax-deductible, but it has LLC freedom) doesn't stop Good Ventures from making all relevant donations tax-deductible, and indeed the overwhelming majority of grants on its grants page are deductible.

Comment by carlshulman on Two Neglected Problems in Human-AI Safety · 2018-12-18T17:38:24.320Z · score: 7 (4 votes) · LW · GW

I think this is under-discussed, but also that I have seen many discussions in this area. E.g. I have seen it come up, and have brought it up myself, in the context of Paul's research agenda, where success relies on humans being able to play their part safely in the amplification system. Many people say they are more worried about misuse than accident on the basis of the corruption issues (and much of the discussion about CEV and idealization, superstimuli, etc. addresses the kind of path-dependence and adversarial search you mention).

However, those varied problems mostly aren't formulated as 'ML safety problems in humans' (I have seen robustness and distributional shift discussion for Paul's amplification, and daemons/wireheading/safe-self-modification for humans and human organizations), and that seems like a productive framing for systematic exploration, going through the known inventories and trying to see how they cross-apply.

Comment by carlshulman on "Artificial Intelligence" (new entry at Stanford Encyclopedia of Philosophy) · 2018-07-19T19:59:26.860Z · score: 5 (2 votes) · LW · GW

No superintelligent AI computers, because they lack hypercomputation.

Comment by carlshulman on "Artificial Intelligence" (new entry at Stanford Encyclopedia of Philosophy) · 2018-07-19T19:45:47.604Z · score: 8 (4 votes) · LW · GW

Another Bringsjord classic:

> However, we give herein a novel, formal modal argument showing that since it's mathematically possible that human minds are hypercomputers, such minds are in fact hypercomputers.

Comment by carlshulman on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-07-03T18:54:19.215Z · score: 4 (2 votes) · LW · GW

That's what the congenital deafness discussion was about.

You have preferences over pain and pleasure intensities that you haven't experienced, or over new durations of experiences you do know. Otherwise you wouldn't have anything to worry about re torture, since you haven't experienced it.

Consider people with pain asymbolia:

Pain asymbolia is a condition in which pain is perceived, but with an absence of the suffering that is normally associated with the pain experience. Individuals with pain asymbolia still identify the stimulus as painful but do not display the behavioral or affective reactions that usually accompany pain; no sense of threat and/or danger is precipitated by pain.

Suppose you currently had pain asymbolia. Would that mean you wouldn't object to pain and suffering in non-asymbolics? What if you personally had only happened to experience extremely mild discomfort while having lots of great positive experiences? What about for yourself? If you knew you were going to get a cure for your pain asymbolia tomorrow, would you object to subsequent torture as intrinsically bad?

We can go through similar stories for major depression and positive mood.

Seems it's the character of the experience that matters.

Likewise, if you've never experienced skiing, chocolate, favorite films, sex, victory in sports, and similar things that doesn't mean you should act as though they have no moral value. This also holds true for enhanced experiences and experiences your brain currently is unable to have, like the case of congenital deafness followed by a procedure to grant hearing and listening to music.

Comment by carlshulman on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-07-02T06:38:40.397Z · score: 4 (2 votes) · LW · GW

"My point was comparing pains and pleasures that could be generated with similar amount of resources. Do you think they balance out for human decision making?"

I think with current tech it's cheaper and easier to wirehead to increase pain (i.e. torture) than to increase pleasure or reduce pain. This makes sense biologically: since organisms won't go looking for ways to wirehead to maximize their own pain, evolution doesn't need to 'hide the keys' as much as with pleasure or pain relief (where the organism would actively seek out easy means of subverting the behavioral functions of the hedonic system). Thus when powerful addictive drugs such as alcohol become available, human populations evolve increased resistance over time. The sex systems evolve to make masturbation less rewarding than reproductive sex under ancestral conditions, desire for play/curiosity is limited by boredom, delicious foods become less pleasant when one is full or when they are not later associated with nutritional sensors in the stomach, etc.

I don't think this is true with fine control over the nervous system (or a digital version) to adjust felt intensity and behavioral reinforcement. I think with that sort of full access one could easily increase the intensity (and ease of activation) of pleasures/mood such that one would trade them off against the most intense pains at ~parity per second, and attempts at subjective comparison when or after experiencing both would put them at ~parity.

People will willingly undergo very painful jobs and undertakings for money, physical pleasures, love, status, childbirth, altruism, meaning, etc. Unless you have a different standard for the 'boxes' than is used in subjective comparison with rich experience of the things being compared, I think we're just haggling over the price regarding intensity.

We know the felt caliber and behavioral influence of such things can vary greatly. It would be possible to alter nociception and pain receptors to amp up or damp down any particular pain. This could even involve adding a new sense, e.g. someone with congenital deafness could be given the ability to hear (installing new nerves and neurons), and hear painful sounds, with artificially set intensity of pain. Likewise one could add a new sense (or dial one up) to enable stronger pleasures. I think that both the new pains and new pleasures would 'count' to the same degree (and if you're going to dismiss the pleasures as 'wireheading' then you should dismiss the pains too).

" For example, I'd strongly disagree to create a box of pleasure and a box of pain, do you think my preference would go away after extrapolation?"

You trade off pain and pleasure in your own life, are you saying that the standard would be different for the boxes than for yourself?

What are you using as the examples to represent the boxes, and have you experienced them? (As discussed in my link above, people often use weaksauce examples in such comparisons.)

Comment by carlshulman on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-07-01T18:50:01.683Z · score: 4 (2 votes) · LW · GW

"one filled with pleasure and the other filled with pain, feels strongly negative rather than symmetric to us"

Comparing pains and pleasures of similar magnitude? People have a tendency not to do this; see the linked thread.

"Another sign is that pain is an internal experience, while our values might refer to the external world (though it's very murky"

You accept pain and risk of pain all the time to pursue various pleasures, desires and goals. Mice will cross electrified surfaces for tastier treats.

If you're going to care about hedonic states as such, why treat the external case differently?

Alternatively, if you're going to dismiss pleasure as just an indicator of true goals (e.g. that pursuit of pleasure as such is 'wireheading') then why not dismiss pain in the same way, as just a signal and not itself a goal?

Comment by carlshulman on Increasing GDP is not growth · 2017-03-02T01:55:45.657Z · score: 0 (0 votes) · LW · GW

I meant GWP without introducing the term. Edited for clarity.

Comment by carlshulman on Increasing GDP is not growth · 2017-02-19T20:28:29.500Z · score: 5 (5 votes) · LW · GW

If you have a constant population, and GDP increases, productivity per person has increased. But if you have a border on a map enclosing some people, and you move it so it encloses more people, productivity hasn't increased.

Can you give examples of people confirmed to be actually making the mistake this post discusses? I don't recall seeing any.

The standard economist claim (and the only version I've seen promulgated in LW and EA circles) is that it increases gross world product (total and per capita) because migrants are much more productive when they migrate to developed countries. Here is a set of references and counterarguments.

Separately, some people are keen to increase GDP in particular countries to pay off national fixed costs (like already incurred debts, or military spending).

Comment by carlshulman on Claim explainer: donor lotteries and returns to scale · 2016-12-31T01:17:33.114Z · score: 4 (4 votes) · LW · GW

I came up with the idea and basic method, then asked Paul if he would provide a donor lottery facility. He did so, and has been taking in entrants and solving logistical issues as they come up.

I agree that thinking/researching/discussing more dominates the gains in the $1-100k range.

Comment by carlshulman on Optimizing the news feed · 2016-12-02T00:26:05.112Z · score: 3 (3 votes) · LW · GW

A different possibility is identifying vectors in Facebook-behavior space, and letting users alter their feeds accordingly, e.g. I might want to see my feed shifted in the direction of more intelligent users, people outside the US, other political views, etc. At the individual level, I might be able to request a shift in my feed in the direction of individual Facebook friends I respect (where they give general or specific permission).
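
A minimal sketch of the kind of mechanism this suggests (the embeddings, user groups, and scoring below are hypothetical illustrations, not anything Facebook actually exposes): represent users and candidate posts as vectors in a shared behavior space, derive a shift direction from a reference group of users, and re-rank the feed along that direction.

```python
import numpy as np

def shift_direction(reference_users, all_users):
    """Direction in behavior-embedding space pointing toward a reference group of users."""
    return np.mean(reference_users, axis=0) - np.mean(all_users, axis=0)

def rerank_feed(post_vectors, base_scores, direction, strength=1.0):
    """Blend the feed's ordinary ranking score with alignment to the requested direction."""
    direction = direction / np.linalg.norm(direction)
    alignment = post_vectors @ direction
    return np.argsort(-(base_scores + strength * alignment))

# Hypothetical example data: embeddings for a user pool and candidate posts.
rng = np.random.default_rng(0)
all_users = rng.normal(size=(1000, 32))       # behavior-space embeddings for a user pool
non_us_users = all_users[:100] + 0.5          # stand-in for "users outside the US"
posts = rng.normal(size=(50, 32))             # candidate posts for my feed
base_scores = rng.normal(size=50)             # the feed's ordinary relevance scores

direction = shift_direction(non_us_users, all_users)
print(rerank_feed(posts, base_scores, direction, strength=0.5)[:10])  # top 10 after shifting
```

The `strength` parameter is a hypothetical knob for how far the feed is shifted; shifting toward an individual friend (with permission) would just mean using that one user's vector as the reference group.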

Comment by carlshulman on Synthetic supermicrobe will be resistant to all known viruses · 2016-11-24T05:08:50.863Z · score: 3 (4 votes) · LW · GW

That advantage only goes so far:

  • Plenty of nonviral bacteria-eating entities exist, and would become more numerous
  • Plant and antibacterial defenses aren't viral-based
  • For the bacteria to compete in the same niche as unmodified versions they have to fulfill a similar ecological role: photosynthetic cyanobacteria with altered DNA would still produce oxygen and provide food
  • It couldn't benefit from exchanging genetic material with other kinds of bacteria

Comment by carlshulman on Astrobiology III: Why Earth? · 2016-10-07T00:19:07.058Z · score: 5 (5 votes) · LW · GW

Primates and eukaryotes would be good.

Comment by carlshulman on Quick puzzle about utility functions under affine transformations · 2016-07-16T17:35:24.009Z · score: 8 (8 votes) · LW · GW

Your example has 3 states: vanilla, chocolate, and neither.

But you only explicitly assigned utilities to 2 of them, although you implicitly assigned the state of 'neither' a utility of 0 initially. Then when you applied the transformation to vanilla and chocolate you didn't apply it to the 'neither' state, which altered preferences for gambles over both transformed and untransformed states.

E.g. if we initially assigned u(neither)=0 then after the transformation we have u(neither)=4, u(vanilla)=7, u(chocolate)=12. Then an action with a 50% chance of neither and 50% chance of chocolate has expected utility 8, while the 100% chance of vanilla has expected utility 7.
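
A minimal sketch of this point (the specific utilities and the affine map below are illustrative assumptions, not the numbers from the original post): a positive affine transformation applied to every state preserves preferences over gambles, while applying it to only some states can flip them.

```python
# Illustrative utilities (assumed for the sketch, not taken from the original post).
u = {"neither": 0.0, "vanilla": 1.0, "chocolate": 3.0}

def expected_utility(utilities, lottery):
    """Expected utility of a lottery given as {state: probability}."""
    return sum(p * utilities[s] for s, p in lottery.items())

def affine(utilities, a, b, states=None):
    """Apply u -> a*u + b (a > 0) to the given states (default: all states)."""
    states = utilities if states is None else states
    return {s: (a * v + b if s in states else v) for s, v in utilities.items()}

gamble = {"neither": 0.5, "chocolate": 0.5}
sure_vanilla = {"vanilla": 1.0}

for label, utils in [
    ("original", u),
    ("affine on all states", affine(u, a=3, b=4)),
    ("affine on vanilla/chocolate only", affine(u, a=3, b=4, states={"vanilla", "chocolate"})),
]:
    eu_gamble = expected_utility(utils, gamble)
    eu_vanilla = expected_utility(utils, sure_vanilla)
    print(f"{label}: EU(gamble)={eu_gamble:.1f}, EU(vanilla)={eu_vanilla:.1f}, "
          f"gamble preferred: {eu_gamble > eu_vanilla}")
```

With the partial transformation the gamble's ranking against the sure vanilla option flips, which is exactly the inconsistency created by leaving the 'neither' state untransformed.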

Comment by carlshulman on A toy model of the control problem · 2015-09-18T15:31:48.936Z · score: 1 (1 votes) · LW · GW

Maybe explain how it works when being configured, and then stops working when B gets a better model of the situation/runs more trial-and-error trials?

Comment by carlshulman on A toy model of the control problem · 2015-09-17T19:15:31.812Z · score: 6 (6 votes) · LW · GW

An illustration with a game-playing AI: see 15:50 and after in the video. The system has a reward function based on bytes in memory, which leads it to pause the game forever when it is about to lose.

Comment by carlshulman on A toy model of the control problem · 2015-09-17T18:02:50.614Z · score: 1 (1 votes) · LW · GW

That still involves training it with no negative feedback error term for excess blocks (which would overwhelm a mere 0.1% uncertainty).

Comment by carlshulman on A toy model of the control problem · 2015-09-17T03:02:48.466Z · score: 2 (4 votes) · LW · GW

Of course, with this model it's a bit of a mystery why A gave B a reward function that gives 1 per block, instead of one that gives 1 for the first block and a penalty for additional blocks. Basically, why program B with a utility function so seriously out of whack with what you want when programming one perfectly aligned would have been easy?

Comment by carlshulman on Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time · 2015-07-30T00:05:13.443Z · score: 0 (0 votes) · LW · GW

#1 is an early filter, meaning before our current state; #4 would be around or after our current state.

Do you mean that an alien FAI may look very much like an UFAI to us? If so, I agree.

Not in the sense of harming us. For the Fermi paradox visible benevolent aliens are as inconsistent with our observations as murderous Berserkers.

I'm trying to get you to explain why you think a belief that "AI is a significant risk" would change our credence in any of #1-5, compared to not believing that.

Comment by carlshulman on Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time · 2015-07-28T22:50:12.051Z · score: 1 (1 votes) · LW · GW

Let's consider a few propositions:

  1. There is enough cumulative early filtration that very few civilizations develop, with less than 1 in expectation in a region like our past light-cone.
  2. Interstellar travel is impossible.
  3. Some civilizations have expanded but have not engaged in mega-scale engineering that we could see, or in colonization that would have pre-empted our existence, and they enforce their rules on dissenters.
  4. Civilizations very reliably wipe themselves out before they can colonize.
  5. Civilizations very reliably choose not to expand at all.

#1-3 account for the Great Filter directly, and whether biological beings make AI they are happy with is irrelevant. For #4 and #5, what difference does it make whether biological beings make 'FAI' that helps them or 'UFAI' that kills them before going about its business? Either way the civilization (biological, machine, or both) could still wipe itself out or not (AIs could nuke each other out of existence too), and send out colonizers or not.

Unless there is some argument that 'UFAI' is much less likely to wipe out civilization (including itself), or much more likely to send out colonizers, how do the odds of alien 'FAI' vs 'UFAI' matter for explaining the Great Filter any more than whether aliens have scales or feathers? Either way they could produce visible signs or colonize Earth.

Comment by carlshulman on Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time · 2015-07-27T16:36:25.319Z · score: 2 (2 votes) · LW · GW

There's also the UFAI-Fermi-paradox:

This is just the regular Fermi paradox/Great Filter. If AI has any impact, it's that it may make space colonization easier. But what's important for that is that eventually industrial civilizations will develop AI (say, in a million years). Whether the ancient aliens would be happy with the civilization that does the colonizing (i.e. FAI vs. UFAI) is irrelevant to the Filter.

You could also have the endotherm-Fermi-paradox, or the hexapodal-Fermi-paradox, or the Klingon-Great-Filter, but there is little to be gained by slicing up the Filter in that way.

Comment by carlshulman on Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time · 2015-07-26T17:04:42.688Z · score: 6 (6 votes) · LW · GW

Furthermore, although smaller stars are much more common than larger stars (the Sun is actually larger than over 80% of stars in the universe) stars smaller than about 0.5 solar masses (and thus 0.08 solar luminosities) are usually ‘flare stars’ – possessing very strong convoluted magnetic fields and periodically putting out flares and X-ray bursts that would frequently strip away the ozone and possibly even the atmosphere of an earthlike planet.

I have been wanting better stats on this for a while. Basically, what percentage of the eventual sum of potential-for-life-weighted habitable windows (undisturbed by technology) comes from small red dwarfs that can exist far longer than our sun, weighing their long stellar lifetimes against the various (nasty-looking) problems? ETA: wikipedia article. And how robust is the evidence?
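
A back-of-the-envelope sketch of the calculation the question is asking for (the Salpeter IMF and the M^-2.5 lifetime scaling are standard rough approximations; the 0.5 solar-mass cutoff comes from the quoted passage, and the flare/atmosphere problems are ignored entirely here):

```python
import numpy as np

# Rough standard scalings (approximations only, ignoring flares, tidal locking, etc.):
#   initial mass function: dN/dM ~ M^-2.35 (Salpeter)
#   main-sequence lifetime: t ~ 10 Gyr * (M / Msun)^-2.5
masses = np.linspace(0.08, 2.0, 10_000)   # solar masses; rough range of long-lived stars
imf = masses ** -2.35                     # relative number of stars per unit mass
lifetime_gyr = 10.0 * masses ** -2.5      # crude main-sequence lifetime

# Most optimistic habitability weight for red dwarfs: just the raw lifetime.
weight = imf * lifetime_gyr
fraction_small = weight[masses < 0.5].sum() / weight.sum()
print(f"fraction of star-lifetime-weighted windows from M < 0.5 Msun: {fraction_small:.3f}")
```

Under this maximally optimistic weighting essentially all of the star-years come from the smallest stars, which is why the robustness of the flare-star and atmosphere-loss concerns matters so much for the bottom line.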

Comment by carlshulman on Andrew Ng dismisses UFAI concerns · 2015-03-06T17:27:41.093Z · score: 8 (8 votes) · LW · GW

See this video at 39:30 for Yann LeCun giving some comments. He said:

  • Human-level AI is not near
  • He agrees with Musk that there will be important issues when it becomes near
  • He thinks people should be talking about it but not acting, because (a) there is some risk and (b) the public thinks there is more risk than there is

Also here is an IEEE interview:

Spectrum: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?

LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

There are people that you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything on how to make progress in AI.

Spectrum: What do you think he is going to accomplish in his job at Google?

LeCun: Not much has come out so far.

Spectrum: I often notice when I talk to researchers about the Singularity that while privately they are extremely dismissive of it, in public, they’re much more temperate in their remarks. Is that because so many powerful people in Silicon Valley believe it?

LeCun: AI researchers, down in the trenches, have to strike a delicate balance: be optimistic about what you can achieve, but don’t oversell what you can do. Point out how difficult your job is, but don’t make it sound hopeless. You need to be honest with your funders, sponsors, and employers, with your peers and colleagues, with the public, and with yourself. It is difficult when there is a lot of uncertainty about future progress, and when less honest or more self-deluded people make wild claims of future success. That’s why we don’t like hype: it is made by people who are either dishonest or self-deluded, and makes the life of serious and honest scientists considerably more difficult.

When you are in the kind of position as Larry Page and Sergey Brin and Elon Musk and Mark Zuckerberg, you have to prepare for where technology is going in the long run. And you have a huge amount of resources to make the future happen in a way that you think will be good. So inevitably you have to ask yourself those questions: what will technology be like 10, 20, 30 years from now. It leads you to think about questions like the progress of AI, the Singularity, and questions of ethics.

Spectrum: Right. But you yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.

LeCun: Not anytime soon.

Spectrum: Or ever.

LeCun: No, you can’t say never; technology is advancing very quickly, at an accelerating pace. But there are things that are worth worrying about today, and there are things that are so far out that we can write science fiction about it, but there’s no reason to worry about it just now.

Comment by carlshulman on Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time" · 2015-01-23T05:37:17.227Z · score: 6 (6 votes) · LW · GW

AI that can't compete in the job market probably isn't a global catastrophic risk.

Comment by carlshulman on Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial · 2015-01-17T00:11:10.895Z · score: 7 (7 votes) · LW · GW

GiveWell is on the case, and has said it is looking at bio threats (as well as nukes, solar storms, interruptions of agriculture). See their blog post on global catastrophic risks potential focus areas.

The open letter is an indication that GiveWell should take AI risk more seriously, while the Musk donation is an indication that near-term room for more funding will be lower. That could go either way.

On the room for more funding question, it's worth noting that GiveWell and Good Ventures are now moving tens of millions of dollars per year, and have been talking about moving quite a bit more than Musk's donation to the areas the Open Philanthropy Project winds up prioritizing.

However, even if the amount of money does not exhaust the field, there may be limits on how fast it can be digested, and the efficient growth path, that would favor gradually increasing activity.

Comment by carlshulman on Open Thread, March 1-15, 2013 · 2015-01-16T02:28:17.200Z · score: 2 (2 votes) · LW · GW

For some of the same reasons depressed people take drugs to elevate their mood.

Comment by carlshulman on New paper from MIRI: "Toward idealized decision theory" · 2014-12-26T21:58:02.204Z · score: 0 (0 votes) · LW · GW

Typo, "amplified" vs "amplify":

"on its motherboard as a makeshift radio to amplified oscillating signals from nearby computers"

Comment by carlshulman on [Resolved] Is the SIA doomsday argument wrong? · 2014-12-15T05:16:13.873Z · score: 2 (2 votes) · LW · GW

Thanks Brian.

Comment by carlshulman on [Resolved] Is the SIA doomsday argument wrong? · 2014-12-15T04:59:07.191Z · score: 2 (2 votes) · LW · GW

It has been endorsed by Robin Hanson, Carl Shulman, and Nick Bostrom.

The article you cite for Shulman and Bostrom does not endorse the SIA-doomsday argument. It describes it, but:

  • Doesn't take a stance on the SIA; it does an analysis of alternatives including SIA
  • Argues that the interaction with the Simulation Argument changes the conclusion of the Fermi Paradox SIA Doomsday argument given the assumption of SIA.

Comment by carlshulman on Musk on AGI Timeframes · 2014-11-17T17:37:49.156Z · score: 3 (3 votes) · LW · GW

By "we" do you mean Gök Us Sibernetik Ar & Ge in Turkey? How many people work there?

Comment by carlshulman on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-07T23:41:54.478Z · score: 3 (3 votes) · LW · GW

"My intuition says it would be hard to mine a few million SNPs, pick the most strongly associated 9500, and have them account for less than .29 of the variance, even if there were no relationship at all."

With sample sizes of thousands or low tens of thousands you'd get almost nothing. Going from 130k to 250k subjects took it from 0.13 to 0.29 (where the total contribution of all common additive effects is around 0.5).

Most of the top 9500 are false positives (the top 697 are genome-wide significant and contribute most of the variance explained). Larger sample sizes let you overcome noise and correctly weight the alleles with actual effects. The approach looks set to explain everything you can get (and the bulk of heritability for height and IQ) without whole genome sequencing for rare variants just by scaling up another order of magnitude.
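
A toy simulation sketch of that dynamic (the number of SNPs, causal variants, heritability, and sample sizes are illustrative assumptions, not the actual study's parameters): with small samples the "most strongly associated" SNPs are dominated by noise and predict almost nothing out of sample, while larger samples increasingly pick out and correctly weight real effects.

```python
import numpy as np

rng = np.random.default_rng(0)
m_snps, k_causal, h2, top_k = 1_000_000, 50_000, 0.5, 9_500

# Illustrative architecture: many causal SNPs, each explaining a tiny slice of variance.
beta = np.zeros(m_snps)
beta[:k_causal] = rng.normal(0.0, np.sqrt(h2 / k_causal), k_causal)

for n in (10_000, 130_000, 250_000, 2_500_000):
    # Marginal GWAS estimates on standardized genotypes: beta_hat ~ beta + N(0, 1/n).
    beta_hat = beta + rng.normal(0.0, np.sqrt(1.0 / n), m_snps)
    top = np.argsort(np.abs(beta_hat))[-top_k:]   # the "most strongly associated" SNPs
    w = beta_hat[top]
    # Out-of-sample variance explained by the score (independent SNPs, unit trait variance).
    r2 = (w @ beta[top]) ** 2 / (w @ w)
    true_hits = np.sum(top < k_causal)
    print(f"n={n:>9,}: causal SNPs among top {top_k:,}: {true_hits:6,}, "
          f"predicted out-of-sample R^2 = {r2:.3f}")
```

The toy ignores linkage disequilibrium, rare variants, and population structure, but it reproduces the qualitative pattern: variance explained by a top-SNP score rises steeply with sample size even though the underlying genetic architecture is fixed.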

Comment by carlshulman on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-07T23:32:24.837Z · score: 3 (3 votes) · LW · GW

You can deal with epistasis using the techniques Hsu discusses and big datasets, and in any case additive variance terms account for most of the heritability even without doing that. There is much more about epistasis (and why it is of secondary importance for characterizing the variation) in the linked preprint.

Comment by carlshulman on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-07T23:29:21.037Z · score: 5 (5 votes) · LW · GW

A lot of negative-sum selection for height perhaps. The genetic architecture is already known well enough for major embryo selection, and the rest is coming quickly.

Height's contribution to CEO status is perhaps half of IQ's, and in addition to substantial effects on income it is also very helpful in the marriage market for men.

But many of the benefits are likely positional, reflecting the social status gains of being taller than others in one's social environment, and there are physiological costs (as well as use of selective power that could be used on health, cognition, and other less positional goods).

Choices at actual sperm banks suggest parents would use a mix that placed serious non-exclusive weight on each of height, attractiveness, health, education/intelligence, and anything contributing to professional success. Selection on personality might be for traits that improve individual success or for compatibility with parents, but I'm not sure about the net effect.

Selection for similarity on political and religious orientation might come into use, and could have disturbing and important consequences.

Comment by carlshulman on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-07T23:20:14.423Z · score: 2 (2 votes) · LW · GW

This application highlights a problem in that definition, namely gains of specialization. Say you produced humans with superhuman general intelligence as measured by IQ tests, maybe the equivalent of 3 SD above von Neumann. Such a human still could not be an expert in each and every field of intellectual activity simultaneously due to time and storage constraints.

The superhuman could perhaps master any given field better than any human given some time for study and practice, but could not so master all of them without really ridiculously superhuman prowess. This overkill requirement is somewhat like the way a rigorous Turing Test requires not only humanlike reasoning, but tremendous ability to tell a coherent fake story about biographical details, etc.

Comment by carlshulman on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-07T23:10:12.544Z · score: 5 (5 votes) · LW · GW

Part of their reason for funding deworming is also improvements in cognitive skills, for which the evidence base just got a boost.

Comment by carlshulman on Update on Kim Suozzi (cancer patient in want of cryonics) · 2014-09-09T03:15:55.271Z · score: 0 (0 votes) · LW · GW

Also no asymptotic speedup.

Comment by carlshulman on Update on Kim Suozzi (cancer patient in want of cryonics) · 2014-09-08T04:22:04.714Z · score: 0 (0 votes) · LW · GW

You can duplicate that D-Wave machine on a laptop.

Comment by carlshulman on This is why we can't have social science · 2014-07-14T18:21:51.458Z · score: 3 (3 votes) · LW · GW

Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way

Do you agree with the empirical claim about the frequencies of false positives in initial studies versus false negatives in replications?

Comment by carlshulman on [LINK] No Boltzmann Brains in an Empty Expanding Universe · 2014-05-08T17:02:33.320Z · score: 11 (11 votes) · LW · GW

Scott Aaronson on Motl's reliability, or lack thereof, with details of a specific case.

Comment by carlshulman on On the concept of "talent-constrained" organizations · 2014-03-15T23:29:38.195Z · score: 8 (9 votes) · LW · GW

This actually seems very common in office jobs where you find many workers with million dollar salaries. Wall Street firms, strategy consultancies, and law firms all use models in which salaries expand massively with time, with high attrition along the way: the "up-or-out" model.

Even academia gives tenured positions (which have enormous value to workers) only after trial periods as postdocs and assistant professors.

Main Street corporate executives have to climb the ranks.

Comment by carlshulman on On not diversifying charity · 2014-03-14T07:23:04.720Z · score: 7 (11 votes) · LW · GW

Moral pluralism or uncertainty might give a reason to construct a charity portfolio which serves multiple values, as might emerge from something like the parliamentary model.

Comment by carlshulman on A Rational Altruist Punch in The Stomach · 2014-03-02T05:45:51.976Z · score: 1 (1 votes) · LW · GW

As I said in response to Gwern's comment, there is uncertainty over rates of expropriation/loss, and the expected value disproportionately comes from the possibility of low loss rates. That is why Robin talks about 1/1000: he's raising the possibility that the legal order will be such as to sustain great growth, and that the laws of physics will allow unreasonably large populations or wealth.

Now, it is still a pretty questionable comparison, because there are plenty of other possibilities for mega-influence, like changing the probability that such compounding can take place (and isn't pre-empted by expropriation, nuclear war, etc).

Comment by carlshulman on A Rational Altruist Punch in The Stomach · 2014-03-02T05:38:30.113Z · score: 1 (1 votes) · LW · GW

So to survive, any perpetuity has a risk of 0.01^120 = 1.000000000000001e-240.

The premises in this argument aren't strong enough to support conclusions like that. Expropriation risks have declined strikingly, particularly in advanced societies, and it's easy enough to describe scenarios in which the annual risk of expropriation falls to extremely low levels, e.g. a stable world government run by patient immortals, or with an automated legal system designed for ultra-stability.

ETA: Weitzman on uncertainty about discount/expropriation rates.
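
A minimal numerical sketch of the Weitzman-style point (the probabilities and loss rates are illustrative assumptions): when the annual expropriation rate is uncertain, expected long-run survival is dominated by the low-rate scenarios, so it can be astronomically larger than survival computed at the average rate.

```python
# Illustrative (assumed) scenarios: (probability, annual expropriation/loss rate).
scenarios = [
    (0.90, 0.02),    # ordinary historical-style loss rates
    (0.09, 0.002),   # unusually stable institutions
    (0.01, 1e-6),    # e.g. an ultra-stable legal order run by patient immortals
]
years = 1000

expected_survival = sum(p * (1 - r) ** years for p, r in scenarios)
mean_rate = sum(p * r for p, r in scenarios)
survival_at_mean_rate = (1 - mean_rate) ** years

print(f"expected survival over {years} years:      {expected_survival:.3e}")
print(f"survival at the mean rate ({mean_rate:.4f}): {survival_at_mean_rate:.3e}")
```

Here nearly all of the expected survival comes from the 10% of scenarios with very low loss rates, which is why compounding arguments hinge on the probability of such regimes rather than on typical historical rates.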

Comment by CarlShulman on [deleted post] 2014-02-01T18:29:16.485Z

Pinker's data is on violence per capita - the total violence increased, it's just that the violence seems to scale sub-linearly with population.

Did you not read the book? He shows big declines in rates of wars, not just per capita damage from war.

Comment by CarlShulman on [deleted post] 2014-01-30T06:47:33.309Z

I agree that the risk of war is concentrated in changes in political conditions, and that the post-Cold War trough in conflict is too small to draw inferences from. Re the tentative trend, Pinker's assembled evidence goes back a long time, and covers many angles. It may fail to continue, and a nuclear war could change conditions thereafter, but there are many data points over time. If you want to give detail, feel free.

I would prefer to use representative expert opinion data from specialists in all the related fields (nuclear scientists, political scientists, diplomats, etc.), and the work of panels trying to assess the problem, and would defer to expert consensus in their various areas of expertise (as with climate science). But one can't update on views that have not been made known. Martin Hellman has called for an organized effort to estimate the risk, but without success as yet. I have been raising the task of better eliciting expert opinion and improving forecasting in this area, and worked to get it on the agenda at the FHI (as I did re the FHI survey of the most cited AI academics) and at other organizations. Where I have found information about experts' views I have shared it.

New article on in vitro iterated embryo selection

2013-08-08T19:28:16.758Z · score: 11 (14 votes)

Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb?

2013-06-19T01:55:05.775Z · score: 16 (19 votes)

Normative uncertainty in Newcomb's problem

2013-06-16T02:16:44.853Z · score: 6 (8 votes)

[Retracted] Simpson's paradox strikes again: there is no great stagnation?

2012-07-30T17:55:04.788Z · score: 30 (34 votes)

Satire of Journal of Personality and Social Psychology's publication bias

2012-06-05T00:08:27.479Z · score: 26 (27 votes)

Using degrees of freedom to change the past for fun and profit

2012-03-07T02:51:55.367Z · score: 41 (44 votes)

"The Journal of Real Effects"

2012-03-05T03:07:02.685Z · score: 18 (15 votes)

Feed the spinoff heuristic!

2012-02-09T07:41:28.468Z · score: 49 (51 votes)

Robopocalypse author cites Yudkowsky's paperclip scenario

2011-07-17T02:18:50.042Z · score: 3 (6 votes)

Follow-up on ESP study: "We don't publish replications"

2011-07-12T20:48:19.884Z · score: 71 (71 votes)

Proposal: consolidate meetup announcements before promotion

2011-05-03T01:34:26.807Z · score: 11 (14 votes)

Future of Humanity Institute hiring postdocs from philosophy, math, CS

2011-02-02T00:39:04.509Z · score: 4 (5 votes)

Future of Humanity Institute at Oxford hiring postdocs

2010-11-24T21:40:00.597Z · score: 6 (7 votes)

Probability and Politics

2010-11-24T17:02:11.537Z · score: 17 (24 votes)

Nils Nilsson's AI History: The Quest for Artificial Intelligence

2010-10-31T19:33:39.378Z · score: 13 (14 votes)

Politics as Charity

2010-09-23T05:33:57.645Z · score: 29 (42 votes)

Singularity Call For Papers

2010-04-10T16:08:00.347Z · score: 7 (10 votes)

December 2009 Meta Thread

2009-12-17T03:41:17.341Z · score: 6 (9 votes)

Boston Area Less Wrong Meetup: 2 pm Sunday October 11th

2009-10-07T21:15:14.155Z · score: 4 (5 votes)

New Haven/Yale Less Wrong Meetup: 5 pm, Monday October 12

2009-10-07T20:35:09.646Z · score: 3 (4 votes)

Open Thread: March 2009

2009-03-26T04:04:07.047Z · score: 6 (11 votes)

Don't Revere The Bearer Of Good Info

2009-03-21T23:22:50.348Z · score: 91 (89 votes)