New article on in vitro iterated embryo selection 2013-08-08T19:28:16.758Z
Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? 2013-06-19T01:55:05.775Z
Normative uncertainty in Newcomb's problem 2013-06-16T02:16:44.853Z
[Retracted] Simpson's paradox strikes again: there is no great stagnation? 2012-07-30T17:55:04.788Z
Satire of Journal of Personality and Social Psychology's publication bias 2012-06-05T00:08:27.479Z
Using degrees of freedom to change the past for fun and profit 2012-03-07T02:51:55.367Z
"The Journal of Real Effects" 2012-03-05T03:07:02.685Z
Feed the spinoff heuristic! 2012-02-09T07:41:28.468Z
Robopocalypse author cites Yudkowsky's paperclip scenario 2011-07-17T02:18:50.042Z
Follow-up on ESP study: "We don't publish replications" 2011-07-12T20:48:19.884Z
Proposal: consolidate meetup announcements before promotion 2011-05-03T01:34:26.807Z
Future of Humanity Institute hiring postdocs from philosophy, math, CS 2011-02-02T00:39:04.509Z
Future of Humanity Institute at Oxford hiring postdocs 2010-11-24T21:40:00.597Z
Probability and Politics 2010-11-24T17:02:11.537Z
Nils Nilsson's AI History: The Quest for Artificial Intelligence 2010-10-31T19:33:39.378Z
Politics as Charity 2010-09-23T05:33:57.645Z
Singularity Call For Papers 2010-04-10T16:08:00.347Z
December 2009 Meta Thread 2009-12-17T03:41:17.341Z
Boston Area Less Wrong Meetup: 2 pm Sunday October 11th 2009-10-07T21:15:14.155Z
New Haven/Yale Less Wrong Meetup: 5 pm, Monday October 12 2009-10-07T20:35:09.646Z
Open Thread: March 2009 2009-03-26T04:04:07.047Z
Don't Revere The Bearer Of Good Info 2009-03-21T23:22:50.348Z


Comment by carlshulman on What trade should we make if we're all getting the new COVID strain? · 2020-12-27T16:45:56.275Z · LW · GW

Little reaction to the new strain news, or little reaction to new strains outpacing vaccines and getting a large chunk of the population over the next several months?

Comment by carlshulman on The Colliding Exponentials of AI · 2020-11-01T03:26:43.210Z · LW · GW

These projections in figure 4 seem to falsely assume that optimal training compute scales linearly with model size. It doesn't: you also need to show more data to larger models, so training compute grows superlinearly with model size, as discussed in the OpenAI scaling papers. That changes the results by orders of magnitude (there is uncertainty about which of two inconsistent scaling trends to extrapolate further out, as discussed in the papers).
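For intuition, here is a minimal sketch (my own illustration, not the post's or the papers' actual numbers) of how a naive "compute scales linearly with parameters" extrapolation diverges from a superlinear scaling law of the Kaplan-style form C ≈ 6·N·D, where the compute-optimal token count D grows with model size; the exponent and the tokens-at-1B constant below are assumptions for illustration only:

```python
# Compare a naive linear-in-parameters compute extrapolation against a
# superlinear one where optimal training data also grows with model size.
def naive_compute(n_params, flops_per_param=6e9):
    # Pretends training cost is proportional to model size alone.
    return n_params * flops_per_param

def scaling_law_compute(n_params, tokens_at_1b=2.5e10, exponent=0.37):
    # Tokens needed grows with model size, so compute is superlinear in N.
    tokens = tokens_at_1b * (n_params / 1e9) ** exponent
    return 6 * n_params * tokens

for n in [1e9, 1e11, 1e13]:
    naive = naive_compute(n) / naive_compute(1e9)
    superlinear = scaling_law_compute(n) / scaling_law_compute(1e9)
    print(f"{n:.0e} params: naive x{naive:,.0f}, scaling-law x{superlinear:,.0f}")
```

Going from 1B to 100B parameters, the naive extrapolation multiplies compute by 100x, while the superlinear one multiplies it by several hundred x; at larger extrapolations the gap reaches orders of magnitude, which is the error being pointed out.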

Comment by carlshulman on Rafael Harth's Shortform · 2020-08-17T21:34:13.208Z · LW · GW


Comment by carlshulman on Are we in an AI overhang? · 2020-07-29T15:43:16.955Z · LW · GW
Maybe the real problem is just that it would add too much to the price of the car?

Yes. GPU/ASICs in a car will have to sit idle almost all the time, so the costs of running a big model on it will be much higher than in the cloud.

Comment by carlshulman on Rafael Harth's Shortform · 2020-07-23T16:04:40.404Z · LW · GW

I'm not a utilitarian, although I am closer to that than most people (scope sensitivity goes a long way in that direction), and find it a useful framework for highlighting policy considerations (but not the only kind of relevant normative consideration).

And no, Nick did not assert an estimate of x-risk as simultaneously P and <P.

Comment by carlshulman on Tips/tricks/notes on optimizing investments · 2020-06-05T17:44:44.891Z · LW · GW

This can prevent you from being able to deduct the interest as investment interest expense on your taxes due to interest tracing rules (you have to show the loan was not commingled with non-investment funds in an audit), and create a recordkeeping nightmare at tax time.

Comment by carlshulman on Open & Welcome Thread - June 2020 · 2020-06-05T15:43:55.152Z · LW · GW

Re hedging, a common technique is having multiple fairly different citizenships and foreign-held assets, i.e. such that if your country becomes dangerously oppressive, neither you nor your assets would be handed back to it. E.g. many Chinese elites pick up a Western citizenship for themselves or their children, and wealthy people fearing change in the US sometimes pick up New Zealand or Singapore homes and citizenship.

There are many countries with schemes to sell citizenship, although you often need to live in them for some years after making your investment. Then you can emigrate, before emigration is restricted, if things start to look too scary.

My sense, however, is that the current risk of needing this is very low in the US, and the most likely reason for someone with the means to buy citizenship to leave would just be increases in wealth/investment taxes through the ordinary political process, with extremely low chance of a surprise cultural revolution (with large swathes of the population imprisoned, expropriated or killed for claimed ideological offenses) or ban on emigration. If you take enough precautions to deal with changes in tax law I think you'll be taking more than you need to deal with the much less likely cultural revolution story.

Comment by carlshulman on The EMH Aten't Dead · 2020-05-19T05:43:10.121Z · LW · GW
April was the stock market's best month in 30 years, which is not really what you expect during a global pandemic.

Historically the biggest short-term gains have been disproportionately amidst or immediately following bear markets, when volatility is highest.

Comment by carlshulman on The EMH Aten't Dead · 2020-05-18T17:02:26.722Z · LW · GW

Sure, it's part of how they earn money, but competition between them limits what's left, since they're bidding against each other to take the other side from the retail investor, who buys from or sells to the hedge fund offering the best deal at the time (made somewhat worse by deadweight losses from investing in speed).

Comment by carlshulman on The EMH Aten't Dead · 2020-05-18T17:00:49.619Z · LW · GW
It doesn't suggest that. Factually, we know that a majority of investors underperform indexes.

Absolutely, I mean that when you break out the causes of the underperformance, you can see how much is from spending time out of the market, from paying high fees, from excessive trading to pay spreads and capital gains taxes repeatedly, from retail investors not starting with all their future earnings invested (e.g. often a huge factor in the Dalbar studies commonly cited to sell high fee mutual funds to retail investors), and how much from unwittingly identifying overpriced securities and buying them. And the last chunk is small relative to the rest.

When there's an event that will cause retail investors to predictably make bad investments, some hedge fund will do high-frequency trades as soon as the event becomes known, to be able to take the opposite side of the trade.

I agree, active investors correcting retail investors can earn normal profits on the EMH, and certainly market makers get spreads. But competition is strong, and spreads have been shrinking, so that's much less damaging than identifying seriously overpriced stocks and buying them.

Comment by carlshulman on The EMH Aten't Dead · 2020-05-18T02:55:50.800Z · LW · GW

Thank you, I enjoyed this post.

One thing I would add is that the EMH also suggests one can make deviations that don't have very high EMH-predicted costs. Small investors do underperform indexes a lot by paying extra fees, churning with losses to spreads and capital gains taxes, spending time out of the market, and taking too much or too little risk (and especially too much uncompensated risk from under-diversification). But given the EMH they also can't actively pick equities with large expected underperformance. Otherwise, a hedge fund could make huge profits by just doing the opposite (they compete the mispricing down to a level where they earn normal profits). Reversed stupidity is not intelligence. [Edited paragraph to be clear that typical retail investors do severely underperform, just mainly for reasons other than an uncanny ability to find overpriced securities and buy them.]

That consideration makes it more attractive, if one is uncertain about an edge, to consider investments that the EMH would predict should have very modest underperformance, but that some unusual information suggests would outperform a lot. I was persuaded to deviate from indexing after seeing high returns across several 'would-have-invested-in' (or did invest a little in, registered predictions on, etc.) cases of the sort Wei Dai discusses. So far doing so has been kind to my IRR vs benchmarks, but because I've only seen results across a handful of deviations (one was coronavirus-inspired market puts, inspired in part by Wei Dai and by my understanding from colleagues in the pandemic space, and held until late March based on a prior plan of waiting for clear community transmission in the US to become visible), the likelihood ratio is weak between the bottom two quadrants of your figure. I might fill in 'deluded lucky fool' in your poll. Yet I don't demand a very high credence in the good quadrant to outweigh the underdiversification costs of using these deviations as a stock-picking random number generator. That said, the bar for even that much credence in a purported edge is still very demanding.

I'd also flag that going all-in on EMH and modern financial theory still leads to fairly unusual investing behavior for a retail investor, more so than I had thought before delving into it. E.g. taking human capital into account in portfolio design, or really understanding the utility functions and beliefs required to justify standard asset allocation advice (vs something like maximizing expected growth rate/log utility of income/Kelly criterion, without a 0 leverage constraint), or just figuring out all the tax optimization (and investment choice interactions with tax law), like the Mega Backdoor Roth, donating appreciated stock, tax loss harvesting, or personal defined benefit pension plans. So there's a lot more to doing EMH investing right than just buying a Vanguard target date fund, and I would want to encourage people to do that work regardless.

Comment by carlshulman on Fast Takeoff in Biological Intelligence · 2020-04-27T03:03:01.255Z · LW · GW

I agree human maturation time is enough on its own to rule out a human reproductive biotech 'fast takeoff,' but also:

  • In any given year the number of new births is very small relative to the existing workforce, of billions of humans, including many people with extraordinary abilities
  • Most of those births are unplanned or to parents without access to technologies like IVF
  • New reproductive technologies are adopted gradually by risk-averse parents
  • Any radical enhancement would carry serious risks of negative surprise side effects, further reducing the user base of new tech
  • IVF is only used for a few percent of births in rich countries, and existing fancy versions are used even less frequently

All of those factors would smooth out any such application to spread out expected impacts over a number of decades, on top of the minimum from maturation times.

Comment by carlshulman on 2019 AI Alignment Literature Review and Charity Comparison · 2020-01-06T21:18:32.969Z · LW · GW
MIRI researchers contributed to the following research led by other organisations
MacAskill & Demski's A Critique of Functional Decision Theory

This seems like a pretty weird description of Demski replying to MacAskill's draft.

Comment by carlshulman on Does GPT-2 Understand Anything? · 2020-01-03T08:00:09.743Z · LW · GW

The interesting content kept me reading, but it would help the reader to have lines between paragraphs in the post.

Comment by carlshulman on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-26T21:12:09.173Z · LW · GW

I have launch codes and don't think this is good. Specifically, I think it's bad.

Comment by carlshulman on Why so much variance in human intelligence? · 2019-08-23T23:21:25.592Z · LW · GW

A mouse brain has ~75 million neurons, a human brain ~85 billion neurons. The standard deviation of human brain size is ~10%. If we think of that as a proportional increase rather than an absolute increase in the # of neurons, that's ~74 standard deviations of difference. The correlation between # of neurons and IQ in humans is ~0.3, but that's still a massive difference. Total neurons/computational capacity does show a pattern somewhat like that in the figure. Chimps' brains are a factor of ~3x smaller than humans, ~12 standard deviations.
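The back-of-the-envelope above can be checked directly; a quick sketch, treating each standard deviation as a 10% proportional step in brain size (an assumption taken from the comment itself, not a measured figure):

```python
import math

# Count how many ~10%-proportional standard deviations separate two
# brain sizes, per the comment's back-of-the-envelope reasoning.
def sd_gap(ratio, sd_fraction=0.10):
    # Number of (1 + sd_fraction) multiplicative steps needed to
    # multiply a brain's size by `ratio`.
    return math.log(ratio) / math.log(1 + sd_fraction)

human_vs_mouse = sd_gap(85e9 / 75e6)  # human vs mouse neuron counts
human_vs_chimp = sd_gap(3)            # human brain ~3x a chimp's
print(round(human_vs_mouse), round(human_vs_chimp))  # → 74 12
```

This reproduces the ~74 and ~12 standard-deviation figures quoted in the text.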

Selection can cumulatively produce gaps that are large relative to intraspecific variation (one can see the same relationships even more blatantly considering total body mass). Mice do show substantial variation in maze performance, etc.

And the cumulative cognitive work that has gone into optimizing the language, technical toolkit, norms, and other elements of human culture that humans are trained into is immensely beyond that of mice (and note that human training of animals can greatly expand the set of tasks they can perform, especially with some breeding to adjust their personalities to be more enthusiastic about training). Humans with their language abilities can properly interface with that culture, dwarfing the capabilities both of small animals and of people in smaller earlier human cultures with less accumulated technology or economies of scale.

Hominid culture took off enabled by human capabilities [so we are not incredibly far from the minimum needed for strongly accumulating culture, the selection effect you reference in the post], and kept rising over hundreds of thousands and millions of years, at an accelerating pace as the population grew with new tech, expediting further technical advance. Different regions advanced at different rates (generally larger connected regions grew faster, with more innovators to accumulate innovations), but all but the smallest advanced. So if humans overall had lower cognitive abilities, there would be slack for technological advance to have happened anyway, just at slower rates (perhaps manyfold slower), accumulating more by trial and error.

Human individual differences are also amplified by individual control over environments, e.g. people who find studying more congenial or fruitful study more and learn more.

Comment by carlshulman on No, it's not The Incentives—it's you · 2019-07-25T06:02:25.700Z · LW · GW

Survey and other data indicate that in these fields most people were doing p-hacking/QRPs (running tests selected ex post, optional stopping, reporting and publication bias, etc), but a substantial minority weren't, with individual, subfield, and field variation. Some people produced ~100% bogus work while others were ~0%. So it was possible to have a career without the bad practices Yarkoni criticizes, aggregating across many practices to look at overall reproducibility of research.

And he is now talking about people who have been informed about the severe effects of the QRPs (that they result in largely bogus research at large cost to science compared to reproducible alternatives that many of their colleagues are now using and working to reward) but choose to continue the bad practices. That group is also disproportionately tenured, so it's not a question of not getting a place in academia now, but of giving up on false claims they built their reputation around and reduced grants and speaking fees.

I think the core issue is that even though the QRPs that lead to mostly bogus research in fields such as social psych and neuroimaging often started off without intentional bad conduct, their bad effects have now become public knowledge, and Yarkoni is right to call out those people on continuing them and defending continuing them.

Comment by carlshulman on Unconscious Economics · 2019-03-28T02:40:26.775Z · LW · GW

There is a literature on firm productivity showing large variation in productivity across firms, with average productivity growth driven by the expansion of productive firms relative to less productive ones. E.g. this, this, this, and this.

Comment by carlshulman on What failure looks like · 2019-03-27T19:25:24.625Z · LW · GW

OK, thanks for the clarification!

My own sense is that the intermediate scenarios are unstable: if we have fairly aligned AI we immediately use it to make more aligned AI and collectively largely reverse things like Facebook click-maximization manipulation. If we have lost the power to reverse things then they go all the way to near-total loss of control over the future. So I would tend to think we wind up in the extremes.

I could imagine a scenario where there is a close balance among multiple centers of AI+human power, and some but not all of those centers have local AI takeovers before the remainder solve AI alignment, and then you get a world that is a patchwork of human-controlled and autonomous states, both types automated. E.g. the United States and China are taken over by their AI systems (including robot armies), but the Japanese AI assistants and robot army remain under human control and the future geopolitical system keeps both types of states intact thereafter.

Comment by carlshulman on What failure looks like · 2019-03-27T04:09:56.362Z · LW · GW
Failure would presumably occur before we get to the stage of "robot army can defeat unified humanity"---failure should happen soon after it becomes possible, and there are easier ways to fail than to win a clean war. Emphasizing this may give people the wrong idea, since it makes unity and stability seem like a solution rather than a stopgap. But emphasizing the robot army seems to have a similar problem---it doesn't really matter whether there is a literal robot army, you are in trouble anyway.

I agree other powerful tools can achieve the same outcome, and since in practice humanity isn't unified rogue AI could act earlier, but either way you get to AI controlling the means of coercive force, which helps people to understand the end-state reached.

It's good to both understand the events by which one is shifted into the bad trajectory, and to be clear on what the trajectory is. It sounds like your focus on the former may have interfered with the latter.

Comment by carlshulman on What failure looks like · 2019-03-27T04:02:42.168Z · LW · GW
I think we can probably build systems that really do avoid killing people, e.g. by using straightforward versions of "do things that are predicted to lead to videos that people rate as acceptable," and that at the point when things have gone off the rails those videos still look fine (and to understand that there is a deep problem at that point you need to engage with complicated facts about the situation that are beyond human comprehension, not things like "are the robots killing people?"). I'm not visualizing the case where no one does anything to try to make their AI safe, I'm imagining the most probable cases where people fail.

Haven't you yourself written about the failure modes of 'do things predicted to lead to videos that people rate as acceptable', where the attack involves surreptitiously reprogramming the camera to get optimal videos (including weird engineered videos designed to optimize on infelicities in the learned objective)? E.g. in your discussion of an unaligned benchmark here:

We train an autoregressive generative model, to maximize the log probability assigned to the actual sequence of actions and observations produced by the AI (with each observation conditioned on the past actions). This isn’t actually a good way to train the generative model, but it’s not really central to the discussion.
We train the reward function by showing humans sequences of actions and predicted observations, asking them to assign scores, then predicting those scores with supervised learning. We show humans the sequences of actions that look most promising to the system...
I am quite confident that “world which looks appealing on screen” is not the same as “world which is good;” the gap is already large today, and it will grow much larger over time.
Maintaining a fiction. Ultimately the goal of this AI is to make the sensors look like good things are happening in the world. This can be done either by making the world good or by carefully controlling what gets fed to the sensors. In general, there are many fewer constraints on “what gets fed to the sensors” than “what the world is actually like,” and so it’s going to be possible to produce much more desirable-looking outcomes, much more easily, by controlling the sensors.
If the humans understand what the AI is doing in detail then they can observe the sequence of actions to notice any actions that might corrupt the sensors. But as soon as the AI is engaging in complicated plans this appears to be totally intractable. Even if the AI was human-level, there are many extremely hard-to-detect plans: for example, the AI could collaborate with an external attacker, who will surreptitiously corrupt the sensor. Or the AI could create a copy of itself on an external server who then helped orchestrate a hard-to-detect corruption.

Are you now very confident that no AI company would implement something with this vulnerability? Or does the paragraph below just mean that the part where the AI gets control of the camera has to happen before the robot armies (or even robot armies identified as traceable to the AI in question), which then happen?

Part I has this focus because (i) that's where I think the action is---by the time you have robot armies killing everyone the ship is so sailed, I think a reasonable common-sense viewpoint would acknowledge this by reacting with incredulity to the "robots kill everyone" scenario, and would correctly place the "blame" on the point where everything got completely out of control even though there weren't actually robot armies yet (ii) the alternative visualization leads people to seriously underestimate the difficulty of the alignment problem, (iii) I was trying to describe the part of the picture which is reasonably accurate regardless of my views on the singularity.

Because it definitely seems that Vox got the impression from it that there is never a robot army takeover in the scenario, not that it's slightly preceded by camera hacking.

Is the idea that the AI systems develops goals over the external world (rather than the sense inputs/video pixels) so that they are really pursuing the appearance of prosperity, or corporate profits, and so don't just wirehead their sense inputs as in your benchmark post?

Comment by carlshulman on What failure looks like · 2019-03-26T22:15:14.270Z · LW · GW

I think the kind of phrasing you use in this post and others like it systematically misleads readers into thinking that in your scenarios there are no robot armies seizing control of the world (or rather, that all armies worth anything at that point are robotic, and so AIs in conflict with humanity means military force that humanity cannot overcome). I.e. AI systems pursuing badly aligned proxy goals or influence-seeking tendencies wind up controlling or creating that military power and expropriating humanity (which eventually couldn't fight back thereafter even if unified).

E.g. Dylan Matthews' Vox writeup of the OP seems to think that your scenarios don't involve robot armies taking control of the means of production and using the universe for their ends against human objections or killing off existing humans (perhaps destructively scanning their brains for information but not giving good living conditions to the scanned data):

Even so, Christiano’s first scenario doesn’t precisely envision human extinction. It envisions human irrelevance, as we become agents of machines we created.
Human reliance on these systems, combined with the systems failing, leads to a massive societal breakdown. And in the wake of the breakdown, there are still machines that are great at persuading and influencing people to do what they want, machines that got everyone into this catastrophe and yet are still giving advice that some of us will listen to.

The Vox article also mistakes the source of influence-seeking patterns, taking the point to be about social influence rather than about selection: systems that try to increase their power and numbers tend to succeed at doing so, and so are selected for if we accidentally or intentionally produce them and don't effectively weed them out; this is why living things are adapted to survive and expand. Such desires motivate conflict with humans when power and reproduction can be obtained by conflict with humans, which can look like robot armies taking control. That seems to me just a mistake about the meaning of influence you had in mind here:

Often, he notes, the best way to achieve a given goal is to obtain influence over other people who can help you achieve that goal. If you are trying to launch a startup, you need to influence investors to give you money and engineers to come work for you. If you’re trying to pass a law, you need to influence advocacy groups and members of Congress.
That means that machine-learning algorithms will probably, over time, produce programs that are extremely good at influencing people. And it’s dangerous to have machines that are extremely good at influencing people.

Comment by carlshulman on Act of Charity · 2018-12-18T20:14:49.885Z · LW · GW

There's an enormous difference between having millions of dollars of operating expenditures in an LLC (so that an org is legally allowed to do things like investigate non-deductible activities like investment or politics), and giving up the ability to make billions of dollars of tax-deductible donations. Open Philanthropy being an LLC (so that its own expenses aren't tax-deductible, but it has LLC freedom) doesn't stop Good Ventures from making all relevant donations tax-deductible, and indeed the overwhelming majority of grants on its grants page are deductible.

Comment by carlshulman on Two Neglected Problems in Human-AI Safety · 2018-12-18T17:38:24.320Z · LW · GW

I think this is under-discussed, but also that I have seen many discussions in this area. E.g. I have seen it come up and brought it up in the context of Paul's research agenda, where success relies on humans being able to play their part safely in the amplification system. Many people say they are more worried about misuse than accident on the basis of the corruption issues (and much discussion about CEV and idealization, superstimuli, etc addresses the kind of path-dependence and adversarial search you mention).

However, those varied problems mostly aren't formulated as 'ML safety problems in humans' (I have seen robustness and distributional shift discussion for Paul's amplification, and daemons/wireheading/safe-self-modification for humans and human organizations), and that seems like a productive framing for systematic exploration, going through the known inventories and trying to see how they cross-apply.

Comment by carlshulman on "Artificial Intelligence" (new entry at Stanford Encyclopedia of Philosophy) · 2018-07-19T19:59:26.860Z · LW · GW

No superintelligent AI computers, because they lack hypercomputation.

Comment by carlshulman on "Artificial Intelligence" (new entry at Stanford Encyclopedia of Philosophy) · 2018-07-19T19:45:47.604Z · LW · GW

Another Bringsjord classic :

> However, we give herein a novel, formal modal argument showing that since it's mathematically possible that human minds are hypercomputers, such minds are in fact hypercomputers.

Comment by carlshulman on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-07-03T18:54:19.215Z · LW · GW

That's what the congenital deafness discussion was about.

You have preferences over pain and pleasure intensities that you haven't experienced, and over new durations of experiences you do know. Otherwise you wouldn't have anything to worry about re torture, since you haven't experienced it.

Consider people with pain asymbolia:

Pain asymbolia is a condition in which pain is perceived, but with an absence of the suffering that is normally associated with the pain experience. Individuals with pain asymbolia still identify the stimulus as painful but do not display the behavioral or affective reactions that usually accompany pain; no sense of threat and/or danger is precipitated by pain.

Suppose you currently had pain asymbolia. Would that mean you wouldn't object to pain and suffering in non-asymbolics? What if you personally had only happened to experience extremely mild discomfort while having lots of great positive experiences? What about for yourself? If you knew you were going to get a cure for your pain asymbolia tomorrow would you object to subsequent torture as intrinsically bad?

We can go through similar stories for major depression and positive mood.

Seems it's the character of the experience that matters.

Likewise, if you've never experienced skiing, chocolate, favorite films, sex, victory in sports, and similar things that doesn't mean you should act as though they have no moral value. This also holds true for enhanced experiences and experiences your brain currently is unable to have, like the case of congenital deafness followed by a procedure to grant hearing and listening to music.

Comment by carlshulman on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-07-02T06:38:40.397Z · LW · GW

"My point was comparing pains and pleasures that could be generated with similar amount of resources. Do you think they balance out for human decision making?"

I think with current tech it's cheaper and easier to wirehead to increase pain (i.e. torture) than to increase pleasure or reduce pain. This makes sense biologically: organisms won't go looking for ways to wirehead to maximize their own pain, so evolution doesn't need to 'hide the keys' as much as with pleasure or pain relief (where the organism would actively seek out easy means of subverting the behavioral functions of the hedonic system). Thus when powerful addictive drugs are available, such as alcohol, human populations evolve increased resistance over time. The sex systems evolve to make masturbation less rewarding than reproductive sex under ancestral conditions, desire for play/curiosity is limited by boredom, delicious foods become less pleasant when one is full or when they are not later associated with nutritional sensors in the stomach, etc.

I don't think this is true with fine control over the nervous system (or a digital version) to adjust felt intensity and behavioral reinforcement. I think with that sort of full access one could easily increase the intensity (and ease of activation) of pleasures/mood such that one would trade them off against the most intense pains at ~parity per second, and attempts at subjective comparison when or after experiencing both would put them at ~parity.

People will willingly undergo very painful jobs and undertakings for money, physical pleasures, love, status, childbirth, altruism, meaning, etc. Unless you have a different standard for the 'boxes' than is used in subjective comparison with rich experience of the things to be compared, I think we're just haggling over the price re intensity.

We know the felt caliber and behavioral influence of such things can vary greatly. It would be possible to alter nociception and pain receptors to amp up or damp down any particular pain. This could even involve adding a new sense, e.g. someone with congenital deafness could be given the ability to hear (installing new nerves and neurons), and hear painful sounds, with artificially set intensity of pain. Likewise one could add a new sense (or dial one up) to enable stronger pleasures. I think that both the new pains and new pleasures would 'count' to the same degree (and if you're going to dismiss the pleasures as 'wireheading' then you should dismiss the pains too).

" For example, I'd strongly disagree to create a box of pleasure and a box of pain, do you think my preference would go away after extrapolation?"

You trade off pain and pleasure in your own life, are you saying that the standard would be different for the boxes than for yourself?

What are you using as the examples to represent the boxes, and have you experienced them? (As discussed in my link above, people often use weaksauce examples in such comparison.)

Comment by carlshulman on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-07-01T18:50:01.683Z · LW · GW

"one filled with pleasure and the other filled with pain, feels strongly negative rather than symmetric to us"

Comparing pains and pleasures of similar magnitude? People have a tendency not to do this, see the linked thread.

"Another sign is that pain is an internal experience, while our values might refer to the external world (though it's very murky"

You accept pain and risk of pain all the time to pursue various pleasures, desires and goals. Mice will cross electrified surfaces for tastier treats.

If you're going to care about hedonic states as such, why treat the external case differently?

Alternatively, if you're going to dismiss pleasure as just an indicator of true goals (e.g. that pursuit of pleasure as such is 'wireheading') then why not dismiss pain in the same way, as just a signal and not itself a goal?

Comment by carlshulman on Increasing GDP is not growth · 2017-03-02T01:55:45.657Z · LW · GW

I meant GWP without introducing the term. Edited for clarity.

Comment by carlshulman on Increasing GDP is not growth · 2017-02-19T20:28:29.500Z · LW · GW

If you have a constant population, and GDP increases, productivity per person has increased. But if you have a border on a map enclosing some people, and you move it so it encloses more people, productivity hasn't increased.

Can you give examples of people confirmed to be actually making the mistake this post discusses? I don't recall seeing any.

The standard economist claim (and the only version I've seen promulgated in LW and EA circles) is that it increases gross world product (total and per capita) because migrants are much more productive when they migrate to developed countries. Here is a set of references and counterarguments.

Separately, some people are keen to increase GDP in particular countries to pay off national fixed costs (like already incurred debts, or military spending).

Comment by carlshulman on Claim explainer: donor lotteries and returns to scale · 2016-12-31T01:17:33.114Z · LW · GW

I came up with the idea and basic method, then asked Paul if he would provide a donor lottery facility. He did so, and has been taking in entrants and solving logistical issues as they come up.

I agree that thinking/researching/discussing more dominates the gains in the $1-100k range.

Comment by carlshulman on Optimizing the news feed · 2016-12-02T00:26:05.112Z · LW · GW

A different possibility is identifying vectors in Facebook-behavior space, and letting users alter their feeds accordingly, e.g. I might want to see my feed shifted in the direction of more intelligent users, people outside the US, other political views, etc. At the individual level, I might be able to request a shift in my feed in the direction of individual Facebook friends I respect (where they give general or specific permission).
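A minimal sketch of the idea, assuming users and feed items are represented by embedding vectors (the function name, blending scheme, and weight parameter are my own illustration, not anything Facebook exposes):

```python
import numpy as np

def shifted_scores(item_vecs, user_vec, target_vec, alpha=0.5):
    """Score feed items against a user vector blended toward a target
    direction (e.g. 'more intelligent users' or a respected friend).
    alpha=0 is the unmodified feed; alpha=1 scores purely by the target."""
    blended = (1 - alpha) * user_vec + alpha * target_vec
    blended = blended / np.linalg.norm(blended)
    # Dot-product relevance; item vectors are assumed pre-normalized.
    return item_vecs @ blended
```

Items would then be ranked by these scores; the same machinery covers both named directions ("outside the US") and individual friends, since both reduce to a target vector.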

Comment by carlshulman on Synthetic supermicrobe will be resistant to all known viruses · 2016-11-24T05:08:50.863Z · LW · GW

That advantage only goes so far:

  • Plenty of nonviral bacteria-eating entities exist, and would become more numerous
  • Plant and antibacterial defenses aren't viral-based
  • For the bacteria to compete in the same niche as unmodified versions it has to fulfill a similar ecological role: photosynthetic cyanobacteria with altered DNA would still produce oxygen and provide food
  • It couldn't benefit from exchanging genetic material with other kinds of bacteria

Comment by carlshulman on Astrobiology III: Why Earth? · 2016-10-07T00:19:07.058Z · LW · GW

Primates and eukaryotes would be good.

Comment by carlshulman on Quick puzzle about utility functions under affine transformations · 2016-07-16T17:35:24.009Z · LW · GW

Your example has 3 states: vanilla, chocolate, and neither.

But you only explicitly assigned utilities to 2 of them, although you implicitly assigned the state of 'neither' a utility of 0 initially. Then when you applied the transformation to vanilla and chocolate you didn't apply it to the 'neither' state, which altered preferences for gambles over both transformed and untransformed states.

E.g. if we initially assigned u(neither)=0 then after the transformation we have u(neither)=4, u(vanilla)=7, u(chocolate)=12. Then an action with a 50% chance of neither and 50% chance of chocolate has expected utility 8, while the 100% chance of vanilla has expected utility 7.
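To make the comparison concrete, here's a quick numeric check (the pre-transformation utilities are hypothetical, chosen so that the affine transform u' = u + 4 reproduces the numbers above):

```python
def expected_utility(u, lottery):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

# Hypothetical original utilities; the transform u' = u + 4 yields the
# values in the comment: neither=4, vanilla=7, chocolate=12.
u = {"neither": 0, "vanilla": 3, "chocolate": 8}

gamble = {"neither": 0.5, "chocolate": 0.5}
sure_vanilla = {"vanilla": 1.0}

# Before the transform, the gamble (EU = 4) beats sure vanilla (EU = 3).
assert expected_utility(u, gamble) > expected_utility(u, sure_vanilla)

# Applying the transform to *every* state preserves that preference:
u_full = {s: v + 4 for s, v in u.items()}
assert expected_utility(u_full, gamble) > expected_utility(u_full, sure_vanilla)

# Forgetting the implicit 'neither' state flips it (EU 6 vs 7):
u_partial = {"neither": 0, "vanilla": 7, "chocolate": 12}
assert expected_utility(u_partial, gamble) < expected_utility(u_partial, sure_vanilla)
```

The flip in the last line is exactly the failure mode described above: only a positive affine transformation applied to all outcomes leaves choices among gambles unchanged.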

Comment by carlshulman on A toy model of the control problem · 2015-09-18T15:31:48.936Z · LW · GW

Maybe explain how it works when being configured, and then stops working when B gets a better model of the situation/runs more trial-and-error trials?

Comment by carlshulman on A toy model of the control problem · 2015-09-17T19:15:31.812Z · LW · GW

An illustration with a game-playing AI: see 15:50 and after in the video. The system has a reward function based on bytes in memory, which leads it to pause the game forever when it is about to lose.

Comment by carlshulman on A toy model of the control problem · 2015-09-17T18:02:50.614Z · LW · GW

That still involves training it with no negative feedback error term for excess blocks (which would overwhelm a mere 0.1% uncertainty).

Comment by carlshulman on A toy model of the control problem · 2015-09-17T03:02:48.466Z · LW · GW

Of course, with this model it's a bit of a mystery why A gave B a reward function that gives 1 per block, instead of one that gives 1 for the first block and a penalty for additional blocks. Basically, why program B with a utility function so seriously out of whack with what you want when programming one perfectly aligned would have been easy?
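The gap between the two reward functions can be sketched in a few lines (both functions are hypothetical illustrations of this toy model, not code from the post):

```python
def reward_given(blocks_in_hole):
    """The reward function A actually gave B: 1 per block."""
    return blocks_in_hole

def reward_intended(blocks_in_hole):
    """What A presumably wanted: 1 for the first block, -1 per extra block."""
    if blocks_in_hole == 0:
        return 0
    return 1 - (blocks_in_hole - 1)

# Under the given reward, piling in more blocks is always better:
assert reward_given(5) > reward_given(1)
# Under the intended reward, exactly one block is optimal:
assert max(range(10), key=reward_intended) == 1
```

The point of the comment is that writing the second function is no harder than writing the first, so the model doesn't explain why A would end up with the misaligned one.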

Comment by carlshulman on Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time · 2015-07-30T00:05:13.443Z · LW · GW

#1 is an early filter, meaning before our current state; #4 would be around or after our current state.

"Do you mean that an alien FAI may look very much like an UFAI to us? if so I agree."

Not in the sense of harming us. For the Fermi paradox, visible benevolent aliens are as inconsistent with our observations as murderous Berserkers.

I'm trying to get you to explain why you think a belief that "AI is a significant risk" would change our credence in any of #1-5, compared to not believing that.

Comment by carlshulman on Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time · 2015-07-28T22:50:12.051Z · LW · GW

Let's consider a few propositions:

  1. There is enough cumulative early filtration that very few civilizations develop, with less than 1 in expectation in a region like our past light-cone.
  2. Interstellar travel is impossible.
  3. Some civilizations have expanded but not engaged in mega-scale engineering that we could see or colonization that would have pre-empted our existence, and enforce their rules on dissenters.
  4. Civilizations very reliably wipe themselves out before they can colonize.
  5. Civilizations very reliably choose not to expand at all.

1-3 account for the Great Filter directly, and whether biological beings make AI they are happy with is irrelevant. For #4 and #5 what difference does it make whether biological beings make 'FAI' that helps them or 'UFAI' that kills them before going about its business? Either way the civilization (biological, machine, or both) could still wipe itself out or not (AIs could nuke each other out of existence too), and send out colonizers or not.

Unless there is some argument that 'UFAI' is much less likely to wipe out civilization (including itself), or much more likely to send out colonizers, how do the odds of alien 'FAI' vs 'UFAI' matter for explaining the Great Filter any more than whether aliens have scales or feathers? Either way they could produce visible signs or colonize Earth.

Comment by carlshulman on Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time · 2015-07-27T16:36:25.319Z · LW · GW

There's also the UFAI-Fermi-paradox:

This is just the regular Fermi paradox/Great Filter. If AI has any impact, it's that it may make space colonization easier. But what's important for that is that eventually industrial civilizations will develop AI (say in a million years). Whether the ancient aliens would be happy with the civilization that does the colonizing is irrelevant (i.e. UFAI/FAI) to the Filter.

You could also have the endotherm-Fermi-paradox, or the hexapodal-Fermi-paradox, or the Klingon-Great-Filter, but there is little to be gained by slicing up the Filter in that way.

Comment by carlshulman on Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time · 2015-07-26T17:04:42.688Z · LW · GW

Furthermore, although smaller stars are much more common than larger stars (the Sun is actually larger than over 80% of stars in the universe) stars smaller than about 0.5 solar masses (and thus 0.08 solar luminosities) are usually ‘flare stars’ – possessing very strong convoluted magnetic fields and periodically putting out flares and X-ray bursts that would frequently strip away the ozone and possibly even the atmosphere of an earthlike planet.

I have been wanting better stats on this for a while. Basically, what percentage of the eventual sum of potential-for-life-weighted habitable windows (undisturbed by technology) comes from small red dwarfs, which can exist far longer than our sun but face the various nasty-looking problems above? ETA: wikipedia article. And how robust is the evidence?

Comment by carlshulman on Andrew Ng dismisses UFAI concerns · 2015-03-06T17:27:41.093Z · LW · GW

See this video at 39:30 for Yann LeCun giving some comments. He said:

  • Human-level AI is not near
  • He agrees with Musk that there will be important issues when it becomes near
  • He thinks people should be talking about it but not acting, because (a) there is some risk and (b) the public thinks there is more risk than there is

Also here is an IEEE interview:

Spectrum: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?

LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

There are people that you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything on how to make progress in AI.

Spectrum: What do you think he is going to accomplish in his job at Google?

LeCun: Not much has come out so far.

Spectrum: I often notice when I talk to researchers about the Singularity that while privately they are extremely dismissive of it, in public, they’re much more temperate in their remarks. Is that because so many powerful people in Silicon Valley believe it?

LeCun: AI researchers, down in the trenches, have to strike a delicate balance: be optimistic about what you can achieve, but don’t oversell what you can do. Point out how difficult your job is, but don’t make it sound hopeless. You need to be honest with your funders, sponsors, and employers, with your peers and colleagues, with the public, and with yourself. It is difficult when there is a lot of uncertainty about future progress, and when less honest or more self-deluded people make wild claims of future success. That’s why we don’t like hype: it is made by people who are either dishonest or self-deluded, and makes the life of serious and honest scientists considerably more difficult.

When you are in the kind of position as Larry Page and Sergey Brin and Elon Musk and Mark Zuckerberg, you have to prepare for where technology is going in the long run. And you have a huge amount of resources to make the future happen in a way that you think will be good. So inevitably you have to ask yourself those questions: what will technology be like 10, 20, 30 years from now. It leads you to think about questions like the progress of AI, the Singularity, and questions of ethics.

Spectrum: Right. But you yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.

LeCun: Not anytime soon.

Spectrum: Or ever.

LeCun: No, you can’t say never; technology is advancing very quickly, at an accelerating pace. But there are things that are worth worrying about today, and there are things that are so far out that we can write science fiction about it, but there’s no reason to worry about it just now.

Comment by carlshulman on Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time" · 2015-01-23T05:37:17.227Z · LW · GW

AI that can't compete in the job market probably isn't a global catastrophic risk.

Comment by carlshulman on Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial · 2015-01-17T00:11:10.895Z · LW · GW

GiveWell is on the case, and has said it is looking at bio threats (as well as nukes, solar storms, interruptions of agriculture). See their blog post on global catastrophic risks potential focus areas.

The open letter is an indication that GiveWell should take AI risk more seriously, while the Musk donation is an indication that near-term room for more funding will be lower. That could go either way.

On the room for more funding question, it's worth noting that GiveWell and Good Ventures are now moving tens of millions of dollars per year, and have been talking about moving quite a bit more than Musk's donation to the areas the Open Philanthropy Project winds up prioritizing.

However, even if the amount of money does not exhaust the field, there may be limits on how fast it can be digested, and an efficient growth path that would favor gradually increasing activity.

Comment by carlshulman on Open Thread, March 1-15, 2013 · 2015-01-16T02:28:17.200Z · LW · GW

For some of the same reasons depressed people take drugs to elevate their mood.

Comment by carlshulman on New paper from MIRI: "Toward idealized decision theory" · 2014-12-26T21:58:02.204Z · LW · GW

Typo, "amplified" vs "amplify":

"on its motherboard as a makeshift radio to amplified oscillating signals from nearby computers"

Comment by carlshulman on [Resolved] Is the SIA doomsday argument wrong? · 2014-12-15T05:16:13.873Z · LW · GW

Thanks Brian.