Posts

[Expired] 20,000 Free $50 Charity Gift Cards 2020-12-11T20:02:13.674Z

Comments

Comment by WilliamKiely on Covid 12/24: We’re F***ed, It’s Over · 2021-09-08T05:03:44.904Z · LW · GW

My 8-months-ago self would be surprised to learn that the US average of COVID-19 deaths/day has risen again to 1,300. I don't understand why this happened. Does anyone know? Is it a combination of vaccine and/or natural immunity not lasting? Or is it that there are still a lot of unvaccinated people? Or were my estimates of how many Americans had already been infected too high?

Comment by WilliamKiely on Josh Jacobson's Shortform · 2021-08-20T01:16:17.735Z · LW · GW

A related question I've never seen data on: how much more dangerous, per mile driven, is driving at night than driving during the day?

Comment by WilliamKiely on Josh Jacobson's Shortform · 2021-08-20T01:14:45.806Z · LW · GW

Perhaps the more accurate way to state Romeo's point is that time spent driving through intersections is (much) more dangerous than time spent driving elsewhere on roads, highways, etc.

Comment by WilliamKiely on Is the potential astronomical waste in our universe too small to care about? · 2021-08-13T19:02:36.846Z · LW · GW

Wei, insofar as you are making the deal with yourself, consider that in the world in which it turns out that the universe could support doing at least 3^^^3 ops, you may not be physically capable of changing yourself to work more toward longtermist goals than you would otherwise. (I.e., human nature is such that making huge sacrifices to your standard of living and quality of life negatively affects your ability to work productively on longtermist goals for years.) If this is the case, then the deal won't work, since one part of you can't uphold the bargain. So in the world in which it turns out that the universe can support only 10^120 ops, you should not devote less effort to longtermism than you would otherwise, despite being physically capable of devoting less effort.

In a related kind of deal, both parts of you may be capable of upholding the bargain, in which case I think such deals may be valid. But it seems to me that you don't need UDT-like reasoning and the deal to believe that your future self, with better knowledge of the size of the cosmic endowment, ought to change his behavior in the way the deal argument implies. Example: Suppose you're a philanthropist who, while initially uncertain about the size of the cosmic endowment, plans to spend $X of your wealth on short-termist philanthropy and $X on longtermist philanthropy because you think this is optimal given your current beliefs and uncertainty. If you later find out that the universe can support 3^^^3 ops, I think this should cause you to shift how you spend your $2X toward longtermist philanthropy, simply because the longtermist philanthropic opportunities now seem more valuable. Similarly, if you find out that the universe can support only 10^120 ops, then you ought to update toward giving more to short-termist philanthropy.

So is there really a case that UDT-like reasoning, plus hypothetical deals our past selves could have made with themselves, suggests we ought to behave differently when we learn new things about the world than more common reasoning suggests? I don't see it.

Comment by WilliamKiely on The unexpected difficulty of comparing AlphaStar to humans · 2021-07-18T17:49:18.550Z · LW · GW

How many years do you think it will be until we see (in public) an agent which only gets screen pixels as input, has human-level apm and reaction speed, and is very clearly better than the best humans?

Respondents had a median prediction of two years and an expertise-weighted mean prediction of a little less than four years.

It's now been about two years and this hasn't happened yet. That might simply be because DeepMind stopped work on this?

Comment by WilliamKiely on Predict responses to the "existential risk from AI" survey · 2021-06-03T08:44:07.822Z · LW · GW

I agree. For me, the clarification note completely changed my interpretation of the question (and the answer I would give to my understanding of the question). I decided to record my answer as 50% for this reason.

Comment by WilliamKiely on Covid 12/24: We’re F***ed, It’s Over · 2021-05-24T16:07:39.401Z · LW · GW

It seems troubling that one of the most upvoted COVID-19 posts on LessWrong is one that argued for a prediction that I think we should score really poorly.

I agree. FWIW, I strong-downvoted this post in December. I think it's the first LW post I have ever strong-downvoted.

Additionally, I commented on it (and on threads where this post was shared elsewhere, e.g. on Facebook) to explain my disagreement with it, and recorded roughly the lowest forecast of anyone who submitted one here on whether there'd be a 4th wave in the US in 2021.

What I failed to do was offer to bet here on the 4th wave question. I think the only time that I tried to make a bet on this topic was in a discussion on Facebook (set to friends-only) that began with "Well, it's time to pull the fire alarm on the UK mutation." I commented on the post on 12/26/20 with the following:

Would you be interested in operationalizing a bet on this? (If you don't think it's good practice to bet money on COVID infections/cases/deaths, or otherwise aren't interested in betting money, we can just make it a reputational bet.)

I get the sense that, like Zvi, you are being too pessimistic about how bad the new strain will be for the US relative to how bad things would have been even without the new strain.

However, the bet never came to fruition.

Comment by WilliamKiely on Prediction markets for internet points? · 2021-01-11T03:55:01.742Z · LW · GW

Launching Forecast, a community for crowdsourced predictions from Facebook

Comment by WilliamKiely on Covid 12/24: We’re F***ed, It’s Over · 2020-12-27T00:21:35.811Z · LW · GW

In this post Zvi doesn't try to forecast how many infections/cases/deaths there would be in the US without this new strain (unless I missed it... it is a long post). Yet he really should, because doing so would lead one to realize that the US is likely going to be at or close to herd immunity by ~May-June anyway, so a new, more transmissible strain that becomes dominant in the US around that same period can't plausibly make as huge a difference as Zvi seems to be saying in this post.

Good Judgment's median estimate for "How many total cases of COVID-19 in the U.S. will be estimated as of 31 March 2021?" is ~130M currently. And Good Judgment's median estimate for "When will enough doses of FDA-approved COVID-19 vaccine(s) to inoculate 100 million people be distributed in the United States?" is ~May 1st currently. https://goodjudgment.io/covid/dashboard/

Assuming that 20% of vaccines go to people who have already been infected, this would mean that by May approximately 220M people in the US (220M ≈ 140M infected + 0.8 × 100M vaccinated) will be immune, or about 66% of the population. This could easily be higher or lower, but the point is that we're going to be at or close to herd immunity by the time Zvi says this new viral strain will start becoming dominant in the US.
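
Spelling out that arithmetic (a rough sketch; the 331M population figure is my assumption, and the 20% overlap is the assumption above):

```python
infections = 140e6   # projected cumulative US infections by ~May
vaccinated = 100e6   # people inoculated, per the Good Judgment estimate
overlap = 0.20       # assumed share of vaccinees who were already infected
population = 331e6   # approximate US population (my assumption)

immune = infections + (1 - overlap) * vaccinated
print(f"~{immune / 1e6:.0f}M immune, ~{immune / population:.0%} of the population")
# -> ~220M immune, ~66% of the population
```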

In short, the news would be much worse if this new viral strain had reached its current prevalence several months ago. But in reality, I think we'll be at or close to herd immunity by the time it becomes dominant, so it won't make that much of a difference.

EDIT: I misread Zvi's piece initially and mistakenly thought he wrote that the new strain wouldn't become dominant in the US until May. I now see that he says "Instead of that being the final peak and things only improving after that, we now face a potential fourth wave, likely cresting between March and May, that could be sufficiently powerful to substantially overshoot herd immunity." Taking this view as true instead makes me see the new strain as significantly worse news: specifically, this two-month shift might be sufficient to cause an additional ~10-15% of the population to get infected/sick before herd immunity is reached. (I still think the post title is overblown, but this is a significant update for me.)

Comment by WilliamKiely on Covid 12/24: We’re F***ed, It’s Over · 2020-12-26T07:43:58.987Z · LW · GW

I chose an operationalization for the second question in this comment: https://www.lesswrong.com/posts/CHtwDXy63BsLkQx4n/covid-12-24-we-re-f-ed-it-s-over?commentId=A38t5Ffxbm6GhpXuk

Comment by WilliamKiely on Covid 12/24: We’re F***ed, It’s Over · 2020-12-26T07:39:54.660Z · LW · GW

Some of my thoughts that lead me to think this are in my comments on this Metaculus question: https://pandemic.metaculus.com/questions/3988/how-many-total-deaths-in-the-us-will-be-directly-attributed-to-covid-19-in-2021/

Comment by WilliamKiely on Covid 12/24: We’re F***ed, It’s Over · 2020-12-26T07:37:05.932Z · LW · GW

IMO, the title is overly dramatic: it seems to claim that the news about the new strain is more significant than I actually think it is, in terms of how much it should cause us to update our views of what infection risk and US COVID-19 deaths will be in 2021.

Comment by WilliamKiely on Covid 12/24: We’re F***ed, It’s Over · 2020-12-26T07:32:53.157Z · LW · GW

Operationalizing "There will be an additional distinct large wave of Covid-19 infections in the United States 2021" as "The 7-day average of new cases according to Worldometers will decrease by at least 33% from a previous value in 2021 and then later increase to at least 150% of the previous high", I'm forecasting 38%.

(EDIT: Update 12/28: I updated my forecast to 48% after realizing that I had my timing wrong on when the new strain might become dominant in the US. Previously I thought Zvi said something like 'not until May, or maybe June or July', but I now see he actually said "Instead of that being the final peak and things only improving after that, we now face a potential fourth wave, likely cresting between March and May, that could be sufficiently powerful to substantially overshoot herd immunity." If the new strain actually becomes dominant in March or early April (instead of May or later, as I mistakenly thought Zvi was saying before) and is as transmissible as it seems, that will probably be soon enough that there will still be enough non-immune people for a significant surge in cases, which would cause the above forecasting question to resolve positively.)

This operationalization isn't that great because changes in numbers of tests could affect it a lot, but at least it's concrete.
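
To make that rule fully checkable, here is a rough sketch of how I'd evaluate it against a series of 7-day-average case counts (the 33%/150% thresholds come from the operationalization above; the data source is left abstract):

```python
def has_new_wave(seven_day_avgs, drop=0.33, rise=1.50):
    """Return True if the 7-day average falls at least 33% below a prior
    2021 high and later climbs to at least 150% of that high."""
    running_high = 0.0
    dropped_from = None  # the highest prior peak we've fallen >=33% below
    for avg in seven_day_avgs:
        running_high = max(running_high, avg)
        if avg <= running_high * (1 - drop):
            dropped_from = max(dropped_from or 0.0, running_high)
        if dropped_from is not None and avg >= dropped_from * rise:
            return True
    return False

# Example: a 200k/day peak, a drop to 120k, then a surge to 310k counts.
print(has_new_wave([150_000, 200_000, 120_000, 310_000]))  # True
```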

Alternatively, we could operationalize it in terms of the midpoint new-infections estimate at https://covid19-projections.com/. Doing this and using the same 33% and 150% thresholds as above, I'd forecast 32%.

(For the Elicit question in the post, I went with the first operationalization and said 38% (EDIT 12/28: Now 48%.))

Comment by WilliamKiely on Covid 12/24: We’re F***ed, It’s Over · 2020-12-26T07:14:21.217Z · LW · GW

Thanks! On mobile I had to zoom in to reliably tap directly on the bar, which I didn't try originally.

Comment by WilliamKiely on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T15:11:41.775Z · LW · GW

FYI, I'm not actually forecasting 50% on the two Elicit questions at the end of the post. Tapping on the distributions caused me to unintentionally make forecasts on them. I was able to modify the forecasts but saw no way to remove them, so I just set them to 50% to mislead others as little as possible. (While I'd like to actually make forecasts on these questions, I think how they are operationalized matters a lot, and yet I did not see any operationalization provided for them.)

Comment by WilliamKiely on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T00:37:38.863Z · LW · GW

How are these questions being operationalized?

Comment by WilliamKiely on [Expired] 20,000 Free $50 Charity Gift Cards · 2020-12-12T03:34:52.321Z · LW · GW

Yes, updated the main post, thanks.

Comment by WilliamKiely on [Expired] 20,000 Free $50 Charity Gift Cards · 2020-12-12T03:34:05.923Z · LW · GW

The opportunity has expired (about 9 hours after the start) according to an update on the site: "We’re happy to share that together, we’ve given away 30,000 Charity Gift Cards to support the causes you care about, for a total of $2 million donated."

Comment by WilliamKiely on [Expired] 20,000 Free $50 Charity Gift Cards · 2020-12-11T21:12:12.962Z · LW · GW

For another worthwhile free money opportunity available until December 25th, see: Make a $10 donation into $35

Comment by WilliamKiely on [Expired] 20,000 Free $50 Charity Gift Cards · 2020-12-11T20:30:18.334Z · LW · GW

Hi, I believe these should all be available (I personally checked Malaria Consortium and saw that it was available). Did you try using the Search function, at the bottom right of the page listing the charity categories, to find the charities?

Comment by WilliamKiely on [Expired] 20,000 Free $50 Charity Gift Cards · 2020-12-11T20:19:42.451Z · LW · GW

Thanks! FWIW, when it runs out I expect to know pretty soon afterwards. I'll update the post and DM you as soon as I find out.

Comment by WilliamKiely on Limits of Current US Prediction Markets (PredictIt Case Study) · 2020-11-04T00:11:09.482Z · LW · GW

if we assume you are paying 35% income tax

This tweet claims "winnings are taxed as gambling income, and subject to a flat 25% rate". Is that the case, or are they taxed as normal income, as the OP claims?

Comment by WilliamKiely on Why indoor lighting is hard to get right and how to fix it · 2020-11-02T17:35:10.609Z · LW · GW

Lux meter

Being able to measure how much light is in your house is useful and inexpensive. I use a $30 Uceri meter that I ordered from Amazon.

Is there much benefit in getting the Uceri meter rather than using a free phone app like Light Meter, which uses my phone's camera to measure lux?

Comment by WilliamKiely on The US Already Has A Wealth Tax · 2020-08-19T17:06:31.575Z · LW · GW

If we interpret his claim as an additional 0.5% wealth tax, do you still think he is overstating it?

Comment by WilliamKiely on Seemingly Popular Covid-19 Model is Obvious Nonsense · 2020-04-17T14:48:12.304Z · LW · GW

April 17th Stat News story: Influential Covid-19 model uses flawed methods and shouldn’t guide U.S. policies, critics say:

“It’s not a model that most of us in the infectious disease epidemiology field think is well suited” to projecting Covid-19 deaths, epidemiologist Marc Lipsitch of the Harvard T.H. Chan School of Public Health told reporters this week, referring to projections by the Institute for Health Metrics and Evaluation at the University of Washington.
Other experts, including some colleagues of the model-makers, are even harsher. “That the IHME model keeps changing is evidence of its lack of reliability as a predictive tool,” said epidemiologist Ruth Etzioni of the Fred Hutchinson Cancer Center, home to several of the researchers who created the model, and who has served on a search committee for IHME. “That it is being used for policy decisions and its results interpreted wrongly is a travesty unfolding before our eyes.”

Comment by WilliamKiely on Seemingly Popular Covid-19 Model is Obvious Nonsense · 2020-04-13T06:09:57.054Z · LW · GW

Zvi commenting on his The One Mistake Rule post: "E.g. if you want to bet me that there will be no American Covid-19 deaths in July, I will be very, very surprised."

Comment by WilliamKiely on March Coronavirus Open Thread · 2020-03-14T08:19:40.981Z · LW · GW

Hospital bed availability at peak infections is 4% (25x over capacity) in the uncontrolled beta=0.25 scenario and only improves to 10% (10x over capacity) in the "controlled" beta=0.14 scenario.

Alex, I'm looking at your spreadsheet and I don't understand where you got these bold numbers from. It looks like you tweaked your sheet a bit since writing this comment, but still I can't figure out what you are looking at when you say 25x and 10x over capacity. Could you explain?

Comment by WilliamKiely on When to Reverse Quarantine and Other COVID-19 Considerations · 2020-03-13T23:39:48.988Z · LW · GW

Should read "~8 months = ~250 days"

Comment by WilliamKiely on Moral public goods · 2020-01-30T07:05:08.215Z · LW · GW

Scott Alexander makes a similar point in his post Too Much Dark Money in Almonds, arguing that the main reason why people do not donate much more money to politics and charity is because there is a public goods problem and lack of a coordinating mechanism: "People just can’t coordinate. If everyone who cared about homelessness donated $100 to the problem, homelessness would be solved. Nobody does this, because they know that nobody else is going to do it, and their $100 is just going to feel like a tiny drop in the ocean that doesn’t change anything."

Comment by WilliamKiely on Moral public goods · 2020-01-30T06:17:04.963Z · LW · GW

(Like, this is where the <1% in your post comes from, right?)

No, the <1% in the post comes from the other "bad option" (the first being that "They care about themselves >10 million times as much as other people"), namely that people care about themselves <10 million times as much as other people. (Since there are more than a billion people in the world, <10 million times as much as other people is "<1% as much as everyone else in the whole world put together.")
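
Spelling out the arithmetic (a sketch, treating each other person as valued equally at weight $w$, with world population $N > 10^9$):

$$\frac{w_{\text{self}}}{N \cdot w} < \frac{10^7 \, w}{N \cdot w} = \frac{10^7}{N} < \frac{10^7}{10^9} = 1\%$$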

Comment by WilliamKiely on Long-term Donation Bunching? · 2019-10-01T18:48:53.708Z · LW · GW

I agree, and this is why the large benefit of getting one's donations matched (compared to the tax benefits of bunching) provides another, stronger reason, in addition to the value-drift reason, for people like the GWWC donor in your original post to donate this year (on Giving Tuesday) rather than bunch by taking the standard deduction this year and giving in 2020 or later. (This is the implication I had in mind when I wrote my first comment; sorry for not spelling it out then.)

I myself am in this situation. As such:

  • If it turns out that Facebook doesn't offer an exploitable donation match this year, then I plan to not donate and take the standard deduction instead.
  • In the hypothetical world where free matching money was guaranteed to always be available every year, I would also plan to not donate this year and would take the standard deduction instead.
  • However, if Facebook does offer an exploitable match this Giving Tuesday and it seems significantly less likely that I could get matched again in 2020 (which we both agree seems most likely to be the case), then I will donate this Giving Tuesday to take advantage of the free money while it lasts.

Comment by WilliamKiely on Long-term Donation Bunching? · 2019-09-28T22:06:19.246Z · LW · GW

It's worth noting that the possible tax benefits are small compared to the benefit of getting one's donations matched: https://forum.effectivealtruism.org/posts/9ZRenh6bERDkoCfdX/eas-should-invest-all-year-then-give-only-on-giving-tuesday

Comment by WilliamKiely on Long-term Donation Bunching? · 2019-09-28T21:54:24.320Z · LW · GW

Why not this instead: the $10k/year donor writes the $10k check to a friend who is already planning to itemize that year, and that friend then donates an amount equal to $10k plus the additional tax benefit they receive, such that the friend's after-tax income is unchanged.
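
A quick sketch of the arithmetic this implies (assuming, for illustration, that the gift itself has no tax consequences and that the friend itemizes at a 35% marginal rate; both are assumptions, not claims about anyone's actual tax situation):

```python
def friend_donation(gift=10_000, marginal_rate=0.35):
    # The friend donates D = gift + marginal_rate * D (the gift plus the tax
    # saved by deducting D), which solves to D = gift / (1 - marginal_rate).
    return gift / (1 - marginal_rate)

print(round(friend_donation()))  # 15385: the $10k becomes ~$15.4k donated
```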

Comment by WilliamKiely on Progress and Prizes in AI Alignment · 2017-01-08T18:18:38.911Z · LW · GW

How about having a prize for coming up with very clear guidelines for an AI safety XPrize?

Comment by WilliamKiely on Meetup : Less Wrong NH Meet-up · 2015-08-23T01:40:48.281Z · LW · GW

I'm out of state until August 26th, but I'd like to attend the next one!

Comment by WilliamKiely on Meetup : Less Wrong NH Inaugural Meet-up · 2015-07-18T21:36:01.212Z · LW · GW

Mollie, I will be returning to New Hampshire Monday and would love to attend a LessWrong Meetup in Manchester. Where can I find out if and when a second Meetup will be occurring? Thanks.

Comment by WilliamKiely on 16 types of useful predictions · 2015-05-08T06:47:26.379Z · LW · GW

Certainly, for me, there has always been a gap between how useful I think predictions are, in theory, and how often I make them.

Same here.

There's probably some additional value you can extract by writing down quantitative confidence levels, and by devising objective metrics that are impossible to game, rather than just relying on your subjective impressions.

I agree.

But in most cases I don't think that additional value is worth the cost you incur from turning predictions into an onerous task.

I disagree, in that (1) I think much of the value of predictions would come from the ability to examine and analyze my past prediction accuracy, and (2) I don't think the task of recording predictions would necessarily be very onerous (especially if there is some recurring prediction for which you don't have to write a new description every time you make it).

I really like PredictionBook (which I just checked out for the first time before Googling and finding this post), but it doesn't yet offer sufficient analysis options to make me want to really begin using it.

But this could change!

I would predict (75%) that I would begin using it on a daily basis (and would continue indefinitely upon realizing that I was indeed getting enough value out of it to justify the time it takes to record my predictions on the site) if it offered not just the single Accuracy vs. 50-100% Confidence plot and graph, but the following features:

  • Ability to see confidence and accuracy plotted versus time. (Useful, e.g. to see weekly progress on meeting some daily self-imposed deadline. Perhaps you used to meet it 60% of days on average, but now you meet it 80% of days on average. You could track your progress while seeing if you accurately predict progress as well, or if your predicted values follow the improvement.)

  • Ability to see 0-100% confidence on the statistics plot, instead of just 50-100%. (Maybe it already includes 0-50% and just counts the negation of each prediction? If so, this is still a problem, since I may have different biases for 10% predictions than for 90% predictions.)

  • Ability to set different prediction types and analyze the data separately. (Useful for learning how accurate one's predictions are in different domains.)

  • Ability to download all of one's past prediction data. (Useful if there is some special analysis that one wants to perform.)

  • A public/private prediction toggle button. (Useful because there may be times when it's okay for someone to hide a prediction they were embarrassingly wrong about, or when someone wants to publicize a previously-private prediction. Forcing users to decide at prediction time whether a prediction will forever be displayed publicly or remain private forever doesn't seem very user-friendly.)

  • Bonus: An app allowing easy data input when not at your computer. (Would make it even more convenient to record predictions.)

Some of these features could be achieved by creating multiple accounts, and I could accomplish all of this in Excel, but either approach would be too tedious to be worth it. The value is in having the graphs and analysis automatically generated and presented to you, with only a small amount of effort needed to input the predictions in the first place.
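
As an illustration, the kind of per-domain calibration analysis I have in mind would only be a few lines once the raw data can be exported (a hypothetical sketch; as far as I know PredictionBook offers no such export, and the CSV columns here are made up):

```python
import csv
from collections import defaultdict

def calibration_by_tag(path):
    """Compare stated confidence with observed accuracy, per domain tag."""
    buckets = defaultdict(list)  # (tag, confidence bucket) -> outcomes
    with open(path, newline="") as f:
        # Hypothetical columns: tag, confidence, outcome
        for row in csv.DictReader(f):
            conf = float(row["confidence"])      # e.g. 0.8
            correct = row["outcome"] == "right"  # resolved correctly?
            buckets[(row["tag"], round(conf, 1))].append(correct)
    for (tag, bucket), outcomes in sorted(buckets.items()):
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"{tag}: stated ~{bucket:.0%}, actual {hit_rate:.0%} (n={len(outcomes)})")
```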

I don't think any of these additional features would be very difficult to implement. However, I'm not a programmer, so for me to dive into the PredictionBook GitHub and try to figure out how to make these changes would probably be quite time-consuming and not worth it.

Maybe there is a programmer out there who agrees that these features would be useful and would like to add some or all of them? Does anyone know the people who did most of the work programming the current website?

Comment by WilliamKiely on Nick Bostrom's TED talk on Superintelligence is now online · 2015-05-03T17:48:06.434Z · LW · GW

I agree that there are several reasons why solving the value alignment problem is important.

Note that when I said that Bostrom should "modify" his reply I didn't mean that he should make a different point instead of the point he made, but rather meant that he should make another point in addition to the point he already made. As I said:

While what [Bostrom] says is correct, I think that there is a more important point he should also be making when replying to this claim.

Comment by WilliamKiely on Nick Bostrom's TED talk on Superintelligence is now online · 2015-04-30T02:48:24.151Z · LW · GW

This is my first comment on LessWrong.

I just wrote a post replying to part of Bostrom's talk, but apparently I need 20 Karma points to post it, so... let it be a long comment instead:

Bostrom should modify his standard reply to the common "We'd just shut off / contain the AI" claim

In Superintelligence author Prof. Nick Bostrom's most recent TED Talk, What happens when our computers get smarter than we are?, he spends over two minutes replying to the common claim that we could just shut off an AI, or preemptively contain it in a box, to prevent it from doing bad things we don't like, and that there's therefore no need to be too concerned about the possible future development of AI with misconceived or poorly specified goals:

Now you might say, if a computer starts sticking electrodes into people's faces, we'd just shut it off. A, this is not necessarily so easy to do if we've grown dependent on the system -- like, where is the off switch to the Internet? B, why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.

And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn't find a bug? Given that merely human hackers find bugs all the time, I'd say, probably not very confident. So we disconnect the ethernet cable to create an air gap, but again, like merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I'm sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.

More creative scenarios are also possible, like if you're the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code -- Bam! -- the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out.

If I recall correctly, Bostrom has replied to this claim in this manner in several of the talks he has given. While what he says is correct, I think that there is a more important point he should also be making when replying to this claim.

The point is that even if containing an AI in a box so that it could not escape and cause damage was somehow feasible, it would still be incredibly important for us to determine how to create AI that shares our interests and values (friendly AI). And we would still have great reason to be concerned about the creation of unfriendly AI. This is because other people, such as terrorists, could still create an unfriendly AI and intentionally release it into the world to wreak havoc and potentially cause an existential catastrophe.

The idea that we should not be too worried about figuring out how to make AI friendly because we could always contain the AI in a box until we knew it was safe to release is confused, not primarily because we couldn't actually contain it in the box, but because the primary reason for wanting to quickly figure out how to make a friendly AI is so that we can make one before anyone else makes an unfriendly AI.

In his TED Talk, Bostrom continues:

I believe that the answer here is to figure out how to create superintelligent A.I. such that even if -- when -- it escapes, it is still safe because it is fundamentally on our side because it shares our values. I see no way around this difficult problem.

Bostrom could have strengthened his argument for the position that there is no way around this difficult problem by stating my point above.

That is, he could have pointed out that even if we somehow developed a reliable way to keep a superintelligent genie locked up in its bottle forever, this still would not allow us to avoid having to solve the difficult problem of creating friendly AI with human values, since there would still be a high risk that other people in the world with not-so-good intentions would eventually develop an unfriendly AI and intentionally release it upon the world, or simply not exercise the caution necessary to keep it contained.

Once the technology to make superintelligent AI is developed, good people will be pressured to create friendly AI and let it take control of the future of the world ASAP. The longer they wait, the greater the risk that not-so-good people will develop AI that isn't specifically designed to have human values. This is why solving the value alignment problem soon is so important.