Comments

Comment by williamkiely on Long-term Donation Bunching? · 2019-10-01T18:48:53.708Z · score: 1 (1 votes) · LW · GW

I agree, and that is why the large benefit of getting one's donations matched, compared to the tax benefits of bunching, provides another (stronger) reason, in addition to the value-drift reason, for people like the GWWC donor in your original post to donate this year (on Giving Tuesday) rather than bunch by taking the standard deduction this year and giving in 2020 (or later). (This is the implication I had in mind when I wrote my first comment; sorry for not spelling it out then.)

I myself am in this situation. As such:

  • If it turns out that Facebook doesn't offer an exploitable donation match this year, then I plan to not donate and take the standard deduction instead.
  • In the hypothetical world where free matching money were guaranteed to be available every year, I would also plan not to donate this year and would take the standard deduction instead.
  • However, if Facebook does offer an exploitable match this Giving Tuesday (which seems most likely) and it seems significantly less likely that I could get matched again in 2020 (as we both agree appears to be the case), then I will donate this Giving Tuesday to take advantage of the free money while it lasts.
Comment by williamkiely on Long-term Donation Bunching? · 2019-09-28T22:06:19.246Z · score: 1 (1 votes) · LW · GW

It's worth noting that the possible tax benefits are small compared to the benefit of getting one's donations matched: https://forum.effectivealtruism.org/posts/9ZRenh6bERDkoCfdX/eas-should-invest-all-year-then-give-only-on-giving-tuesday

Comment by williamkiely on Long-term Donation Bunching? · 2019-09-28T21:54:24.320Z · score: 2 (2 votes) · LW · GW

Why not this instead: the $10k/year donor writes the $10k check to a friend who is already planning to itemize that year, and that friend then donates an amount equal to $10k plus the additional tax benefit they receive, so that the friend's after-tax income is unchanged.
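
As a rough sketch of the arithmetic (the numbers are purely illustrative; this assumes the friend itemizes, can deduct the full donation at their marginal rate, and owes no tax on the $10k check itself, and it ignores gift-tax and other complications):

```python
# Hedged sketch of the arithmetic above, under the stated assumptions.
gift = 10_000          # check written by the $10k/year donor to the friend
marginal_rate = 0.24   # friend's assumed marginal tax rate (hypothetical)

# The friend donates D such that their after-tax income is unchanged:
#   gift - D + marginal_rate * D = 0  =>  D = gift / (1 - marginal_rate)
donation = gift / (1 - marginal_rate)
print(f"Friend donates ${donation:,.2f}")  # about $13,157.89 instead of $10,000
```

Under those assumptions the charity would receive roughly $3,158 more than if the donor gave $10k directly while taking the standard deduction.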

Comment by williamkiely on Progress and Prizes in AI Alignment · 2017-01-08T18:18:38.911Z · score: 0 (0 votes) · LW · GW

How about a prize for coming up with very clear guidelines for an AI safety XPrize?

Comment by williamkiely on Meetup : Less Wrong NH Meet-up · 2015-08-23T01:40:48.281Z · score: 0 (0 votes) · LW · GW

I'm out of state until August 26th, but I'd like to attend the next one!

Comment by williamkiely on Meetup : Less Wrong NH Inaugural Meet-up · 2015-07-18T21:36:01.212Z · score: 2 (2 votes) · LW · GW

Mollie, I will be returning to New Hampshire Monday and would love to attend a LessWrong Meetup in Manchester. Where can I find out if and when a second Meetup will be occurring? Thanks.

Comment by williamkiely on 16 types of useful predictions · 2015-05-08T06:47:26.379Z · score: 0 (0 votes) · LW · GW

"Certainly, for me, there has always been a gap between how useful I think predictions are, in theory, and how often I make them."

Same here.

"There's probably some additional value you can extract by writing down quantitative confidence levels, and by devising objective metrics that are impossible to game, rather than just relying on your subjective impressions."

I agree.

"But in most cases I don't think that additional value is worth the cost you incur from turning predictions into an onerous task."

I disagree, for two reasons: (1) I think much of the value of predictions would come from being able to examine and analyze my past prediction accuracy, and (2) I don't think recording the predictions would necessarily be very onerous (especially if there is some recurring prediction for which you don't have to write a new description every time you make it).

I really like PredictionBook (which I just checked out for the first time before Googling and finding this post), but it doesn't yet offer enough analysis options to make me want to really begin using it.

But this could change!

I would predict (75%) that I would begin using it on a daily basis (and would continue to do so indefinitely upon finding that I was indeed getting enough value out of it to justify the time it takes to record my predictions on the site) if it offered not just the single accuracy-versus-confidence plot covering 50-100% confidence, but also the following features:

  • Ability to see confidence and accuracy plotted versus time. (Useful, e.g., for seeing weekly progress on meeting some daily self-imposed deadline. Perhaps you used to meet it 60% of days on average, but now you meet it 80% of days on average. You could track your progress while also seeing whether you accurately predict it, i.e., whether your predicted values follow the improvement.)

  • Ability to see 0-100% confidence on the statistics plot, instead of just 50-100%. (Maybe it already covers 0-50% by counting the negation of each prediction? Even if so, this is still a problem, since I may have different biases for 10% predictions than for 90% predictions.)

  • Ability to set different prediction types and analyze the data separately. (Useful for learning how accurate one's predictions are in different domains.)

  • Ability to download all of one's past prediction data. (Useful if there is some special analysis that one wants to perform.)

  • A public/private prediction toggle button. (Useful because someone may sometimes want to hide a prediction they were embarrassingly wrong about, or publicize a previously private prediction. Forcing users to decide at the time of prediction whether it will forever be displayed publicly on their account or remain private forever doesn't seem very user-friendly.)

  • Bonus: An app allowing easy data input when not at your computer. (Would make it even more convenient to record predictions.)

Some of these features can be achieved by creating multiple accounts. And I could accomplish all of this in Excel. But using multiple accounts or Excel would make it too tedious to be worth it. The value is in having the graphs and analysis automatically generated and presented to you with only a small amount of effort needed to input the predictions in the first place.
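
To illustrate the kind of automated analysis I have in mind, here is a minimal sketch; the CSV layout and column names are assumptions of mine for illustration, not an export format the site actually offers:

```python
# Calibration-by-bucket sketch over a hypothetical exported CSV with columns:
# date, confidence (a percentage from 0-100), outcome (1 if the prediction
# came true, 0 if it did not). The file format is assumed, not real.
import csv
from collections import defaultdict

buckets = defaultdict(lambda: [0, 0])  # confidence decile -> [came_true, total]

with open("predictions.csv") as f:
    for row in csv.DictReader(f):
        decile = min(int(float(row["confidence"]) // 10) * 10, 90)  # fold 100% into the top bucket
        buckets[decile][0] += int(row["outcome"])
        buckets[decile][1] += 1

# A full 0-100% calibration table, not just 50-100%.
for decile in sorted(buckets):
    came_true, total = buckets[decile]
    print(f"{decile:2d}-{decile + 9}% confidence: "
          f"{came_true / total:.0%} came true ({total} predictions)")
```

The same exported data would also support the confidence-versus-time and per-domain breakdowns from the list above.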

I don't think any of these additional features would be very difficult to implement. However, I'm not a programmer, so for me to dive into the PredictionBook GitHub repository and try to figure out how to make these changes would probably be quite time-consuming and not worth it.

Maybe there is a programmer out there who agrees that these features would be useful and would like to add some or all of them? Does anyone know the people who did most of the work programming the current website?

Comment by williamkiely on Nick Bostrom's TED talk on Superintelligence is now online · 2015-05-03T17:48:06.434Z · score: 2 (2 votes) · LW · GW

I agree that there are several reasons why solving the value alignment problem is important.

Note that when I said that Bostrom should "modify" his reply, I didn't mean that he should replace the point he made with a different one, but rather that he should make another point in addition to the one he already made. As I said:

"While what [Bostrom] says is correct, I think that there is a more important point he should also be making when replying to this claim."

Comment by williamkiely on Nick Bostrom's TED talk on Superintelligence is now online · 2015-04-30T02:48:24.151Z · score: 6 (8 votes) · LW · GW

This is my first comment on LessWrong.

I just wrote a post replying to part of Bostrom's talk, but apparently I need 20 Karma points to post it, so... let it be a long comment instead:

Bostrom should modify his standard reply to the common "We'd just shut off / contain the AI" claim

In his most recent TED Talk, "What happens when our computers get smarter than we are?", Superintelligence author Prof. Nick Bostrom spends over two minutes replying to the common claim that we could just shut off an AI, or preemptively contain it in a box, to prevent it from doing bad things that we don't like, and that there is therefore no need to be too concerned about the possible future development of AI with misconceived or poorly specified goals:

Now you might say, if a computer starts sticking electrodes into people's faces, we'd just shut it off. A, this is not necessarily so easy to do if we've grown dependent on the system -- like, where is the off switch to the Internet? B, why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.

And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn't find a bug? Given that merely human hackers find bugs all the time, I'd say, probably not very confident. So we disconnect the ethernet cable to create an air gap, but again, like merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I'm sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.

More creative scenarios are also possible, like if you're the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code -- Bam! -- the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out.

If I recall correctly, Bostrom has replied to this claim in this manner in several of the talks he has given. While what he says is correct, I think that there is a more important point he should also be making when replying to this claim.

The point is that even if containing an AI in a box so that it could not escape and cause damage were somehow feasible, it would still be incredibly important for us to determine how to create AI that shares our interests and values (friendly AI). And we would still have great reason to be concerned about the creation of unfriendly AI, because other people, such as terrorists, could still create an unfriendly AI and intentionally release it into the world to wreak havoc and potentially cause an existential catastrophe.

The idea that we should not be too worried about figuring out how to make AI friendly, because we could always contain the AI in a box until we knew it was safe to release, is confused. It is confused not primarily because we couldn't actually keep the AI contained, but because the main reason to figure out how to make a friendly AI quickly is so that we can make a friendly AI before anyone else makes an unfriendly one.

In his TED Talk, Bostrom continues:

I believe that the answer here is to figure out how to create superintelligent A.I. such that even if -- when -- it escapes, it is still safe because it is fundamentally on our side because it shares our values. I see no way around this difficult problem.

Bostrom could have strengthened his argument for the position that there is no way around this difficult problem by stating my point above.

That is, he could have pointed out that even if we somehow developed a reliable way to keep a superintelligent genie locked up in its bottle forever, this still would not let us avoid solving the difficult problem of creating friendly AI with human values. There would still be a high risk that people in the world with not-so-good intentions would eventually develop an unfriendly AI and intentionally release it upon the world, or would simply not exercise the caution necessary to keep it contained.

Once the technology to make superintelligent AI is developed, good people will be pressured to create friendly AI and let it take control of the future of the world ASAP. The longer they wait, the greater the risk that not-so-good people will develop AI that isn't specifically designed to have human values. This is why solving the value alignment problem soon is so important.