Posts

Comments

Comment by blf on [Expired] 20,000 Free $50 Charity Gift Cards · 2020-12-11T21:38:18.045Z · LW · GW

The first one you mention appears in the list as one word, GiveDirectly.  I initially had trouble finding it.

Comment by blf on Toward A Culture of Persuasion · 2020-12-09T08:47:56.255Z · LW · GW

It seems to me the word "dialog" may be appropriate: to me it has the connotation of reaching out to people you may not normally interact with.

Comment by blf on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-29T01:09:57.179Z · LW · GW

Thank you.

Comment by blf on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-18T21:36:32.133Z · LW · GW

Does there exist a paper version of Yudkowsky's book "Rationality: From AI to Zombies"? I only found a Kindle version but I would like to give it as a present to someone who is more likely to read a dead-tree version.

Comment by blf on Chatbots or set answers, not WBEs · 2015-09-08T21:14:56.530Z · LW · GW

I am not sure of myself here, but I would expect a malicious AI to do the following. The first few (or many) times you run it, it tells you the optimal stock. Then, once in a while, it gives a non-optimal stock. You would be unable to determine whether the AI was simply not turned on those times, or was not quite intelligent or resourceful enough to find the right stock. And since you would want the profits to continue, you would keep running it.

By allowing itself to give you non-optimal stocks (while still making you rich), the AI can transmit information, such as its location, to anyone watching your pattern of stock purchases. And people would watch it, since you would be consistently buying the most profitable stock, with few exceptions. Once the location of the AI is known, you are in trouble, and someone less scrupulous than you may get their hands on the AI. Humans are dead in a fortnight.

Admittedly, this is a somewhat far-fetched scenario, but I believe it indicates that you should not ask the AI more than one (or a few) questions before permanently destroying it. Even deleting all of its data and running the code again from scratch may be dangerous if the AI is able to determine how many times it has been launched in the past.