LessWrong Israel, Jan.15

2019-01-08T19:47:42.037Z · score: 6 (1 votes)

LessWrong Tel Aviv: Civilizational Collapse

2018-12-10T07:26:46.272Z · score: 6 (1 votes)
Comment by joshuafox on What's up with Arbital? · 2017-03-29T17:35:11.754Z · score: 4 (2 votes) · LW · GW

Eliezer is still writing AI Alignment content on it, ... MIRI ... adopt Arbital ...

How does Eliezer's work on Arbital relate to MIRI? Little of what he is doing at MIRI is publicly visible. Is he focusing on Arbital? What is the strategic purpose?

Comment by joshuafox on Meetup Discussion · 2017-01-19T09:52:55.171Z · score: 1 (1 votes) · LW · GW

Pre-existing friends; postings on Facebook (even though FB does not distribute events to group members' timelines once a group has more than 250 members); and occasionally lesswrong.com (not event postings, but people who are actively interested in LW seeking out a Tel Aviv group).

Comment by joshuafox on Meetup Discussion · 2017-01-11T07:42:15.516Z · score: 1 (1 votes) · LW · GW

In Tel Aviv, we have three types of meetings, all on Tuesdays. Monthly we have a full meeting, usually a lecture or sometimes Rump Sessions (informal lightning talks). Typical attendance is 12.

Also monthly, in the fortnights between those full meetings, we hold game nights.

We are graciously hosted by Meni Rosenfeld's Cluster startup hub. (For a few years we were hosted at Google.)

On other Tuesdays a few LessWrongers get together at a pub.

Comment by joshuafox on Progress and Prizes in AI Alignment · 2017-01-04T09:06:54.010Z · score: 0 (0 votes) · LW · GW

There certainly should be more orgs with different approaches. But possibly CHCAI plays a role as MIRI's representative in the mainstream academic world, so from the perspective of goals it is OK that the two are quite close.

Meetup : NLP for large scale sentiment detection

2016-09-06T07:13:40.453Z · score: 0 (1 votes)

Meetup : Quantum Homeschooling

2016-02-09T09:30:00.824Z · score: 1 (2 votes)

MIRIx Israel, Meeting summary

2015-12-31T07:04:35.278Z · score: 6 (7 votes)

Meetup : Biases and making better decisions

2015-08-26T14:58:26.012Z · score: 1 (2 votes)
Comment by joshuafox on Gatekeeper variation · 2015-08-10T13:58:48.947Z · score: 0 (0 votes) · LW · GW

You're quite right -- these are among the standard objections to boxing, as mentioned in the post. However, AI boxing may have value as a stopgap at an early stage, so I'm wondering about the idea's value in that context.

Comment by joshuafox on Gatekeeper variation · 2015-08-08T18:21:09.800Z · score: 0 (0 votes) · LW · GW

Sure, but to "independently verify" the output of an entity smarter than you is generally impossible. The scheme proposed here makes verification possible, while also limiting the boxed AI's latitude in choosing its answers.

Comment by joshuafox on Gatekeeper variation · 2015-08-07T15:00:18.558Z · score: 0 (0 votes) · LW · GW

Thanks. Those points are correct. Is there any particular weakness or strength to this UP-idea in contrast to Oracle, tool-AI, or Gatekeeper ideas?

Gatekeeper variation

2015-08-07T13:44:10.231Z · score: 4 (5 votes)

Meetup : Logical Counterfactuals, Tel Aviv

2015-07-27T04:39:16.628Z · score: 1 (2 votes)
Comment by joshuafox on Open Thread, Jun. 29 - Jul. 5, 2015 · 2015-06-30T04:49:03.148Z · score: 4 (4 votes) · LW · GW

Thank you, Kaj. Those references are what I was looking for.

It looks like there might be a somewhat new idea here. Previous suggestions, as you mention, restrict output to a single bit, or require review by human experts. Using multiple AGI oracles to check each other is a good one, though I'd worry about acausal coordination between the AGIs, and I don't see that the safety is provable beyond checking that the answers match.

This new variant gives the benefit of provable restrictions and the relative ease of implementing a narrow-AI proof system to check it. It's certainly not the full solution to the FAI problem, but it's a good addition to our lineup of partial or short-term solutions in the area of AI Boxing and Oracle AI.

I'll get this feedback to the originator of this idea and see what can be made of it.

Comment by joshuafox on Open Thread, Jun. 29 - Jul. 5, 2015 · 2015-06-29T05:16:03.898Z · score: 2 (2 votes) · LW · GW

Could someone point me to any existing articles on this variant of AI-Boxing and Oracle AGIs:

The boxed AGI's gatekeeper is a simpler system which runs formal proofs to verify that the AGI's output satisfies a simple, formally definable constraint. The constraint is not "safety" in general, but rather something narrow enough that we can be mathematically sure the output is safe. (This does limit the potential benefits from the AGI.)

The question of what the constraint should be remains open, and of course the fact that the AGI is physically embodied puts it in causal contact with the rest of the universe. But as a partial or short-term solution, has anyone written about it? The only prior work I can think of (though I can't find the specific article) is Goertzel's description of an architecture where the guardian component is separate from the main AGI.
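
To make the idea concrete, here is a minimal, purely illustrative sketch of the arrangement I have in mind (my own toy, not drawn from any of the articles mentioned): a trivially checkable regex stands in for the formal proof system, and all the names (CONSTRAINT, verifier, gatekeeper) are invented for this example.

```python
import re
from typing import Callable, Optional

# Toy stand-in for the formal constraint: the released answer must be a short
# string drawn from a whitelisted character set. A real system would replace
# this with a machine-checkable proof that a formally specified predicate
# holds of the output.
CONSTRAINT = re.compile(r"[0-9A-Za-z ,.\-]{1,64}")

def verifier(output: str) -> bool:
    """Narrow, fully auditable check that the output satisfies the constraint."""
    return CONSTRAINT.fullmatch(output) is not None

def gatekeeper(agi_output: str, verify: Callable[[str], bool]) -> Optional[str]:
    """Release the boxed AGI's answer only if the verifier accepts it;
    otherwise nothing leaves the box."""
    return agi_output if verify(agi_output) else None

# A compliant answer is released; anything outside the constraint is dropped.
print(gatekeeper("10 paperclips, 4mm wire, done by Tuesday.", verifier))
print(gatekeeper("<script>exfiltrate()</script>", verifier))  # -> None
```

The point of the sketch is only the division of labor: the gatekeeper and verifier are simple enough to audit completely, so the guarantee does not depend on understanding the AGI itself.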

Comment by joshuafox on Meetup : Tel Aviv Meetup: Assorted LW mini-talks · 2015-06-22T12:31:41.229Z · score: 0 (0 votes) · LW · GW

Duplicate of Anatoly's post.

Meetup : Tel Aviv Meetup: Assorted LW mini-talks

2015-06-22T12:04:16.447Z · score: 1 (2 votes)

Meetup : Tel Aviv Meetup: Social & Board Games

2015-06-06T18:32:25.381Z · score: 1 (2 votes)
Comment by joshuafox on Debunking Fallacies in the Theory of AI Motivation · 2015-05-05T05:17:51.041Z · score: 10 (10 votes) · LW · GW

Eliezer Yudkowsky and Bill Hibbard. Here is Yudkowsky stating the theme of their discussion ... 2001

Around 15 years ago, Bill Hibbard proposed hedonic utility functions for an ASI. However, he has since stated in other publications that he has changed his mind -- he should get credit for that. Hibbard 2001 should not be cited for hedonic utility functions unless one mentions in the same sentence that this is an outdated and disclaimed position.

Comment by joshuafox on What you know that ain't so · 2015-03-23T19:46:08.473Z · score: 3 (3 votes) · LW · GW

It's not about the Six Day War. It talks about the Yom Kippur War (1973).

Comment by joshuafox on HPMOR Wrap Parties: Resources, Information and Discussion · 2015-03-09T16:02:06.452Z · score: 2 (2 votes) · LW · GW

That was the tentative location, but it looks like the party's location is the Google offices at 98 Yigal Alon Street in Tel Aviv. (See FB group.)

Meetup : Israel: Harry Potter and the Methods of Rationality Pi Day Wrap Party

2015-02-24T07:07:53.483Z · score: 1 (2 votes)
Comment by joshuafox on Announcing LessWrong Digest · 2015-02-23T11:43:35.777Z · score: 8 (8 votes) · LW · GW

A fine idea. I suggest opening each summary with the thesis of the cited post.

Comment by joshuafox on [LINK] The Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics · 2015-02-20T08:02:27.656Z · score: 3 (3 votes) · LW · GW

Here are the best items I have found in my search for anti-MWI reading. Some present anti-MWI arguments but in the end are pro-MWI.

  • David Wallace, The Emergent Multiverse (Pro; Anti-Everett arguments in the interludes)
  • Steven Weinberg: Lectures on Quantum Mechanics, sec. 3.7 (Seemingly pro-Everett, but in the end saying all current theories are flawed)
  • Adrian Kent, "Against Many-Worlds Interpretations" (Anti)
  • Stanford Encyclopedia of Philosophy, "Many-Worlds Interpretation of Quantum Mechanics" (Mostly Pro; Anti in sec. 6)
Comment by joshuafox on [LINK] The Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics · 2015-02-20T07:03:23.688Z · score: 1 (1 votes) · LW · GW

I'd be even more glad to read an article that specifically argues against MWI/Everett (or whatever you call it).

David Wallace, The Emergent Multiverse, Interludes I and II, presents both sides but in the end is pro-MWI.

A coherent, intelligent, reasonable article by an advocate of the other side would make things clearer.

Comment by joshuafox on [LINK] The Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics · 2015-02-20T07:01:04.515Z · score: 2 (2 votes) · LW · GW

Yes, those are all possibilities for what I am looking for. I'll let the experts decide: I'll be glad to read a coherent defense of Copenhagen, objective collapse, or whatever else it is that Hugh Everett/David Deutsch/Max Tegmark/Sean Carroll et al. are up against.

Comment by joshuafox on [LINK] The Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics · 2015-02-19T19:45:04.373Z · score: 11 (11 votes) · LW · GW

I'd like to see the best anti-MWI/Everett article out there.

Comment by joshuafox on Bragging Thread February 2015 · 2015-02-13T08:59:51.935Z · score: 2 (2 votes) · LW · GW

Thank you. I have removed the link from the Wiki. The item is available in archive.org. It's a short description of acausal trade with a focus on simulation as the way that one agent predicts the other's behavior.

Comment by joshuafox on The morality of disclosing salary requirements · 2015-02-12T18:05:42.726Z · score: 0 (0 votes) · LW · GW

If asked your salary requirements, put off your answer to as late a phase as possible.

I precommit to always saying "let's see if we're a fit first; I'm sure we'll agree on a reasonable salary" at any phase of negotiations before they know they want me. I commit to doing this even if I think it will lose me the job. I don't think it ever has.

When asked my salary requirements by third-party recruiters, I always say "you're the expert; you know the market better than I do. What do you think I can make at a stretch?"

One way to get yourself some flexibility is to remember that compensation includes not only salary but also various benefits. So if you do crack under pressure and disclose your previous pay or your requirements, you can use that fact later to give your answer the interpretation you want.

Comment by joshuafox on The morality of disclosing salary requirements · 2015-02-12T17:56:02.467Z · score: 0 (0 votes) · LW · GW

If asked your previous salary, say "my contract with my former employer forbids me to disclose that [which is probably true], and I take my responsibilities to my employers very seriously."

Comment by joshuafox on Bragging Thread February 2015 · 2015-02-11T16:07:30.051Z · score: 5 (5 votes) · LW · GW

I rewrote the LW Wiki article on Acausal Trade. I had originally written this article, but it was too heavily based on multiverse concepts, which are not essential to acausal trade. Also, it made too much use of quasimathematical variables.

I rewrote it in the style I'd use to explain the concept face-to-face to a LessWronger. I will appreciate edits and improvements on the Wiki. Actually, it would be good to see a number of articles, including one in academic style.

This is, as far as I know, the only article explaining the concept. Considering that this term is in common use in LW circles and was even used in Bostrom's recent academic article, I am surprised that no one else has written one.

Comment by joshuafox on Stupid Questions February 2015 · 2015-02-05T05:51:38.389Z · score: 3 (3 votes) · LW · GW

See a recent MIRI paper.

A narrow AI, "tasked with designing an oscillating circuit, re-purposed the circuit tracks on its motherboard to use as a radio which amplified oscillating signals from nearby computers."

Comment by joshuafox on Stupid Questions February 2015 · 2015-02-05T05:43:26.625Z · score: 0 (0 votes) · LW · GW

Thank you. That matches up with what I was thinking; it's good to get confirmation. At first glance, it looks like a discount factor would settle the agent's problem, but that's only if we're working with probabilistic beliefs and expected value rather than deterministic proofs.

Could you help me level up my understanding?

  1. It looks like the discussion of the Procrastination Paradox in the Vingean Reflection article depends on a reflectivity property in the agent. Does that somehow bypass the Löbstacle? Or if not, how is it related to Löb's theorem?

  2. Is there something more to the Procrastination Paradox than just "I can prove that I'll do it tomorrow, so I won't do it today?" By itself, that doesn't look like an earth-shaking result.

Comment by joshuafox on Stupid Questions February 2015 · 2015-02-02T18:59:01.822Z · score: 2 (2 votes) · LW · GW

What is the Procrastination Paradox? I read the recent "Vingean Reflection" paper and other materials I found, but still don't get it.

Comment by joshuafox on Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial · 2015-01-16T08:17:47.221Z · score: 12 (12 votes) · LW · GW

I think this is almost as much money as has gone into AI existential-risk research across all organizations ever.

Comment by joshuafox on Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial · 2015-01-16T08:17:01.087Z · score: 8 (8 votes) · LW · GW

Do we know why he chose to donate in this way: donating to FLI (rather than FHI, MIRI, CSER, some university, or a new organization), and setting up a grant fund (rather than directly to researchers or other grantees)?

Comment by joshuafox on Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial · 2015-01-16T08:14:23.947Z · score: 16 (16 votes) · LW · GW

Musk's position on AI risk is useful because he is contributing his social status and money to the cause.

However, other than being smart, he has no special qualifications in the subject -- he got his ideas from other people.

So, his opinion should not update our beliefs very much.

Comment by joshuafox on The Importance of Sidekicks · 2015-01-08T17:09:35.232Z · score: 5 (5 votes) · LW · GW

I'd rather be a hero than a sidekick. But my small contribution to mitigating AI risk has generally been in helping MIRI in whatever way seemed most valuable, rather than inventing my independent way to global utility maximization.

So, what does that make me? A cooperative small-time hero, like one of those obscure minor superhero characters in the comics who occasionally steps up to help the famous ones?

Comment by joshuafox on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-29T15:09:21.618Z · score: 0 (0 votes) · LW · GW

I was asking on behalf of a friend who has a good essay -- I wanted to bring him into the online community by encouraging him to post it to discussion.

By the way, based on some old bug tickets, I think the answer is 2.

Comment by joshuafox on Podcast: Rationalists in Tech · 2014-12-25T16:38:51.199Z · score: 0 (0 votes) · LW · GW

This might be a good idea but I wouldn't listen to it.

OK, thanks for the feedback. My hypothesis is that people want to know about other LessWrongers in their profession, whether for immediate or longer-term networking. I am less sure now that there is demand for that -- perhaps existing online presences like personal websites or LinkedIn profiles are enough to meet it.

Comment by joshuafox on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-20T17:27:33.442Z · score: 0 (0 votes) · LW · GW

What is the minimum karma for posting to Discussion?

Comment by joshuafox on Podcast: Rationalists in Tech · 2014-12-16T12:55:53.592Z · score: 0 (0 votes) · LW · GW

Right, I'll need to polish up the production values. But taking these three initial interviews as an example: what do you think of the content? Does it help you in some way?

Comment by joshuafox on Podcast: Rationalists in Tech · 2014-12-16T09:07:28.740Z · score: 1 (1 votes) · LW · GW

Can this be corrected?

I'll look into that.

I would like to listen to this podcast.

You can subscribe to the RSS feed, or simply download the MP3s directly from the blog.

Comment by joshuafox on Podcast: Rationalists in Tech · 2014-12-16T09:05:42.230Z · score: 1 (1 votes) · LW · GW

I really liked the podcast that I listened to.

Thank you!

indexed on the iTunes podcast list.

I'll look into that.

Podcast: Rationalists in Tech

2014-12-14T16:14:50.613Z · score: 12 (13 votes)
Comment by joshuafox on Editing meetups · 2014-12-05T12:21:22.657Z · score: 2 (2 votes) · LW · GW

I've encountered some bugs in creating and editing meetups -- numbers 429, 432, and 479 on the issue list here. Bug 432 is that when I try to change the time, another copy of the meetup post is created, and the duplicate posts (as well as the original) can't be deleted.

Comment by joshuafox on Superintelligence 12: Malignant failure modes · 2014-12-02T06:56:24.600Z · score: 3 (5 votes) · LW · GW

Of course, there are two kinds of perversity.

Perversity is "a deliberate desire to behave in an unreasonable or unacceptable way; contrariness."

Fictional genies seek out ways to trick the requester on purpose, just to prove a point about rash wishes. The other kind of perverse agent doesn't act contrarily for the sake of being contrary: it acts exactly to achieve its stated goal and nothing else; it just tramples implicit goals as a side effect.

Comment by joshuafox on Superintelligence 12: Malignant failure modes · 2014-12-02T06:52:32.395Z · score: 5 (9 votes) · LW · GW

There is simply no way to give this a perverse instantiation

During the process of making 10 paperclips, it's necessary to "disturb" the world at least to the extent of removing the few grams of metal needed for the paperclips. So I take it you mean that the prohibition on disturbing the world takes effect after the paperclips are made.

But that's not safe. For example, it could be effective for the AI, in pursuit of its goal, to kill everyone and destroy everything not directly useful for making the paperclips, so as to avoid any possible interference.

Comment by joshuafox on Misdiagnosed Asperger's syndrome is ruining my life. · 2014-11-28T09:07:23.734Z · score: 7 (7 votes) · LW · GW

Re the second point, please avoid telling people what is in the spirit of rationality unless you identify specific inconsistencies between their behavior and their goals, or between different goals that they have simultaneously. "Rationality" does not dictate goals.

Comment by joshuafox on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-26T20:23:34.264Z · score: 3 (3 votes) · LW · GW

Anyone want to comment on a pilot episode of a podcast "Rationalists in Tech"? Please PM or email me. I'll ask for your feedback and suggestions for improvement on a 30-minute audio interview with a leading technologist from the LW community. This will allow me to plan an even better series of further interviews with senior professionals, consultants, founders, and executives in technology, mostly in software.

  • Discussion topics will include the relevance of CfAR-style techniques to the career and daily work of a tech professional; career tips aimed at LWer technologists; and the rationality-related products and services of some interviewees.

  • The goal is to show LessWrongers in the tech sector that they have a community of like-minded people. Often engineers, particularly those just starting out, have heard of the value of networking, but don't know where they can find people who they can and should connect to. Similarly, LWers who are managers or owners are always on the lookout for talent. This will highlight some examples of other LWers in the sector as an inspiration for networking.

Comment by joshuafox on Superintelligence 11: The treacherous turn · 2014-11-25T08:01:35.594Z · score: 9 (11 votes) · LW · GW

To play the treacherous turn gambit, the AI needs to get strong at faking weakness faster than it gets strong at everything else. What are the chances of that?

Comment by joshuafox on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-24T09:08:03.499Z · score: 2 (2 votes) · LW · GW

We're considering Meetup.com for the Tel Aviv LW group. (Also, the question was asked here.) It costs money, but we'd pay if it's worthwhile. I note that there are only 5 LessWrong groups at Meetup of which 2-3 are active. I'll appreciate feedback on the usefulness of Meetup.

Comment by joshuafox on Meetup : Tel Aviv Meetup: Rump Session · 2014-11-24T09:04:22.529Z · score: 0 (0 votes) · LW · GW

We discussed that in the Tel Aviv LW group. Meetup.com costs money. We'd pay if it's worthwhile--Is it? I note that there are only 5 LessWrong groups at Meetup of which 2-3 are active. I'll appreciate feedback on the usefulness of Meetup. ( I'll also put this comment in an open thread so more people see it.)

Comment by joshuafox on xkcd on the AI box experiment · 2014-11-22T21:12:49.046Z · score: 3 (3 votes) · LW · GW

And (3) explain why other potential info hazards -- not the basilisk, but very different configurations of acausal negotiation (ones that either have not yet been discovered, or were discovered but not made public) -- should not be discussed.

Comment by joshuafox on xkcd on the AI box experiment · 2014-11-22T21:09:58.776Z · score: -3 (3 votes) · LW · GW

because I don't take elaborate TDT-based reasoning too seriously, partially out of ironic detachment, but many here would say I should.

Righto, you should avoid not taking things seriously because of ironic detachment.

Comment by joshuafox on xkcd on the AI box experiment · 2014-11-22T17:29:45.521Z · score: 3 (3 votes) · LW · GW

Suppose I buy shares in a company that builds an AI, which then works for the good of the company, which rewards share-owners. This is ordinary causality: I contributed towards its building, and was rewarded later.

What makes it possible to be rewarded as a shareholder is a legal system which enforces your ownership rights: a kind of pre-commitment which is feasible even among humans who cannot show proofs about their "source code." The legal system is a mutual enforcement system which sets up a chain of causality towards your being paid back.

Suppose I contribute towards something other than its building, in the belief that an AI which will later come into being will reward me for having done this. Still doesn't seem acausal to me.

It's interesting to consider what happens when the second agent cannot precommit to repaying you. For example, if the agent does not yet exist.

Suppose I believe an AI is likely to be built that will conquer the world and transfer all wealth to its builders.

The question is: why would it do that? In the future, when this new agent comes into existence, why would it consume resources to repay its builders (assuming that it receives no benefit at that future time)? The "favor" that the builders did is past and gone; repaying them gives the agent no benefit. Since we are talking in this comment subthread about an FAI that is truly friendly to all humanity, it might distribute its efforts equally across all humanity rather than "wasting" resources on differential payback.

The answer to this question has to do with acausal trade. I wrote a LW Wiki article on the topic. It's pretty mind-bending and it took me a while to grasp, but here is a summary. If Agent P (in this case the AI) can model or simulate Agent Q (in this case humans in P's past) well enough to prove statements (probably probabilistic statements) about it, and Q can likewise model P, then P's optimal move is to do what Q wants, and Q's optimal move is to do what P wants. This holds in the limiting case of perfect knowledge and infinite computational power; in real life, clearly, it depends on a lot of assumptions about P's and Q's ability to model each other, and on the relative utility they can grant each other.
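
As a toy illustration of that summary (my own, with made-up payoff numbers, not part of the wiki article): if P and Q model each other accurately, their decisions are effectively correlated, so only the "both cooperate" and "both defect" outcomes are attainable, and both agents prefer the cooperative one.

```python
# Toy acausal-trade payoffs (illustrative numbers only).
# Q = the humans in P's past; P = the AI that later comes into existence.
payoffs = {
    # (Q's action, P's action): (utility to Q, utility to P)
    ("cooperate", "cooperate"): (4, 3),  # Q bears a cost now but is repaid; P repays but benefits from Q's earlier help
    ("defect", "defect"): (0, 0),        # no trade occurs
}

# Under mutual, accurate modeling only the diagonal outcomes are reachable.
q_gain = payoffs[("cooperate", "cooperate")][0] - payoffs[("defect", "defect")][0]
p_gain = payoffs[("cooperate", "cooperate")][1] - payoffs[("defect", "defect")][1]
print(q_gain > 0 and p_gain > 0)  # True: both agents do better under mutual cooperation
```

The real difficulty, as above, is whether the mutual-modeling assumption holds at all; the numbers only show why the trade is worth wanting.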

Comment by joshuafox on xkcd on the AI box experiment · 2014-11-22T16:42:22.495Z · score: 3 (3 votes) · LW · GW

That explanation by Eliezer cleared things up for me. He really should have explained himself earlier. I actually had some vague understanding of what Eliezer was doing with his deletion and refusal to discuss the topic, but as usual, Eliezer's explanation makes things that I thought I sort-of-knew seem obvious in retrospect.

And as Eliezer realizes, the attempt to hush things up was a mistake. Roko's post should have been taken as a teaching moment.

Meetup : Tel Aviv Meetup: Rump Session

2014-11-21T11:12:53.067Z · score: 1 (2 votes)
Comment by joshuafox on xkcd on the AI box experiment · 2014-11-21T10:38:13.434Z · score: 11 (11 votes) · LW · GW

Sometimes you have to get up and say, these are the facts, you are wrong.

Sometimes yes, and sometimes no.

damn the consequences.

Depends what the consequences are. Ignoring human status games can have some pretty bad consequences.

Comment by joshuafox on xkcd on the AI box experiment · 2014-11-21T10:34:16.397Z · score: 2 (8 votes) · LW · GW

Let's just tell the acausal trade story in terms of extreme positive utility rather than negative.

Putting it simply for the purpose of this comment: "If you do what the future AI wants now, it will reward you when it comes into being."

Makes the whole discussion much more cheerful.

Comment by joshuafox on Superintelligence 10: Instrumentally convergent goals · 2014-11-18T07:54:12.788Z · score: 0 (0 votes) · LW · GW

Another way to look at it: Subgoals may be offset by other subgoals. This includes convergent values.

Humans don't usually let any one of their conflicting values override all others. For example, accumulation of any given resource is moderated by other humans and by diminishing marginal returns on any one resource as compared to another.

On the other hand, for a superintelligence, particularly one with a simple terminal goal, these moderating factors would be less effective. For example, it might not have competitors.

Meetup : Prof. Roman Yampolskiy on approaches to AGI risk, Tel Aviv

2014-05-25T11:27:23.868Z · score: 2 (3 votes)

Finding LessWrongers on LinkedIn

2014-05-13T11:46:41.284Z · score: 9 (10 votes)

LessWrong as social catalyst

2014-04-28T14:10:31.164Z · score: 14 (17 votes)

Business Networking through LessWrong

2014-04-02T17:39:35.937Z · score: 31 (32 votes)

Snowdenizing UFAI

2013-12-05T14:42:12.946Z · score: 5 (16 votes)

Why officers vs. enlisted?

2013-10-30T20:14:52.156Z · score: 19 (21 votes)

Bets on an Extreme Future

2013-08-13T08:05:58.268Z · score: 1 (4 votes)

The Singularity Wars

2013-02-14T09:44:53.666Z · score: 52 (57 votes)

Evaluating the feasibility of SI's plan

2013-01-10T08:17:29.959Z · score: 25 (46 votes)

If we live in a simulation, what does that imply?

2012-10-25T21:27:44.749Z · score: 18 (21 votes)

Live web-forum Q&A on Friendly AI, Thu. May 24 (Hebrew)

2012-05-20T15:54:37.449Z · score: 2 (5 votes)

Overview article on FAI in a popular science magazine (Hebrew)

2012-05-15T11:09:51.759Z · score: 16 (21 votes)

Characterizing the superintelligence which we are concerned about

2012-04-01T18:40:21.179Z · score: 9 (14 votes)