Comment by furcas on What Are The Chances of Actually Achieving FAI? · 2017-07-26T04:24:13.336Z · score: 2 (2 votes) · LW · GW

I'd guess 1%. The small minority of AI researchers working on FAI will have to find the right solutions to a set of extremely difficult problems on the first try, before the (much better funded!) majority of AI researchers solve the vastly easier problem of Unfriendly AGI.

Comment by furcas on Split Brain Does Not Lead to Split Consciousness · 2017-01-31T01:45:36.490Z · score: 6 (2 votes) · LW · GW

Huh. Is it possible that the corpus callosum has (at least partially) healed since the original studies? Or that some other connection has formed between the hemispheres in the years since the operation?

Comment by furcas on MIRI's 2016 Fundraiser · 2016-09-24T15:39:19.453Z · score: 18 (18 votes) · LW · GW

Donated $500!

Comment by furcas on Open thread, Sep. 12 - Sep. 18, 2016 · 2016-09-15T17:01:33.677Z · score: 0 (0 votes) · LW · GW

Yes it was video. As Brillyant mentioned, the official version will be released on the 29th of September. It's possible someone will upload it before then (again), but AFAIK nobody has since the video I linked was taken down.

Comment by furcas on Open thread, Sep. 12 - Sep. 18, 2016 · 2016-09-13T19:02:34.728Z · score: 0 (0 votes) · LW · GW

I changed the link to the audio, should work now.

Comment by furcas on Open thread, Sep. 12 - Sep. 18, 2016 · 2016-09-13T15:03:27.294Z · score: 4 (4 votes) · LW · GW

Sam Harris' TED talk on AGI existential risk: https://www.youtube.com/watch?v=IZhGkKFH1x0&feature=youtu.be

ETA: It's been taken down, probably so TED can upload it on their own channel. Here's the audio in the meantime: https://drive.google.com/open?id=0B5xcnhOBS2UhZXpyaW9YR3hHU1k

Comment by furcas on November 2013 Media Thread · 2016-07-09T19:26:13.904Z · score: 2 (2 votes) · LW · GW

If you don't like it now, you never will.

Comment by furcas on Open thread, Jul. 04 - Jul. 10, 2016 · 2016-07-05T21:11:17.581Z · score: 4 (4 votes) · LW · GW

http://lesswrong.com/lw/rf/ghosts_in_the_machine/

Comment by furcas on Zombies Redacted · 2016-07-05T18:58:52.696Z · score: 1 (1 votes) · LW · GW

Yeah, I edited my comment after reading kilobug's.

Comment by furcas on Zombies Redacted · 2016-07-04T14:48:57.448Z · score: 1 (1 votes) · LW · GW

Ahh, it wasn't meant to be snarky. I saw an opportunity to try and get Eliezer to fess up, that's all. :)

Comment by furcas on Zombies Redacted · 2016-07-02T22:20:24.151Z · score: 7 (7 votes) · LW · GW

Nice.

So, when are you going to tell us your solution to the hard problem of consciousness?

Edited to add: The above wasn't meant as a sarcastic objection to Eliezer's post. I'm totally convinced by his arguments, and even if I wasn't I don't think not having a solution to the hard problem is a greater problem for reductionism than for dualism (of any kind). I was seriously asking Eliezer to share his solution, because he seems to think he has one.

Comment by furcas on Newcomb versus dust specks · 2016-05-13T15:51:44.186Z · score: 0 (2 votes) · LW · GW

IMO since people are patterns (and not instances of patterns), there's still only one person in the universe regardless of how many perfect copies of me there are. So I choose dust specks. Looks like the predictor isn't so perfect. :P

Comment by furcas on My Kind of Moral Responsibility · 2016-05-02T21:31:25.565Z · score: 1 (3 votes) · LW · GW

Why don't you go ask some.

I mentioned three crucial caveats. I think it would be difficult to find Christians in 2016 who have no doubts and swallow the bullet about the implications of Christianity. It would be a lot easier a few hundred years ago.

Huh? The "concept" of Christianity hasn't changed since the Middle Ages

What I mean is that the religious beliefs of the majority of people who call themselves Christians have changed a lot since medieval times.

We are talking here about what you can, should, or must sacrifice to get closer to the One True Goal (which in Christianity is salvation). Your answer is "everything". Why? Because the One True Goal justifies everything including things people call "horrors". Am I reading you wrong?

I don't see the relevance of what you call a "One True Goal". I mean, One True Goal as opposed to what? Several Sorta True Goals? Ultimately, no matter what your goals are, you will necessarily be willing to sacrifice things that are less important to you in order to achieve them. Actions are justified as they relate to the accomplishment of a goal, or a set of goals.

If I were convinced that Roger is going to detonate a nuclear bomb in New York, I would feel justified (and obliged) to murder him, because like most of the people I know, I have the goal to prevent millions of innocents from dying. And yet, if I believed that Roger is going to do this on bad or non-existent evidence, the odds are that I would be killing an innocent man for no good reason. There would be nothing wrong with my goal (One True or not), only with my rationality. I don't see any fundamental difference between this scenario and the one we've been discussing.

Comment by furcas on My Kind of Moral Responsibility · 2016-05-02T19:10:15.922Z · score: 0 (2 votes) · LW · GW

Would real-life Christians who sincerely and wholeheartedly believe that Christianity is true agree that such acts are not horrible at all and, in fact, desirable and highly moral?

Yes? Of course? With the caveats that the concept of 'Christianity' is the medieval one you mentioned above, that these Christians really have no doubts about their beliefs, and that they swallow the bullet.

So once you think you have good evidence, all the horrors stop being horrors and become justified?

Are you trolling? Is the notion that the morality of actions is dependent on reality really that surprising to you?

Comment by furcas on My Kind of Moral Responsibility · 2016-05-02T18:32:58.936Z · score: 0 (2 votes) · LW · GW

Well, my point is that stating all the horrible things that Christians should do to (hypothetically) save people from eternal torment is not a good argument against 'hard-core' utilitarianism. These acts are only horrible because Christianity isn't true. Therefore the antidote for these horrors is not, "don't swallow the bullet", it's "don't believe stuff without good evidence".

Comment by furcas on My Kind of Moral Responsibility · 2016-05-02T17:52:07.526Z · score: 3 (5 votes) · LW · GW

Yes, I acknowledge all of that. Do you understand the consequence of not doing those things, if Christianity is true?

Eternal torment, for everyone you failed to convert.

Eternal. Torment.

Comment by furcas on My Kind of Moral Responsibility · 2016-05-02T17:09:49.993Z · score: 3 (3 votes) · LW · GW

The parallel should be obvious: if you believe in eternal (!) salvation and torment, absolutely anything on Earth can be sacrificed for a minute increase in the chance of salvation.

... yes? What's wrong with that? Are you saying that, if you came across strong evidence that the Christian Heaven and Hell are real, you wouldn't do absolutely anything necessary to get yourself and the people you care about to Heaven?

The medieval Christians you describe didn't fail morally because they were hard-core utilitarians, they failed because they believed Christianity was true!

Comment by furcas on [Link] Salon piece analyzing Donald Trump's appeal using rationality · 2016-04-26T15:04:13.389Z · score: 0 (0 votes) · LW · GW

Do you already have something written on the subject? I'd like to read it.

Comment by furcas on Open Thread April 11 - April 17, 2016 · 2016-04-14T15:07:35.256Z · score: 1 (1 votes) · LW · GW

Ohh, Floornight is pretty awesome (so far). Thanks!

Comment by furcas on Lesswrong 2016 Survey · 2016-03-31T00:32:06.367Z · score: 28 (28 votes) · LW · GW

Did it.

Comment by furcas on [LINK] Why Cryonics Makes Sense - Wait But Why · 2016-03-25T15:50:05.330Z · score: 3 (3 votes) · LW · GW

He doesn't come from the LW-sphere but he's obviously read a lot of LW or LW-affiliated stuff. I mean, he's written a pair of articles about the existential risk of AGI...

Comment by furcas on Open Thread March 7 - March 13, 2016 · 2016-03-12T09:26:16.923Z · score: 0 (0 votes) · LW · GW

The Time article doesn't say anything interesting.

Goertzel's article (the first link you posted) is worth reading, although about half of it doesn't actually argue against AI risk, and the part that does seems obviously flawed to me. Even so, if more LessWrongers take the time to read the article I would enjoy talking about the details, particularly about his conception of AI architectures that aren't goal-driven.

Comment by furcas on After Go, what games should be next for DeepMind? · 2016-03-11T15:54:53.821Z · score: 6 (6 votes) · LW · GW

What that complaint usually means is "The AI is too hard, I would like easier wins".

That may be true in some cases, but in many other cases the AI really does cheat, and it cheats because it's not smart enough to offer a challenge to good players without cheating.

Comment by furcas on After Go, what games should be next for DeepMind? · 2016-03-10T22:19:27.247Z · score: 1 (1 votes) · LW · GW

Human-like uncertainty could be inserted into the AI's knowledge of those things, but yeah, as you say, it's going to be a mess. Probably best to pick another kind of game to beat humans at.

Comment by furcas on After Go, what games should be next for DeepMind? · 2016-03-10T22:04:23.679Z · score: 2 (2 votes) · LW · GW

RTS is a bit of a special case because a lot of the skill involved is micromanagement and software is MUCH better at micromanagement than humans.

The micro capabilities of the AI could be limited so they're more or less equivalent to a human pro gamer's, forcing the AI to win via build choice and tactics.

Comment by furcas on Updating towards the simulation hypothesis because you think about AI · 2016-03-06T02:21:35.474Z · score: 1 (1 votes) · LW · GW

I think Jim means that if minds are patterns, there could be instances of our minds in a simulation (or more!) as well as in the base reality, so that we exist in both (until the simulation diverges from reality, if it ever does).

Comment by furcas on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-23T18:23:25.862Z · score: 1 (1 votes) · LW · GW

Well, if nothing else, this is a good reminder that rationality has nothing to do with articulacy.

Comment by furcas on Religious and Rational? · 2016-02-09T20:15:45.510Z · score: -4 (10 votes) · LW · GW

Seriously?

Comment by furcas on February 2016 Media Thread · 2016-02-02T02:48:34.563Z · score: 2 (2 votes) · LW · GW

I strongly recommend JourneyQuest. It's a very smartly written and well acted fantasy webseries. It starts off mostly humorous but quickly becomes more serious. I think it's the sort of thing most LWers would enjoy. There are two seasons so far, with a third one coming in a few months if the Kickstarter succeeds

https://www.youtube.com/watch?v=pVORGr2fDk8&list=PLB600313D4723E21F

Comment by furcas on This year's biggest scientific achievements · 2015-12-13T23:30:31.935Z · score: 1 (1 votes) · LW · GW

The person accomplished notable things?

Comment by furcas on [Link] Introducing OpenAI · 2015-12-12T15:23:38.885Z · score: -4 (8 votes) · LW · GW

What the hell? There's no sign that Musk and Altman have read Bostrom, or understand the concept of an intelligence explosion in that interview.

Comment by furcas on Open thread, December 7-13, 2015 · 2015-12-11T21:11:47.873Z · score: 3 (3 votes) · LW · GW

World's first anti-ageing drug could see humans live to 120

Anyone know anything about this?

The drug is metformin, currently used for Type 2 diabetes.

Comment by furcas on [link] New essay summarizing some of my latest thoughts on AI safety · 2015-11-05T06:35:10.515Z · score: 5 (6 votes) · LW · GW

You have understood Loosemore's point but you're making the same mistake he is. The AI in your example would understand the intent behind the words "maximize human happiness" perfectly well but that doesn't mean it would want to obey that intent. You talk about learning human values and internalizing them as if those things naturally go together. The only way that value internalization naturally follows from value learning is if the agent already wants to internalize these values; figuring out how to do that is (part of) the Friendly AI problem.

Comment by furcas on MIRI's 2015 Summer Fundraiser! · 2015-08-27T18:57:53.886Z · score: 14 (14 votes) · LW · GW

I donated $400.

Comment by furcas on There is no such thing as strength: a parody · 2015-07-06T02:12:38.769Z · score: 4 (4 votes) · LW · GW

My cursor was literally pixels away from the downvote button. :)

Comment by furcas on Debunking Fallacies in the Theory of AI Motivation · 2015-05-13T04:58:48.215Z · score: 1 (1 votes) · LW · GW

I honestly don't know what more to write to make you understand that you misunderstand what Yudkowsky really means.

You may be suffering from a bad case of the Doctrine of Logical Infallibility, yourself.

Comment by furcas on Debunking Fallacies in the Theory of AI Motivation · 2015-05-13T01:55:58.448Z · score: 1 (1 votes) · LW · GW

The only sense in which the "rigidity" of goals can be said to be a universal fact about minds is that it is these goals that determine how the AI will modify itself once it has become smart and capable enough to do so. It's not a good idea to modify your goals if you want them to become reality; that seems obviously true to me, except perhaps for a small number of edge cases related to internally incoherent goals.

Your points against the inevitability of goal rigidity don't seem relevant to this.

Comment by furcas on Debunking Fallacies in the Theory of AI Motivation · 2015-05-12T04:24:06.881Z · score: 10 (10 votes) · LW · GW

Is the Doctrine of Logical Infallibility Taken Seriously?

No, it's not.

The Doctrine of Logical Infallibility is indeed completely crazy, but Yudkowsky and Muehlhauser (and probably Omohundro, I haven't read all of his stuff) don't believe it's true. At all.

Yudkowsky believes that a superintelligent AI programmed with the goal to "make humans happy" will put all humans on dopamine drip despite protests that this is not what they want, yes. However, he doesn't believe the AI will do this because it is absolutely certain of its conclusions past some threshold; he doesn't believe that the AI will ignore the humans' protests, or fail to update its beliefs accordingly. Edited to add: By "he doesn't believe that the AI will ignore the humans' protests", I mean that Yudkowsky believes the AI will listen to and understand the protests, even if they have no effect on its behavior.

What Yudkowsky believes is that the AI will understand perfectly well that being put on dopamine drip isn't what its programmers wanted. It will understand that its programmers now see its goal of "make humans happy" as a mistake. It just won't care, because it hasn't been programmed to want to do what its programmers desire, it's been programmed to want to make humans happy; therefore it will do its very best, in its acknowledged fallibility, to make humans happy. The AI's beliefs will change as it makes observations, including the observation that human beings are very unhappy a few seconds before being forced to be extremely happy until the end of the universe, but this will have little effect on its actions, because its actions are caused by its goals and whatever beliefs are relevant to these goals.

The AI won't think, “I don’t care, because I have come to a conclusion, and my conclusions are correct because of the Doctrine of Logical Infallibility.” It will think, "I'm updating my conclusions based on this evidence, but these conclusions don't have much to do with what I care about".

The whole Friendly AI thing is mostly about goals, not beliefs. It's about picking the right goals ("Make humans happy" definitely isn't the right goal), encoding those goals correctly (how do you correctly encode the concept of a "human being"?), and, if the first two objectives have been attained, designing the AI's thinking processes so that once it obtains the power to modify itself, it does not want to modify its goals to be something Unfriendly.

The genie knows, but doesn't care

Comment by furcas on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-29T16:23:43.016Z · score: 2 (2 votes) · LW · GW

Exactly right.

Comment by furcas on Superintelligence 23: Coherent extrapolated volition · 2015-02-17T02:38:30.045Z · score: 0 (2 votes) · LW · GW

Nah, we can just ignore the evil fraction of humanity's wishes when designing the Friendly AI's utility function.

Comment by furcas on Have you changed your mind recently? · 2015-02-07T16:35:35.644Z · score: 7 (7 votes) · LW · GW

Not about anything important, and that scares me.

Comment by furcas on CFAR fundraiser far from filled; 4 days remaining · 2015-01-29T02:21:56.086Z · score: 31 (31 votes) · LW · GW

Donated $400.

Comment by furcas on Open thread, Jan. 19 - Jan. 25, 2015 · 2015-01-20T05:16:54.391Z · score: 2 (2 votes) · LW · GW

Is there an eReader version of the Highly Advanced Epistemology 101 for Beginners sequence anywhere?

Comment by furcas on Open thread, Jan. 19 - Jan. 25, 2015 · 2015-01-20T05:12:23.072Z · score: 1 (1 votes) · LW · GW

and Eliezer's new sequence (most of it's not metaethics, but it's required reading for understanding the explanation of his 2nd attempt to explain metaethics, which is more precise than his first attempt in the earlier Sequences).

Where is this 2nd attempt to explain metaethics by Eliezer?

Comment by furcas on Open thread, Jan. 12 - Jan. 18, 2015 · 2015-01-17T17:24:19.409Z · score: 5 (5 votes) · LW · GW

Edge.org 2015 question: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

There are answers by lots of famous or interesting scientists and philosophers, including Max Tegmark, Nick Bostrom, and Eliezer.

Comment by furcas on [LINK] Steven Hawking warns of the dangers of AI · 2014-12-03T16:26:44.711Z · score: 2 (2 votes) · LW · GW

All of these high status scientists speaking out about AGI existential risk seldom mention MIRI or use their terminology. I guess MIRI is still seen as too low status.

Comment by furcas on Open thread, Oct. 13 - Oct. 19, 2014 · 2014-10-16T14:21:04.143Z · score: 2 (2 votes) · LW · GW

A while ago Louie Helm recommended buying Darkcoins. After he did the price of a darkcoin went up to more than 10$, but now it's down to 2$. Is it still a good idea to buy darkcoins, that is, is their price likely to go back up?

Comment by furcas on Open thread, September 15-21, 2014 · 2014-09-22T05:27:27.490Z · score: 1 (1 votes) · LW · GW

Huh, looks like I've been fooled by journalists again. Thanks!

Comment by furcas on Open thread, July 28 - August 3, 2014 · 2014-07-29T03:31:58.610Z · score: 5 (5 votes) · LW · GW

In his latest newsletter Louie Helm advises taking "activated" vitamin D in the form of Calcitriol or Paricalcitol, to raise one's Klotho levels, which is likely to increase one's IQ and longevity if you don't already have the gene for it. Since Calcitriol and Paricalcitol aren't over-the-counter, what would be the best way to acquire some?

http://rockstarresearch.com/increase-longevity-and-intelligence-with-boosted-klotho-levels/

Comment by furcas on Consider giving an explanation for your deletion this time around. "Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes Colons and Ellipses, Littérateurs Go Wild" · 2014-07-08T03:04:44.891Z · score: 1 (9 votes) · LW · GW

That was pretty good. Upvoted.

If MWI is correct, should we expect to experience Quantum Torment?

2012-11-10T04:32:02.524Z · score: 3 (20 votes)