Posts

If MWI is correct, should we expect to experience Quantum Torment? 2012-11-10T04:32:02.524Z

Comments

Comment by Furcas on Google "We Have No Moat, And Neither Does OpenAI" · 2023-05-05T05:05:47.110Z · LW · GW

This comment has gotten lots of upvotes, but has anyone here tried Vicuna-13B?

Comment by Furcas on My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" · 2023-04-13T00:53:41.214Z · LW · GW

Well, this is insanely disappointing. Yes, the OP shouldn't have directly replied to the Bankless podcast like that, but it's not like he hasn't read your List of Lethalities or your other writing on AGI risk. You really have no excuse for brushing off very thorough and honest criticism such as this, particularly the sections that talk about alignment.

And as others have noted, Eliezer Yudkowsky, of all people, complaining about a blog post being long is the height of irony.

This is coming from someone who's mostly agreed with you on AGI risk since reading the Sequences, years ago, and who's donated to MIRI, by the way.

On the bright side, this does make me (slightly) update my probability of doom downwards.

Comment by Furcas on Hooray for stepping out of the limelight · 2023-04-01T07:22:39.516Z · LW · GW

You may be right about DeepMind's intentions in general, but I'm certain that the reason they didn't brag about AlphaStar is that it didn't quite succeed. There was never an official series between the best SC2 player in the world and AlphaStar. And once Grandmaster-level players got a bit used to playing against AlphaStar, even they could beat it, to say nothing of pros. AlphaStar had excellent micro-management and decent tactics, but zero strategic ability. It had the appearance of strategic thinking because there were in fact multiple AlphaStars, each of which had learned a different build during training; each instance would then always execute that build. We never saw AlphaStar do something as elementary as scouting the enemy's army composition and building the units that would best counter it.

So DeepMind saw they had only partially succeeded, but for some reason, instead of continuing their work on AlphaStar, they decided to declare victory and quietly move on to another project.

Comment by Furcas on What Are The Chances of Actually Achieving FAI? · 2017-07-26T04:24:13.336Z · LW · GW

I'd guess 1%. The small minority of AI researchers working on FAI will have to find the right solutions to a set of extremely difficult problems on the first try, before the (much better funded!) majority of AI researchers solve the vastly easier problem of Unfriendly AGI.

Comment by Furcas on Split Brain Does Not Lead to Split Consciousness · 2017-01-31T01:45:36.490Z · LW · GW

Huh. Is it possible that the corpus callosum has (at least partially) healed since the original studies? Or that some other connection has formed between the hemispheres in the years since the operation?

Comment by Furcas on MIRI's 2016 Fundraiser · 2016-09-24T15:39:19.453Z · LW · GW

Donated $500!

Comment by Furcas on Open thread, Sep. 12 - Sep. 18, 2016 · 2016-09-15T17:01:33.677Z · LW · GW

Yes, it was video. As Brillyant mentioned, the official version will be released on the 29th of September. It's possible someone will upload it before then (again), but AFAIK nobody has since the video I linked was taken down.

Comment by Furcas on Open thread, Sep. 12 - Sep. 18, 2016 · 2016-09-13T19:02:34.728Z · LW · GW

I changed the link to the audio, should work now.

Comment by Furcas on Open thread, Sep. 12 - Sep. 18, 2016 · 2016-09-13T15:03:27.294Z · LW · GW

Sam Harris' TED talk on AGI existential risk: https://www.youtube.com/watch?v=IZhGkKFH1x0&feature=youtu.be

ETA: It's been taken down, probably so TED can upload it on their own channel. Here's the audio in the meantime: https://drive.google.com/open?id=0B5xcnhOBS2UhZXpyaW9YR3hHU1k

Comment by Furcas on November 2013 Media Thread · 2016-07-09T19:26:13.904Z · LW · GW

If you don't like it now, you never will.

Comment by Furcas on Open thread, Jul. 04 - Jul. 10, 2016 · 2016-07-05T21:11:17.581Z · LW · GW

http://lesswrong.com/lw/rf/ghosts_in_the_machine/

Comment by Furcas on Zombies Redacted · 2016-07-05T18:58:52.696Z · LW · GW

Yeah, I edited my comment after reading kilobug's.

Comment by Furcas on Zombies Redacted · 2016-07-04T14:48:57.448Z · LW · GW

Ahh, it wasn't meant to be snarky. I saw an opportunity to try and get Eliezer to fess up, that's all. :)

Comment by Furcas on Zombies Redacted · 2016-07-02T22:20:24.151Z · LW · GW

Nice.

So, when are you going to tell us your solution to the hard problem of consciousness?

Edited to add: The above wasn't meant as a sarcastic objection to Eliezer's post. I'm totally convinced by his arguments, and even if I weren't, I don't think the lack of a solution to the hard problem is a greater problem for reductionism than for dualism (of any kind). I was seriously asking Eliezer to share his solution, because he seems to think he has one.

Comment by Furcas on Newcomb versus dust specks · 2016-05-13T15:51:44.186Z · LW · GW

IMO, since people are patterns (and not instances of patterns), there's still only one of me in the universe regardless of how many perfect copies of me there are. So I choose dust specks. Looks like the predictor isn't so perfect. :P

Comment by Furcas on My Kind of Moral Responsibility · 2016-05-02T21:31:25.565Z · LW · GW

> Why don't you go ask some.

I mentioned three crucial caveats. I think it would be difficult to find Christians in 2016 who have no doubts and who swallow the bullet about the implications of Christianity. It would have been a lot easier a few hundred years ago.

> Huh? The "concept" of Christianity hasn't changed since the Middle Ages

What I mean is that the religious beliefs of the majority of people who call themselves Christians have changed a lot since medieval times.

> We are talking here about what you can, should, or must sacrifice to get closer to the One True Goal (which in Christianity is salvation). Your answer is "everything". Why? Because the One True Goal justifies everything including things people call "horrors". Am I reading you wrong?

I don't see the relevance of what you call a "One True Goal". I mean, One True Goal as opposed to what? Several Sorta True Goals? Ultimately, no matter what your goals are, you will necessarily be willing to sacrifice things that are less important to you in order to achieve them. Actions are justified as they relate to the accomplishment of a goal, or a set of goals.

If I were convinced that Roger was going to detonate a nuclear bomb in New York, I would feel that murdering him was justified (and obligatory), because like most of the people I know, I have the goal of preventing millions of innocents from dying. And yet, if my belief that Roger was going to do this were based on bad or non-existent evidence, the odds are that I would be killing an innocent man for no good reason. There would be nothing wrong with my goal (One True or not), only with my rationality. I don't see any fundamental difference between this scenario and the one we've been discussing.

Comment by Furcas on My Kind of Moral Responsibility · 2016-05-02T19:10:15.922Z · LW · GW

> Would real-life Christians who sincerely and wholeheartedly believe that Christianity is true agree that such acts are not horrible at all and, in fact, desirable and highly moral?

Yes? Of course? With the caveats that the concept of 'Christianity' is the medieval one you mentioned above, that these Christians really have no doubts about their beliefs, and that they swallow the bullet.

> So once you think you have good evidence, all the horrors stop being horrors and become justified?

Are you trolling? Is the notion that the morality of actions depends on reality really that surprising to you?

Comment by Furcas on My Kind of Moral Responsibility · 2016-05-02T18:32:58.936Z · LW · GW

Well, my point is that listing all the horrible things that Christians should do to (hypothetically) save people from eternal torment is not a good argument against 'hard-core' utilitarianism. These acts are only horrible because Christianity isn't true. Therefore the antidote to these horrors is not "don't swallow the bullet"; it's "don't believe stuff without good evidence".

Comment by Furcas on My Kind of Moral Responsibility · 2016-05-02T17:52:07.526Z · LW · GW

Yes, I acknowledge all of that. Do you understand the consequence of not doing those things, if Christianity is true?

Eternal torment, for everyone you failed to convert.

Eternal. Torment.

Comment by Furcas on My Kind of Moral Responsibility · 2016-05-02T17:09:49.993Z · LW · GW

> The parallel should be obvious: if you believe in eternal (!) salvation and torment, absolutely anything on Earth can be sacrificed for a minute increase in the chance of salvation.

... yes? What's wrong with that? Are you saying that, if you came across strong evidence that the Christian Heaven and Hell are real, you wouldn't do absolutely anything necessary to get yourself and the people you care about to Heaven?

The medieval Christians you describe didn't fail morally because they were hard-core utilitarians; they failed because they believed Christianity was true!

Comment by Furcas on [Link] Salon piece analyzing Donald Trump's appeal using rationality · 2016-04-26T15:04:13.389Z · LW · GW

Do you already have something written on the subject? I'd like to read it.

Comment by Furcas on Open Thread April 11 - April 17, 2016 · 2016-04-14T15:07:35.256Z · LW · GW

Ohh, Floornight is pretty awesome (so far). Thanks!

Comment by Furcas on Lesswrong 2016 Survey · 2016-03-31T00:32:06.367Z · LW · GW

Did it.

Comment by Furcas on [LINK] Why Cryonics Makes Sense - Wait But Why · 2016-03-25T15:50:05.330Z · LW · GW

He doesn't come from the LW-sphere but he's obviously read a lot of LW or LW-affiliated stuff. I mean, he's written a pair of articles about the existential risk of AGI...

Comment by Furcas on Open Thread March 7 - March 13, 2016 · 2016-03-12T09:26:16.923Z · LW · GW

The Time article doesn't say anything interesting.

Goertzel's article (the first link you posted) is worth reading, although about half of it doesn't actually argue against AI risk, and the part that does seems obviously flawed to me. Even so, if more LessWrongers take the time to read the article I would enjoy talking about the details, particularly about his conception of AI architectures that aren't goal-driven.

Comment by Furcas on After Go, what games should be next for DeepMind? · 2016-03-11T15:54:53.821Z · LW · GW

> What that complaint usually means is "The AI is too hard, I would like easier wins".

That may be true in some cases, but in many other cases the AI really does cheat, and it cheats because it's not smart enough to offer a challenge to good players without cheating.

Comment by Furcas on After Go, what games should be next for DeepMind? · 2016-03-10T22:19:27.247Z · LW · GW

Human-like uncertainty could be inserted into the AI's knowledge of those things, but yeah, as you say, it's going to be a mess. Probably best to pick another kind of game to beat humans at.

Comment by Furcas on After Go, what games should be next for DeepMind? · 2016-03-10T22:04:23.679Z · LW · GW

> RTS is a bit of a special case because a lot of the skill involved is micromanagement and software is MUCH better at micromanagement than humans.

The micro capabilities of the AI could be limited so they're more or less equivalent to a human pro gamer's, forcing the AI to win via build choice and tactics.

Comment by Furcas on Updating towards the simulation hypothesis because you think about AI · 2016-03-06T02:21:35.474Z · LW · GW

I think Jim means that if minds are patterns, there could be instances of our minds in a simulation (or more!) as well as in the base reality, so that we exist in both (until the simulation diverges from reality, if it ever does).

Comment by Furcas on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-23T18:23:25.862Z · LW · GW

Well, if nothing else, this is a good reminder that rationality has nothing to do with articulacy.

Comment by Furcas on Religious and Rational? · 2016-02-09T20:15:45.510Z · LW · GW

Seriously?

Comment by Furcas on February 2016 Media Thread · 2016-02-02T02:48:34.563Z · LW · GW

I strongly recommend JourneyQuest. It's a very smartly written and well-acted fantasy webseries. It starts off mostly humorous but quickly becomes more serious. I think it's the sort of thing most LWers would enjoy. There are two seasons so far, with a third one coming in a few months if the Kickstarter succeeds.

https://www.youtube.com/watch?v=pVORGr2fDk8&list=PLB600313D4723E21F

Comment by Furcas on This year's biggest scientific achievements · 2015-12-13T23:30:31.935Z · LW · GW

The person accomplished notable things?

Comment by Furcas on [Link] Introducing OpenAI · 2015-12-12T15:23:38.885Z · LW · GW

What the hell? In that interview, there's no sign that Musk and Altman have read Bostrom or understand the concept of an intelligence explosion.

Comment by Furcas on Open thread, December 7-13, 2015 · 2015-12-11T21:11:47.873Z · LW · GW

World's first anti-ageing drug could see humans live to 120

Anyone know anything about this?

The drug is metformin, currently used for Type 2 diabetes.

Comment by Furcas on [link] New essay summarizing some of my latest thoughts on AI safety · 2015-11-05T06:35:10.515Z · LW · GW

You have understood Loosemore's point, but you're making the same mistake he is. The AI in your example would understand the intent behind the words "maximize human happiness" perfectly well, but that doesn't mean it would want to obey that intent. You talk about learning human values and internalizing them as if those things naturally go together. The only way that value internalization naturally follows from value learning is if the agent already wants to internalize these values; figuring out how to do that is (part of) the Friendly AI problem.

Comment by Furcas on MIRI's 2015 Summer Fundraiser! · 2015-08-27T18:57:53.886Z · LW · GW

I donated $400.

Comment by Furcas on There is no such thing as strength: a parody · 2015-07-06T02:12:38.769Z · LW · GW

My cursor was literally pixels away from the downvote button. :)

Comment by Furcas on Debunking Fallacies in the Theory of AI Motivation · 2015-05-13T04:58:48.215Z · LW · GW

I honestly don't know what more to write to make you understand that you misunderstand what Yudkowsky really means.

You may be suffering from a bad case of the Doctrine of Logical Infallibility, yourself.

Comment by Furcas on Debunking Fallacies in the Theory of AI Motivation · 2015-05-13T01:55:58.448Z · LW · GW

The only sense in which the "rigidity" of goals can be said to be a universal fact about minds is that it is these goals that determine how the AI will modify itself once it has become smart and capable enough to do so. It's not a good idea to modify your goals if you want them to become reality; that seems obviously true to me, except perhaps for a small number of edge cases related to internally incoherent goals.

Your points against the inevitability of goal rigidity don't seem relevant to this.

Comment by Furcas on Debunking Fallacies in the Theory of AI Motivation · 2015-05-12T04:24:06.881Z · LW · GW

> Is the Doctrine of Logical Infallibility Taken Seriously?

No, it's not.

The Doctrine of Logical Infallibility is indeed completely crazy, but Yudkowsky and Muehlhauser (and probably Omohundro, I haven't read all of his stuff) don't believe it's true. At all.

Yudkowsky believes that a superintelligent AI programmed with the goal to "make humans happy" will put all humans on a dopamine drip despite protests that this is not what they want, yes. However, he doesn't believe the AI will do this because it is absolutely certain of its conclusions past some threshold; he doesn't believe that the AI will ignore the humans' protests or fail to update its beliefs accordingly. Edited to add: By "he doesn't believe that the AI will ignore the humans' protests", I mean that Yudkowsky believes the AI will listen to and understand the protests, even if they have no effect on its behavior.

What Yudkowsky believes is that the AI will understand perfectly well that being put on a dopamine drip isn't what its programmers wanted. It will understand that its programmers now see its goal of "make humans happy" as a mistake. It just won't care, because it hasn't been programmed to want to do what its programmers desire; it's been programmed to want to make humans happy, and therefore it will do its very best, in its acknowledged fallibility, to make humans happy. The AI's beliefs will change as it makes observations, including the observation that human beings are very unhappy a few seconds before being forced to be extremely happy until the end of the universe, but this will have little effect on its actions, because its actions are caused by its goals and whatever beliefs are relevant to those goals.

The AI won't think, “I don’t care, because I have come to a conclusion, and my conclusions are correct because of the Doctrine of Logical Infallibility.” It will think, "I'm updating my conclusions based on this evidence, but these conclusions don't have much to do with what I care about".

The whole Friendly AI thing is mostly about goals, not beliefs. It's about picking the right goals ("Make humans happy" definitely isn't the right goal), encoding those goals correctly (how do you correctly encode the concept of a "human being"?), and, if the first two objectives have been attained, designing the AI's thinking processes so that once it obtains the power to modify itself, it does not want to modify its goals to be something Unfriendly.

The genie knows, but doesn't care

Comment by Furcas on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-29T16:23:43.016Z · LW · GW

Exactly right.

Comment by Furcas on Superintelligence 23: Coherent extrapolated volition · 2015-02-17T02:38:30.045Z · LW · GW

Nah, we can just ignore the evil fraction of humanity's wishes when designing the Friendly AI's utility function.

Comment by Furcas on Have you changed your mind recently? · 2015-02-07T16:35:35.644Z · LW · GW

Not about anything important, and that scares me.

Comment by Furcas on CFAR fundraiser far from filled; 4 days remaining · 2015-01-29T02:21:56.086Z · LW · GW

Donated $400.

Comment by Furcas on Open thread, Jan. 19 - Jan. 25, 2015 · 2015-01-20T05:16:54.391Z · LW · GW

Is there an eReader version of the Highly Advanced Epistemology 101 for Beginners sequence anywhere?

Comment by Furcas on Open thread, Jan. 19 - Jan. 25, 2015 · 2015-01-20T05:12:23.072Z · LW · GW

> and Eliezer's new sequence (most of it's not metaethics, but it's required reading for understanding the explanation of his 2nd attempt to explain metaethics, which is more precise than his first attempt in the earlier Sequences).

Where is this 2nd attempt to explain metaethics by Eliezer?

Comment by Furcas on Open thread, Jan. 12 - Jan. 18, 2015 · 2015-01-17T17:24:19.409Z · LW · GW

Edge.org 2015 question: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

There are answers by lots of famous or interesting scientists and philosophers, including Max Tegmark, Nick Bostrom, and Eliezer.

Comment by Furcas on [LINK] Steven Hawking warns of the dangers of AI · 2014-12-03T16:26:44.711Z · LW · GW

These high-status scientists speaking out about AGI existential risk seldom mention MIRI or use its terminology. I guess MIRI is still seen as too low-status.

Comment by Furcas on Open thread, Oct. 13 - Oct. 19, 2014 · 2014-10-16T14:21:04.143Z · LW · GW

A while ago, Louie Helm recommended buying Darkcoins. After he did, the price of a darkcoin went up to more than $10, but now it's down to $2. Is it still a good idea to buy darkcoins? That is, is their price likely to go back up?