Causation as Bias (sort of) 2009-07-10T08:38:23.873Z


Comment by spuckblase on Intelligence Amplification and Friendly AI · 2013-09-28T19:52:32.626Z · LW · GW

(2) looks awfully hard, unless we can find a powerful IA technique that also, say, gives you a 10% chance of cancer. Then some EAs devoted to building FAI might just use the technique, and maybe the AI community in general doesn’t.

Using early IA techniques is probably risky in most cases. Committed altruists might have a general advantage here.

Comment by spuckblase on Help us name a short primer on AI risk! · 2013-09-18T08:18:11.724Z · LW · GW

Risky Machines: Artificial Intelligence as a Danger to Mankind

Comment by spuckblase on Writing Style and the Typical Mind Fallacy · 2013-07-16T08:50:52.946Z · LW · GW

I like your non-fiction style a lot (I don't know your fiction). I often get the impression you're in total control of the material: very thorough yet original, witty, and humble. The exemplary research paper. Definitely more Luke than Yvain/Eliezer.

Comment by spuckblase on The noncentral fallacy - the worst argument in the world? · 2012-09-13T14:35:47.938Z · LW · GW

Navigating the LW rules is not intended to require precognition.

Well, it was required when (negative) karma for Main articles increased tenfold.

Comment by spuckblase on Meetup : Berlin Meetup · 2012-08-11T12:33:16.385Z · LW · GW

I'll be there!

Comment by spuckblase on So You Want to Save the World · 2012-01-13T19:12:34.545Z · LW · GW

Do you still want to do this?

Comment by spuckblase on So You Want to Save the World · 2012-01-05T11:16:48.412Z · LW · GW

To be more specific:

I live in Germany, so my timezone is GMT+1. My preferred time would be on a workday sometime after 8 pm (my time). Since I'm a German native speaker, and the AI has the harder job anyway, I offer: 50 dollars for you if you win, 10 dollars for me if I do.

Comment by spuckblase on Simple theory of IMDB bias · 2012-01-03T13:17:25.214Z · LW · GW

I agree in large parts, but it seems likely that value drift plays a role, too.

Comment by spuckblase on So You Want to Save the World · 2012-01-03T07:28:40.346Z · LW · GW

Well, I'm somewhat sure (80%?) that no human could do it, but...let's find out! Original terms are fine.

Comment by spuckblase on So You Want to Save the World · 2012-01-01T11:32:43.061Z · LW · GW

I'd bet up to fifty dollars!?

Comment by spuckblase on [German] Wo wohnt ihr? · 2011-12-15T15:47:12.889Z · LW · GW

Ok, so who's the other one living in Berlin?

Comment by spuckblase on [Link] A Short Film based on Eliezer Yudkowsky's AI Box experiment · 2011-12-08T14:39:06.212Z · LW · GW

If there are others who feel the same way, maybe we could set up some experiments where AI players are anonymous.

In that case, I'd like to participate as gatekeeper. I'm ready to put some money on the line.

BTW, I wonder if Clippy would want to play a human, too.

Comment by spuckblase on FAI FAQ draft: general intelligence and greater-than-human intelligence · 2011-11-24T15:03:25.291Z · LW · GW

Some have argued that a machine cannot reach human-level general intelligence, for example see Lucas (1961); Dreyfus (1972); Penrose (1994); Searle (1980); Block (1981). But Chalmers (2010) points out that their arguments are irrelevant: To reply to the Lucas, Penrose, and Dreyfus objections, we can note that nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain. As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on... [But if] there are systems that produce apparently superintelligent outputs, then whether or not these systems are truly conscious or intelligent, they will have a transformative impact on the rest of the world. Chalmers (2010) summarizes two arguments suggesting that machines can reach human-level general intelligence:

  • The emulation argument (see section 7.3)
  • The evolutionary argument (see section 7.4)

This whole paragraph doesn't seem to belong to section 1.11.

Comment by spuckblase on FAI FAQ draft: What is the Singularity? · 2011-11-17T09:06:22.011Z · LW · GW

it is standard in a rational discourse to include and address opposing arguments, provided your audience includes anyone other than supporters already. At a minimum, one should state an objection and cite a discussion of it.

This is not a rational discourse but part of an FAQ, providing explanations/definitions. Counterarguments would be misplaced.

Comment by spuckblase on Do the people behind the veil of ignorance vote for "specks"? · 2011-11-15T18:46:15.200Z · LW · GW

For those who read German or can infer the meaning: philosopher Christoph Fehige shows a way to embrace utilitarianism and dust specks.

Comment by spuckblase on FAI FAQ draft: What is Friendly AI? · 2011-11-15T10:56:18.327Z · LW · GW

"Literalness" is explained in sufficient detail to get a first idea of the connection to FAI, but "Superpower" is not.

Comment by spuckblase on Intelligence Explosion analysis draft: introduction · 2011-11-15T09:59:10.789Z · LW · GW

going back to the 1956 Dartmouth conference on AI

maybe better (if this is good English): going back to the seminal 1956 Dartmouth conference on AI

Comment by spuckblase on Intelligence Explosion analysis draft: types of digital intelligence · 2011-11-15T09:03:26.921Z · LW · GW

There are many types of digital intelligence. To name just four:

Readers might like to know what the others are and why you chose those four.

Comment by spuckblase on Cryonics Promotional Video Contest -- 10 BTC Prize · 2011-11-11T17:04:25.086Z · LW · GW

Relevant? (A fake ad by renowned artist Katerina Jebb)

Comment by spuckblase on Singularity Institute mentioned on Franco-German TV · 2011-11-07T14:46:38.436Z · LW · GW

Die Forscher kombinieren Daten aus Informatik und psychologischen Studien. Ihr Ziel: Eine Not-to-do-Liste, die jedes Unternehmen bekommt, das an künstlicher Intelligenz arbeitet.

Rough translation:

The researchers combine data from computer science and psychological studies. Their goal: a not-to-do list, given to every organization working on artificial intelligence.

Comment by spuckblase on Selection Effects in estimates of Global Catastrophic Risk · 2011-11-04T14:14:02.666Z · LW · GW

I don't see a special problem...evaluate the arguments, try to correct for biases. Business as usual. Or do you suspect there is a new type of bias at work here?

Comment by spuckblase on 2011 Less Wrong Census / Survey · 2011-11-01T08:26:07.638Z · LW · GW

I took it. Thanks for this, I'm excited about the results.

Comment by spuckblase on Whole Brain Emulation: Looking At Progress On C. elgans · 2011-10-31T12:47:58.882Z · LW · GW

Typo in the title!

Comment by spuckblase on Help needed: German translation of the Singularity FAQ · 2011-10-30T19:30:50.418Z · LW · GW

Good translation! I'm through the whole text now, did proofreading, and changed quite a bit; some terminological questions remain. After re-reading the original in the process, I think the English FAQ needs some work (unbalanced sections, winding sentences, etc.). But as a non-native speaker, I don't dare.

Comment by spuckblase on Rhetoric for the Good · 2011-10-26T08:12:47.748Z · LW · GW

At least 29 and 32 are process advice, too.

31: Anything can be done in dialogue (cf. Plato), but probably shouldn't be.

22: Reader of blogs or of papers? What's the target audience?

Further points:

  • Avoid formulas
  • Use key words, catch phrases, highlighting.
  • Use a summary and/or conclusion where possible.

Comment by spuckblase on Rhetoric for the Good · 2011-10-25T14:14:55.237Z · LW · GW

First approximation: Make your writing similar to a blockbuster movie.

Comment by spuckblase on Repairing Yudkowsky's anti-zombie argument · 2011-10-05T14:11:44.641Z · LW · GW

Since the Universe’s computational accuracy appears to be infinite, in order for the mind to be omniscient about a human brain it must be running the human brain’s quark-level computations within its own mind; any approximate computation would yield imperfect predictions. In the act of running this computation, the brain’s qualia are generated, if (as we have assumed) the brain in question experiences qualia. Therefore the omniscient mind is fully aware of all of the qualia that are experienced within the volume of the Universe about which it has perfect knowledge.

Suppose an entity with qualia emerges in the Game of Life. Surely the omniscient being doesn't have to have those qualia to predict perfectly (and, it seems, to have perfect "physical" knowledge of the simulation)?
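The Game of Life point can be made concrete: a simulator that applies the automaton's update rule reproduces every future state exactly, by computation alone, whatever (if anything) the simulated pattern experiences. A minimal sketch (the function name and the blinker example are my own illustrative choices, not anything from the discussion):

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life one step.
    `live` is a set of (x, y) coordinates of live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step iff it has 3 live neighbors,
    # or has 2 live neighbors and is currently alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A blinker oscillates with period 2: the simulator predicts
# every state of the world perfectly, just by running the rule.
blinker = {(0, 0), (1, 0), (2, 0)}
print(life_step(blinker))                        # the vertical phase
print(life_step(life_step(blinker)) == blinker)  # True
```

Nothing in this computation requires the simulator to instantiate whatever inner states the pattern might have, which is the intuition the comment is pressing.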

Comment by spuckblase on First German Meeting · 2011-09-26T10:29:12.985Z · LW · GW

I promised to give you feedback on your wikibook. Some quick thoughts:

There is a ton of false or at least controversial stuff (e.g., "Disappointment is always something positive"; "instrumental rationality" = "instrumentale Rationalität", whereas it's "instrumentelle Rationalität") or stuff that cannot be understood without further knowledge (what is your average reader to make of the words "The Litany of Gendlin"?).

The preface lacks footnotes, links, and an outline.

You're obviously just getting started on this project, so maybe you should rather wait for EY's book(s) on rationality and try a translation thereof?

Comment by spuckblase on Meetup : Munich Meetup, Saturday September 10th, 2PM · 2011-09-08T07:02:41.785Z · LW · GW

Great! Me too.

Comment by spuckblase on First LW-Meetup in Germany · 2011-08-11T13:49:24.906Z · LW · GW

Let's meet September 10th in Munich. Maybe we can attract a few more people with a definite time and setting.

Comment by spuckblase on First LW-Meetup in Germany · 2011-07-10T09:08:42.636Z · LW · GW

I would definitely attend, but not on the first two weekends in August. The 5th is a Friday, which may be problematic too, at least for some. I propose a Saturday/Sunday meetup later in August. The 20th, maybe? End of July would also be possible.

Comment by spuckblase on Lesswrongers from the German-speaking world, unite! · 2011-05-20T08:38:56.932Z · LW · GW

I live in Berlin, but Munich would be fine. Not in June though.

Comment by spuckblase on Ben Goertzel interviews Michael Anissimov regarding existential risk [link] · 2011-04-21T11:45:47.996Z · LW · GW

No. He seems to talk about the species, and not its members.

Comment by spuckblase on Interested in a European lesswrong meetup · 2010-08-09T06:32:33.644Z · LW · GW

I'm sitting in Berlin.

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-17T08:22:06.514Z · LW · GW

Well, if you stipulate that "abstract truth-seeking" has nothing whatsoever to do with my getting along in the world, then you're right I guess.

Comment by spuckblase on The Strangest Thing An AI Could Tell You · 2009-07-16T12:27:33.416Z · LW · GW

"There is no causation."

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-16T12:24:19.513Z · LW · GW

Seems to me you're conflating different concepts: "being the reason for" and "being the cause of":

Compare what an enemy of determinism could say: "We have no reason to listen to you if your theory is false, and no reason to listen if it's true either." Now what?

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-15T08:43:02.435Z · LW · GW

Now we're getting to the heart of it. Upvoted. What does it mean to live in a Hume world? For example, we would have to accept the existence of non-reducible mental states (everybody here granted the consistency of the theory until now) and take everything on faith. But indeed we cannot take anything on faith, since we cannot think, if thinking is a causal notion!?

Suppose for the sake of argument we're not living in a Hume world, but had massive, perhaps infinite computing power. We could simulate so many Hume worlds that there are some with order and inhabitants in them. They would then quasi-think, quasi-feel, and have quasi-experiences. Everything happens as if there were necessitating laws governing it, but there aren't. Still, the universe quasi-looks ordered to them.

This theory and solipsism have something in common, but they are distinguishable. Solipsism surely is consistent but highly implausible compared to the standard model. But there could be evidence for it, and it is of another sort than the evidence for a Hume world. If pigs start to fly, only Hume-world theory (HWT from now on) can explain this easily.

Another point not discussed enough so far is the evidence for HWT: causal gaps and anomalies in the fabric of the world as we already know it. In a causal world, how do we properly deal with mental causation, qualia, time-travel paradoxes, and indeterministic processes in general? I'm not saying there are no other solutions, but a lot of people think we have not made, and possibly cannot make, progress on these questions, at least in the current framework. But HWT delivers here.

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-15T08:14:25.413Z · LW · GW

First, none of us are being as rude to you as you are to us in this comment alone. If you can't stand the abuse you're getting here, then quit commenting on this post.

Oh, I can take the abuse, I'm just wondering.

Second, we've given this well more than a few minutes' discussion, and you've given us no reason to believe that we misunderstand your theory

At least at first, I've been given just accusations and incredulous stares.

if a theory doesn't help us get what we actually want, it really is of no use to us

If you want the truth, you have to consider being wrong even about your darlings, say, prediction.

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-15T08:08:35.085Z · LW · GW

Why does everybody assume I'm a die-hard believer in this theory?

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-14T08:16:02.465Z · LW · GW

From your first comment on my post onward, you were really aggressive. Arguments are fine, but why always the personal attacks? I'll tell you what might be going on here: you saw the post, couldn't make sense of it after a quick glance, and decided it was junk and an easy way to gain reputation and boost your ego by bashing. And you are not alone. There are lots of haters, and nobody who just said, "OK, I don't believe it, but let's discuss it," and stopped hitting the guy over the head.

The theory is highly counterintuitive, I said as much, but it is worth at least a few minutes of discussion, and I have discussed it with quite a few eminent philosophers already. None was convinced (which is hardly surprising), but they found the discussion interesting and the theory consistent. So something has gone wrong here. Maybe all this talk of "winning" and "Bayesian conspiracy" and whatever really does a disservice to the site's principal goal of being as unbiased as possible.

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-14T07:59:01.082Z · LW · GW

Downvoted again. Phew. Maybe you just tell me where I said or implied it?

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-14T07:51:05.215Z · LW · GW

Thanks but no thanks. I do know this really, really basic stuff; I just don't agree. Instead of just postulating that all explanations have to be tied to prediction, why don't you try to rebut the argument? Again: inhabitants of a Hume world are right to explain their world with this Hume-world theory. They just happen to live in a world where no prediction is possible. So explanation should be conceived independently of prediction. Not every explanation needs to be tied to prediction.

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-13T08:39:42.894Z · LW · GW

Ok, it seems that if you're right to choose density over cardinality, then it's a blow to my proposal. I'm still trying to figure it out. Suppose the universe is an infinite Hume world. Is it then true that even though there are just as many ordered regions, the likelihood that I live in one is almost zero?
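The density-versus-cardinality distinction at issue here can be illustrated with a toy model (entirely my own construction: "ordered" is crudely stood in for by "all cells equal" in a random binary region). There are infinitely many ordered regions in an infinite ensemble, the same cardinality as the chaotic ones, yet their density falls off exponentially with region size:

```python
import random

def fraction_ordered(region_size, trials=100_000, seed=0):
    """Monte Carlo estimate of the density of 'ordered' regions
    among random binary regions of a given size, where 'ordered'
    is a toy stand-in for regularity: all cells take the same value."""
    rng = random.Random(seed)
    ordered = sum(
        1 for _ in range(trials)
        if len({rng.randint(0, 1) for _ in range(region_size)}) == 1
    )
    return ordered / trials

# The exact density is 2 / 2**n: ordered regions never run out,
# but they become vanishingly rare as regions grow.
for n in (2, 4, 8, 16):
    print(n, fraction_ordered(n), 2 / 2 ** n)
```

On this toy picture, "just as many ordered regions" (in cardinality) is compatible with "almost certainly not in one" (in density), which is exactly the tension the comment is wrestling with.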

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-13T08:30:15.497Z · LW · GW

No. We talked about evidential support, not predictive power. Inhabitants of a Hume world are obviously right to explain flying pigs et al. by a Hume-world theory, even if they cannot predict anything.

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-11T19:27:44.575Z · LW · GW

So now I've scanned over the "Dust Theory FAQ" to which Z_M_Davis linked (thanks again!).


Q5: How seriously do you take the Dust Theory yourself?

Egan replies:

A5: Not very seriously, although I have yet to hear a convincing refutation of it on purely logical grounds. For example, some people have suggested that a sequence of states could only experience consciousness if there was a genuine causal relationship between them. The whole point of the Dust Theory, though, is that there is nothing more to causality than the correlations between states. However, I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.

So, I would just add that Egan's Dust theory (not without its followers on this side, it seems) can be supplemented by an "infinite universe of the right kind" approach... and voilà: we have pretty much what I say.

So why the hate?

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-11T19:15:01.340Z · LW · GW

Where do I say or imply that? Did you read it at all?

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-11T19:13:25.394Z · LW · GW

Why don't you apply the principle of charity for once?

Anyway, compare:

  1. The universe was created in the big bang.
  2. God created the big bang.

So in 2, I have now merely prolonged the mystery. Is it less mysterious?

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-11T19:07:52.287Z · LW · GW

If the universe is completely non-deterministic with infinite random events happening, shouldn't the odds of my living in the specific sub-universe that appears fully deterministic be almost indistinguishable from zero?

As I said, I want to argue that the sets of ordered and chaotic regions have the same cardinality.

Comment by spuckblase on Causation as Bias (sort of) · 2009-07-11T19:02:28.005Z · LW · GW

I guess we're calling it the Humeiform theory - isn't supported by any conceivable block of evidence, including that which actually holds true

Just untrue. If pigs start to fly, etc., you'd better remember this theory. Besides, I repeat that, in my opinion, the existence of qualia, mental causation, and indeterministic processes (controverted, granted, but this is definitely not a closed case) already gives support.