[LINK] Rhonda Cornum: How to Create a Strong Army and Society 2012-10-31T20:29:26.156Z
The Yudkowsky Ambition Scale 2012-09-12T15:08:06.292Z
How to deal with non-realism? 2012-05-22T13:58:56.526Z
Left-wing Alarmism vs. Right-wing Optimism: evidence on which is correct? 2012-04-10T22:35:29.327Z
Not insane. Unsane. 2012-02-17T23:43:08.196Z
The Sword of Good: I need some help to translate it 2011-11-30T13:41:49.565Z
Less Wrong meets… 2011-07-12T23:08:48.228Z
I'm becoming intolerant. Help. 2011-06-30T15:30:47.970Z
[Site Redesign Bug] Discussion post are duplicated in the main site. 2011-06-27T19:50:47.476Z
[Meta] [Solved] Unstable Karma 2011-06-27T14:51:10.322Z
"Is there a God" for noobs (followup) 2011-04-12T09:56:12.564Z
"Is there a God" for noobs 2011-03-25T00:26:56.091Z


Comment by loup-vaillant on What steep learning curve do you wish you'd climbed sooner? · 2014-09-11T01:44:00.650Z · LW · GW

I, on the other hand, love my cello. I also happen to enjoy practice itself. This helps a lot.

Comment by loup-vaillant on Harry Potter and the Methods of Rationality discussion thread, part 28, chapter 99-101 · 2013-12-12T23:04:02.472Z · LW · GW

I have defeated the hydra! (I had to cut off 670 heads). Feels like playing Diablo.

Comment by loup-vaillant on Harry Potter and the Methods of Rationality discussion thread, part 28, chapter 99-101 · 2013-12-12T22:46:18.880Z · LW · GW

But when you think about it, if you assume the centaur Firenze wasn't dead, the Imperius is probably not the best option anyway.

Comment by loup-vaillant on 2013 Less Wrong Census/Survey · 2013-11-30T23:19:49.844Z · LW · GW

I took the survey (answered nearly everything).

Comment by loup-vaillant on The dangers of zero and one · 2013-11-28T13:18:02.189Z · LW · GW

(7): indentation error. But I guess the interpreter will tell you i is used out of scope. That, or you would have gotten another catastrophic result on numbers below 10.

def is_prime(n):
    for i in range(2,n):
        if n%i == 0: return False
    return True

(Edit: okay, that was LessWrong screwing up leading spaces. We can cheat that with unbreakable spaces.)
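For illustration, here is a hypothetical mis-indented variant (a sketch; the original post's exact snippet may differ) that produces exactly that kind of catastrophic result:

```python
def is_prime_buggy(n):
    # The final return is one level too deep, inside the loop,
    # so it fires as soon as the first candidate divisor fails.
    for i in range(2, n):
        if n % i == 0:
            return False
        return True
```

With this indentation, `is_prime_buggy(9)` returns True (2 doesn't divide 9, so it returns early), and `is_prime_buggy(2)` returns None, since the loop body never runs.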

Comment by loup-vaillant on Probability, knowledge, and meta-probability · 2013-09-23T09:50:25.730Z · LW · GW

I don't like your use of the word "probability". Sometimes, you use it to describe subjective probabilities, but sometimes you use it to describe the frequency properties of putting a coin in a given box.

When you say, "The brown box has 45 holes open, so it has probability p=0.45 of returning two coins", you are really saying that, knowing I have the brown box in front of me and I put a coin in it, I would assign a 0.45 probability to that coin yielding 2 coins. And, as far as I know, the coin tosses are all independent: no amount of coin tossing would ever tell me anything about the next toss. Simply put, a box, along with the way we toss coins into it, has rather definite frequency properties.

Then you talk about "assigning probabilities to each possible probability between 0 and 1". What you really mean is assigning a probability distribution over the possible frequency properties.

I know it sounds pedantic, but I cringe every time someone talks about "probabilities" being some properties of a real object out there in the territory (like amplitudes in QM). Probability is in the mind. Using the word any other way is confusing.

Comment by loup-vaillant on Help us name a short primer on AI risk! · 2013-09-18T17:12:42.220Z · LW · GW

It just occurred to me that we may be able to avoid the word "intelligence" entirely in the title. I was thinking of Cory Doctorow's talk on the coming war on general computation, where he explains that unwanted behaviour on general-purpose computers is basically impossible to stop. So:

Current computers are fully general hardware. An AI would be fully general software. We could also talk about general purpose computers vs general purpose programs.

The idea is, many people already understand some risks associated with general-purpose computers (if only because of the various malware). Maybe we could use that to draw attention to the risks of general-purpose programs.

That may avoid drawing unwanted associations with the word "intelligence". Many people believe that machines cannot be intelligent "by definition". Many believe there is something "magic" between the laws of physics and the high-level functioning of a human nervous system. They would be hard-pressed to admit it outright, but it is at the root of a fundamental disbelief in the possibility of AI.

As for actual titles…

  • The Risks of General Purpose Software.
  • General Purpose Computers can do anything. General Purpose Programs will. (Sounds better as a subtitle, that one.)

(Small inconvenience: phrasing the title this way may require touching the content of the book itself.)

Comment by loup-vaillant on Help us name a short primer on AI risk! · 2013-09-18T16:41:29.069Z · LW · GW

Or, "Artificial intelligence as a risk to mankind". (Without the emphasis.)

Comment by loup-vaillant on Rationality Quotes September 2013 · 2013-09-04T23:06:52.342Z · LW · GW

Good luck finding one that doesn't also bias you into a corner.

Comment by loup-vaillant on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-09-02T22:22:32.719Z · LW · GW

Maybe we could explain it by magical risks and violence. I wouldn't be surprised if wizards killed each other more than muggles do. With old-fashioned manners may come old-fashioned violence. The last two wars (Grindelwald and Voldemort) were awfully close, and it looks like the next one is coming.

If all times and all countries are the same, with a major conflict every other generation, it could easily explain such a low population.

Comment by loup-vaillant on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-08-31T22:57:33.877Z · LW · GW

Chapter 78

Thus it had been with some trepidation that Mr. and Mrs. Davis had insisted on an audience with Deputy Headmistress McGonagall. It was hard to muster a proper sense of indignation when you were confronting the same dignified witch who, twelve years and four months earlier, had given both of you two weeks' detention after catching you in the act of conceiving Tracey.

Apparently, contraception isn't always used by 7th-year students. I count that as mild evidence that contraception, magical or otherwise, isn't widespread in the magical world. Methods of conception promotion are probably just as rare, though if they exist at all, Great Houses are likely to use them.

Comment by loup-vaillant on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-08-28T20:18:16.923Z · LW · GW

War. With children.

I fear the consequences if we don't solve this.

Edit: I'm serious:

This was actually intended as a dry run for a later, serious “Solve this or the story ends sadly” puzzle

Comment by loup-vaillant on Harry Potter and the Methods of Rationality discussion thread, part 23, chapter 94 · 2013-07-09T21:45:52.033Z · LW · GW

I don't see Hermione being revived any time soon, both for story reasons and because Harry is unlikely to unravel the secrets of soul magic in mere hours, even with a time loop at his disposal.

More likely, Harry has found a reliable way to suspend her, and that would be the "he has already succeeded" you speak of.

Comment by loup-vaillant on Progress on automated mathematical theorem proving? · 2013-07-05T20:52:10.524Z · LW · GW

The key part is that some of those formal verification processes involve automated proof generation. This is exactly what Jonah is talking about:

I don't know of any computer programs that have been able to prove theorems outside of the class "very routine and not requiring any ideas," without human assistance (and without being heavily specialized to an individual theorem).

Those who make (semi-)automated proofs for a living have a vested interest in making such tools as useful as possible. Among other things, this means as automated as possible, and as general as possible. They're not there yet, but they're definitely working on it.

Comment by loup-vaillant on Progress on automated mathematical theorem proving? · 2013-07-04T10:13:08.786Z · LW · GW

The Prover company works on the safety of train signalling software. Basically, they seek to prove that a given program is "safe" along a number of formal criteria. It involves translating the program into some (boolean-based) standard form, which is then analysed.

The formal criteria are chosen manually, but the proofs are found completely automatically.

Despite the sizeable length of the proofs, combinatorial explosion is generally avoided, because programs written by humans (and therefore their standard-form translations) tend to have shapes that make them amenable to massive cuts in the search tree.
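At toy scale, the approach might look like this sketch (the interlock and all names are invented, not Prover's actual formalism): translate a controller into boolean form, then exhaustively check a safety criterion over its whole state space.

```python
from itertools import product

# Hypothetical two-signal interlock: the safety criterion is that the
# two track signals are never green at the same time.
def controller(request_a, request_b):
    green_a = request_a and not request_b
    green_b = request_b and not request_a
    return green_a, green_b

def safe(ctrl):
    # Exhaustive enumeration over the boolean input space stands in for
    # the SAT-style search a real prover would perform on larger programs.
    return all(not (a and b)
               for ra, rb in product([False, True], repeat=2)
               for a, b in [ctrl(ra, rb)])

print(safe(controller))  # True
```

On a real program the state space is far too large to enumerate, which is where the structure-exploiting cuts mentioned above come in.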

It doesn't always work: first, the criteria are simple and bounded. Second, combinatorial explosion sometimes does occur, in which case human-devised tweaks are needed.

Oh, and it's all proprietary. Maybe there are some related academic papers, though.

Comment by loup-vaillant on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-06-30T09:21:55.388Z · LW · GW

I do not lie to my readers


I think the facts at least are as described. Hermione is certainly lying in a pool of blood, something significant did happen to her (Harry felt the magic), and Dumbledore definitely believes Hermione is dead.

If there is a time turner involved, it won't change those perceptions one bit. And I doubt Dumbledore would try to Mess With Time ever again (as mentioned in the Azkaban arc). Harry might, but he's out of his Time Turner Authorized Range. Even then, it looks like he's thinking longer term than that.

Comment by loup-vaillant on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-06-30T09:07:44.962Z · LW · GW

Recalling a video I have seen (I forgot the source), the actual damage wouldn't occur upon hypoxia, but upon re-oxygenation. Lack of oxygen at the cellular level does start a fatal chemical reaction, but the structure of the cells is largely preserved. But when you put oxygen back, everything blows up (or swells up, actually).

Harry may very well have killed Hermione with his oxygen shot. If he froze her before then, it might have worked, but after that… her information might be lost.

One obvious objection: Hermione was still conscious enough to say some last words, ruling out advanced brain de-oxygenation. That could be only for the drama, but still.

One obvious consequence: that magic feeling upon death might be linked to plain muggle information-theoretic death somehow. But then, we have horcruxes and Avada Kedavra… I'm quite confused by HPMOR's "laws of physics".

Comment by loup-vaillant on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? · 2013-06-28T18:00:39.085Z · LW · GW

Furthermore, a "continuous" function could very well contain a finite amount of information, provided its frequency range is limited. But then, it wouldn't be "actually" continuous.

I just didn't want to complicate things by mentioning Shannon.

Comment by loup-vaillant on Living in the shadow of superintelligence · 2013-06-25T16:23:55.525Z · LW · GW

I disagree with "not at all", to the extent that the Matrix probably has much less computing power than the universe it runs on. Plus, it could have exploitable bugs.

This is not a question worth asking for us mere mortals, but a wannabe super-intelligence should probably think about it for at least a nanosecond.

Comment by loup-vaillant on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? · 2013-06-25T16:19:19.528Z · LW · GW

Here's my guess:

  • "Continuous" is a reference to the wave function as described by current laws of physics.
  • Eliezer is an "infinite set atheist", which among other things rules out the possibility of an actually continuous fabric of the universe.

Comment by loup-vaillant on 240 questions for your utility function · 2013-06-25T15:40:11.492Z · LW · GW

By the way, why aren't posts written like comments, in Markdown format? Could we consider adding Markdown formatting as an option?

Comment by loup-vaillant on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? · 2013-06-20T10:37:23.215Z · LW · GW

I think I have left a loophole. In your example, Omega is analysing the agent by analysing its outputs on unrelated and, most of all, unspecified problems. I think the end result should only depend on the output of the agent on the problem at hand.

Here's a possibly real-life variation. Instead of simulating the agent, you throw a number of problems at it beforehand, without telling it they will be related to a future problem. Like, throw an exam at a human student (with a real stake at the end, such as grades). Then, later, you submit the student to the following problem:

Welcome to my dungeon. Sorry for the headache, but I figured you wouldn't have followed someone like me in a place like this. Anyway. I was studying Decision Theory, and wanted to perform an experiment. So, I will give you a choice:

Option 1: you die a most painful death. See those sharp, shimmering tools? Lots of fun.

Option 2: if I think you're not the kind of person who makes good life decisions, I'll let you go unharmed. Hopefully you will harm yourself later. On the other hand, if I think you are the kind of person who makes good life decisions, well, too bad for you: I'll let most of you go, but you'll have to give me your hand.

Option 2? Well, that doesn't surprise me, though it does disappoint me a little. I would have hoped, after 17 times already… well, no matter. So, do you make good decisions? Sorry, I'm afraid "no" isn't enough. Let's see… oh, you're applying for college, if I recall correctly. Yes, I did my homework. I'm studying, remember? So, let's see your SAT scores. Oh, impressive. That should explain why you never left home these past three weeks. Looks like you know how to trade off short-term well-being for long-term projects. Looks like a good life decision.

So. I'm not exactly omniscient, but this should be enough. I'll let you go. But first, I believe you'll have to put up with a little surgery job.

Sounds like something like that could "reasonably" happen in real life. But I don't think it's "fair" either, if only because being discriminated against for being capable of making good decisions is so unexpected.

Comment by loup-vaillant on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? · 2013-06-19T08:44:06.485Z · LW · GW

We have to determine what counts as "unfair". Newcomb's problem looks unfair because your decision seems to change the past. I have seen another Newcomb-like problem that was (I believe) genuinely unfair, because depending on their decision theory, the agents were not in the same epistemic state.

Here is what I think makes a "fair" problem. It's when

  1. the initial epistemic state of the agent is independent of its source code;
  2. given the decisions of the agent, the end result is independent of its source code;
  3. if there are intermediary steps, then given the decisions of the agent up to any given point, its epistemic state and any intermediate result accessible to the agent at that point are independent of its source code.

If we think of the agent as a program, I think we can equate "decision" with the agent's output. It's harder, however, to equate "epistemic state" with its input: recall Omega saying "Here are the two usual boxes. I have submitted this very problem in a simulation to TDT. If it one-boxed, box B has the million. If it two-boxed, box B is empty". So, if you're TDT, this problem is equivalent to the old Newcomb problem, where oneBox <=> $$$. But any other agent could two-box, and get the million and the bonus. (Also, "TDT" could be replaced by a source code listing that the agent would recognize as its own.)

Anyway, I believe there's a good chance a decision theory exists such that it gets the best results out of any "fair" problem. Though now that I think of it, condition 2 may be a sufficient criterion for "fairness", for the problem above violates it: if TDT two-boxes, it does not get the million. Well, except it does not two-box, so my counterfactual doesn't really mean anything…

Comment by loup-vaillant on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-16T21:37:19.386Z · LW · GW

I think it is possible to prove that a given boxing works, if it's sufficiently simple. Choosing the language isn't enough, but choosing the interpreter should be.

Take Brainfuck for instance: replace the dot ('.'), which prints a character, by two other statements: one that prints "yes" and exits, and one that prints "no" and exits. If the interpreter has no bug, a program can only:

  • Print "yes" and kill itself.
  • Print "no" and kill itself.
  • Do nothing until we kill it, or otherwise fail.

Assuming the AI doesn't control the external world by heating the host Intel processor in a smart way, we should be able to prove that we're otherwise safe.
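A minimal sketch of that interpreter in Python (names invented; a real box would also need a trusted runtime): the only output channels are the two halting statements, and a step budget implements "do nothing until we kill it".

```python
def run_boxed(prog, max_steps=100_000):
    """Brainfuck dialect where '.' is replaced by 'y' (answer "yes" and
    halt) and 'n' (answer "no" and halt). The only possible outcomes are
    "yes", "no", or None (fell off the end, or ran out of steps)."""
    tape, ptr, pc = [0] * 30_000, 0, 0
    stack, jump = [], {}
    for i, c in enumerate(prog):          # pre-match brackets
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    for _ in range(max_steps):
        if pc >= len(prog):
            return None                   # fell off the end: "otherwise fail"
        c = prog[pc]
        if c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>': ptr = (ptr + 1) % len(tape)
        elif c == '<': ptr = (ptr - 1) % len(tape)
        elif c == '[' and tape[ptr] == 0: pc = jump[pc]
        elif c == ']' and tape[ptr] != 0: pc = jump[pc]
        elif c == 'y': return "yes"
        elif c == 'n': return "no"
        pc += 1
    return None                           # step budget exhausted: we "kill" it
```

Note that '.' and ',' are simply absent from the dialect, so an untrusted program has no side channel besides its one-word answer: `run_boxed("y")` yields "yes", `run_boxed("++n")` yields "no", and a looping program like `"+[]"` just times out.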

Comment by loup-vaillant on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-08T08:33:42.147Z · LW · GW

It's the whole thread. I was not sure where to place my comment. The connection is, the network may not be the only source of "cheating". My solutions plug them all in one fell swoop.

Comment by loup-vaillant on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-07T19:57:18.525Z · LW · GW

Well, I just thought about it for 2 seconds. I tend to be a purist: if it were me, I would start from pure call-by-need λ-calculus, and limit the number of β-reductions instead of the number of seconds. Cooperation and defection would be represented by Church booleans. From there, I could extend the language (explicit bindings, fast arithmetic…), and provide a standard library, including some functions specific to this contest.

Or, I would start from the smallest possible subset of Scheme that can implement a meta-circular evaluator. It may be easier to examine an S-expression in Scheme than a λ-expression in λ-calculus.

Or, I would start from lambda calculus with de Bruijn indices, so we don't have to worry about α-conversions. In this case, I would provide a compiler from regular λ-calculus. (I suspect, however, that this one doesn't change a thing.)

Or, I would start from a Forth dialect (probably with an implicit return stack).

Or, I would start from BrainFuck (only half joking: the language is really dead simple, and fast interpreters already exist).

But it's not me, and I don't have the courage right now. If I ever implement the necessary tools (which I may: I'm studying programming languages in my spare time), then I will submit them here, and possibly run a contest myself.
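A toy sketch of the first variant (leftmost-outermost reduction rather than true call-by-need, and a naive substitution that assumes distinct bound-variable names, so it is only safe on hygienic terms like the ones below):

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a)
TRUE  = ('lam', 't', ('lam', 'f', ('var', 't')))   # Church "cooperate"
FALSE = ('lam', 't', ('lam', 'f', ('var', 'f')))   # Church "defect"

def subst(t, x, v):
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def step(t):
    """One leftmost-outermost beta step; returns (term, reduced?)."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':
            return subst(f[2], f[1], a), True
        f2, r = step(f)
        if r:
            return ('app', f2, a), True
        a2, r = step(a)
        return ('app', f2, a2), r
    if t[0] == 'lam':
        b, r = step(t[2])
        return ('lam', t[1], b), r
    return t, False

def normalize(t, fuel=1000):
    """Reduce until normal form or until the beta-reduction budget runs out."""
    for _ in range(fuel):
        t, reduced = step(t)
        if not reduced:
            return t
    return None  # budget exhausted: score the submission as "other"
```

A submission's Church boolean is decoded by applying it to two marker variables, so `TRUE` applied to C and D normalizes to C; a diverging term like (λx.x x)(λx.x x) burns through its β-budget and scores as "other".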

Comment by loup-vaillant on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-06T08:34:43.915Z · LW · GW

Okay, it's not. But I'm sure there's a way to circumvent the spirit of your rule while still abiding by the letter. What about network I/O, for instance? As in, download some code from some remote location, and execute that? Or even worse, run your code in the remote location, where you can enjoy superior computing power?

Comment by loup-vaillant on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-05T21:05:19.101Z · LW · GW

More generally, the set of legal programs doesn't seem clearly defined. If it were me, I would be tempted to only accept externally pure functions, and to precisely define which parts of the standard library are allowed. Then I would enforce this rule by modifying the global environment such that any disallowed behaviour would throw an exception, which would count as an "other" result.

But it's not me. So, what exactly will be allowed?
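A sketch of that enforcement in Python (the allowed list and all names are invented, and `exec` with a restricted `__builtins__` is famously escapable, so this illustrates the rule rather than a real security boundary):

```python
import builtins

ALLOWED = ('abs', 'len', 'min', 'max', 'range', 'sum')

def sandboxed_env():
    # Keep only a whitelist of builtins; stub out the dangerous ones
    # so any use raises instead of doing I/O.
    safe = {name: getattr(builtins, name) for name in ALLOWED}
    def forbid(name):
        def stub(*args, **kwargs):
            raise RuntimeError("disallowed: " + name)
        return stub
    for name in ('open', 'eval', 'exec', '__import__', 'input', 'print'):
        safe[name] = forbid(name)
    return {'__builtins__': safe}

def run_strategy(source, opponent_source):
    """Load a submission defining strategy(opponent_source) -> 'C' or 'D'.
    Any disallowed behaviour raises, which scores as 'other'."""
    env = sandboxed_env()
    try:
        exec(source, env)
        result = env['strategy'](opponent_source)
        return result if result in ('C', 'D') else 'other'
    except Exception:
        return 'other'
```

For example, a plain cooperate bot returns 'C', while a submission that calls `open` or tries `import os` hits a stub, raises, and is scored "other".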

Comment by loup-vaillant on Optimizing for attractiveness · 2013-06-02T10:17:23.646Z · LW · GW

Hmm, leaving everything and everyone behind, and a general feeling of uncertainty: what will life be like? Will I find a job? Will I enjoy my job (super-important)? How will this affect my relationship with my SO? Less critically, should I bring my cello, or should I buy another one? What about the rest of my stuff?

We're not talking about moving a couple hundred miles here. I've done that for a year, and I could see my family every three weekends, and my SO twice as often. Living in Toulouse, France, I could even push to England if I had a good opportunity. But to go to the US, I have to Cross the Ocean. If I leave this summer and find a job by September, I likely won't make a single trip back before the next summer.

Also, I don't think I value money all that much. I mainly care about the sense of security it provides. If I were guaranteed half of what I currently make to work at home on the computer science research that I want to do, I would take it.

So, if I were to move to the US, it couldn't be just about the money. The job matters. And I'd better get closer to the LW-MIRI-CFAR community. And even then, I'm still not sure. Indefinitely postponing such a big decision is so easy.

Comment by loup-vaillant on Optimizing for attractiveness · 2013-06-01T10:21:40.397Z · LW · GW

(Yep, I'm loup-vaillant on HN too)

Thank you, I'll think about it. Though for now, seriously considering moving to the US tends to trigger my Ugh shields. I'm quite scared.

Comment by loup-vaillant on Optimizing for attractiveness · 2013-06-01T10:11:37.538Z · LW · GW

Ah. I guess I stand corrected, then.

Comment by loup-vaillant on Optimizing for attractiveness · 2013-05-31T22:49:59.000Z · LW · GW

My guess is, they don't make so little:

First, many EU citizens tend to assume $1 is 1€ as a first approximation, while currently it's more like $1.3 for 1€. Cthulhoo may have made this approximation. Second, lower salaries may be compensated by a stronger welfare system (public unemployment insurance, public health insurance, public retirement plan…). This one is pretty big: in France, these cost over 40% of what your employer has to pay. Third, major cost centres such as housing may be cheaper (I wouldn't count on that one, though).

To take an example, I live in France, and here, entry-level programmers with an engineering degree make about 23k€ in net salary (often with a few benefits, and possibly more in the capital). That's about 38k€ that your employer has to pay. Convert that into US$, and we're talking about $49k.

From that amount, cut the US taxes that fund unemployment, health, and retirement. I know nothing about the US tax system, so I leave that to you. I just wanted to say that I expect the actual difference between European and US salaries to be much lower than what we'd expect from a cursory look at "gross salaries", which don't even mean the same thing across countries.
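The conversion above, spelled out (the 1.3 exchange rate is the approximation used in this thread):

```python
employer_cost_eur = 38_000  # total employer cost behind a 23k€ net salary
usd_per_eur = 1.3           # rough exchange rate assumed in this thread

employer_cost_usd = employer_cost_eur * usd_per_eur
print(round(employer_cost_usd))  # 49400, i.e. about $49k
```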

Now, for someone who isn't afraid of unemployment, and plans to postpone retirement through rejuvenation procedures that should be available a couple of decades from now (reaching either the intelligence explosion or escape velocity), my analysis goes out the window.

Comment by loup-vaillant on Earning to Give vs. Altruistic Career Choice Revisited · 2013-05-30T18:48:54.203Z · LW · GW

MIRI's stated goal is more meta:

The Machine Intelligence Research Institute exists to ensure that the creation of smarter-than-human intelligence benefits society.

They are well aware of the dangers of creating a uFAI, and you can be certain they will be really careful before they push a button that has the slightest chance of launching the ultimate ending (good or bad). Even then, they may very well decide that "being really careful" is not enough.

Are there other organizations attempting to develop AIs to control the world?

It probably doesn't matter, as any uFAI is likely to emerge by mistake:

Anthropomorphic ideas of a “robot rebellion,” in which AIs spontaneously develop primate-like resentments of low tribal status, are the stuff of science fiction. The more plausible danger stems not from malice, but from the fact that human survival requires scarce resources: resources for which AIs may have other uses.

Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal. For instance, AIs developed under evolutionary pressures would be selected for values that maximized reproductive fitness, and would prefer to allocate resources to reproduction rather than supporting humans.

Comment by loup-vaillant on Earning to Give vs. Altruistic Career Choice Revisited · 2013-05-29T12:48:24.395Z · LW · GW

If I may list some differences I perceive between AMF and MIRI:

  • AMF's impact is quite certain. MIRI's impact feels more like a long shot, or even a pipe dream.
  • AMF's impact is sizeable. MIRI's potential impact is astronomic.
  • AMF's impact is immediate. MIRI's impact is long term only.
  • AMF has photos of children. MIRI has science fiction.
  • In mainstream circles, donating to AMF gets you pats on the back, while donating to MIRI gets you funny looks.

Near mode thinking will most likely direct one to AMF. MIRI probably requires one to shut up and multiply. Which is probably why I'm currently giving a little money to Greenpeace, despite being increasingly certain that it's far, far from the best choice.

Comment by loup-vaillant on Education control? · 2013-05-22T22:58:19.567Z · LW · GW

They're going to escape.

Education fighting an old existential risk: kids out of the box.

Comment by loup-vaillant on Meetup : Paris Meetup: Sunday, May 26. · 2013-05-15T06:12:38.444Z · LW · GW

I'll be there too.

Comment by loup-vaillant on LW Women- Female privilege · 2013-05-06T01:51:08.161Z · LW · GW

Good point.

I can think of two possible workarounds: they can still have fun among themselves, or they can teach their partner whenever they engage in a long-term relationship.

Comment by loup-vaillant on LW Women- Female privilege · 2013-05-05T21:56:46.560Z · LW · GW

It does seem to have some effect on the performers' private life however. Here is a question from Matt Williams, answered by Courtney Taylor:

"You find it hard now, having sex with civilians¹?"

"Oh yeah, absolutely."

[1] From the rest of the interview, I gathered that "civilian" was a bit derogatory.

Just to say that doing porn may tend to raise one's expectations. Sure, they optimise for the viewer, but I'd be surprised if they didn't try to have fun along the way, just like actors in mainstream films. I'd be surprised to learn that their knowledge and experience doesn't make them very good partners, should they optimize for that.

Comment by loup-vaillant on Rationality Quotes May 2013 · 2013-05-03T23:35:59.297Z · LW · GW

Gasp, I definitely didn't read it that way. Observing the sky sounded like science, and the logical puzzles sounded like math. Plus, it was already useful at the time: it helped keep track of time, predict seasons…

Comment by loup-vaillant on What do professional philosophers believe, and why? · 2013-05-02T11:55:05.637Z · LW · GW

Okay, let's try and defeat Omega. The goal is to do better than Eliezer Yudkowsky, who seems trustworthy about doing what he publicly says all over the place. Omega will definitely predict that Eliezer will one-box, and Eliezer will get the million.

The only way to do better is to two-box while making Omega believe that we will one-box, so we can get the $1001000 with more than 99.9% certainty. And of course,

  1. Omega has access to our brain schematics
  2. We don't have access to Omega's schematics. (optional)
  3. Omega has way more processing power than we do.

Err, short of building an AI to beat the crap out of Omega, that looks pretty impossible. $1000 is not enough to make me do the impossible.

Comment by loup-vaillant on What do professional philosophers believe, and why? · 2013-05-02T11:35:32.073Z · LW · GW

Edit: this post is mostly a duplicate of this one

I would guess that those particular fields look more interesting when you make the wrong assumptions to begin with. I mean, it's much less interesting to talk about God when you accept there is none. Or to talk about metaphysics, when you accept that the answer will most likely come from physics. (I don't know about morality.)

Comment by loup-vaillant on Three more ways identity can be a curse · 2013-04-29T12:37:16.223Z · LW · GW

Nevertheless, an above-average post is still evidence for an above-average poster. It's also her first post. She might very well "get better" in the future, as she put it.

Sure, I wouldn't count on it, but we still have a good reason to look forward to reading her future posts.

Comment by loup-vaillant on Why AI may not foom · 2013-03-26T07:24:54.515Z · LW · GW

I agree with your first point, though it gets worse for us as hardware gets cheaper and cheaper.

I like your second point even more: it's actionable. We could work on the security of personal computers.

That last one is incorrect, however. The AI only has to access its object code in order to copy itself. That's something even current computer viruses can do. And we're back to boxing it.

Comment by loup-vaillant on Why AI may not foom · 2013-03-25T02:00:09.248Z · LW · GW

I think you're missing the part where the team of millions continues its self-copying until it eats up all available computing power. If there's any significant computing overhang, the AI could easily seize control of way more computing power than all the human brains put together.

Also, I think you underestimate the "highly coordinated" part. Any copy of the AI will likely share the exact same goals, and the exact same beliefs. Its instances will have common knowledge of this fact. This would create an unprecedented level of trust. (The only possible exception I can think of is twins. And even so…)

So, let's recap:

  • Thinks 100 times faster than a human, though no better.
  • Can copy itself over many times (the exact amount depends on computing power available).
  • The resulting team forms a nearly perfectly coordinated group.

Do you at least concede that this is potentially more dangerous than a whole country armed up with nukes? Would you rely on it being less dangerous than that?

Comment by loup-vaillant on Why AI may not foom · 2013-03-24T17:32:57.508Z · LW · GW

At first. If the "100 slaves" AI ever gets out of the box, you can multiply the initial number by the amount of hardware it can copy itself to. It can hack computers, earn (or steal) money, buy hardware…

And suddenly we're talking about a highly coordinated team of millions.

Comment by loup-vaillant on Why AI may not foom · 2013-03-24T13:56:29.529Z · LW · GW

If you were to speed up a chicken brain by a factor of 10,000 you wouldn't get a super-human intelligence.

Sure, but if we assume we manage to have a human-level AI, how powerful should we expect it to be if we speed that up by a factor of 10, 100, or more?

Personally, I'm pretty sure such a thing is still powerful enough to take over the world (assuming it is the only such AI), and in any case dangerous enough to lock us all into a future we really don't want.

At that point, I don't really care if it's "superhuman" or not.

Comment by loup-vaillant on You only need faith in two things · 2013-03-14T08:50:33.791Z · LW · GW

Nevertheless, the lack of exposure to such attractors is quite relevant: if there were any, you'd expect some scientist to have encountered one.

Comment by loup-vaillant on Decision Theory FAQ · 2013-03-03T00:58:10.763Z · LW · GW

Easy explanation for the Ellsberg Paradox: We humans treat the urn as if it were subject to two kinds of uncertainty.

  • The first kind is which ball I will actually draw. It feels "truly random".
  • The second kind is how many red (and blue) balls there actually are. This one is not truly random.

Somehow, we prefer to choose the "truly random" option. I think I can sense why: when it's "truly random", I know no potentially hostile agent messed with me. I mean, I could choose "red" in situation A, but then the organizers could have put in 60 blue balls just to mess with me!

Put simply, choosing "red" opens me up to external sentient influence, and therefore risks being outsmarted. This particular risk aversion sounds like a pretty sound heuristic.
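A worst-case sketch of that heuristic, using the 60-ball red-or-blue urn from this comment (the exact counts are an assumption):

```python
# Urn of 60 balls, each red or blue, in an unknown proportion.
# If the organizers fill the urn after guessing your bet:
def p_win(bet_red, n_red):
    n = 60
    return n_red / n if bet_red else (n - n_red) / n

# Worst case over compositions, for either color:
worst_if_red  = min(p_win(True, r) for r in range(61))   # 0.0
worst_if_blue = min(p_win(False, r) for r in range(61))  # 0.0

# A "truly random" 50/50 draw, by contrast, guarantees 0.5 no matter
# what the organizers do: that is what the heuristic is protecting.
```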

Comment by loup-vaillant on Philosophical Landmines · 2013-03-01T19:03:30.679Z · LW · GW

Explaining complexity through God suffers from various questions

Whose answers tend to just be "Poof Magic". While I do have a problem with "Poof Magic", I can't explain it away without quite deep scientific arguments. And "Poof Magic", while unsatisfactory to any properly curious mind, has no complexity problem.

Now that I think of it, I may have to qualify the argument I made above. I didn't know about Hume, so maybe the God Hypothesis wasn't so good even before Newton and Darwin after all. At least assuming the background knowledge available to the best thinkers of the time.

The laypeople, however, may not have had a choice but to believe in some God. I mean, I doubt there was some simple argument they could understand (and believe) at the time. Now, with the miracles of technology, I think it's much easier.

Comment by loup-vaillant on Why Bayes? A Wise Ruling · 2013-02-26T21:36:32.572Z · LW · GW

You tell me.