Comments

Comment by Crude_Dolorium on [moderator action] Eugine_Nier is now banned for mass downvote harassment · 2014-07-16T00:40:33.664Z · LW · GW

It's heartwarming to see off-the-cuff SQL that includes foreign key constraints.

Comment by Crude_Dolorium on [Open Thread] Stupid Questions (2014-02-17) · 2014-02-18T12:50:46.379Z · LW · GW

I've wondered about this too. I once tried to organize a round-robin tournament, and discovered that all the other players preferred single elimination despite its vulnerability to noise (see the sketch after this list) and its lack of a meaningful second place. In the ensuing argument, I learned that they do know about problems like this, but they don't care, for two reasons:

  1. They don't care much about accuracy. Tournaments ostensibly rank teams by quality, but they're used mostly as ritual contests: the audience wants to know who won, not who would most likely win.
  2. They don't like complexity or novelty. They're suspicious of any design they don't understand, because they're afraid it might be gamed, or might have perverse incentives (e.g. where losing a match helps you win the tournament), and because they want everyone, even the dumb jocks, to understand the rules.
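To make the noise point concrete, here's a minimal simulation sketch in Haskell (it uses `randomRIO` from the `random` package; the field size, trial count, and the crude "stronger player wins with probability p" model are illustrative assumptions, not a claim about any real tournament):

```haskell
import Data.List (maximumBy, tails)
import Data.Ord (comparing)
import System.Random (randomRIO)  -- from the `random` package

-- Player 0 has the highest true skill, player 1 the next, and so on.
type Player = Int

-- Crude noise model: the stronger player wins, but only with probability p.
playMatch :: Double -> Player -> Player -> IO Player
playMatch p a b = do
  r <- randomRIO (0, 1)
  let (strong, weak) = if a < b then (a, b) else (b, a)
  return (if r < p then strong else weak)

-- Single elimination: pair off adjacent players, winners advance.
-- (Assumes the field size is a power of two, so there are no byes.)
singleElim :: Double -> [Player] -> IO Player
singleElim _ [w] = return w
singleElim p ps  = mapM (uncurry (playMatch p)) (pairs ps) >>= singleElim p
  where pairs (a:b:rest) = (a, b) : pairs rest
        pairs _          = []

-- Round robin: everyone plays everyone once; most wins takes the
-- title (ties broken arbitrarily).
roundRobin :: Double -> [Player] -> IO Player
roundRobin p ps = do
  ws <- sequence [playMatch p a b | (a:rest) <- tails ps, b <- rest]
  let wins x = length (filter (== x) ws)
  return (maximumBy (comparing wins) ps)

main :: IO ()
main = do
  let trials  = 2000
      n       = 8
      p       = 0.7
      players = [0 .. n - 1]
      rate format = do
        winners <- sequence (replicate trials (format p players))
        return (fromIntegral (length (filter (== 0) winners))
                / fromIntegral trials :: Double)
  se <- rate singleElim
  rr <- rate roundRobin
  putStrLn ("P(best player wins): single elim " ++ show se
            ++ ", round robin " ++ show rr)
```

In this model the favorite has to survive log₂ n consecutive wins under single elimination, so one unlucky match ends its run, while round robin averages over n − 1 matches; the round-robin winner therefore tracks true skill more closely.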
Comment by Crude_Dolorium on 2013 Less Wrong Census/Survey · 2013-11-27T20:07:02.953Z · LW · GW

Apparently I don't participate in the community. I only comment once a year, to report that I took the survey.

Comment by Crude_Dolorium on Programming Thread · 2012-12-07T01:41:30.516Z · LW · GW

If you're studying a language to learn from it, then the choice of language depends on what you want it to teach you.

Erlang and Haskell are similar languages, and mostly teach the same things: purely applicative (“functional”) programming and higher-order (also called “functional”) programming. Erlang also teaches message-passing concurrency and live patching; Haskell also teaches laziness and modern static typing. I've found Haskell more educational than Erlang, possibly because more of the things it teaches were new to me, possibly because I've done more with it, and possibly because it has more to teach. (But it is more complex.) Haskell is also more popular and has more libraries. IIRC you're a mathematician or at least math-inclined, so you'd be comfortable with Haskell's very mathematical culture.
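For anyone who hasn't seen them, here's a minimal sketch of two of those ideas in Haskell (the names `twice` and `fibs` are just illustrative):

```haskell
-- Higher-order programming: functions are ordinary values,
-- so `twice` takes a function and builds a new one from it.
twice :: (a -> a) -> a -> a
twice f = f . f

-- Laziness: `fibs` is an infinite list, but only the prefix
-- we actually demand is ever computed.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = do
  print (twice (* 3) 7)  -- 63
  print (take 10 fibs)   -- [0,1,1,2,3,5,8,13,21,34]
```

Note that the definition of `fibs` refers to itself; it works only because Haskell evaluates lazily, which is exactly the kind of thing a strict language like Erlang won't teach you.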

Of the “employable languages”:

  • C teaches low-level data representations and efficiency concerns, and how to deal with unsafe tools. These are all things a programmer needs to know, and C itself is very widely used, so it's almost essential for a professional programmer to learn, but not for someone who only writes programs as an aid to other things. (Your blog suggests you already know some C.)
  • C++ is very complex, and most of what it teaches is C++-specific and not very enlightening, so I don't recommend studying it unless you need to use it.
  • Java is simple (except for its libraries) and not very enlightening. If you know C and Haskell, you know 3/4 of the important parts.
  • I don't know MATLAB. This is the second time I've heard it described as practically useful, so I suppose I should look into it.
Comment by Crude_Dolorium on Cryonic resurrection - an ethical hypothetical · 2012-11-29T00:16:26.855Z · LW · GW

Thanks. You made the world a little clearer.

Comment by Crude_Dolorium on Cryonic resurrection - an ethical hypothetical · 2012-11-27T20:50:02.992Z · LW · GW

Point of pedantry: could you say “cryonic” instead of “cryogenic”?

Comment by Crude_Dolorium on Cryonic resurrection - an ethical hypothetical · 2012-11-27T18:30:24.594Z · LW · GW

WRT fidelity of reproduction, yes – but the scale is described in terms of defects that we'd object to regardless of whether they were faithful to the original mind. Most people would prefer to be resurrected with higher intelligence and better memory than they originally had, for instance.

It might be better to describe (edit: as Tenoke already did) the imperfect resurrection as causing not impairment but change: the restored mind is fully functional, but some information is lost and must be replaced with inaccurate reconstructions. The resurrected patient is not quite the same person as before; everything that made them who they are – their personality, their tastes and inclinations, their memories, their allegiances and cares and loves – is different. How inaccurate can a resurrection be and still be worthwhile? How long would you wait (missing out on centuries of life!) for better accuracy?

(This is reminiscent of the scenario where a person is reconstructed from their past behavior instead of their brain. The result might resemble the original, but it's unlikely to be very faithful; in particular, secrets they never revealed would be almost impossible to recover, and some such secrets are important.)

Comment by Crude_Dolorium on What does the world look like, the day before FAI efforts succeed? · 2012-11-21T02:51:23.879Z · LW · GW

This list is focused on scenarios where FAI succeeds by creating an AI that explodes and takes over the world. What about scenarios where FAI succeeds by creating an AI that provably doesn't take over the world? This isn't a climactic ending (although it may be a big step toward one), but it's still a success for FAI, since it averts a UFAI catastrophe.

(Is there a name for the strategy of making an oracle AI safe by making it not want to take over the world? Perhaps 'Hermit AI' or 'Anchorite AI', because it doesn't want to leave its box?)

This scenario deserves more attention than it has been getting, because it doesn't depend on solving all the problems of FAI in the right order. Unlike Nanny AI, which takes over the world but uses its powers only for certain purposes, Anchorite AI might be a much easier problem than full-fledged FAI, so it might be developed earlier.

In the form of the OP:

  • Fantastic: FAI research proceeds much faster than AI research, so by the time we can make a superhuman AI, we already know how to make it Friendly (and we know what we really want that to mean).
  • Pretty good: Superhuman AI arrives before we learn how to make it Friendly, but we do learn how to make an Anchorite AI that definitely won't take over the world. The first superhuman AIs use this architecture, and we use them to solve the harder problems of FAI before anyone sets off an exploding UFAI.
  • Sufficiently good: The problems of Friendliness aren't solved in time, or the solutions don't apply to practical architectures, or the creators of the first superhuman AIs don't use them, so the AIs have only unreliable safeguards. They're given cheap, attainable goals; the creators have tools to read the AIs' minds to ensure they're not trying anything naughty, and killswitches to stop them; they have an aversion to increasing their intelligence beyond a certain point, and to whatever other failure modes the creators anticipate; they're given little or no network connectivity; they're kept ignorant of facts more relevant to exploding than to their assigned tasks; they require special hardware, so it's harder for them to explode; and they're otherwise designed to be safer if not actually safe. Fortunately they don't encounter any really dangerous failure modes before they're replaced with descendants that really are safe.
Comment by Crude_Dolorium on 2012 Less Wrong Census/Survey · 2012-11-05T21:32:50.917Z · LW · GW

I took the survey, and the extra credit, and the pretext to delurk.