Writing reviews instead of collecting 2011-02-17T23:27:02.898Z
Coding Rationally - Test Driven Development 2010-10-01T15:20:47.873Z


Comment by DSimon on Marketing rationalism · 2016-02-10T17:44:28.088Z · LW · GW

Taboo "faith", what do you mean specifically by that term?

Comment by DSimon on Efficient Charity: Do Unto Others... · 2016-02-01T00:51:46.108Z · LW · GW

The lawyer wants both warm fuzzies and charitrons, but has conflated the two, and will probably get buzzkilled (and lose out on both measures) if the distinction is made clear. The best outcome is one where the lawyer gets to maximize both, and that happens at the end of a long road that begins with introspection about what warm fuzzies ought to mean.

Comment by DSimon on Calibration Test with database of 150,000+ questions · 2015-04-07T13:33:11.934Z · LW · GW

It would probably be best to just remove all questions that contain certain key phrases like "this image" or "seen here". You'll get a few false positives but with such a big database that's no great loss.
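The suggested filter can be sketched in a few lines; the phrase list and sample questions below are hypothetical stand-ins for whatever the real database contains:

```python
# Drop questions that refer to a picture the player can't see.
# The blocked phrases and sample questions are invented for illustration.
BLOCKED_PHRASES = ("this image", "seen here")

def is_usable(question_text):
    """True if the question makes sense without an accompanying image."""
    lowered = question_text.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

questions = [
    "What is the capital of France?",
    "Name the painter of this image.",
    "The animal seen here lives on which continent?",
]
usable = [q for q in questions if is_usable(q)]
```

A plain substring check like this will occasionally reject a fine question, but as noted, with 150,000+ questions a few false positives are a cheap price.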

Comment by DSimon on Boring Advice Repository · 2014-09-03T13:48:19.024Z · LW · GW

Seconded on that video, it's cheesy but very straightforward and informative.

Comment by DSimon on The rational way to name rivers · 2014-08-22T13:41:20.697Z · LW · GW

.ie zo'oru'e uinai [Lojban: "Agreed, half-jokingly, alas."]

Comment by DSimon on The rational way to name rivers · 2014-08-07T17:29:33.453Z · LW · GW

.i la kristyn casnu lo lojbo tanru noi cmima [Lojban: "Kristyn is discussing a Lojban tanru which is a member ..."]

Comment by DSimon on The Power of Noise · 2014-06-17T14:41:40.920Z · LW · GW

While an interesting idea, I believe most people just call this "gambling".

I'm not sure what you're driving at here. A gambling system where everybody has a net expected gain is still a good use of randomness.

Comment by DSimon on The Power of Noise · 2014-06-17T14:39:43.265Z · LW · GW

A human running quicksort with certain expectations about its performance might require a particular distribution, but that's not a characteristic of software.

I think this may be a distinction without a difference; modularity can also be defined as human expectations about software, namely that the software will be relatively easy to hook into a larger system.
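For readers unfamiliar with the quicksort example being discussed: choosing the pivot at random is the standard way to make the expected running time O(n log n) on every input ordering, rather than only on some assumed distribution of inputs. A minimal sketch:

```python
import random

def quicksort(xs):
    """Quicksort with a uniformly random pivot.

    Because the randomness is internal to the algorithm, the expected
    O(n log n) comparison count holds for any input ordering,
    including already-sorted lists (a worst case for naive pivoting).
    """
    if len(xs) <= 1:
        return list(xs)
    pivot = random.choice(xs)
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```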

Comment by DSimon on AI risk, new executive summary · 2014-04-21T20:12:58.864Z · LW · GW

Try wdiff

Comment by DSimon on Rationality Quotes February 2014 · 2014-02-07T16:07:41.084Z · LW · GW

That might be a distinction without a difference; my preferences come partly from my instincts.

Comment by DSimon on Building Phenomenological Bridges · 2013-12-26T21:35:07.310Z · LW · GW

Here's a hacky solution. I suspect that it is actually not even a valid solution since I'm not very familiar with the subject matter, but I'm interested in finding out why.

The relationship between one's map and the territory is much easier to explain from the outside than from the inside. Hypotheses about the maps of other entities can be stated entirely as hypotheses about the territory, provided they make predictions based on that entity's physical responses.

Therefore: can't we sidestep the problem by having the AI consider its future map state as a step in the middle of its hypothetical explanation of how some other AI in the territory would react to a given territory state? The hacky part then is to just hard-wire the AI to treat any such hypothesis as potentially being about itself, to be confirmed or disconfirmed by reflecting on its own output (perhaps via some kind of loopback).

AIUI this should allow the AI to consider any hypothesis about its own operation without requiring that it be able to deeply reflect on its own map as part of the territory, which seems to be the source of the trouble.

Comment by DSimon on Rationality Quotes November 2013 · 2013-11-11T17:17:50.367Z · LW · GW

The next best thing to have after a reliable ally is a predictable enemy.

-- Sam Starfall, FreeFall #1516

Comment by DSimon on The best 15 words · 2013-10-09T14:25:13.473Z · LW · GW

The evaluator, which determines the meaning of expressions in a program, is just another program.

-- Structure and Interpretation of Computer Programs
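The quote can be made concrete with a toy example; the tiny prefix language and its two operators below are invented for illustration:

```python
# The evaluator that gives meaning to this mini-language is itself
# just another (very short) program.
OPS = {"+": lambda a, b: a + b,
       "*": lambda a, b: a * b}

def evaluate(expr):
    """Evaluate nested tuples like ('+', 1, ('*', 2, 3))."""
    if isinstance(expr, tuple):
        op, left, right = expr
        return OPS[op](evaluate(left), evaluate(right))
    return expr  # numbers evaluate to themselves
```

Here `evaluate(("+", 1, ("*", 2, 3)))` returns 7; extending the language means editing `evaluate`, which is just more programming.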

Comment by DSimon on Welcome to Less Wrong! (5th thread, March 2013) · 2013-10-07T15:11:10.837Z · LW · GW

I've been trying very hard to read the paper at that link for a while now, but honestly I can't figure it out. I can't even find anything content-wise to criticize because I don't understand what you're trying to claim in the first place. Something about the distinction between map and territory? But what the heck does that have to do with ethics and economics? And why the (seeming?) presumption of Christianity? And what does any of that have to do with this graph-making software you're trying to sell?

It would really help me if you could do the following:

  1. Read this short story:
  2. Please explain, using language no more complicated or technical than that used in the story, whether the idea of "truth" that the story proposes lines up with your philosophy or not, and why or why not.
Comment by DSimon on The best 15 words · 2013-10-07T14:47:53.564Z · LW · GW

Good point, probably the title should be "What is a good puzzle?" then.

Comment by DSimon on The best 15 words · 2013-10-03T16:22:17.251Z · LW · GW

That's interesting! I've had very different experiences:

When I'm trying to solve a puzzle and learn that it had no good answer (i.e. was just nonsense, not even rising to the level of trick question), it's very frustrating. It retroactively makes me unhappy about having spent all that time on it, even though I was enjoying myself at the time.

Comment by DSimon on The best 15 words · 2013-10-03T13:45:46.197Z · LW · GW

Scott Kim, What is a Puzzle?

  1. A puzzle is fun,
  2. and it has a right answer.

Comment by DSimon on The genie knows, but doesn't care · 2013-09-10T17:53:39.664Z · LW · GW
  1. Why does the hard takeoff point have to be after the point at which an AI is as good as a typical human at understanding semantic subtlety? In order to do a hard takeoff, the AI needs to be good at a very different class of tasks than those required for understanding humans that well.

  2. So let's suppose that the AI is as good as a human at understanding the implications of natural-language requests. Would you trust a human not to screw up a goal like "make humans happy" if they were given effective omnipotence? The human would probably do about as well as people in the past have at imagining utopias: really badly.

Comment by DSimon on The Up-Goer Five Game: Explaining hard ideas with simple words · 2013-09-10T14:51:00.381Z · LW · GW

So what is Mr. Turing's computer like? It has these parts:

  1. The long piece of paper. The paper has lines on it like the kind of paper you use in numbers class at school; the lines mark the paper up into small parts, and each part has only enough room for one number. Usually the paper starts out with some numbers already on it for the computer to work with.
  2. The head, which reads from and writes numbers onto the paper. It can only use the space on the paper that is exactly under it; if it wants to read from or write on a different place on the paper, the whole head has to move up or down to that new place first. Also, it can only move one space at a time.
  3. The memory. Our computers today have lots of memory, but Mr. Turing's computer has only enough memory for one thing at a time. The thing being remembered is the "state" of the computer, like a "state of mind".
  4. The table, which is a plan that tells the computer what to do when it is in each state. There are only so many different states that the computer might be in, and we have to put them all in the table before we run the computer, along with the next steps the computer should take when it reads different numbers in each state.

Looking closer, each line in the table has five parts, which are:

  • If Our State Is this
  • And The Number Under Head Is this
  • Then Our Next State Will Be this (or maybe the computer just stops here)
  • And The Head Should write this
  • And Then The Head Should move this way

Here's a simple table:

Happy   1   Happy   1   Right
Happy   2   Happy   1   Right
Happy   3   Sad     3   Right
Sad     1   Sad     2   Right
Sad     2   Sad     2   Right
Sad     3   Stop

Okay, so let's say that we have one of Mr. Turing's computers built with that table. It starts out in the Happy state, and its head is on the first number of a paper like this:

1 2 1 1 2 1 3 1 2 1 2 2 1 1 2 3

What will the paper look like after the computer is done? Try pretending you are the computer and see what you do! The answer is at the end.

So you can see now that the table is the plan for what the computer should do. But we still have not fixed Mr. Babbage's problem! To make the computer do different things, we have to open it up and change the table. Since the "table" in any real computer will be made of very many parts put together very carefully, this is not a good way to do it!

So here is the amazing part that surprised everyone: you can make a great table that can act like any other table if you give it the right numbers on the paper. Some of the numbers on the paper tell the computer about a table for adding, and the rest of the numbers are to be added. The person who made the great table did not even have to know anything about adding, as long as the person who wrote the first half of the paper did.

Our computers today have tables like this great table, and so almost everything fun or important that they do is given to them long after they are built, and it is easy to change what they do.

By the way, here is how the paper from before will look after a computer with our simple table is done with it:

1 1 1 1 1 1 3 2 2 2 2 2 2 2 2 3
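For readers who would rather not trace the table by hand, the same computer can be written as a short program; the state names and table entries below are copied directly from the comment above:

```python
# Each table entry maps (state, number under head) to
# (next state, number to write, head movement), or None for "stop".
TABLE = {
    ("Happy", 1): ("Happy", 1, +1),
    ("Happy", 2): ("Happy", 1, +1),
    ("Happy", 3): ("Sad",   3, +1),
    ("Sad",   1): ("Sad",   2, +1),
    ("Sad",   2): ("Sad",   2, +1),
    ("Sad",   3): None,
}

def run(tape, state="Happy"):
    """Run the table on a list of numbers, returning the final tape."""
    head = 0
    while head < len(tape):
        step = TABLE[(state, tape[head])]
        if step is None:
            break  # the computer just stops here
        state, tape[head], move = step
        head += move
    return tape

tape = run([1, 2, 1, 1, 2, 1, 3, 1, 2, 1, 2, 2, 1, 1, 2, 3])
# tape is now [1, 1, 1, 1, 1, 1, 3, 2, 2, 2, 2, 2, 2, 2, 2, 3]
```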
Comment by DSimon on The Up-Goer Five Game: Explaining hard ideas with simple words · 2013-09-06T14:27:21.871Z · LW · GW

Mr. Turing's Computer

Computers in the past could only do one kind of thing at a time. One computer could add some numbers together, but nothing else. Another could find the smallest of some numbers, but nothing else. You could give them different numbers to work with, but the computer would always do the same kind of thing with them.

To make the computer do something else, you had to open it up and put all its pieces back in a different way. This was very hard and slow!

So a man named Mr. Babbage thought: what if some of the numbers you gave the computer were what told it what to do? That way you could have just one computer, and you could quickly make it be a number-adding computer, or a smallest-number-finding computer, or any kind of computer you wanted, just by giving it different numbers. But although Mr. Babbage and his friend Ms. Lovelace tried very hard to make a computer like that, they could not do it.

But later a man named Mr. Turing thought up a way to make that computer. He imagined a long piece of paper with numbers written on it, and imagined a computer moving left and right along that paper and reading the numbers on it, and sometimes changing the numbers. This computer could only see one number on the paper at a time, and also only remember one thing at a time, but that was enough for the computer to know what to do next. Everyone was amazed that such a simple computer could do anything that any other computer then could do; all you had to do was put the right numbers on the paper first, and then the computer could do something different! Mr. Turing's idea was enough to let people build computers that finally acted like Mr. Babbage's and Ms. Lovelace's dream computer.

Even though Mr. Turing's computer sounds way too simple when you think about our computers today, our computers can't do anything that Mr. Turing's imagined computer can't. Our computers can look at many many numbers and remember many many things at the same time, but this only makes them faster than Mr. Turing's computer, not actually different in any important way. (Though of course being fast is very important if you want to have any fun or do any real work on a computer!)

Comment by DSimon on The genie knows, but doesn't care · 2013-09-05T14:57:05.170Z · LW · GW

There is no reason to assume that an AI with goals that are hostile to us, despite our intentions, is stupid.

Humans often use birth control to have sex without procreating. If evolution were a more effective design algorithm it would never have allowed such a thing.

The fact that we have different goals from the system that designed us does not imply that we are stupid or incoherent.

Comment by DSimon on The genie knows, but doesn't care · 2013-09-04T12:53:34.938Z · LW · GW

Why can't it weight actions based on what we as a society want/like/approve/consent/condone?

Human society would not do a good job being directly in charge of a naive omnipotent genie. Insert your own nightmare scenario examples here, there are plenty to choose from.

What I'm describing isn't really a utility function, it's more like a policy, or policy function. Its policy would be volatile, or at least, more volatile than the common understanding LW has of a set-in-stone utility function.

What would be in charge of changing the policy?

Comment by DSimon on Reality is weirdly normal · 2013-08-29T13:04:05.906Z · LW · GW

See also:

Comment by DSimon on Humans are utility monsters · 2013-08-20T20:45:15.957Z · LW · GW

I may not actually want to pay $1 per squirrel, but if I still want to want to, then that's as significant a part of my ethics as my desire to avoid being a wire-head, even though once I tried it I would almost certainly never want to stop.

Comment by DSimon on Fake Explanations · 2013-06-19T13:49:24.484Z · LW · GW

How about "I don't know, but maybe it has something to do with X?"

Comment by DSimon on Changing Systems is Different than Running Controlled Experiments - Don’t Choose How to Run Your Country That Way! · 2013-06-14T14:14:46.343Z · LW · GW

I agree that this is a failure, though I do not think the problem is with the definition of privilege itself. As a parallel example: Social Darwinism (in some forms) assigns moral value to the utility function of evolution, and this is a pretty silly thing to do, but it doesn't reduce the explanatory usefulness of evolution.

Comment by DSimon on Changing Systems is Different than Running Controlled Experiments - Don’t Choose How to Run Your Country That Way! · 2013-06-13T21:20:35.294Z · LW · GW

Sure. Here's the most-viewed question on SO:

If you click the score on the left, it splits into green and red, showing up and down votes respectively.

Interestingly, there are very few down-votes for such a popular question! But then again, it's an awfully interesting question, and in SO it costs you one karma point to downvote someone else.

Comment by DSimon on Changing Systems is Different than Running Controlled Experiments - Don’t Choose How to Run Your Country That Way! · 2013-06-13T18:36:52.512Z · LW · GW

Of course, one needs a definition of "potentially" crafted specifically for the purpose of this specific claim.

Yes, good point: perhaps "socially permitted to be" is better than "potentially".

I agree that the parts of culture teaching (anyone) that rape is a socially acceptable action should be removed.

To be clear, the assertion is that some rape is taught to be socially acceptable. Violent rape and rape using illegal drugs are right out; we are talking about cases closer to the edge than the center, but which are still significantly harmful.

For example, it's part of the standard cultural romantic script that women put up a token resistance to advances, which men then overcome by being insistent and stubborn. This is social acceptance of rape to the degree that it instructs men to ignore non-consent unless it's sufficiently emphasized, or to put it another way, to the degree that it makes it more difficult for women who are non-confrontational to effectively deny consent.

From the simplistic "women good, men bad", we have progressed to a more nuanced perception of society "women good, men bad, but rich white women also a little bad, etc.".

I think this is also a strawman, at least of feminism as I've interacted with/participated in online. Privilege is an epistemological failure, not an ethical failure. To be privileged is not to be a bad person, it's to have incorrect or biased information-gathering skills regarding the experiences of various social groups compared to one's own.

I am not aware of mainstream feminists saying that [islam grants males rapists a safety bonus against consequences] loudly.

This isn't quite an isomorphic case: male privilege helping males abuse non-males isn't parallel to Islamic privilege helping Muslims abuse Muslims. However, if you're looking for general recognition among online feminists that Islamic countries have a lot of problems with gender inequality stemming from religious sources, then I'm very surprised to hear you say that.

And I think female rapists have it even easier in our society. Don't they?


According to this model, it would be acceptable to speak about "male privilege" or "rich privilege", and illustrate them with examples of rapists, but speaking about "female privilege" or "muslim privilege" and illustrating them with examples of rapists, is not acceptable, because it goes against the official black-to-white gradient.

This is a very good point, I agree. I have heard feminists address this by attempting to coin new terms, but I don't think it's working very well.

Comment by DSimon on Procedural Knowledge Gaps · 2013-06-13T14:23:25.913Z · LW · GW

As one data-point: I am a straight male, and gender is more important to me than genitalia.

Comment by DSimon on Changing Systems is Different than Running Controlled Experiments - Don’t Choose How to Run Your Country That Way! · 2013-06-13T14:20:07.374Z · LW · GW

Seconded. StackOverflow shows this information, and it's frequently interesting.

Comment by DSimon on Changing Systems is Different than Running Controlled Experiments - Don’t Choose How to Run Your Country That Way! · 2013-06-13T14:12:59.882Z · LW · GW

Things are the way they are for reasons, not magic.

Who is claiming magical or otherwise non-sensical causes?

Comment by DSimon on Changing Systems is Different than Running Controlled Experiments - Don’t Choose How to Run Your Country That Way! · 2013-06-13T14:10:33.442Z · LW · GW

Could the person who voted down the parent comment please explain their reasoning? I am genuinely curious.

Comment by DSimon on Changing Systems is Different than Running Controlled Experiments - Don’t Choose How to Run Your Country That Way! · 2013-06-13T14:04:50.338Z · LW · GW

From a typical online discussion with a feminist, I get an idea that every man is a rapist, and that men constructed the whole society to help each other get away with their crimes.

This strikes me as being a strawman, or as an indication that the feminists you have been talking to are either poor communicators or make very different statements than I am used to from feminist discussions online. (To be clear: Both of these are intended as serious possibilities, not as snark. Or as they say in Lojban: zo'onai )

Discussing each part individually:

[...] every man is a rapist [...]

I think this is denotationally wrong. The assertion is not that all men are rapists, but that all men are potentially rapists. This is because men tend to learn, culturally, a set of socially acceptable actions that intersects with the set of rape actions. That does not mean that every man's actions actually cross into the latter set.

[...] men constructed the whole society to help each other get away with their crimes [...]

This language, e.g. the phrase "constructed [...] to help each other", implies a deliberate act of planned societal design. That is not an assertion I tend to hear from feminists; rather, they say that male privilege does make it easier for rapists to escape consequences, but they do not claim an intentional or conspiratorial source for that privilege.

Comment by DSimon on Changing Systems is Different than Running Controlled Experiments - Don’t Choose How to Run Your Country That Way! · 2013-06-10T20:20:17.941Z · LW · GW

I like your examples, and recognize the problem you point out, but I don't agree with your conclusion.

The problem with counter-arguments of the form "Well, if we changed this one variable of a social system to a very different value, X would break!" is that variables like that usually change slowly, with only a small number of people fully and quickly adopting any change, and the rest moving along with the gradually shifting Overton window.

Additionally, having a proposed solution that involves changing a large number of things should probably set off warning alarms in your head: such solutions are more difficult to implement and have a greater number of working parts.

Comment by DSimon on Being Foreign and Being Sane · 2013-05-28T19:14:48.235Z · LW · GW

Available evidence seems to point to the contrary, unless you are using a quite high value for "sufficiently", higher than the one used by fowlertm in the quoted phrase.

Comment by DSimon on Privileging the Question · 2013-05-28T19:10:30.664Z · LW · GW

Orthogonality has to claim that the typical, statistically common kind of agent could have arbitrary goals

I'm not sure what you mean by "statistically common" here. Do you mean a randomly picked agent out of the set of all possible agents?

Comment by DSimon on Post ridiculous munchkin ideas! · 2013-05-22T19:35:19.733Z · LW · GW

But it requires active, exclusive use of time to go to a library, loan out a book, and bring it back (and additional time to return it), whereas I can do whatever while the book is en route.

Comment by DSimon on No, Really, I've Deceived Myself · 2013-05-22T18:43:35.741Z · LW · GW

Most atheists alieve in God and trust him to make the future turn out all right (ie they expect the future to magically be ok even if no one deliberately makes it so).

The statement in parentheses seems to contradict the one outside. Are you over-applying the correlation between magical thinking and theism?

Comment by DSimon on Rationality Quotes May 2012 · 2013-05-21T20:45:51.613Z · LW · GW

Even if you don't know which port you're going to, a wind that blows you to some port is more favorable than a wind that blows you out towards the middle of the ocean.

Comment by DSimon on The flawed Turing test: language, understanding, and partial p-zombies · 2013-05-20T12:13:00.525Z · LW · GW

I'm not really sure what you're driving at here. We don't have any software even close to being able to pass the TT right now; at the moment, using relatively easy subsets of the TT is the most useful thing to do. That doesn't mean that anyone expects that passing such a subset counts as passing the general TT.

Comment by DSimon on The flawed Turing test: language, understanding, and partial p-zombies · 2013-05-17T20:41:10.221Z · LW · GW

But you can keep on adding specifics to a subject until you arrive at something novel. I don't think it would even be that hard: just Google the key phrases of whatever you're about to say, and if you get back results that could be smooshed into a coherent answer, then you need to keep changing up or complicating.

Comment by DSimon on How to Build a Community · 2013-05-17T13:21:01.742Z · LW · GW

I would want them to alert hotel security and/or call the police.

Comment by DSimon on How to Build a Community · 2013-05-17T13:20:14.538Z · LW · GW

He needs to have a second gun ready so that he can get as many shots off as possible before having to reload.

He isn't assembling the gun out of a backpack, but from a backpack: specifically, from gun parts which are inside the backpack.

Comment by DSimon on Welcome to Less Wrong! (5th thread, March 2013) · 2013-05-16T18:06:05.062Z · LW · GW

Hello, Lumifer! Welcome to smart-weird land. We have snacks.

So you say you have no burning questions, but here's one for you: as a new commenter, what are your expectations about how you'll be interacting with others on the site? It might be interesting to note those now, so you can compare later.

Comment by DSimon on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-16T17:49:36.193Z · LW · GW

So I may as well discount all probability lines in which the evidence I'm seeing isn't a valid representation of an underlying reality.

But that would destroy your ability to deal with optical illusions and misdirection.

Comment by DSimon on Welcome to Less Wrong! (5th thread, March 2013) · 2013-05-16T16:18:34.506Z · LW · GW

Sounds fine to me. Consider it this way: whether or not you "win the debate" from the perspective of some outside audience, or from our perspective, isn't important. It's more about whether you feel like you might benefit from the conversation yourself.

Comment by DSimon on How to Build a Community · 2013-05-16T12:36:21.848Z · LW · GW

Yep, agreed. We have a lot more historical examples of dictators (of various levels of effectiveness) who were in it for themselves, and either don't care if their citizens suffer or even actively prefer it. Such dictators would be worse for the world if they get more rational, because their goals make the world a shittier place.

Comment by DSimon on How to Build a Community · 2013-05-15T23:04:18.212Z · LW · GW

You keep using that word, etc. etc.

Rational means something like "figures out what the truth is, and figures out the best way to get stuff done, and does that thing". It doesn't require any particular goal.

So a rational dictator whose goals include their subjects having lots of fun, would be fun to live under.

Comment by DSimon on How to Build a Community · 2013-05-15T22:46:28.891Z · LW · GW

Ask too much of your subjects, and they start wondering if maybe it would be less trouble to just replace you by force.

Comment by DSimon on How to Build a Community · 2013-05-15T22:42:29.740Z · LW · GW

Best hope they've found (or built) a better dictator to replace them...