Comments

Comment by randallsquared on 2014 Survey Results · 2015-01-04T14:43:41.059Z · LW · GW

May is missing from Birth Month.

Comment by randallsquared on 2014 Less Wrong Census/Survey · 2014-10-25T16:44:29.059Z · LW · GW

I would also like to know for next year. I have four older siblings on my father's side, and two on my mother's, and only spent any home time with one (from my mother's side). So, I answered 6 for older, but depending on whether this was a socialization or uterine environment question, the best answer might have been either 1 or 2 for older.

Comment by randallsquared on The Great Filter is early, or AI is hard · 2014-09-03T15:24:49.020Z · LW · GW

Especially if the builders are concerned about unintended consequences, the final goal might be relatively narrow and easily achieved, yet result in the wiping out of the builder species.

Comment by randallsquared on What should a friendly AI do, in this situation? · 2014-08-08T22:09:00.180Z · LW · GW

Most goals include "I will not tolerate any challenges to my power" as a subgoal. Tolerating challenges to the power needed to execute goals reduces the likelihood of achieving them.

Comment by randallsquared on [moderator action] Eugine_Nier is now banned for mass downvote harassment · 2014-07-11T03:35:50.647Z · LW · GW

I have seen people querulously quibbling, "ah, but suppose I find everything a user posts bad and I downvote each of them, is that a bannable offense and if not how are you going to tell, eh?" But I have not yet seen anyone saying, Eugine was right to downvote everything that these people posted, regardless of what it was, and everyone else should do the same until they are driven away.

Ah, but it's not clear that those are different activities, or if they are, whether there's any way in the database or logs to tell the difference. So, when people "quibble" about the first, they're implying (I think) that they believe that in the future someone might be right to downvote everything someone posts, because that person always posts terrible posts.

Part of the reason this is coming up is a lack or perceived lack of transparency as to exactly what patterns "convicted" Eugine_Nier.

Comment by randallsquared on Dissolving the Thread of Personal Identity · 2014-05-26T00:02:06.312Z · LW · GW

In fact, people experience this all the time whenever we dream about being someone else, and wake up confused about who we are for a few seconds or whatever. It's definitely important to me that the thread of consciousness of who I am survives, separately from my memories and preferences, since I've experienced being me without those, like everyone else, in dreams.

Comment by randallsquared on Rational Evangelism · 2014-02-27T22:58:04.088Z · LW · GW

Russia is a poor counter-argument, given that the ruler of Russia was called Caesar.

Comment by randallsquared on Continuity in Uploading · 2014-01-22T05:03:29.412Z · LW · GW

It's more that my definition of identity just is something like an internally-forward-flowing, indistinguishable-from-the-inside sequence of observer slices and the definition that other people are pushing just...isn't.

Hm. Does "internally-forward-flowing" mean that stateA is a (primary? major? efficient? not sure if there's a technical term here) cause of stateB, or does it mean only that internally, stateB remembers "being" stateA?

If the former, then I think you and I actually agree.

Comment by randallsquared on Continuity in Uploading · 2014-01-20T00:09:50.754Z · LW · GW

Moby Dick is not a single physical manuscript somewhere.

"Moby Dick" can refer either to a specific object, or to a set. Your argument is that people are like a set, and Error's argument is that they are like an object (or a process, possibly; that's my own view). Conflating sets and objects assumes the conclusion.

Comment by randallsquared on [LINK] Why I'm not on the Rationalist Masterlist · 2014-01-07T03:36:12.958Z · LW · GW

People in the rationality community tend to believe that there's a lot of low-hanging fruit to be had in thinking rationally, and that the average person and the average society are missing out on this. This is difficult to reconcile with arguments for tradition and for caution about rapid change, which are the heart of (old-school) conservatism.

Comment by randallsquared on What if Strong AI is just not possible? · 2014-01-04T15:31:43.921Z · LW · GW

What's your evidence? I have some anecdotal evidence (based on waking from sleep, and on drinking alcohol) that seems to imply that consciousness and intelligence are quite strongly correlated, but perhaps you know of experiments in which they've been shown to vary separately?

Comment by randallsquared on Yet more "stupid" questions · 2013-09-05T16:00:22.389Z · LW · GW

Haha, no, sorry. I was referring to Child's Jack Reacher, who starts off with a strong moral code and seems to lose track of it around book 12.

Comment by randallsquared on Yet more "stupid" questions · 2013-09-01T01:14:16.516Z · LW · GW

Not every specific question need have contributed to fitness.

Comment by randallsquared on Yet more "stupid" questions · 2013-09-01T01:01:30.459Z · LW · GW

You may, however, come to strongly dislike the protagonist later in the series.

Comment by randallsquared on More "Stupid" Questions · 2013-08-08T13:29:22.919Z · LW · GW

I think "numerically identical" is just a stupid way of saying "they're the same".

In English, at least, there appears to be no good way to differentiate between "this is the same thing" and "this is an exactly similar thing (except that there are at least two of them)". In programming, you can just test whether two objects have the same memory location, but the simplest way to indicate that in English, for arbitrary objects, is to point out that there's only one item. Hence the need for phrasing like "numerically identical".

Is there a better way?
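
To make the programming analogy above concrete, here's a minimal Python sketch (the values are arbitrary and purely illustrative):

    # Two separately constructed lists with the same contents:
    a = [1, 2, 3]
    b = [1, 2, 3]
    c = a          # a second name for the very same object

    print(a == b)  # True  -- "exactly similar" (structural equality)
    print(a is b)  # False -- two distinct objects in memory
    print(a is c)  # True  -- one object, two names ("numerically identical")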

Comment by randallsquared on Optimizing for attractiveness · 2013-05-31T20:43:37.101Z · LW · GW

3.1 ounces of very lean meat

That's a very specific number. Why not just "about 3 ounces (85g)"?

Comment by randallsquared on Justifiable Erroneous Scientific Pessimism · 2013-05-10T11:16:46.801Z · LW · GW

We can imagine a world in which brains were highly efficient and people looked more like elephants, in which one could revolutionize physics every year or so but it takes a decade to push out a calf.

That's not even required, though. What we're looking for (blade-size-wise) is whether a million additional people produce enough innovation to support more than a million additional people, and even if innovators are one in a thousand, it's not clear which way that swings in general.
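
As a toy version of the calculation I have in mind (every number here is invented for illustration):

    # Do N additional people generate enough innovation to support
    # more than N additional people? All numbers are made up.
    additional_people = 1_000_000
    innovator_rate = 1 / 1000           # one innovator per thousand people
    capacity_per_innovator = 1_200      # extra people supportable per innovator

    innovators = additional_people * innovator_rate
    extra_capacity = innovators * capacity_per_innovator

    # Growth is self-sustaining only if added capacity exceeds added headcount:
    print(extra_capacity > additional_people)  # True with these numbers, but
                                               # False if capacity_per_innovator < 1000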

Comment by randallsquared on Using Evolution for Marriage or Sex · 2013-05-09T11:28:45.087Z · LW · GW

subtle, feminine, discrete and firm

Probably you meant discreet, but if not, consider using "distinct" to avoid confusion.

Comment by randallsquared on Pay Other Species to Pandemize Vegetarianism for You · 2013-04-18T16:58:42.062Z · LW · GW

If you prefer suffering to nonexistence, this ceases to be a problem. One could argue that this justifies raising animals for food (which would otherwise never have existed), but it's not clear to me what the sign of the change is.

Comment by randallsquared on What truths are actually taboo? · 2013-04-17T19:53:12.298Z · LW · GW

...but "argh" is pronounced that way... http://www.youtube.com/watch?v=pOlKRMXvTiA :) Since the late 90s, at least.

Comment by randallsquared on What truths are actually taboo? · 2013-04-17T15:06:48.497Z · LW · GW

...but people (around me, at least, in the DC area) do say "Er..." literally, sometimes. It appears to be pronounced that way when the speaker wants to emphasize the pause, as far as I can tell.

Comment by randallsquared on LW Women Submissions: On Misogyny · 2013-04-12T21:54:18.998Z · LW · GW

Gandhi might not be the best example of non-misogyny.

Comment by randallsquared on AI prediction case study 3: Searle's Chinese room · 2013-03-17T16:44:02.859Z · LW · GW

But it's actually true that solving the Hard Problem of Consciousness is necessary to fully explode the Chinese Room! Without having solved it, it's still possible that the Room isn't understanding anything, even if you don't regard this as a knock against the possibility of GAI. I think the Room does say something useful about Turing tests: that behavior suggests implementation, but doesn't necessarily constrain it. The Giant Lookup Table is another, similarly impractical, argument that makes the same point.

Understanding is either only inferred from behavior, or actually a process that needs to be duplicated for a system to understand. If the latter, then the Room may speak Chinese without understanding it. If the former, then it makes no sense to say that a system can speak Chinese without understanding it.

Comment by randallsquared on Rationalist fiction brainstorming funtimes · 2013-03-09T18:07:48.641Z · LW · GW

no immortal horses, imagine that.

No ponies or friendship? Hard to imagine, indeed. :|

Comment by randallsquared on The value of Now. · 2013-02-01T21:54:53.155Z · LW · GW

Not Michaelos, but in this sense, I would say that, yes, a billion years from now is magical gibberish for almost any decision you'd make today. I have the feeling you meant that the other way 'round, though.

Comment by randallsquared on CEV: a utilitarian critique · 2013-01-28T04:06:02.755Z · LW · GW

In the context of

But when this is phrased as "the set of minds included in CEV is totally arbitrary, and hence, so will be the output," an essential truth is lost

I think it's clear that with

valuing others' not having abortions loses to their valuing choice

you have decided to exclude some (potential) minds from CEV. You could just as easily have decided to include them and said "valuing choice loses to others' valuing their lives".

But, to be clear, I don't think that even if you limit it to "existing, thinking human minds at the time of the calculation", you will get some sort of unambiguous result.

Comment by randallsquared on CEV: a utilitarian critique · 2013-01-28T02:20:07.259Z · LW · GW

A very common desire is to be more prosperous than one's peers. It's not clear to me that there is some "real" goal that this serves (for an individual) -- it could be literally a primary goal. If that's the case, then we already have a problem: two people in a peer group cannot both get all they want if both want to have more than any other. I can't think of any satisfactory solution to this.

Now, one might say, "well, if they'd grown up farther together this would be solvable", but I don't see any reason that should be true. People don't necessarily grow more altruistic as they "grow up", so it seems that there might well be no CEV to arrive at. I think, actually, a weaker version of the UFAI problem exists here: sure, humans are more similar to each other than UFAIs need be to each other, but they still seem fundamentally different in goal systems and ethical views, in many respects.

Comment by randallsquared on CEV: a utilitarian critique · 2013-01-26T21:30:40.218Z · LW · GW

The point you quoted is my main objection to CEV as well.

You might object that a person might fundamentally value something that clashes with my values. But I think this is not likely to be found on Earth.

Right now there are large groups who have specific goals that fundamentally clash with some goals of those in other groups. The idea of "knowing more about [...] ethics" either presumes an objective ethics or merely points at you or where you wish you were.

Comment by randallsquared on Welcome to Heaven · 2013-01-26T15:06:19.807Z · LW · GW

Yes, I thought about that when writing the above, but I figured I'd fall back on the term "entity". ;) An entity would be something that could have goals (sidestepping the hard work of specifying exactly which objects qualify).

Comment by randallsquared on Welcome to Heaven · 2013-01-26T04:38:57.481Z · LW · GW

I think I must be misunderstanding you. It's not so much that I'm saying that our goals are the bedrock, as that there's no objective bedrock to begin with. We do value things, and we can make decisions about actions in pursuit of things we value, so in that sense there's some basis for what we "ought" to do, but I'm making exactly the same point you are when you say:

what evidence is there that there is any 'ought' above 'maxing out our utility functions'?

I know of no such evidence. We do act in pursuit of goals, and that's enough for a positivist morality, and it appears to be the closest we can get to a normative morality. You seem to say that it's not very close at all, and I agree, but I don't see a path to closer.

So, to recap, we value what we value, and there's no way I can see to argue that we ought to value something else. Two entities with incompatible goals are to some extent mutually evil, and there is no rational way out of it, because arguments about "ought" presume a given goal both can agree on.

Would making paperclips become valuable if we created a paperclip maximiser?

To the paperclip maximizer, they would certainly be valuable -- ultimately so. If you have some other standard of value, some objective measurement, please show it to me. :)

By the way, you can't say the wirehead doesn't care about goals: part of the definition of a wirehead is that he cares most about the goal of stimulating his brain in a pleasurable way. An entity that didn't care about goals would never do anything at all.

Comment by randallsquared on Welcome to Heaven · 2013-01-22T20:00:58.317Z · LW · GW

Just to be clear, I don't think you're disagreeing with me.

Comment by randallsquared on Donation tradeoffs in conscientious objection · 2012-12-28T11:22:07.925Z · LW · GW

I'm asking about how to efficiently signal actual pacifism.

Comment by randallsquared on Donation tradeoffs in conscientious objection · 2012-12-28T01:08:54.438Z · LW · GW

I'm not asking about faking pacifism. I'm asking about how to efficiently signal actual pacifism. How else am I supposed to ask about that?

Replace "serious injury or death" with "causing serious injury or death".

Comment by randallsquared on Ontological Crisis in Humans · 2012-12-21T05:48:49.671Z · LW · GW

If God doesn't exist, then there is no way to know what He would want, so the replacement has no actual moral rules.

Comment by randallsquared on Constructing fictional eugenics (LW edition) · 2012-10-29T23:55:57.693Z · LW · GW

When you consider this, consider the difference between our current world (with all the consequences for those of IQ 85), and a world where 85 was the average, so that civilization and all its comforts never developed at all...

Comment by randallsquared on The raw-experience dogma: Dissolving the “qualia” problem · 2012-09-18T18:51:53.848Z · LW · GW

When people say that it's conceivable for something to act exactly as if it were in pain without actually feeling pain, they are using the word "feel" in a way that I don't understand or care about.

Taken literally, this suggests that you believe all actors really believe they are the character (at least, if they are acting exactly like the character). Since that seems unlikely, I'm not sure what you mean.

Comment by randallsquared on [LINK] Cryonics - without even trying · 2012-08-17T17:32:31.826Z · LW · GW

people can see after 30 years that the idea [of molecular manufacturing] turned out sterile.

Did I miss the paper where it was shown not to be workable, or are you basing this only on the current lack of assemblers?

Comment by randallsquared on The weakest arguments for and against human level AI · 2012-08-16T11:44:32.448Z · LW · GW

Raw processing power. In the computer analogy, intelligence is the combination of enough processing power with software that implements the intelligence. When people compare computers to brains, they usually seem to be ignoring the software side.

Comment by randallsquared on HP:MOR and the Radio Fallacy · 2012-07-24T03:43:50.712Z · LW · GW

Can you point out why the analogy is bad?

Comment by randallsquared on Negative and Positive Selection · 2012-07-16T14:13:09.947Z · LW · GW

I've read over one hundred books I think were better. And I mean that literally; if I spent a day doing it, I could actually go through my bookshelves and write down a list of one hundred and one books I liked more.

I've read many, many books I liked more than many books which I would consider "better" in a general sense. From the context of the discussion, I'd think "were better" was what you meant. Alternatively, maybe you don't experience such a discrepancy between what you like and what you believe is "good writing"?

Comment by randallsquared on Moderate alcohol consumption inversely correlated with all-cause mortality · 2012-07-14T06:10:55.559Z · LW · GW

Me, too, but about two years ago. Unfortunately, I've had a hard time liking wine, so I'm hoping that moderate amounts of scotch and/or rum have a similar effect.

Comment by randallsquared on A wild theist platonist appears, to ask about the path · 2012-05-16T13:09:36.620Z · LW · GW

There are (at least) two meanings for "why ought we be moral":

  • "Why should an entity without goals choose to follow goals", or, more generally, "Why should an entity without goals choose [anything]",
  • and, "Why should an entity with a top level goal of X discard this in favor of a top level goal of Y."

I can imagine answers to the second question (it could be that explicitly replacing X with Y results in achieving X better than if you don't; this is one driver of extremism in many areas), but it seems clear that the first question admits of no attack.

Comment by randallsquared on The ethics of breaking belief · 2012-05-09T16:57:13.305Z · LW · GW

Unless J is much, much less intelligent than you, or you've spent a lot of time planning different scenarios, it seems like any one of J's answers might well require too much thought for a quick response. For example,

tld: Well, God was there, and now he's left that world behind. So it's a world without God - what changes, what would be different about the world if God weren't in it?

J: I can't imagine a world without God in it.

Lots of theists might answer this in a much more specific fashion. "Well, I suppose the world would cease to exist, wouldn't it?", "Anything could happen, since God wouldn't be holding it together anymore!", or "People would all turn evil immediately, since God is the source of conscience." all seem like plausible responses. "I can't imagine a world without God in it" might literally be true, but even if it is, J's response might be something entirely different, or even something that isn't really even a response to the question (try writing down a real-life conversation some time, without cleaning it up into what was really meant. People you know probably very often say things that are both surprising and utterly pointless).

Comment by randallsquared on A wild theist platonist appears, to ask about the path · 2012-05-09T15:24:18.716Z · LW · GW

Morality consists of courses of action to achieve a goal or goals, and the goal or goals themselves. Game theory, evolutionary biology, and other areas of study can help choose courses of action, and they can explain why we have the goals we have, but they can't explain why we "ought" to have a given goal or goals. If you believe that a god created everything except itself, but including morality, then said god presumably can ground morality simply by virtue of having created it.

Comment by randallsquared on How to Fix Science · 2012-03-07T13:58:37.458Z · LW · GW

Also this year,

Nitpick: actually last year (March 2011, per http://www.ncbi.nlm.nih.gov/pubmed/21280961 ).

Comment by randallsquared on On Saying the Obvious · 2012-02-02T14:16:40.473Z · LW · GW

This is not (to paraphrase Eliezer) a thunderbolt of insight. [...]

This sentence seems exactly the same to me as saying, "This was obvious, but, [...]".

Sometimes, people assert obviousness as a self-deprecating maneuver or to preempt criticism, rather than because they believe that everyone would consider the statement in question obvious.

Comment by randallsquared on Non-theist cinema? · 2012-01-10T14:30:37.272Z · LW · GW

SG-1 usually had a very anti-theist message, as long as you group all gods together, but the writers went out of their way at least once to exempt the Christian God when the earthborn characters wondered if God might be a Goa'uld: "Teal'c: I know of no Goa'uld capable of showing the necessary compassion or benevolence that I've read of in your bible."

However, the overall thrust of the show was pretty anti-deity, and the big bads of the last few seasons were very, very medieval-priestish.

Comment by randallsquared on Things you are supposed to like · 2011-11-01T13:31:31.005Z · LW · GW

I like Pandora enough that I pay for it. That said, there are some issues with it:

  • a given station seems to be limited to 20-30 songs, with a very occasional other song tossed in, so if you listen to it throughout a workday, you'll have heard the same song repeatedly. This can be ideal, however, for worktime music, where repetitive enjoyability is more important than novelty.
  • Pandora doesn't have some artists, especially (I think) those not completely representable with ASCII, like Alizée.
  • If you regularly upvote everything you like and downvote things you don't, and if your tastes are quite broad across genres, it's easy for a station to drift so far from its seed song or artist that it mostly plays things not really representative of the name you gave it originally. Additionally, multiple stations can converge so that they mostly play the same songs, except for the original songs you started each station with, which remain quite different.

Comment by randallsquared on Curiosity, Adam Savage, and Life-Extension · 2011-10-17T21:58:57.208Z · LW · GW

[...] or thwarting "the single best invention of life," according to Steve Jobs.

Which was even more odd given that it immediately followed a worshipful Jobs documentary featuring Adam Savage and Jamie, which contained that very quote.

Comment by randallsquared on Should I play World of Warcraft? · 2011-10-07T18:00:02.225Z · LW · GW

Or, you know, approving.