May is missing from Birth Month.
I would also like to know for next year. I have four older siblings on my father's side, and two on my mother's, and only spent any home time with one (from my mother's side). So, I answered 6 for older, but depending on whether this was a socialization or uterine environment question, the best answer might have been either 1 or 2 for older.
Especially if the builders are concerned about unintended consequences, the final goal might be relatively narrow and easily achieved, yet result in the wiping out of the builder species.
Most goals include "I will not tolerate any challenges to my power" as a subgoal. Tolerating challenges to one's power while executing a goal reduces the likelihood of achieving it.
I have seen people querulously quibbling, "ah, but suppose I find everything a user posts bad and I downvote each of them, is that a bannable offense and if not how are you going to tell, eh?" But I have not yet seen anyone saying, Eugine was right to downvote everything that these people posted, regardless of what it was, and everyone else should do the same until they are driven away.
Ah, but it's not clear that those are different activities, or if they are, whether there's any way in the database or logs to tell the difference. So, when people "quibble" about the first, they're implying (I think) that they believe that in the future someone might be right to downvote everything someone posts, because that person always posts terrible posts.
Part of the reason this is coming up is a lack or perceived lack of transparency as to exactly what patterns "convicted" Eugine_Nier.
In fact, people experience this all the time whenever we dream about being someone else, and wake up confused about who we are for a few seconds or whatever. It's definitely important to me that the thread of consciousness of who I am survives, separately from my memories and preferences, since I've experienced being me without those, like everyone else, in dreams.
Russia is a poor counter-argument, given that the ruler of Russia was called Caesar.
It's more that my definition of identity just is something like an internally-forward-flowing, indistinguishable-from-the-inside sequence of observer slices and the definition that other people are pushing just...isn't.
Hm. Does "internally-forward-flowing" mean that stateA is a (primary? major? efficient? not sure if there's a technical term, here) cause of stateB, or does it mean only that internally, stateB remembers "being" stateA?
If the former, then I think you and I actually agree.
Moby Dick is not a single physical manuscript somewhere.
"Moby Dick" can refer either to a specific object, or to a set. Your argument is that people are like a set, and Error's argument is that they are like an object (or a process, possibly; that's my own view). Conflating sets and objects assumes the conclusion.
People in the rationality community tend to believe that there's a lot of low-hanging fruit to be had in thinking rationally, and that the average person and the average society are missing out on this. This is difficult to reconcile with arguments for tradition and being cautious about rapid change, which is the heart of (old school) conservatism.
What's your evidence? I have some anecdotal evidence (based on waking from sleep, and on drinking alcohol) that seems to imply that consciousness and intelligence are quite strongly correlated, but perhaps you know of experiments in which they've been shown to vary separately?
Haha, no, sorry. I was referring to Child's Jack Reacher, who starts off with a strong moral code and seems to lose track of it around book 12.
Not every specific question need have contributed to fitness.
You may, however, come to strongly dislike the protagonist later in the series.
I think "numerically identical" is just a stupid way of saying "they're the same".
In English, at least, there appears to be no good way to differentiate between "this is the same thing" and "this is an exactly similar thing (except that there are at least two of them)". In programming, you can just test whether two objects have the same memory location, but the simplest way to indicate that in English about arbitrary objects is to point out that there's only one item. Hence the need for phrasing like "numerically identical".
Is there a better way?
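The identity-versus-equality distinction described above is built directly into most programming languages. A minimal Python sketch (illustrative only; the variable names are my own):

```python
# Identity vs. equality: distinguishing "the same thing"
# from "an exactly similar thing".
a = [1, 2, 3]
b = [1, 2, 3]   # a distinct list that happens to have equal contents
c = a           # another name for the very same object

print(a == b)   # True  -- equal contents ("exactly similar")
print(a is b)   # False -- two separate objects in memory
print(a is c)   # True  -- "numerically identical": one object, two names
```

English lacks a compact operator like `is`, which is why philosophers fall back on phrases like "numerically identical" to mark the `is` sense rather than the `==` sense.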
3.1 ounces of very lean meat
That's a very specific number. Why not just "about 3 ounces (85g)"?
We can imagine a world in which brains were highly efficient and people looked more like elephants, in which one could revolutionize physics every year or so but it takes a decade to push out a calf.
That's not even required, though. What we're looking for (blade-size-wise) is whether a million additional people produce enough innovation to support more than a million additional people, and even if innovators are one in a thousand, it's not clear which way that swings in general.
subtle, feminine, discrete and firm
Probably you meant discreet, but if not, consider using "distinct" to avoid confusion.
If you prefer suffering to nonexistence, this ceases to be a problem. One could argue that this justifies raising animals for food (which would otherwise never have existed), but it's not clear to me what the sign of the change is.
...but "argh" is pronounced that way... http://www.youtube.com/watch?v=pOlKRMXvTiA :) Since the late 90s, at least.
...but people (around me, at least, in the DC area) do say "Er..." literally, sometimes. It appears to be pronounced that way when the speaker wants to emphasize the pause, as far as I can tell.
But it's actually true that solving the Hard Problem of Consciousness is necessary to fully explode the Chinese Room! Without having solved it, it's still possible that the Room isn't understanding anything, even if you don't regard this as a knock against the possibility of GAI. I think the Room does say something useful about Turing tests: that behavior suggests implementation, but doesn't necessarily constrain it. The Giant Lookup Table is another, similarly impractical, argument that makes the same point.
Understanding is either only inferred from behavior, or actually a process that needs to be duplicated for a system to understand. If the latter, then the Room may speak Chinese without understanding it. If the former, then it makes no sense to say that a system can speak Chinese without understanding it.
no immortal horses, imagine that.
No ponies or friendship? Hard to imagine, indeed. :|
Not Michaelos, but in this sense, I would say that, yes, a billion years from now is magical gibberish for almost any decision you'd make today. I have the feeling you meant that the other way 'round, though.
In the context of
But when this is phrased as "the set of minds included in CEV is totally arbitrary, and hence, so will be the output," an essential truth is lost
I think it's clear that with
valuing others' not having abortions loses to their valuing choice
you have decided to exclude some (potential) minds from CEV. You could just as easily have decided to include them and said "valuing choice loses to others valuing their life".
But, to be clear, I don't think that even if you limit it to "existing, thinking human minds at the time of the calculation", you will get some sort of unambiguous result.
A very common desire is to be more prosperous than one's peers. It's not clear to me that there is some "real" goal that this serves (for an individual) -- it could be literally a primary goal. If that's the case, then we already have a problem: two people in a peer group cannot both get all they want if both want to have more than any other. I can't think of any satisfactory solution to this. Now, one might say, "well, if they'd grown up farther together this would be solvable", but I don't see any reason that should be true. People don't necessarily grow more altruistic as they "grow up", so it seems that there might well be no CEV to arrive at. I think, actually, a weaker version of the UFAI problem exists here: sure, humans are more similar to each other than UFAI's need be to each other, but they still seem fundamentally different in goal systems and ethical views, in many respects.
The point you quoted is my main objection to CEV as well.
You might object that a person might fundamentally value something that clashes with my values. But I think this is not likely to be found on Earth.
Right now there are large groups who have specific goals that fundamentally clash with some goals of those in other groups. The idea of "knowing more about [...] ethics" either presumes an objective ethics or merely points at you or where you wish you were.
Yes, I thought about that when writing the above, but I figured I'd fall back on the term "entity". ;) An entity would be something that could have goals (sidestepping the hard work of deciding exactly what objects qualify).
I think I must be misunderstanding you. It's not so much that I'm saying that our goals are the bedrock, as that there's no objective bedrock to begin with. We do value things, and we can make decisions about actions in pursuit of things we value, so in that sense there's some basis for what we "ought" to do, but I'm making exactly the same point you are when you say:
what evidence is there that there is any 'ought' above 'maxing out our utility functions'?
I know of no such evidence. We do act in pursuit of goals, and that's enough for a positivist morality, and it appears to be the closest we can get to a normative morality. You seem to say that it's not very close at all, and I agree, but I don't see a path to closer.
So, to recap, we value what we value, and there's no way I can see to argue that we ought to value something else. Two entities with incompatible goals are to some extent mutually evil, and there is no rational way out of it, because arguments about "ought" presume a given goal both can agree on.
Would making paperclips become valuable if we created a paperclip maximiser?
To the paperclip maximizer, they would certainly be valuable -- ultimately so. If you have some other standard, some objective measurement, of value, please show me it. :)
By the way, you can't say the wirehead doesn't care about goals: part of the definition of a wirehead is that he cares most about the goal of stimulating his brain in a pleasurable way. An entity that didn't care about goals would never do anything at all.
Just to be clear, I don't think you're disagreeing with me.
I'm asking about how to efficiently signal actual pacifism.
I'm not asking about faking pacifism. I'm asking about how to efficiently signal actual pacifism. How else am I supposed to ask about that?
Replace "serious injury or death" with "causing serious injury or death".
If God doesn't exist, then there is no way to know what He would want, so the replacement has no actual moral rules.
When you consider this, consider the difference between our current world (with all the consequences for those of IQ 85), and a world where 85 was the average, so that civilization and all its comforts never developed at all...
When people say that it's conceivable for something to act exactly as if it were in pain without actually feeling pain, they are using the word "feel" in a way that I don't understand or care about.
Taken literally, this suggests that you believe all actors really believe they are the character (at least, if they are acting exactly like the character). Since that seems unlikely, I'm not sure what you mean.
people can see after 30 years that the idea [of molecular manufacturing] turned out sterile.
Did I miss the paper where it was shown not to be workable, or are you basing this only on the current lack of assemblers?
Raw processing power. In the computer analogy, intelligence is the combination of enough processing power with software that implements the intelligence. When people compare computers to brains, they usually seem to be ignoring the software side.
Can you point out why the analogy is bad?
I've read over one hundred books I think were better. And I mean that literally; if I spent a day doing it, I could actually go through my bookshelves and write down a list of one hundred and one books I liked more.
I've read many, many books I liked more than many books which I would consider "better" in a general sense. From the context of the discussion, I'd think "were better" was the meaning you intended. Alternatively, maybe you don't experience such a discrepancy between what you like and what you believe is "good writing"?
Me, too, but about two years ago. Unfortunately, I've had a hard time liking wine, so I'm hoping that moderate amounts of scotch and/or rum have a similar effect.
There are (at least) two meanings for "why ought we be moral":
- "Why should an entity without goals choose to follow goals", or, more generally, "Why should an entity without goals choose [anything]",
- and, "Why should an entity with a top level goal of X discard this in favor of a top level goal of Y."
I can imagine answers to the second question (it could be that explicitly replacing X with Y results in achieving X better than if you don't; this is one driver of extremism in many areas), but it seems clear that the first question admits of no attack.
Unless J is much, much less intelligent than you, or you've spent a lot of time planning different scenarios, it seems like any one of J's answers might well require too much thought for a quick response. For example,
tld: Well, God was there, and now he's left that world behind. So it's a world without God - what changes, what would be different about the world if God weren't in it?
J: I can't imagine a world without God in it.
Lots of theists might answer this in a much more specific fashion. "Well, I suppose the world would cease to exist, wouldn't it?", "Anything could happen, since God wouldn't be holding it together anymore!", or "People would all turn evil immediately, since God is the source of conscience." all seem like plausible responses. "I can't imagine a world without God in it" might literally be true, but even if it is, J's response might be something entirely different, or even something that isn't really even a response to the question (try writing down a real-life conversation some time, without cleaning it up into what was really meant. People you know probably very often say things that are both surprising and utterly pointless).
Morality consists of courses of action to achieve a goal or goals, and the goal or goals themselves. Game theory, evolutionary biology, and other areas of study can help choose courses of action, and they can explain why we have the goals we have, but they can't explain why we "ought" to have a given goal or goals. If you believe that a god created everything except itself, but including morality, then said god presumably can ground morality simply by virtue of having created it.
Also this year,
Nitpick: actually last year (March 2011, per http://www.ncbi.nlm.nih.gov/pubmed/21280961 ).
This is not (to paraphrase Eliezer) a thunderbolt of insight. [...]
This sentence seems exactly the same to me as saying, "This was obvious, but, [...]".
Sometimes, people assert obviousness as a self-deprecating maneuver or to preempt criticism, rather than because they believe that everyone would consider the statement in question obvious.
SG-1 usually had a very anti-theist message, as long as you group all gods together, but the writers went out of their way at least once to exempt the Christian God when the earthborn characters wondered if God might be a goa'uld: "Teal'c: I know of no Goa'uld capable of showing the necessary compassion or benevolence that I've read of in your bible."
However, the overall thrust of the show was pretty anti-deity, and the big bads of the last few seasons were very, very medieval-priestish.
I like Pandora enough that I pay for it. That said, there are some issues with it:
- a given station seems to be limited to 20-30 songs, with a very occasional other song tossed in, so if you listen to it throughout a workday, you'll have heard the same songs repeatedly. This can be ideal, however, for worktime music, where repetitive enjoyability is more important than novelty.
- Pandora doesn't have some artists, especially (I think) those not completely representable with ASCII, like Alizée.
- If you upvote everything you like and regularly downvote things you don't, and if your tastes are quite broad across genres, it's easy for stations to drift so far from their seed song or artist that they mostly play things not really representative of the name you originally gave them. Additionally, multiple stations can converge until they mostly play the same songs, differing only in the original songs you started them with.
[...] or thwarting "the single best invention of life," according to Steve Jobs.
Which was even more odd given that it immediately followed a worshipful Jobs documentary featuring Adam Savage and Jamie, which contained that very quote.
Or, you know, approving.