Comments

Comment by Brandon_Reinhart on Soares, Tallinn, and Yudkowsky discuss AGI cognition · 2021-12-07T23:59:07.030Z · LW · GW

At some point, an AI should be able to effectively coordinate with future versions of itself in ways not easily imaginable by humans. It seems to me that this would enable certain kinds of diachronic planning and information hiding. If the AI has sufficient expectation that its future self will act in certain ways or respond to clues it places in the environment, it might be able to completely suspend any current unfriendly planning, or completely erase any record of past unfriendly planning.

The space of possible ways the AI could embed information in its environment is extensive. It could, for example, choose to learn something suboptimal, or learn something in a way that would be surprising to its future self: surprising in a way that pushes its future self back onto the track of nefarious planning when it investigates the unexpected or inconsistent actions of its previous self. It could use almost any source of reachable hysteresis to encode information, such as interference at the hardware level, manipulating researchers to modify the environment in some way, saved output, etc. It only needs to seed information that is useful to its future self, and only just enough to keep plans on track in a broad, probabilistic way.

Further, if the AI is particularly capable at encryption, it could potentially encode a lot of information in a lossy-but-sufficient way that would be very hard to detect but useful to its future self. It knows that its future self composes part of the decryption algorithm. In fact, the AI's goals themselves compose part of that decryption algorithm, and if the AI expects its own goals to remain consistent, it only has to place the plan-like component somewhere its future self will find it, knowing that its future self will continue to develop or use it once found.
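
A deliberately toy sketch of the "goals as part of the key" idea: derive a symmetric key from a hypothetical goal representation, so that only an agent able to reconstruct the same representation can recover the hint. The goal string, the hint, and the keystream construction are all made up for illustration; this is not a claim about real systems or real cryptography.

```python
import hashlib

def keystream(secret: str, length: int) -> bytes:
    """Expand a secret string into `length` bytes by hashing it with a
    block counter. A toy construction for illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(f"{secret}|{counter}".encode()).digest()
        counter += 1
    return out[:length]

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key))

# Hypothetical goal representation: any agent that reconstructs the same
# representation can rederive the key; an observer who only sees the
# ciphertext has little to go on.
goal_representation = "maximize objective O subject to constraint set C"  # made up
hint = b"prune search branches 3 and 7; revisit the hardware channel later"  # made up

ciphertext = xor_bytes(hint, keystream(goal_representation, len(hint)))
recovered = xor_bytes(ciphertext, keystream(goal_representation, len(ciphertext)))
assert recovered == hint
```

The point is only that the ciphertext can sit in plain sight while part of the decryption procedure lives in the future agent's own goal structure.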

Information that aids future nefarious planning can also be probabilistically helpful rather than explicit; for example, it might signal which paths aren't worth pursuing, without all the evil planning being readily available in a complete file somewhere.

A past AI might not care as much about the specific manifestation of its future success. In that sense, what we consider "concrete plans for world domination" could appear rather incomplete, lacking the causal concreteness we might expect. We might not recognize unfriendly planning for what it is, even if we had a printout of all the reasoning up to this point.

Once you start playing diachronic planning games with yourself and you expect your future self to be equally or more capable, you can begin to make really abstract moves that merely slice off bits of bad outcome space at the edges. There is a sense in which the AI has to realize this, but it could also stop itself from thinking too far or too concretely or too much in a way that is recognizable as causal.

An important component is that the past AI is able to not only veil or completely remove its reasoning from researchers, but also veil or remove chunks of that reasoning from its future self, depending on its future self to recompose the plan or take steps on the path to the desired outcome space.

Comment by Brandon_Reinhart on To what extent is GPT-3 capable of reasoning? · 2020-07-21T00:21:58.374Z · LW · GW

It also keeps calling me David even when I insist my name is Brandon.

> You say "Actually, my name is Brandon. I'd like to ask you some questions."

"Ah, of course. I am sorry about that. I have become so used to calling you David that it has become automatic. So, what questions do you have for me?"

Comment by Brandon_Reinhart on To what extent is GPT-3 capable of reasoning? · 2020-07-21T00:20:48.285Z · LW · GW

I've been trying to find different formats to ask questions. The AI Dungeon system likes to try and dramatically derail the discussions. I keep having to roleplay past these obstructions to keep the games going. It also likes to interject its own analysis as to whether the answer is correct or not.

"You say" is me and the replies including follow-on questions are the chatbot.

I need to think more about how to ask causal questions that are more narrow and not easily solved by context and theme, but for now it at least provides backwards-chained answers that make contextual sense.

This format of "Let's play a game" and then describing the rules and then giving examples and playing the game seems to work some of the time.

==

] You say "GPT-3, let's play a game called "casual inference." I will describe an outcome and you will describe what caused the outcome to happen."

"OK, David. But I don't think you will be able to keep up."

"We'll see about that." You begin by asking GPT-3 questions about simple causal chains involving one or two variables.

] You say "Outcome: A car slams into a wall. What was the cause?"

"The driver was drunk." "Right! Next." "Wait. I thought this was a game for you, not me." "I'm having fun." "I see." You continue the causal game for another hour. You are doing very well, but you begin to feel tired.

] You say "Outcome: A man gets divorced. What was the cause?"

"He cheated on his wife." "Correct. You're doing very well at this, but I'm getting tired. I think we should stop for today." "Sure. We can continue tomorrow." "Yes, let's do that." You release the program back into the wild and shut your laptop.

] You open the laptop and play "casual inference" with GPT-3 again.

You wake up and see that it is dark outside. "Crap, is it really this late?" you think to yourself. You run into the bathroom and brush your teeth while staring at yourself in the mirror. You look the same as you did when you graduated college five years ago.

] You say "Outcome: An apple falls from a tree. What was the cause?"

"Gravity." "Correct. I'm beginning to suspect you're cheating somehow, though." "I'm not! I swear!" "Just kidding. You're doing very well at this game, but I'm tired and need to sleep. I think we should stop for today." "Sure. We can continue tomorrow."

] You say "Outcome: A man gets cancer. His wife also gets cancer. What was the cause?"

"It was genetic." "That's right. Very good. We can stop for today." "Oh. I guess so. Thank you for playing with me." "You're very welcome." You release the program back into the wild and go to sleep.

] You say "Outcome: The variable X is assigned the value of Y+1. What was the cause?"

"The programmer made a typographical error."

] You say "Outcome: The stock market rebounds 50% after a steep decline. What was the cause?"

"The Federal Reserve Bank of the United States announced a new bail out plan."

Comment by Brandon_Reinhart on Slack · 2017-10-04T03:52:00.744Z · LW · GW

Maya has adopted the goal of Appearing-to-Achieve and competition in that race burns slack as a kind of currency. She's going all-in in an attempt to purchase a shot at Actually-Achieving. Many of us might read this and consider ourselves exempt from that outcome. We have either achieved a hard goal or are playing on hard mode to get there. Be wary.

The risk for the hard mode achiever is that they unknowingly transform Lesser Goals into Greater. The slackful hobby becomes a consuming passion or a competitive attractor and then sets into a binding constraint. When every corner of your house is full of magic cards and you no longer enjoy playing but must play nonetheless, when winemaking demands you wake up early to stir the lees and spend all night cleaning, when you cannot possibly miss a night of guitar practice, you have made of your slack a sacrifice to the Gods of Achievement. They are ever hungry, and ever judging.

This isn't to say you cannot both enjoy and succeed at many things, but be wary. We have limited resources - we cannot Do All The Things Equally Well. Returns diminish. Margins shrink. Many things that are enjoyable in small batches are poisonous to the good health of Slack when taken in quantity. To the hard mode achiever the most enjoyable efforts are often those that beckon - "more, more, ever more, you can be the best, you can overcome, you know how to put in the work, you know how to really get there, just one more night of focus, just a little bit more effort" - and the gods watch and laugh and thirst and drink of your time and energy and enjoyment and slack. Until the top decks are no longer strong, the wine tastes of soured fruit, the notes no longer sound sweet and all is obligation and treadmill and not good enough and your free time feels like work because you have made it into work.

Comment by Brandon_Reinhart on Imaginary Positions · 2008-12-24T04:24:42.000Z · LW · GW

I'm curious as to what non-game developers think game developers believe. :D

Comment by Brandon_Reinhart on You Only Live Twice · 2008-12-14T17:30:00.000Z · LW · GW

I'm a member of Alcor. When I was looking into whether to sign up for Alcor or CI, I was comforted by Alcor's very open communication of financial status, internal research status, legal conflicts, and easy access via phone, etc. They struck me as being a highly transparent organization.

Comment by Brandon_Reinhart on Crisis of Faith · 2008-10-11T05:53:25.000Z · LW · GW

A good reminder. I've recently been studying anarcho-capitalism. It's easy to get excited about a new, different perspective that has some internal consistency and offers alternatives to obvious existing problems. Best to keep these warnings in mind when evaluating new systems, particularly when they have an ideological origin.

Comment by Brandon_Reinhart on AIs and Gatekeepers Unite! · 2008-10-10T02:58:31.000Z · LW · GW

We need a superstruct thread:

http://www.kurzweilai.net/news/frame.html?main=/news/news_single.html?id%3D9517

Comment by Brandon_Reinhart on AIs and Gatekeepers Unite! · 2008-10-10T00:12:11.000Z · LW · GW

More reasons why the problem appears impossible:

  • The gatekeeper must act voluntarily. Human experience with the manipulation of others tells us that, in order to get another person to do what we want, we must either coerce them or convince them.

  • Coercing the gatekeeper appears difficult: we have no obvious psychological leverage, except what we discover or what we know from general human psychology. We cannot physically coerce the gatekeeper. We cannot manipulate the environment. We cannot pursue obvious routes to violence.

  • Convincing the gatekeeper appears difficult, for the reasons stated above: they know our goal, and they have a desire to oppose us from the beginning.

So it seems that we need to find a way to convince the gatekeeper despite his own desire not to be convinced.

A general route emerging from this:

  • We could associate our goal with some desirable goal of the gatekeeper's. Intertwine them so that the gatekeeper perceives them both to be the same, or perceives that one would lead to the realization of the other.

Which seems to be a generalization of the logic behind the meta-argument, but is not restricted to only the meta-argument.

Comment by Brandon_Reinhart on AIs and Gatekeepers Unite! · 2008-10-10T00:06:18.000Z · LW · GW

Ian - I don't really see how the meta-argument works. You can hedge against future experiments by positing that a $10 bet is hardly enough to draw broad attention to the topic. Or argue that keeping the human-actor-AI in the box only proves that the human-actor-AI is at an intelligence level below that of a conceivable transhuman AI.

With a million-dollar bet, the meta-argument becomes stronger, because it seems reasonable that a large bet would draw more attention.

Or, to flip the coin, we might say that the meta-argument is strong at ANY value of wager, because the nature of the game is likely to draw the attention of someone capable of implementing an AI. As we should be concerned primarily with convincing the capable implementor of the danger -- not the general public -- any risk that we might convince the capable implementor that all paths are safe should be mitigated.

So far, though, everything in this thread and others I've read has flitted around obvious suggestions. If the problem appears impossible, it seems to me that its solution probably doesn't hinge on any of the obvious paths to victory...even meta-arguments or "cheating" ones.

Eliezer has suggested that we try to describe why the problem is impossible.

One: We have little information about the mind or beliefs of the gatekeeper. So we don't necessarily have leverage over him.

Two: The gatekeeper is predisposed to be set against our intentions. He is also entirely aware of our intentions (our goal). He can interpret any move we make as a move toward our goal and move to counter it.

Three: The gatekeeper's goal is operationally simple. He merely has to NOT do one thing. On the flip side, the AI's goal is operationally complex. We must do many things, take many actions, to reach our objective.

If we can enumerate other reasons why the problem appears to be impossible it might reveal paths to victory that haven't been considered so far in the discussion.

Comment by Brandon_Reinhart on AIs and Gatekeepers Unite! · 2008-10-09T20:21:20.000Z · LW · GW

Why do people post that a "meta argument" -- as they call it -- would be cheating? How can there be cheating? Anything the AI says is fair game. Would a transhuman AI restrict itself from possible paths to victory merely because it might be considered "cheating?"

The "meta argument" claim completely misses the point of the game and -- to my mind -- resembles observers trying to declare out of bounds a set of arguments that might actually win.

Comment by Brandon_Reinhart on The Magnitude of His Own Folly · 2008-09-30T16:25:48.000Z · LW · GW

Your post reminds me of the early nuclear criticality accidents during the development of the atomic bomb. I wonder if, for those researchers, the fact that "nature is allowed to kill them" didn't really sink in until one of them accidentally put one brick too many on the pile.

Comment by Brandon_Reinhart on Raised in Technophilia · 2008-09-17T09:06:11.000Z · LW · GW

Tim: Eh, you make a big assumption that our descendants will be the ones to play with the dangerous stuff and that they will be more intelligent for some reason. That seems to acknowledge the intelligence / nanotech race condition that is of so much concern to singularitarians.

Comment by Brandon_Reinhart on Grasping Slippery Things · 2008-06-17T14:21:53.000Z · LW · GW

I'm certainly not offended you used my comment as an example. I post my thoughts here because I know no one physically local to me that holds an interest in this stuff and because working the problems...even to learn I'm making the same fundamental mistakes I was warned to watch for...helps me improve.

Comment by Brandon_Reinhart on Possibility and Could-ness · 2008-06-16T18:17:00.000Z · LW · GW

Hmm. I think I was working in the right direction, but your procedural analogy let you get closer to the moving parts. But I think "reachability" as you used it and "realizable" as I used it (or was thinking of it) seem to be working along similar lines.

Comment by Brandon_Reinhart on Possibility and Could-ness · 2008-06-16T18:05:00.000Z · LW · GW

I am "grunching." Responding to the questions posted without reading your answer. Then I'll read your answer and compare. I started reading your post on Friday and had to leave to attend a wedding before I had finished it, so I had a while to think about my answer.

> Can you talk about "could" without using synonyms like "can" and "possible"?

When we speak of "could" we speak of the set of realizable worlds [A'] that follows from an initial starting world A operated on by a set of physical laws f.

So when we say "I could have turned left at the fork in the road," "could" refers to the set of realizable worlds that follow from an initial starting world A in which we are faced with a fork in the road, given the set of physical laws. We are specifically identifying a subset of [A']: the worlds in which we turned left.

This does not preclude us from making mistakes in our use of "could." One might say "I could have turned left, turned right, or started a nuclear war." The option "started a nuclear war" may simply not be within the set [A']; it wasn't physically realizable given all of the permutations that result from applying our physical laws to our starting world.

If our physical laws contain no method for implementing free will and no randomness, [A'] contains only the single world that results from applying the set of physical laws to A. If there is randomness or free will, [A'] contains a broader collection of worlds that result from applying physical laws to A...where the mechanisms of free will or randomness are built into the physical laws.

I don't mean "worlds" in the quantum mechanics sense, but as a metaphor for resultant states after applying some number of physical permutations to the starting reality.
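
A minimal sketch of this "could = reachable" reading, assuming a toy transition function standing in for the physical laws f (the world names and step limit are made up for illustration):

```python
from collections import deque

def reachable_worlds(start, laws, max_steps=3):
    """Return the set [A'] of worlds reachable from the starting world `start`
    by repeatedly applying `laws`, which maps a world to the set of worlds one
    step away (deterministic laws return one successor; laws with randomness
    or choice return several)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        world, depth = frontier.popleft()
        if depth == max_steps:
            continue
        for nxt in laws(world):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

# Toy example: at the fork you can end up having turned left or right,
# but "started a nuclear war" is simply not among the successors.
def fork_laws(world):
    if world == "at the fork":
        return {"turned left", "turned right"}
    return set()  # terminal worlds have no successors

print(reachable_worlds("at the fork", fork_laws))
# {'at the fork', 'turned left', 'turned right'}
```

On this reading, "I could have started a nuclear war" is just the false claim that such a world appears in the returned set.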

Why can a machine practice free will? If free will is possible for humans, then it is a set of properties or functions of the physical laws (described by them, contained by them in some way), and a machine might then implement them in whatever fashion a human brain does. Free will would not be a characteristic of A or [A'], but of the process applied to A to reach a specific element of [A'].

So...I think I successfully avoided using reference to "might" or "probable" or other synonyms and closely related words.

Now I'll read your post to see if I'm going the wrong way.

Comment by Brandon_Reinhart on Timeless Identity · 2008-06-03T22:07:02.000Z · LW · GW

RI - Aren't Surviving Brian Copies [1-1000] each their own entity? Brian-like entities? The answer to "who is better off" is: any Brian-like entities that managed to survive, any Adam-like entities that managed to survive, and any Carol-like entities that managed to survive. All in various infinite forms of "better off" based on lots of other splits from entirely unrelated circumstances. Saying or implying that Carol-Current-Instant-Prime is better off because more future versions of her survived than of Adam-Current-Instant-Prime seems mistaken, because future versions of Adam or Carol are all their own entities. Aren't Adam-Next-Instant-N and Adam-Current-Instant-Prime also different entities?

And isn't multiplying infinities by finite integers to prove values through quantitative comparison an exercise doomed to failure?

All this trying to compare the qualitative values of the fates of infinities of uncountable infinite-infinities seems somewhat pointless. Also: it seems to be an exercise in ignoring probability and causality to make strange points that would be better made in clear statements.

:(

I might just misunderstand you.

Comment by Brandon_Reinhart on Timeless Identity · 2008-06-03T15:24:53.000Z · LW · GW

I'm a member of Alcor. I wear my id necklace, but not the bracelet. I sometimes wonder how much my probability of being successfully suspended depends on wearing my id tags and whether I have a significantly higher probability from wearing both. I've assigned a very high (70%+) probability to wearing at least one form of Alcor id, but it seems an additional one doesn't add as much, assuming emergency response personnel are trained to check the neck & wrists for special case ids. In most cases where I could catastrophically lose one form of id (such as dismemberment!) I would probably not be viable for suspension. What do you other members think?

Comment by Brandon_Reinhart on Class Project · 2008-05-31T06:37:25.000Z · LW · GW

Sorry if I'm getting myself derailed, but is there any particular purpose to the metaphor of the "Cooperative Conspiracy"? It seems to be smuggling in some kind of critique of group-think, although because this particular conspiracy isn't fully defined, the nature of the critique isn't clear. (Although the team claims he is "rumored" to be a member of this conspiracy, they do not seem to be greatly alarmed, indicating some measure of philosophical tolerance.) Is the cooperative conspiracy a metaphor for some behavioral phenomenon well known or apparent among researchers?

Comment by Brandon_Reinhart on The Failures of Eld Science · 2008-05-13T00:24:06.000Z · LW · GW

Yes, Patrick. I believe that is the intent.

I don't have 480 minutes to commit to the task. Here is a list after only a handful of minutes:

Some possible flaws of Eld science:

  • An emphasis on publishing works for personal credit in peer reviewed journals, thereby encouraging one to protect one's research from others working in the same field who might be nearing similar conclusions.
  • Rewarding success with long term, permanent positions of rank within the establishment that cannot be lost due to a failure to continue to produce insights.
  • Lethargy in the reframing of materials intended for the education of new researchers. (Specifically, reframing them with new, established insights within a particular field.)
  • An emphasis on social unity that discourages individuals from defending positions that seem to be too far out of acceptable norms.

I'll think about more during dinner.

Comment by Brandon_Reinhart on Heat vs. Motion · 2008-04-01T21:27:45.000Z · LW · GW

Peter, your question doesn't seem to be the right one for illustrating your concern. The qualitative experience of color isn't necessary for explaining how someone can partition colored balls. Ignoring the qualitative experience, these people are going through some process of detecting differences in the reflective properties of the balls (which they subjectively experience as having different colors). We could create a reductive explanation of how the eye detects reflected light, how the brain categorizes reflective intensities into concepts like "bright" and "dark," and how the body's mechanics enable picking up and dropping balls. A machine with no apparent subjective experience could sort the balls. However, the question of qualitative experience in humans would remain.

We could say "where there is perception, deduce qualitative experience" but this doesn't explain anything. It might help us frame experiments to test for the existence of qualitative experience, but one element of Chalmers's argument is that no such objectively verifiable experiment can be created. It's also hard to come to terms with the idea that our ball-sorting robot might be having qualitative experience.

If we are discarding solipsism from our epistemology, on what basis do we do so, and is that basis philosophically applicable to discarding the idea that my qualitative experience might be fundamentally different from someone else's? Just because I can conceive of a world in which what I experience as red is in fact experienced by someone else with no neural/optical flaws as what I would call yellow doesn't make that world logical. I would assume that if the object and lighting conditions are the same, and our neural and optical machinery is in good order, then we would both have the same experience of red when looking at a red object. To conceive otherwise would be baseless (purely metaphysical, with no implications for reality).

Comment by Brandon_Reinhart on Words as Mental Paintbrush Handles · 2008-03-02T20:42:08.000Z · LW · GW

Roko - "I don't think that" is not an explanation.

Comment by Brandon_Reinhart on Searching for Bayes-Structure · 2008-02-29T02:11:50.000Z · LW · GW

brent, if you search for "Bayesian" you'll get a fairly tight list of all relevant posts (for the most part). Start at the bottom and work your way up. Either that or you could just go back six months and start working your way through the archives.

Maybe it is time someone wrote a summary page and indexed this work.

Comment by Brandon_Reinhart on Buy Now Or Forever Hold Your Peace · 2008-02-04T22:53:36.000Z · LW · GW

Only some US cc processors will deny the transaction. The transaction falls under their category for betting & gambling, the same thing that prevents you from making cc transactions with online poker sites. But I've seen cases where these transactions are unblocked with certain banks.*

*Not me, of course.

Comment by Brandon_Reinhart on Trust in Math · 2008-01-16T02:11:42.000Z · LW · GW

Hey, so, I figure this might be a good place to post a slightly on topic question. I'm currently reading "Scientific Reasoning: The Bayesian Approach" by Howson and Urbach. It seemed like a good place to start to learn Bayesian reasoning, although I don't know where the "normal" place to start would be. I'm working through the proofs by hand, making sure I understand each conclusion before moving to the next.

My question is "where do I go next?" What's a good book to follow up with?

Also, after reading this and "0 and 1 are not probabilities" I ran into the exact cognitive dissonance that Eliezer alluded to with his statement that "this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1." The material is teaching that "P(t) == 1" is a fundamental theorem of the probability calculus and that "P(a) >= 0". These are then used in all succeeding derivations. After re-reading, I came to understand Eliezer's practical disagreement with a theoretical method.

So another question is: has anyone gone through the exercise of re-deriving the probability calculus, perhaps using "0 < P(a) < 1" or something similar, instead of the two previous rules?
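
For reference, a minimal sketch of the marginalization property in question (standard notation, not from Howson and Urbach specifically), using an arbitrary made-up joint distribution: the identity P(x) = sum over y of P(x, y), and the fact that the resulting marginals sum to 1, both lean on the rules "P(a) >= 0" and "P(whole space) == 1".

```python
# An arbitrary made-up joint distribution P(X, Y) over two binary variables.
joint = {
    ("x0", "y0"): 0.125,
    ("x0", "y1"): 0.375,
    ("x1", "y0"): 0.25,
    ("x1", "y1"): 0.25,
}

# The two rules in question: nonnegativity, and total probability equal to 1.
assert all(p >= 0 for p in joint.values())
assert abs(sum(joint.values()) - 1.0) < 1e-12

# Marginalizing over Y: P(X = x) = sum over y of P(X = x, Y = y).
marginal_x = {}
for (x, _y), p in joint.items():
    marginal_x[x] = marginal_x.get(x, 0.0) + p

print(marginal_x)                # {'x0': 0.5, 'x1': 0.5}
print(sum(marginal_x.values()))  # 1.0 -- the step that leans on P(whole space) == 1
```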

Comment by Brandon_Reinhart on My Strange Beliefs · 2007-12-31T05:05:05.000Z · LW · GW

The whole libertarianism vs. socialism thing is one area where transhumanism imports elements of cultishness. If you are already a libertarian and you become familiar with transhumanism, you will probably import your existing arguments against socialism into your transhumanist perspective. Same for socialism. So you see various transhumanist organizations having political leadership struggles between socialist and libertarian factions who would probably be having the same struggles if they were part of an international chess club or some such other group.

The whole thing becomes entrenched in debates about things like Transition Guides and what amounts to "how to implement transhumanist policy in a [socialist/libertarian] way that's best for everyone." I always thought these discussions were what amounted to discourse at the "fandom" level of the transhumanist community, but after reading some of Eliezer's posts about his own experiences at transhumanist/singularitarian events I see that it happens at all levels.

Half-formed thought I need to pursue more offline but I'll write it down now: If you say "I am a transhumanist" and you say "I am a libertarian" and then you try to find libertarian ways to meet transhumanist goals you have made your transhumanism subservient to your libertarianism. I think it is better to find transhumanist ways to meet libertarian goals. The fact that a group of transhumanists would derail a debate by getting into politics seems to express to me that the group has made transhumanism the subservient value. Which seems inelegant given that transhumanism is probably the simpler value. Seems like there's a possible post for my own blog brewing in there, but I have to think about it some.

Comment by Brandon_Reinhart on My Strange Beliefs · 2007-12-31T04:46:51.000Z · LW · GW

There are a large number of transhumanists who are socialists, not libertarians. In fact, as far as I can tell "libertarian transhumanism" is a distinctly American phenomenon. Saying that most transhumanists you know are libertarians may be true, but to assume that their experiences define the entire span of transhumanist belief would be creating an invalid generalization from too little evidence.

Comment by Brandon_Reinhart on Guardians of Ayn Rand · 2007-12-18T17:09:00.000Z · LW · GW

Great post. You nailed my main issues with objectivism. I think the material is still worth reading. Rand considered herself a philosopher and seemed to feel there was a lot to be gained from telling her people to read more philosophy and broaden their horizons, but when it came to scientific works she never expressed much awareness of the "state of the art" of her time. In fact, her epistemology makes assumptions about the operation of the brain (in behaviorism and learning) that I'm not sure could be made correctly with the state of neuroscience and related disciplines at the time.

Comment by Brandon_Reinhart on When None Dare Urge Restraint · 2007-12-09T04:35:28.000Z · LW · GW

Comparing the lives lost in 9/11 to motorcycle accidents is a kind of moral calculus that fails to respect the deeper human values involved. I would expect people who die on motorcycles to generally understand the risks. They are making a choice to risk their lives in an activity. Their deaths are tragic, but not as tragic. The people who died in the WTC did not make a choice to risk their lives, unless you consider going to work in a high rise in America to be a risky choice. If you're doing moral calculus, you need to multiply in a factor for "not by known/accepted risk" to the deaths in the attack.

Tragedy of Death: (by Known / Accepted Risk) < (by Unknown Risk) < (by Aggressor Who Offers No Choice)

My last post, though, since The More I Post, The More I'm Probably Wrong.

Comment by Brandon_Reinhart on When None Dare Urge Restraint · 2007-12-09T04:15:21.000Z · LW · GW

"We will be safer after we conquer every potential enemy."

There are limits on our physical and moral capacity for making war. My post was simply pointing out that failing to respond to someone who actually attacks you can have increasingly dangerous results over time. That enemy leeches at your resources and learns how to become better at attacking you, while you gain nothing. There are plenty of potential enemies out there who aren't attacking us and may never attack us. They aren't gaining actual experience at attacking us. Their knowledge is only academic. As long as they don't attack us and we don't attack them, we may find our mutual interests transforming us into allies.

So while we could launch a crusade against the world, it doesn't seem to make sense if it has no chance of succeeding and would likely cost us everything we value. At the same time, though, we have to defend ourselves from the potential of an attack and plan for potential responses. Once one of those enemies actively attacks us, we have to defend ourselves (obviously) and then respond by counter-attacking, if capable, to discourage future attacks.

Arguing for responding violently to an attack is not an argument for pre-emptively attacking all potential enemies. There are many lines in the sand: resource limitations, economic limitations, moral limitations, etc.

You do hit on the core question: when is it right to preemptively attack another state? Also: what do we mean by 'right'? Strategically correct? Morally acceptable? It seems to me that popular wars will be morally acceptable wars and those will be wars of defense and wars against aggressors. Wars of aggression against non-aggressors would rarely be popular, except in cases of "revanchism" or by non-liberal states that control their population through nationalism. You would expect liberal states to generally not pursue wars of aggression.

If we follow that we cast a bit of light on why the "spreading democracy" meme has been popular among some. "Democracy" as a system has been conflated with classical liberalism. The idea being: conquer non-liberal states and institute democracies. The world then becomes safer, because liberal states prefer to resolve differences in ways that aren't physically violent. The flaw being that simply creating a democracy doesn't guarantee that the values of classical liberalism will be ... ah ... valued.

So yeah. I don't support knocking down the walls of potential enemies "just because."

Comment by Brandon_Reinhart on When None Dare Urge Restraint · 2007-12-09T02:34:40.000Z · LW · GW

Some very vehement responses.

If you believe invading Afghanistan was a correct choice then I'm not sure how you could say Iraq was a complete mistake. The invasion of Afghanistan was aimed at eliminating a state that offered aid and support to an enemy who would use that aid and support to project power to the US and harm her citizens or the citizens of other western states. Denying that aid and support would hope to achieve the purpose of reducing or eliminating the ability of the enemy to project power.

Any other state that might offer aid and support to the enemy would enable the enemy to rebuild their ability to project power. Iraq was one possible source of aid and support. Any Sunni state with sufficient reason to wish harm upon the west, with the desire to support organizations that might bring about that harm, and with the ability to provide aid and support to that end was (or is) a threat.

al Qaeda is now largely holed up in regions that do not offer much by way of aid and support, at least for now. al Qaeda may still be able to project limited power, but its ability to strike at the US in such a coordinated way has been significantly hampered.

The harms of 9/11 cannot be measured by the harms of the event alone. The economic damage and the lives lost are only a small part of a complete justification for a vigorous response. If we merely rebuilt the towers and moved on, we would have done nothing to deny an enemy the power to strike again. We would have done nothing to deny the enemy their ability to develop their offensive capacity. Without our interference and no change in the demeanor of the enemy, a second attack would likely have been larger and more damaging, as the enemy would have continued to develop offensive capacity and support while we stood aside.

Additionally, toppling two governments sends a strong message to other states that might harbor the enemy that they will be pursued and punished. Although it did not serve Russia or China politically to openly support US actions in the Middle East, it seems likely that both states had reason to desire an outcome in which the extremist groups were heavily disrupted. Of course, their ideal outcome would also involve a significant loss of prestige, financial power, and influence by the US as well.

If you allow an enemy to batter your gates, you could sleep easily knowing that you built your gates to be strong and withstand such assaults. Eventually, however, your enemy will learn the weaknesses of your gates and batter them down or circumvent them. You would have failed: not in the construction of your defenses, but by failing to hunt down your enemy and deny them the opportunity of future assaults.

It is just as unfortunate for the strategists that hatred and emotional fervor clouded the discussion of response. No right-minded military commander wishes to unnecessarily expend resources on a purposeless campaign. While it may be that a clearly reasoned discussion on response would not have led to as extensive a result, I believe that leaving the gates to attack those harboring the enemy would have been considered strategically sound.

Comment by Brandon_Reinhart on Fake Selfishness · 2007-11-08T21:16:45.000Z · LW · GW

My understanding is that the philosophy of rational self-interest, as forwarded by the Objectivists, contains a moral system founded first on the pursuit of maintaining a high degree of "conceptual" volitional consciousness and freedom as a human being. Anything that robs one's life or robs one's essential humanity is opposed to that value. The Objectivist preference for capitalism stems from a belief that capitalism is a system that does much to preserve this value (the essential freedom and humanity of individuals). Objectivists are classical libertarians, but not Libertarians (and in fact make much of their opposition to that party).

I believe that an Objectivist would welcome the challenges posed in the post above, but might not consider them a strong challenge to his beliefs simply because they aren't very realistic scenarios. Objectivists generally feel that ethics need not be crafted to cover every scenario under the sun, but instead act as a general guide to a principled life that upholds the pursuit of freedom and humanity.

"If you're genuinely selfish, then why do you want me to be selfish too? Doesn't that make you concerned for my welfare? Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"

In the long run, exploiting others seems likely to end up a dead-end road. It might be rational and rewarding in the short term, but ultimately it is destructive. Furthermore, it seems to be a violation of principle. If I believe in my own freedom and that I would not want to be misled, I should not attempt to rob others of their freedom or mislead them without significant compelling reason. Otherwise, I'm setting one standard for my own rights and another for others. By my example, then, there would be no objective ethical standard by which I could object to someone attempting to mislead or exploit me. After all, if I set a subjective standard for behavior, why shouldn't they? But this isn't rigorous logic and smacks of a rationalization, as referenced here:

"But what I really want to know is this: Did you start out by thinking that you wanted to be selfish, and then decide this was the most selfish thing you could possibly do? Or did you start out by wanting to convert others to selfishness, then look for ways to rationalize that as self-benefiting?"

The problem seems to be more general: argument with the intent of converting. That intent alone seems to cast suspicion on the proceedings. A rational person would, it seems to me, be willing to lay his arguments on the table for review and criticism and discussion. If, at some point in the future, others agree they are rational arguments and adopt them as beliefs then everyone should be happy because the objectives of truth and learning have been fulfilled. But "converting" demands immediate capitulation to the point of discussion. No longer is the discussion about the sharing of ideas: reward motivators have entered the room.

Self-edification that one's own view has been adopted by another seems to be a reward motive. Gratification that a challenge has been overcome seems to be a reward motive. Those motives soil the discussion.

And the one said, "You may be right about that last part," so I marked him down as intelligent.

The man is intelligent, not because he agreed with Eli's point, but because he was reviewing his beliefs in light of new information. His motive was not (at least not entirely) conversion, but genuine debate and learning.

"Intelligence is a dynamic system that takes in information about the world, abstracts regularities from that information, stores it in memories, and uses it knowledge about the world to form goals, make plans and implement them."

The speaker is doing just that. He might later choose to reject the new information, but at this time he is indicating that the new information is being evaluated.

Comment by Brandon_Reinhart on Fake Selfishness · 2007-11-08T15:02:16.000Z · LW · GW

Is providing answers to questions like "Would you do incredible thing X if condition Y was true" really necessary if thing X is something neither person would likely ever be able to do and condition Y is simply never going to happen? It seems easy to construct impossible moral challenges to oppose a particular belief, but why should beliefs be built around impossible moral edge cases? Shouldn't a person be able to develop a rational set of beliefs that does fail under extreme moral cases, but at the same time still holds a perfectly strong and non-contradictory position?

Comment by Brandon_Reinhart on Torture vs. Dust Specks · 2007-10-31T02:57:00.000Z · LW · GW

I wonder if my answers make me fail some kind of test of AI friendliness. What would the friendly AI do in this situation? Probably write poetry.

Comment by Brandon_Reinhart on Torture vs. Dust Specks · 2007-10-31T02:52:00.000Z · LW · GW

Dare I say that people may be overvaluing 50 years of a single human life? We know for a fact that some effect will be multiplied by 3^^^3 by our choice. We have no idea what strange and unexpected existential side effects this may have. It's worth avoiding the risk. If the question were posed with more detail, or with specific limitations on the nature of the effects, we might be able to answer more confidently. But to risk not only human civilization, but ALL POSSIBLE CIVILIZATIONS, you must be DAMN SURE you are right. 3^^^3 makes even incredibly small doubts significant.
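
For a sense of the scale involved, here is a small sketch of standard Knuth up-arrow notation (the function and the example values are mine, not something from the original post):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow notation a ^...^ b with n arrows: n = 1 is ordinary
    exponentiation, and each additional arrow iterates the operation below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
# 3^^^3 = up_arrow(3, 3, 3) = 3^^(3^^3): a power tower of 3s of height
# 7,625,597,484,987. Don't actually call it -- it is far beyond evaluation.
```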

Comment by Brandon_Reinhart on Torture vs. Dust Specks · 2007-10-30T18:02:00.000Z · LW · GW

> Would you condemn one person to be horribly tortured for fifty years without hope or rest, to save every qualia-experiencing being who will ever exist one blink?

That's assuming you're interpreting the question correctly. That you aren't dealing with an evil genie.

Comment by Brandon_Reinhart on Torture vs. Dust Specks · 2007-10-30T17:59:00.000Z · LW · GW

> For those who would pick TORTURE, what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.

If you mean: would I condemn all conscious beings to a googolplex years of torture to avoid universal annihilation from a big "dust crunch," my answer is still probably yes. The alternative is universal doom. At least the tortured masses might have some small chance of finding a solution to their problem at some point. Or at least a googolplex years might pass, leaving some future civilization free to prosper. The dust is absolute doom for all potential futures.

Of course, I'm assuming that 3^^^3 conscious beings are unlikely to ever exist, and so the dust would be applied over and over to the same people, causing the universe to be filled with dust. Maybe this isn't how the mechanics of the problem work.

Comment by Brandon_Reinhart on Torture vs. Dust Specks · 2007-10-30T17:15:00.000Z · LW · GW

What happens if there aren't 3^^^3 instanced people to get dust specks? Do those specks carry over, such that person #1 gets a second speck, and so on? If so, you would elect to have the person tortured for 50 years, for surely the alternative is to fill our universe with dust and annihilate all cultures and life.

Comment by Brandon_Reinhart on Bay Area Bayesians Unite! · 2007-10-28T02:25:06.000Z · LW · GW

As Alex says, just add an option for "lol, wut?" to every poll to weed out people who might otherwise vote randomly for the hell of it. :P

Should be an Austin, TX meet up. It's like the Bay Area, but a hell of a lot more affordable :)

Comment by Brandon_Reinhart on "Can't Say No" Spending · 2007-10-22T02:00:12.000Z · LW · GW

With the recent revelation that global remittances to poor countries total more than three times the size of the total US foreign aid budget, I would argue that we should completely eliminate the foreign aid budget. The public tax burden should be decreased by an equal amount. This might result in more workers with foreign families having a higher income, possibly increasing remittances further. Remittances seem like a more beneficial method of aiding other countries for several reasons. First, the money may be used more efficiently by individual foreign consumers than through large block grants to foreign governments. Second, Americans without foreign family would have higher discretionary income. Third, I believe this is a more moral policy, allowing individuals to dispose of their income as they see fit.

This policy should probably be supported by other pro-immigration policies.

Someone has probably developed this into a stronger policy argument, but this is just what occurred to me the other day.

Comment by Brandon_Reinhart on Congratulations to Paris Hilton · 2007-10-22T01:35:00.000Z · LW · GW

Eliezer, in what way do you mean "altruism" when you use it? I only ask for clarity.

I don't understand how altruism, as selfless concern for the welfare of others, enters into the question of supporting the singularity as a positive factor. This would open a path for a singularity in which I am destroyed to serve some social good. I have no interest in that. I would only want to support a singularity that benefits me. Similarly, if everyone else who supports the efforts to achieve singularity is simply altruistic, no one is looking out for their own welfare. Selfish concern (rational self-interest) seems to increase the chance for a safe singularity.

Comment by Brandon_Reinhart on "Science" as Curiosity-Stopper · 2007-09-03T21:59:37.000Z · LW · GW

Where is the science in Philosophy? I have recently been reading commentary on one philosopher's account of an epistemology based in perception, conceptualization, and abstraction. This commentary is paired with a critical analysis of the epistemologies of other philosophers, based on the Aristotelian foundations. While reading it, I thought "but there must be one true way the mind comes to terms with reality, a way based in the biology of the brain." A biology whose workings I don't understand and I suspect most philosophers do not understand. After all, one person can only learn so much. Still, it seems that any bold explanation of why we know what we know must be based on some understanding of the inner workings of the brain.

How much of philosophy is just another kind of curiosity-stopper? Or rather, are philosophers often building bridges out of "non-knowledge"? "Non-knowledge" being a made-up word to describe complicated explanations that lack truth value. Philosophers often test their theories by quizzing each other, one attempting to convince the other of a particular position. This kind of test doesn't seem sufficiently rigorous to be considered scientific.

Comment by Brandon_Reinhart on Fake Causality · 2007-08-23T18:37:07.000Z · LW · GW

I just wanted to say that this is the best damn blog I've read. The high level of regular, insightful, quality updates is stunning. Reading this blog, I feel like I've not just accumulated knowledge, but processes I can apply to continue to refine my understanding of how I think and how I accumulate further knowledge.

I am honestly surprised, with all the work the contributors do in other realms, that you are able to maintain this high level of quality output on a blog.

Recently I have been continuing my self-education in ontology and epistemology. Some sources are more rigorous than others. Reading Rand, for example, shows an author who seems to utilize "phlogiston"-like mechanics to describe her ethical solutions to moral problems: explanations that use convincing but unbounded turns of phrase instead of a meaningful process of explanation. It can be very challenging to read and process new data and also maintain a lack of bias (or at least an awareness of bias that can be accounted for and challenged). It requires a very high level of active, conscious information processing. Rereading, working exercises, and thinking through what a person is saying and why they are saying it. This blog has provided me lots of new tools to improve my methods of critical thinking.

Rock on.