Posts

Playing the Meta-game 2009-12-25T10:06:46.559Z

Comments

Comment by Technologos on Dissenting Views · 2015-06-20T12:53:41.943Z · LW · GW

VNM utility is a necessary consequence of its axioms but doesn't entail a unique utility function; as such, the ability to prevent Dutch Books derives more from VNM's assumption of a fixed total ordering of outcomes than from anything else.
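For reference, the standard uniqueness result being invoked here (a textbook fact, not anything specific to this thread) is that a VNM utility function is pinned down only up to positive affine transformation:

$$u'(x) = a\,u(x) + b, \qquad a > 0$$

Any such u' represents exactly the same preferences, and blocks exactly the same Dutch Books, as u itself.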

Comment by Technologos on Coffee: When it helps, when it hurts · 2010-03-11T21:43:50.738Z · LW · GW

Or you could just take more, so that the nervousness is swamped by the general handshakery...

Comment by Technologos on Rationality quotes: March 2010 · 2010-03-01T20:15:48.901Z · LW · GW

Seth appears to be contrasting a "job" with things like "being an entrepreneur in business for oneself," so perhaps the first of your options.

Comment by Technologos on Hayekian Prediction Markets? · 2010-02-22T19:13:10.442Z · LW · GW

I think much of the problem here comes from something of an equivocation on the meaning of "economic disaster." A country can post high and growing GDP numbers without benefiting its citizens as much as a country with weaker numbers; the linked paper notes that

real per capita private consumption was lower than straight GDP per capita figures suggest because of very high investment rates and high military expenditures, and the quality of goods that that consumption expenditure could bring was even lower still.

Communism is good at maintaining top-line growth in an economy because it can simply mandate spending. In much the same way as US government spending can directly add to GDP growth (even if incurring substantial debt), the Soviet Union could make massive military expenditures even while running factories that produced goods based not on consumer desires but on state beliefs about those desires or needs.

In short, communism was not an economic disaster in that it effectively industrialized a great many nations and brought consistent top-line growth. It was an economic disaster in that state power allowed or created widespread famines and poor production of consumer goods.

Comment by Technologos on Open Thread: February 2010, part 2 · 2010-02-21T03:05:37.047Z · LW · GW

My understanding is that one primary issue with frequentism is that it can be so easily abused/manipulated to support preferred conclusions, and I suspect that's the subject of the article. Frequentism may not have "caused the problem," per se, but perhaps it enabled it?

Comment by Technologos on Demands for Particular Proof: Appendices · 2010-02-16T07:41:25.710Z · LW · GW

And in particular, there's good reason to believe that brains are still evolving at a decent pace, whereas cellular mechanisms look to have largely settled a long while back.

Comment by Technologos on The AI in a box boxes you · 2010-02-04T00:52:10.505Z · LW · GW

Oh, I meant that saying it was going to torture you if you didn't release it could have been exactly what it needed to say to get you to release it.

Comment by Technologos on The AI in a box boxes you · 2010-02-02T17:09:32.520Z · LW · GW

Perhaps it does--and already said it...

Comment by Technologos on Complexity of Value ≠ Complexity of Outcome · 2010-01-31T20:21:42.370Z · LW · GW

What you say is true while the Koran and the Bible are referents, but when A and B become "Mohammed is the last prophet, who brought the full truth of God's will" and "Jesus was a literal incarnation of God" (the central beliefs of the religions that hold the respective books sacred), then James' logic holds.

Comment by Technologos on Complexity of Value ≠ Complexity of Outcome · 2010-01-31T20:04:50.008Z · LW · GW

I realize how arrogant it must seem for young, uncredentialled (not even a Bachelor's!) me to conclude that brilliant professional philosophers who have devoted their entire lives to studying this topic are simply confused. But, disturbing as it may be to say ... that's how it really looks.

Perhaps the fact that they have devoted their lives to a topic suggests that they have a vested interest in making it appear not to be nonsense. Cognitive dissonance can be tricky even for the pros.

Comment by Technologos on Far & Near / Runaway Trolleys / The Proximity Of (Fat) Strangers · 2010-01-26T02:24:25.872Z · LW · GW

What if the problem was reframed such that nobody ever found out about the decision and thereby that their estimates of risk remained unchanged?

I presented a reason why intuitions treat the scenarios differently, and why the intuitions are correct in doing so. That reason is consistent with the problem as stated. Assumption of risk most certainly is a factor, and a justifiable one.

It is certainly possible that there is some underlying utilitarian rationale being used. Reframing the problem as I suggest above might provide something of a test of the reason you provided, if an imperfect one (can we really ignore intuitions on command?).

Comment by Technologos on Far & Near / Runaway Trolleys / The Proximity Of (Fat) Strangers · 2010-01-26T02:20:02.901Z · LW · GW

I have a different interpretation of the LCPW here, though. The LCPW is supposed to be the one that isolates the moral quantity of interest--in this case, the decision to push or not, or to switch tracks--and is specifically designed to exclude answers that consider factors (realistic or not) that sidestep the issue.

I'd say the LCPW is one in which nobody will ever hear about the decision, and thus in which any ancillary effects are neutralized.

Comment by Technologos on Normal Cryonics · 2010-01-22T15:40:41.672Z · LW · GW

buying life insurance

For what it's worth, I've heard people initially had many of the same hangups about life insurance, saying that they didn't want to gamble on death. The way that salespeople got around that was by emphasizing that the contracts would protect the family in event of the breadwinner's death, and thus making it less of a selfish thing.

I wonder if cryo needs a similar marketing parallel. "Don't you want to see your parents again?"

Comment by Technologos on That Magical Click · 2010-01-22T15:33:20.611Z · LW · GW

Could you supply a (rough) probability derivation for your concerns about dystopian futures?

I suspect the reason people aren't bringing those possibilities up is that, thanks to a variety of elements (in particular the standard Less Wrong understanding of FAI derived from the Sequences), LWers assign a fairly high conditional probability Pr(Life after cryo will be fun | anybody can and bothers to nanotechnologically reconstruct my brain), along with at least a modest probability of that condition actually occurring.

Comment by Technologos on Costs to (potentially) eternal life · 2010-01-22T03:00:37.919Z · LW · GW

Does anyone really expect that this population would not respond to its incentives to avoid more danger? Anecdotes aside; do you expect them to join the military with the same frequency, be firemen with the same frequency, to be doctors administering vaccinations in jungles with the same frequency?

Agreed--indeed, I suspect that one of the first steps to fundamentally altering the priorities of society may be the invention of methods to materially prolong life, such that it really does become an unspeakable tragedy to lose somebody permanently.

Comment by Technologos on Costs to (potentially) eternal life · 2010-01-22T02:58:34.877Z · LW · GW

I was the lead developer of an AGI that is scheduled to hit start in three weeks. I quit when I saw that the 'Friendliness' intended is actually a dystopia and my protests were suppressed. I have just cancelled my cryonics membership and the reason your cryonic revival is dependent on killing me is that I am planning to sabotage the AI.

Is it weird that my first reaction is to ask her specific questions about the Sequences to test the likelihood of that statement's veracity?

Comment by Technologos on Communicating effectively: form and content · 2010-01-22T02:52:13.320Z · LW · GW

Your opponent must not see (consciously or subconsciously) your rhetoric as an attempt to gain status at zir expense.

To quote Daniele Vare: "Diplomacy is the art of letting someone have your way."

Comment by Technologos on Communicating effectively: form and content · 2010-01-22T02:49:49.659Z · LW · GW

Agreed, and I suspect that certainty and abrasiveness are also less problematic when truth is not what is being sought.

Comment by Technologos on Winning the Unwinnable · 2010-01-21T21:10:47.526Z · LW · GW

This would be entirely true if instead of utiles you had said dollars or other resources. As it is, it is false by definition: if two choices have the same expected utility (the expected value of the utility function), then the chooser is indifferent between them. You are taking utility as an argument of something like a meta-utility function, which is an interesting discussion to have (which utility function we might want to have) but not the same as standard decision theory.
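A minimal worked instance of the definitional point (my numbers, chosen purely for illustration): let lottery A pay 0 or 10 utiles with equal probability, and lottery B pay 5 utiles for certain. Then

$$E[u(A)] = \tfrac{1}{2}(0) + \tfrac{1}{2}(10) = 5 = E[u(B)]$$

so the chooser is indifferent between A and B by definition. Any residual pull toward the sure thing means the numbers weren't really utiles; risk aversion over dollars lives in the concavity of the utility function, before the expectation is taken.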

Comment by Technologos on Winning the Unwinnable · 2010-01-21T21:01:25.362Z · LW · GW

I think the uncomfortable part is that bill's (and my) experience suggests that people are even more risk-averse than logarithmic functions would indicate.

I'd suggest that any consistent human utility function (prospect theory notwithstanding) lies somewhere between log(x) and log(log(x))... If I were given the option of a 50-50 chance of squaring my wealth or taking the square root, I would opt for the gamble.
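Spelling that gamble out under the two bracketing functions (for wealth x > 1, so both are defined): a log agent takes it, while a log-log agent is exactly indifferent, since

$$E[\log] = \tfrac{1}{2}\log(x^2) + \tfrac{1}{2}\log(\sqrt{x}) = 1.25\,\log x > \log x$$

$$E[\log\log] = \tfrac{1}{2}\log(2\log x) + \tfrac{1}{2}\log\!\left(\tfrac{1}{2}\log x\right) = \log\log x$$

where the ±log 2 terms in the second line cancel. Accepting the square/square-root gamble is thus precisely what separates utility functions near log(x) from those near log(log(x)).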

Comment by Technologos on Winning the Unwinnable · 2010-01-21T20:25:38.901Z · LW · GW

Logarithmic utility functions are already risk-averse by virtue of their concavity. The expected value of a 50% chance of doubling or halving is a 25% gain.
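Explicitly, for wealth x:

$$E[\text{wealth}] = \tfrac{1}{2}(2x) + \tfrac{1}{2}\!\left(\tfrac{x}{2}\right) = 1.25x$$

$$E[\log] = \tfrac{1}{2}\log(2x) + \tfrac{1}{2}\log\!\left(\tfrac{x}{2}\right) = \log x$$

so the gamble carries a 25% expected gain in wealth, yet a log-utility agent is exactly indifferent to it.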

Comment by Technologos on Normal Cryonics · 2010-01-21T19:31:55.415Z · LW · GW

I should note that most of the organizations we are talking about (Alcor, ACS, CI) are non-profits.

Comment by Technologos on Sorting Out Sticky Brains · 2010-01-19T16:59:35.733Z · LW · GW

I didn't mind the old one, but I do like the "sticky brains" label that we can use for this concept in the future.

Comment by Technologos on The Prediction Hierarchy · 2010-01-19T09:01:36.004Z · LW · GW

Agreed--the trick is that being wrong "only once" is deceptive. I may be wrong more than once on a one-in-forty-million chance. But I may also be wrong zero times in 100 million tries, on a problem as frequent and well-understood as the lottery, and I'm hesitant to say that any reading problems I may have would bias the test toward more lucrative mistakes.
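Taking the comment's numbers literally as independent trials (my framing, just to make the point quantitative): the chance that an event of probability p = 1/(4×10⁷) never occurs in n = 10⁸ tries is

$$(1 - p)^n \approx e^{-np} = e^{-2.5} \approx 0.082$$

so even though the expected number of occurrences is 2.5, "zero times in 100 million tries" still happens about 8% of the time--which is the sense in which being wrong "only once" is a deceptive summary.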

Comment by Technologos on Advice for AI makers · 2010-01-17T21:16:23.519Z · LW · GW

an unprecedented and unlikely phenomenon

Possible precedents: the Library of Alexandria and the Dark Ages.

Comment by Technologos on Advice for AI makers · 2010-01-16T22:21:25.493Z · LW · GW

Definitely. Eliezer's success perhaps marks an upper bound on the minimum intelligence necessary to pull that off.

Comment by Technologos on Advice for AI makers · 2010-01-16T21:23:54.233Z · LW · GW

Who said he was? If Eliezer can convince somebody to let him out of the box--for a financial loss no less--then certainly a transhuman AI can, right?

Comment by Technologos on Advice for AI makers · 2010-01-16T20:56:23.420Z · LW · GW

Sure, though the mechanism I was referring to is "it can convince its handler(s) to let it out of the box through some transhuman method(s)."

Comment by Technologos on Advice for AI makers · 2010-01-16T20:32:26.502Z · LW · GW

This is essentially the AI box experiment. Check out the link to see how even an AI that can only communicate with its handler(s) might be lethal without guaranteed Friendliness.

Comment by Technologos on The Wannabe Rational · 2010-01-16T19:03:27.899Z · LW · GW

Indeed, I agree--I meant that it doesn't matter what conclusions you hold as much as how you interact with people as you search for them.

Comment by Technologos on The Wannabe Rational · 2010-01-16T00:45:27.199Z · LW · GW

I agree with Kevin that belief is insufficient for exclusion/rejection. Best I can tell, it's not so much what you believe that matters here as what you say and do: if you sincerely seek to improve yourself and make this clear without hostility, you will be accepted no matter the gap (as you have found with this post and previous comments).

The difference between the beliefs Kevin cited lies in the effect they may have on the perspective from which you can contribute ideas. Jefferson's deism had essentially no effect on his political and moral philosophizing (at least, his work could easily have been produced by an atheist). Pat Robertson's religiosity has a great deal of effect on what he says and does, and that would cause a problem.

The fact that you wrote this post suggests you are in the former category, and I for one am glad you're here.

Comment by Technologos on In defense of the outside view · 2010-01-15T16:48:46.170Z · LW · GW

To be clear, I wasn't arguing against applying the outside view--just against the belief that the outside view gives AGI a prior/outside view expected chance of success of (effectively) zero. The outside view should incorporate the fact that some material number of technologies not originally anticipated or even conceived do indeed materialize: we expected flying cars, but we got the internet. Even a 5% chance of Singularity seems more in line with the outside view than the 0% claimed in the reference class article, no?

I agree with your comment on the previous post, incidentally, that the probability of the Singularity as conceived by any individual or even LW in general is low; the possible types of Singularity are so great that it would be rather shocking if we could get it right from our current perspective. Again, I was responding only to the assertion that the outside view shows no successes for the class of breakthroughs containing AGI/cryo/Singularity.

I should note too that the entirety of the quotation you ascribe to me is originally from Eliezer, as the omitted beginning of the quoted sentence indicates.

Comment by Technologos on Advice for AI makers · 2010-01-14T15:18:17.864Z · LW · GW

There is a difference between giving something negative utility and giving it decreasing marginal utility. It's sufficient to give the AI exponents strictly between zero and one for all terms in a positive polynomial utility function, for instance. That would be effectively "inputting" the marginal utility of resources, given any current state of the world.
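A concrete sketch of the kind of function meant (my notation; "polynomial" here loosely admits fractional powers): over resource quantities x_1, ..., x_n, take

$$U(x) = \sum_i a_i x_i^{c_i}, \qquad a_i > 0,\; 0 < c_i < 1$$

Every term is positive and increasing, yet each marginal utility ∂U/∂x_i = a_i c_i x_i^{c_i − 1} falls toward zero as x_i grows, so diminishing returns to resources are built into the function rather than supplied as a separate input.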

Comment by Technologos on High Status and Stupidity: Why? · 2010-01-12T17:54:59.392Z · LW · GW

I propose a further hypothesis: high-status people have internalized Laws 4, 5, and 46 of the 48 Laws of Power, but especially Law 1: Never Outshine the Master.

After years of practice in switching between seeming competent relative to underlings and less so relative to superiors, they develop the ability to segregate audiences as you described.

Comment by Technologos on Savulescu: "Genetically enhance humanity or face extinction" · 2010-01-12T06:40:34.634Z · LW · GW

Crime is down during the current recession. It's possible that the shock simply hasn't been strong enough, but it may be evidence nonetheless.

I think Hanson's hypothesis was more about true catastrophes, though--if some catastrophe devastated civilization and we were thrown back into widespread starvation, people wouldn't worry about morality.

Comment by Technologos on Open Thread: January 2010 · 2010-01-12T05:37:27.887Z · LW · GW

I was connecting it to and agreeing with Zack M Davis' thought about utilitarianism. Even with Roko's utility function, if you have to choose between two lotteries over outcomes, you are still minimizing the expected number of rights violations. If you make your utility function lexicographic in rights, then once you've done the best you can with rights, you're still a utilitarian in the usual sense within the class of choices that minimizes rights violations.
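One way to write the lexicographic version down (my formalization, for concreteness): with V(L) the expected number of rights violations under lottery L and E[u(L)] the ordinary expected utility, prefer A to B iff

$$V(A) < V(B) \quad\text{or}\quad \bigl(V(A) = V(B) \;\text{and}\; E[u(A)] > E[u(B)]\bigr)$$

which is exactly the sense in which such an agent first minimizes expected violations and then acts as a standard utilitarian among the minimizers.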

Comment by Technologos on Open Thread: January 2010 · 2010-01-11T21:41:54.349Z · LW · GW

It does occur to me that I wasn't objecting to the hypothetical existence of said function, only that rights aren't especially useful if we give up on caring about them in any world where we cannot prevent literally all violations.

Comment by Technologos on Open Thread: January 2010 · 2010-01-11T21:31:49.898Z · LW · GW

And if you cannot act such that 0 rights are violated? Your function would seem to suggest that you are indifferent between killing a dictator and committing the genocide he would have caused, since the number of rights violations is (arguably, of course) in both cases positive.
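To make the objection concrete (my formalization, which may or may not match the function as Roko intended it): if the utility function is something like

$$U(w) = \begin{cases} 1 & \text{if no rights are violated in } w \\ 0 & \text{otherwise} \end{cases}$$

then every world containing at least one violation scores the same 0, and killing the dictator (one violation) comes out exactly indifferent to the genocide (millions).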

Comment by Technologos on Reference class of the unclassreferenceable · 2010-01-11T18:12:18.993Z · LW · GW

Perhaps I was simply unclear. Both my immediately prior comment and its grandparent were arguing only that there should be a nonzero expectation of a technological Singularity, even from a reference class standpoint.

The reference class of predictions about the Singularity can, as I showed in the grandparent, include a wide variety of predictions about major changes in the human condition. The complement or negation of that reference class is a class of predictions that things will remain largely the same, technologically.

Often, when people appear to be making an obviously wrong argument in this forum, it's a matter of communication rather than massive logic failure.

Comment by Technologos on Disclosure vs. Bans: Reply to Robin Hanson · 2010-01-11T17:58:21.716Z · LW · GW

I'd heard it re: the smoking bans implemented in Minneapolis; I'm not surprised that Canada takes an especially paternalist position on the matter.

Comment by Technologos on Disclosure vs. Bans: Reply to Robin Hanson · 2010-01-11T06:57:35.224Z · LW · GW

Also, more than votes are gained when demonizing smokers--there are also the smokers' tax dollars.

Comment by Technologos on Disclosure vs. Bans: Reply to Robin Hanson · 2010-01-11T06:52:14.086Z · LW · GW

For what it's worth, the argument I'd heard--not that I agree with it, to be clear--was that visitors/patrons weren't the issue: the law was designed to essentially extend safe-work-environment laws to bars. Thus, it was the employees who were the at-risk party.

Comment by Technologos on The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom · 2010-01-10T23:14:37.709Z · LW · GW

Best I can tell, Science is just a particularly strong form (/subset) of Bayesian evidence. Since it attempts (when done well) to control for many potentially confounding factors and isolate true likelihoods, we can have more confidence in the strength of the evidence thus obtained than we could from general observations.

Comment by Technologos on Dennett's "Consciousness Explained": Prelude · 2010-01-10T23:08:08.319Z · LW · GW

Agreed, and a lot of modern fields, including many of the natural and social sciences, derive from philosophers' framework-establishing questions. The trick is that we then credit the derived fields, rather than philosophy, with solving the original questions.

Philosophy doesn't really solve questions in itself; instead, it allows others to solve them.

Comment by Technologos on Consciousness · 2010-01-09T20:14:43.268Z · LW · GW

I wonder if "How does neurons firing cause us to have a subjective experience?" might be unintentionally begging Mitchell_Porter's question. Best I can tell, neurons firing is having a subjective experience, as you more or less say right afterwards.

Comment by Technologos on Reference class of the unclassreferenceable · 2010-01-09T20:03:24.885Z · LW · GW

Even if we prefer to frame the reference class that way, we can instead note that anybody who predicted that things would remain the way they are (in any of the above categories) would have been wrong. People making that prediction in the last century have been wrong with increasing speed. As Eliezer put it, "beliefs that the future will be just like the past" have a zero success rate.

Perhaps the inventions listed above suggest that it's unwise to assign 0% chance to anything on the basis of present nonexistence, even if you could construct a reference class that has that success rate.

Either way, people who predicted that human life would be lengthened considerably, that humanity would fundamentally change in structure, or that some people would interact with beings that appear nigh-omnipotent have all been right with some non-zero success rate, and there's no particular reason to reject those data.

Comment by Technologos on Open Thread: January 2010 · 2010-01-09T04:34:06.898Z · LW · GW

That's not uncommon. Villains act, heroes react.

I interpreted Eliezer as saying that that was a cause of the stories' failure or unsatisfactory nature, attributing this to our desire to feel like decisions come from within even when driven by external forces.

Comment by Technologos on Reference class of the unclassreferenceable · 2010-01-08T17:46:53.597Z · LW · GW

I'm perfectly willing to grant that, over the scope of human history, the reference classes for cryo/AGI/Singularity have produced near-0 success rates. I'd modify the classes slightly, however:

  • Inventions that extend human life considerably: Penicillin, if nothing else. Vaccinations. Clean-room surgery.
  • Inventions that materially changed the fundamental condition of humanity: Agriculture. Factories/mass production. Computers.
  • Interactions with beings that are so relatively powerful that they appear omnipotent: Many colonists in the Americas were seen this way. Similarly with the cargo cults in the Pacific islands.

The point is, each of these reference classes, given a small tweak, has experienced infrequent but nonzero successes--and that over the course of all of human history! Once we update the "all of human history" reference class/prior to account for the last century--in which technology has developed faster than in probably the entire previous millennium--the posterior ends up looking much more promising.

Comment by Technologos on Fictional Evidence vs. Fictional Insight · 2010-01-08T17:29:06.908Z · LW · GW

Agreed. Part of the reason I love reading Asimov is that he focuses so much on the ideas he's presenting, without much attempt to invest the reader emotionally in the characters. I find the latter impairs my ability to synthesize useful general truths from fiction (especially short stories, my favorite form of Asimov).

Comment by Technologos on Consciousness · 2010-01-08T17:22:42.850Z · LW · GW

I defer to Wittgenstein: the limits of our language are the limits of our world. We can literally ask the questions above, but I cannot find meaning in them. Blueness, computational states, time, and aboutness do not seem to me to have any implementation in the world beyond the ones you reject as inadequate, and I simply don't see how we can speak meaningfully (that is, in a way that allows justification or pursues truth) about things outside the observable universe.