Comments
A VNM utility function is a necessary consequence of the VNM axioms, but the axioms don't entail a unique utility function; as such, the ability to prevent Dutch books derives more from VNM's assumption of a fixed total ordering over outcomes than from anything else.
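(To make the non-uniqueness concrete, this is just the standard representation fact: if u represents preferences satisfying the axioms, then so does

u'(x) = a*u(x) + b, for any a > 0,

so the axioms pin u down only up to positive affine transformation; no single canonical utility function falls out of them.)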
Or you could just take more, so that the nervousness is swamped by the general handshakery...
Seth appears to be contrasting a "job" with things like "being an entrepreneur in business for oneself," so perhaps the first of your options.
I think much of the problem here comes from something of an equivocation on the meaning of "economic disaster." A country can post high and growing GDP numbers without benefiting its citizens as much as a country with weaker numbers; the linked paper notes that
"real per capita private consumption was lower than straight GDP per capita figures suggest because of very high investment rates and high military expenditures, and the quality of goods that that consumption expenditure could bring was even lower still."
Communism is good at maintaining top-line growth in an economy because it can simply mandate spending. In much the same way that US government spending can directly add to GDP growth (even while incurring substantial debt), the Soviet Union could make massive military expenditures even while running factories that produced goods based not on consumer desires but on state beliefs about those desires or needs.
In short, communism was not an economic disaster in that it effectively industrialized a great many nations and brought consistent top-line growth. It was an economic disaster in that state power allowed or created widespread famines and poor production of consumer goods.
My understanding is that one primary issue with frequentism is that it can be so easily abused/manipulated to support preferred conclusions, and I suspect that's the subject of the article. Frequentism may not have "caused the problem," per se, but perhaps it enabled it?
And in particular, there's good reason to believe that brains are still evolving at a decent pace, whereas it looks like cellular mechanisms largely settled down a long while back.
Oh, I meant that saying it was going to torture you if you didn't release it could have been exactly what it needed to say to get you to release it.
Perhaps it does--and already said it...
What you say is true while the Koran and the Bible are referents, but when A and B become "Mohammed is the last prophet, who brought the full truth of God's will" and "Jesus was a literal incarnation of God," (the central beliefs of the religions that hold the respective books sacred) then James' logic holds.
I realize how arrogant it must seem for young, uncredentialled (not even a Bachelor's!) me to conclude that brilliant professional philosophers who have devoted their entire lives to studying this topic are simply confused. But, disturbing as it may be to say ... that's how it really looks.
Perhaps the fact that they have devoted their lives to a topic suggests that they have a vested interest in making it appear not to be nonsense. Cognitive dissonance can be tricky even for the pros.
What if the problem were reframed such that nobody ever found out about the decision, so that people's estimates of risk remained unchanged?
I presented a reason why intuitions treat the scenarios differently, and why the intuitions are correct in doing so. That reason is consistent with the problem as stated. Assumption of risk most certainly is a factor, and a justifiable one.
It is certainly possible that there is some underlying utilitarian rationale being used. Reframing the problem as I suggest above might provide something of a test of the reason you provided, if an imperfect one (can we really ignore intuitions on command?).
I have a different interpretation of the LCPW here, though. The LCPW is supposed to be the one that isolates the moral quantity of interest--in this case, the decision to push or not, or to switch tracks--and is specifically designed to exclude answers that consider factors (realistic or not) that sidestep the issue.
I'd say the LCPW is one in which nobody will ever hear about the decision, and thus in which any ancillary effects are neutralized.
"buying life insurance"
For what it's worth, I've heard people initially had many of the same hangups about life insurance, saying that they didn't want to gamble on death. The way salespeople got around that was by emphasizing that the contracts would protect the family in the event of the breadwinner's death, thus making the purchase less of a selfish thing.
I wonder if cryo needs a similar marketing parallel. "Don't you want to see your parents again?"
Could you supply a (rough) probability derivation for your concerns about dystopian futures?
I suspect the reason people aren't bringing those possibilities up is that, thanks in particular to the standard Less Wrong understanding of FAI derived from the Sequences, LWers assign a fairly high conditional probability Pr(Life after cryo will be fun | anybody can and bothers to nanotechnologically reconstruct my brain), along with at least a modest probability of that condition actually occurring.
Does anyone really expect that this population would not respond to its incentives to avoid more danger? Anecdotes aside: do you expect them to join the military with the same frequency, to be firemen with the same frequency, or to be doctors administering vaccinations in jungles with the same frequency?
Agreed--indeed, I suspect that one of the first steps to fundamentally altering the priorities of society may be the invention of methods to materially prolong life, such that it really does become an unspeakable tragedy to lose somebody permanently.
I was the lead developer of an AGI that is scheduled to hit start in three weeks. I quit when I saw that the 'Friendliness' intended is actually a dystopia and my protests were suppressed. I have just cancelled my cryonics membership and the reason your cryonic revival is dependent on killing me is that I am planning to sabotage the AI.
Is it weird that my first reaction is to ask her specific questions about the Sequences to test the likelihood of that statement's veracity?
Your opponent must not see (consciously or subconsciously) your rhetoric as an attempt to gain status at zir expense.
To quote Daniele Vare: "Diplomacy is the art of letting someone have your way."
Agreed, and I suspect that certainty and abrasiveness are also less problematic when truth is not what's being sought.
This would be entirely true if instead of utiles you had said dollars or other resources. As it is, it is false by definition: if two choices have the same expected utility (expected value of the utility function), then the chooser is indifferent between them. You are treating utility as an argument to something like a meta-utility function, which is an interesting discussion to have (which utility function we might want to have) but not the same as standard decision theory.
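A toy illustration of the dollars/utiles gap, with made-up numbers and u(x) = log(x) purely for concreteness: a sure $50 and a 50-50 gamble over $10 and $90 have the same expected dollars,

0.5*10 + 0.5*90 = 50,

but not the same expected utility,

0.5*log(10) + 0.5*log(90) ≈ 3.40 < log(50) ≈ 3.91,

so the chooser prefers the sure thing. If those numbers had been utiles rather than dollars, equal expectations would mean indifference by definition.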
I think the uncomfortable part is that bill's (and my) experience suggests that people are even more risk-averse than logarithmic functions would indicate.
I'd suggest that any consistent candidate for a human utility function (prospect theory notwithstanding) lies somewhere between log(x) and log(log(x))... If I were given a 50-50 gamble between squaring my wealth and taking its square root, I would opt for the gamble.
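To unpack why that gamble probes exactly the range between log and log(log) (my arithmetic, assuming wealth x > 1 so everything is well-defined): under u = log, the gamble gives

0.5*log(x^2) + 0.5*log(sqrt(x)) = 1.25*log(x) > log(x),

so a log-utility agent takes it; under u = log(log), it gives

0.5*log(2*log(x)) + 0.5*log(0.5*log(x)) = log(log(x)),

which is exact indifference. Anyone strictly more risk-averse than log(log) would refuse.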
Logarithmic utility functions are already risk-averse by virtue of their concavity: the expected value of a 50% chance of doubling or halving your wealth is a 25% gain, yet a log-utility agent is exactly indifferent to that gamble.
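The arithmetic, for concreteness: starting from wealth x,

E[wealth] = 0.5*(2x) + 0.5*(x/2) = 1.25x,

a 25% expected gain, while

E[log(wealth)] = 0.5*log(2x) + 0.5*log(x/2) = log(x) + 0.5*(log 2 - log 2) = log(x),

so the gamble leaves expected log-wealth unchanged despite its positive expected dollar value; that is the sense in which the concavity already encodes risk aversion.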
I should note that most of the organizations we are talking about (Alcor, ACS, CI) are non-profits.
I didn't mind the old one, but I do like the "sticky brains" label that we can use for this concept in the future.
Agreed--the trick is that being wrong "only once" is deceptive. I may be wrong more than once on a one-in-forty-million chance. But I may also be wrong zero times in 100 million tries, on a problem as frequent and well-understood as the lottery, and I'm hesitant to say that any reading problems I may have would bias the test toward more lucrative mistakes.
"an unprecedented and unlikely phenomenon"
Possible precedents: the Library of Alexandria and the Dark Ages.
Definitely. Eliezer's performance gives us something like an upper bound on the amount of intelligence necessary to pull that off.
Who said he was? If Eliezer can convince somebody to let him out of the box--for a financial loss no less--then certainly a transhuman AI can, right?
Sure, though the mechanism I was referring to is "it can convince its handler(s) to let it out of the box through some transhuman method(s)."
This is essentially the AI box experiment. Check out the link to see how even an AI that can only communicate with its handler(s) might be lethal without guaranteed Friendliness.
Indeed, I agree--I meant that it doesn't matter what conclusions you hold as much as how you interact with people as you search for them.
I agree with Kevin that belief is insufficient for exclusion/rejection. Best I can tell, it's not so much what you believe that matters here as what you say and do: if you sincerely seek to improve yourself and make this clear without hostility, you will be accepted no matter the gap (as you have found with this post and previous comments).
The difference between the beliefs Kevin cited lies in the effect they may have on the perspective from which you can contribute ideas. Jefferson's deism had essentially no effect on his political and moral philosophizing (at least, his work could easily have been produced by an atheist). Pat Robertson's religiosity has a great deal of effect on what he says and does, and that would cause a problem.
The fact that you wrote this post suggests you are in the former category, and I for one am glad you're here.
To be clear, I wasn't arguing against applying the outside view--just against the belief that the outside view gives AGI a prior/outside view expected chance of success of (effectively) zero. The outside view should incorporate the fact that some material number of technologies not originally anticipated or even conceived do indeed materialize: we expected flying cars, but we got the internet. Even a 5% chance of Singularity seems more in line with the outside view than the 0% claimed in the reference class article, no?
I agree with your comment on the previous post, incidentally, that the probability of the Singularity as conceived by any individual or even LW in general is low; the possible types of Singularity are so great that it would be rather shocking if we could get it right from our current perspective. Again, I was responding only to the assertion that the outside view shows no successes for the class of breakthroughs containing AGI/cryo/Singularity.
I should note too that the entirety of the quotation you ascribe to me is originally from Eliezer, as the omitted beginning of the quoted sentence indicates.
There is a difference between giving something negative utility and giving it decreasing marginal utility. It's sufficient, for instance, to give the AI a utility function that is a positive polynomial in resources with every exponent strictly between zero and one. That would effectively "input" the marginal utility of resources, given any current state of the world.
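A minimal sketch of why that works, using a single resource x and a hypothetical term U(x) = c*x^a with c > 0 and 0 < a < 1: the marginal utility is

dU/dx = c*a*x^(a-1),

which is always positive but strictly decreasing in x (since a - 1 < 0). More resources always help, each additional unit helps less, and nothing is ever assigned negative utility; a sum of such terms behaves the same way term by term.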
I propose a further hypothesis: high-status people have internalized Laws 4, 5, and 46 of the 48 Laws of Power, but especially Law 1: Never Outshine the Master.
After years of practice in switching between seeming competent relative to underlings and less so relative to superiors, they develop the ability to segregate audiences as you described.
Crime is down during the current recession. It's possible that the shock simply hasn't been strong enough, but it may be evidence nonetheless.
I think Hanson's hypothesis was more about true catastrophes, though--if some catastrophe devastated civilization and we were thrown back into widespread starvation, people wouldn't worry about morality.
I was connecting it to and agreeing with Zack M Davis' thought about utilitarianism. Even with Roko's utility function, if you have to choose between two lotteries over outcomes, you are still minimizing the expected number of rights violations. If you make your utility function lexicographic in rights, then once you've done the best you can with rights, you're still a utilitarian in the usual sense within the class of choices that minimizes rights violations.
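A toy illustration with made-up numbers: suppose lottery A has an expected 2 rights violations and expected welfare 10, B has 3 violations and welfare 100, and C has 2 violations and welfare 50. The lexicographic rule first throws out B (more expected violations), then picks C over A on ordinary utility grounds; within the violation-minimizing set {A, C} you are back to plain expected-utility maximization.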
It does occur to me that I wasn't objecting to the hypothetical existence of said function, only that rights aren't especially useful if we give up on caring about them in any world where we cannot prevent literally all violations.
And if you cannot act such that zero rights are violated? Your function would seem to suggest that you are indifferent between killing a dictator and committing the genocide he would have caused, since the number of rights violations is (arguably, of course) positive in both cases.
Perhaps I was simply unclear. Both my immediately prior comment and its grandparent were arguing only that there should be a nonzero expectation of a technological Singularity, even from a reference class standpoint.
The reference class of predictions about the Singularity can, as I showed in the grandparent, include a wide variety of predictions about major changes in the human condition. The complement or negation of that reference class is a class of predictions that things will remain largely the same, technologically.
Often, when people appear to be making an obviously wrong argument in this forum, it's a matter of communication rather than massive logic failure.
I'd heard it re: the smoking bans implemented in Minneapolis; I'm not surprised that Canada takes an especially paternalistic position on the matter.
Also, votes aren't the only thing gained by demonizing smokers--there are also the smokers' tax dollars.
For what it's worth, the argument I'd heard--not that I agree with it, to be clear--was that visitors/patrons weren't the issue: the law was designed to essentially extend safe-work-environment laws to bars. Thus, it was the employees who were the at-risk party.
Best I can tell, Science is just a particularly strong form (/subset) of Bayesian evidence. Since it attempts (when done well) to control for many potentially confounding factors and isolate true likelihoods, we can have more confidence in the strength of the evidence thus obtained than we could from general observations.
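To make "particularly strong form" concrete (this is just standard Bayes, not anything specific to the article): the update that evidence E gives you on hypothesis H is governed by the likelihood ratio,

posterior odds(H) = prior odds(H) * P(E|H) / P(E|~H),

and a well-designed experiment both pushes P(E|~H) down (by controlling confounders) and lets you actually estimate the two likelihoods, so the same observation yields a larger and more trustworthy update than casual evidence would.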
Agreed, and a lot of modern fields, including many of the natural and social sciences, derive from philosophers' framework-establishing questions. The trick is that we then credit the derived fields, rather than philosophy, with solving the original questions.
Philosophy doesn't really solve questions in itself; instead, it allows others to solve them.
I wonder if "How does neurons firing cause us to have a subjective experience?" might be unintentionally begging Mitchell_Porter's question. Best I can tell, neurons firing is having a subjective experience, as you more or less say right afterwards.
Even if we prefer to frame the reference class that way, we can instead note that anybody who predicted that things would remain the way they are (in any of the above categories) would have been wrong. People making that prediction in the last century have been wrong with increasing speed. As Eliezer put it, "beliefs that the future will be just like the past" have a zero success rate.
Perhaps the inventions listed above suggest that it's unwise to assign 0% chance to anything on the basis of present nonexistence, even if you could construct a reference class that has that success rate.
Either way, people who predicted that human life would be lengthened considerably, that humanity would fundamentally change in structure, or that some people would interact with beings that appear nigh-omnipotent have all been right with some non-zero success rate, and there's no particular reason to reject those data.
That's not uncommon. Villains act, heroes react.
I interpreted Eliezer as saying that that was a cause of the stories' failure or unsatisfactory nature, attributing this to our desire to feel like decisions come from within even when driven by external forces.
I'm perfectly willing to grant that, over the scope of human history, the reference classes for cryo/AGI/Singularity have produced near-0 success rates. I'd modify the classes slightly, however:
- Inventions that extend human life considerably: Penicillin, if nothing else. Vaccinations. Clean-room surgery.
- Inventions that materially changed the fundamental condition of humanity: Agriculture. Factories/mass production. Computers.
- Interactions with beings that are so relatively powerful that they appear omnipotent: Many colonists in the Americas were seen this way. Similarly with the cargo cults in the Pacific islands.
The point is, each of these reference classes, given a small tweak, has experienced infrequent but nonzero successes--and that over the course of all of human history! Once we update the "all of human history" reference class/prior to account for the last century--in which technology has developed faster than in probably the entire previous millennium--the posterior ends up looking much more promising.
Agreed. Part of the reason I love reading Asimov is that he focuses so much on the ideas he's presenting, without much attempt to invest the reader emotionally in the characters. I find the latter impairs my ability to synthesize useful general truths from fiction (especially short stories, my favorite form of Asimov).
I defer to Wittgenstein: the limits of our language are the limits of the world. We can literally ask the questions above, but I cannot find meaning in them. Blueness, computational states, time, and aboutness do not seem to me to have any implementation in the world beyond the ones you reject as inadequate, and I simply don't see how we can speak meaningfully (that is, in a way that allows justification or pursues truth) about things outside the observable universe.