Comments
Also "the teacher smiled"? Damn your smugness, teacher!
I'm enjoying these posts.
you do get to decide whether or not to perceive it as a complement or an insult.
compliment
dieties
believed
Harry should trick Voldemort into biting him, and then use his new freedom to bite him back.
Oops, you're right
From that Future of Life conference: if self-driving cars take over and cut the death rate from car accidents from 32000 to 16000 per year, the makers won't get 16000 thank-you cards -- they'll get 16000 lawsuits.
Yes, that's the point.
(I think sphexish is Dawkins, not Hofstadter.)
I think it's a bit of a leap to go from NASA being under-funded and unambitious in recent years to "people 50 years from now, in a permanently Earth-bound reality".
Not sure if it's in HPMOR but the symbol for the deadly hallows contains two right triangles.
EDIT err, deathly, I guess. I don't seem to be a trufan.
I'm afraid I won't have time to give you more help. There's a short summary of each sequence under the link at the top of the page, so it won't take you forever to see the relevance.
EDIT: you're wondering elsewhere in the thread why you're not being well received. It's because your post doesn't make contact with what other people have thought on the topic.
It can, but it doesn't have the time...
So how can the universe "enjoy itself" as much as possible before the big crunch (or before and during the heat death)?*
Maybe read the Fun Theory sequence?
It might be useful to look at Pareto dominance and related ideas, and the way they are used to define concrete algorithms for multi-objective optimisation, e.g. NSGA-II, which is probably the most widely used.
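To make that concrete, here is a minimal sketch of my own of the Pareto-dominance test that NSGA-II's non-dominated sorting builds on (assuming every objective is to be minimised):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates solution b.

    a and b are sequences of objective values, all to be minimised:
    a dominates b if it is no worse on every objective and strictly
    better on at least one.
    """
    no_worse = all(x <= y for x, y in zip(a, b))
    strictly_better = any(x < y for x, y in zip(a, b))
    return no_worse and strictly_better

# Two objectives, say (cost, weight):
print(dominates((1.0, 2.0), (1.5, 2.0)))  # True: better cost, equal weight
print(dominates((1.0, 3.0), (1.5, 2.0)))  # False: neither dominates the other
```

NSGA-II then sorts the population into successive non-dominated fronts using this relation, with a crowding-distance tiebreak to keep each front spread out.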
OP mentions "I used less water in the shower", so is obviously not only looking for extraordinary outcomes. So "saving the world" does indeed sound silly.
Any AI that would do this is unFriendly. The vast majority of uFAIs have goals incompatible with human life but not in any way concerned with it. [...] Therefore there is little to fear in the way of being tortured by an AI.
That makes no sense. The uFAIs most likely to be created are not drawn uniformly from the space of possible uFAIs. You need to argue that none of the uFAIs which are likely to be created will be interested in humans, not that few of all possible uFAIs will.
Off-topic:
I'm not talking about a basic vocabulary, but a vocabulary beyond that of the average, white, English-as-a-first-language adult.
Why white?
Golly, that sounds to me as if the people of this age don't go to heaven!
it's unclear to me how the category of "evolutionary restrictions" could apply to rationality techniques. Suggestions?
Not sure if this simple example is what you had in mind, but -- evolution wasn't capable of making us grow nice smooth erasable surfaces on our bodies, together with ink-secreting glands in our index fingers, so we couldn't evolve the excellent rationality technique of writing things down to remember them. So when writing was invented, the inventor was entitled to say "my invention passes the EOC because of the 'evolutionary restrictions' clause".
And more important, its creators want to be sure that it will be very reliable before they switch it on.
can read the statement on its own
I like the principle behind Markdown: if it renders, fine, but if it doesn't, it degrades to perfectly readable plain-text.
A percentage is just fine.
I like the principle, but 5% is "extremely unlikely"? Something that happens on the way to work once every three weeks?
"X as a Y" is an academic idiom. Sounds wrong for the target audience.
Not being able to have any children, or as many as you (later realised you) wanted.
The claim is that it was obvious in advance. The whole reason AI-boxing is interesting is that the AI successes were unexpected, in advance.
the thesis was always glaringly obvious to anyone who was even paying attention to what superintelligence meant
I don't see that it was obvious, given that none of the AI players are actually superintelligent.
This discussion isn't getting anywhere, so, all the best :)
O.K, demonstrate that the idea of deterrent exists somewhere within their brains.
Evolutionary game theory and punishment of defectors is all the answer you need. You want me to point at a deterrent region, somewhere to the left of Broca's?
You say that science is useful for truths about the universe, whereas morality is useful for truths useful only to those interested in acting morally. It sounds like you agree with Harris that morality is a subcategory of science.
something can be good science without in any way being moral in a sense that Sam Harris would recognise as 'moral'.
Still, so what? He's not saying that all science is moral (in the sense of "benevolent" and "good for the world"). That would be ridiculous, and would be orthogonal to the argument of whether science can address questions of morality.
If you claim that evolutionary reasons are a person's 'true preferences'
No, of course not. It's still wrong to say that deterrent is nowhere in their brains.
Concerning the others:
Scientific inquiry perceives facts which are true and useful except for goals which run directly counter to science. Morality perceives 'facts' which are only useful to those who wish to follow a moral route.
I don't see what "goals which run directly counter to science" could mean. Even if you want to destroy all scientists, are you better off knowing some science or not? Anyway, how does this counter anything Harris says?
Although most people would be outraged, they probably wouldn't call it unscientific.
Again, so what? How does anything here prevent science from talking about morality?
As far as I can tell, Harris does not account for the well-being of animals.
He talks about well-being of conscious beings. It's not great terminology, but your inference is your own.
I disagree with all your points, but will stick to 4: "Deterrent is nowhere in their brains" is wrong -- read about altruism, game theory, and punishment of defectors, to understand where the desire comes from.
Nevertheless, moral questions aren't (even potentially) empirical, since they're obviously seeking normative and not factual answers.
You can't go from an is to an ought. Nevertheless, some people go from the "well-being and suffering" idea to ideas like consequentialism and utilitarianism, and from there the only remaining questions are factual. Other people are prepared to see a factual basis for morality in neuroscience and game theory. These are regular topics of discussion on LW. So calling it "obvious" begs the whole question.
control over the lower level OS allows for significant performance gains
Even if you got a 10^6 speedup (you wouldn't), that gain is not compoundable. So it's irrelevant.
access to a comparatively simple OS and tool chain allows the AI to spread to other systems.
Only if those other systems are kind enough to run the O/S you want them to run.
The unstated assumption is that a non-negligible proportion of the difficulty in creating a self-optimising AI has to do with the compiler toolchain. I guess most people wouldn't agree with that. For one thing, even if the toolchain is a complicated tower of Babel, why isn't it good enough to just optimise one's source code at the top level? Isn't there a limit to how much you can gain by running on top of a perfect O/S?
(BTW the "tower of Babel" is a nice phrase which gets at the sense of unease associated with these long toolchains, e.g. Python - RPython - LLVM - ??? - electrons.)
Agreed, but I think given the kind-of self-deprecating tone elsewhere, this was intended as a jibe at OP's own superficial knowledge rather than at the transportation systems of developing countries.
Ok, but are we optimising the expected case or the worst case? If the former, then the probability of those things happening with no special steps against them is relevant. To take the easiest example: would postponing the "take over the universe" step for 300 years make a big difference in the expected amount of cosmic commons burned before takeover?
That page mentions "common sense" quite a bit. Meanwhile, this is the latest research in common sense and verbal ability.
I don't think it's useful to think about constructing priors in the abstract. If you think about concrete examples, you see lots of cases where a reasonable prior is easy to find (eg coin-tossing, and the typical breast-cancer diagnostic test example). That must leave some concrete examples where good priors are hard to find. What are they?
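For concreteness, the breast-cancer example runs like this (the numbers below are the standard textbook illustration of 1% prevalence, 80% sensitivity and a 9.6% false-positive rate, not real clinical figures):

```python
# Standard textbook numbers, for illustration only.
prior = 0.01            # P(cancer)
sensitivity = 0.80      # P(positive | cancer)
false_positive = 0.096  # P(positive | no cancer)

# Bayes' theorem: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # 0.078 -- roughly an 8% chance, given a positive test
```

The prior there is just the base rate, which is handed to you; the interesting question is which problems have no such obvious source for a prior.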
To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.
It sounds like status quo bias. If growth were currently 2% higher, should the person then seize on growth-slowing opportunities?
One answer: it could be that any effort is likely to have little success in slowing world growth, but a large detrimental effect on the person's other projects. Fair enough, but presumably it applies equally to speeding growth.
Another: an organisation that aspires to political respectability shouldn't be seen to be advocating sabotage of the economy.
Status is far older than Hanson's take on it, or than Hanson himself. But the idea of seeing status signalling everywhere, as an explanation for everything -- that is characteristically Hanson. (Obviously, don't take my simplification seriously.)
Yes, but the next line mentioned PageRank, which is designed to deal with those types of issues. Lots of inward links doesn't mean much unless the people (or papers, or whatever, depending on the semantics of the graph) linking to you are themselves highly ranked.
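As a rough sketch of the distinction (a toy power-iteration PageRank of my own, not Google's production algorithm):

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy PageRank by power iteration.

    links maps each node to the list of nodes it links to. A node's
    score depends on the scores of the nodes linking to it, not just
    on how many inward links it has.
    """
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for node, outs in links.items():
            if not outs:  # dangling node: spread its rank evenly
                for other in nodes:
                    new[other] += damping * rank[node] / n
            else:
                for target in outs:
                    new[target] += damping * rank[node] / len(outs)
        rank = new
    return rank

graph = {
    "A": ["B", "Y"], "B": ["A"], "Y": ["A"], "X": ["A"],
    "c1": ["X"], "c2": ["X"], "c3": ["X"],
}
ranks = pagerank(graph)
# Y (one inward link, but from the highly-ranked A) ends up well above
# X (three inward links, all from pages nothing else links to).
print(ranks["Y"] > ranks["X"])  # True
```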
Don't forget that the goal in the Turing Test is not to appear intelligent, but to appear human. If an interrogator asks "what question would you ask in the Turing test?", and the answer is "uh, I don't know", then that is perfectly consistent with the responder being human. A smart interrogator won't jump to a conclusion.
"That which has happened before is less likely to happen again" (a reference to an old Overcoming Bias post I can't locate).
Good point. In fact, that is the type of environment which is required for the No Free Lunch theorems mentioned in the post to even be relevant. A typical interpretation in the evolutionary computing field would be that it's the type of environment where an anti-GA (a genetic algorithm which selects individuals with worse fitness) does better than a GA. There are good reasons to say that such environments can't occur for important classes of problems typically tackled by EC. In the context of this post, I wonder whether such an environment is even physically realisable.
(I think a lot of people misinterpret NFL theorems.)
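To make the anti-GA idea concrete, the only change is the direction of the comparison during selection (a quick sketch of my own, not taken from any EC library):

```python
import random

def tournament_select(population, fitness, anti=False):
    """Binary tournament selection.

    A GA keeps the fitter of two randomly chosen individuals; the
    'anti-GA' keeps the less fit one. The NFL-flavoured claim above is
    about environments perverse enough that the second rule wins.
    """
    a, b = random.sample(population, 2)
    fitter, less_fit = (a, b) if fitness(a) >= fitness(b) else (b, a)
    return less_fit if anti else fitter
```

Whether any physically realisable, practically important fitness landscape actually rewards the anti version is exactly the question.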
I think you're right that the OP doesn't quite hit the mark, but you got carried away and started almost wilfully misinterpreting. Especially your answers to 4, 5 and 6.
In the Soviet Union religion was marginalized for some 70 years, two generations grew up in the environment of state atheism, yet soon after the restrictions were relaxed, the Church has regained almost all of the lost ground. The situation was similar in the rest of the ex-Warsaw bloc (with less time under mandated atheism), and even in China, where the equilibrium was restored after the Cultural Revolution. The standard argument [bold added] for this happening is "but Communism was basically a religion by another name", what with the various Cults of Personality and the beliefs in the One True Path.
I don't think that is the strongest argument. I think that in Eastern Europe and China, religion never really went away. People don't change their minds in response to government-mandated atheism. Dawkins is talking about people changing their minds. I think on balance he is right, though the trend is obviously weak.
Whatever trend there is goes along with increasing wealth and education, obviously. The issue to be argued is whether wealth and education will continue to spread and increase, albeit slowly and with backsliding, or whether the backsliding is enough to prevent any ongoing trend.
I think that interesting results which fail to replicate are almost always better-known than the failure to replicate. I think it's a fundamental problem of science, rather than a special weakness of programmers.
I really like Thinking: Right and Wrong, but if there is a danger that Right will be misconstrued as conservative, then how about a variant? This is my only suggestion, and it doesn't sound as good, but there must be better options:
Thinking: Good and Bad
"loosing" is still incorrect.
In a sense, bookies could be interpreted as "money pumping" the public as a whole. But somehow, it turns out that any single individual will rarely be stupid enough to take both sides of the same bet from the same bookie, in spite of the fact that they're apparently irrational enough to be gambling in the first place.
Suggest making the link explicit with something like this: "in spite of the fact that they're apparently irrational enough to be part of that public in the first place."
I'm hoping in particular that someone used to feel this way—shutting down an impulse to praise someone else highly, or feeling that it was cultish to praise someone else highly—and then had some kind of epiphany after which it felt, not allowed, but rather, quite normal.
I think there is a necessary distinction between matter-of-factly praising someone highly, and engaging in various sucking-up behaviours such as echoing particular forms of words, or quoting-as-authority. The latter do leave an unpleasant taste, and in those cases I can understand the "cult" reaction.
Oh god. Everyone stop talking.
For small vices, it is perhaps more important to ask, "What works?"
http://en.wikipedia.org/wiki/Broken_windows_theory
This is a bit like the "look before you leap", "no, he who hesitates is lost" game.