Comment by jmmcd on The path of the rationalist · 2015-04-13T13:19:59.723Z · LW · GW

Also "the teacher smiled"? Damn your smugness, teacher!

Comment by jmmcd on Ephemeral correspondence · 2015-04-10T12:47:24.866Z · LW · GW

I'm enjoying these posts.

you do get to decide whether to perceive it as a compliment or an insult.

Comment by jmmcd on Boxing an AI? · 2015-03-30T22:56:19.580Z · LW · GW

That Alien Message

Comment by jmmcd on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-02T07:26:38.018Z · LW · GW

Harry should trick Voldemort into biting him, and then use his new freedom to bite him back.

Comment by jmmcd on [Link] Algorithm aversion · 2015-03-01T22:36:51.991Z · LW · GW

Oops, you're right

Comment by jmmcd on [Link] Algorithm aversion · 2015-02-28T23:46:22.451Z · LW · GW

From that Future of Life conference: if self-driving cars take over and cut the death rate from car accidents from 32000 to 16000 per year, the makers won't get 16000 thank-you cards -- they'll get 16000 lawsuits.

Comment by jmmcd on [Link] Algorithm aversion · 2015-02-28T23:42:09.097Z · LW · GW

Yes, that's the point.

(I think sphexish is Dawkins, not Hofstadter.)

Comment by jmmcd on In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him? · 2015-02-28T01:42:37.153Z · LW · GW

I think it's a bit of a leap to go from NASA being under-funded and unambitious in recent years to "people 50 years from now, in a permanently Earth-bound reality".

Comment by jmmcd on Harry Potter and the Methods of Rationality discussion thread, January 2015, chapter 103 · 2015-01-29T22:31:38.630Z · LW · GW

Not sure if it's in HPMOR but the symbol for the deadly hallows contains two right triangles.

EDIT err, deathly, I guess. I don't seem to be a trufan.

Comment by jmmcd on A Somewhat Vague Proposal for Grounding Ethics in Physics · 2015-01-28T09:41:26.510Z · LW · GW

I'm afraid I won't have time to give you more help. There's a short summary of each sequence under the link at the top of the page, so it won't take you forever to see the relevance.

EDIT: you're wondering elsewhere in the thread why you're not being well received. It's because your post doesn't make contact with what other people have thought on the topic.

Comment by jmmcd on A Somewhat Vague Proposal for Grounding Ethics in Physics · 2015-01-28T07:50:08.619Z · LW · GW

It can, but it doesn't have the time...

Comment by jmmcd on A Somewhat Vague Proposal for Grounding Ethics in Physics · 2015-01-27T12:34:42.340Z · LW · GW

So how can the universe "enjoy itself" as much as possible before the big crunch (or before and during the heat death)?

Maybe read the Fun Theory sequence?

Comment by jmmcd on "incomparable" outcomes--multiple utility functions? · 2014-12-17T08:16:29.724Z · LW · GW

It might be useful to look at Pareto dominance and related ideas, and the way they are used to define concrete algorithms for multi-objective optimisation, e.g. NSGA-II, which is probably the most widely used.
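The core idea is easy to state in code. Here is a minimal sketch (illustrative, not NSGA-II itself) of Pareto dominance and of extracting the non-dominated front from a set of candidate solutions, where each solution is a tuple of objective values to be minimised:

```python
# Pareto dominance for multi-objective minimisation.
# A solution is a tuple of objective values, all to be minimised.

def dominates(a, b):
    """True if a Pareto-dominates b: a is no worse on every objective
    and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep only the solutions not dominated by any other."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# (3, 3) is dominated by (2, 2); the other three trade off against each other.
print(pareto_front([(1, 5), (2, 2), (4, 1), (3, 3)]))
```

NSGA-II builds on exactly this dominance check, adding non-dominated sorting into ranked fronts and a crowding-distance measure to keep the front spread out.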

Comment by jmmcd on Saving the World - Progress Report · 2014-08-02T22:07:40.010Z · LW · GW

OP mentions "I used less water in the shower", so is obviously not only looking for extraordinary outcomes. So "saving the world" does indeed sound silly.

Comment by jmmcd on Recreational Cryonics · 2014-01-16T22:55:53.969Z · LW · GW

Any AI that would do this is unFriendly. The vast majority of uFAIs have goals incompatible with human life but not in any way concerned with it. [...] Therefore there is little to fear in the way of being tortured by an AI.

That makes no sense. The uFAIs most likely to be created are not drawn uniformly from the space of possible uFAIs. You need to argue that none of the uFAIs which are likely to be created will be interested in humans, not that few of all possible uFAIs will.

Comment by jmmcd on The Relevance of Advanced Vocabulary to Rationality · 2013-11-29T14:56:43.685Z · LW · GW


I'm not talking about a basic vocabulary, but a vocabulary beyond that of the average, white, English-as-a-first-language adult.

Why white?

Comment by jmmcd on Open Thread, November 15-22, 2013 · 2013-11-22T01:37:12.494Z · LW · GW

Golly, that sounds to me as if the people of this age don't go to heaven!

Comment by jmmcd on The Evolutionary Heuristic and Rationality Techniques · 2013-11-10T19:35:12.573Z · LW · GW

it's unclear to me how the category of "evolutionary restrictions" could apply to rationality techniques. Suggestions?

Not sure if this simple example is what you had in mind, but -- evolution wasn't capable of making us grow nice smooth erasable surfaces on our bodies, together with ink-secreting glands in our index fingers, so we couldn't evolve the excellent rationality technique of writing things down to remember them. So when writing was invented, the inventor was entitled to say "my invention passes the EOC because of the 'evolutionary restrictions' clause".

Comment by jmmcd on The Inefficiency of Theoretical Discovery · 2013-11-08T21:09:20.378Z · LW · GW

And more important, its creators want to be sure that it will be very reliable before they switch it on.

Comment by jmmcd on Creating a Text Shorthand for Uncertainty · 2013-10-19T22:43:16.362Z · LW · GW

can read the statement on its own

I like the principle behind Markdown: if it renders, fine, but if it doesn't, it degrades to perfectly readable plain-text.

A percentage is just fine.

Comment by jmmcd on Creating a Text Shorthand for Uncertainty · 2013-10-19T22:40:34.833Z · LW · GW

I like the principle, but 5% is "extremely unlikely"? Something that happens on the way to work once every three weeks?

Comment by jmmcd on Help us name a short primer on AI risk! · 2013-09-18T13:10:57.734Z · LW · GW

"X as a Y" is an academic idiom. Sounds wrong for the target audience.

Comment by jmmcd on Mistakes repository · 2013-09-09T19:36:19.032Z · LW · GW

Not being able to have any children, or as many as you (later realised you) wanted.

Comment by jmmcd on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-08T22:25:07.467Z · LW · GW

The claim is that it was obvious in advance. The whole reason AI-boxing is interesting is that the AI successes were unexpected, in advance.

Comment by jmmcd on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-07T09:05:20.486Z · LW · GW

the thesis was always glaringly obvious to anyone who was even paying attention to what superintelligence meant

I don't see that it was obvious, given that none of the AI players are actually superintelligent.

Comment by jmmcd on P/S/A - Sam Harris offering money for a little good philosophy · 2013-09-02T20:53:49.486Z · LW · GW

This discussion isn't getting anywhere, so, all the best :)

Comment by jmmcd on P/S/A - Sam Harris offering money for a little good philosophy · 2013-09-02T14:12:32.788Z · LW · GW

O.K, demonstrate that the idea of deterrent exists somewhere within their brains.

Evolutionary game theory and punishment of defectors is all the answer you need. You want me to point at a deterrent region, somewhere to the left of Broca's?

You say that science is useful for truths about the universe, whereas morality is useful for truths useful only to those interested in acting morally. It sounds like you agree with Harris that morality is a subcategory of science.

something can be good science without in any way being moral that Sam Harris would recognise as 'moral'.

Still, so what? He's not saying that all science is moral (in the sense of "benevolent" and "good for the world"). That would be ridiculous, and would be orthogonal to the argument of whether science can address questions of morality.

Comment by jmmcd on P/S/A - Sam Harris offering money for a little good philosophy · 2013-09-02T12:48:55.728Z · LW · GW

If you claim that evolutionary reasons are a person's 'true preferences'

No, of course not. It's still wrong to say that deterrent is nowhere in their brains.

Concerning the others:

Scientific inquiry perceives facts which are true and useful except for goals which run directly counter to science. Morality perceives 'facts' which are only useful to those who wish to follow a moral route.

I don't see what "goals which run directly counter to science" could mean. Even if you want to destroy all scientists, are you better off knowing some science or not? Anyway, how does this counter anything Harris says?

Although most people would be outraged, they probably wouldn't call it unscientific.

Again, so what? How does anything here prevent science from talking about morality?

As far as I can tell, Harris does not account for the well-being of animals.

He talks about well-being of conscious beings. It's not great terminology, but your inference is your own.

Comment by jmmcd on P/S/A - Sam Harris offering money for a little good philosophy · 2013-09-02T10:17:25.111Z · LW · GW

I disagree with all your points, but will stick to 4: "Deterrent is nowhere in their brains" is wrong -- read about altruism, game theory, and punishment of defectors, to understand where the desire comes from.

Comment by jmmcd on P/S/A - Sam Harris offering money for a little good philosophy · 2013-09-02T10:07:33.171Z · LW · GW

Nevertheless, moral questions aren't (even potentially) empirical, since they're obviously seeking normative and not factual answers.

You can't go from an is to an ought. Nevertheless, some people go from the "well-being and suffering" idea to ideas like consequentialism and utilitarianism, and from there the only remaining questions are factual. Other people are prepared to see a factual basis for morality in neuroscience and game theory. These are regular topics of discussion on LW. So calling it "obvious" begs the whole question.

Comment by jmmcd on Relevance of the STEPS project to (F)AI teams · 2013-09-01T10:40:02.376Z · LW · GW

control over the lower level OS allows for significant performance gains

Even if you got a 10^6 speedup (you wouldn't), that gain is not compoundable. So it's irrelevant.

access to a comparatively simple OS and tool chain allows the AI to spread to other systems.

Only if those other systems are kind enough to run the O/S you want them to run.

Comment by jmmcd on Relevance of the STEPS project to (F)AI teams · 2013-09-01T07:02:24.499Z · LW · GW

The unstated assumption is that a non-negligible proportion of the difficulty in creating a self-optimising AI has to do with the compiler toolchain. I guess most people wouldn't agree with that. For one thing, even if the toolchain is a complicated tower of Babel, why isn't it good enough to just optimise one's source code at the top level? Isn't there a limit to how much you can gain by running on top of a perfect O/S?

(BTW the "tower of Babel" is a nice phrase which gets at the sense of unease associated with these long toolchains, (eg) Python - RPython - LLVM - ??? - electrons.)

Comment by jmmcd on Simple investing for a complete beginner? (Just… developing world index funds?) · 2013-08-22T12:10:08.523Z · LW · GW

Agreed, but I think given the kind-of self-deprecating tone elsewhere, this was intended as a jibe at OP's own superficial knowledge rather than at the transportation systems of developing countries.

Comment by jmmcd on Gains from trade: Slug versus Galaxy - how much would I give up to control you? · 2013-07-22T10:44:28.615Z · LW · GW

It is the fundamental theorem of linear programming.

Comment by jmmcd on Three Approaches to "Friendliness" · 2013-07-17T15:30:45.135Z · LW · GW

Ok, but are we optimising the expected case or the worst case? If the former, then the probability of those things happening with no special steps against them is relevant. To take the easiest example: would postponing the "take over the universe" step for 300 years make a big difference in the expected amount of cosmic commons burned before takeover?

Comment by jmmcd on Open thread, July 16-22, 2013 · 2013-07-16T16:24:29.196Z · LW · GW

That page mentions "common sense" quite a bit. Meanwhile, this is the latest research in common sense and verbal ability.

Comment by jmmcd on "Stupid" questions thread · 2013-07-13T21:41:33.909Z · LW · GW

I don't think it's useful to think about constructing priors in the abstract. If you think about concrete examples, you see lots of cases where a reasonable prior is easy to find (eg coin-tossing, and the typical breast-cancer diagnostic test example). That must leave some concrete examples where good priors are hard to find. What are they?
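For concreteness, the diagnostic-test case shows why the prior is unproblematic there: it is just the population base rate, and Bayes' theorem does the rest. The numbers below are the often-quoted illustrative mammography figures, not a claim about any real test:

```python
# Classic diagnostic-test example: the prior is the base rate of the
# condition; Bayes' theorem combines it with the test's error rates.
# Figures are the standard illustrative ones (1% base rate, 80% sensitivity,
# 9.6% false-positive rate), not real clinical data.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

p = posterior(prior=0.01, sensitivity=0.8, false_positive_rate=0.096)
print(round(p, 3))  # roughly 0.078: a positive test still leaves the condition unlikely
```

The hard cases are precisely the ones where no such base rate exists to play the role of the prior.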

Comment by jmmcd on Do Earths with slower economic growth have a better chance at FAI? · 2013-06-14T19:59:22.709Z · LW · GW

To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.

It sounds like status quo bias. If growth were currently 2% higher, should the person then seize on growth-slowing opportunities?

One answer: it could be that any effort is likely to have little success in slowing world growth, but a large detrimental effect on the person's other projects. Fair enough, but presumably it applies equally to speeding growth.

Another: an organisation that aspires to political respectability shouldn't be seen to be advocating sabotage of the economy.

Comment by jmmcd on Useful Concepts Repository · 2013-06-10T14:41:34.409Z · LW · GW

Status is far older than Hanson's take on it, or than Hanson himself. But the idea of seeing status signalling everywhere, as an explanation for everything -- that is characteristically Hanson. (Obviously, don't take my simplification seriously.)

Comment by jmmcd on Problems with Academia and the Rising Sea · 2013-05-24T00:42:30.851Z · LW · GW

Yes, but the next line mentioned PageRank, which is designed to deal with those types of issues. Lots of inward links doesn't mean much unless the people (or papers, or whatever, depending on the semantics of the graph) linking to you are themselves highly ranked.
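That recursive property ("your rank depends on the rank of whoever links to you") is what power iteration computes. A minimal sketch, with a simple (assumed) dict-of-lists graph representation:

```python
# Minimal power-iteration PageRank sketch: a node's score depends not just on
# how many links point at it, but on the scores of the linkers.

def pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping each node to the list of nodes it links to."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new = {node: (1 - damping) / n for node in nodes}
        for node, targets in links.items():
            if targets:
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # dangling node: spread its rank evenly over all nodes
                for t in nodes:
                    new[t] += damping * rank[node] / n
        rank = new
    return rank

# "c" has two inward links and "b" has none, so c ends up ranked above b.
print(pagerank({"a": ["c"], "b": ["c"], "c": ["a"]}))
```

The ranks always sum to 1, so a link from a high-rank node is worth more than several links from obscure ones, which is exactly the point about inward-link counts being insufficient on their own.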

Comment by jmmcd on General intelligence test: no domains of stupidity · 2013-05-21T21:01:01.495Z · LW · GW

Don't forget that the goal in the Turing Test is not to appear intelligent, but to appear human. If an interrogator asks "what question would you ask in the Turing test?", and the answer is "uh, I don't know", then that is perfectly consistent with the responder being human. A smart interrogator won't jump to a conclusion.

Comment by jmmcd on General intelligence test: no domains of stupidity · 2013-05-21T18:47:40.809Z · LW · GW

"That which has happened before is less likely to happen again" (a reference to an old Overcoming Bias post I can't locate).

Good point. In fact, that is the type of environment which is required for the No Free Lunch theorems mentioned in the post to even be relevant. A typical interpretation in the evolutionary computing field would be that it's the type of environment where an anti-GA (a genetic algorithm which selects individuals with worse fitness) does better than a GA. There are good reasons to say that such environments can't occur for important classes of problems typically tackled by EC. In the context of this post, I wonder whether such an environment is even physically realisable.

(I think a lot of people misinterpret NFL theorems.)

Comment by jmmcd on The mystery of pain and pleasure · 2013-05-11T12:57:41.657Z · LW · GW

I think you're right that the OP doesn't quite hit the mark, but you got carried away and started almost wilfully misinterpreting. Especially your answers to 4, 5 and 6.

Comment by jmmcd on [LINK] The Unbelievers: Lawrence Krauss and Richard Dawkins Team Up Against Religion · 2013-05-01T10:08:59.380Z · LW · GW

In the Soviet Union religion was marginalized for some 70 years, two generations grew up in the environment of state atheism, yet soon after the restrictions were relaxed, the Church has regained almost all of the lost ground. The situation was similar in the rest of the ex-Warsaw bloc (with less time under mandated atheism), and even in China, where the equilibrium was restored after the Cultural Revolution. The standard argument [bold added] for this happening is "but Communism was basically a religion by another name", what with the various Cults of Personality and the beliefs in the One True Path.

I don't think that is the strongest argument. I think that in Eastern Europe and China, religion never really went away. People don't change their minds in response to government-mandated atheism. Dawkins is talking about people changing their minds. I think on balance he is right, though the trend is obviously weak.

Whatever trend there is goes along with increasing wealth and education, obviously. The issue to be argued is whether wealth and education will continue to spread and increase, albeit slowly and with backsliding, or whether the backsliding is enough to prevent any ongoing trend.

Comment by jmmcd on Checking for the Programming Gear · 2013-04-30T10:34:02.149Z · LW · GW

I think that interesting results which fail to replicate are almost always better-known than the failure to replicate. I think it's a fundamental problem of science, rather than a special weakness of programmers.

Comment by jmmcd on Help us name the Sequences ebook · 2013-04-16T11:23:59.001Z · LW · GW

I really like Thinking: Right and Wrong, but if there is a danger that Right be misconstrued as conservative, then how about a variant? This is my only suggestion and it doesn't sound as good but there must be better:

Thinking: Good and Bad

Comment by jmmcd on Willing gamblers, spherical cows, and AIs · 2013-04-10T09:38:55.157Z · LW · GW

"loosing" is still incorrect.

In a sense, bookies could be interpreted as "money pumping" the public as a whole. But somehow, it turns out that any single individual will rarely be stupid enough to take both sides of the same bet from the same bookie, in spite of the fact that they're apparently irrational enough to be gambling in the first place.

Suggest making the link explicit with something like this: "in spite of the fact that they're apparently irrational enough to be part of that public in the first place."

Comment by jmmcd on [SEQ RERUN] You're Calling *Who* A Cult Leader? · 2013-04-01T23:12:46.629Z · LW · GW

I'm hoping in particular that someone used to feel this way—shutting down an impulse to praise someone else highly, or feeling that it was cultish to praise someone else highly—and then had some kind of epiphany after which it felt, not allowed, but rather, quite normal.

I think there is a necessary distinction between matter-of-fact praising someone highly, and engaging in various sucking-up behaviours such as echoing particular forms of words, or quoting-as-authority. The latter do leave an unpleasant taste and in those cases I can understand the "cult" reaction.

Comment by jmmcd on Rationality Quotes April 2013 · 2013-04-01T23:04:43.416Z · LW · GW

Oh god. Everyone stop talking.

Comment by jmmcd on Giving in to small vices · 2013-03-05T23:08:06.716Z · LW · GW

For small vices, it is perhaps more important to ask, "What works?"

This is a bit like the "look before you leap", "no, he who hesitates is lost" game.