Posts

What can go wrong with the following protocol for AI containment? 2016-01-11T23:03:20.846Z
There is no such thing as strength: a parody 2015-07-05T23:44:34.228Z

Comments

Comment by ZoltanBerrigomo on What can go wrong with the following protocol for AI containment? · 2016-01-15T01:11:59.366Z · LW · GW

If you limit yourself to a subset of features such that you are no longer writing in a format which is Turing complete, then you may be able to have a program capable of automatically proving properties of that code reliably.

Right, that is what I meant.
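
To make this concrete, here is a toy sketch (my own illustration, not any existing tool): in a language with no loops or recursion and a finite input domain, every program halts, so any property of its outputs can be decided by brute-force enumeration.

```python
# Toy non-Turing-complete "language": straight-line arithmetic on one
# 8-bit input. No loops or recursion means every program halts, and a
# finite input domain means any property is decidable by enumeration.
PROGRAM = [("add", 7), ("mul", 3), ("mod", 256)]  # a sample program

def run(x, program):
    for op, k in program:
        if op == "add":
            x += k
        elif op == "mul":
            x *= k
        elif op == "mod":
            x %= k
    return x

def holds_for_all_inputs(program, prop):
    """Decide whether prop holds on every 8-bit input; always terminates."""
    return all(prop(run(x, program)) for x in range(256))

# Property: the output always fits in one byte.
print(holds_for_all_inputs(PROGRAM, lambda y: 0 <= y < 256))  # True
```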

Comment by ZoltanBerrigomo on What can go wrong with the following protocol for AI containment? · 2016-01-14T04:18:43.856Z · LW · GW

Here is my attempt at a calculation. Disclaimer: this is based on googling. If you are actually knowledgeable in the subject, please step in and set me right.

There are 10^11 neurons in the human brain.

A neuron will fire about 200 times per second.

It should take a constant number of flops to decide whether a neuron will fire -- say 10 flops (no need to solve a differential equation; artificial neural networks typically use discrete heuristics for this sort of thing).

I want a society of 10^6 orcs running for 10^6 simulated years.

As you suggest, let's let the simulation run for a year of real time (moving away at this point from my initial suggestion of 1 second). By my calculations, for this to happen we need a computer that does 2x10^26 flops per second.

According to this

http://www.datacenterknowledge.com/archives/2015/04/15/doe-taps-intel-cray-to-build-worlds-fastest-supercomputer/

...in 2018 we will have a supercomputer that does about 2x10^17 flops per second.

That means we need a computer that is a billion (10^9) times faster than the best computer in 2018.
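
For concreteness, here is the arithmetic as a short Python sketch; every number in it is one of the rough assumptions above, not a measured value.

```python
# Back-of-the-envelope check of the figures above.
neurons_per_brain  = 1e11  # neurons in a human-scale brain
decisions_per_sec  = 200   # firing decisions per neuron per second
flops_per_decision = 10    # constant-cost heuristic, no ODE solving
brains             = 1e6   # the society of orcs
speedup            = 1e6   # 10^6 simulated years in 1 real year

required = (neurons_per_brain * decisions_per_sec * flops_per_decision
            * brains * speedup)
print(f"required: {required:.0e} flops/s")          # 2e+26
print(f"vs. 2018: {required / 2e17:.0e}x faster")   # 1e+09
```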

That is still quite a lot, of course. If Moore's law were still in force, that gap is about 30 doublings, i.e. ~45 years at one doubling every 18 months; but Moore's law is dying. Still, it is not outside the realm of possibility within, say, the next 100 years.

Edit: By the way, one does not need to implement my suggestion literally -- the scheme is in principle applicable whenever you have a superintelligence, regardless of how it was designed.

Indeed, if we somehow develop an above-human intelligence, rather than trying to make sure its goals are aligned with ours, we might instead let it loose within a simulated world, giving it a preference for continued survival. Just one superintelligence thinking about factoring for a few thousand simulated years would likely be enough to let us factor any number we want. We could even give it in-simulation ways of modifying its own code.

Comment by ZoltanBerrigomo on What can go wrong with the following protocol for AI containment? · 2016-01-14T04:01:42.499Z · LW · GW

Still, inability to realise what you are doing seems rather dangerous.

So far, all I've done is post a question on lesswrong :)

More seriously, I do regret it if I appeared unaware of the potential danger. I am of course aware of the possibility that experiments with AI might destroy humanity. Think of my post above as suggesting a possible approach to investigate -- perhaps one with some kinks as written (that is why I'm asking a question here), but (I think) with the possibility of one day having rigorous safety guarantees.

Comment by ZoltanBerrigomo on What can go wrong with the following protocol for AI containment? · 2016-01-14T03:56:55.000Z · LW · GW

I think this calculation is too conservative. The reason (as I understand it) is that it assumes neurons are simulated via the various differential equations that govern them, and simulating those accurately is a pain in the ass. We should instead assume that deciding whether a neuron will fire takes a constant number of flops.

I'll write another comment which attempts to redo your calculation with different assumptions.

It seems to me that by the time we can do that, we should have figured out a better way to create AI.

But will we have figured out a way to reap the gains of AI safely for humanity?

Comment by ZoltanBerrigomo on What can go wrong with the following protocol for AI containment? · 2016-01-14T03:47:33.132Z · LW · GW

but more than that, it knows that the real universe is capable of producing intelligent beings that chose this particular world to simulate.

Good point -- this undermines a lot of what I wrote in my update 1. For example, I have no idea if F = m d^3x/dt^3 would result in a world that is capable of producing intelligent beings.

I should at some point produce a version of the above post with this claim, and other questionable parenthetical remarks, either deleted or flagged as requiring further argumentation. They are not necessary for the larger point, which is this: as long as the only thing the superintelligence can do (by definition) is live in a simulated world governed by Newton's laws, and as long as we don't interact with it at all except to see an automatically verified answer to a preset question (e.g., factor "111000232342342"), there is nothing it can do to harm us.
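
As a minimal sketch of the "automatically verified answer" step (my own illustration): the only output allowed out of the simulation is a claimed prime factorization, and a small trusted checker accepts or rejects it mechanically; no human ever inspects anything else.

```python
# Trusted checker: accept a claimed factorization iff every claimed
# factor is prime and the factors multiply back to the target number.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def accept(n, claimed_factors):
    product = 1
    for p in claimed_factors:
        if not is_prime(p):
            return False
        product *= p
    return product == n

print(accept(15, [3, 5]))            # True: toy sanity check
print(accept(111000232342342, [2]))  # False: incomplete factorization
```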

Comment by ZoltanBerrigomo on What can go wrong with the following protocol for AI containment? · 2016-01-14T03:41:38.632Z · LW · GW

I guess I am willing to bite the bullet and say that, as long as entity X prefers existence to nonexistence, you have done it no harm by bringing it into being. I realize this generates a number of repulsive-sounding conclusions -- e.g., that it becomes ethical to create entities which will live, by our 21st-century standards, horrific lives.

At least some of them will tell you they would rather not have been born.

If one is willing to accept my reasoning above, I think one can take one more leap and say that, statistically speaking, as long as the vast majority of these entities prefer existing to never having been brought into being, we are in the clear.

Comment by ZoltanBerrigomo on What can go wrong with the following protocol for AI containment? · 2016-01-14T03:37:00.804Z · LW · GW

The theorem you cite (provided I understood you correctly) does not preclude the possibility of checking whether a program written in a certain pre-specified format has bugs, where bugs are defined as certain undesirable properties (e.g., looping forever, or entering certain enumerated states).

Baby versions of such tools -- which automatically check, by inspecting the code, whether your program has certain properties -- already exist.
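
As a cartoon of the core check such a tool performs (a sketch, not any particular tool's API): once a program is abstracted to finitely many states, "does it ever enter a forbidden state?" becomes an ordinary graph-search question, decidable by plain breadth-first search.

```python
from collections import deque

def bad_state_reachable(transitions, start, bad):
    """BFS over a finite transition graph; always terminates."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state in bad:
            return True
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Hypothetical 5-state program abstraction; state 4 is forbidden.
transitions = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
print(bad_state_reachable(transitions, start=0, bad={4}))  # True
```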

Comment by ZoltanBerrigomo on What can go wrong with the following protocol for AI containment? · 2016-01-12T21:04:36.908Z · LW · GW

Nice idea... I wrote an update to the post suggesting what seems to me a variation on your suggestion.

About program checking: I agree completely. I'm not very informed about the state of the art, but it is very plausible that what we know right now is not yet up to the task.

Comment by ZoltanBerrigomo on What can go wrong with the following protocol for AI containment? · 2016-01-12T20:23:42.135Z · LW · GW

I'm a bit skeptical about whether you can actually create a superintelligent AI by combining sped up humans like that,

Why not? You are pretty smart, and all you are is a combination of 10^11 or so very "dumb" neurons. Now imagine a "being" which is actually a very large number of human-level intelligences, all interacting...

Comment by ZoltanBerrigomo on What can go wrong with the following protocol for AI containment? · 2016-01-12T20:21:30.912Z · LW · GW

[It] is very, very difficult not to give a superintelligence any hints of how the physics of our world work.

I wrote a short update to the post which tries to answer this point.

Maybe they notice minor fluctuations in the speed of the simulation based on environmental changes to the hardware

I believe they should have no ability whatsoever to detect fluctuations in the speed of the simulation.

Consider how the world of World of Warcraft appears to an orc inside the game. Can it tell the speed at which the hardware is running the game?

It can't. What it can do is compare the speeds of different things: how fast an apple falls from a tree vs. how fast a bird flies across the sky.

The orc's inner perception of the flow of time is based on comparing these things (e.g., how fast an apple falls) to how fast its simulated brain processes information.

If everything is slowed down by a factor of 2 (so you, as a player, see everything twice as slow), nothing appears any different to a simulated being within the simulation.
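
Here is that point as a toy simulation (my own sketch): the inhabitant's only clock is the tick count, so an arbitrary real-time delay per tick -- the host running slow -- changes nothing it can observe.

```python
import time

def apple_fall_in_thoughts(wall_clock_delay_s):
    """Drop an apple from 20 m; the inhabitant 'thinks' once per tick
    and measures the fall in thoughts, its only available clock."""
    dt, g = 0.01, 9.8                    # simulated seconds per tick
    height, velocity, thoughts = 20.0, 0.0, 0
    while height > 0:
        velocity += g * dt
        height -= velocity * dt
        thoughts += 1
        time.sleep(wall_clock_delay_s)   # host speed: invisible inside
    return thoughts

print(apple_fall_in_thoughts(0.0))    # ~202 with these constants
print(apple_fall_in_thoughts(0.002))  # exactly the same count
```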

Comment by ZoltanBerrigomo on What can go wrong with the following protocol for AI containment? · 2016-01-12T20:03:15.459Z · LW · GW

These are good points. Perhaps I should not have said "interact" but chosen a different word instead. Still, its ability to play us is limited, since (i) we will be examining the records of the world after it is dead, and (ii) it has no opportunity to learn anything about us.

Edit: we might even make it impossible for it to game us in the following way. All records of the simulated world are automatically deleted upon completion -- except for a specific prime factorization we want to know.

This is a really bad argument for safety.

You are right, of course. But you wrote that in response to what was a parenthetical remark on my part -- the real solution is to use program checking to make sure the laws of physics of the simulated world are never violated.

Comment by ZoltanBerrigomo on What can go wrong with the following protocol for AI containment? · 2016-01-12T05:59:07.310Z · LW · GW
  1. When talking about dealing with (and not interacting with) real AIs, one is always talking about a future world with significant technological advances relative to our world today.

  2. If we can formulate something as a question about math, physics, chemistry, or biology, then we can potentially attack it with this scheme. These are definitely problems we really want to solve.

  3. It's true that if we allow AIs more knowledge and more access to our world, they could potentially help us more -- but of course the number of things that can go wrong increases as well. Perhaps a compromise that sacrifices some of the potential while reducing what can go wrong is better.

Comment by ZoltanBerrigomo on Why CFAR's Mission? · 2016-01-11T03:34:17.878Z · LW · GW

1+1=2 is true by definition of what 2 means

Russell and Whitehead would beg to differ.

Comment by ZoltanBerrigomo on Why CFAR's Mission? · 2016-01-11T03:28:40.100Z · LW · GW

Sometimes you're dealing with a domain where explicit reasoning provides the best evidence, sometimes with a domain where emotions provide the best evidence.

And how should you (rationally) decide which kind of domain you are in?

Answer: using reason, not emotions.

Example: if you notice that your emotions have been a good guide in understanding what other people are thinking in the past, you should trust them in the future. The decision to do this, however, is an application of inductive reasoning.

Comment by ZoltanBerrigomo on Why CFAR's Mission? · 2016-01-08T23:55:50.533Z · LW · GW

No. CFAR rationality is about aligning system I and system II. It's not about declaring system I outputs to be worthy of being ignored in favor of system II outputs.

I believe you are nitpicking here.

If your reason tells you 1+1=2 but your emotions tell you that 1+1=3, being rational means going with your reason. If your reason tells you that ghosts do not exist, you should believe this to be the case even if you really, really want there to be evidence of an afterlife.

CFAR may teach you techniques to align your emotions and reason, but this does not change the fundamental fact that being rational involves evaluating claims like "is 1+1=2?" or empirical facts about the world such as "is there evidence for the existence of ghosts?" based on reason alone.

Just to forestall the inevitable objections (which always come in droves whenever I argue with anyone on this site): this does not mean you don't have emotions; it does not mean that your emotions don't play a role in determining your values; it does not mean that you shouldn't train your emotions to be an aid in your decision-making; and so on.

Comment by ZoltanBerrigomo on Why CFAR's Mission? · 2016-01-05T05:51:22.454Z · LW · GW

Sure, you can work towards feeling more strongly about something, but I don't believe you'll ever be able to match the emotional fervor the partisans feel -- I mean here the people who stew in their anger and embrace their emotions without reservation.

As a (rather extreme) example, consider Hitler. He was able to sway a great many people with what were essentially appeals to anger and emotion (though I acknowledge there is much more to the phenomenon of Hitler than this). Hypothetically, if you were a rational politician from the same era, and you understood that the way to persuade people is to tap into the public's sense of anger, I'm not sure you'd be able to match him.

Comment by ZoltanBerrigomo on Why CFAR's Mission? · 2016-01-03T17:42:10.119Z · LW · GW

To the extent that it does require luck, that simply means that it's important to have more people with rationality + competence + caring. If you have many people, some will get lucky.

The "little bit of luck" in my post above was something of an understatement; actually, I'd suggest it requires a lot of luck (among many other things) to successfully change the world.

I think you might be pattern-matching to straw-Vulcan rationality, which is distinct from what CFAR wants to teach.

Not sure if I am, but I believe I am making a correct claim about human psychology here.

Being rational means many things, but surely one of them is making decisions based on some kind of reasoning process as opposed to recourse to emotions.

This does not mean you don't have emotions.

You might, for example, have very strong emotions about matters pertaining to fights between your perceived in-group and out-group, but you try to put those aside and make judgments based on some sort of fundamental principles.

Now if, in the real world, the way you persuade people is by emotional appeals (and this is at least partially true), this will be more difficult the more you get in the habit of rational thinking, even if you have an accurate model about what it takes to persuade someone -- emotions are not easy to fake and humans have strong intuitions about whether someone's expressed feelings are genuine.

Comment by ZoltanBerrigomo on Why CFAR's Mission? · 2016-01-01T21:10:11.041Z · LW · GW

A very interesting and thought-provoking post -- I especially like the Q & A format.

I want to quibble with one bit:

How can I tell there aren't enough people out there, instead of supposing that we haven't yet figured out how to find and recruit them?

Basically, because it seems to me that if people had really huge amounts of epistemic rationality + competence + caring, they would already be impacting these problems. Their huge amounts of epistemic rationality and competence would allow them to find a path to high impact; and their caring would compel them to do it.

There is an empirical claim about the world that is implicit in that statement, and it is this claim I want to disagree with. Namely: I think having a high impact on the world is really, really hard. I would suggest it requires more than just rationality + competence + caring; for one thing, it requires a little bit of luck.

It also requires a good ability to persuade others who are not thinking rationally. Many such people respond to unreasonable confidence, emotional appeals, salesmanship, and other rhetorical tricks which may be more difficult to produce the more you are used to thinking things through rationally.

Comment by ZoltanBerrigomo on Rationality Quotes Thread October 2015 · 2015-11-01T19:02:07.760Z · LW · GW

For those people who insist, however, that the only thing that is important is that the theory agrees with experiment, I would like to make an imaginary discussion between a Mayan astronomer and his student...

These are the opening words of a ~1.5 minute monologue in one of Feynman's lectures; I won't transcribe the remainder but it can be viewed here.

Comment by ZoltanBerrigomo on Why Don't Rationalists Win? · 2015-09-12T06:23:05.449Z · LW · GW

Not sure... I think confidence, sales skills, and the ability to believe in and get passionate about BS can be very helpful in much of the business world.

Comment by ZoltanBerrigomo on Why Don't Rationalists Win? · 2015-09-11T23:59:42.178Z · LW · GW

Side-stepping the issue of whether rationalists actually "win" or "do not win" in the real world, I think a priori there are some reasons to suspect that people who exhibit a high degree of rationality will not be among the most successful.

For example: people respond positively to confidence. When you make a sales pitch for your company/research project/whatever, people like to see that you really believe in the idea. Often, you will win brownie points if you believe in whatever you are trying to sell with nearly evangelical fervor.

One might reply: surely a rational person would understand the value of confidence and fake it as necessary? Answer: yes to the former, no to the latter. Confidence is not so easy to fake; people with genuine beliefs either in their own grandeur or in the greatness of their ideas have a much easier time of it.

Robert Kurzban's book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind is essentially about this. The book may be thought of as a long-winded answer to the question "Why aren't we all more rational?" Rationality skills seem kinda useful for bands of hunter-gatherers to possess, and yet evolution gave them to us only in part. Kurzban argues, among other things, that those who are able to genuinely believe certain fictions have an easier time persuading others, and therefore are likely to be more successful.

Comment by ZoltanBerrigomo on Rationality Quotes Thread August 2015 · 2015-09-01T04:36:55.885Z · LW · GW

I'm very fond of this bit by Robin Hanson:

A wide range of topics come up when talking informally with others, and people tend to like you to express opinions on at least some substantial subset of those topics. They typically aren’t very happy if you explain that you just adopted the opinion of some standard expert source without reflection, and so we are encouraged to “think for ourselves” to generate such opinions.

Comment by ZoltanBerrigomo on Scientific studies and trust · 2015-08-07T17:50:03.501Z · LW · GW

I think the lumping of various disciplines into "science" is unhelpful in this context. It is reasonable to trust the results of the last round of experiments at the LHC far more than the occasional psychology paper that makes the news.

I've not seen this distinction made as starkly as I think it really needs to be made -- the disciplines range from physics and chemistry, where one can usually design experiments to test hypotheses; through geology and atmospheric science, where one mostly fits models to whatever data happens to be available; to psychology, where the results of experiments seem to be very inconsistent and publication bias is a major cause of false research results.

Comment by ZoltanBerrigomo on There is no such thing as strength: a parody · 2015-07-11T03:42:27.402Z · LW · GW

I agree with 99.999% of what you say in this comment. In particular, you are right that the parody only works in the sense of the first of your bulleted points.

My only counterpoint is that I think this is how almost every reader will understand it. My whole post is an invitation to consider a hypothetical in which people say about strength what they now say about intelligence and race.

Comment by ZoltanBerrigomo on Effective Altruism from XYZ perspective · 2015-07-09T01:49:32.477Z · LW · GW

I confess that I have not read much of what has been written on the subject, so what I am about to say may be dreadfully naive.

A. One should separate the concept of effective altruism from the mode of operation of the various organizations which currently take it as their motto.

A.i. Can anyone seriously oppose effective altruism in principle? I find it difficult to imagine someone supporting ineffective altruism. Surely we should let our charity be guided by evidence, randomized experiments, hard thinking about tradeoffs, and so on.

A.ii. On the other hand, one can certainly quibble with what various organization are now doing. Such quibbling can even be quite productive.

B. What comes next should be understood as quibbles.

B.i. As many others have pointed out, effective altruism implicitly assumes a set of values. As Daron Acemoglu asks (http://bostonreview.net/forum/logic-effective-altruism/daron-acemoglu-response-effective-altruism), "How much more valuable is it to save the life of a one-year-old than to send a six-year-old to school?"

B.ii. I think GiveWell may be insufficiently transparent about such things. For example, its explanation of criteria at http://www.givewell.org/criteria does not give a clear-cut explanation of how it makes such determinations.

Caveat: this is only based on browsing the GiveWell webpage for 10 minutes. I'm open to being corrected on this point.

B.iii. Along the same lines I wonder: had GiveWell, or other effective altruists, existed in the 1920s, what would they say about funding a bunch of physicists who noticed some weird things were happening with the hydrogen atom? How does "develop quantum mechanics" rate in terms of benefit to humanity, compared to, say, keeping thirty children in school for an extra year?

B.iv. Peter Singer's endorsement of effective altruism in the Boston Review (http://bostonreview.net/forum/peter-singer-logic-effective-altruism) includes some criticism of donations to opera houses; indeed, in a world with poverty and starvation, surely there are better things to do with one's money? This seems endorsed by GiveWell, which lists "serving the global poor" as its priority, and in context I doubt this means serving them via the production of poetry for their enjoyment.

I do not agree with this. Life is not merely about surviving; one must have something to live for. Poetry, music, novels -- for many people, these are a big part of what makes existence worthwhile.

C. Ideally, I'd love to see the recommendations of multiple effective altruist organizations with different values, all completely transparent about the assumptions that go into their recommendations. Could anyone disagree that this would make the world a better place?

Comment by ZoltanBerrigomo on There is no such thing as strength: a parody · 2015-07-08T04:40:14.079Z · LW · GW

I'm not sure I understand your criticism. I don't mean this in a passive aggressive sense, I really do not understand it. It seems to me that "the stupid," so to speak, perfectly carries over between the parody and the "original."


A. Imagine I visit country X, where everyone seems to be very buff. Gyms are everywhere, the parks are full of people practicing weight-lifting, and I notice people carrying heavy objects with little visible effort. When I return home, I remark to a friend that people in X seem to be very strong.

My friend gives me a glare. "What is strength, anyway? How would you define it? By the way, don't you know the concept has an ugly history? Also, have you seen this article about the impossibility of a culture-free measure of strength? Furthermore, don't you know that there is more variation between strong and weak people than among them?"

I listen to this and think to myself that I need to find some new friends.

B. Imagine I visit country X, where almost everyone seems to be of race Y. Being somewhat uneducated, I was unaware of this. When I return home, I ask a friend whether he knew that people from X tend to be of race Y.

My friend gives me a glare. "How do you define race anyway? Don't you know the concept has an ugly history? You know, it is a fact that there is more variation between races than among them."

I listen to this and think to myself that I need to find some new friends.

C. Imagine I visit country X, where intellectual pursuits seem highly valued. People play chess on the sidewalks and the coffee shops seem full of people reading the classics. The front pages of newspapers are full of announcements of the latest mathematical breakthroughs. Nobel/Abel prize announcements draw the same television audience as the Oscars do in my own country. Everyone I converse with is extremely well-informed and offers interesting opinions that I had not thought of before.

When I return home, I remark to a friend that people in X seem to be very smart.

My friend gives me a glare. "How would you define intelligence anyway? Don't you know the concept has an ugly history? Have you seen this article about the impossibility of a universal, culture-free intelligence test?"

I listen to this and...


It seems to me the three situations are exactly analogous. Am I wrong?

Comment by ZoltanBerrigomo on There is no such thing as strength: a parody · 2015-07-08T04:13:47.597Z · LW · GW

First, I'm not so sure: if someone is actually inconsistent, then pointing out the inconsistency may be the better (more charitable?) thing to do rather than pretending the person had made the closest consistent argument.

For example: there are a lot of academics who attack reason itself as fundamentally racist, imperialistic, etc. They back this up with something that looks like an argument. I think they are simply being inconsistent and contradictory, rather than meaning something deep not apparent at first glance.

More importantly, I think your conjecture is wrong.

On intelligence, I believe that many of the people who think intelligence does not exist would further object to a statement like "A is smarter than B," thinking it a form of ableism.

One example, just to show what I mean:

http://disabledfeminists.com/2009/10/23/ableist-word-profile-intelligence/

On race, the situation is more complicated: the "official line" is that race does not exist, but racism does. That is, people who say race does not exist also believe that people classify humans in terms of perceived race, even though the concept itself has no meaning (no "realness in a genetic sense," as one of the authors I cited in this thread puts it). It is only in this sense that they would accept statements of the form "A and B are an interracial couple."

Comment by ZoltanBerrigomo on There is no such thing as strength: a parody · 2015-07-08T02:32:55.620Z · LW · GW

For what it's worth, I have not downvoted any of your posts. Although we seem to be on opposite sides of this debate, I appreciate the thoughtful disagreement my post has received.

Comment by ZoltanBerrigomo on There is no such thing as strength: a parody · 2015-07-07T00:50:56.344Z · LW · GW

I would bet the opposite on #4, but that is beside the point. On #4 and #6, the point is that even if everything I wrote were completely correct -- e.g., if the scientific journals actually were full of papers to the effect that there is no such thing as a universal test of strength because people from different cultures lift things differently -- it would not imply that there is no such thing as strength.

On #5, the statement that race is a social construct is implicit. Anyway, as I said in the comment above, a million similar statements are made in the media all the time, and I could easily have cited one that says explicitly that race is a social construct. For example:

http://www.nytimes.com/roomfordebate/2015/06/16/how-fluid-is-racial-identity/race-and-racial-identity-are-social-constructs

The writer is a law professor, writing in the NY Times; she tells us that "race is a social construct" since "there is no gene or cluster of genes common to all blacks or all whites," and explicitly draws the conclusion that race "is not real in a genetic sense."

...which is a synthesis of arguments 1, 3, and 5 in my post. I know I could read the author's statement as true but trivial (she is, of course, right -- race, strength, height, and all other concepts in our vocabulary are social constructs), but that does not seem to be the intended reading. I could also explicate her position beginning with the words "But what she really meant by that is...", but that also strikes me as the wrong response to a fundamentally confused argument.

Comment by ZoltanBerrigomo on There is no such thing as strength: a parody · 2015-07-07T00:36:15.080Z · LW · GW

First, only some of the attacks I cited were brief and sketchy; others were lengthier. Second, I cited only a few such attacks due to time and space constraints, but in fact they exist in great profusion. My personal impression is that the popular discourse on intelligence and race is drowning in confused rhetoric along the lines of what I parodied.

Finally, I think the last possibility you cite is on point -- there are many, many people who are not thinking very clearly here. As I said, I think these people also have come to dominate the debate on this subject (at least in terms of what one is likely to read about in the newspaper rather than a scientific venue). Instead of ignoring them and focusing on people who make more thoughtful and defensible variations of these points, I think some kind of attempt at refutation is called for.

Comment by ZoltanBerrigomo on There is no such thing as strength: a parody · 2015-07-06T21:26:05.978Z · LW · GW

A. I think at least some people do mean that concepts of intelligence and race are, in some sense, inherently meaningless.

When people say

"race does not exist because it is a social construct"

or that race does not exist because

"amount of variation within races is much larger than the amount of variation between races,"

I think it is being overly charitable to read that as saying

"race is not a scientifically precise concept that denotes intrinsic, context-independent characteristics."

B. Along the same lines, I believe I am justified in taking people at their word. If people want to say "race is not a scientifically precise concept" then they should just say that. They should not say that race does not exist, and if they do say the latter, I think that opens them up to justifiable criticism.

Comment by ZoltanBerrigomo on There is no such thing as strength: a parody · 2015-07-06T18:46:51.585Z · LW · GW

See the reply I just wrote to gjm for an explanation of my motivations.

When I was writing this, I thought the intent to parody would be clear; surely no one could seriously suggest we have to strike strength from our dictionaries? I seem to have been way off on that. Perhaps that is a reflection on the internet culture at large, where these kinds of arguments are common enough not to raise any eyebrows.

Anyway, I went one step further and put "parody" in the title.

Comment by ZoltanBerrigomo on There is no such thing as strength: a parody · 2015-07-06T18:31:11.611Z · LW · GW

I was not trying to suggest that intelligence and strength are as alike as race and strength. Rather, I was motivated by the observation that there are a number of arguments floating around to the effect that:

A. Race doesn't exist.

B. Intelligence doesn't exist.

and, actually, to a lesser extent,

C. Rationality doesn't exist (as a coherent notion).

The arguments for A, B, and C are often dubious and tend to overlap heavily; I wanted to write something that would show how flawed those arguments are through a reductio ad absurdum.

To put it another way, even if strength (or intelligence, or race) really were an incoherent notion, none of the arguments 1-7 in my post establishes that it is so. It isn't that these arguments are wholly wrong -- in fact, there is a measure of truth to each of them -- but they don't suffice to establish the conclusion.

Comment by ZoltanBerrigomo on There is no such thing as strength: a parody · 2015-07-06T00:28:08.014Z · LW · GW

Hmm, on second thought, I added a [/parody] tag at the end of my post -- just in case...

Comment by ZoltanBerrigomo on Beyond Statistics 101 · 2015-06-30T00:11:28.535Z · LW · GW

For what it's worth, I have observed a certain reverence in the way great mathematicians are treated by their less accomplished colleagues, one that can often border on the creepy. This does seem specific to math, in the sense that it exists in other disciplines with much less intensity.

But I agree, "dysfunctional" seems to be a more apt label than "cult." May I also add "fashion-prone?"

Comment by ZoltanBerrigomo on Beyond Statistics 101 · 2015-06-29T06:15:17.935Z · LW · GW

The links you give are extremely interesting, but, unless I am missing something, it seems that they fall short of justifying your earlier statement that math academia functions as a cult. I wonder if you would be willing to elaborate further on that?