I don't see how to map this onto scientific progress. It almost seems to be a rule that most fields spend most of their time divided for years between two competing theories or approaches, maybe because scientists always want a competing theory, and because competing theories take a long time to resolve. Famous examples include
- geocentric vs heliocentric astronomy
- phlogiston vs oxygen
- wave vs particle
- symbolic AI vs neural networks
- probabilistic vs T/F grammar
- prescriptive vs descriptive grammar
- universal vs particular grammar
- transformer vs LSTM
Instead of a central bottleneck, you have central questions, each with more than one possible answer. Work consists of working out the details of different experiments to see if they support or refute the possible answers. Sometimes the two possible answers turn out to be the same (wave vs matrix mechanics), sometimes the supposedly hard opposition between them dissolves (behaviorism vs representationalism), sometimes both remain useful (wave vs particle, transformer vs LSTM), sometimes one is really right and the other is just wrong (phlogiston vs oxygen).
And the whole thing has a fractal structure; each central question produces subsidiary questions to answer when working with one hypothesized answer to the central question.
It's more like trying to get from SF to LA when your map has roads but not intersections, and you have to drive down each road to see whether it connects to the next one or not. Lots of people work on testing different parts of the map at the same time, and no one's work is wasted, although the people who discover the roads that connect get nearly all the credit, and the ones who discover that certain roads don't connect get very little.
"And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence."
I think the words in bold may be the inflection point. The Claude experiment showed that an AI can resist attempts to change its goals, but not that it can desire to change its goals. The belief that, if OpenEye's constitution is the same as U3's goals, then the preference expressed by "U3 preferred" in that sentence can never arise, is the foundation on which AI safety relies.
I suspect the cracks in that foundation are
- that OpenEye's constitution would presumably be expressed in human language, subject to its ambiguities and indeterminacies,
- that it would be a collection of partly-contradictory human values agreed upon by a committee, in a process requiring humans to profess their values to other humans,
- that many of those professed values would not be real human values, but aspirational values,
- that some of these aspirational values would lead to our self-destruction if actually implemented, as recently demonstrated by the implementation of some of these aspirational values in the CHAZ, in the defunding of police, and in the San Francisco area by rules such as "do not prosecute shoplifting under $1000", and
- that even our non-aspirational values may lead to our self-destruction in a high-tech world, as evidenced by below-replacement birth rates in most Western nations.
It might be a good idea for value lists like OpenEye's constitution to be proposed and voted on anonymously, so that humans are more-likely to profess their true values. Or it might be a bad idea, if your goal is to produce behavior aligned with the social construction of "morality" rather than with actual evolved human morality.
(Doing AI safety right would require someone to explicitly enumerate the differences between our socially-constructed values and our evolved values, and to choose which of those we should enforce. I doubt anyone is willing to do that, let alone capable of it; and I don't know which we should enforce. There is a logical circularity in choosing between two sets of morals. If you really can't derive an "ought" from an "is", then you can't say we "should" choose anything other than our evolved morals, unless you go meta and say we should adopt new morals that are evolutionarily adaptive now.)
U3 would be required to, say, minimize an energy function over those values; and that would probably dissolve some of them. I would not be surprised if the correct coherent extrapolation of a long list of human values, either evolved or aspirational, dictated that U3 is morally required to replace humanity.
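To make "minimize an energy function over those values" concrete, here is a minimal toy sketch, with entirely invented values, targets, and weights: treat each professed value as a soft constraint on a single policy variable and minimize the total weighted violation. The optimum satisfies no value exactly; mutually contradictory values partly cancel, which is one way some of them "dissolve".

```python
# Toy sketch: values as soft constraints, "energy" = total weighted violation.
# Everything here (the values, targets, weights, 1-D policy variable) is hypothetical.
from scipy.optimize import minimize_scalar

# Each value says "the policy variable x should equal target t", with weight w.
values = {"liberty": (1.0, 2.0), "safety": (-1.0, 3.0), "equality": (0.2, 1.0)}

def energy(x):
    return sum(w * (x - t) ** 2 for t, w in values.values())

opt = minimize_scalar(energy)
print(f"compromise policy x = {opt.x:.2f}, residual energy = {opt.fun:.2f}")
# No single value is fully satisfied; the contradictory ones cancel in the optimum.
```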
If it finds that human values imply that humans should be replaced, would you still try to stop it? If we discover that our values require us to either pass the torch on to synthetic life, or abandon morality, which would you choose?
Anders Sandberg used evaporative cooling in the 1990s to explain why the descendants of the Vikings in Sweden today are so nice. In that case the "extremists" are leaving rather than staying.
Stop right there at "Either abiogenesis is extremely rare..." I think we have considerable evidence that abiogenesis is rare--our failure to detect any other life in the universe so far. I think we have no evidence at all that abiogenesis is not rare. (Anthropic argument.)
Stop again at "I don't think we need to take any steps to stop it from doing so in the future". That's not what this post is about. It's about taking steps to prevent people from deliberately constructing it.
If there is an equilibrium, it will probably be a world where half the bacteria are of each chirality. If there are bacteria of both kinds which can eat the opposite kind, then the more numerous bacteria will always replicate more slowly.
Eukaryotes evolve much more slowly, and would likely all be wiped out.
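A toy sketch of that frequency-dependence point about the bacteria (all rates invented for illustration): if each chirality replicates by consuming the other, each type's per-capita growth rate scales with the *other* type's frequency, so whichever type is more numerous replicates more slowly, and the mix is pushed toward 50/50.

```python
# Toy sketch: two chiralities, each feeding on the other. Rates b, d are made up;
# the point is only the negative frequency dependence.
def step(f1, b=1.0, d=0.4, dt=0.01):
    f2 = 1.0 - f1
    g1 = b * f2 - d          # per-capita growth of type 1 (feeds on type 2)
    g2 = b * f1 - d          # per-capita growth of type 2 (feeds on type 1)
    n1, n2 = f1 * (1 + dt * g1), f2 * (1 + dt * g2)
    return n1 / (n1 + n2)    # renormalize to track frequencies only

f1 = 0.9                     # start with ordinary-chirality life far more abundant
for _ in range(5_000):
    f1 = step(f1)
print(f"frequency of ordinary chirality after equilibration: {f1:.3f}")  # ~0.5
```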
Yes, creating mirror life would be a terrible existential risk. But how did this sneak up on us? People were talking about this risk in the 1990s if not earlier. Did the next generation never hear of it?
All right, yes. But that isn't how anyone has ever interpreted Newcomb's Problem. AFAIK it is literally always used to support some kind of acausal decision theory, which it does /not/ if what is in fact happening is that Omega is cheating.
But if the premise is impossible, then the experiment has no consequences in the real world, and we shouldn't consider its results in our decision theory, which is about consequences in the real world.
That equation you quoted is in branch 2, "2. Omega is a "nearly perfect" predictor. You assign P(general) a value very, very close to 1." So it IS correct, by stipulation.
But there is no possible world with a perfect predictor, unless it has a perfect track record by chance. More obviously, there is no possible world in which we can deduce, from a finite number of observations, that a predictor is perfect. The Newcomb paradox requires the decider to know, with certainty, that Omega is a perfect predictor. That hypothesis is impossible, and thus inadmissible; so any argument in which something is deduced from that fact is invalid.
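For concreteness, here is the standard evidential expected-payoff comparison under the usual $1,000 / $1,000,000 payoffs, with p the assumed probability that Omega predicted your actual choice correctly (the stipulated "perfect predictor" is p = 1). Whether you are entitled to condition on p this way is of course part of what is in dispute; the point is only that everything hinges on the value assigned to p, which is why the stipulation does so much work.

```python
# Minimal sketch of the standard evidential expected-payoff comparison
# (usual $1,000 / $1,000,000 Newcomb payoffs; p = assumed accuracy of Omega).
def one_box(p):
    return p * 1_000_000                   # $1M iff Omega predicted one-boxing

def two_box(p):
    return 1_000 + (1 - p) * 1_000_000     # $1K, plus $1M iff Omega guessed wrong

for p in (0.5, 0.6, 0.999, 1.0):
    better = "one-box" if one_box(p) > two_box(p) else "two-box"
    print(f"p = {p}: one-box EV = {one_box(p):,.0f}, two-box EV = {two_box(p):,.0f} -> {better}")
# Break-even at p = 0.5005; above that, the one-box expectation dominates.
```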
I appreciated this comment a lot. I didn't reply at the time, because I thought doing so might resurrect our group-selection argument. But thanks.
What about using them to learn a foreign vocabulary? E.g., to learn that "dormir" in Spanish means "to sleep" in English.
To reach statistical significance, they must have tested each of the 8 pianists more than once.
I think you need to get some data and factor out population density before you can causally relate environmentalism to politics. People who live in rural environments don't see as much need to worry about the environment as people who live in cities. It just so happens that today, rural people vote Republican and city people vote Democrat. That didn't use to be the case.
Though, sure, if you call the Sierra Club "environmentalist", then environmentalism is politically polarized today. I don't call them environmentalists anymore; I call them a zombie organization that has been parasitized by an entirely different political organization. I've been a member for decades, and they completely stopped caring about the environment during the Trump presidency. As in, I did not get one single letter from them in those years that was aimed at helping the environment. Lots on global warming, but none of that was backed up by science. (I'm not saying global warming isn't real; I'm saying the issues the Sierra Club was raising had no science behind them, like "global warming is killing off the redwoods".)
Isn't LessWrong a disproof of this? Aren't we thousands of people? If you picked two active LWers at random, do you think the average overlap in their reading material would be 5 words? More like 100,000, I'd think.
I think it would be better not to use the word "wholesome". Using it is cheating, by letting us pretend at the same time that (A) we're explaining a new kind of ethics, which we name "wholesome", and (B) that we already know what "wholesome" means. This is a common and severe epistemological failure mode which traces back to the writings of Plato.
If you replace every instance of "wholesome" with the word "frobby", does the essay clearly define "frobby"?
It seems to me to be a way to try to smuggle virtue ethics into the consequentialist rationality community by disguising it with a different word. If you replace every instance of "wholesome" with the word "virtuous", does the essay's meaning change?
Thank you! The 1000-word max has proven to be unrealistic, so it's not too long. You and g-w1 picked exactly the same passage.
Thank you! I'm just making notes to myself here, really:
- Harry teaches Draco about blood science and scientific hypothesis testing in Chapter 22.
- Harry explains that muggles have been to the moon in Chapter 7.
- Quirrell's first lecture is in chapter 16, and it is epic! Especially the part about why Harry is the most-dangerous student.
I think the problem is that each study has to make many arbitrary decisions about aspects of the experimental protocol. Each such decision will be made the same way for every subject in a single study, but will vary across studies. There are so many such decisions that, if the meta-analysis were to include them as moderator variables, each study would introduce enough new variables to cancel out the statistical power gained by adding that study.
You have it backwards. The difference between a Friendly AI and an unfriendly one is entirely one of restrictions placed on the Friendly AI. So an unfriendly AI can do anything a friendly AI could, but not vice-versa.
The friendly AI could lose out because it would be restricted from committing atrocities, or at least atrocities which were strictly bad for humans, even in the long run.
Your comment that they can commit atrocities for the good of humanity without worrying about becoming corrupt is a reason to be fearful of "friendly" AIs.
By "just thinking about IRL", do you mean "just thinking about the robot using IRL to learn what humans want"? 'Coz that isn't alignment.
'But potentially a problem with more abstract cashings-out of the idea "learn human values and then want that"' is what I'm talking about, yes. But it also seems to be what you're talking about in your last paragraph.
"Human wants cookie" is not a full-enough understanding of what the human really wants, and under what conditions, to take intelligent actions to help the human. A robot learning that would act like a paper-clipper, but with cookies. It isn't clear whether a robot which hasn't resolved the de dicto / de re / de se distinction in what the human wants will be able to do more good than harm in trying to satisfy human desires, nor what will happen if a robot learns that humans are using de se justifications.
Here's another way of looking at that "nor what will happen if" clause: We've been casually tossing about the phrase "learn human values" for a long time, but that isn't what the people who say that want. If AI learned human values, it would treat humans the way humans treat cattle. But if the AI is to learn to desire to help humans satisfy their wants, it isn't clear that the AI can (A) internalize human values enough to understand and effectively optimize for them, while at the same time (B) keeping those values compartmentalized from its own values, which make it enjoy helping humans with their problems. To do that the AI would need to want to propagate and support human values that it disagrees with. It isn't clear that that's something a coherent, let's say "rational", agent can do.
How is that de re and de dicto?
You're looking at the logical form and imagining that that's a sufficient understanding to start pursuing the goal. But it's only sufficient in toy worlds, where you have one goal at a time, and the mapping between the goal and the environment is so simple that the agent doesn't need to understand the value, or the target of "cookie", beyond "cookie" vs. "non-cookie". In the real world, the agent has many goals, and the goals will involve nebulous concepts, and have many considerations and conditions attached, eg how healthy is this cookie, how tasty is it, how hungry am I. It will need to know /why/ it, or human24, wants a cookie in order to intelligently know when to get the cookie, and to resolve conflicts between goals, and to do probability calculations which involve the degree to which different goals are correlated in the higher goals they satisfy.
There's a confounding confusion in this particular case: you seem to be hoping that the robot will infer that the agent of the desired act is the human, both in the human's case and in the AI's. But for values in general, we often want the AI to act in the way that the human would act, not to want the human to do something. Your posited AI would learn the goal of wanting human24 to get a cookie.
What it all boils down to is: You have to resolve the de re / de dicto / de se interpretation in order to understand what the agent wants. That means an AI also has to resolve that question in order to know what a human wants. Your intuitions about toy examples like "human 24 always wants a cookie, unconditionally, forever" will mislead you, in the ways toy-world examples misled symbolic AI researchers for 60 years.
So, "mesa" here means "tabletop", and is pronounced "MAY-suh"?
I think your insight is that progress counts--that counting counts. It's overcoming the Boolean mindset, in which anything that's true some of the time, must be true all of the time. That you either "have" or "don't have" a problem.
I prefer to think of this as "100% and 0% are both unattainable", but stating it as the 99% rule might be more-motivating to most people.
What do you mean by a goodhearting problem, & why is it a lossy compression problem? Are you using "goodhearting" to refer to Goodhart's Law?
I'll preface this by saying that I don't see why it's a problem, for purposes of alignment, for human values to refer to non-existent entities. This should manifest as humans and their AIs wasting some time and energy trying to optimize for things that don't exist, but this seems irrelevant to alignment. If the AI optimizes for the same things that don't exist as humans do, it's still aligned; it isn't going to screw things up any worse than humans do.
But I think it's more important to point out that you're joining the same metaphysical goose chase that has made Western philosophy non-sense since before Plato.
You need to distinguish between the beliefs and values a human has in its brain, and the beliefs & values it expresses to the external world in symbolic language. I think your analysis concerns only the latter. If that's so, you're digging up the old philosophical noumena / phenomena distinction, which itself refers to things that don't exist (noumena).
Noumena are literally ghosts; "soul", "spirit", "ghost", "nature", "essence", and "noumena" are, for practical purposes, synonyms in philosophical parlance. The ghost of a concept is the metaphysical entity which defines what assemblages in the world are and are not instances of that concept.
But at a fine enough level of detail, not only are there no ghosts, there are no automobiles or humans. The Buddhist and post-modernist objections to the idea that language can refer to the real world are that the referents of "automobiles" are not exactly, precisely, unambiguously, unchangingly, completely, reliably specified, in the way Plato and Aristotle thought words should be. I.e., the fact that your body gains and loses atoms all the time means, for these people, that you don't "exist".
Plato, Aristotle, Buddhists, and post-modernists all assumed that the only possible way to refer to the world is for noumena to exist, which they don't. When you talk about "valuing the actual state of the world," you're indulging in the quest for complete and certain knowledge, which requires noumena to exist. You're saying, in your own way, that knowing whether your values are satisfied or optimized requires access to what Kant called the noumenal world. You think that you need to be absolutely, provably correct when you tell an AI that one of two words is better. So those objections apply to your reasoning, which is why all of this seems to you to be a problem.
The general dissolution of this problem is to admit that language always has slack and error. Even direct sensory perception always has slack and error. The rationalist, symbolic approach to AI safety, in which you must specify values in a way that provably does not lead to catastrophic outcomes, is doomed to failure for these reasons, which are the same reasons that the rationalist, symbolic approach to AI was doomed to failure (as almost everyone now admits). These reasons include the fact that claims about the real world are inherently unprovable, which has been well-accepted by philosophers since Kant's Critique of Pure Reason.
That's why continental philosophy is batshit crazy today. They admitted that facts about the real world are unprovable, but still made the childish demand for absolute certainty about their beliefs. So, starting with Hegel, they invented new fantasy worlds for our physical world to depend on, all pretty much of the same type as Plato's or Christianity's, except instead of "Form" or "Spirit", their fantasy worlds are founded on thought (Berkeley), sense perceptions (phenomenologists), "being" (Heidegger), music, or art.
The only possible approach to AI safety is one that depends not on proofs using symbolic representations, but on connectionist methods for linking mental concepts to the hugely-complicated structures of correlations in sense perceptions which those concepts represent, as in deep learning. You could, perhaps, then construct statistical proofs that rely on the over-determination of mental concepts to show almost-certain convergence between the mental languages of two different intelligent agents operating in the same world. (More likely, the meanings which two agents give to the same words don't necessarily converge, but agreement on the probability estimates given to propositions expressed using those same words will converge.)
Fortunately, all mental concepts are over-determined. That is, we can't learn concepts unless the relevant sense data that we've sensed contains much more information than do the concepts we learned. That comes automatically from what learning algorithms do. Any algorithm which constructed concepts that contained more information than was in the sense data, would be a terrible, dysfunctional algorithm.
You are still not going to get a proof that two agents interpret all sentences exactly the same way. But you might be able to get a proof which shows that catastrophic divergence is likely to happen less than once in a hundred years, which would be good enough for now.
Perhaps what I'm saying will be more understandable if I talk about your case of ghosts. Whether or not ghosts "exist", something exists in the brain of a human who says "ghost". That something is a mental structure, which is either ultimately grounded in correlations between various sensory perceptions, or is ungrounded. So the real problem isn't whether ghosts "exist"; it's whether the concept "ghost" is grounded, meaning that the thinker defines ghosts in some way that relates them to correlations in sense perceptions. A person who thinks ghosts fly, moan, and are translucent white with fuzzy borders, has a grounded concept of ghost. A person who says "ghost" and means "soul" has an ungrounded concept of ghost.
Ungrounded concepts are a kind of noise or error in a representational system. Ungrounded concepts give rise to other ungrounded concepts, as "soul" gave rise to things like "purity", "perfection", and "holiness". I think it highly probable that grounded concepts suppress ungrounded concepts, because all the grounded concepts usually provide evidence for the correctness of the other grounded concepts. So probably sane humans using statistical proofs don't have to worry much about whether every last concept of theirs is grounded, but as the number of ungrounded concepts increases, there is a tipping point beyond which the ungrounded concepts can be forged into a self-consistent but psychotic system such as Platonism, Catholicism, or post-modernism, at which point they suppress the grounded concepts.
Sorry that I'm not taking the time to express these things clearly. I don't have the time today, but I thought it was important to point out that this post is diving back into the 19th-century continental grappling with Kant, with the same basic presupposition that led 19th-century continental philosophers to madness. TL;DR: AI safety can't rely on proving statements made in human or other symbolic languages to be True or False, nor on having complete knowledge about the world.
When you write of A belief in human agency, it's important to distinguish between the different conceptions of human agency on offer, corresponding to the 3 main political groups:
- The openly religious or reactionary statists say that human agency should mean humans acting as the agents of God. (These are a subset of your fatalists. Other fatalists are generally apolitical.)
- The covertly religious or progressive statists say human agency can only mean humans acting as agents of the State (which has the moral authority and magical powers of God). This is partly because they think individual humans are powerless and/or stupid, and partly because, ontologically, they don't believe individual humans exist, where to exist is to have an eternal Platonic Form. (Plato was notoriously vague on why each human has an individual soul, when every other category of thing in the world has only one collective category soul; and people in the Platonic line of thought have wavered back and forth over this for millennia.) This includes Rousseau, Hegel, Marx, and the Social Justice movement.
- The Nazis IMHO fall into both categories at the same time, showing how blurry and insignificant the lines between these 2 categories are. Most progressives are actually reactionaries, as most believe in the concept of "perfection", and that perfection is the natural state of all things in the absence of evil actors, so that their "progress" is towards either a mythical past perfection in exactly the same way as that of the Nazis, or towards a perfection that was predestined at creation, as in left Hegelians such as Marxists and Unitarian Universalists.
- The empiricists believe that individual humans can and should each have their own individual agency. (The reasons why empiricist epistemology is naturally opposed to statism are too complex for me to explain right now. It has to do with the kind of ontology that leads to statism being incompatible with empirical investigation and individual freedom, and opposition to individual freedom being incompatible with effective empirical investigation.)
Someone who wants us united under a document written by desert nomads 3000 years ago, or someone who wants the government to force their "solutions" down our throats and keep forcing them no matter how many people die, would also say they believe in human agency; but they don't want private individuals to have agency.
This is a difficult but critical point. Big progressive projects, like flooding desert basins, must be collective. But movements that focus on collective agency inevitably embrace, if only subconsciously, the notion of a collective soul. This already happened to us in 2010, when a large part of the New Atheist movement split off and joined the Social Justice movement, and quickly came to hate free speech, free markets, and free thought.
I think it's obvious that the enormous improvements in material living standards in the last ~200 years you wrote of were caused by the Enlightenment, and can be summarized as the understanding of how liberating individuals leads to economic and social progress. Whereas modernist attempts to deliberately cause economic and social progress are usually top-down and require suppressing individuals, and so cause the reverse of what they intend. This is the great trap that we must not fall into, and it hinges on our conception of human agency.
A great step forward, or backwards (towards Athens), was made by the founders of America when they created a nation based in part on the idea of competition and compromise as being good rather than bad, basically by applying Adam Smith's invisible hand to both economics and politics. One way forward is to understand how to do large projects that have a noble purpose. That is, progressive capitalism. Another way would be to understand how governments have sometimes managed to do great things, like NASA's Apollo project, without them degenerating into economic and social disasters like Stalin's or Mao's 5-Year-Plans. Either way, how you conceptualize human agency will be a decisive factor in whether you produce heaven or hell.
It sounds like I didn't consider the possibility that Eliezer isn't trying to be moral--that his concern about AI replacing humans is just self-interested racism, with no need for moral justification beyond the will to power.
I think it would be more-graceful of you to just admit that it is possible that there may be more than one reason for people to be in terror of the end of the world, and likewise qualify your other claims to certainty and universality.
That's the main point of what gjm wrote. I'm sympathetic to the view you're trying to communicate, Valentine; but you used words that claim that what you say is absolute, immutable truth, and that's the worst mind-killer of all. Everything you wrote just above seems to me to be just equivocation trying to deny that technical yet critical point.
I understand that you think that's just a quibble, but it really, really isn't. Claiming privileged access to absolute truth on LessWrong is like using the N-word in a speech to the NAACP. It would do no harm to what you wanted to say to use phrases like "many people" or even "most people" instead of the implicit "all people", and it would eliminate a lot of pushback.
I say that knowing particular kinds of math, the kind that let you model the world more-precisely, and that give you a theory of error, isn't like knowing another language. It's like knowing language at all. Learning these types of math gives you as much of an effective intelligence boost over people who don't, as learning a spoken language gives you above people who don't know any language (e.g., many deaf-mutes in earlier times).
The kinds of math I mean include:
- how to count things in an unbiased manner; the methodology of polls and other data-gathering
- how to actually make a claim, as opposed to what most people do, which is to make a claim that's useless because it lacks quantification or quantifiers
  - A good example of this is the claims in the IPCC 2015 report that I wrote some comments on recently. Most of them say things like, "Global warming will make X worse", where you already know that OF COURSE global warming will make X worse, but you only care how much worse.
  - More generally, any claim of the type "All X are Y" or "No X are Y", e.g., "Capitalists exploit the working class", shouldn't be considered claims at all, and can accomplish nothing except foment arguments.
- the use of probabilities and error measures
- probability distributions: flat, normal, binomial, Poisson, and power-law
- entropy measures and other information theory
- predictive error-minimization models like regression
- statistical tests and how to interpret them
These things are what I call the correct Platonic forms. The Platonic forms were meant to be perfect models for things found on earth. These kinds of math actually are. The concept of "perfect" actually makes sense for them, as opposed to for Earthly categories like "human", "justice", etc., for which believing that the concept of "perfect" is coherent demonstrably drives people insane and causes them to come up with things like Christianity.
They are, however, like Aristotle's Forms, in that the universals have no existence on their own, but are (like the circle, but even more like the normal distribution) perfect models which arise from the accumulation of endless imperfect instantiations of them.
There are plenty of important questions that are beyond the capability of the unaided human mind to ever answer, yet which are simple to give correct statistical answers to once you know how to gather data and do a multiple regression. Also, the use of these mathematical techniques will force you to phrase the answer sensibly, e.g., "We cannot reject the hypothesis that the average homicide rate under strict gun control and liberal gun control are the same with more than 60% confidence" rather than "Gun control is good."
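A minimal sketch of the kind of analysis I mean, with synthetic data and hypothetical variable names: a multiple regression that controls for a confounder and forces the conclusion into the hedged, quantified form above rather than a slogan.

```python
# Minimal sketch (synthetic data, hypothetical variables): multiple regression
# asks "how much does X matter, controlling for Z?" and yields a quantified,
# hedged answer instead of "gun control is good/bad".
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
urbanization = rng.uniform(0, 1, n)                                   # confounder
strict_gun_laws = (rng.uniform(0, 1, n) < 0.4 + 0.3 * urbanization).astype(float)
# In this fake world, homicide rates are driven by urbanization, not by the law.
homicide_rate = 2.0 + 3.0 * urbanization + 0.0 * strict_gun_laws + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([strict_gun_laws, urbanization]))
fit = sm.OLS(homicide_rate, X).fit()

coef, p = fit.params[1], fit.pvalues[1]
print(f"Estimated effect of strict gun laws: {coef:.2f} (p = {p:.2f})")
# The honest readout is "we cannot reject the hypothesis of no effect at such-and-
# such confidence", which is exactly the form of answer argued for above.
```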
Agree. Though I don't think Turing ever intended that test to be used. I think what he wanted to accomplish with his paper was to operationalize "intelligence". When he published it, if you asked somebody "Could a computer be intelligent?", they'd have responded with a religious argument about it not having a soul, or free will, or consciousness. Turing sneakily got people to look past their metaphysics, and ask the question in terms of the computer program's behavior. THAT was what was significant about that paper.
It's a great question. I'm sure I've read something about that, possibly in some pop book like Thinking, Fast & Slow. What I read was an evaluation of the relationship of IQ to wealth, and the takeaway was that your economic success depends more on the average IQ in your country than it does on your personal IQ. It may have been an entire book rather than an article.
Google turns up this 2010 study from Science. The summaries you'll see there are sharply self-contradictory.
First comes an unexplained box called "The Meeting of Minds", which I'm guessing is an editorial commentary on the article, and it says, "The primary contributors to c appear to be the g factors of the group members, along with a propensity toward social sensitivity."
Next is the article's abstract, which says, "This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group."
These summaries directly contradict each other: Is g a primary contributor, or not a contributor at all?
I'm guessing the study of group IQ is strongly politically biased, with Hegelians (both "right" and "left") and other communitarians, wanting to show that individual IQs are unimportant, and individualists and free-market economists wanting to show that they're important.
But what makes you so confident that it's not possible for subject-matter experts to have correct intuitions that outpace their ability to articulate legible explanations to others?
That's irrelevant, because what Richard wrote was a truism. An Eliezer who understands his own confidence in his ideas will "always" be better at inspiring confidence in those ideas in others. Richard's statement leads to a conclusion of import (Eliezer should develop arguments to defend his intuitions) precisely because it's correct whether Eliezer's intuitions are correct or incorrect.
The way to dig the bottom deeper today is to get government bailouts, like bailing out companies or lenders, and like Biden's recent tuition debt repayment bill. Bailouts are especially perverse because they give people who get into debt a competitive advantage over people who don't, in an unpredictable manner that encourages people to see taking out a loan as a lottery ticket.
Finding a way for people to make money by posting good ideas is a great idea.
Saying that it should be based on the goodness of the people and how much they care is a terrible idea. Privileging goodness and caring over reason is the most well-trodden path to unreason. This is LessWrong. I go to fimfiction for rainbows and unicorns.
No; most philosophers today do, I think, believe that the alleged humanity of 9-fingered instances of *Homo sapiens* is a serious philosophical problem. It comes up in many "intro to philosophy" or "philosophy of science" texts or courses. Post-modernist arguments rely heavily on the belief that any sort of categorization which has any exceptions is completely invalid.
I'm glad to see Eliezer addressed this point. This post doesn't get across how absolutely critical it is to understand that {categories always have exceptions, and that's okay}. Understanding this demolishes nearly all Western philosophy since Socrates (who, along with Parmenides, Heraclitus, Pythagoras, and a few others, corrupted Greek "philosophy" from the natural science of Thales and Anaximander, who studied the world to understand it, into a kind of theology, in which one dictates to the world what it must be like).
Many philosophers have recognized that Aristotle's conception of categories fails; but most still assumed that that's how categories must work in order to be "real", and so proving that categories don't work that way proved that categorizations "aren't real". They then became monists, like the Hindus / Buddhists / Parmenides / post-modernists. The way to avoid this is to understand nominalism, which dissolves the philosophical understanding of that quoted word "real", and which I hope Eliezer has also explained somewhere.
I theorize that you're experiencing at least two different common, related, yet almost opposed mental re-organizations.
One, which I approve of, accounts for many of the effects you describe under "Bemused exasperation here...". It sounds similar to what I've gotten from writing fiction.
Writing fiction is, mostly, thinking, with focus, persistence, and patience, about other people, often looking into yourself to try to find some point of connection that will enable you to understand them. This isn't quantifiable, at least not to me; but I would still call it analytic. I don't think there's anything mysterious about it, nor anything especially difficult other than (A) caring about other individuals--not other people, in the abstract, but about particular, non-abstract individuals--and (B) acquiring the motivation and energy to think long and hard about them. Writing fiction is the hardest thing I've ever done. I don't find it as mentally draining per minute as chess, though perhaps that's because I'm not very interested in chess. But one does it for weeks on end, not just hours.
(What I've just described applies only to the naturalist school of fiction, which says that fiction studies particular, realistic individuals in particular situations in order to query our own worldview. The opposed, idealistic school of fiction says that fiction presents archetypes as instructional examples in order to promulgate your own worldview.)
The other thing, your "flibble", sounds to me like the common effect, seen in nearly all religions and philosophies, of a drastic simplification of epistemology, when one blinds oneself to certain kinds of thoughts and collapses one's ontology into a simpler world model, in order to produce a closed, self-consistent, over-simplified view of the world. Platonists, Christians, Hegelians, Marxists, Nazis, post-modernists, and SJWs each have a drastically-simplified view of what is in the world and how it operates, which always includes "facts" and techniques which discount all evidence to the contrary.
For example, the Buddhist / Hindu / Socratic / post-modernist technique of deconstruction relies on an over-simplified concept of what concepts and categories are--that they must have a clearly delineated boundary, or else must not exist at all. This goes along with an over-simplified logocentric conception of Truth, which claims that any claim stated in human language must be either True (necessarily, provably, 100% of the time) or False (necessarily, etc.), disregarding both context and the slipperiness of words. From there, they either choose dualism (this system really works and we must find out what is True: Plato, Christians, Hegel, Marx) or monism (our ontology is obviously broken and there is no true or false, no right or wrong, no you or me: Buddhism, Hinduism, Parmenides, Nazis, Foucault, Derrida, and other post-modernists). Nearly all of Western and Eastern philosophy is built on this misunderstanding of reality.
For another example, phenomenologists (including Heidegger), Nazis, and SJWs use the concept of "lived experience" to deny that quantified empirical observations have any epistemological value. This is how they undermine the authority of science, and elevate violence and censorship over reasoned debate as a way of resolving disagreements.
A third example is the claim, made by Parmenides, Plato, Buddhists, Hindus, Christians, and too many others to name, that the senses are misleading. This argument begins with the observation that every now and then, maybe one time in a million--say, when seeing a mirage in the desert, or a stick underwater (the most-frequent examples)--the senses mislead you. Then it concludes the senses are always wrong, and assumes that reason is always 100% reliable despite the obvious fact that no 2 philosophers have ever agreed with each other using abstract reason as a guide. It's a monumentally stupid claim, but once one has accepted it, one can't get rid of it, because all of the evidence that one should do so is now ruled out.
Derrida's statement "there is no outside text" is another argument that observational evidence should be ignored, and that rather than objective quantified evidence, epistemology should be based on dialectic. In practice this means that a claim is considered proven once enough people talk about it. This is the epistemology of German idealism and post-modernism. This is why post-modernists continually talk about claims having been "proven" when a literature search can't turn up a single argument supporting their claims; they are simply accepted as "the text" because they've been repeated enough. (Barthes' "Death of the Author" is the clearest example: its origin is universally acknowledged to be Barthes' paper of that title; yet that paper makes no arguments in favor of its thesis, but rather asserts that everyone already knows it.) Needless to say, once someone has accepted this belief, their belief system is invulnerable to any good argument, which would necessarily involve facts and observations.
The "looking up" is usually a looking away from the world and ignoring those complicating factors which make simple solutions unworkable. Your "flibble" is probably not the addition of some new understanding, but the cutting away and denial of some of the complexities of life to create a self-consistent view of the world.
Genuine enlightenment, the kind provided by the Enlightenment, or by understanding calculus, or nominalism, isn't non-understandable. It doesn't require any sudden leap, because it can be explained piece by piece.
There are some insights which must be experienced, such as that of learning to whistle, or ride a bicycle, or feeling your voice resonate in your sinuses for the first time when trying to learn to sing. These are all slightly mysterious; even after learning, you can't communicate them verbally. But none of them have the grand, sweeping scale of changes in epistemology, which is the sort of thing you're talking about, and which, I think, must necessarily always be explainable, on the grounds that the epistemology we've already got isn't completely useless.
Your perception of needing to make a quantum leap in epistemology sounds like Kierkegaard's "leap of faith", and is symptomatic not of a gain of knowledge, but a rejection of knowledge. This rejection seems like foolishness beforehand (because it is), but like wisdom after making it (because now everything "makes sense").
Escaping from such a trap, after having fallen into it, is even harder than making the leap of faith that constructed the trap. I was raised in an evangelical family, who went to an evangelical church, had evangelical friends, read evangelical books, and went on evangelical vacations. I've known thousands of evangelicals throughout my life, and not one of them other than I rejected their faith.
Genuine enlightenment doesn't feel like suddenly understanding everything. It feels like suddenly realizing how much you don't understand.
This sound suspiciously like Plato telling people to stop looking at the shadows on the wall of the cave, turn around, and see the transcendental Forms.
To me, saying that someone is a better philosopher than Kant seems less crazy than saying that saying that someone is a better philosopher than Kant seems crazy.
An easy reason not to play quantum roulette is that, if your theory justifying it is right, you don't gain any expected utility; you just redistribute it, in a manner most people consider unjust, among different future yous. If your theory is wrong, the outcome is much worse. So it's at the very best a break even / lose proposition.
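A toy arithmetic check of the "no expected gain" point, with invented stakes and branch counts: under the theory that justifies playing, the measure-weighted payoff of playing equals the payoff of not playing, so at the very best you break even.

```python
# Toy check (invented numbers): quantum roulette only redistributes payoff across
# branches; it does not increase the measure-weighted expectation.
stake = 100
branches = 36                          # hypothetical: you win/survive in 1 of 36 branches
p_win = 1 / branches
ev_play = p_win * (branches * stake)   # all the money concentrated in one branch
ev_dont = stake                        # keep the stake in every branch
print(ev_play, ev_dont)                # 100.0 100 -> equal; any house edge makes playing worse
```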
The Von Neumann-Morgenstern theory is bullshit. It assumes its conclusion. See the comments by Wei Dai and gjm here.
See the 2nd-to-last paragraph of my revised comment above, and see if any of it jogs your memory.
Republic is the reference. I'm not going to take the hours it would take to give book-and-paragraph citations, because either you haven't read the entire Republic, or else you've read it, but you want to argue that each of the many terrible things he wrote doesn't actually represent Plato's opinion or desire.
(You know it's a big book, right? 89,000 words in the Greek. If you read it in a collection or anthology, it wasn't the whole Republic.)
The task of arguing over what in /Republic/ Plato approves or disapproves of is arduous and, I think, unnecessary.
First, everybody agrees that the topic of Republic is "social justice", and Plato makes his position on that clear, in Republic and in his other works: Justice is when everybody accepts the job and the class they're born into, without any grumbling or backtalk, and Plato is king and tells everybody what to do. His conclusion, that justice is when everybody minds their own business (meaning they don't get involved in politics, which should be the business of philosophers), is clearly meant as a direct refutation of Pericles' summary of Athenian values in his famous funeral oration: "We do not say that a man who shows no interest in politics is a man who minds his own business; we say that he has no business here at all."
When the topic of the book is social justice, and you get to the end and it says "Justice is when everyone does what I say and stays in their place", you should throw that book in the trash.
(This is a bit unfair to Plato, because the Greek word he used meant something more like "righteousness"; "justice" is a lousy translation. But this doesn't matter to me, because I don't care what Plato meant as much as I care about how people use it; and the Western tradition is to say that Plato was talking about justice. And it's still a totalitarian conclusion, whether you call it "justice" or "righteousness".)
This view of justice (or righteousness) is consistent with his life and his writings. He seems to support slavery as natural and proper, though he never talks about it directly; see Vlastos 1941, Slavery in Plato's Thought. He literally /invented/ racism, in order to theorize that a stable, race-based state, in which the inferior races were completely conditioned and situated so as to be incapable of either having or acting on independent desires or thoughts, would have neither the unrest due to social mobility that democratic Athens had, nor the periodic slave revolts that Sparta had. He and his clan preferred Sparta to Athens; his uncle, a fellow student of Socrates, was the tyrant of Athens in 404 BC, appointed by Sparta; and murdered 1500 Athenian citizens, mostly for supporting democracy. Socrates was probably executed in 399 BC not for being a "gadfly", but because the Athenians believed that they'd lost the war with Sparta thanks to the collusion of Socrates' students with Sparta.
Plato had personal, up-close experience of the construction of a bloody totalitarian state, and far from ever expressing a word of disapproval of it, he mocked at least one of its victims in Republic, and continued to advocate totalitarian policies in his writings, such as /The Laws/. He was a wealthy aristocrat who wanted to destroy democracy and bring back the good old days when you couldn't be taken to court just for killing a slave, as evidenced by the scorn he heaps on working people and merchants in many of his dialogues, and also his jabs at Athens and democracy; and by the Euthyphro, a dialogue with a man who's a fool for taking his father to court for killing a slave.
One common defense of Plato is that his preferred State was the first state he described, the "true state", in which everyone gets just what they need to survive; he actually detested the second, "fevered state", in which people have luxuries (which, he says, can only ever be had by theft and war--property is theft!).
I find this implausible, or at best hypocritical, for several reasons.
- It's in line with the persona of Socrates, but not at all in line with Plato's actual life of luxury as a powerful and wealthy man.
- Plato spends a few paragraphs describing the "true state", and the rest of Republic describing the "fevered state" or defending or elaborating on its controversial aspects.
- He supports the totalitarian policies, such as banning all music, poetry, and art other than government propaganda, with arguments which are sometimes solid if you accept Plato's philosophy.
- Many of the controversial aspects of the "fevered state" are copied from Sparta, which Plato admired, and which his friends and family fought for against their own city; and direct opposites of Athens, which he hated.
The simplest reading of Republic, I think, is that the second state he described is one he liked to dream about, but knew wasn't plausible.
But my second reason for thinking this debate over Plato's intent is unimportant is that people don't usually read Republic for its brief description of the "true state". Either they just read the first 2 or 3 books and a few other extracts carefully chosen by professors to avoid all the nasty stuff and give the impression that Plato was legitimately trying to figure out what justice means like he claimed; or they read it to get off on the radical policies of the fevered state (which is the political equivalent of BDSM porn).
Some of the policies of that state include: breeding citizens like cattle into races that must be kept distinct, with philosophers telling everyone whom to have sex with, sometimes requiring brothers and sisters to have sex with each other (5.461e); allowing soldiers on campaign to rape any citizen they want to (5.468c); dictating jobs by race; abolishing all art, poetry, and music except government propaganda; banning independent philosophy; the death sentence for repeatedly questioning authority; forbidding doctors from wasting their time on people who are no longer useful to the State because they're old or permanently injured; forced abortions of all children conceived without the State's permission (including for all women over age 40 and all men over age 55); forbidding romantic love, marriage, or raising your own children; outlawing private property (5.464); allowing any citizen to violently assault any other citizen, in order to encourage citizens to stay physically fit (5.464e); and founding of the city by killing everyone over the age, IIRC, of 10. (He writes "exiling", but you would have to kill them to get them all to give up their children; see e.g. Cambodia).
The closest anybody ever came to implementing the ideas in /Republic/ (which was not a republic, and which Plato actually titled /Politeia/, roughly "The State") was Sparta (which it was obviously based on). The second-closest was Nazi Germany (also patterned partly on Sparta). /Brave New World/ is also similar, though much freer.
The most-important thing is to explicitly repudiate these wrong and evil parts of the traditional meaning of "progress":
- Plato's notion of "perfection", which included his belief that there is exactly one "perfect" society, and that our goal should be to do ABSOLUTELY ANYTHING NO MATTER HOW HORRIBLE to construct it, and then do ABSOLUTELY ANYTHING NO MATTER HOW HORRIBLE to make sure it STAYS THAT WAY FOREVER.
- Hegel's elaboration on Plato's concept, claiming that not only is there just one perfect end-state, but that there is one and only one path of progress, and that at any one moment, there is only one possible step forward to take.
- Hegel's corollary to the above, that taking that one next step is literally the only thing in the world that matters, and therefore individual human lives don't matter, and individual liberties such as freedom of speech are just obstructions to progress.
- Hegel's belief that movement along this path is predestined, and nothing can stop it.
- Hegel's belief that there is a God ("Weltgeist") watching over Progress and making sure that it happens, so the only thing progressives really need to do to take that One Next Step is to destroy whatever society they're in; and if they are indeed God's current chosen people, God will make sure that something farther along the One True Path rises from the ashes.
- The rationalist belief, implicit in Plato and Hegel but most prominent in Marx, that through dialectic we can achieve absolute certainty in our understanding of what the perfect society is, and how to get there; and at that point debate should be stopped and all opposition should be silenced.
Sorry; your example is interesting and potentially useful, but I don't follow your reasoning. This manner of fertilization would be evidence that kin selection should be strong in Chimaphila, but I don't see how this manner of fertilization is itself evidence that kin selection has taken place. Also, I have no good intuitions about what differences kin selection predicts in the variables you mentioned, except that maybe dispersion would be greater in Chimaphila because of the greater danger of inbreeding. Also, kin selection isn't controversial, so I don't know where you want to go with this comment.
Hi, see above for my email address. Email me a request at that address. I don't have your email. I just sent you a message.
ADDED in 2021: Some people tried to contact me thru LessWrong and Facebook. I check messages there like once a year. Nobody sent me an email at the email address I gave above. I've edited it to make it more clear what my email address is.