Posts

"NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) 2014-09-04T16:58:55.950Z · score: 10 (38 votes)

Comments

Comment by matthew_opitz on Costs are not benefits · 2016-11-10T16:56:59.968Z · score: 0 (0 votes) · LW · GW

What does this framework give me? Well, I bet that I'll be able to predict the onset of the next world economic crisis much better than the perma-bear goldbugs of the Austrian school, the Keynesians who think that a little stimulus is all that's ever needed to avoid a crisis, the monetarists, or any other economist. I can know when to stay invested in equities, when to cash out and invest in gold, and when to cash out of gold and buy into equities for the next bull market, and so on. I bet I can grow my investment over the next 20 years much better than the market average.

There are plenty of mainstream economists who will warn from time to time that there might be a recession approaching within the next few years. But what objective basis do they ever have for saying this? Aren't they usually just trying to gauge fickle investor and consumer "animal spirits"? And how specific and actionable are any of their predictions, really? Can an investor use any of them to guide trades and still sleep well at night and not feel like a dupe who is following some random guru's hunch?

To time the cycles, I do not need to rely on fickle estimations of consumer confidence or any unobservable psychology like that. There are specific objective numbers that I will be keeping an eye on in the coming years—indicators that are not mainstream, including Marxist authors' estimations of the world average rate of profit, the annual world production of physical gold, and the annual world economic output as measured in gold ounces (important!). No mainstream economist that I know of—not even the Austrian goldbugs—thinks that world gold production has a causal role in world economic cycles.

If this sounds cuckoo, I suggest reading these two short articles:
  1. "On gold's monetary role today": https://critiqueofcrisistheory.wordpress.com/a-reply-to-anonymous-on-golds-monetary-role-today/
  2. "Can the capitalist state ensure full employment by providing a replacement market?": https://critiqueofcrisistheory.wordpress.com/can-the-capitalist-state-ensure-full-employment-by-providing-a-replacement-market/

Yes, it does not surprise me that most economists were wrong about the expected inflation from quantitative easing. They could not foresee that most of this money would not enter circulation, nor serve as the basis for additional multiples of credit creation that would enter circulation. They could not foresee that this QE money would sit inert for the time being as "excess reserves," because the central bank pays interest on those excess reserves at a rate competitive with other attainable market interest rates. In reality, these excess reserves—so long as interest is paid on them—are not typical base money; they function more like interest-bearing bonds. Heck, I didn't even have to know anything about Marxism to anticipate that!

Now, here's a concrete prediction: if the Federal Reserve were to cease all payment of interest on excess reserves without at the same time unwinding the QEs, leaving a permanently-swollen monetary base of token money that would then have an incentive to be activated as the basis for many multiples of loans on top of it—then you would see continued depreciation of the dollar with respect to gold.

Thankfully, though, I am not reduced to trying to mind-read what the Federal Reserve will do, because my strategy of trading between equities and gold is only concerned with the relative prices of those two. I will come out ahead in real terms by correctly timing relative changes in their prices, regardless of whatever happens to their nominal dollar prices as a result of Federal Reserve shenanigans. And I would argue that, on average over the medium to long run, the Federal Reserve's operations are neutral with respect to these relative prices. The Federal Reserve can change the nominal form of crises (whether they take the appearance of unemployment, dollar-inflation, or some intermediate admixture of the two, like 1970s stagflation), but the Federal Reserve cannot actually influence the relative movements of equities and gold. If, thanks to incredibly dovish Federal Reserve policy in response to the onset of a crisis, equities continue to appreciate in dollar terms, gold will be appreciating even more.
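
To make the idea concrete, here is a minimal sketch (Python, with entirely made-up price series; illustrative only, not real data or a backtest) of the relative-price signal I have in mind. The dollar price level drops out of the ratio, which is the whole point:

```python
# Minimal sketch of the equities-vs-gold timing signal described above.
# All numbers are invented for illustration; this is not real data.

equity_index = {2014: 1800.0, 2015: 1950.0, 2016: 2100.0}  # hypothetical index levels in dollars
gold_price = {2014: 1250.0, 2015: 1150.0, 2016: 1300.0}    # hypothetical gold prices in dollars/oz

# Equities priced in gold ounces: Federal Reserve effects on the dollar cancel out of this ratio.
equities_in_gold = {year: equity_index[year] / gold_price[year] for year in equity_index}

for year in sorted(equities_in_gold):
    print(year, round(equities_in_gold[year], 3))

# A rising ratio says equities are beating gold (stay in equities);
# a falling ratio says gold is beating equities (rotate into gold).
```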

Comment by matthew_opitz on Costs are not benefits · 2016-11-08T22:31:51.600Z · score: 0 (0 votes) · LW · GW

Yes, I realize that Marx's labor theory of value is not popular nowadays. I think that is a mistake. I think even investors would get a better descriptive model of reality if they adopted it for their own uses. That is what I am trying to do myself. I couldn't care less about overthrowing capitalism. Instead, let me milk it for all I can....

As for "labour crystallised in the product," that's not how I think of it, regardless of however Marx wrote about it. (I'm not particularly interested in arguing from quotation, nor would you probably find that persuasive, so I'll just tell you how I make sense of it).

I interpret the labor-value of something (good or service) as the relative proportion of society's aggregate labor that must be devoted to its production in order to, with a given level of productivity of labor, reproduce that good or service sustainably over the long-term. Nothing gets crystallized in any individual product. That would be downright metaphysical thinking.

After all, just because an individual item has a certain labor-value doesn't mean that it will individually automatically fetch a certain price. It is not the individual labor-value that influences price. A pair of sneakers made by a factory that is half as efficient as the typical sneaker factory does not have twice the labor-value or fetch twice the price. What matters is the "socially-necessary" labor expended on an item. And how can that be perceived? On average in the long-run, if a particular firm's service or production process does not yield an average rate of profit, then that is society's signal, after-the-fact, that some of the labor devoted to that line of production is not being counted by society as having been "socially-necessary" labor. (Of course, technological change can lower the socially-necessary labor for a certain line of production, which will appear as falling prices (assuming a non-depreciating currency) through competition and below-average profits for any firms still using old techniques that waste labor that is now socially-unnecessary).

If business owners were to rely on a crude, metaphysical interpretation of Marx's labor theory of value that assured them that the value was already baked into their product as soon as it rolled off the production line, they would be unpleasantly surprised if it were to turn out that they could not realize the expected labor-value in their product...perhaps due to something like their competitors having, in the intervening time, embarked upon a technological innovation that changed society's unconscious, distributed calculation of what labor was "socially-necessary" for this line of production....

As for your final questions: it's a bit complicated, to say the least. There are even various schools of Marxists that don't agree with each other.

I think there is somewhat of a consensus that there is a real long-term tendency for the (real, inflation-adjusted) world rate of profit to fall, theoretically and empirically, and therefore you can expect there to be an ever-decreasing ceiling on how high (real) interest rates can go during a business cycle before they begin to eat up all of the profit rate and leave nothing for net profit of enterprise, thus precipitating a decline in production and a recession. (Although some Marxists reject that there is a theoretical or empirical tendency for the rate of profit to fall. See Andrew Kliman's book "The Failure of Capitalist Production" if you are interested in this "exciting" debate).

More controversial still is the question of what, if anything, monetary policy can do to influence interest rates and aggregate purchasing power to prevent future recessions. I concur with what I call the "Commodity-Money" school (see Ernest Mandel's work on "Marx's Theory of Money", Sam Williams's "Critique of Crisis Theory" blog, or the writings of Jon Britton), which argues that there is actually very little that monetary authorities can do to alter the course of business cycles, because paper currencies, while no longer legally tied to commodity-money, remain tied to commodity-money in a practical sense, and movements in the world production of commodity-money place practical limits on what authorities governing paper currencies can do.

I don't have the patience to explain all of this here in greater depth when others have already done so elsewhere. Sam Williams's "Critique of Crisis Theory" blog is what I would recommend reading from the top to get the clearest explanation of this stuff.

By the way, my "commodity-money" understanding of Marx's labor theory of value leads me to believe that we are currently entering a boom phase in the business cycle in which equities, on average, will continue to perform well. (I have holdings in Vanguard Total World Stock (VT), for your information. It is a very simple instrument for tracking the world economy with low management fees.) So, expect accelerating growth for 3-4 years. Towards the end of that period, I expect an oncoming credit crisis and recession to be heralded by world gold production starting to decline slightly and interest rates inching upward to a dangerous level that infringes on the net profit of enterprise (hence why a theory of the expected average rate of profit is so useful!)...with little that the Federal Reserve or other monetary authorities will be able or willing to do about it, for fear of depreciating paper currencies too much with respect to commodity-money. Business will appear to boom for a short while longer, but it will be in its unsustainable credit-boom phase by that point, and it will be time to cash out of equities and into commodity-money (gold).
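
For what it's worth, here is the sort of bookkeeping I mean by "keeping an eye on" gold production (a toy sketch in Python with invented figures, not a forecast):

```python
# Toy sketch of the gold-production indicator described above.
# All figures are invented for illustration; real series would come from mining and output statistics.

world_gold_production_tonnes = {2014: 3000, 2015: 3100, 2016: 3150, 2017: 3140}        # hypothetical
world_output_in_gold_ounces = {2014: 4.0e10, 2015: 4.3e10, 2016: 4.6e10, 2017: 5.0e10}  # hypothetical

years = sorted(world_gold_production_tonnes)
for prev, curr in zip(years, years[1:]):
    gold_growth = world_gold_production_tonnes[curr] / world_gold_production_tonnes[prev] - 1
    output_growth = world_output_in_gold_ounces[curr] / world_output_in_gold_ounces[prev] - 1
    # On this view, gold production rolling over while output (measured in gold) keeps
    # expanding is the warning sign of an unsustainable credit boom.
    flag = "WATCH" if gold_growth < 0 else "ok"
    print(curr, f"gold production {gold_growth:+.1%}", f"output-in-gold {output_growth:+.1%}", flag)
```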

Comment by matthew_opitz on Costs are not benefits · 2016-11-08T13:28:13.204Z · score: 0 (0 votes) · LW · GW

Not "cost of production," but "price of production," which includes the cost of production plus an average rate of profit.

Note that, according to marginalism, profit vanishes at equilibrium and capitalists, on average, earn only interest on their capital. I disagree. At equilibrium (over the long run), an active capitalist (someone who employs capital to produce commodities) can expect, on average, to make a rate of profit that is at all times strictly above the going interest rate. The average rate of profit must always include some substantial "profit of enterprise" to compensate for the added risk of producing and marketing an uncertain product, rather than just being a financial capitalist and earning an interest rate (which typically carries only the lesser risk of the debtor defaulting). If the rate of profit is not substantially above the rate of interest, then over the long run you will see capitalists transition from productive investment into finance (a problem we have right now). This will eventually decrease the supply of commodities and increase their relative prices until it is once again profitable to produce commodities even after deducting interest.
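
A minimal numerical illustration of the gap I am talking about (Python, with invented numbers; the decomposition is the point, not the figures):

```python
# Invented numbers, purely to illustrate the decomposition:
# rate of profit = rate of interest + profit of enterprise.

rate_of_profit = 0.12    # hypothetical average annual rate of profit on productive capital
rate_of_interest = 0.04  # hypothetical going long-term interest rate

profit_of_enterprise = rate_of_profit - rate_of_interest
print(f"profit of enterprise: {profit_of_enterprise:.1%}")  # 8.0%

# If interest rates rise toward the profit rate, this residual gets squeezed toward zero,
# and the incentive to invest productively rather than simply lend disappears --
# the condition claimed above to precede a decline in production.
```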

So, that is one concrete prediction. And empirically, although the average world rate of profit can sometimes briefly dip negative, it is, as a rule, a substantial positive percentage.

And yes, I would argue that demand does not, in the medium to long run, influence "value" or "long-run average market price." It's all about the price of production instead. The practical advantage of this is that it opens up opportunities for arbitrage against other people who don't realize it. For example, if it is 2007 and you see demand for oil surging and the price skyrocketing, you should keep in mind that the price of production of oil has probably not changed that much (excepting the fact that some of the new oil being brought online was shale oil with a higher price of production), and thus oil will be making above-average profits at these prices. You can then expect investment to flood into oil production over the next ~5 years, increasing supply and bringing the market price of oil down to (or even temporarily below) the price of production. If I had had some money back then, instead of being in high school, and if I had known what I know now, I am confident that I could have made some serious money on some sort of long-term oil futures bet. Note that, now that I do have a little bit of money, I am indeed making plays in the market right now based on my analysis, although I won't go into specifics about what those are here...

Note that, so far, we have been taking the prices of production of various things and the average rate of profit as readily discernible "givens" at any point in time. However, prices of production and the average world rate of profit can change as well over the long term.

Even the classical economists didn't really have a theory for what determined these changes. (For example, Adam Smith could tell you that the cost of production was the rent + capital + wages that, on average, was needed to produce something, but how can you anticipate changes in the costs of each of those? And then you need to add on the average rate of profit, but how can you anticipate how the average rate of profit of the world economy will evolve?)

So far, the only theory that I have seen that even tries to explain long-run changes in prices of production and the average world rate of profit is Marx's labor theory of value. For example, see: https://critiqueofcrisistheory.wordpress.com/responses-to-readers-austrian-economics-versus-marxism/why-prices-rise-above-labor-values-during-a-boom/

Note that you don't have to buy into Marx's labor theory of value to do medium-run arbitrage involving prices of production. All you need is classical economics for that.

Only if you wanted to do very long-run arbitrage that took into account technological change and the resulting increases in the productivity of labor in certain sectors—and thus declining production prices for those commodities and a long-term tendency for the worldwide average rate of profit to fall as the so-called "organic composition of capital" increases—would you have to rely on Marx's labor theory of value, or some other yet-to-be-invented theory that could attempt to forecast changes in prices of production and the average worldwide rate of profit.

Comment by matthew_opitz on Costs are not benefits · 2016-11-07T18:57:57.271Z · score: 0 (0 votes) · LW · GW

For the purposes of this discussion, I would define "value" as "long-run average market price." Note that, in this sense, "use-value" has nothing whatsoever to do with value, unless you believe in the subjective theory of value. That's why I say it is unfortunate terminology, and "use-value" should less confusingly be called "subjective practical advantage."

Which economists confuse the two? The conflation of use-value with exchange-value is one of the core assumptions of marginalism, and pretty much everyone these days is a marginalist of some sort, so it would be easier to name economists who don't confuse the two: Steve Keen and Anwar Shaikh are the first two that come to mind. Any Marxist economist will have a good grip on the distinction, so that would include people like Andrew Kliman and Michael Roberts as well.

Comment by matthew_opitz on Costs are not benefits · 2016-11-05T14:17:31.514Z · score: 0 (0 votes) · LW · GW

I was arguing against both the subjective theory of value, and the failure of modern economists to utilize the concepts of use-value and exchange-value as separate things.

Comment by matthew_opitz on Costs are not benefits · 2016-11-04T15:30:46.796Z · score: 0 (0 votes) · LW · GW

I know that the main thrust of the article was about vote trading and not marginalism, but I just have to blow off some frustration at how silly the example at the beginning of the article was, and how juvenile its marginalist premises are in general.

There has been a real retrogression in economics ever since the late 1800s. The classical economists (such as Adam Smith and David Ricardo) were light years ahead of today's marginalists in, among other things, being able to distinguish between "use-value" and "exchange-value," or as I like to call them, "subjective practical advantage" vs. "social advantage."

A lawn-mower might have both a subjective practical advantage and a social advantage. If you have grass in your yard, a lawn-mower might have a subjective practical advantage in being able to cut the grass. And yet, maybe it is an old model that nobody else is interested in, and therefore there is almost no social advantage to owning that lawn-mower (little to no price that one can fetch for it).

Likewise, vice-versa. If, for some reason, all of your grass died, or if you decided to pave over your lawn with a parking lot, then your lawn-mower would probably not have any more subjective practical advantage (unless you could cleverly think of something else to use it for). But it still might have a very important social advantage if others might want to buy it from you. So, you might continue to hoard it (instead of immediately throwing it in the dumpster), in anticipation of having a chance to sell it soon.

Nor do use-value and exchange-value scale in the same fashion. 1000 lawn-mowers is not necessarily 1000x more useful to an individual in a subjective, practical sense. But 1000 lawn-mowers certainly IS 1000x more useful to an individual in terms of exchange-value (assuming that the total size of the market for lawn-mowers is orders of magnitude larger than 1000 lawn-mowers, and thus the seller of these 1000 lawn-mowers forms a negligible part of the overall supply of lawn-mowers. Whereas, if the lawn-mower market is extremely small, then yes, it is possible that the price of 1000x more lawn-mowers will not scale linearly). THIS discrepancy between how use-value scales and exchange-value tends to scale is—contra the early marginalists like Carl Menger and Eugen von Böhm-Bawerk—the basis for the "double-inequality" that causes people to trade—NOT different valuations of how useful something is.

The ECON 101 that is taught nowadays gets this most basic thing wrong: medium-run market prices are NOT determined by demand or subjective desire for a commodity, ONLY by the conditions of supply.

Yes, in the short-run, supply is fixed, and the market price will vary according to demand. But in the medium-run, investment can re-allocate from lines of business that yield below-average profits to lines that yield above-average profits.

Therefore, if interest in a product or service suddenly declines, yes, in the short-run the price will drop. But that will mean that the producers of that product or service will be making below-average profits, or even losses, on that good or activity. They will re-allocate to other activities. Soon the quantity produced will adjust downwards, restricting the supply until the price of the product or service equals once again the cost of production + average rate of profit (what classical economists called the "natural price" or "price of production"—the long-term price needed to sustainably incentivize members of society to continue to reproduce the good or service). Note that this is different from the "cost-price" that the producer pays, as the price of production also includes an average rate of profit, and note that this only applies to "commodities," meaning things whose production can be increased and decreased with investment. Priceless, one-off works of art and other such novelties have their supply fixed and only respond to changes in demand.
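
Put as a toy calculation (Python; the rates and prices are invented, and this is my own simplified gloss that ignores rent, turnover times, and the like):

```python
# Simplified gloss of the "price of production" idea above (my own illustration,
# ignoring rent, turnover time, etc.); all numbers are invented.

def price_of_production(cost_price, average_rate_of_profit):
    """Cost-price marked up by the economy-wide average rate of profit."""
    return cost_price * (1 + average_rate_of_profit)

average_rate_of_profit = 0.10  # hypothetical economy-wide average
cost_price = 80.0              # hypothetical cost-price per unit of some commodity
market_price = 95.0            # hypothetical current market price

p = price_of_production(cost_price, average_rate_of_profit)
print(f"price of production: {p:.2f}, market price: {market_price:.2f}")

if market_price > p:
    print("above-average profits -> investment flows in, supply expands, price gets pushed down")
elif market_price < p:
    print("below-average profits -> producers exit, supply contracts, price gets pushed up")
else:
    print("market price sits at the price of production")
```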

So, subjective consumer desires, in the medium-run (3-5 years), have nothing to do with the market prices of commodities. The market prices of commodities will, instead, tend to fluctuate around the price of production, and the only thing that consumer desires dictate is what quantity will be produced at that price of production. The only thing that really matters, in the medium-run, is how many consumers are willing to pay the price of production for that product. You can, for the medium-run, forget about the rest of the demand curve (how many people would be willing to buy at half the price or double the price, etc.).

So, in short, it should be obvious why buying $5 worth of toothpaste is different from buying $5 worth of shampoo. They have equal spot exchange-values (and probably similar prices of production, if their prices tend, over the medium-run, to fluctuate close to each other's), but they do not necessarily have equal use-values to a particular individual at a particular time. One must weigh the use-value of the $5 before spending it, which means considering all of the other things that one could spend that $5 on, then or in the future; all of these possibilities will have different use-values, albeit the same exchange-value. Only if toothpaste is the best use of that $5 at that point for that individual will that individual want to buy toothpaste.

Use-value vs. exchange-value, and the OBJECTIVE medium-run determination of price according to the price of production (NOT SUBJECTIVE!), were all understood perfectly well 200 years ago, and yet now this probably sounds like some sort of crackpot ranting. It's not. Trust me, it's all there in the writings of the classical economists themselves, who were head-and-shoulders above the charlatans in mainstream economics today.

Although I am not a neoreactionary, I do tend to sympathize from time to time with their view that, despite all of our technological ease, we are really living in an era of intellectual and social decay....

Comment by matthew_opitz on Sleepwalk bias, self-defeating predictions and existential risk · 2016-04-23T16:09:11.930Z · score: 3 (3 votes) · LW · GW

There are also some examples of anti-sleepwalk bias:

  1. World War I. The crisis unfolded over more than a month. Surely the diplomats will work something out, right? Nope.
  2. Germany's invasion of the Soviet Union in World War II. Surely some of Hitler's generals will speak up and persuade Hitler away from this crazy plan when Germany has not even finished the first part of the war against Britain. Surely Germany would not willingly put itself into another two-front war even after many generals had explicitly decided that Germany must never get involved in another two-front war ever again. Right? Nope.
  3. The sinking of the Titanic. Surely, with over two and a half hours to react to the iceberg impact before the ship finished sinking, SURELY there would be enough time to get all of the lifeboats safely and calmly loaded up to near max capacity, right? NOPE. And going even further back to the decision to not put enough lifeboats on in the first place...SURELY the White Star Line must have a good reason for this. SURELY this means that the ship really is unsinkable, right? NOPE.
  4. The 2008 financial crisis. SURELY the monetary authorities have solved the problem of preventing recessions and smoothing out the business cycle. So SURELY I as a private trader can afford to be as reckless as I want and not have to worry about systemic risk, etc.

Comment by matthew_opitz on Suppose HBD is True · 2016-04-22T14:54:51.445Z · score: 0 (0 votes) · LW · GW

I don't know...would clothing alone tell you more than clothing plus race? I think we would need to test this.

Is a poorly-dressed Irish-American (or at least, someone who looks Irish-American with bright red hair and pale white skin) as statistically likely to mug someone, given a certain situation (deserted street at night, etc.) as a poorly-dressed African-American? For reasons of political correctness, I would not like to share my pre-suppositions.

I will say, however, that, in certain historical contexts (1840s, for example), my money would have been on the Irish-American being more likely to mug me, and I would have taken more precautionary measures to avoid those Irish parts of town, whereas I would have expected the neighborhoods inhabited by free blacks to have been relatively safe.

Nowadays, I don't know what the statistics would be if you measured crimes perpetrated by certain races, adjusted for socio-economic category (in other words, comparing poor to poor, or wealthy to wealthy, in each group). But many people would probably have their suspicions. So, can we test these intuitions to see if they are just bigoted racism, or if they unfortunately happen to be accurate generalizations?

Comment by matthew_opitz on Suppose HBD is True · 2016-04-22T14:32:49.736Z · score: 0 (0 votes) · LW · GW

True in many cases, although for some jobs the task might not be well-specified in advance (such as in some cutting-edge tech jobs), and what you need are not necessarily people with any particular domain-specific skills, but rather just people who are good all-around adaptable thinkers and learners.

Comment by matthew_opitz on Open thread, Apr. 18 - Apr. 24, 2016 · 2016-04-21T22:57:33.372Z · score: 2 (2 votes) · LW · GW

Yeah, what a hoot it has been watching this whole debacle slowly unfold! Someone should really write a long retrospective on the E-Cat controversy as a case-study in applying rationality to assess claims.

My priors about Andrea Rossi's claims were informed by things such as:

  1. He has been convicted of fraud before. (Strongly negative factor)
  2. The idea of this type of cold fusion has been deemed by most scientists to be far-fetched. (Weakly negative factor. Nobody has claimed that physics is a solved domain, and I'm always open to new ideas...)

From there, I updated on the following evidence:

  1. Rossi received apparent, if lukewarm, endorsements from several professional scientists. (Weakly positive factor. Still didn't mean a whole lot.)
  2. Rossi dragged his feet on doing a clear, transparent, independently-conducted calorimetric test of his device—something that many people were willing to do for him, and which is not rocket science to perform. (Strongly negative factor—strongly pattern-matches with a fraudster).
  3. Rossi claimed to have received independent contracts for licensing his device. First Defkalion in Greece, then Industrial Heat. Rossi also made various claims about NASA and Texas Instruments being involved. When investigated, the claims about the reputable organizations being involved turned out to be exaggerations, and the other partners were either of unknown reputation (Defkalion, which quickly disappeared) or had close ties to Rossi himself. Still no independent validation. (Strongly negative factor).

And now we arrive at the point where even Industrial Heat is breaking ties with Rossi. What a fun show!

Comment by matthew_opitz on Suppose HBD is True · 2016-04-21T22:26:04.511Z · score: 0 (2 votes) · LW · GW

That just pushes the question back one step, though: why are there so few black programmers? Lack of encouragement in school (due to racial assumptions that they would not be any good at this stuff anyways)? Lack of stimulation of curiosity in programming in elementary school due to poor funding for electronics in the classroom that has nothing to do with conscious racism per se? (This would be an environmental factor not having to do with conscious racism, but rather instead having to do with inherited lack of socio-economic capital, living in a poor inner city, etc.) Lack of genetic aptitude for these tasks? HBD could be relevant to how we address this problem. Do we mandate racial-sensitivity training courses, increased federal funding for electronics in inner-city schools, and/or genetic modification? Even if we do all three, which should we devote the most funding towards?

Comment by matthew_opitz on Suppose HBD is True · 2016-04-21T22:19:55.520Z · score: 2 (4 votes) · LW · GW

One argument could be that many social scientists are being led down a blind alley of trying to find environmental causes for all sorts of differences, and are erroneously predisposed to find such causes in their data to a stronger extent than is really the case. This then leads to incorrect conclusions and policy recommendations that will not actually change things for the better, because the policy recommendations end up not addressing the vast majority of the root of the problem (genetics, in this case).

Comment by matthew_opitz on Suppose HBD is True · 2016-04-21T22:09:28.386Z · score: 9 (7 votes) · LW · GW

Estimating a person's capability to do X, Y, or Z (do a job effectively, be a law-abiding citizen, be a consistently productive citizen not dependent on welfare programs, etc.) based on skin color or geographical origin of their ancestry is a heuristic.

HBD argues that it is a relatively accurate heuristic. The anti-HBD crowd argues that it is an inaccurate heuristic.

OrphanWilde seems to be arguing that, even if HBD is correct that these heuristics are relatively accurate, we don't need heuristics like this in the first place because there are even better heuristics or more direct measurements of a person's individual capability to do X, Y, or Z already out there. (IQ, interviews, etc.)

The HBD advocates here seem to be arguing that we do, in fact, need group-based heuristics because individual heuristics:
1. Are more costly in terms of time, and are thus just not feasible for many applications.
2. Don't really exist for certain measures, such as in estimating "probable future law-abidingness" or "probable future welfare dependency".
3. Have political restrictions on being able to apply them. (For example, we COULD use formal IQ tests on job applicants, but such things have been made illegal precisely because they seem to paint a higher proportion of blacks in a bad light).

Perhaps OrphanWilde might like to respond to these objections. Here's how I would respond:
1. The costliness of individual judgment is warranted, because using group-based heuristics has politically-toxic spillovers and might miss out on important outliers (by settling on local optima at the expense of global optima). We are not trying to screen out defective widgets from an assembly line (in which case a quick but "lossy" sorting heuristic might be justified). We are trying to sort people. The cost of mis-sorting even a small percentage of individuals (for example, by heuristically rejecting a black man who happens, unbeknownst to us without doing the individual evaluation, to have an IQ of 150 from a certain job) outweighs the cost-savings of using quick group-based heuristics: both because it will inevitably politically anger the black community, with all sorts of politically toxic spillovers, and because we are missing out on a disproportionate goldmine of economic potential by missing these outliers.
2. If individual tests for probable law-abidingness or probable economic productivity don't currently exist, then maybe we should try to develop them! Is that so impossible? Personally, I find it a bit unbelievable that the U.S. does not currently have tests for certain agreed-upon foundational cultural values as part of its immigration screening process. For example, if applicants had to respond to questions such as, "Explain why impartial fairness towards strangers rather than favoritism towards friends and relatives is an essential aspect of national citizenship and professional behavior" or "Explain the advantages of dis-establishment of religion from the political and legal affairs of the state," then I would sleep much more easily at night about our immigration policy.
3. Well, perhaps we should campaign to overturn the political restrictions on individual merit-based tests by pointing out that the only de-facto alternative people will have is to use group-based tests of some sort or another (whether employers or other institutions openly admit to using such group-based heuristics or not, they will find a way to do so), and that group-based heuristics will actually hurt disadvantaged groups even more. In other words, unless you want all appointments in society to be decided by random casting of lots, people need some sort of criteria for judging others. Given this, it would be better to have individual-based tests rather than group-based tests. Even if the individual-based tests end up showing "disparate impact" on certain groups, it will still be less than if we used group-based tests.

(Edit: formatting improved upon request).

Comment by matthew_opitz on Black box knowledge · 2016-03-05T00:08:55.735Z · score: 1 (1 votes) · LW · GW

Some of your black box examples seem unproblematic. I agree that all you need to trust that a toaster will toast bread is an induction from repeated observation that bread goes in and toast comes out.

(Although, if the toaster is truly a black box about which we know absolutely NOTHING, then how can we induce that the toaster will not suddenly start shooting out popsicles or little green leprechauns when the year 2017 arrives? In reality, a toaster is nothing close to a black box. It is more like a gray box. Even if you think you know nothing about how a toaster works, you really do know quite a bit about how a toaster works by virtue of being a reasonably intelligent adult who understands a little bit about general physics--enough to know that a toaster is never going to start shooting out leprechauns. In fact, I would wager that there are very few true "black boxes" in the world--but rather, many gray boxes of varying shades of gray).

However, the tax accountant and the car mechanic seem to be even more problematic as examples of black boxes because there is intelligent agency behind them--agency that can analyze YOUR source code, determine the extent to which you think those things are a black box, and adjust their output accordingly. For example, how do you know that your car will be fixed if you bring it to the mechanic? If the mechanic knows that you consider automotive repair to be a complete black box, the mechanic could have an incentive to purposefully screw up the alignment or the transmission or something that would necessitate more repairs in the future, and you would have no way of telling where those problems came from. Or, the car mechanic could just lie about how much the repairs would cost, and how would you know any better? Ditto with the tax accountant.

The tax accountant and the car mechanic are a bit like AIs...except AIs would presumably be much more capable at scanning our source code and taking advantage of our ignorance of its black-box nature.

Here's another metaphor: in my mind, the problem of humanity confronting AI is a bit like the problem that a mentally-retarded billionaire would face.

Imagine that you are a mentally-retarded person with the mind of a two-year-old who has suddenly just come into possession of a billion dollars in a society where there is no state or higher authority to regulate or enforce any sort of morality or make sure that things are "fair." How are you going to ensure that your money will be managed in your interest? How can you keep your money from being outright stolen from you?

I would assert that there would be, in fact, no way at all for you to have your money employed in your interest. Consider:

*Do you hire a money manager (a financial advisor, a bank, a CEO...any sort of money manager)? What would keep this money manager from taking all of your money and running away with it? (Remember, there is no higher authority to punish this money manager in this scenario). If you were as smart or smarter than the money manager, you could probably track down this money manager and take your money back. But you are not as smart as the money manager. You are a mentally-retarded person with the mind of a toddler. And in that case where you did happen to be as smart as the money manager, then the money manager would be redundant in the first place. You would just manage your own money.

*Do you try to manage your money on your own? Remember, you have the mind of a two-year-old. The best you can do is stumble around on the floor and say "Goo-goo-gah-gah." What are you going to be able to do with a billion dollars?

Neither solution in this metaphor is satisfactory.

In this metaphor:
*The two-year-old billionaire is humanity.
*The lack of a higher authority symbolizes the absence of a God to punish an AI.
*The money manager is like AI.

If an AI is a black box, then you are screwed. If an AI is not a black box, then what do you need the AI for?

Humans only work as black-boxes (or rather, gray-boxes) because we have an instinctual desire to be altruistic to other humans. We don't take advantage of each other. (And this does not apply equally to all people. Sociopaths and tribalistic people would happily take advantage of strangers. And I would allege that a world civilization made up of entirely these types of people would be deeply dysfunctional).

So, here's how we might keep an AI from becoming a total black-box, while still allowing it to do useful work:

Let it run for a minute in a room unconnected to the Internet. Afterwards, hire a hundred million programmers to trace out exactly what the AI was doing in that minute by looking at a readout of the most base-level code of the AI.

To any one of these programmers, the rest of the AI that does not happen to be that programmer's special area of expertise will seem like a black box. But, through communication, humanity could pool their specialized investigations into each part of the AIs running code and sketch out an overall picture of whether its computations were on a friendly trajectory or not.

Comment by matthew_opitz on [paper] [link] Defining human values for value learners · 2016-03-04T19:49:17.543Z · score: 3 (3 votes) · LW · GW

I don't want to speak for the original author, but I imagine that presumably the AI would take into account that the Victorian society's culture was changing based on its interactions with the AI, and that the AI would try to safeguard the new, updated values...until such a time as those new values became obsolete as well.

In other words, it sounds like under this scheme the AI's conception of human values would not be hardcoded. Instead, it would observe our affect to see what sorts of new activities had become terminal goals in their own right (activities that make us intrinsically happy to participate in), and the AI would adapt to this change in human culture to facilitate the achievement of those new activities.

That said, I'm still unsure about how one could guarantee that the AI could not hack its own "human affect detector" to make it very easy for itself by forcing smiles on everyone's face under torture and defining torture as the preferred human activity.

Comment by matthew_opitz on [paper] [link] Defining human values for value learners · 2016-03-03T19:27:05.555Z · score: 1 (1 votes) · LW · GW

Okay, so let's use some concrete examples to see if I understand this abstract correctly.

You say that the chain of causation is from fitness (natural selection) ---> outcomes ---> activities

So, for example: reproduction ---> sex ---> flirting/dancing/tattooing/money/bodybuilding.

Natural selection programs us to have a terminal goal of reproduction. HOWEVER, it would be a bad idea for an AI to conclude, "OK, humans want reproduction? I'll give them reproduction. I'll help the humans reproduce 10 quadrillion people. The more reproduction, the better, right?"

The AI would need to look ahead and see, "OK, the programmed goal of reproduction has caused humans to prefer a specific outcome, sex, which tended to lead to reproduction in the original (ancestral) programming environment, but might no longer do so. Humans have, in other words, come to cherish sex as a terminal goal in its own right through their affective responses to its reward payoff. So, let's make sure that humans can have as much sex as possible, regardless of whether it will really lead to more reproduction. That will make humans happy, right?"

But then the AI would need to look ahead one step further and see, "OK, the preferred outcome of sex has, in turn, caused humans to enjoy, for their own sake, specific activities that, in the experience and learning of particular humans in their singular lifetimes (we are no longer talking about instinctual programming here, but rather culture), have tended, in their particular circumstances, to lead to this preferred outcome of sex. In one culture, humans found that flirting tended to lead to sex, and so they formed a positive affective connotation with flirting and came to view flirting as a terminal goal in its own right. In another culture, dancing appeared to be the key to sex, and so dancing became a terminal goal in that culture. In other cultures, bodybuilding, accumulation of money, etc. seemed to lead to sex, and so humans became attached to those activities for their own sake, even beyond the extent to which those activities continued to lead to more sex. So really, the way to make these humans happy would be to pay attention to their particular cultures and psychologies and see which activities they have come to develop a positive affective bond with...because THESE activities have become the humans' new conscious terminal goals. So we AI robots should work hard to make it easy for the humans to engage in as much flirting/dancing/bodybuilding/money accumulation/etc. as possible."

Would this be an accurate example of what you are talking about?

Comment by matthew_opitz on Is Spirituality Irrational? · 2016-02-11T14:22:55.062Z · score: 4 (4 votes) · LW · GW

Even if I am not setting out trying to disparage a spiritual person's spiritual experiences—even if I am trying to be as charitable to them as possible—it is difficult to see how I could have a conversation with them about information (their own subjective spiritual experiences) that is not publicly accessible to me. It boils down to them telling me about their private experience and me replying, "Cool story bro." Once again, not because I WANT to sound flippant or dismissive...but what else can I say about it? I'm glad they had their experience.

Usually spiritual people start with their story, and then they proceed with a conclusion that, "Because I had this experience, you should believe X and do Y." I don't see how that follows, especially when the story sounds implausible.

It is a little different if someone said to me, "I saw a rabid dog across the street, so don't go over there or else you gonna get bit." A rabid dog sounds plausible based on what I have previously concluded about the world. I could go and check for myself that the dog is there (it is, in theory, publicly-accessible information), or I could take the person's word for it if they seem like a trustworthy person with a good handle on reality. But most spiritual beliefs are much more implausible than this. Naturally, I would want to check for myself. But spiritual people are usually not able to explain to me how I could check for myself. "You just gotta believe" is not an operation that I can execute. It's not that I don't want to believe. I might very well want to believe, especially if their story sounds convenient or fortunate to me (such as, "We all go to heaven when we die.") But I really don't know how to just "believe" something.

Maybe some children are raised with the skill of "just believe this..." (For example: https://youtu.be/KPFUr1Nnk4k ) but for me (and my Unitarian background), it DOES NOT COMPUTE.

The situation is different with drug-induced experiences. In those cases, someone can tell me, "I had this profound experience. As of now, it is known only to me, but it is in theory publicly-accessible to you too IF you follow this well-defined set of steps: measure out 3 grams of psilocybin mushrooms...etc." Then I could have the experience, or at least AN experience, and we could move beyond just "Cool story bro." If my experience ended up being very similar to theirs...well, then I would naturally start to search for explanations to explain the correlation. Maybe their report of their experience before I had mine primed my brain for having a similar experience. For me to consider my experience to be evidence in favor of some supernatural reality, it would have to be very similar to theirs AND independently-arrived at. So, if they had an experience, wrote down a description of it (maybe with winning lottery numbers communicated to them by Poseidon), and then I had the exact same experience as them after following their instructions, but without having heard anything specific about their experience beforehand (and especially if I had been given the same winning lottery numbers that I independently wrote down immediately afterwards before talking to my friend), then WOW, that would be outstanding evidence in favor of some underlying spiritual reality of practical use.

If a spiritual person could tell me, "If you kneel and face Mecca 5 times a day and cry out 'Allahu Akbar!' you will achieve great contentment in life," that is an operational instruction that I understand and could execute. Now, I'm pretty skeptical that it would work, and in order for me to put up with the trivial inconvenience and social embarrassment involved with actually trying it, I would have to be pretty desperate for a feeling of contentment in my life...but in theory it is something that I could try.

But just telling me, "Pray to God with ALL YOUR HEART and you will find the strength to do X, Y, Z...", that's still too fuzzy for me.
Me: "Am I praying will all my heart?"
Friend: "You will KNOW when you are praying with all your heart."
Me: "Okay, I must not be praying with all my heart. How do I pray with all my heart?"
Friend: "Think of the thing in the world that you want or cherish the most. Think of that intense yearning. Apply that feeling to your desire to connect with God."
Me: "Okay...hmmmm...I'm sorry, I'm having trouble applying that feeling to something that just feels silly, I can't help it."
Friend: "Stop thinking it is silly, you have to really try and believe!"
Me: "I know, I'm trying, but it's just not working."

It's not just prayer. I have the same problems with meditation. Maybe it is just me, personally, but I don't find most recipes for making people's private spiritual experiences publicly-accessible to me to be very specific or comprehensible or operational. Is this typical-mind fallacy, or do others feel the same way?

Note that I'm not demanding that the experiences themselves be easily describable. I understand that the experiences themselves might not be the sorts of things that can be put into words. For example, people's mushroom experiences might be ecstatic and ineffable. But at least they could give me a clear recipe of how to get there so I could see for myself.

What's impressive is, the mushroom recipe would not require FAITH WITH ALL MY HEART. I could be thinking, going into it, "Man, this is all a bunch of hippy-dippy BS. I ain't gonna feel a thing." And then, BAM! That's impressive.

Comment by matthew_opitz on Is Spirituality Irrational? · 2016-02-11T13:38:30.117Z · score: 0 (0 votes) · LW · GW

I think that what Viliam was implying was, "Don't Spiritualize and Decide." Don't get drunk on the holy spirit and then make important decisions about what you believe or how you should live your life. I'm pretty sure Viliam was comparing spiritual experiences to alcohol. They might be fun and euphoric, and they might seem meaningful, but do they give good, reliable information about the world that you can use in repeated fashion for positive outcomes?

Comment by matthew_opitz on What's wrong with this picture? · 2016-01-29T20:24:45.654Z · score: 0 (0 votes) · LW · GW

An analogous question that I encountered recently when buying a Powerball lottery ticket just for the heck of it (also because its jackpot was $1.5 billion and the expected value of buying a ticket was actually approaching a positive net reward):

I was in a rush to get somewhere when I was buying the ticket, so I thought, "Instead of trying to pick meaningful numbers, why not just pick something like 1-1-1-1-1-1? Why would that drawing be strictly more improbable than any other random permutation of 6 numbers from 1 to 60, such as 5-23-23-16-37-2?" But then the store clerk told me that I could just let the computer pick the numbers on my ticket, so I said "OK."

Picking 1-1-1-1-1-1 SEEMS like you are screwing yourself over and requiring an even more improbable outcome to take place in order to win...but are you REALLY? I don't see how....

I'm sure if 1-1-1-1-1-1 were actually drawn, there would be investigations about whether that drawing was rigged. And if I won with ANY ticket (such as 5-23-23-16-37-2), I would start to wonder whether I was living in a simulation centered around my life experience. But aren't these intuitions going astray? Aren't the probabilities all the same?
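
For what it's worth, here is the back-of-the-envelope check, treating the drawing the way I was imagining it: 6 numbers, each picked independently and uniformly from 1 to 60 (the real Powerball rules differ, but that doesn't change the point):

```python
# Sanity check of the intuition, under the simplified model described above:
# 6 numbers, each drawn independently and uniformly from 1 to 60.
# (The real Powerball drawing works differently, but the point is the same.)

p_any_specific_sequence = (1 / 60) ** 6
print(p_any_specific_sequence)  # ~2.14e-11, identical for 1-1-1-1-1-1 and 5-23-23-16-37-2
```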

Comment by matthew_opitz on Your transhuman copy is of questionable value to your meat self. · 2016-01-13T02:17:07.495Z · score: 0 (0 votes) · LW · GW

Actually, you've kind of made me want to get my own hemispherectomy and then a re-merging just so that I can experimentally see which side's experiences I experience. I bet you would experience both (but not remember experiencing the other side while you were in the middle of it), and then after the re-merging, you would remember both experiences and they would seem a bit like two different dreams you had.

Comment by matthew_opitz on Your transhuman copy is of questionable value to your meat self. · 2016-01-13T02:11:44.462Z · score: 0 (0 votes) · LW · GW

So, what will that feel like? I have a hard time imagining what it will be like to experience two bodies at once. Can you describe how that will work?

Comment by matthew_opitz on Your transhuman copy is of questionable value to your meat self. · 2016-01-10T20:14:33.167Z · score: 1 (1 votes) · LW · GW

I don't really understand the point of view of people like torekp who would say, "No, they're just different interpretations of 'you'."

I don't know about you, but I'm not accustomed to being able to change my interpretation of who I am to such an extent that I can change what sensory stimuli I experience.

I can't just say to myself, "I identify with Barack Obama's identity" and expect to start experiencing the sensory stimuli that he is experiencing.

Likewise, I don't expect to be able to say to myself, "I identify with my clone" and expect to start experiencing the sensory stimuli that the clone is experiencing.

I don't seem to get a choice in the matter. If I enter the teleporter machine, I can WANT to identify with my clone that will be reconstructed on Mars all I want, but I don't expect that I will experience stepping out of the teleporter on Mars.

Comment by matthew_opitz on Your transhuman copy is of questionable value to your meat self. · 2016-01-09T00:09:44.506Z · score: 2 (2 votes) · LW · GW

I'm with Usul on this whole topic.

Allow me to pose a different thought experiment that might elucidate things a bit.

Imagine that you visit a research lab where they put you under deep anesthesia. This anesthesia will not produce any dreams, just blank time. (Ordinarily, this would seem like one of those "blink and you're awake again" types of experiences).

In this case, while you are unconscious, the scientists make a perfect clone of you with a perfect clone of your brain. They put that clone in an identical-looking room somewhere else in the facility.

The scientists also alter your original brain just ever-so-slightly by deleting a few memories. Your original brain is altered no more than it ordinarily is when, let's say, it has a slight alcohol hangover. But it is altered more than the clone, which has a perfect copy of your brain from before the operation.

Which body do you expect to wake up in the next morning? My intuition: the original with the slightly impaired memories—despite the fact that the pattern theory of identity would expect that one would wake up as the clone, would it not?

Of course, both will believe they are the original, and by all appearances it will be hard for outsiders who were not aware of the room layout of the building to figure out which one was the original. I don't care about any of those questions for the purpose of this thought-experiment.

It seems to me that there can be five possibilities as to what I experience the next morning:

  1. The body of the (ever-so-slightly) impaired original.
  2. The body of the perfect clone.
  3. Neither body (non-experience).
  4. Neither body (reincarnation in a different body, or in an entirely different organism with an entirely different sort of consciousness, with no memory or trace of the previous experiences).
  5. Somehow, both bodies at once.

So if you explained this setup to me before this whole operation and offered to pay either the original or the clone a million dollars after the experience was finished, my pre-operation self would very much prefer that the original get paid that million dollars because that's the body I expect to wake up in after the operation.

Why? Well, we will wake up in our original bodies after dreaming or having a hangover that changes our brains a bit, no?

Are you telling me that, next time I go to sleep, if there happens to be a configuration of matter, a Boltzmann brain somewhere, that happens to pattern-match my pre-sleep brain better than the brain that my original body ends up with after the night, that my awareness will wake up in the Boltzmann brain, and THAT is what I will experience? Ha!

I have a very strong feeling that this has not happened ever before. So that means one of two things:

  1. Boltzmann brains or copies of me somewhere else don't exist. The brain in my bedroom the next morning is always the closest pattern-match to the brain in my bed the previous night, so that's what my awareness adheres to all the time.
  2. My feelings are fundamentally misleading (how so?)

Just think: if the pattern theory of identity is true, then here is what I logically expect to happen when I die:

My awareness will jump to the next-as-good clone of my original mental pattern. Whoever had the most similar memories to what my original brain had before it died, that's whose body and brain and memories I will experience after the death of my original brain.

In that case: no cryonics needed! (As long as you are prepared to endure the world's worst hangover where you lose all memories of your previous life, gain new memories, and basically think that you have been someone else all along. But hey: assuming that this new person has had a pretty good life up until now, I would say that this still beats non-existence!)

This also implies that, if you are a, let's say, Jewish concentration camp prisoner who dies, the closest pattern-match to your mind the next moment that you will experience will be...probably another Jewish concentration camp prisoner. And on and on and on! Yikes!

Comment by matthew_opitz on Is Belief in Belief a Useful Concept? · 2015-04-09T20:36:58.549Z · score: 0 (0 votes) · LW · GW

This is so true! And if you buy into Julian Jaynes's "Bicameral Mind" theory, then ancient religious commandments from god (which were in actuality lessons from parents/chiefs/priests ingrained in one's psyche since childhood but falsely attributed to unseen spiritual forces) literally WERE heard in people's minds like a catchy music tune played over and over.

Comment by matthew_opitz on Can we decrease the risk of worse-than-death outcomes following brain preservation? · 2015-02-22T21:09:16.904Z · score: 2 (2 votes) · LW · GW

I'm guessing the author meant that the ancestral environment was one that many of us now would consider "worse than death" considering our higher standards of expectation for standard of living, whereas our ancestors were just perfectly happy to live in cold caves and die from unknown diseases and whatnot.

I guess the question is, how much higher are our expectations now, really? And how much better do we actually have it now?

Some things, like material comfort and feelings of material security, have obviously gotten better, but others, such as positional social status anxiety and lack of warm social conviviality, have arguably gotten worse.

Comment by matthew_opitz on 2014 Less Wrong Census/Survey · 2014-10-29T17:34:52.777Z · score: 30 (30 votes) · LW · GW

I took the survey.

The only part I wasn't sure about how to answer was the P(God) and P(supernatural) part. I put a very low probability on P(supernatural) because it sounded like it was talking about supernatural things happening "since the beginning of the universe" which I took as meaning "after the big bang." But for P(God) I put 50% because, hey, who knows, maybe there was a clockmaker God who set up the big bang?

If one were to interpret these survey responses in a certain way, though, they could seem incoherent, because one might think that P(supernatural) (which includes God in addition to many other possibilities) would have to be at least as high as the more specific P(God). But like I said, I took P(supernatural) as referring only to stuff after the big bang, whereas I took P(God) as including any time, even before the big bang.
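For what it's worth, the ordering worry only bites if one event is read as containing the other. Here is a one-line statement of the point, assuming the nested reading under which the two answers would look incoherent:

```latex
\{\text{God exists}\} \subseteq \{\text{the supernatural exists}\}
\;\Longrightarrow\;
P(\text{God}) \le P(\text{supernatural})
```

Under the reading above, where P(supernatural) covers only post-big-bang events while P(God) allows a pre-big-bang clockmaker, the events are not nested, so no such inequality is forced.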

Comment by matthew_opitz on What supplements do you take, if any? · 2014-10-25T16:43:50.001Z · score: 1 (1 votes) · LW · GW

Vitamin D and omega-3 fish oil daily.

Melatonin when needed (a couple of times a month).

Evening primrose oil occasionally.

Comment by matthew_opitz on What false beliefs have you held and why were you wrong? · 2014-10-18T14:08:12.965Z · score: 2 (2 votes) · LW · GW

Same thing here from around 2003 to 2006. I did not see the oil shale boom coming. I found plausible all of the peak oil pundits who argued that oil shale would barely, if at all, have an energy return on energy invested (EROEI) greater than 1, and thus it wouldn't matter how high the price went - the costs would keep pace with the revenue, and it would not be economical to develop it. Of course, those pundits turned out to be wrong.

I remember the day when I really started to doubt peak oil. It was when I saw a TOD article on Toe-to-Heel-Air-Injection for heavy oil and I thought, "By golly, maybe they'll be able to use all of that heavy oil after all...." If I had had any money at the time, rather than being a high school student, I would have put money on heavy oil and oil shale from that point on, and I'd probably be doing pretty well by now...

Comment by matthew_opitz on Questions on Theism · 2014-10-09T01:43:07.305Z · score: 0 (2 votes) · LW · GW

If god were perfectly understandable, if his miracles were repeatable, and if you could devise a perfect algorithm to elicit miracles from god, then how would "god" be distinct from "the natural world"? Wouldn't it be more parsimonious to say, "We have gravity, which says that if you do X then Y happens; we have electromagnetism, which says that if you do X then Y happens...and then there's this 'god' rule to the universe, which says that if you do X then Y happens"?

Of course, if you approach the Christian god in this way, Christians will immediately object and say that "god does not like to be tested," as if they have a priori decided that they don't want to think of themselves as living in a predictable universe. Strange preference, that....

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-08T22:04:24.868Z · score: -1 (5 votes) · LW · GW

Wolf brains produce way more adrenaline than dog brains on a regular basis. That is one reason why wolves are likely to be far less predictably docile, even if you raise one from a pup onward. That is why you still have to be careful around a tame wolf.

Domestication is different than taming. Taming involves conditioning an animal's behavior; domestication involves breeding actual genetic/physiological changes.

Do we have evidence that whites and non-whites have different average levels of certain neurotransmitters? Are there actual gross physiological differences between white and non-white brains? If so, then there is more than just social construction at work. If not, then social construction is all there is.

I don't think all of this is just a semantic game.

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-08T21:08:12.474Z · score: 2 (2 votes) · LW · GW

And yet fertility is negatively correlated with income.

I imagine that, if I were making more money, I would be working more hours, which would mean I would have less time for parenting, which would make parenting even more unattractive. (This is under the assumption, which might be mistaken as you point out, that good parenting requires lots of money and time).

So basically, Westerners have gotten more picky about having children, to the point of insisting on having a lot of free time AND a high income, AND on child-rearing being a more intrinsically interesting activity than the other things they could be doing with that time and money (say, being an unemployed millionaire who trades stocks and plays poker for fun). Time, money, and interest have all become necessary, but not sufficient, conditions.

I think this has to do with the vast increase in the number of fun distractions in modern society. As a farmer in Sub-Saharan Africa, what does one do with one's time? Herd cattle? Why not have kids? They are like little super-intelligent robots that you can help program and develop. How neat! That sort of technology pretty much blows every other entertainment they would have right out of the water. But Westerners? They think, "Oh, whoop-de-do, a super-intelligent robot that you can help program and develop...but which you will also be responsible for and which may occasionally be stressful...no thanks, I'm more interested in football/LessWrong/youtube/something that is equally interesting but not as stressful."

> Bingo. Except it's perfectly possible to raise "nice middle-class" kids without micromanagement; your parents' generation did just that.

Nah, my parents helicoptered and micromanaged. But if you want to talk about my parents' parents' generation, then yes. The thing is, they didn't really raise good middle-class kids, in that my father ended up being a roofer and my mother a housewife. Neither graduated college until my mother went back to school after my siblings had gotten out of high school. Not that it hurt them too much in their generation. My father made good money at roofing. Would the money still be as good? I don't know.

> Really, I get the feeling that these days people don't pay much attention to their neighbors, also why do you care what they think?

By "neighbors," I mean social circle, whether or not they geographically border one's property.

> Probably not if you live in a neighborhood without thugs, granted this is becoming harder now that progressives are transporting thugs out of ghettos to other neighborhoods in the name of diversity.

And living in a neighborhood with a good peer group requires money.

> Also in the "old days" the neighbors would look down on someone who divorces or has sex outside of marriage rather than someone who's a non-helicopter parent. Why did this change?

My naive progressive feeling about this is that "ending an unhappy marriage through divorce" and "sex outside of marriage" are seen as producing net good things. Progressives have this idea that divorce is the psychologically "healthier" option, in that it is more honest and builds less resentment. Likewise, progressives tend to have this idea that having sex outside of marriage is a good way to make sure that sexual chemistry is compatible before marrying; plus, it is just fun, and if protection is used and people are careful with each other's feelings, then there are no downsides (and progressives do not see a lack of babies as a downside).

On the other hand, progressives have this idea that being a non-helicopter parent produces net bad things, such as children getting stuck in dysfunctional life situations. Buuuut...I will admit that there are those intriguing studies that suggest that parenting style does not have much of an effect on child outcome, which would be a bombshell to the progressive mindset.

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-07T15:48:52.358Z · score: 4 (4 votes) · LW · GW

The statistics about fertility rates in Nepal corresponding closely to level of education are telling. Education past the age of 12 has to be having some effect. But what is the mechanism?

Jim hypothesizes that there is a subtle indoctrination that begins in school around that age that dissuades women from having children. Perhaps a little bit...but is that all there really is to it?

Let's think about this for a second: let's imagine that it were legal for girls in the U.S. to drop out of school at 13. (I think the current legal age is 16).

What does a 13-year-old girl do in American society if she isn't going to school? What can she usefully do?

She could theoretically get a job. There are probably some jobs that a 13-year-old could be reasonably good at...like coffee house barista. Or maybe just the coffee house barista's helper who buses the tables. How hard are those jobs, really?

But how's a 13-year-old going to get that sort of job when the job market is swarming with over-qualified college graduates who can't get work in their fields of study, who will be at least marginally more effective at those jobs (perhaps in terms of social interactions with the patrons, or ancillary skills they might have picked up in college), and who will also be willing to work for minimum wage?

So a 13-year-old dropout can't reasonably expect to get a job. So, what about marriage and kids? Can a 13-year-old reasonably expect to find a man who is at least vaguely within her age range (<18 years old) who is willing and ABLE to support her and her kids?

I noticed that this Jim guy pins a lot of the blame on Western women not wanting to have kids. Now, do we actually have evidence for this? Do we in fact know that it is not the Western MEN who are hesitant about having to provide for kids?

I myself have a beautiful wife who would make for a great mother, both genetically and in terms of raising kids, but the thought of having kids seems just insane to me right now. Why? I make about $10,000 a year with a MASTER'S DEGREE as a part-time college adjunct instructor and as a K-12 substitute teacher. My wife makes about the same with a BACHELOR'S DEGREE as a part-time nurse's aide in a hospital. Between us, we might scrape together $20,000. Our expenses are about $16,000 a year if we are frugal (we have a very small apartment and only one old car). Not much buffer room. Not much money to save up towards a house or a new car for when the old one breaks down. Don't even talk to me about children.

Now, our luck could change. One of us could land a full-time job with benefits. Realistically, a job where one of us made $25,000 a year would have us jumping for joy. But in the current economy, there are no guarantees. And even if I did get a nice full-time job, I would still not have the confidence in the economy to expect that I would keep it, or something like it, for the next 20 years while my wife and I raised our kids.

It seems to me that the problems are that:

  1. There are way too few well-paying jobs in the economy for the number of over-qualified college graduates available to fill them. This is why I think that the politically-correct catchphrase "Education is the KEY!" is way off track. Our problem is not lack of education. If everyone tomorrow suddenly started doing better in school and went on to higher degrees, the only difference that would make is that we would suddenly have Ph.D.s working at McDonald's or Starbucks. More education does not magically create more jobs or better jobs.
  2. There are also higher cultural expectations on how good of a parent you have to be (at least, if we are talking about the "nice middle-class white" demographic whose low fertility rates the neoreactionaries are so worried about). "Close-parenting" is now the expected norm among this demographic. I get the sense from the stories my parents and grandparents tell that people used to assume that kids kinda "raised themselves." You just told them to go out in the neighborhood and play with other kids, and be home for supper, and you put food on the table, and you occasionally reprimanded them when they misbehaved or did poorly in school. You didn't micromanage their extra-curricular activities, go to all of their extra-curricular activities, research college-preparatory programs, etc. You didn't "helicopter parent." Now, if you don't "helicopter parent," then A. other parents will look down on you, and B. your kid probably will go off track and end up as a street thug in some gang or as a couch potato because the surrounding culture is not as much of a supportive ally. (Now why is that?)

All of this adds up to the fact that it is probably not just women who are wary of having kids, but men too.

If a girl starts having kids at 14, as some neoreactionaries advise, it is NOT going to be in a stable marriage with a nice male provider. And that is not necessarily going to be solely due to any bad choices on the girl's part. Even if the girl only tried to woo nice, decent men, what nice, decent 18-year-olds are going to be willing and ABLE to raise a family in our economy and culture?

A big problem I see is that, in traditional societies, children are a net economic asset, whereas in modern society, children seem like a net economic drain. That, combined with the inability of a person to get a single-breadwinner job at 18, pretty much makes Jim's neoreactionary strategy non-viable, even if a young woman tried to take his advice and execute it conscientiously.

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-05T11:38:52.729Z · score: 2 (2 votes) · LW · GW

How is wireheading trading freedom away if you are quite sure that it will do exactly what you want, and if you have some "abort button"? That sounds like the ultimate power.

Perhaps we are confusing what something looks like from the outside (it looks like the person is obviously immobilized and helpless) vs. what it feels like on the inside (the person gets exactly what they want).

Note that I would be wary about ever wireheading if there were other humans still around whose actions were not sufficiently predictable or constrained. That is because they could potentially try to mess with me while I am in my wireheaded state. I would only go into the wireheaded state if either I was the only human left and I had perfectly automated everything to take care of me while I was being wireheaded, or if there were other humans, but they were all safely under the control of an AI singleton who would keep them from screwing with me while I was being wireheaded.

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-05T01:47:07.876Z · score: 0 (4 votes) · LW · GW

Yes...I think American progressives (what Europe would call "social democrats") share most of the assumptions that I've highlighted in this thread, as do communists. But American progressives aren't as willing to be frank with themselves or others about following those assumptions to their icily logical endpoints. American progressives are more likely to have some conflicting sentimental attachments to religious ideas of objective value, or to ideas of "human rights" as a pseudo-objective value. (I say "pseudo-objective" because, unless they are arguing from religion, the only basis they really have for asserting that such-and-such is an objective "human right" is their own moral intuition, in other words, what makes them feel good or icky, which is back to subjectivism even if they don't realize it. Like I said, they don't always follow their thoughts to their logical conclusion.)

So, American social democrats are not such "full-blooded" progressives as communists are, but their ideas lead in the same direction.

Comment by matthew_opitz on Goal retention discussion with Eliezer · 2014-09-05T01:34:23.654Z · score: 3 (5 votes) · LW · GW

Okay, wow, I don't know if I quite understand any of this, but this part caught my attention:

> The Omohundrian/Yudkowskian argument is not that we can take an arbitrary stupid young AI and it will be smart enough to self-modify in a way that preserves its values, but rather that most AIs that don't self-destruct will eventually end up at a stable fixed-point of coherent consequentialist values. This could easily involve a step where, e.g., an AI that started out with a neural-style delta-rule policy-reinforcement learning algorithm, or an AI that started out as a big soup of self-modifying heuristics, is "taken over" by whatever part of the AI first learns to do consequentialist reasoning about code.

I have sometimes wondered whether the best way to teach an AI a human's utility function might not be to program it into the AI directly (which would require that we figure out what we really want in a precisely defined way, a gargantuan task), but rather to "raise" the AI like a kid, at a stage where the AI would have minimal and restricted ways of interacting with human society (to minimize harm, much as a toddler thankfully does not have the muscles of Arnold Schwarzenegger to use during its temper tantrums), and where we would then "reward" or "punish" the AI for seeming to demonstrate better or worse understanding of our utility function.

It always seemed to me that this strategy had the fatal flaw that we would not be able to tell if the AI was really already superintelligent and was just playing dumb and telling us what we wanted to hear so that we would let it loose, or if the AI really was just learning.

In addition to that fatal flaw, it seems to me that the above quote suggests another fatal flaw to the "raising an AI" strategy—that there would be a limited time window in which the AI's utility function would still be malleable. It would appear that, as soon as part of the AI figures out how to do consequentialist reasoning about code, then its "critical period" in which we could still mould its utility function would be over. Is this the right way of thinking about this, or is this line of thought waaaay too amateurish?

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-05T00:51:57.209Z · score: 1 (1 votes) · LW · GW

I would say that the most full-blooded "progressives" around today would be communists. No, not the Confucian Mandarins in China that try to pass themselves off as "communists" nowadays. I'm talking about communists who were / are at least as vaguely connected to the actual writings of Marx and Engels as the Soviet communists were. They are the ones who wanted / want to make "heaven on Earth." They are the ones who had / have the most supreme confidence in humankind's ability to eventually "master nature" in principle. They are the ones who had / have the most confidence in their designs to re-engineer human society and "lift the world." They are the ones neoreactionaries truly loathe.

As much as neoreactionaries wail about the decline of Western Civilization now, imagine what they would be like if the Soviet Union had won the Cold War...if it had subverted all Western governments and the U.S. were now run by a communist Politburo. I think neoreactionaries' heads would explode.

That said, the number of real fire-breathing communists in the West nowadays is minuscule, so that is probably why neoreactionaries do not frame them as their ultimate enemy. Instead, neoreactionaries focus on ideologically combating the social justice types, who usually hail from a slightly less extreme part of the left associated with "democratic" socialism, social democracy, and maybe the left wing of the Democratic Party. They are less extreme "progressives" in that they do not push the whole philosophy of "progressivism" to its most extreme conclusions, but because they are more numerous and more of a threat, they are the ones who get tarred with the label "progressivist," and that is why neoreactionaries talk about "progressivism" rather than "Enlightenment-ism" or "communism," and hence why I have chosen to use the label "progressivism" in this thread.

Edit: Also, one might object that communists talk a great deal about "serving the people" and not being selfish and all that. Surely they would not fit the mold of normative subjectivism ("Whatever I like, I define as 'good.'"). But here again, you are getting confused by our modern-day Confucian Mandarin knock-offs. To a certain extent, even Soviet communism was polluted with all sorts of quasi-Eastern-Orthodox sentiments. If you go back to "Real Communism"(tm) in the writings of Marx and Engels, you find that communism is about finding a collective solution to what is a shared, but essentially individual, problem: the individual worker's alienation from his labors and his feeling of unfreedom. Marx cannot be easily separated from his contemporaries Pierre-Joseph Proudhon (anarchist) and Max Stirner (egoist). Although Marx disagreed with them at length, his idea of communism was definitely influenced by them and other Enlightenment thinkers.

At some point along the way, communists mixed up the ultimate goal (individual liberation from unfreedom and alienation) with proximate means like "serve the people" or "die Partei hat immer Recht!" ("The Party is always right!"). (And these were poor proximate means at that, to judge by the fact that they did not bring society one inch closer to communism.)

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-05T00:29:15.926Z · score: 3 (3 votes) · LW · GW

Yes, I've realized that neoreactionaries use the term "progressive" to basically mean "post-Enlightenment thought" in general. And that is the way I am using the term in this thread.

Edit: Except there is that tricky problem that neoreactionaries trace the origins of "progressivism" and "the Cathedral" back even farther to "ultra-Calvinism" and the Protestant Reformation. So I guess "progressivism" is post-Reformation thought, which would include Enlightenment thought and New Deal liberalism as further signposts along that road?

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-05T00:22:57.686Z · score: 7 (7 votes) · LW · GW

Interesting dichotomy. Yes, I think you may be on to something here.

> The argument goes roughly that peasants, slaves, battered wives, and so on who accepted their lot in life would mentally adapt and be able to be perfectly happy. Progressivism/liberalism/the Cathedral has either destroyed our capacity to thrive in these arrangements or caused us to dishonestly claim we would hate them.

One way to test this hypothesis would be to locate a place in the world today, or a place and time in history, where the ideas of the "Cathedral" have not penetrated, and give the "oppressed" a chance to state their true opinions in a setting where they know they don't need to censor themselves in front of the master.

For example, if we went back to 1650 in Virginia (surely before any abolitionist sentiment or Cathedralization of that society's discourse...) and found a secret diary of a slave that said, "Oh lawd, I sho' love slavin' fo' da massah evryday," then that would support the neoreactionary hypothesis. On the other hand, many discoveries of secret slave diaries in that context saying, "Bein' slaves is awful bad" would suggest the opposite.

Although I can't seem to find any citations for this at the moment, I do believe that I have run across at least one such example of a slave praising slavery in my time spent looking at primary sources on American antebellum slavery...but, if I recall, it might have been from an ex-slave writing just after the Civil War about "Dem was da good times befo' da war," and the statement might have been given for ulterior reasons, with a mind to who the audience would be (possibly ex-slavemasters whom the ex-slave now served as a sharecropper...I can't remember the context).

To be sure, the vast, vast majority of slave sources that I have read all seem to indicate that slaves hated slavery and tried to escape at any opportunity...but maybe that was just the Cathedral fooling them...

Unfortunately, writing was an elite skill throughout much of history, and the honest opinions of the oppressed were not often recorded....

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-04T22:56:15.723Z · score: 1 (1 votes) · LW · GW

> If humanity is threatened with dysgenic decline, perhaps a democratic world government organizes a eugenics program.
>
> Few mainstream progressivists would be OK with that.

That is because they do not currently see dysgenic decline as a problem. If it ain't broke, don't fix it. But suppose they ever became convinced that it was a serious problem: that the only people willing to voluntarily restrain their reproduction were the smart ones, and that the Earth was getting re-populated only by the less-smart ones on average, to the extent that it threatened the very maintenance of civilization. Imagine the world had indeed turned into a carbon-copy of the world in the movie "Idiocracy" and no progressives could deny it any more. Well, what would progressives do? Agree to the social Darwinism that the neoreactionaries offer? No way. Plug their fingers in their ears and pretend the problem didn't exist? Not if the problem were self-evidently bad enough. Depend on voluntary initiatives? Then you are right back to the problem. The only way I could see of seriously addressing the problem while remaining true to progressivist principles would be a global eugenics program overseen by a democratic world government. That is the logical endpoint of progressivist principles when applied to this problem. And you know...me personally, I would be fine with such a eugenics program.

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-04T22:46:15.251Z · score: 10 (10 votes) · LW · GW

> Unless I haven't found where to look yet, the literature on this period seems to lack good expositions which lay out the case in defense of the traditional social system that the philosophes mocked and rejected. Jonathan Israel references now obscure books written by the philosophes' contemporaries which respond to the Enlightenment's propaganda with anti-philosophie, but because these men, mostly Catholic clergymen and theologians, allegedly lost the historical argument to the philosophes, lots of luck finding accessible versions of that literature now, and in English translation.

Your comment has brought up a possibility that had never occurred to me before: perhaps one of the weaknesses of the anti-philosophes is that they felt obliged to defend their particular brand of traditionalism (Christian traditionalism) and therefore didn't have the cognizance to give the best general defense of traditionalism as such. Basically, the Enlightenment thinkers got to strawman traditionalism as Christian traditionalism, whereas in the least convenient possible world they would have had to argue against the 18th-century equivalents of our neoreactionaries, who (even if you don't totally buy into their arguments) would, you have to admit, have made for more formidable intellectual opponents than...a Church that was shot through with a recent history of internal divisions (Protestant Reformation, religious wars) and corruption (selling of indulgences, corrupt popes, etc.).

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-04T22:36:11.498Z · score: 1 (1 votes) · LW · GW

Yes, the common thread is the Enlightenment. See my response to Lumifer regarding who the "progressives" I am talking about are. They are not necessarily the people in America who flock to the Democratic Party. I don't think neoreactionaries are just complaining about Democrats when they go after "progressivism." They have a far broader target in mind: the Enlightenment, I guess, more or less.

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-04T22:33:36.934Z · score: 3 (3 votes) · LW · GW

Good to know! It would be interesting for someone to write an "Intellectual History of LessWrong." I know there was this: http://slatestarcodex.com/2014/03/13/five-years-and-one-week-of-less-wrong/ But, as nice as that summary is, it focuses more on the questions that got solved and the culminating successes, and is less of a balanced "history" that follows every trend, fad, and intellectual dead end (not to imply that neoreaction is necessarily a "dead end," but it doesn't make the list in this account).

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-04T22:04:49.185Z · score: 0 (4 votes) · LW · GW

Good question. I think the best representative of "progressivism" in the sense that I am using it would be Karl Marx. When Marx says at the end of chapter 2 of the Communist Manifesto, "In place of the old bourgeois society, with its classes and class antagonisms, we shall have an association, in which the free development of each is the condition for the free development of all," he is painting a picture of a society in which each individual will have the freedom and ability to pursue the things that he/she subjectively values (insofar as that is compatible with others doing likewise).

The novel thing about Marx is, the idea that this would be a good state of affairs needs no justification beyond itself. Marx just says, "Wouldn't this be nice?" He does not say, "This is the perfection of man that God commands." He does not say, "This is what Natural Law commands." (Of course, Marx did not trust such appeals to universal absolutes, always seeing class motivations lurking underneath such language.) Marx just says, "Doesn't this sound nice? Let's do it."

You can find this idea among other Enlightenment thinkers. I guess the closest synonym to "progressivism" would be "the Enlightenment." For example, Thomas Jefferson's "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the Pursuit of Happiness." Yes, I know that Jefferson is justifying this by appealing to a "Creator," but that's him throwing a bone to the religious of his time to make it more palatable. Did Jefferson really believe that his Deist version of the "Creator" would have any bearing on this stuff? No. Earlier he says that these truths are "self-evident," which I think is Jefferson's real justification. Once again, subjectivism. And I know that "pursuit of happiness" is a last-minute replacement for "property," which sounded too petty and narrow. Even so, the idea that life is about the "pursuit of happiness" is one that has widely caught on all the same, and this I consider to be synonymous with subjective values and the road to wireheading ("happiness" being conceived in the most all-encompassing sense).

If you want another person expressing "progressivist" ideas about normative value being subjective, I would point you to Jean-Paul Sartre or any of the existentialists and their idea that we create our own meaning in life.

Finally, there's the constant message of capitalist advertising to "enjoy yourself" as the ultimate purpose in life, as that brilliant fraud Slavoj Zizek has done the service of pointing out and dissecting.

So, although it is difficult to pin down very many people who fit the ideal type of a "progressive" in all of its facets that I have drawn here, we are all more or less swimming in "progressive" notions insofar as we are swimming in Enlightenment, existentialist, and/or capitalist ideological messages about life having no ultimate meaning beyond one's enjoyment.

Edit: I would add secular humanism as another source of what I am calling "progressivism." For example, take the secular humanist idea of "Just be good for goodness' sake": what can that vapid phrase even mean, beyond "Do good things because they will promote things that you find good in the long run"?

Comment by matthew_opitz on "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · 2014-09-04T18:06:19.109Z · score: 1 (1 votes) · LW · GW

I am not trying to describe the value system of American progressives (whom I would call "social democrats" to use terminology that is consistent with European nomenclature). I am using the word "progressive" in a much broader, more philosophical sense—close to the sense in which neoreactionaries use the term.

I know that this game of redefining terms sounds like a cop-out, but there's not really another word I could use to describe the worldview that I am trying to sketch and compare with neoreaction.

Comment by matthew_opitz on Memory is Everything · 2014-08-28T02:00:51.859Z · score: 1 (1 votes) · LW · GW

Okay...but why would you not treat the narcotized you as a different person, yet treat the memory-erased you in the other scenario as a different person? Isn't that inconsistent? The same thing is being done in both scenarios (your memory is being erased), just by different means.

Comment by matthew_opitz on Memory is Everything · 2014-08-26T14:02:49.831Z · score: 1 (1 votes) · LW · GW

I think if we rephrase the scenario to be slightly more plausible and familiar, it will become clearer to people:

Imagine that some eccentric millionaire approaches you with the following deal: she will give you a million dollars if you agree to go to a dentist and undergo a root canal operation without anesthesia (never mind the fact that you probably don't need a root canal). BUT: you DO get to have a heaping dose of Versed, which, while it won't dull the pain during the operation, will prevent you from remembering anything about it after the fact.

Would you take the million dollars and do the operation? I would!

Now, as to the question of whether the person undergoing the root canal operation is the real me, I would say, YES! I will experience it. Now, is the copy of the pre-operation me that gets restored after the operation also me? I say YES! I will also experience that body.

Ultimately, the deciding factor for me ends up being the fact that the root canal will only take an hour or two of extreme pain, but the million dollars will bring me enjoyment for far longer. The fact that I won't remember the root canal operation does nothing to influence my estimation of how bad the root canal operation will be. The fact that I won't remember the root canal operation only changes my estimation of how pleasant the post-root canal experience will be (because I will know that I won't be haunted by nightmares of root canal pain while I am enjoying my million dollars).

Even though the memory of the root canal operation will cease to exist for me at some point, the experience still factors into my overall calculations of utility. It's just that normal events that we remember factor into our calculations of utility in two discrete terms: how nice/bad they are in the moment + how nice/bad their memory after-effects are. In the case of amnesia, you are just lopping off the right hand side of that sum.
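To make that bookkeeping concrete, here is a minimal sketch with made-up utility numbers (the -100, -20, and +500 are illustrative assumptions, nothing more): the amnesia does not shrink the in-the-moment term, it only zeroes out the memory-after-effects term, and the deal can come out positive either way.

```python
def total_utility(in_the_moment: float, memory_aftereffects: float) -> float:
    """Utility of an event = how it feels while it happens + how its memory feels afterward."""
    return in_the_moment + memory_aftereffects

remembered_root_canal = total_utility(in_the_moment=-100, memory_aftereffects=-20)
amnesic_root_canal = total_utility(in_the_moment=-100, memory_aftereffects=0)  # right-hand term lopped off

million_dollars = 500  # assumed utility of the payout

print(remembered_root_canal + million_dollars)  # 380
print(amnesic_root_canal + million_dollars)     # 400
# The operation itself is exactly as bad either way; only the aftermath improves.
```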

Comment by matthew_opitz on Memory is Everything · 2014-08-24T14:38:12.052Z · score: 2 (2 votes) · LW · GW

You seem to be objecting that this is an unfair thought experiment because humans were not designed to contemplate these extreme cases.

But that's precisely the point! These extreme cases might not have been present in our ancestral environment. They might not be present now. But there is a decent chance that they are coming...that, someday, we will be literally offered this choice or something analogous to it by a superintelligent AI who, even if friendly, honestly just wants to ascertain our preferences. Perhaps the superintelligent AI can create a utopia for us, but during the week in which it is being constructed by nano-robots, the Earth's surface will be scoured to bits and resemble a living hell. Would we still want it?

That's why this post poses a good, relevant question. And I see that most people seem to just want to squirm in their seats and complain about the tough question rather than answer it.

Me, I would take option 2, assuming that the billion dollars I would get afterwards would buy more than a week of bliss whose positive magnitude equals or exceeds the magnitude of the suffering I would experience during that horrible week.

Plus, no matter how bad that first week of torture is, I will know in the back of my head during all of it that I can look forward to a billion dollars at the end. Now, if part of the torture involves temporarily deleting my memory of having made the deal and making me confused about why I am being tortured and how long it will last (possibly forever), that would make me think a bit harder about the deal, but I would still take option 2.

Comment by matthew_opitz on An example of deadly non-general AI · 2014-08-22T20:47:10.635Z · score: 1 (1 votes) · LW · GW

Doesn't a concept such as "mortality" require some general intelligence to understand in the first place?

There's no XML tag that comes with living beings that says whether they are "alive" or "dead" at one moment or another. We think it is a simple binary matter to see whether something is alive or dead, mainly because our brains have modules (modules whose functioning we don't understand and can't yet replicate in computer code) for distinguishing "animate" objects from "inanimate" objects. But doctors know that the dividing line between "alive" and "dead" is much murkier than deciding whether a beam of laser light is closer to 500 nm or 450 nm in wavelength (which is a task that a narrow-intelligence AI could probably figure out). Already the concept of "mortality" is a bit too advanced for any "narrow AI."

It's a bit like if you wanted to design a narrow intelligence to tackle the problem of mercury pollution in freshwater streams, and you came up with the simplest way of phrasing the command, like: "Computer: reduce the number of mercury atoms in the following 3-dimensional GPS domain (the state of Ohio, from 100 feet below ground to 100 feet up in the air, for example), while leaving all other factors untouched."

The computer might respond with something to the effect of, "I cannot accomplish that because any method of reducing the number of mercury atoms in that domain will require re-arranging some other atoms upstream (such as the atoms comprising the coal power plant that is belching out tons of mercury pollution)."

So then you tell the narrow AI, "Okay, try to figure out how to reduce the number of mercury atoms in the domain, and you can modify SOME other atoms upstream, but nothing IMPORTANT." Well, then we are back to the problem of needing a general intelligence to interpret things like the word "important."
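As an aside, here is a toy sketch of that bind (the action list and the numbers are made up purely for illustration; this is not a model of any real AI system): under a literal reading of "while leaving all other factors untouched," every candidate plan fails, because every plan that actually removes mercury has to rearrange something upstream.

```python
# Toy illustration only: made-up actions, made-up numbers.
# Each action lists the changes it would make to the world.
actions = {
    "do_nothing":      {},
    "filter_streams":  {"mercury_in_streams": -800, "other_atoms": -5000},       # filters are built out of other atoms
    "shut_down_plant": {"mercury_in_streams": -900, "power_plant_atoms": -10**6},
}

def satisfies_command(changes):
    """True only if mercury goes down AND literally nothing else changes."""
    reduces_mercury = changes.get("mercury_in_streams", 0) < 0
    touches_nothing_else = all(delta == 0 for key, delta in changes.items()
                               if key != "mercury_in_streams")
    return reduces_mercury and touches_nothing_else

print([name for name, changes in actions.items() if satisfies_command(changes)])
# prints []: no plan both reduces mercury and leaves everything else untouched
```

The only way out is to weaken the constraint to "nothing IMPORTANT," which is exactly where the general-intelligence problem of interpreting "important" comes back in.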

This is why we can't just build an Oracle AI and command it to, "Tell us a cure for X disease, leaving all other things in the world unchanged." And the computer might say, "I can't keep everything else in the world the same and change just that one thing. To make the medical devices that you will need to cure this disease, you are going to have to build a factory to make the medical devices, and you are going to have to employ workers to work in that factory, and you are going to have to produce the food to feed those workers, and you are going to have to transport that food, and you are going to have to divert some gasoline supplies to the transportation of that food, and that is going to change the worldwide price of gasoline by an average of 0.005 cents, which will produce a 0.000006% chance of a revolution in France...." and so on.

So you tell the computer, "Okay, just come up with a plan for curing X disease, and change as little as possible, and if you do need to change other things, try not to change things that are IMPORTANT, that WE HUMANS CARE ABOUT."

And we are right back to the problem of having to be sure that we have successfully encoded all of human morality and values into this Oracle AI.

Comment by matthew_opitz on The Useful Definition of "I" · 2014-06-05T13:56:13.984Z · score: 0 (0 votes) · LW · GW

In the case of non-destructive copying, which copy will I end up experiencing? If it is a 50/50 chance of experiencing either copy, then in cases where the copy would inhabit a more advantageous spatial location than the one I was currently in (such as, if I were stuck on Mars and wanted to go back to Earth), it would be in my interest to copy myself many many times via a Mars-Earth teleporter in order to give myself a good probability that I would end up back on Earth where I wanted to be.

Let's say I valued being back home on Earth more than anything else, and I was willing to split whatever legal property I had back on Earth with 100 other copies of me. Then it would make sense for my original self on Mars to tell the scientists: "Copy me 100 times onto Earth. No more, no less, regardless of whatever I, the copy on Mars, say after this, and regardless of whatever the copies on Earth say after this."

I would end up with a very high probability of experiencing one of those copies back on Earth. Of course, all of the copies on Earth would insist that THEY were the successful case of subjective teleportation and that no further teleportation would be required. But they would always say that, regardless of whether I was really one of those experiencing them. That is why I pre-committed to copying 100 times, even if the first copy reports, "Yay! The teleportation was a success! No need to make the other 99 copies!" Because at that point, there is still a 50% chance that I am still experiencing the copy back on Mars—too high for my tastes.

Likewise, the pre-commitment to copy myself no more than 100 times is important because you have to draw the line somewhere. If I had $100,000 in a bank account back on Earth, I'd like to start out with at least $1,000 of that. If you leave it up to the original Mars copy to decide, then the teleportation copying will go on forever. Even after the 100th copying (by which point I might have already been fortunate to get my subjective experience transferred onto maybe the 55th Earth copy or the 78th Earth copy or the 24th Earth copy or the 3rd Earth copy, who knows?), the copy on Mars will still insist, "No! no! no! The entire experiment has been a huge stroke of bad luck! 100 times I have tried to copy myself, and 100 times the coin has landed on tails, so to speak. We must make some more copies until my subjective experience gets transferred over!" At this point, the other copies would say, "That's just what we would expect you to always say. You will never say that the experiment was a success. Very likely the original Matthew Opitz's subjective experience got transferred over to one of us. Which one, nobody can tell from the outside by any experiment, as we will all claim to be that success. But the odds are in favor of one of us being the one that the original Matthew Opitz is subjectively experiencing right now, which is what he wanted all along when he set up this experiment. Sorry!"

But then, what if tails really had come up 100 times in a row? What if one's subjective experience really was still attached to the Martian copy? Or what if this idea of a 50/50 chance is total bunk in the first place, and subjective experience simply cannot transfer to spatially-separated copies? That would suck.

What if, as the original you on Mars before any of the teleportation copying, you had a choice between using your $100,000 back on Earth to fund a physical rescue mission that would have a 10% chance of success, versus using that $100,000 back on Earth to fund a probe mission that would send a non-destructive teleportation machine to Mars that would make a copy of you back on Earth? If you believe that such an experiment would give you a 50/50 chance of waking up as the Earth copy, then it would make more sense to do that. However, if you believe that such an experiment would give you a 0% chance of waking up as the Earth copy, then it would make more sense just to do the physical rescue mission attempt.
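For what it's worth, here is a minimal sketch of the 50/50-per-copy premise used above (this is the thought experiment's own assumption, not a claim about how subjective experience actually works): if each non-destructive copy is an independent coin flip for where "I" end up, the chance of still being stuck on Mars after n copies falls off as 0.5^n, which is why 100 pre-committed copies would swamp a hypothetical 10%-success rescue mission, while under the rival 0%-chance premise the teleporter never helps at all.

```python
def p_still_on_mars(n_copies: int, p_stay_per_copy: float = 0.5) -> float:
    """Chance of never 'waking up' as an Earth copy after n independent copy events."""
    return p_stay_per_copy ** n_copies

for n in (1, 7, 100):
    print(n, p_still_on_mars(n))
# 1 -> 0.5
# 7 -> 0.0078125
# 100 -> ~7.9e-31  (compare with a 0.9 chance of staying stuck under the 10% rescue plan)
```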

These questions really do have practical significance. They are not just sophistry.

Comment by matthew_opitz on The Useful Definition of "I" · 2014-06-04T16:41:31.819Z · score: 2 (2 votes) · LW · GW

For the people for whom it does seem to make sense to identify with copies of themselves, do those people come to that conclusion because they anticipate being able to experience the input going into all of those copies somehow? Or is there some other reason that they use?