Open Thread June 2010, Part 3

post by Kevin · 2010-06-14T06:14:36.760Z · LW · GW · Legacy · 627 comments

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

The thrilling conclusion of what is likely to be an inaccurately named trilogy of June Open Threads.

 

627 comments

Comments sorted by top scores.

comment by NancyLebovitz · 2010-06-15T12:25:02.132Z · LW(p) · GW(p)

How to Keep Someone with You Forever.

This is a description of "sick systems"-- jobs and relationships that destructively take over people's lives.

I'm posting it here partly because it may be of use-- systems like that are fairly common and can take a while to recognize-- and partly because it leads to some general questions.

One of the marks of a sick system is that the people running it convince the victims that they (the victims) are both indispensable and incompetent-- and it can take a very long time to recognize the contradiction. It's plausible that the crises, lack of sleep, and frequent interruptions are enough to make people not think clearly about what's being done to them, but is there any more to it than that?

One of the commenters to the essay suggests that people are vulnerable to sick systems because raising babies and small children is a lot like being in a sick system. This is somewhat plausible, but I suspect that a large part of the stress is induced by modern methods of raising small children-- the parents are unlikely to have a substantial network of helpers, they aren't sharing a bed with the baby (leading to more serious sleep deprivation), and there's a belief that raising children is almost impossible to do well enough.

Also, it's interesting that people keep spontaneously inventing sick systems. It isn't as though there's a manual. I'm guessing that one of the drivers is discomfort at seeing the victims feeling good and/or capable of independent choice, so that there are short-run rewards for the victimizers when they pile the stress on.

On the other hand, there's a commenter who reports being treated better by her family after she disconnected from the craziness.

Replies from: Eneasz
comment by Eneasz · 2010-06-16T05:38:21.011Z · LW(p) · GW(p)

Interesting. I suspect that sick systems are actually highly competitively fit, and while people who opt out of them may be happier, those people will propagate themselves less, and therefore will be overwhelmed by Azathothian forces.

Is there any way to combat Azathoth aside from forming a singleton?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-17T11:13:45.133Z · LW(p) · GW(p)

Why do you think sick systems are highly competitively fit? They seem to get a lot of work out of people, but also waste a great deal of it.

If your hypothesis is that sick systems must be competitively fit because there are a great many of them, I think stronger evidence is needed.

Replies from: Eneasz
comment by Eneasz · 2010-06-18T05:16:18.779Z · LW(p) · GW(p)

As long as the system extracts and uses more work than its equivalent healthy system - after wastage - it will outperform it. It doesn't matter if the system burns through employees every few years; there are plenty of other employees to burn up.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-18T10:38:07.694Z · LW(p) · GW(p)

I would think sick systems have less good judgment than healthy systems-- they don't just burn up employees; management is also less likely to get information about any mistakes it's making.

On the other hand, sick systems do at least persist for quite a while. I'm guessing that they coast on the conscientiousness and other virtues of the employees. It's conceivable that some fraction of the excess work isn't wasted.

comment by xamdam · 2010-06-16T14:13:31.298Z · LW(p) · GW(p)

Message from Warren Buffett to other rich Americans

http://money.cnn.com/2010/06/15/news/newsmakers/Warren_Buffett_Pledge_Letter.fortune/index.htm?postversion=2010061608

I find super-rich people's level of rationality specifically interesting because, unless they are heirs or entertainers, it takes quite a bit of instrumental rationality to 'get there'. Nevertheless it seems many of them do not make the same deductions as Buffett, which seem pretty clear:

My wealth has come from a combination of living in America, some lucky genes, and compound interest. Both my children and I won what I call the ovarian lottery. (For starters, the odds against my 1930 birth taking place in the U.S. were at least 30 to 1. My being male and white also removed huge obstacles that a majority of Americans then faced.)

My luck was accentuated by my living in a market system that sometimes produces distorted results, though overall it serves our country well. I've worked in an economy that rewards someone who saves the lives of others on a battlefield with a medal, rewards a great teacher with thank-you notes from parents, but rewards those who can detect the mispricing of securities with sums reaching into the billions. In short, fate's distribution of long straws is wildly capricious.

In this sense they are sort of 'natural experiments' of cognitive biases at work.

Replies from: pjeby
comment by pjeby · 2010-06-16T15:44:02.833Z · LW(p) · GW(p)

My wealth has come from a combination of living in America, some lucky genes, and compound interest. Both my children and I won what I call the ovarian lottery. (For starters, the odds against my 1930 birth taking place in the U.S. were at least 30 to 1. My being male and white also removed huge obstacles that a majority of Americans then faced.)

My luck was accentuated by my living in a market system that sometimes produces distorted results, though overall it serves our country well. I've worked in an economy that rewards someone who saves the lives of others on a battlefield with a medal, rewards a great teacher with thank-you notes from parents, but rewards those who can detect the mispricing of securities with sums reaching into the billions. In short, fate's distribution of long straws is wildly capricious.

Wow. That is some seriously clear thinking. Too bad Mr. Buffett isn't here to get the upvote himself, so I upvoted you instead. ;-)

Replies from: xamdam
comment by xamdam · 2010-06-16T15:55:49.006Z · LW(p) · GW(p)

I think in Buffett's case this is not an accident; I venture to claim that his wealth is a result of fortune combined with an unusual dose of rationality (even if he calls it 'genes'). My strongest piece of evidence is that his business partner for the past 40 years, Charlie Munger, is one of the earliest outspoken adopters of the good parts of modern psychology, such as the ideas of Cialdini and of Tversky/Kahneman on decision-making under uncertainty.

http://vinvesting.com/docs/munger/human_misjudgement.html

Replies from: pjeby
comment by pjeby · 2010-06-17T01:21:51.866Z · LW(p) · GW(p)

http://vinvesting.com/docs/munger/human_misjudgement.html

Oh wow, I think I have a new role model. Any chance we can get these two (Buffett and Munger) to open a rationality dojo? (Who knows, they might be impressed, given that most people ask them for wealth advice instead...)

comment by multifoliaterose · 2010-06-14T21:18:52.253Z · LW(p) · GW(p)

I made a couple of comments here http://lesswrong.com/lw/1kr/that_other_kind_of_status/255f at Yvain's post titled "That Other Kind of Status." I messed up in writing my first comment in that it did not read as I had intended it to. Please disregard my first comment (I'm leaving it up to keep the responses in context).

I clarified in my second comment. My second comment seems to have gotten buried in the shuffle and so I thought I would post again here.

I've been a lurker in this community for three months and I've found that it's the smartest community that I've ever come across outside of parts of the mathematical community. I recognize a lot of the posters as similar to myself in many ways and so have some sense of having "arrived home."

At the same time, the degree of confidence that many posters have about their beliefs in the significance of Less Wrong and SIAI is unsettling to me. A number of posters write as though they're sure that what Less Wrong and SIAI are doing are the most important things that any human could be doing. It seems very likely to me that what Less Wrong and SIAI are doing is not nearly as important (relative to other things) as such posters believe.

I don't want to get involved in a debate about this point now (although I'd be happy to elaborate and give my thoughts in detail if there's interest).

What I want to do is to draw attention to the remarks that I made in my second comment at the link. From what I've read (several hundred assorted threads), I feel like an elephant in the room is the question of whether the reason that some of you believe Less Wrong and SIAI are doing things of the highest level of importance is that you're a part of these groups (*).

My drawing attention to this question is not out of malice toward any of you - as I indicated above, I feel more comfortable with Less Wrong than I do with almost any other large group that I've ever come across. I like you people and if some of you are suffering from the issue (*) I see this as understandable and am sympathetic - we're all only human.

But I am concerned that I haven't seen much evidence of serious reflection about the possibility of (*) on Less Wrong. The closest that I've seen is Yvain's post titled "Extreme Rationality: It's Not That Great". Even if the most ardent Less Wrong and SIAI supporters are mostly right about their beliefs, (*) is almost certainly at least occasionally present, and I think that the community would benefit from a higher level of vigilance concerning the possibility of (*).

Any thoughts? I'd also be interested in any relevant references.

[Edited in response to cupholder's comment, deleted extraneous words.]

Replies from: Eneasz, cupholder, JoshuaZ, Vladimir_Nesov
comment by Eneasz · 2010-06-16T07:02:02.949Z · LW(p) · GW(p)

At the same time, the degree of confidence that many posters have about their beliefs in the significance of Less Wrong and SIAI is unsettling to me. A number of posters write as though they're sure that what Less Wrong and SIAI are doing are the most important things that any human could be doing. It seems very likely to me that what Less Wrong and SIAI are doing is not nearly as important (relative to other things) as such posters believe.

I feel like an elephant in the room is the question of whether the reason that some of you believe Less Wrong and SIAI are doing things of the highest level of importance is that you're a part of these groups (*).

You know what... I'm going to come right out and say it.

A lot of people need their clergy. And after a decade of denial, I'm finally willing to admit it - I am one of those people.

The vast majority of people do not give their 10% tithe to their church because some rule in some "holy" book demands it. They don't do it because they want a reward in heaven, or to avoid hell, or because their utility function assigns all such donated dollars 1.34 points of utility up to 10% of gross income.

They do it because they want their priests to kick more ass than the OTHER group's priests. OUR priests have more money, more power, and more intellect than YOUR sorry-ass excuse for a holy-man. "My priest bad, cures cancer and mends bones; your priest weak, tell your priest to go home!"

So when I give money to the SIAI (or FHI or similar causes) I don't do it because I necessarily think it's the best/most important possible use of my fungible resources. I do it because I believe Eliezer & Co are the most like-me actors out there who can influence the future. I do it because, of all the people out there with the ability to alter the flow of future events, their utility function is the closest to my own, and I don't have the time/energy/talent to pursue my own interests directly. I want the future to look more like me, but I also want enough excess time/money to get hammered on the weekends while holding down an easy accounting job.

In short - I want to be able to just give a portion of my income to people I trust to be enough like me that they will further my goals simply by pursuing their own interests. Which is to say: I want to support my priests.

And my priests are Eliezer Yudkowsky and the SIAI fellows. I don't believe they leech off of me; I feel they earn every bit of respect and funding they get. But that's beside the point. The point is that even if the funds I gave were spent sub-optimally, I would STILL give them this money, simply because I want other people to see that MY priests are better taken care of than THEIR priests.

The Vatican isn't made out of gold because the pope is greedy; it's made out of gold because the peasants demand that it be so. And frankly, I demand that the Vatican be put to fucking shame when it compares itself to us.

Standard Disclaimer, but really... some enthusiasm is needed to fight Azathoth.

Replies from: blogospheroid, Kutta
comment by blogospheroid · 2010-06-18T06:24:53.136Z · LW(p) · GW(p)

Voted up for honesty.

comment by Kutta · 2010-06-16T08:27:42.884Z · LW(p) · GW(p)

Umm, while on some visceral level I can relate to this sentiment, I still find it hugely inappropriate. Reality --> enthusiasm, not the reverse order, and I think not even a slight deviation from that pattern is permissible.

comment by cupholder · 2010-06-14T21:53:19.898Z · LW(p) · GW(p)

Comment on markup: I saw the first version of your comment, where you were using "(*)" as a textual marker, and I see you're now using "#" because the asterisks were messing with the markup. You should be able to get the "(*)" marker to work by putting a backslash before the asterisk (and I preferred the "(*)" indicator because that's more easily recognized as a footnote-style marker).

Feels weird to post an entire paragraph just to nitpick someone's markup, so here's an actual comment!

From what I've read (several hundred assorted threads), I feel like an elephant in the room is the question of whether the reason that some of you believe Less Wrong and SIAI are doing things of the highest level of importance is that you're a part of these groups

Let me try and rephrase this in a way that might be more testable/easier to think about. It sounds like the question here is what is causing the correlation between being a member of LW/SIAI and agreeing with LW/SIAI that future AI is one of the most important things to worry about. There are several possible causes:

  1. group membership causes group agreement (agreement with the group)
  2. group agreement causes group membership
  3. group membership and group agreement have a common cause (or, more generally, there's a network of causal factors that connect group membership with group agreement)
  4. a mix of the above

And we want to know whether #1 is strong enough that we're drifting towards a cult attractor or some other groupthink attractor.

I'm not instantly sure how to answer this, but I thought it might help to rephrase this more explicitly in terms of causal inference.
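As a minimal sketch of why this is hard (all probabilities below are invented for illustration), here is a toy simulation of the three causal stories: each of them produces a clearly positive membership/agreement correlation, so the observed association alone can't tell them apart; that would take interventions or longitudinal data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated people; all parameter values are made up

# 1. membership -> agreement: joining raises the chance of agreeing.
member1 = rng.random(n) < 0.10
agree1 = rng.random(n) < np.where(member1, 0.80, 0.20)

# 2. agreement -> membership: agreeing raises the chance of joining.
agree2 = rng.random(n) < 0.26
member2 = rng.random(n) < np.where(agree2, 0.30, 0.03)

# 3. common cause: prior interest in AI risk raises the chance of both.
interest = rng.random(n) < 0.30
member3 = rng.random(n) < np.where(interest, 0.28, 0.02)
agree3 = rng.random(n) < np.where(interest, 0.70, 0.07)

for label, (m, a) in {"1": (member1, agree1),
                      "2": (member2, agree2),
                      "3": (member3, agree3)}.items():
    corr = np.corrcoef(m.astype(float), a.astype(float))[0, 1]
    print(f"hypothesis {label}: membership/agreement correlation = {corr:.2f}")

# All three hypotheses print a positive correlation from observational data alone.
```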

Replies from: multifoliaterose
comment by multifoliaterose · 2010-06-15T01:45:17.709Z · LW(p) · GW(p)

I'm not sure that your rephrasing accurately captures what I was trying to get at. In particular, strictly speaking (*) doesn't require that one be a part of a group, although being part of a group often plays a role in enabling (*).

Also, I'm not only interested in possible irrational causes for LW/SIAI members' belief that future AI is one of the most important things to worry about, but also possible irrational causes for each of:

(1) SIAI members' belief that donating to SIAI in particular is the most leveraged way to reduce existential risks. Note that it's possible to devote one's life to a project without believing that it's the best project for additional funding - see GiveWell's blog posts on Room For More Funding.

For reference, PeerInfinity says

A couple of times I asked SIAI about the idea of splitting my donations with some other group, and of course they said that donating all of the money to them would still be the most leveraged way for me to reduce existential risks.

(2) The belief that refining the art of human rationality is very important.

On (2), I basically agree with Yvain's post Extreme Rationality: It's Not That Great.

My own take is that the Less Wrong community has greatly enriched some of its members' lives by allowing them the opportunity to connect with people similar to themselves, and that their very positive feelings connected with their Less Wrong experience have led some of them to overrate the overall importance of Less Wrong's stated mission. I can write more about this if there's interest.

Replies from: cupholder, h-H
comment by cupholder · 2010-06-15T10:26:19.755Z · LW(p) · GW(p)

Thank you for clarifying. I don't think I really have an opinion on this, but I figure it's good to have someone bring it up as a potential issue.

comment by h-H · 2010-06-15T02:03:12.862Z · LW(p) · GW(p)

I can write more about this if there's interest.

I'm interested. I've been thinking about this issue myself for a bit, and something like an 'internal review' would greatly help in bringing any potential biases the community holds to light.

comment by JoshuaZ · 2010-06-14T22:02:09.398Z · LW(p) · GW(p)

I'm not aware of anyone here who would claim that LW is one of the most important things in the world right now, but I think a lot of people here would agree that improving human reasoning is important if we can have those improvements apply to lots of different people across many different fields.

There is a definite group of people here who think that SIAI is really important. If one thinks that a near Singularity is a likely event then this attitude makes some sense. It makes a lot of sense if you assign a high probability to a Singularity in the near future and also assign a high probability to the possibility that many Singularitarians either have no idea what they are doing or are dangerously wrong. I agree with you that the SIAI is not that important. In particular, I think that a Singularity is not a likely event for the foreseeable future, although I agree with the general consensus here that a large fraction of Singularity proponents are extremely wrong at multiple levels.

Keep in mind that for any organization or goal, the people you hear the most from about it are the people who think that it is important. That's the same reason that a lot of the general public thinks that tokamak fusion reactors will be practical in the next fifty years: the physicists and engineers who think that will loudly push for funding. The ones who don't will generally just go and do something else. Thus, in any given setting it can be difficult to estimate the general communal attitude toward something, since the strongest views will be the views that are most apparent.

Replies from: Vladimir_Nesov, multifoliaterose
comment by Vladimir_Nesov · 2010-06-14T22:24:49.090Z · LW(p) · GW(p)

I don't think intelligence explosion is imminent either. But I believe it's certain to eventually happen, absent the end of civilization before that. And I believe that its outcome depends exclusively on the values of the agents driving it, hence we need to be ready, with a good understanding of preference theory at hand when the time comes. To get there, we need to start somewhere. And right now, almost nobody is doing anything in that direction, and there is a very poor level of awareness of the problem, and poor intellectual standards for discussing the problem where surface awareness is present.

Either right now, or 50, or 100 years from now, a serious effort has to be undertaken, but the later it starts, the greater the risk of being too late to guide the transition in a preferable direction. The problem itself, as a mathematical and philosophical challenge, sounds like something that could easily take at least 100 years to reach clear understanding, and that is the deadline we should worry about: starting 10 years too late could mean failing to finish in time 100 years from now.

Replies from: multifoliaterose, Benquo
comment by multifoliaterose · 2010-06-15T00:10:43.748Z · LW(p) · GW(p)

Vladimir, I agree with you that people should be thinking about intelligence explosion, that there's a very poor level of awareness of the problem, and that the intellectual standards for discourse about this problem in the general public are poor.

I have not been convinced of, but am open toward, the idea that a paperclip maximizer is the overwhelmingly likely outcome if we create a superhuman AI. At present, my thinking is that if some care is taken in the creation of a superhuman AI, more likely than a paperclip maximizer is an AI which partially shares human values; that is, the dichotomy "paper clip maximizer vs. Friendly AI" seems like a false dichotomy - I imagine that the sort of AI that people would actually build would be somewhere in the middle. Any recommended reading on this point appreciated.

SIAI seems to have focused on the existential risk of "unfriendly intelligence explosion" and it's not clear to me that this existential risk is greater than the risks coming from world war and natural resource shortage.

Replies from: Vladimir_Nesov, Vladimir_Nesov, Craig_Morgan
comment by Vladimir_Nesov · 2010-06-15T01:02:29.363Z · LW(p) · GW(p)

the dichotomy "paper clip maximizer vs. Friendly AI" seems like a false dichotomy - I imagine that the sort of AI that people would actually build would be somewhere in the middle. Any recommended reading on this point appreciated.

Mainly Complexity of value. There is no way for human values to magically jump inside the AI, so if it's not specifically created to reflect them, it won't have them, and whatever the AI ends up with won't come close to human values, because human values are too complex to be resembled by any given structure that happens to be formed in the AI.

The more the AI's preference diverges from ours, the more we lose, and this loss is on an astronomical scale (even if preference diverges relatively little). The falloff with imperfect reflection of values might be so sharp that any ad-hoc solution renders the future worthless. Or maybe not, with certain classes of values that contain a component of sympathy, which reflects other values perfectly while giving them smaller weight in the overall game - but then we'd want to technically understand this "sympathy" to have any confidence in the outcome.

Replies from: CarlShulman, multifoliaterose, Vladimir_Nesov
comment by CarlShulman · 2010-06-16T16:33:09.058Z · LW(p) · GW(p)

The more the AI's preference diverges from ours, the more we lose, and this loss is on an astronomical scale (even if preference diverges relatively little).

This depends on something like aggregative utilitarianism. If additional resources have diminishing marginal value in fulfilling human aims, then getting a little slice of the universe (in the course of negotiating terms of surrender with the inhuman AI, if it can make credible commitments, or because we serve as acausal bargaining chips with other civilizations elsewhere in the universe) may be enough. Is getting 100% of the lightcone a hundred times better than 1%?
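As a toy calculation (assuming, purely for the sake of argument, logarithmic utility over resources and an invented count of reachable star systems), the answer could be "not even close to a hundred times better":

```python
import math

# Hypothetical numbers: logarithmic utility over the number of star systems secured.
stars_total = 1e22                            # invented order of magnitude for the reachable universe
u_full = math.log(stars_total)                # utility of 100% of the lightcone
u_one_percent = math.log(0.01 * stars_total)  # utility of a 1% share

print(u_full / u_one_percent)  # ~1.1: only about 10% better, not 100x better
```

Under a less sharply diminishing utility function the ratio would of course be larger; the point is only that the answer depends heavily on the curvature assumed.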

Replies from: Wei_Dai, Vladimir_Nesov
comment by Wei Dai (Wei_Dai) · 2010-06-21T01:27:33.237Z · LW(p) · GW(p)

Is getting 100% of the lightcone a hundred times better than 1%?

I think yes, if we take into account that the more of the lightcone we (our FAI) get, the more trading opportunities we would have with UFAI in other possible worlds. Diminishing marginal value shouldn't apply across possible worlds, because otherwise it would imply gross violations of expected utility maximization.

Also, I suspect that there are possible worlds with much greater resources than our universe (perhaps with physics that allow hypercomputation, or just many orders of magnitude more total exploitable resources), and some of them would have potential trading partners who are willing to give us a small share of their world for a large share of ours. We may eventually achieve most of our value from trading with them. But of course such trade wouldn't be possible if we didn't have something to trade with!

Replies from: Vladimir_Nesov, Will_Newsome
comment by Vladimir_Nesov · 2010-06-21T10:55:00.880Z · LW(p) · GW(p)

Interesting. This suggests thinking about FAI not as using its control to produce terminal value in its own world, but as using its control to buy as much terminal value as it can, in various world-programs. Since it doesn't matter where the value is produced, most of the value doesn't have to be produced in the possible worlds with FAIs in them. Indeed, it sounds unlikely that specifically the FAI worlds will be optimal for FAI-value optimization. FAIs (and the worlds they control) act as instrumental leverage, a way of controlling the global mathematical universe into having more value for our preference.

Thus, more FAIs means stronger control over the mathematical universe, while more UFAIs mean that the mathematical universe is richer, and so the FAIs can get more value out of it with the same control. The metaphors of trade and comparative advantage start applying again, not on the naive level of cohabitation on the same world, but on the level of the global ontology. Mathematics grants you total control over your domain, so that your "atoms" can't be reused for something else by another stronger agent, and so you do benefit from most superintelligent "aliens".

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-06-21T11:29:07.805Z · LW(p) · GW(p)

Yes, assuming that trading across possible worlds can be done in the first place. One thing that concerns me is the combinatorial explosion of potential trading partners. How do they manage to "find" each other?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-21T11:59:21.000Z · LW(p) · GW(p)

It's the same combinatorial explosion as with the future possible worlds. Even though you can't locate individual valuable future outcomes (through certain instrumental sequences of exact events), you can still make decisions about your actions leading to certain consequences "in bulk", and I expect the trade between possible worlds can be described similarly (after all, it does work on exactly the same decision-making algorithm). Thus, you usually won't know who you are trading with, exactly, but you can estimate that on net your actions are in the right direction.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-05-17T19:14:39.656Z · LW(p) · GW(p)

Isn't the set of future worlds with high measure a lot smaller?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-05-17T20:40:07.926Z · LW(p) · GW(p)

I currently agree it's a bad analogy and I no longer endorse the position that global acausal trade is probably feasible, although its theoretical possibility seems to be a stable conclusion.

comment by Will_Newsome · 2010-09-27T06:29:23.275Z · LW(p) · GW(p)

Robin Hanson would be so pleased that it turns out economics is the fundamental law of the entire ensemble universe.

comment by Vladimir_Nesov · 2010-06-16T16:51:03.768Z · LW(p) · GW(p)

There are two distinct issues here: (1) how highly would a human with the original preference value a universe which gives only a small weight to that preference, and (2) how likely is the changed preference to give any weight whatsoever to the original preference - in other words, to produce a universe to any extent valuable to the original preference, even if the original preference values universes only weakly optimized in its direction.

Moving to a different preference is different from lowering the weight of the original preference. A slightly changed (formal definition of) preference may put no weight at all on the preceding preference. The optimal outcome according to the modified preference can thus be essentially moral noise - paperclips - to the original preference. Giving a small slice of the universe, on the other hand, is what you get out of aggregation of preferences, and a changed preference doesn't necessarily have a form of aggregation that includes the original preference. (On the other hand, there is a hope that human-like preferences include sympathy, which does make them aggregate the preferences of other persons with some weight.)

Replies from: CarlShulman
comment by CarlShulman · 2010-06-16T16:56:26.135Z · LW(p) · GW(p)

We should assign some substantial probability to getting some weighting of our preference (from bargaining with transparency, acausal trade, altruistic brain emulations, etc). If a moderate weighting of our preferences gets most of the potential utility, then the expected utility of inhuman AIs getting powerful won't be astronomically less than the expected utility of, e.g. a 'Friendly AI' acting on idealized human preferences.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-16T17:28:50.151Z · LW(p) · GW(p)

We should assign some substantial probability to getting some weighting of our preference (from bargaining with transparency, acausal trade, altruistic brain emulations, etc).

Game-theoretic considerations are only relevant when you have non-trivial control, not when your atoms are used for something else. If the singleton's preference gives some weight to your preference, this is a case of having control directly through the singleton's preference, but the origin of this control is not game-theoretic. If the singleton's preference has sympathy for your preference, your explicit instantiation in the world doesn't need to have any control in order to win through the implicit control via the singleton's preference.

Game-theoretic aggregation, on the other hand, doesn't work by influence on the other agent's preference. You only get your slice of the world because you already control it. Another agent may perform trade, but this is trade of control, rearranging what specifically each of you controls, without changing your preferences.

I assume that control will be winner-takes-all, so the preferences of other agents existing at the time only matter if the winner's preference directly pays any attention to their preferences, not if they merely had some limited control from the start.

If a moderate weighting of our preferences gets most of the potential utility, then the expected utility of inhuman AIs getting powerful won't be astronomically less than the expected utility of, e.g. a 'Friendly AI' acting on idealized human preferences.

My point is that an inhuman AI may give no weight to our preference, while FAI may give at least some weight to everyone's preference. Game-theoretic trade won't matter here because agents other than the singleton have no control to bargain with. FAI gives weight to other preferences not because of trade, but by construction from the start, even if the people it gives weight to don't exist at all (the FAI giving them weight in optimization might cause them to appear, or cause some event at least as good from their perspective).

Replies from: CarlShulman
comment by CarlShulman · 2010-06-17T06:21:47.358Z · LW(p) · GW(p)

You only get your slice of the world because you already control it.

This isn't obviously the most natural way to describe a scenario in which an AI thinks it has a 90% chance of winning a conflict with humanity, but also has the ability to jointly create (with humanity) agents to enforce an agreement (and can do this quickly enough to be relevant), so cuts a deal splitting up the resources of the light cone at a 9:1 ratio.

I assume that control will be winner-takes-all,

Given that there are plausible sets of parameter values where this assumption is false, we can't use it to assess overall expected value to astronomical precision.

Game-theoretic considerations are only relevant when you have non-trivial control,

I specifically mentioned acausal trade, a la Rolf Nelson's AI-deterrence scheme, which needs non-trivial control only in some region of the ensemble of possibilities the AI considers. Indeed, the AI might treat us well simply because of the chance that benevolent non-human aliens will respond positively if its algorithm has this output (as the benevolent aliens might be modeling the AI's algorithm).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-18T16:11:23.957Z · LW(p) · GW(p)

I specifically mentioned acausal trade, a la Rolf Nelson's AI-deterrence scheme, which needs non-trivial control only in some region of the ensemble of possibilities the AI considers.

Yes, I forgot about that (though I remain uncertain about how well this argument works, not having worked out a formal model). To summarize the arguments for why the future is still significantly more valuable than what we have now, even if we run into Unfriendly AI:

(1) If there is a non-negligible chance that we'll have FAI in the future, or that we could've created FAI if some morally random facts in the past (such as the coin in counterfactual mugging) were different, then we can estimate the present expected value of the world as pretty high, since a factor of getting whole universes (counterfactually or probably) optimized towards your specific preference is present in the expected utility computation. The counterfactual value is present even if it's certain that the future contains Unfriendly AI.

(2) It's even better, because the unfriendly singletons will also optimize their worlds towards your preference a little for game-theoretic reasons, even if they don't care at all about your preference. This game is not with you personally, a human that controls very little and whose control can't compel a singleton to any significant extent, but with the counterfactual FAIs. The FAIs that could be created, but weren't, can act as Omega in counterfactual mugging, making it profitable for the indifferent singletons to pay the FAI a little in FAI-favored kind of world-optimization.

(3) Some singletons that don't follow your preference in particular, but have a remotely human-like preference, will have a component of sympathy in their preference, and will dole out to your preference some fair portion of control in their world, one that is much greater than the portion of control you held originally. This sympathy seems to be godshatter of game-theoretic considerations that compel even singletons with non-evolved (artificial, random) preferences according to arguments (1) and (2).

The conclusion to this seems to be that creating an Unfriendly AI is significantly better than ending up with no rational singleton at all (existential disaster that terminates civilization), but significantly worse than a small chance of FAI.

Replies from: PhilGoetz, CarlShulman
comment by PhilGoetz · 2011-07-15T19:06:26.194Z · LW(p) · GW(p)

Your comments are mostly good, but I dispute the final assumption that no singleton => disaster. There has as yet been no investigation into the merits of singleton vs. an economy (or ecosystem) of independent agents.

If we were living in the 18th century, it would be reasonable to suppose that the only stable situation is one where one agent is king. But we are not.

comment by CarlShulman · 2010-06-18T16:38:47.603Z · LW(p) · GW(p)

Yep, these are key considerations.

So there's the utility difference between business-as-usual (no AI), and getting a small share of resources optimized for your preference, and the utility difference between getting small and large shares of resources. If the second difference is much larger than the first, then (1) is crucial, and (2) and (3) are not so good. But if the first difference is much bigger than the second, the pattern is the reverse.

And if we're comparing expected utility conditioning on no local FAI here and EU conditioning on FAI here, moderate credences can suffice (depending on the shape of your utility function).

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-18T17:14:49.491Z · LW(p) · GW(p)

no local FAI here

Whether FAI is local or not can't matter; whether something is real or counterfactual is morally irrelevant. If we like small control, it means that the possible worlds with UFAI are significantly valuable, just as the worlds with FAI are, provided there are enough worlds with FAI to weakly control the UFAIs; and if we like only large control, it means that the possible worlds with UFAI are not as valuable, and it's mostly the worlds with FAI that matter.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-07-15T19:07:02.242Z · LW(p) · GW(p)

What do "small control" and "large control" mean?

comment by Vladimir_Nesov · 2010-06-18T17:07:02.815Z · LW(p) · GW(p)

But if the first difference is much bigger than the second, the pattern is the reverse.

It's not literally the reverse, because if you don't create those FAIs, nobody will, and so the UFAIs won't have the incentive to give you your small share. It's never good to increase probability of UFAI at the expense of probability of FAI. I'm not sure whether there is any policy guideline suggested by these considerations, conditional on the pattern in utility you discuss. What should we do differently depending on how much we value small vs. large control? It's still clearly preferable to have UFAI to having no future AI, and to have FAI to having UFAI, in both cases.

Replies from: CarlShulman
comment by CarlShulman · 2010-06-19T02:23:44.450Z · LW(p) · GW(p)

Worrying less about our individual (or national) shares, and being more cooperative with other humans or uploads seems like an important upshot.

comment by multifoliaterose · 2010-06-15T08:36:09.353Z · LW(p) · GW(p)

There is no way for human values to magically jump inside the AI, so if it's not specifically created to reflect them, it won't have them, and whatever the AI ends up with won't come close to human values, because human values are too complex to be resembled by any given structure that happens to be formed in the AI.

I'm not convinced by the claim that human values have high Kolmogorov complexity.

In particular, Eliezer's article Not for the Sake of Happiness Alone is totally at odds with my own beliefs. In my mind, it's incoherent to give anything other than subjective experiences ethical consideration. My own preference for real science over imagined science is entirely instrumental and not at all terminal.

Now, maybe Eliezer is confused about what his terminal values are, or maybe I'm confused about what my terminal values are, or maybe our terminal values are incompatible. In any case, it's not obvious that an AI should care about anything other than the subjective experiences of sentient beings.

Suppose that it's okay for an AI to exclude everything but subjective experience from ethical consideration. Is there then still reason to expect that human values have high Kolmogorov complexity?

I don't have a low complexity description to offer, but it seems to me that one can get a lot of mileage out of the principles "if an individual prefers state A to state B whenever he/she/it is in either of state A or state B, then state A is superior for that individual to state B" and "when faced with two alternatives, the moral alternative is the one that you would prefer if you were going to live through the lives of all sentient beings involved."

Of course "sentient being" is ill-defined and one would have to do a fair amount of work frame the things that I just said in more formal terms, but anyway, it's not clear to me that there's a really serious problem here.

The more the AI's preference diverges from ours, the more we lose, and this loss is on an astronomical scale (even if preference diverges relatively little).

I totally agree that if the creation of a superhuman AI is going to precede all other existential threats then we should focus all of our resources on trying to get the superhuman AI to be as friendly as possible.

Replies from: khafra, Vladimir_Nesov, multifoliaterose, Vladimir_Nesov, timtyler
comment by khafra · 2010-06-15T10:28:27.285Z · LW(p) · GW(p)

Have you read the Heaven post by denisbider and the two follow-ups constituting a mini-wireheading series? There have been other posts on the difference between wanting and liking; but it illustrates a fairly strong problem with wireheading: Even if all we're worried about is "subjective states," many people won't want to be put in that subjective state, even knowing they'll like it. Forcing them into it or changing their value system so they do want it are ethically suboptimal solutions.

So, it seems to me that if anything other than maximized absolute wireheading for everyone is the AI's goal, it's gonna start to get complicated.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-06-15T23:26:16.881Z · LW(p) · GW(p)

Thanks for the references to the posts which I had not seen before and which I find relevant. I'm sympathetic toward denisbider's view, but will read the comments to see if I find diverging views compelling.

comment by Vladimir_Nesov · 2010-06-15T11:16:42.631Z · LW(p) · GW(p)

Maybe you should start with what's linked from fake fake utility functions then (the page on the wiki wasn't organized quite as I expected).

comment by multifoliaterose · 2010-06-15T08:41:44.740Z · LW(p) · GW(p)

But I would qualify the last sentence of my reply by saying that the best way to get a superhuman AI to be as friendly as possible may not be to work on friendly AI or advocate for friendly AI. For example, it may be best to work toward geopolitical stability to minimize the chances of some country rashly creating a potentially unsafe AI out of a sense of desperation during wartime.

comment by Vladimir_Nesov · 2010-06-15T11:08:40.714Z · LW(p) · GW(p)

I totally agree that if the creation of a superhuman AI is going to precede all other existential threats then we should focus all of our resources on trying to get the superhuman AI to be as friendly as possible.

(?) I never said that.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-06-15T19:32:19.894Z · LW(p) · GW(p)

Yes, I was agreeing with what I inferred your attitude to be rather than agreeing with something that you said. (I apologize if I distorted your views - if you'd like I can edit my comment to remove the suggestion that you hold the position that I attributed to you.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-15T20:05:25.756Z · LW(p) · GW(p)

I don't believe that we "should focus all of our resources" on FAI, as there are many other worthy activities to focus on. The argument is that this particular problem gets disproportionately little attention, and while with other risks we can in principle luck out even if they get no attention, it isn't so for AI. Failing to take FAI seriously is fatal; failing to take nanotech seriously isn't necessarily fatal.

Thus, although strictly speaking I agree with your implication, I don't see its condition as plausible, and so I don't see the implication as a whole as relevant.

comment by timtyler · 2010-06-16T15:35:48.276Z · LW(p) · GW(p)

Re: "Is there then still reason to expect that human values have high Kolmogorov complexity?"

Human values are mostly a product of genes and memes. There is an awful lot of information in those. However, it is true that you can fairly closely approximate human values - or those of any other creature - by the directive to make as many grandchildren as possible - which seems reasonably simple.

Most of the arguments for humans having complex values appear to list a whole bunch of proximate goals - as though that constitutes evidence.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-16T16:34:53.982Z · LW(p) · GW(p)

I disagree. You need to know much more than just the drive for grandchildren, given the massively diverse ways we observe even in our present world for species to propagate, all of which correspond to different articulable values once they reach human intelligence.

Human values should be expected to have a high K-complexity because you would need to specify both the genes/early environment, and the precise place in history/Everett branches where humans are now.

Replies from: timtyler, red75
comment by timtyler · 2010-06-16T16:43:11.697Z · LW(p) · GW(p)

The idea was to "approximate human values" - not to express them in precise detail: nobody cares much if Jim likes strawberry jam more than he likes raspberry jam.

The environment mostly drops out of the equation - because most of it is shared between the agents involved - and because of the phenomenon of Canalisation: http://en.wikipedia.org/wiki/Canalisation_%28genetics%29

Replies from: SilasBarta
comment by SilasBarta · 2010-06-16T16:49:01.642Z · LW(p) · GW(p)

The idea was to "approximate human values" - not to express them in precise detail

Sure, but I take "approximation" to mean something like getting you within 10 or so bits of the true distribution; the heuristic you gave still leaves you maybe 500 or so bits away, which is huge, and far more than you implied.

The environment mostly drops out of the equation - because most of it is shared between the agents involved - and because of the phenomenon of Canalisation

That would help you on message length if you had already stored one person's values and were looking to store a second person's. It does not help for describing the first person's values, or some aggregate measure of humans' values.

Replies from: timtyler
comment by timtyler · 2010-06-16T16:55:50.959Z · LW(p) · GW(p)

10 bits!!! That's not much of a message!

The idea of a shared environment arises because the proposed machine - in which the human-like values are to be implemented - is to live in the same world as the human. So, one does not need to specify all the details of the environment - since these are shared naturally between the agents in question.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-16T17:15:35.575Z · LW(p) · GW(p)

10 bits!!! That's not much of a message!

10 bits short of the needed message, not a 10-bit message. I mean that e.g. an approximation gives 100 bits when full accuracy would be 110 bits (and 10 bits is an upper bound).

The idea of a shared environment arises because the proposed machine - in which the human-like values are to be implemented - is to live in the same world as the human. So, one does not need to specify all the details of the environment - since these are shared naturally between the agents in question.

That still doesn't answer my point; it just shows how once you have one agent, adding others is easy. It doesn't show how getting the first, or the "general" agent is easy.

Replies from: timtyler, timtyler
comment by timtyler · 2010-06-16T18:53:19.290Z · LW(p) · GW(p)

Re: "That still doesn't answer my point; it just shows how once you have one agent, adding others is easy. It doesn't show how getting the first, or the "general" agent is easy."

To specify the environment, choose the universe, galaxy, star, planet, latitude, longitude and time. I am not pretending that information is simple, just that it is already there, if your project is building an intelligent agent.

comment by timtyler · 2010-06-16T18:50:36.916Z · LW(p) · GW(p)

Re: "10 bits short of the needed message".

Yes, I got that the first time. I don't think you are appreciating the difficulty of coding even relatively simple utility functions. A couple of ASCII characters is practically nothing!

Replies from: SilasBarta
comment by SilasBarta · 2010-06-17T17:40:54.722Z · LW(p) · GW(p)

ASCII characters aren't a relevant metric here. Getting within 10 bits of the correct answer means that you've narrowed it down to 2^10 = 1024 distinct equiprobable possibilities [1], one of which is correct. Sounds like an approximation to me! (if a bit on the lower end of the accuracy expected out of one)

[1] or probability distribution with the same KL divergence from the true governing distribution
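A minimal sketch of that footnote's arithmetic (the probabilities below are invented for illustration): being k bits worse on an outcome means assigning it a probability 2^k times smaller than the true model does.

```python
import math

# Illustrative only: the extra code length for one outcome is log2(p_true / p_approx).
p_true = 0.5               # invented probability the true model gives the outcome
p_approx = p_true / 1024   # an approximation that is 10 bits worse on this outcome

extra_bits = math.log2(p_true / p_approx)
print(extra_bits)          # 10.0 -- the outcome looks 2**10 = 1024 times less likely
```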

comment by red75 · 2010-06-16T19:03:59.940Z · LW(p) · GW(p)

Or you can implement a constant K-complexity learn-by-example algorithm and get all the rest from the environment.

How about "Do as your creators do (generalize this as your creators generalize)"?

comment by Vladimir_Nesov · 2010-06-15T11:16:08.579Z · LW(p) · GW(p)

Maybe you should start with what's linked from fake fake utility functions then (the page on the wiki wasn't organized quite as I expected).

comment by Vladimir_Nesov · 2010-06-15T01:58:31.391Z · LW(p) · GW(p)

SIAI seems to have focused on the existential risk of "unfriendly intelligence explosion" and it's not clear to me that this existential risk is greater than the risks coming from world war and natural resource shortage.

Not clear to me either that unfriendly AI is the greatest risk, in the sense of having the most probability of terminating the future (though "resource shortage" as an existential risk sounds highly implausible - we are talking about extinction risks, not merely potentially serious issues; and "world war" doesn't seem like something particularly relevant to the coming risks, since dangerous technology doesn't need war to be deployed).

But Unfriendly AI seems to be the only unavoidable risk, something we'd need to tackle in any case if we get through the rest. On other problems we can luck out, not on this one. Without solving this problem, the efforts to solve the rest are for naught (relatively speaking).

Replies from: multifoliaterose, CarlShulman
comment by multifoliaterose · 2010-06-15T04:44:00.604Z · LW(p) · GW(p)

"resource shortage" as existential risk sounds highly implausible - we are talking about extinction risks, not merely potential serious issues;

I mean "existential risk" in a broad sense.

Suppose we run out of a source of, oh, say, electricity too fast to find a substitute. Then we would be forced to revert to a preindustrial society. This would be a permanent obstruction to technological progress - we would have no chance of creating a transhuman paradise or populating the galaxy with happy sentient machines and this would be an astronomical waste.

Similarly if we ran out of any number of things (say, one of the materials that's currently needed to build computers) before finding an adequate substitute.

"world war" doesn't seem like something particularly relevant for the coming risks, dangerous technology doesn't need war to be deployed.

My understanding is that a large scale nuclear war could seriously damage infrastructure. I could imagine this preventing technological development as well.

But Unfriendly AI seems to be the only unavoidable risk, something we'd need to tackle in any case if we get through the rest. On other problems we can luck out, not on this one. Without solving this problem, the efforts to solve the rest are for naught (relatively speaking).

On the other hand, it's equally true that if another existential risk hits us before we create friendly AI, all of our friendly-AI-directed efforts will be for naught.

Replies from: Strange7, Vladimir_Nesov
comment by Strange7 · 2010-06-15T08:52:42.589Z · LW(p) · GW(p)

Suppose we run out of a source of, oh, say, electricity too fast to find a substitute.

That's not how economics works. If one source of electricity becomes scarce, that means it's more expensive, so people will switch to cheaper alternatives. All the energy we use ultimately comes from either decaying isotopes (fission, geothermal) or the sun; neither of those will run out in the next thousand years.

Modern computer chips are doped silicon semiconductors. We're not going to run out of sand any time soon, either. Of course, purification is the hard part, but people have been thinking up clever ways to purify stuff since before they stopped calling it 'alchemy.'

Replies from: khafra, cupholder
comment by khafra · 2010-06-15T13:37:45.111Z · LW(p) · GW(p)

The energy requirements for running modern civilization aren't just a scalar number--we need large amounts of highly concentrated energy, and an infrastructure for distributing it cheaply. The normal economics of substitution don't work for energy.

A "tradeoff" exists between using resources (including energy and material inputs of fossil origin) to feed the growth of material production (industry and agriculture) and to support the economy’s structural transformation.

As the substitution of renewable for nonrenewable (primarily fossil) energy continues, nature exerts resistance at some point; the scale limit begins to bind. Either economic growth or transition must halt. Both alternatives lead to severe disequilibrium. The first because increased pauperization and the apparent irreducibility of income differentials would endanger social peace. Also, since an economic order built on competition among private firms cannot exist without expansion, the free enterprise system would flounder.

The second alternative is equally untenable because the depletion of nonrenewable resources, proceeding along a rising marginal cost curve or, equivalently, along a descending Energy Return on Energy Invested (EROI) schedule, increases production costs across the entire spectrum of activities. Supply curves shift upwards.

It's entirely possible that failing to create a superintelligence before the average EROI drops too low to sustain the effort would leave us unable to create one for long enough that other existential risks become inevitabilities.

Replies from: timtyler
comment by timtyler · 2010-06-16T08:34:00.789Z · LW(p) · GW(p)

"Substitution economics" seems unlikely to stop us eventually substituting fusion power and biodesiel for oil. Meanwhile, we have an abundance of energy in the form of coal - more than enough to drive progress for a loooog while yet. The "energy apocalypse"-gets-us-first scenario is just very silly.

Replies from: khafra
comment by khafra · 2010-06-16T13:32:43.659Z · LW(p) · GW(p)

Energy economics is interconnected enough with politics to make me lower my expectation of rationality from both of us for the remainder of the discussion due to reference class forecasting. Also, we are several inferential steps away from each other, so any discussion is going to be long and full of details. Regardless, I'm going to go ahead, assuming agreement that market forces cannot necessarily overcome resource shortages (or the Easter Islanders would still be with us).

Historically, the world switched from coal to petroleum before developing any technologies we'd regard as modern. The reason, unlike so much else in economics, is simple: the energy density of coal is 24 MJ/kg; the energy density of gasoline is 44 MJ/kg. Nearly doubling the energy density makes many things practical that wouldn't otherwise be, like cars, trucks, airplanes, etc. Coal cannot be converted into a higher energy density fuel except at high expense and with large losses, making the expected reserves much smaller. The fuels it can be converted to require significant modifications to engines and fuel storage.

Coal is at least plausible, although a stop-gap measure with many drawbacks. It's your hopes for fusion that really show the wishful thinking. Fusion is 20 years away from being a practical energy source, just like it was in 1960. The NIF has yet to reach break-even; economically practical power generation is far beyond that point; assuming a substantial portion of US energy generation needs is farther still. It'd be nice if Polywell/Bussard fusion proved practical, but that's barely a speck on the horizon, getting its first big basic research grant from the US Navy. And nothing but Mr. Fusion will help unless someone makes an order of magnitude improvement in battery or ultracapacitor energy density.

No matter which of the alternatives you plan to replace the energy infrastructure with, you needed to start about 20 years ago. World petroleum production is no longer sufficient to sustain economic growth and infrastructure transition simultaneously. Remember, the question isn't whether it's theoretically possible to substitute more plentiful energy sources for the ones that are getting more difficult to extract, it's whether the declining EROI of current energy sources will remain high enough to support the additional economic activity of converting infrastructure to other sources while still feeding people, let alone indulging in activities with no immediate payoff like GAI research.

We seem to be living in a world where the EROI is declining faster than willingness to devote painful amounts of the GDP to energy source conversion is increasing. This doesn't mean an immediate biker zombie outlaw apocalypse, but it does mean a slow, unevenly distributed "catabolic collapse" of decreasing standards of living, security, and stability.
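For readers unfamiliar with the term, here is a minimal sketch of the net-energy arithmetic behind the EROI worry (the specific values below are illustrative, not from the comment):

```python
# EROI = energy returned / energy invested.
# The fraction of gross energy left over for the rest of the economy is 1 - 1/EROI,
# which falls off a cliff as EROI approaches 1.
for eroi in [100, 20, 10, 5, 3, 1.5]:
    net_fraction = 1 - 1 / eroi
    print(f"EROI {eroi:>5}: {net_fraction:.0%} of gross energy available to society")
```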

Replies from: RobinZ, JoshuaZ, timtyler
comment by RobinZ · 2010-06-18T20:07:20.650Z · LW(p) · GW(p)

Upvoted chiefly for

Energy economics is interconnected enough with politics to make me lower my expectation of rationality from both of us for the remainder of the discussion due to reference class forecasting.

but I appreciate the analysis. (I am behind on reading comments, so I will be continuing downthread now.)

comment by JoshuaZ · 2010-06-16T17:26:01.648Z · LW(p) · GW(p)

And nothing but Mr. Fusion will help unless someone makes an order of magnitude improvement in battery or ultracapacitor energy density.

I don't know why you focus so much on fusion, although I agree it isn't practical at this point. But note that batteries and ultracapacitors are just energy storage devices. Even if they become far more energy dense, they don't provide a source of energy.

Replies from: khafra
comment by khafra · 2010-06-16T17:47:19.685Z · LW(p) · GW(p)

Unfortunately, that appears to be part of the bias I'd expected in myself--since timtyler mentioned fusion, biofuels, and coal, I was thinking about refuting his arguments instead of laying out the best view of probable futures that I could.

The case for wind, solar, and other renewables failing to take up petroleum's slack before it's too late is not as overwhelmingly probable as fusion's, but it takes the same form--they form roughly 0.3% of current world power generation, and even if the current exponential growth curve is somehow sustainable indefinitely they won't replace current capacity until the late 21st century.

With the large-scale petroleum supply curve, that leaves a large gap between 2015 and 2060 where we're somehow continuing to build renewable energy infrastructure with a steadily diminishing total supply of energy. I expect impoverished people to loot energy infrastructure for scrap metal to sell for food faster than other impoverished people can keep building it.
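Here's the doubling-time arithmetic behind that claim, as a rough sketch; the 0.3% starting share is the figure from the comment above, and the growth rates are assumptions to play with.

```python
import math

# Years for renewables to grow from ~0.3% of world power generation to 100%,
# assuming a constant exponential growth rate (clearly optimistic at the high end).
start_share = 0.003
for growth in [0.25, 0.20, 0.15, 0.10, 0.07]:
    years = math.log(1 / start_share) / math.log(1 + growth)
    print(f"{growth:.0%}/yr sustained growth: ~{years:.0f} years to reach 100%")
```

On these assumptions, "late 21st century" corresponds to sustained growth of only around 7%/yr; actually closing the 2015-2060 gap would take something closer to 15-20%/yr held for decades.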

comment by timtyler · 2010-06-16T17:05:02.551Z · LW(p) · GW(p)

That we will eventually substitute fusion power and biodiesel for oil seems pretty obvious to me. You are saying it represents "wishful thinking" - because of the possibility of civilisation not "making it" at all? If so, be aware that I think the chances of that happening are grossly exaggerated around these parts.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-16T17:24:31.435Z · LW(p) · GW(p)

It seems very doubtful that we'll have practical fusion power any time soon or necessarily ever. The technical hurdles are immense. Note that any form of fusion plant will almost certainly be using deuterium-tritium fusion. That means you need tritium sources. This also means that the internal structure will undergo constant low-level neutron bombardment, which seriously reduces the lifespan of basic parts such as the electromagnets used. If we look at the form of proposed fusion that has had the most work and has the best chance of success, tokamaks, then we get to a number of other serious problems such as plasma leaks. Other forms of magnetic containment have also not solved the plasma leak problem. Forms of reactors that don't use magnetic containment suffer from other similarly serious problems. For example, the runner up to magnetic containment is laser confinement, but no one has a good way to actually get energy out of laser confinement.

That said, I think that there are enough other potential sources of energy (nuclear fission, solar (and space based solar especially), wind, and tidal to name a few) that this won't be an issue.

Replies from: simplicio, Christian_Szegedy, Vladimir_Nesov, timtyler
comment by simplicio · 2010-06-16T20:30:23.784Z · LW(p) · GW(p)

...the runner up to magnetic containment is laser confinement but no one has a good way to actually get energy out of laser confinement...

Um.. not sure what you mean. The energy out of inertial (i.e., laser) confinement is thermal. You implode and heat a ball of D-T, causing fusion, releasing heat energy, which is used to generate steam for a turbine.

Fusion has a bad rap, because the high benefits that would accrue if it were accomplished encourage wishful thinking. But that doesn't mean it's all wishful thinking. Lawrence Livermore has seen some encouraging results, for example.

EDIT: for fact checking vis-a-vis LLNL.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-16T21:30:37.812Z · LW(p) · GW(p)

Yeah, but a lot of the energy that is released isn't in happy forms. D-T fusion releases most of its energy not as charged particles or photons but as fast neutrons (the 14.1 MeV neutron carries roughly 80% of each reaction's 17.6 MeV). So what you actually need is something that can absorb the neutrons in a safe fashion and convert their energy to heat. Lithium blankets are a commonly suggested solution, since a lot of the time lithium will form tritium after you bombard it with neutrons (so you get more tritium as a result). There's also the technically simpler solution of just using paraffin. But the conversion of the resulting energy into heat for steam is decidedly non-trivial.

Replies from: simplicio
comment by simplicio · 2010-06-16T21:31:50.890Z · LW(p) · GW(p)

I see, thanks.

comment by Christian_Szegedy · 2010-06-16T19:23:07.412Z · LW(p) · GW(p)

Imagine what people must have thought in 1910 about the feasibility of getting to the Moon or generating energy by artificially splitting atoms (especially within the 20th century).

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-16T20:06:47.126Z · LW(p) · GW(p)

Imagine what people must have thought in 1910 about the feasibility of getting to the Moon or generating energy by artificially splitting atoms (especially within the 20th century).

Two problems with that sort of comparison: First, something like going to the Moon is a goal, not a technology. Thus, if we have other sources of power, the incentive to work out the details for fusion becomes small. Second, one shouldn't forget how many technologies have been tried and have fallen by the wayside as not very practical or not at all practical. A good way of getting a handle on this is to read old issues of something like Scientific American from the 1950s and 1960s. Or read scifi from that time period. One example of a historical technology that never showed up on any substantial scale is nuclear powered airplanes, despite a lot of research in the 1950s about them. Similarly, nuclear thermal rockets have not been made. This isn't because they are impossible, but because they are extremely impractical compared to other technologies. It seems likely that fusion power will fall into the same category. See this article about Project Pluto for example.

Replies from: Christian_Szegedy, timtyler
comment by Christian_Szegedy · 2010-06-16T21:09:04.580Z · LW(p) · GW(p)

These are perfectly valid arguments, and I admit that I share your skepticism concerning the economic competitiveness of fusion technology. Indeed, if I had a decision to make about buying some security whose payout depended on the amount of energy produced by fusion power within 30 years, I would not hurry to place any bet.

What I lack is your apparent confidence in ruling out the technology based on the technological difficulties we face at this point in time.

I am always surprised at how much the opinions of so-called experts diverge when it comes to estimating the feasibility and cost of different energy production options (even excluding fusion power). For example, there is a recent TED video where people discuss the pros and cons of nuclear power. The whole discussion boils down to the question: What are the resources we need in order to produce X amount of energy using

  • nuclear
  • wind
  • solar
  • biofuel
  • geothermal

power. For me, the disturbing thing was that the statements about the resource usage (e.g. area consumption, but also risks) of the different technologies were sometimes off by orders of magnitude.

If we lack the information to produce numbers in the same ballpark even for technologies that we have been using for decades (if not longer), then how much confidence can we have about the viability, costs, risks and competitiveness of a technology, like fusion, that we have not even started to tap?

Replies from: cousin_it
comment by cousin_it · 2010-06-17T17:07:25.480Z · LW(p) · GW(p)

Ask and ye shall receive: David MacKay, Sustainable energy without the hot air. A free online book that reads like porn for LessWrong regulars.

Replies from: Christian_Szegedy
comment by Christian_Szegedy · 2010-06-17T18:42:09.543Z · LW(p) · GW(p)

Yes, I read that (pretty good) book quite a while ago, and it is also referenced in the TED talk I mentioned.

This was one of the reasons I was surprised that there is still such a huge disagreement about the figures even among experts.

comment by timtyler · 2010-06-17T16:55:33.682Z · LW(p) · GW(p)

Re: "Second, one shouldn't forget how many technologies have been tried and have fallen by the wayside as not very practical or not at all practical. [...] It seems likely that fusion power will fall into the same category."

Er, not to the governments that have already invested many billions of dollars in fusion research it doesn't! They have looked into the whole issue of the chances of success.

comment by Vladimir_Nesov · 2010-06-16T20:43:17.396Z · LW(p) · GW(p)

It seems very doubtful that we'll have practical fusion power any time soon or necessarily ever. [...] This also means that the internal structure will undergo constant low-level neutron bombardment, which seriously reduces the lifespan of basic parts such as the electromagnets used.

Automatically self-repairing nanotech construction? (To suggest a point where a straightforward way of dealing with this becomes economically viable.)

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-16T21:48:41.136Z · LW(p) · GW(p)

Automatically self-repairing nanotech construction?

You would need not only self-repairing nanotech but nanotech that could withstand both large amounts of radiation and strong magnetic fields. Of the currently proposed major methods of nanotech, I'm not aware of any that has anything resembling a chance of meeting those criteria (with the disclaimer that I'm not a chemist). If we had nanotech that was that robust, it would bump up so many different technologies that fusion would look pretty unnecessary. For example, the main barrier to space elevators is efficient, reliable synthesis of long chains of carbon nanotubes that could be placed in a functional composite (see this NASA Institute for Advanced Concepts Report for a discussion of these and related issues). We'd almost certainly have that technology well before anything like self-repairing nanotech that stayed functional in high radiation environments. And if you have functional space elevators, then you get cheap solar power, because it becomes very easy to launch solar power satellites.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-16T21:58:09.837Z · LW(p) · GW(p)

I'm not talking about plausible now, but plausible some day, as a reply to your "It seems very doubtful ... any time soon or necessarily ever". The sections being repaired could be offline. "Self-repair" doesn't assume repair within the volume of an existing/operating structure; it could all be cleared out and rebuilt anew, for example. That it's done more or less automatically is the economic requirement. Any other methods of relatively cheap and fast production, assembly and recycling will work too.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-16T22:09:59.031Z · LW(p) · GW(p)

Ah ok. That's a lot more plausible. There's still the issue that once you have cheap solar, the resources it takes to make fusion power will simply cost so much more as to likely not be worth it. But if it could be substantially more efficient than straight fission, then maybe it would get used for stuff not directly on Earth, if/when we have large installations outside the inner solar system.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-16T22:19:23.142Z · LW(p) · GW(p)

Estimating feasibility using exploratory engineering is much simpler than estimating what will actually happen. I'm only arguing that this technology will almost certainly be feasible on a human level in the not absurdly distant future, not that it'll ever actually be used.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-16T22:21:32.826Z · LW(p) · GW(p)

In that case, there's no substantial disagreement.

comment by timtyler · 2010-06-16T18:57:16.787Z · LW(p) · GW(p)

There don't seem to be too many electromagnets at the NIF: https://lasers.llnl.gov/

It seems to me that the problems are relatively minor, and so we will have fusion power - with high probability this century.

[Wow - LW codebase doesn't know about https!]

comment by cupholder · 2010-06-15T11:06:08.826Z · LW(p) · GW(p)

That's not how economics works. If one source of electricity becomes scarce, that means it's more expensive, so people will switch to cheaper alternatives.

I would have thought that those 'cheaper alternatives' could still be more expensive than the initial cost of the original source of electricity...? In which case losing that original source of electricity could still bite pretty hard (albeit maybe not to the extent of being an existential risk).

comment by Vladimir_Nesov · 2010-06-15T11:02:17.287Z · LW(p) · GW(p)

On the other hand, it's equally true that if another existential risk hits us before we build Friendly AI, all of our Friendly-AI-directed efforts will be for naught.

Yes.

comment by CarlShulman · 2010-06-16T16:27:29.905Z · LW(p) · GW(p)

But Unfriendly AI seems to be the only unavoidable risk, something we'd need to tackle in any case if we get through the rest.

A stable, stably benevolent world government/singleton could take its time solving AI, or inch up to it with biological and cultural intelligence enhancement. From our perspective we should count that as almost a maximal win in terms of existential risks.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-16T17:00:05.894Z · LW(p) · GW(p)

I don't see your point. It would take an unrealistic world dictatorship (whether it's "benevolent" seems like irrelevant hair-splitting at that point) to stop the risks (stop the technological progress in the wild!) and allow more time for development of FAI. And in the end, solving FAI still remains a necessary step, even if done by modified/improved people, even if given a safe environment to work in.

Replies from: CarlShulman
comment by CarlShulman · 2010-06-16T17:24:52.187Z · LW(p) · GW(p)

I don't see your point. It would take an unrealistic world dictatorship (whether it's "benevolent" seems like irrelevant hair-splitting at that point) to stop the risks (stop the technological progress in the wild!) and allow more time for development of FAI.

You were talking about hundred-year time scales. That's time enough for neuroscience-based lie detectors to advance a lot, whole brain emulation to take off, democratization in authoritarian countries, continued expansion of EU-like arrangements, and many other things to occur.

And in the end, solving FAI still remains a necessary step, even if done by modified/improved people, even if given a safe environment to work in.

But from our perspective, if we can get the benevolent non-AI (but perhaps WBE) singleton, it can do the FAI work at leisure and we don't need to. So the relative marginal impacts of our working on, say, FAI theory or institutional arrangements for WBE need to be weighed against one another.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-16T17:59:05.583Z · LW(p) · GW(p)

You were talking about hundred-year time scales. That's time enough for neuroscience-based lie detectors to advance a lot, whole brain emulation to take off, democratization in authoritarian countries, continued expansion of EU-like arrangements, and many other things to occur.

It's also time enough for any of a huge number of other outcomes. It's not outright impossible, but pretty improbable, that the world will go down this exact road. And don't underestimate how crazy people are.

But from our perspective, if we can get the benevolent non-AI (but perhaps WBE) singleton, it can do the FAI work at leisure and we don't need to. So the relative marginal impacts of our working on, say, FAI theory or institutional arrangements for WBE need to be weighed against one another.

After the change of mind about the value of drifted human preference, I agree that WBE/intelligence enhancement is a viable road. Here are my arguments about the impact of these paths at this point.

WBE is still at least decades away, probably more than a hundred years if you take the planning fallacy into account, and depends on the development of global technological efforts that are not easily influenced. The value of any "institutional arrangements", and the viability of arguing for them given the remoteness (hence irrelevance at present) and implausibility (to most people) of WBE, also seem doubtful at present. This in my mind makes the marginal value of any present effort related to WBE relatively small. It will go up sharply as WBE tech gets closer.

I suspect that FAI theory, once understood, will still be simple enough (if any general theory is possible), and can be developed by vanilla humans (on an unknown timescale, probably decades to hundreds of years, though at some point WBEs overtake the timescale estimates). By the time WBE becomes viable, the risk situation will already be very explosive, so if we can get a good understanding earlier, we could possibly avoid that risky period entirely. Also, having a viable technical Friendliness programme might give academic recognition to the problem (that these risks are as unavoidable as laws of physics, and not just something to talk with your friends about, like politics or football), which might spread awareness of AI risks at an otherwise unachievable level, helping with institutional changes that promote measures against wild AI and other existential risks. On the other hand, I wouldn't underestimate human craziness on this point either - technical recognition of the problem may still live side by side with global indifference.

comment by Craig_Morgan · 2010-06-15T03:35:31.050Z · LW(p) · GW(p)

I have not been convinced but am open toward the idea that a paperclip maximizer is the overwhelmingly likely outcome if we create a superhuman AI. At present, my thinking is that if some care is taken in the creation of a superhuman AI, more likely than a paperclip maximizer is an AI which partially shares human values; that is, the dichotomy "paperclip maximizer vs. Friendly AI" seems like a false dichotomy - I imagine that the sort of AI that people would actually build would be somewhere in the middle. Any recommended reading on this point appreciated.

I believed similarly until I read Steve Omohundro's The Basic AI Drives. It convinced me that a paperclip maximizer is the overwhelmingly likely outcome of creating an AGI.

Replies from: CarlShulman, multifoliaterose
comment by CarlShulman · 2010-06-16T16:24:26.839Z · LW(p) · GW(p)

That paper makes a convincing case that the 'generic' AI (some distribution of AI motivations weighted by our likelihood of developing them) will most prefer outcomes that rank low in our preference ordering, i.e. the free energy and atoms needed to support life as we know it or would want it will get reallocated to something else. That means that an AI given arbitrary power (e.g. because of a very hard takeoff, or easy bargaining among AIs but not humans, or other reasons) would be lethal. However, the situation seems different and more sensitive to initial conditions when we consider AIs with limited power that must trade off chances of conquest with a risk of failure and retaliation. I'm working on a write up of those issues.

comment by multifoliaterose · 2010-06-15T06:35:54.560Z · LW(p) · GW(p)

Thanks Craig, I'll check it out!

comment by Benquo · 2010-06-15T01:04:15.716Z · LW(p) · GW(p)

"But I believe it's certain to eventually happen, absent the end of civilization before that."

And I will live 1000 years, provided I don't die first.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-15T01:26:46.990Z · LW(p) · GW(p)

But I believe it's certain to eventually happen, absent the end of civilization before that.

And I will live 1000 years, provided I don't die first.

(As opposed to gradual progress, of course. I could also make a case that your analogy faces an unexpected distinction, as in what happens if you get overrun by a Friendly intelligence explosion and persons don't prove to be a valuable pattern; death doesn't adequately describe that transition either, since value doesn't get lost.)

comment by multifoliaterose · 2010-06-15T00:08:21.659Z · LW(p) · GW(p)

Keep in mind that for any organization or goal, the people you hear the most about it are the people who think that it is important.

---this is a good point, thanks.

comment by Vladimir_Nesov · 2010-06-14T21:42:32.753Z · LW(p) · GW(p)

#: I feel like an elephant in the room is the question of whether the reason that those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe so is because you're a part of these groups.

Even if the most ardent Less Wrong and SIAI supporters are mostly right about their beliefs, # is almost certainly at least occasionally present, and I think that the community would benefit from a higher level of vigilance concerning the possibility of #.

# refers to a pattern of incorrect (intuitive) reasoning. This pattern is potentially dangerous specifically because it leads to incorrect beliefs. But if you are saying that there is no significant distortion in beliefs (in particular about the importance of Less Wrong or SIAI's missions*), doesn't this imply that the role of this potential bias is therefore unimportant? Either # isn't important, because it doesn't significantly distort beliefs, or it does significantly distort beliefs and is therefore important.


* Although I should note that I don't remember there being a visible position about the importance of Less Wrong.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-06-15T00:36:56.508Z · LW(p) · GW(p)

Either # isn't important, because it doesn't significantly distort beliefs, or it does significantly distort beliefs and is therefore important.

There's no single point at which distortion of beliefs becomes sufficiently large to register as "significant" - it's a gradualist thing.

Although I should note that I don't remember there being a visible position about the importance of Less Wrong.

Probably I've unfairly conflated Less Wrong and SIAI. But at this post Kevin says "We try to take existential risk seriously around these parts. Each marginal new user that reads anything on Less Wrong has a real chance of being the one that tips us from existential Loss to existential Win." This seemed to me to carry the connotation of ascribing extremely high significance to Less Wrong and I (quite possibly incorrectly) interpreted the fact that nobody questioned the statement or asked for clarification as an indication that the rest of the community is in agreement with the idea that Less Wrong is extremely significant. I will respond to the post asking Kevin to clarify what he was getting at.

Replies from: JoshuaZ, Vladimir_Nesov
comment by JoshuaZ · 2010-06-15T01:30:16.371Z · LW(p) · GW(p)

"We try to take existential risk seriously around these parts. Each marginal new user that reads anything on Less Wrong has a real chance of being the one that tips us from existential Loss to existential Win." This seemed to me to carry the connotation of ascribing extremely high significance to Less Wrong and I (quite possibly incorrectly) interpreted the fact that nobody questioned the statement or asked for clarification as an indication that the rest of the community is in agreement with the idea that Less Wrong is extremely significant.

Would you respond differently if someone else talked about every single person who becomes an amateur astronomer and searches for dangerous asteroids? There are lots of potential existential threats. Unfriendly or rogue AIs are certainly one of them. Nuclear war is another. And I think a lot of people would agree that most humans don't pay nearly enough attention to existential threats. So one aspect of improving rational thinking should be a net reduction in existential threats of all types, not just those associated with AI. Kevin's statement thus isn't intrinsically connected to SIAI at all (although even granting that, I'd be inclined to argue that Kevin's statement is possibly a tad hyperbolic).

Replies from: multifoliaterose, multifoliaterose
comment by multifoliaterose · 2010-06-15T01:52:15.038Z · LW(p) · GW(p)

Would you respond differently if someone else talked about every single person who becomes an amateur astronomer and searches for dangerous asteroids?

The parallel is a good one. I would think it sort of crankish if somebody went around trying to get people to engage in amateur astronomy and search for dangerous asteroids on the grounds that any new amateur astronomer may be the one to save us from being killed by a dangerous asteroid. Just because an issue is potentially important doesn't mean that one should attempt to interest as many people as possible in it. There's an issue of opportunity cost.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-15T01:55:05.518Z · LW(p) · GW(p)

Sure there's an opportunity cost, but how large is that opportunity cost? What if someone has good data that suggests that the current number of asteroid seekers is orders of magnitude below the optimum?

comment by multifoliaterose · 2010-06-15T02:11:12.591Z · LW(p) · GW(p)

improving rational thinking should be a net reduction in existential threats of all types

Two points:

(1) It's not clear that improving rational thinking matters much. The factors limiting human ability to reduce existential risk seem to me to have more to do with politics, marketing and culture than with rationality proper. Devoting oneself to refining rationality may come at the cost of one's ability to engage in politics and marketing and to influence culture. I guess what I'm saying is that rationalists should win, and consciously aspiring toward rationality may interfere with one's ability to win.

(2) It's not clear how much it's possible to improve rational thinking. It may be that beyond a certain point, attempts to improve rational thinking are self-defeating (e.g. combating one bias may cause another bias).

Replies from: JoshuaZ, blogospheroid, NancyLebovitz, Vladimir_Nesov
comment by JoshuaZ · 2010-06-15T02:34:39.064Z · LW(p) · GW(p)

Part of influencing culture should include the spreading of rationality. This is actually related to why I think that the rationality movement has more in common with organized skepticism than is generally acknowledged. Consider what would happen if the general public had enough epistemic rationality to recognize that homeopathy was complete nonsense. In the United States alone, people spend around three billion dollars a year on homeopathy (source). If that went away, and only 5% of that ended up getting spent on things that actually increase general utility, that means around $150 million are now going into useful things. And that's only a tiny example. The US spends about 30 to 40 billion dollars a year on alternative medicine, much of which is also a complete waste. We're not talking here about a Hansonian approach where much medicine is only of marginal use or only helps the very sick who are going to die soon. We're talking about "medicine" that does zero. And many of the people using those alternatives will use them instead of medicine that would improve their lives. Improving the general population's rationality will be a net win for everyone. And if some tiny set of those freed resources goes to dealing with existential risk? Even better.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-06-15T07:46:27.526Z · LW(p) · GW(p)

Part of influencing culture should include the spreading of rationality. This is actually related to why I think that the rationality movement has more in common with organized skepticism than is generally acknowledged. Consider what would happen if the general public had enough epistemic rationality to recognize that homeopathy was complete nonsense.

Okay, but now the rationality that you're talking about is "ordinary rationality" rather than "extreme rationality", and the general public rather than the Less Wrong community. What is the Less Wrong community doing to spread ordinary rationality within the general public?

The US spends about 30 to 40 billion dollars a year on alternative medicine much of which is also a complete waste [...] We're talking about "medicine" that does zero.

Are you sure that the placebo effects are never sufficiently useful to warrant the cost?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-15T13:08:17.292Z · LW(p) · GW(p)

Okay, but now the rationality that you're talking about is "ordinary rationality" rather than "extreme rationality", and the general public rather than the Less Wrong community. What is the Less Wrong community doing to spread ordinary rationality within the general public?

A lot of the aspects of "extreme rationality" are aspects of rationality in general (understanding the scientific method and the nature of evidence, trying to make experiments to test things, being aware of serious cognitive biases, etc.). Also, I suspect (and this may not be accurate) that a lot of the ideas of extreme rationality are ones which LWers will simply spread in casual conversation, not necessarily out of any deliberate attempt to spread them, but because they are really neat. For example, the representativeness heuristic is an amazing form of cognitive bias. Similarly, the 2-4-6 game is independently fun to play with people and helps them learn better.
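(In case anyone wants to try the 2-4-6 game on a friend at a keyboard, here's a throwaway sketch. The hidden rule below is the classic "any strictly ascending triple", which is just the textbook choice - swap in your own.)

```python
def fits_rule(triple):
    a, b, c = triple
    return a < b < c  # the hidden rule: any strictly ascending triple

def play():
    print("Type three numbers per line; I'll say whether they fit my rule. Blank line quits.")
    while True:
        line = input("> ").strip()
        if not line:
            break
        nums = [int(x) for x in line.split()]
        if len(nums) != 3:
            print("please enter exactly three numbers")
            continue
        print("fits the rule" if fits_rule(nums) else "does not fit the rule")

if __name__ == "__main__":
    play()
```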

Are you sure that the placebo effects are never sufficiently useful to warrant the cost?

I was careful to say that much, not all. Placebos can help. And some of it involves treatments that will eventually turn out to be helpful when they get studied. There are entire subindustries that aren't just useless but downright harmful (chelation therapy for autism would be an example). And large parts of the alternative medicine world involve claims that are emotionally damaging to patients (such as claims that cancer is a result of negative beliefs). And when one isn't talking about something like homeopathy, which is just water, but rather remedies that involve chemically active substances, the chance that actual complications will occur from them grows.

Deliberately giving placebos is of questionable ethical value, but if we think it is ok we can do it with cheap sugar pills delivered at a pharmacy. Cheaper, safer and better controlled. And people won't be getting the sugar pills as an alternative to treatment when treatment is possible.

comment by blogospheroid · 2010-06-15T08:46:04.187Z · LW(p) · GW(p)

Anything we seek to do is a function of our capabilities and how important the activity is. Less Wrong is aimed mainly at increasing the capabilities of those who are interested in improving their rationality, and Eliezer has mentioned in one of the sequences that there are many other aspects of the art that have to be developed. Epistemic rationality is one, luminosity as mentioned by Alicorn is another, and so on and so forth.

Who knows - in the future, we may get many rational offshoots of lesswrong: lessshy, lessprocrastinating, etc.

Now, getting back to my statement. Function of capabilities and Importance.

Importance - Existential risk is the most important problem that is not getting sufficient attention. Capability - SingInst is a group of powerless, poor and introverted geeks who are doing the best that they think they can to reduce existential risk. This may include things that improve their personal ability to affect the future positively. It may include charisma and marketing, also. For all the time that they have thought on the issue, the SingInst people consider raising the sanity waterline really important to the cause. Unless and until you have specific data that that avenue is not the best use of their time, it is a worthwhile cause to pursue.

Before reading the paragraph below, please answer this simple question - what is your marginal unit of time, after accounting for necessary leisure, being used for?

If your capability is great, then you can contribute much more than SIAI. All you need to see is whether, on the margin, your contribution is making a greater difference to the activity or not. Even SingInst cannot absorb too much money without losing focus. You, as a smart person, know that. So, stop contributing to SingInst when you think your marginal dollar gets better value when spent elsewhere.

It is not about whether you believe that SingInst is the best cause ever. Honestly assess and calculate where your marginal dollar can get better value. Are you better off being the millionth voice in the climate change debate or the hundredth voice in the existential risk discussion?

EDIT : Edited the capability para for clarity

comment by NancyLebovitz · 2010-06-16T08:32:58.800Z · LW(p) · GW(p)

One other factor which influences how much goes into reducing existential risk is the general wealth level. Long term existential risk gets taken care of after people take care of shorter term risks, have what they consider to be enough fun, and spend quite a bit on maintaining status.

More to spare means being likely to have a longer time horizon.

comment by Vladimir_Nesov · 2010-06-15T02:29:53.483Z · LW(p) · GW(p)

It's not clear how much it's possible to improve rational thinking.

On the level of society, there seems to be tons of low-hanging fruit.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-06-15T07:47:37.803Z · LW(p) · GW(p)

What are some examples of this low-hanging fruit that you have in mind?

Replies from: magfrump
comment by magfrump · 2010-06-15T09:31:37.402Z · LW(p) · GW(p)

Fact-checking in political discussions (e.g. Senate politics), parenting and teaching methods, keeping a clean desk or being happy at work (see here), getting effective medical treatments rather than unproven treatments (sometimes this might require confronting your doctor), and maintaining budgets seem like decent examples (in no particular order, and of course these are at various heights but well within the reach of the general public).

Not sure if Vladimir would have the same types of things in mind.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-16T08:36:27.246Z · LW(p) · GW(p)

Factcheck, which keeps track of the accuracy of statements made by politicians and about politics, strikes me as a big recent improvement.

Replies from: magfrump
comment by magfrump · 2010-06-16T11:34:43.782Z · LW(p) · GW(p)

Just added that to my RSS feed.

comment by Vladimir_Nesov · 2010-06-15T00:48:50.828Z · LW(p) · GW(p)

There's no single point at which distortion of beliefs becomes sufficiently large to register as "significant" - it's a gradualist thing

But to avoid turning this into a fallacy of gray, you still need to take notice of the extent of the effect. Neither working on a bias nor ignoring it is a "default" - it necessarily depends on the perceived level of significance.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-06-15T00:57:42.312Z · LW(p) · GW(p)

I think I agree with you. My suggestion is that Less Wrong and SIAI are, at the margin, not paying enough attention to the bias (*).

comment by Mike Bishop (MichaelBishop) · 2010-06-14T16:22:40.374Z · LW(p) · GW(p)

I'd like to share introductory-level posts as widely as possible. There are only three with this tag. Can people nominate more of these posts, perhaps messaging the authors to encourage them to tag their posts "introduction"?

We should link to, stumble on, etc. accessible posts as much as possible. The sequences are great, but intimidating for many people.

Added: Are there more refined tags we'd like to use to indicate who the articles are appropriate for?

Replies from: RobinZ
comment by RobinZ · 2010-06-15T04:23:26.939Z · LW(p) · GW(p)

There are a few scattered posts in Eliezer's sequences which do not, I believe, have strong dependencies (I steal several from the About page, others from Kaj_Sotala's first and second lists) - I separate out the ones which seem like good introductory posts specifically, with a separate list of others I considered but do not think are specifically introductory.

Introductions:

Not introductions, but accessible and cool:

Replies from: SilasBarta, blogospheroid
comment by SilasBarta · 2010-06-15T13:05:13.909Z · LW(p) · GW(p)

As usual, I'll have to recommend Truly Part of You as an excellent introductory post, given the very little background required, and the high insight per unit length.

comment by blogospheroid · 2010-06-15T05:27:09.744Z · LW(p) · GW(p)

Thanks for this list.

comment by Nick_Tarleton · 2010-06-17T06:07:39.742Z · LW(p) · GW(p)

Ladies and gentlemen, the human brain: acetaminophen reduces the pain of social rejection.

comment by khafra · 2010-06-14T12:37:58.406Z · LW(p) · GW(p)

Wikipedia says the term "Synthetic Intelligence" is a synonym for GAI. I'd like to propose a different use: as a name for the superclass encompassing things like prediction markets. This usage occurred to me while considering 4chan as a weakly superintelligent optimization process with a single goal; something along the lines of "producing novelty;" something it certainly does with a paperclippy single-mindedness we wouldn't expect out of a human.

It may be that there's little useful to be gained by considering prediction markets and chans as part of the same category, or that I'm unable to find all the prior art in this area because I'm using the wrong search terms--but it does seem somewhat larger and more practical than gestalt intelligence.

Replies from: timtyler, NancyLebovitz
comment by timtyler · 2010-06-15T20:48:18.289Z · LW(p) · GW(p)

That is usually called "collective intelligence":

http://en.wikipedia.org/wiki/Collective_intelligence

Calling it "synthetic Intelligence" would be bad, IMO.

Replies from: khafra
comment by khafra · 2010-06-15T22:05:43.302Z · LW(p) · GW(p)

It appears the "wrong search terms" hypothesis was the correct one. Curses.
Thanks for correcting me.

comment by NancyLebovitz · 2010-06-15T13:41:21.900Z · LW(p) · GW(p)

Could you expand on what would be included and excluded from Synthetic Intelligence?

Would a free market count?

Replies from: khafra
comment by khafra · 2010-06-15T14:11:30.886Z · LW(p) · GW(p)

Good question. I didn't mean to take ownership of the term, but I'd consider the "invisible hand" part to be the synthetic intelligence; and the rest of the market's activities to be other synthetic appendages and organs.

comment by Vladimir_Nesov · 2010-06-18T16:39:59.370Z · LW(p) · GW(p)

I've noticed a surprising conclusion about the moral value of the outcomes (1) existential disaster that terminates civilization, leaving no rational singleton behind ("Doom"), (2) Unfriendly AI ("UFAI") and (3) FAI. It now seems that although the most important factor in optimizing the value of the world (according to your personal formal preference) is increasing the probability of FAI (no surprise here), all else equal UFAI is much preferable to Doom. That is, if you have an option of trading Doom for UFAI, while forsaking only negligible probability of FAI, you should take it.

The main argument (known as Rolf Nelson's AI deterrence) can be modeled by counterfactual mugging: an UFAI will give up a (small) portion of the control over its world to FAI's preference (pay the $100), if there is a (correspondingly small) probability that FAI could've been created, had the circumstances played out differently (which corresponds to the coin landing differently in counterfactual mugging), in exchange for the FAI (counterfactually) giving up a portion of control to the UFAI (reward from Omega).

As a result, having an UFAI in the world is better than having no AI (at any point in the future), because this UFAI can work as a counterfactual trading partner to a FAI that could've existed under other circumstances, which would make the FAI stronger (improve the value of the possible worlds). Of course, the negative effect of decreasing the probability of FAI is much stronger than the positive effect of increasing the probability of UFAI to the same extent, which means that if the choice is purely between UFAI and FAI, the balance is conclusively in FAI's favor. That there are FAIs in the possible worlds also shows that the Doom outcome is not completely devoid of moral value.
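As a toy illustration of the comparison (every number here is a made-up assumption, not a claim about actual probabilities or the size of any counterfactual trade):

```python
# Toy expected-value sketch of Doom vs. UFAI vs. FAI under Nelson-style
# counterfactual trade. All numbers below are illustrative assumptions.
v_fai = 1.0      # value (by our preference) of a world controlled by FAI
p_cf  = 0.01     # counterfactual probability that FAI could have been built
share = p_cf     # portion of control a trading UFAI cedes to FAI's preference,
                 # taken here to be roughly proportional to that probability

value_doom = 0.0            # ignoring, for simplicity, value contributed by
                            # FAIs in other possible worlds
value_ufai = share * v_fai  # the rest of the universe goes to paperclips
value_fai  = v_fai

print(value_doom, value_ufai, value_fai)  # 0.0 0.01 1.0
# UFAI edges out Doom by a small margin while FAI dominates both, which is
# the "trade Doom for UFAI only at negligible cost to P(FAI)" conclusion.
```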

More arguments and a related discussion here.

Replies from: Will_Newsome, XiXiDu
comment by Will_Newsome · 2011-07-07T19:53:55.538Z · LW(p) · GW(p)

It can mostly be ignored, but uFAI affects physically-nearby aliens who might have developed a half-Friendly AI otherwise. (But if they could have, then they have counterfactual leverage in trading with your uFAI.) No reason to suspect that those aliens had a much better shot than we did at creating FAI, though. Creating uFAI might also benefit the aliens for other reasons... that I won't go into, so instead I will just say that it is easy to miss important factors when thinking about these things. Anyway, if the nanobots are swarming the Earth, then launching uFAI does indeed seem very reasonable for many reasons.

comment by XiXiDu · 2011-07-07T19:32:52.147Z · LW(p) · GW(p)

That is, if you have an option of trading Doom for UFAI, while forsaking only negligible probability of FAI, you should take it.

Fascinating! Do you still agree with what you wrote there? Are you still researching these issues, and do you plan on writing a progress report or an open problems post? Would you be willing to write a survey paper on decision theoretic issues related to acausal trade?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-07-07T20:03:38.492Z · LW(p) · GW(p)

My best guess about what's preferable to what is still this way, but I'm significantly less certain of its truth (there are analogies that make the answer come out differently, and the level of rigor in the above comment is not much better than these analogies). In any case, I don't see how we can actually use these considerations. (I'm working in a direction that should ideally make questions like this clearer in the future.)

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-08T12:13:14.412Z · LW(p) · GW(p)

In any case, I don't see how we can actually use these considerations.

If you know how to build a uFAI (or "probably somewhat reflective on its goal system but nowhere near provably Friendly" AI), build one and put it in an encrypted glass case. Ideally you would work out the AGI theory in your head, determine how long it would take to code the AGI after adjusting for planning fallacy, then be ready to start coding if doom is predictably going to occur. If doom isn't predictable then the safety tradeoffs are larger. This can easily go wrong, obviously.

comment by Peter_Lambert-Cole · 2010-06-18T00:12:26.949Z · LW(p) · GW(p)

I have an idea that I would like to float. It's a rough metaphor that I'm applying from my mathematical background.

Map and Territory is a good way to describe the difference between beliefs and truth. But I wonder if we are too concerned with the One True Map as opposed to an atlas of pretty good maps. You might think that this is a silly distinction, but there are a few reasons why it may not be.

First, different maps in the atlas may disagree with one another. For instance, we might have a series of maps that each very accurately describe a small area but become more and more distorted the farther out we go. Each ancient city-state might have accurate maps of the surrounding farms for tax purposes but wildly guess what lies beyond a mountain range or desert. A map might also accurately describe the territory at one scale but simplify much smaller scales. The yellow pixel in a map of the US is actually an entire town, with roads and buildings and rivers and topography, not perfectly flat fertile farmland.

Or take another example. Suppose you have a virtual reality machine, one with a portable helmet with a screen and speakers, in a large warehouse, so that you can walk around this giant floor as if you were walking around this virtual world. Now, suppose two people are inserted into this virtual world, but at different places, so that when they meet in the virtual world, their bodies are actually a hundred yards apart in the warehouse, and if their bodies bump into each other in the warehouse, they think they are a hundred yards apart in the virtual world.

Thus, when we as rationalists are evaluating our maps and those of others, an argument by contradiction does not always work. That two maps disagree does not invalidate the maps. Instead, it should cause us to see where our maps are reliable and where they are not, where they overlap with each other or agree and are interchangeable, and where only one will do. Even more controversially, we should examine maps that are demonstrably wrong in some places to see whether and where they are good maps. Moreover, it might be more useful to add an entirely new map to our atlas instead of trying to improve the resolution on one we already have or moving the lines around ever so slightly as we bring it asymptotically closer to truth.

My lesson for the rationality dojo would thus be: be comfortable that your atlas is not consistent. Learn how to use each map well and how they fit together. Recognize when others have good maps and figure out how to incorporate those maps into your atlas, even if they might seem inconsistent with what you already have.

As you may have noticed, this idea comes from differential geometry, where you use a collection ("atlas") of overlapping charts/local homeomorphisms to R^n ("maps") as a suitable structure for discussing manifolds.

Replies from: Perplexed, Douglas_Knight, NancyLebovitz
comment by Perplexed · 2010-07-27T23:21:58.783Z · LW(p) · GW(p)

I tend to agree that we frequently would do better to make do with an atlas of charts rather than seeking the One True Map. But I'm not sure I like the differential geometry metaphor. It is not the location on the globe which makes the use of one chart more fruitful than another. It is the question of scale, or as a computer nerd might express it, how zoomed in you are. And I would prefer to speak of different models rather than different maps.

For example, at one level of zoom, we see the universe as non-deterministic due to QM. Zoom out a bit and you have billiard-ball atoms in a Newtonian billiard room. Zoom out a bit more and find non-deterministic fluctuations. Out a bit more and you have deterministic chemical thermodynamics (unless you are dealing with a Brusselator or some such).

But I would go farther than this. I would also claim that we shouldn't imagine that these maps (as you zoom in) necessarily become better and better maps of the One True Territory. We should remain open to the idea that "It's maps (or models, or turtles) all the way down".

comment by Douglas_Knight · 2010-06-18T21:53:50.178Z · LW(p) · GW(p)

But I wonder if we are too concerned with the One True Map as opposed to an atlas of pretty good maps.

What's an example of people doing this?

Replies from: Peter_Lambert-Cole
comment by Peter_Lambert-Cole · 2010-06-20T17:49:40.030Z · LW(p) · GW(p)

I think one place to look for this phenomenon is when, in a debate, you seize upon someone's hidden assumptions. When this happens, it usually feels like a triumph: you have successfully uncovered an error in their thinking that invalidates a lot of what they have argued. And it is incredibly annoying to have one of your own hidden assumptions laid bare, because it is both embarrassing and means you have to redo a lot of your thinking.

But hidden assumptions aren't bad. You have to make some assumptions to think through a problem anyway. You can only reason from somewhere to somewhere else. It's a transitive operation. There has to be a starting point. Moreover, assumptions make thinking and computation easier. They decrease the complexity of the problem, which means you can figure out at least part of the problem. Assuming pi is 3.14 is good if you want an estimate of the volume of the Earth. But that is useless if you want to prove a theorem. So in the metaphor, maps are characterized by their assumptions/axioms.

When you come into contact with assumptions, you should make them as explicit as possible. But you should also be willing to provisionally accept others' assumptions and think through their implications. And it is often useful to let that sit alongside your own set of beliefs as an alternate map, something that can shed light on a situation when your beliefs are inadequate.

This might be silly, but I tend to think there is no Truth, just good axioms. And oftentimes fierce debates come down to incompatible axioms. In these situations, you are better off making explicit both sets of assumptions, accepting that they are incompatible and perhaps trying on the other side's assumptions to see how they fit.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-20T18:04:08.858Z · LW(p) · GW(p)

Mostly agree. It's really irritating and unproductive (and for me, all too frequent) when someone thinks they've got you nailed because they found a hidden assumption in your argument, but that assumption turns out to be completely uncontroversial, or irrelevant, or something your opponent relies on anyway.

Yes, people need to watch for the hidden assumptions they make, but they shouldn't point out the assumptions others make unless they can say why an assumption is unreasonable and how its weakening would hurt the argument it's being used for. "You're assuming X!" is not, by itself, a relevant counterargument.

comment by NancyLebovitz · 2010-06-18T14:47:41.731Z · LW(p) · GW(p)

You might be interested in How to Lie with Maps.

comment by nhamann · 2010-06-14T21:00:14.304Z · LW(p) · GW(p)

“There is no scientist shortage,” declares Harvard economics professor Richard Freeman, a pre-eminent authority on the scientific work force. Michael Teitelbaum of the Alfred P. Sloan Foundation, a leading demographer who is also a national authority on science training, cites the “profound irony” of crying shortage — as have many business leaders, including Microsoft founder Bill Gates — while scores of thousands of young Ph.D.s labor in the nation’s university labs as low-paid, temporary workers, ostensibly training for permanent faculty positions that will never exist.

The Real Science Gap

ETA: Here's a money quote from near the end of the article:

The main difference between postdocs and migrant agricultural laborers, he jokes, is that the Ph.D.s don’t pick fruit.

(Ouch)

Replies from: Houshalter
comment by Houshalter · 2010-06-15T00:34:29.945Z · LW(p) · GW(p)

I'm not sure I see what the problem is. Capitalism works? The article makes it seem like this system is unsustainable or bound to collapse, but I'm not sure I see how the two fit together. I am particularly confused by this quote:

Obviously, the “pyramid paradigm can’t continue forever,” says Susan Gerbi, chair of molecular biology at Brown University and one of the relatively small number of scientists who have expressed serious concern about the situation. Like any Ponzi scheme, she fears, this one will collapse when it runs out of suckers — a stage that appears to be approaching. “We need to have solutions for some new steady-state model” that will limit the production of new scientists and offer them better career prospects, she adds.

First of all, how is it a Ponzi scheme that is bound to collapse? Also, limiting the number of scientists is not going to make the system better, except that maybe individuals will have less competition = more opportunities, which is not a benefit to the whole system, just to the individual.

EDIT: Fixed spelling.

Replies from: SilasBarta, nhamann
comment by SilasBarta · 2010-06-15T01:01:33.841Z · LW(p) · GW(p)

I'm not sure if it meets the Ponzi scheme model, but the problem is this: lots of students are going deeper into debt to get an education that has less and less positive impact on their earning power. So the labor force will be saturated with people having useless skills (given lack of demand, government-driven or otherwise, for people with a standard academic education) and being deep in undischargeable debt.

The inertia of the conventional wisdom ("you've gotta go to college!") is further making the new generation slow to adapt to the reality, not to mention another example of Goodhart's Law.

On top of that, to the extent that people do pick up on this, the sciences will continue to be starved of the people who can bring about advances -- this past generation they were lured away to produce deceptive financial instruments that hid dangerous risk, and which (governments claim) put the global financial system at the brink of collapse.

My take? The system of go-to-college/get-a-job needs to collapse and be replaced, for the most part, by apprenticeships (or "internships" as we fine gentry call them) at a younger age, which will give people significantly more financial security and enhance the economy's productivity. But this will be bad news for academics.

And as for the future of science? The system is broken. Peer review has become pal review, and most working scientists lack serious understanding of rationality and the ability to appropriately analyze their data or know what heavy-duty algorithmic techniques to bring in.

So the slack will have to be picked up by people "outside the system". Yes, they'll be starved for funds and rely on rich people and donations to non-profits, but they'll mostly make up for it by their ability to get much more insight out of much less data: knowing what data-mining techniques to use, spotting parallels across different fields, avoiding the biases that infect academia, and generally automating the kind of inference currently believed to require a human expert to perform.

In short: this, too, shall pass -- the only question is how long we'll have to suffer until the transition is complete.

Sorry, [/rant].

Replies from: fiddlemath, Vive-ut-Vivas, Mass_Driver, Will_Newsome, Douglas_Knight, Houshalter
comment by fiddlemath · 2010-06-15T03:51:51.117Z · LW(p) · GW(p)

I agree that college as an institution of learning is a waste for most folks - they will "never use this," most disregard the parts of a liberal arts education that they're force-fed, and neither they nor their jobs benefit. Maybe students gain something from networking with each other. But yes, Goodhart's Law applies. Employers appear to use a diploma as an indicator of diligence and intelligence. So long as that's true, students will fritter away four years of their lives and put themselves deep in debt to get a magic sheet of paper.

And as for the future of science? The system is broken. Peer review has become pal review, and most working scientists lack serious understanding of rationality and the ability to appropriately analyze their data or know what heavy-duty algorithmic techniques to bring in.

It's been broken forever, in basically the same way it is now. Most working scientists are trying to prove their idea, because negative results don't carry nearly so much prestige as positive results, and the practice of science is mostly about prestige. I'm sure I could find citations for peer review being "pal review" throughout its lifetime. (ooh. I'll try this in a moment.)

To the extent that science has ever worked, it's because the social process of science has worked - scientists are just open-minded enough to, as a whole, let strong evidence change their collective minds. I'm not really convinced that the social process of science has changed significantly over the last decades, and I can imagine these assertions being rooted in generalized nostalgia. Do you have reasons to assert this?

(Are you just blowing off steam about this? I can totally support that, because argh argh argh the publication treadmill in my field headdesk headdesk expletives. But if you have evidence, I'd love to hear it.)

Replies from: SilasBarta
comment by SilasBarta · 2010-06-15T04:20:34.938Z · LW(p) · GW(p)

I mainly have evidence for the absolute level, not necessarily for the trend (of science getting worse). For the trend, I could point to Goodhart phenomena like the reliance on a publications-per-unit-time metric, which is being gamed and gets worse as time goes on.

I also think that in this context, the absolute level is evidence of the trend, when you consider that the number of scientists has increased; if the quality of science in general has not increased with more people, it's getting worse per unit person.

For the absolute level, I've noticed scattered pieces of the puzzle that, against my previous strong presumption, support my suspicions. I'm too sleepy to go into detail right now, but briefly:

  • There's no way that all the different problems being attacked by researchers can be really, fundamentally different: the function space is too small for a unique one to exist for each problem, so most should be reducible to a mathematical formalism that can be passed to mathematicians who can tell if it's solvable.

  • There is evidence that such connections are not being made. The example I use frequently is ecologists and the method of adjacency matrix eigenvectors. That method has been around since the 1960s and forms the basis of Google's PageRank, allowing it to identify crucial sites. Ecologists didn't apply it to the problem of identifying critical ecosystem species until a few years ago. (A toy sketch of the method appears just after this list.)

  • I've gone into grad school myself and found that the existing explanations of concepts are a scattered mess: it's almost like they don't want you to understand papers or break into the advanced topics that are the subject of research. Whenever I understand such a topic, I find myself able to explain it in much less time than the experts in the field took to explain it to me. This creates a fog over research, allowing big mistakes to persist for years with no one noticing, because too few eyeballs are on them. (This explanation barrier is the topic of my ever-upcoming article "Explain yourself!")
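
To make the ecology bullet concrete, here is a minimal sketch of the method (in Python, with an invented toy adjacency matrix, so only the idea is real): the dominant eigenvector of a graph's adjacency matrix scores how "central" each node is, whether the nodes are web pages or species in a food web.

```python
import numpy as np

# Hypothetical toy food web: entry [i][j] = 1 means species i interacts with species j.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

# Power iteration: repeatedly apply A and renormalize. The vector converges to
# the dominant eigenvector, whose entries rank each node's importance -- the
# same idea behind PageRank-style centrality scores.
v = np.ones(A.shape[0])
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)

print(v)  # larger entries = more "central" species in this toy web
```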

As an example of what a mess it is (and at risk of provoking emotions that aren't relevant to my point), consider climate science. This is an issue where they have to convince LOTS of people, most of whom aren't as smart. You would think that in documenting the evidence supporting their case, scientists would establish a solid walkthrough: a runnable, editable model with every assumption traceable to its source and all inputs traceable to the appropriate databases.

Yet when climate scientists were in the hot seat last fall and wanted to reaffirm the strength of their case, they had no such site to point anyone to. RealClimate.org made a post saying basically, "Um, anyone who's got the links to the public data, it'd be nice if you could post them here..."

To clarify, I'm NOT trying to raise the issue about AGW being a scam, etc. I'm saying that no matter how good the science is, here we have a case where it's of the utmost importance to explain the research to the masses, so you would expect it to have the most thorough documentation and traceability. Yet here, at the top of the hill, no one bothered to trace out the case from start to finish, fully connecting this domain to the rest of collective scientific knowledge.

Replies from: fiddlemath, NancyLebovitz
comment by fiddlemath · 2010-06-15T05:54:27.839Z · LW(p) · GW(p)

If the quality of science in general has not increased with more people, it's getting worse per unit person.

Er, I'd just expect to see more science being done. I know of no one studying overall mechanisms of science-as-it-is-realized (little-s "science"), and thereby seriously influencing it. Further, that's not something current science is likely to worry about, unless someone can somehow point to irrefutable evidence that science is underperforming.

All of the points you list are real issues; I watch them myself, to constant frustration. I think they have common cause in the incentive structure of science. The following account has been hinted at many times over around Less Wrong, but spelling it out may make it clear how your points follow:

Researchers focus on churning out papers that can actually get accepted at some highly-rated journal or conference, because the quantity of such papers is seen as the main guarantor of being hired as faculty, making tenure, and getting research grants. This quantity has a strong effect on scientists' individual futures and their reputations. For all but the most well-established or idealistic scientists, this pressure overrides the drive to promote general understanding, increase the world's useful knowledge, or satisfy curiosity[*].

This pressure means that scientists seek the next publication and structure their investigations to yield multiple papers, rather than telling a single coherent story from what might be several least publishable units. Thus, you should expect little synthesis - a least publishable unit is very nearly the author's research minus the current state of knowledge in a specialized subfield. Thus, as you say, existing explanations are a scattered mess.

Since these explanations are scattered and confusing, it's brutally difficult to understand the cutting edge of any particular subfield. Following publication pressure, papers are engineered to garner acceptance from peer reviewers. Those reviewers are part of the same specialized subfield as the author. Thus, if the author fails to use a widely-known concept from outside his subfield to solve a problem in his paper, the reviewers aren't likely to catch it, because it's hard to learn new ideas from other subfields. Thus, the author has no real motivation to investigate subfields outside of his own expertise, and we have a stable situation. Thus, your first and second points.

All this suggests to me that, if we want to make science better, we need to somehow twiddle its incentive structure. But changing longstanding organizational and social trends is, er, outside of my subfield of study.

[*] This demands substantiation, but I have no studies to point to. It's common knowledge, perhaps, and it's true in the research environments I've found myself in. Does it ring true for everyone else reading this, with appropriate experience of academic research?

Replies from: Douglas_Knight, Morendil
comment by Douglas_Knight · 2010-07-14T08:20:38.469Z · LW(p) · GW(p)

It's been broken forever, in basically the same way it is now...

the quantity of such papers is seen as the main guarantor of being hired as faculty, making tenure, and getting research grants.

No, these are recent developments (though the stuff from your first post may be old). For the first 300 years, scientists were amateurs without grants and no one cared about quantity. For evidence of recent changes, look at the age of NIH PIs.

comment by Morendil · 2010-06-15T06:22:55.523Z · LW(p) · GW(p)

At the conclusion of the interview, Pierre deduces one general lesson: "You can't be inhibited, you must free yourself of the psychological obstacle that consists in being tied to something." Oh no, our friend Pierre is not inhibited; look how for the past twenty years he has jumped from subject to subject, from boss to boss, from country to country, bringing into action all the differences of potential, seizing polypeptides, selling them off as soon as they begin declining, betting on Monod and then dropping him as soon as he gets bogged down; and here he is, ready to pack his bags again for the West Coast, the title of professor, and a new laboratory. What thing is he accumulating? Nothing in particular, except perhaps the absence of inhibition, a sort of free energy prepared to invest itself anywhere. Yes, this is certainly he, the Don Juan of knowledge. One will speak of "intellectual curiosity," a "thirst for truth," but the absence of inhibition in fact designates something else: a capital of elements without use value, which can assume any value at all, provided the cycle closes back on itself while always expanding further. Pierre Kernowicz capitalizes the jokers of knowledge.

-- Bruno Latour, Portrait of a Biologist as Wild Capitalist

(ETA: see also.)

comment by NancyLebovitz · 2010-06-16T16:46:51.249Z · LW(p) · GW(p)

I think you've got an example of generalizing from one example, and perhaps the habit of thinking of oneself as typical-- you're unusually good at finding clear explanations, and you think that other people could be about as good if they'd just try a little.

I suspect they'd have to try a lot.

As far as I can tell, most people find it very hard to imagine what it's like to not understand knowledge they've assimilated, which is another example of the same mistake.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-16T17:08:38.689Z · LW(p) · GW(p)

Well, I appreciate the compliment, but keep in mind you haven't personally put me to the test on my claim to have that skill at explaining.

As far as I can tell, most people find it very hard to imagine what it's like to not understand knowledge they've assimilated, which is another example of the same mistake.

But I don't understand why this would be hard -- people make quite a big deal about how "I was a little boy/girl like you once too". Certainly a physics professor would generally remember what it was like to take their first physics class, what confused them, what way of thinking made it clearer, etc.

(I remember one of my professors, later my grad school advisor (bless his heart), was a master at explaining and achieving Level 2 understanding on topics. He was always able to connect it back to related topics, and if students had trouble understanding something, he was always able to identify what the knowledge deficit was and jump in with an explanation of the background info needed.)

To the extent that your assessment is accurate, this problem people have can still be corrected by relatively simple changes in practice. For example, instead of just learning the next class up and moving on, people could make a habit of checking for how it connects to the previous class's knowledge, to related topics, to introductory class knowledge, and to layperson knowledge. It wouldn't help current people, as you have to make it an ongoing effort, but it doesn't sound like it's hard.

Also, is it really that hard for people to ask themselves, "Assume I know nothing. What would I have to be told to be able to do this?"

Replies from: sketerpot
comment by sketerpot · 2010-07-13T21:43:04.759Z · LW(p) · GW(p)

Certainly a physics professor would generally remember what it was like to take their first physics class, what confused them, what way of thinking made it clearer, etc.

I remember that it was all pretty straightforward and intuitive. This was not a typical experience, and it also means that I don't really know what average students have trouble with in basic Newtonian physics. Physics professors tend to be people who were unusually good at introductory physics classes. (Meanwhile, I can't seem to find an explanation of standard social skills that doesn't assume a lot of intuitions that I find non-obvious. Fucking small talk, how does it work?!)

Most professors weren't typical students, so why would their recollections be a good guide to what problems typical students have when learning a subject for the first time?

Replies from: SilasBarta, Will_Newsome
comment by SilasBarta · 2010-07-13T22:22:26.833Z · LW(p) · GW(p)

I remember intro physics being straightforward and intuitive, and I had no trouble explaining it to others. In fact, the first day we had a substitute teacher who just told us to read the first chapter, which was just the basics like scientific notation, algebraic manipulation, unit conversion, etc. I ended up just teaching the others when something didn't make sense.

If there was any pattern to it, it was that I was always able to "drop back a level" to any grounding concept. "Wait, do you understand why dividing a variable by itself cancels it out?" "Do you understand what multiplying by a power of 10 does?"

That is, I could trace back to the beginning of what they found confusing. I don't think I was special in having this ability -- it's just something people don't bother to do, or don't themselves possess the understanding to do, whether it's teaching physics or social skills (for which I have the same complaint as you).

Someone who really understands sociality (i.e., level 2, as mentioned above) can fall back to the questions of why people engage in small talk, and what kind of mentality you should have when doing so. But most people either don't bother to do this, or have only an automatic (level 1) understanding.

Do you ever have trouble explaining physics to others? Do you find any commonality to the barriers you encounter?

Replies from: steven0461, JoshuaZ
comment by steven0461 · 2010-07-13T23:51:00.718Z · LW(p) · GW(p)

In mathy fields, how much of it is caused by insufficiently deep understanding and how much of it is caused by taboos against explicitly discussing intuitive ways of thinking that can't be defended as hard results? The common view seems to be that textbooks/lectures are for showing the formal structure of whatever it is you're learning, and to build intuitions you have to spend a lot of time doing exercises. But I've always thought such effort could be partly avoided if instead of playing dignified Zen master, textbooks were full of low-status sentences like "a prisoner's dilemma means two parties both have the opportunity to help the other at a cost that's smaller than the benefit, so it's basically the same thing as trade, where both parties give each other stuff that they value less than the other, so you should imagine trade as people lobbing balls of stuff at each other that grow in the lobbing, and if you zoom out it's like little fountains of stuff coming from nowhere". (ETA: I mean in addition to the math itself, of course.) It's possible that I'm overrating how much such intuitions can be shared between people, maybe because of learning-style issues.
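
If it helps, here's the sort of throwaway concreteness I have in mind, as a toy sketch of the "help at a cost smaller than the benefit" framing (the payoff numbers are invented):

```python
# Toy Prisoner's Dilemma: cooperating means paying a cost of 1
# to hand the other player a benefit of 3.
COST, BENEFIT = 1, 3

def payoff(my_move, their_move):
    """My payoff given both moves ('C' = cooperate, 'D' = defect)."""
    gain = BENEFIT if their_move == 'C' else 0
    loss = COST if my_move == 'C' else 0
    return gain - loss

for mine in 'CD':
    for theirs in 'CD':
        print(mine, theirs, payoff(mine, theirs))
# Mutual cooperation pays 2 each -- both sides end up better off, like a
# completed trade -- even though defecting against a cooperator pays 3.
```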

Replies from: sketerpot, None, Douglas_Knight, SilasBarta
comment by sketerpot · 2010-07-14T01:15:59.225Z · LW(p) · GW(p)

I think you've got something really important here. If you want to get someone to an intuitive understanding of something, then why not go with explanations that are closer to that intuitive understanding? I usually understand such explanations a lot better than more dignified explanations, and I've seen that a lot of other people are the same way.

I remember when a classmate of mine was having trouble understanding mutexes, semaphores, monitors, and a few other low-level concurrency primitives. He had been to the lectures, read the textbook, looked it up online, and was still baffled. I described to him a restroom where people use a pot full of magic rocks to decide who can use the toilets, so they don't accidentally pee on each other. The various concurrency primitives were all explained as funny rituals for getting the magic toilet permission rocks. E.g. in one scheme people waiting for a rock stand in line; in another scheme they stand in a throng with their eyes closed, periodically flinging themselves at the pot of rocks to see if any are free. Upon hearing this, my friend's confusion was dispelled. (For my part, I didn't understand this stuff until I had translated it into vague images not too far removed from the stupid bathroom story I told my friend. The textbook explanations are just bad sometimes.)
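
In case the rock story is too compressed, here's roughly what it translates to, as a minimal Python sketch (the names and the sleep are made up; the Semaphore is the pot of rocks, two rocks for two toilets):

```python
import threading
import time

rocks = threading.Semaphore(2)  # the pot holds two magic rocks (two toilets)

def use_restroom(person):
    with rocks:                   # take a rock; block here until one is free
        print(f"{person} is using a toilet")
        time.sleep(0.1)
    print(f"{person} put the rock back")  # releasing lets the next person in

threads = [threading.Thread(target=use_restroom, args=(f"person-{i}",))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```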

Or for another example, I had terrible trouble with basic probability theory until I learned to imagine sets of things that could happen, and visualize them as these hazy blob things. Once that happened, it was as if my eyes had finally opened, and everything became clear. I was kind of pissed off that all the classes I'd been in that tried to teach probability focused exclusively on the equations, so I'd had to figure out the intuitive stuff without any help.
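
That picture translates almost directly into code. A minimal sketch with a toy example (two dice): an event is just a subset of the blob of everything that could happen, and its probability is the size of that subset relative to the whole.

```python
from fractions import Fraction
from itertools import product

# The "hazy blob" made explicit: every outcome of rolling two dice.
sample_space = set(product(range(1, 7), repeat=2))

# An event is a subset of the blob.
sum_is_seven = {outcome for outcome in sample_space if sum(outcome) == 7}

# Probability = (size of the event blob) / (size of the whole blob).
print(Fraction(len(sum_is_seven), len(sample_space)))  # 1/6
```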

As a side-note, this is one reason why I'm optimistic about online education like Salman Khan's videos. It's not that they're inherently better, obviously, but they have the potential for much more competition. I can imagine students in The Future comparing lecturers, with the underlying assumption that you can trivially switch at any time. "Oh, you're trying to learn about the ancient Roman sumptuary laws from Danrich Parrol's lectures? Those are pretty mind-numbing; try Nile Etland's explanations instead. She presents the different points of view by arguing vehemently with herself in several funny accents. It's surprisingly clear, even if she does sound like a total nutcase."

[Side-note to the side-note: I think more things should be explained as arguments. And the natural way to do this is for one person to hold a crazy multiple-personality argument-monologue. This also works for explaining digital hardware design as a bunch of components having a conversation. "You there! I have sent you a 32-bit integer! Tell me when you're done with it!" Works like a charm.]

Man, the future of education will be silly. And more educational!

Replies from: NancyLebovitz, Alicorn, Nisan
comment by NancyLebovitz · 2010-07-14T06:12:37.593Z · LW(p) · GW(p)

Man, the future of education will be silly. And more educational!

It wouldn't surprise me if a big part of the problem now is the assumption that there's virtue to enduring boredom, and a proof of status if you impose it.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-07-14T06:33:39.598Z · LW(p) · GW(p)

It wouldn't surprise me if a big part of the problem now is the assumption that there's virtue to enduring boredom

If by boredom you mean dominance and inequality, then Robin Hanson has been riffing on this theme lately. The main idea is that employers need employees who will just accept what they're told to do instead of rebelling and trying to form a new tribe in a nearby section of savannah. School trains some of the rebelliousness out of students. See e.g., this, this, and this.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-14T11:50:07.975Z · LW(p) · GW(p)

No, by boredom I mean lack of appropriate levels of stimulus, and possibly lack of significant work.

Dominance and inequality can play out in a number of ways, including chaos (imagine a badly run business with employees who would like things to be more coherent), physical abuse, and deprivation. Imposed boredom is only one possibility.

Causing people to have, or feel they have, no alternatives is how abusive authorities get away with it.

comment by Alicorn · 2010-07-14T01:19:26.086Z · LW(p) · GW(p)

She presents the different points of view by arguing vehemently with herself in several funny accents.

That sounds like such fun!

Replies from: sketerpot, Emile
comment by sketerpot · 2010-07-14T01:31:03.044Z · LW(p) · GW(p)

It's every bit as fun as you imagine. And it works great.

comment by Emile · 2010-07-14T19:29:26.107Z · LW(p) · GW(p)

Heh, this reminds me of this discussion of Plain Talk on a wiki I participated in years ago. I must have drawn those little characters, what, ten years ago? Not quite (more like six or seven), but it feels like ages ago.

comment by Nisan · 2010-07-16T16:29:55.685Z · LW(p) · GW(p)

I agree with this. It is also true that people's intuitions differ, and people respond differently to different kinds of informal explanation. steven0461's explanation of Prisoner's Dilemma would be good for someone accustomed to thinking visually, for example. For this reason, your vision of individual explanations competing (or cooperating) is important.

comment by [deleted] · 2010-07-14T00:27:44.128Z · LW(p) · GW(p)

One of the things I've always disliked about mathematical culture is this taboo against making allowances for human weakness on the part of students (of any age.) For example, the reluctance to use "plain English" aids to intuition, or pictures, or motivations. Sometimes I almost think this is a signaling issue, where mathematicians want to display that they don't need such crutches. But it seems to get in the way of effective communication.

You can go too far in the other direction -- I've found that it can also be hard to learn when there's too little rigorous formalism. (Most recently I've had that experience with electrical engineering and philosophy.) There ought to be a happy medium somewhere.

Replies from: JoshuaZ, SilasBarta
comment by JoshuaZ · 2010-07-14T00:45:35.200Z · LW(p) · GW(p)

Sometimes I almost think this is a signaling issue, where mathematicians want to display that they don't need such crutches.

This isn't really a signaling issue so much as a response to the fact that mathematicians have had centuries of experience where apparent theorems turned out to be unproven or not even true, and the failures were due to too much reliance on intuition. Classic examples include the roughly decade-long period in the 19th century when people thought the Four Color Theorem had been proven. A lot of these issues also arose in calculus before it was put on a rigorous footing in the 1850s.

There may be a signaling aspect, but it is likely a small one. It's more likely that mathematicians simply err on the side of rigor.

ETA: Another data point that suggests this isn't about signaling: I've been to a fair number of talks in which people in the audience get annoyed because they think there's too much formalism hiding some basic idea, in which case they'll sometimes ask questions of the form "what's the idea behind the proof?" or "what's the moral of this result?"

Replies from: None, SilasBarta
comment by [deleted] · 2010-07-14T02:28:28.483Z · LW(p) · GW(p)

Just to be clear: I'm not against rigor. Rigor is there for a good reason.

But I do think that there's a bias in math against making it easy to learn. It's weird.

  1. Math departments, anecdotally in nearly all the colleges I've heard of, are terrible at administrative conveniences. Math will be the only department that doesn't put lecture notes online, won't announce the correct textbook for the course, won't produce a syllabus, won't announce the date of the final exam. Physics, computer science, etc., don't do this to their students. This has nothing to do with rigor; I think it springs from the assumption that such details are trivial.

  2. I've noticed a sort of aesthetic bias (at least in pure math) against pictures and "selling points." I recall one talk where the speaker emphasized how transformative his result could be for physics -- it was a very dramatic lecture. The gossip afterwards was all about how arrogant and salesman-like the speaker was. That cultural instinct -- to disdain flash and drama -- probably helps with rigorous habits of thought, but it ruins our chances to draw in young people and laymen. And I think it can even interfere with comprehension (people can easily miss the understated.)

comment by SilasBarta · 2010-07-14T00:53:42.885Z · LW(p) · GW(p)

Over 99% of students learning math aren't going to be expected to contribute to cutting-edge proofs, so I don't regard this as a good reason not to use "plain English" methods.

In any case, a plain English understanding can allow you to bootstrap to a rigorous understanding, so more hardcore mathematicians should be able to overcome any problem introduced this way.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-14T00:56:55.222Z · LW(p) · GW(p)

I agree that this is likely often suboptimal when teaching math. The argument I was presenting was that this approach was not due to signaling. I'm not arguing that this is at all optimal.

comment by SilasBarta · 2010-07-14T15:37:43.931Z · LW(p) · GW(p)

I don't think this problem is limited to math: it's present in all cutting-edge or graduate school levels of technical subjects. Basically, if you make your work easily accessible to a lay audience[1], it's regarded as lower status or less significant. ("Hey, if it sounds so simple, it must not have been very hard to get!")

And ironically enough, this thread sprung from me complaining about exactly that (see esp. the third bullet point).

[1] And contrary to what turf-defenders like to claim, this isn't that hard. Worst case, you can just add a brief pointer to an introduction to the topic and terminology. To borrow from some open source guy, "Given enough artificial barriers to understanding, all bugs are deep."

comment by Douglas_Knight · 2010-07-14T08:02:56.037Z · LW(p) · GW(p)

The common view seems to be that textbooks/lectures are for showing the formal structure of whatever it is you're learning

I thought that writing was for that and lectures were supposed to be informal, the kind of thing you were asking for. And, I thought everyone agreed that lectures work much better.

Replies from: steven0461
comment by steven0461 · 2010-07-14T20:46:42.404Z · LW(p) · GW(p)

I thought that writing was for that and lectures were supposed to be informal, the kind of thing you were asking for.

I think you're right, but only to a limited (varying) degree. I also think it's not just a matter of being informal, but a matter of just stating explicitly a lot of insights that you're "supposed" to get only through hard mental labor.

comment by SilasBarta · 2010-07-14T00:56:29.189Z · LW(p) · GW(p)

I don't have an answer, but I can attest to not mimicking a textbook when I try to explain high school math to someone. Rather, I first find out where the gap is between their understanding and where I want them to be.

Of course, textbooks don't have the luxury of probing each student's mind.

comment by JoshuaZ · 2010-07-13T23:57:24.276Z · LW(p) · GW(p)

That is, I could trace back to the beginning of what they found confusing. I don't think I was special in having this ability -- it's just something people don't bother to do, or don't themselves possess the understanding to do, whether it's teaching physics or social skills (for which I have the same complaint as you).

This demonstrates a highly developed theory of mind. In order to do this one needs to both have a good command of material and a good understanding of what people are likely to understand or not understand. This is often very difficult.

Replies from: SilasBarta, SilasBarta, SilasBarta
comment by SilasBarta · 2010-07-14T15:29:30.265Z · LW(p) · GW(p)

I thought I should add a pointer to one of the replies, because it's another anecdote from when the poster noticed the difference (in what "understand" means) in an encounter with another person who had a lower threshold.

Maybe there is a wide variance in "understanding criteria" or "curiosity shut-off point" which has real importance for how people learn.

comment by SilasBarta · 2010-07-14T00:26:44.130Z · LW(p) · GW(p)

Maybe so, but then this would be the only area where I have a highly-developed theory of mind. If you ask the people who have seen me post for a while, the consensus is that this is where I'm most lacking. They don't typically put it in terms of a theory of mind, but one complaint about me can be expressed as, "he doesn't adequately anticipate how others will react to what he does" -- which amounts to saying that I lack a good theory of mind (a common characteristic of autistics).

But that gives me an idea: maybe what's unique about me is what I count as a genuine understanding. I don't regard myself as understanding the material until I have "plugged it in" to the rest of my knowledge, so I've made a habit of ensuring that what I know in one area is well-connected to other areas, especially its grounding concepts. I can't, in other words, compartmentalize subjects as easily.

(That would also explain what I hated about literature and, to a lesser extent, history -- I didn't see what they were building off of.)

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-14T00:32:45.751Z · LW(p) · GW(p)

Yes, I had that thought also but wasn't sure how to put it. Frankly, I'm a bit surprised that you had that good a theory of mind for physics issues. Your hypothesis about plugging in seems plausible.

comment by SilasBarta · 2010-07-16T16:00:20.303Z · LW(p) · GW(p)

Also, it looks like EY already wrote an article about the phenomenon I described: when people learn something in school, they normally don't bother to ground it like I've described, and so don't know what a true (i.e., level 2) understanding looks like.

(Sorry to keep replying to this comment!)

Replies from: Blueberry
comment by Blueberry · 2010-07-16T16:41:19.628Z · LW(p) · GW(p)

Don't let that stop you from writing about related topics.

comment by Will_Newsome · 2010-09-08T17:49:51.025Z · LW(p) · GW(p)

Fucking small talk, how does it work?!

For me, a small but significant hack suggested by Anna Salamon was to try to act (and later, to actually be) cheerful and engaged instead of wittily laconic and 'intelligent'. That said, it's rare that I remember to even try. Picking up habits is difficult.

comment by Vive-ut-Vivas · 2010-06-15T19:40:50.400Z · LW(p) · GW(p)

The inertia of the conventional wisdom ("you've gotta go to college!") is further making the new generation slow to adapt to the reality, not to mention another example of Goodhart's Law.

I wish I could vote this comment up a hundred times. This insane push toward college without much thought about the quality of the education is extremely harmful. People are more focused on slips of paper that signal status than on the actual ability to do things. Not only that, but people are spending tens of thousands of dollars for degrees that are, let's be honest, mostly worthless. Liberal arts and humanities majors are told that their skill set lies in the ability to "think critically"; this is a necessary but not sufficient skill for success in the modern world. (Aside from the fact that their ability to actually "think critically" is dubious in the first place.) In reality, the entire point is networking, but there has to be a more efficient way of doing this that isn't crippling an entire generation with personal debt.

Replies from: wedrifid, SilasBarta, realitygrill
comment by wedrifid · 2010-06-18T11:53:05.633Z · LW(p) · GW(p)

I wish I could vote this comment up a hundred times.

I would settle for just 10 times if it were in the form of a post. ;)

Liberal arts and humanities majors are told that their skill set lies in the ability to "think critically";

Evidently the ability to think critically is instilled after the propaganda is spread.

comment by SilasBarta · 2010-06-17T19:09:00.939Z · LW(p) · GW(p)

Liberal arts and humanities majors are told that their skill set lies in the ability to "think critically"; this is a necessary but not sufficient skill for success in the modern world.

Wow, now that is what I would call fraud. It's something the students should be able to detect right off the bat, given the lack of liberal arts success stories they can point to. It's like they just think, "I like history, so I'll study that", with no consideration of how they'll earn a living in four years (or seven). That can't last.

In reality, the entire point is networking, but there has to be a more efficient way of doing this that isn't crippling an entire generation with personal debt.

And I wish I could vote that up a hundred times. I wouldn't mind as much if colleges were more open about "hey, the whole point of being here is networking", but I guess that's something no one can talk about in polite company.

comment by realitygrill · 2010-06-17T04:37:01.469Z · LW(p) · GW(p)

Tell my parents this one.

On the other hand, is 'success' an existentialist concept (in that you have to define it yourself)? I would think it'd be near impossible to come to a consensus as to what is necessary and sufficient for success.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-06-17T05:13:52.683Z · LW(p) · GW(p)

Sure, it's vague. The point is that, for any plausible, conventional definition of success you might be able to come up with, a typical liberal arts degree is definitely insufficient and probably unnecessary to meet that definition's criteria.

comment by Mass_Driver · 2010-06-17T05:15:31.504Z · LW(p) · GW(p)

In short: this, too, shall pass -- the only question is how long we'll have to suffer until the transition is complete.

Or, it may not pass, and the American educational system may continue to gather detritus until it collapses. Anybody familiar enough with the Chinese Ming dynasty to rationally assess the similarities? I'm not.

Replies from: SilasBarta, wedrifid
comment by SilasBarta · 2010-06-17T19:10:13.205Z · LW(p) · GW(p)

Or, it may not pass, and the American educational system may continue to gather detritus until it collapses.

Not to be pedantic, but that would be passing. I made no pretense about shortness in the time this will take to pass.

comment by wedrifid · 2010-06-18T11:51:30.298Z · LW(p) · GW(p)

Or, it may not pass, and the American educational system may continue to gather detritus until it collapses.

Sounds like we need some heroic ninjas to fill the university water supply with concentrated hallucinogens and blast it with a giant portable microwave.

comment by Will_Newsome · 2010-06-18T08:12:30.913Z · LW(p) · GW(p)

So what is the realistic alternative for those who have no other marketable skills, such as myself? (I specifically don't have a high school diploma, though I suppose it would be trivially easy to nab a GED.)

Replies from: SilasBarta
comment by SilasBarta · 2010-06-18T14:11:59.896Z · LW(p) · GW(p)

Until the adjustment happens, there won't be a common path, because most people are still stuck in the current inefficient mentality, so you don't get scaling effects. Whatever internships friends and family can offer would probably be the best alternative.

In the future, there will probably be some standardized test you'll have to take at age 16-18 to show that you're reasonably competent and your education wasn't a sham. (The SAT tests could probably be used as they stand for this purpose.) Then, most people will go straight to unpaid or low-paid internships in the appropriate field, during which they may have to take classes to get a better theoretical background in their field (like college, but more relevant).

After a relatively short time, they will either prove their mettle and have contacts, experience, and opportunities, or realize it was a bad idea, cut their losses, and try something else. It sounds like a big downside, until you compare it to college today.

comment by Douglas_Knight · 2010-06-15T04:29:44.454Z · LW(p) · GW(p)

On top of that, to the extent that people do pick up on this, the sciences will continue to be starved of the people who can bring about advances -- this past generation they were lured away to produce deceptive financial instruments that hid dangerous risk, and which (governments claim) put the global financial system at the brink of collapse.

If you're not happy with what they did in finance, why do you think they would have been useful in science?

Replies from: SilasBarta
comment by SilasBarta · 2010-06-15T04:38:19.170Z · LW(p) · GW(p)

They're smart. They're capable of figuring out a creative solution. And the financial instruments they designed were creative, for what they were intended, which was to hide risk and allow banks to offload mortgages to someone else. Someone benefited from the creativity, just not the average worker or consumer.

Replies from: NancyLebovitz, Douglas_Knight
comment by NancyLebovitz · 2010-06-16T16:54:10.218Z · LW(p) · GW(p)

I recommend The Quants-- those guys weren't just hiding risk in mortgages, and it's plausible that they were so hooked on competition and smugness that there was a lot of risk they were failing to see. They weren't just hiding it on purpose.

comment by Douglas_Knight · 2010-06-15T04:48:23.095Z · LW(p) · GW(p)

Yes, capable of figuring out a creative solution to maximizing their goals when faced with the incentive structure of science. You think that the people who remain fail to do science when faced with these incentives, so why do you expect these others to be more altruistic?

comment by Houshalter · 2010-06-15T02:31:23.617Z · LW(p) · GW(p)

I'm not sure if it meets the Ponzi scheme model, but the problem is this: lots of students are going deeper into debt to get an education that has less and less positive impact on their earning power. So the labor force will be saturated with people having useless skills (given lack of demand, government-driven or otherwise, for people with a standard academic education) and being deep in undischargeable debt.

The inertia of the conventional wisdom ("you've gotta go to college!") is further making the new generation slow to adapt to the reality, not to mention another example of Goodhart's Law.

I suppose that's true, but there is such a thing as equilibrium, where the factors balance each other out. I do fear that it might be too high, but again, when the price becomes unreasonable, people look for other options that are cheaper.

And as for the future of science? The system is broken. Peer review has become pal review, and most working scientists lack serious understanding of rationality and the ability to appropriately analyze their data or know what heavy-duty algorithmic techniques to bring in.

That's kind of sad, actually, but no amount of government regulation can fix it. Unfortunately there is little real incentive for actual science in a purely capitalist society, though we've been doing well so far.

comment by nhamann · 2010-06-15T05:00:57.072Z · LW(p) · GW(p)

I'm not sure I see what the problem is.

From the article:

Paid out of the grant, these highly skilled employees might earn $40,000 a year for 60 or more hours a week in the lab. A lucky few will eventually land faculty posts, but even most of those won’t get traditional permanent spots with the potential of tenure protection. The majority of today’s new faculty hires are “soft money” jobs with titles like “research assistant professor” and an employment term lasting only as long as the specific grant that supports it.

I'm not sure how typical this experience is, but assuming it is as common as the article suggests: you don't see a problem with the fact that huge numbers of highly trained people (~4 years for a bachelor's, 5-7 for a Ph.D.) are getting paid very little to work in conditions with almost no long-term job security? You see that as being perfectly fine, and comment that "capitalism works?" I'm not sure what to say. Such job prospects are decidedly unappealing (some might say intolerable), and I think it's reasonable to suggest that such conditions will result in a substantial decrease in the number of smart, dedicated young people interested in becoming scientists. This, to put it bluntly, is a fucking shame.

Replies from: Houshalter
comment by Houshalter · 2010-06-15T13:27:08.289Z · LW(p) · GW(p)

I'm not sure how typical this experience is, but assuming it is as common as the article suggests: you don't see a problem with the fact that huge numbers of highly trained people (~4 years for a bachelor's, 5-7 for a Ph.D.) are getting paid very little to work in conditions with almost no long-term job security? You see that as being perfectly fine, and comment that "capitalism works?" I'm not sure what to say. Such job prospects are decidedly unappealing (some might say intolerable), and I think it's reasonable to suggest that such conditions will result in a substantial decrease in the number of smart, dedicated young people interested in becoming scientists. This, to put it bluntly, is a fucking shame.

Maybe that was a little harsh. But the question is, why are "huge numbers of highly trained people (~4 years for a bachelor's, 5-7 for a Ph.D.) [...] getting paid very little to work in conditions with almost no long-term job security?" The article suggests it's because we have a surplus. But if those people weren't so highly trained, would they then get those better jobs? Probably not; people don't discriminate against you because you're "highly trained".

Replies from: cupholder
comment by cupholder · 2010-06-15T18:32:32.032Z · LW(p) · GW(p)

But if those people weren't so highly trained, would they then get those better jobs?

They likely wouldn't, but I doubt that's the point. I think the point is that if they weren't so highly trained, their employment status would be more in line with their qualifications, as opposed to the current situation where Ph.Ds are doing jobs that could be done by less-credentialed people.

Probably not, people don't discriminate against you because you're "highly trained".

That sounds unlikely to me; I've heard the word 'overqualified' used to refer to that kind of discrimination.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-16T16:57:21.255Z · LW(p) · GW(p)

And also, the money which is spent on useless "education" could be spent on something more useful, or at least more fun. People with mediocre incomes at least wouldn't lose a lot of flexibility from indebtedness.

comment by Yoreth · 2010-06-14T08:10:24.694Z · LW(p) · GW(p)

A prima facie case against the likelihood of a major-impact intelligence-explosion singularity:

Firstly, the majoritarian argument. If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI? If nothing else, you might think that someone could exaggerate the threat of the singularity and use it to scare people into giving them government funds. But we don’t even see that happening.

Second, a theoretical issue with self-improving AI: can a mind understand itself? If you watch a simple linear Rube Goldberg machine in action, then you can more or less understand the connection between the low- and the high-level behavior. You see all the components, and your mind contains a representation of those components and of how they interact. You see your hand, and understand how it is made of fingers. But anything more complex than an adder circuit quickly becomes impossible to understand in the same way. Sure, you might in principle be able to isolate a small component and figure out how it works, but your mind simply doesn’t have the capacity to understand the whole thing. Moreover, in order to improve the machine, you need to store a lot of information outside your own mind (in blueprints, simulations, etc.) and rely on others who understand how the other parts work.

You can probably see where this is going. The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind. Therefore, while the AI can understand in principle that it is made up of transistors etc., its self-representation necessarily has some blank areas. I posit that the AI cannot purposefully improve itself because this would require it to understand in a deep, level-spanning way how it itself works. Of course, it could just add complexity and hope that it works, but that's just evolution, not intelligence explosion.

So: do you know any counterarguments or articles that address either of these points?

Replies from: IsaacLewis, cousin_it, DanArmak, Morendil, Roko, timtyler, NancyLebovitz, xamdam, mindviews, timtyler, NancyLebovitz, CarlShulman
comment by IsaacLewis · 2010-06-14T17:55:40.405Z · LW(p) · GW(p)

Two counters to the majoritarian argument:

First, it is being mentioned in the mainstream - there was a New York Times article about it recently.

Secondly, I can think of another monumental, civilisation-filtering event that took a long time to enter mainstream thought - nuclear war. I've been reading Bertrand Russell's autobiography recently, and am up to the point where he begins campaigning against the possibility of nuclear destruction. In 1948 he made a speech to the House of Lords (UK's upper chamber), explaining that more and more nations would attempt to acquire nuclear weapons, until mutual annihilation seemed certain. His fellow Lords agreed with this, but believed the matter to be a problem for their grandchildren.

Looking back even further, for decades after the concept of a nuclear bomb was first formulated, the possibility of nuclear war was only seriously discussed amongst physicists.

I think your second point is stronger. However, I don't think a single AI rewiring itself is the only way it can go FOOM. Assume the AI is as intelligent as a human; put it on faster hardware (or let it design its own faster hardware) and you've got something that's like a human brain, but faster. Let it replicate itself, and you've got the equivalent of a team of humans, but which have the advantages of shared memory and instantaneous communication.

Now, if humans can design an AI, surely a team of 1,000,000 human equivalents running 1000x faster can design an improved AI?

comment by cousin_it · 2010-06-14T11:49:07.268Z · LW(p) · GW(p)

The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind.

If your argument is based on information capacity alone, it can be knocked down pretty easily. An AI can understand some small part of its design and improve that, then pick another part and improve that, etc. For example, if the AI is a computer program, it has a sure-fire way of improving itself without completely understanding its own design: build faster processors. Alternatively you could imagine a population of a million identical AIs working together on the problem of improving their common design. After all, humans can build aircraft carriers that are too complex to be understood by any single human. Actually I think today's humanity is pretty close to understanding the human mind well enough to improve it.

Replies from: Houshalter, whpearson
comment by Houshalter · 2010-06-14T21:11:24.766Z · LW(p) · GW(p)

I don't think the number of AIs actually matters. If multiple AIs can do a job, then a single AI should be able to simulate them as though it were multiple AIs (or, better yet, just figure out how to do it on its own) and then do the job as well. Another thing to note is that if the AI makes a copy of its program and puts it in external storage, that doesn't add any extra complexity to itself. It can then run its optimization process on the copy, although I do agree that it would be more practical if it only improved parts of itself at a time.

Replies from: cousin_it
comment by cousin_it · 2010-06-14T21:20:58.887Z · LW(p) · GW(p)

You're right, I used the million AIs as an intuition pump, imitating Eliezer's That Alien Message.

comment by whpearson · 2010-06-14T13:10:35.652Z · LW(p) · GW(p)

It depends upon what designing a mind is like. How much minds intrinsically rely on interactions between parts and how far those interactions reach.

In the brain, most of the interesting stuff, such as science and the like, is done by culturally created components. The evidence for this is the stark variety of worldviews that exist in the world and have existed through history (with mostly the same genes), and the ways those views shape how the people who hold them interact with the world.

Making a powerful AI, in this view, is not just a problem of making a system with lots of hardware or the right algorithms from birth; it is a problem of making a system with the right ideas. And ideas interact heavily in the brain. They can squash or encourage each other. If one idea goes, others that rely on it might go as well.

I suspect that we might be close to making the human mind able to store more ideas or able to process ideas more quickly. How much that will lead to the creation of better ideas, I don't know. That is, will we get a feedback loop? We might just get better at storing gossip and social information.

comment by DanArmak · 2010-06-14T14:54:39.179Z · LW(p) · GW(p)

The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind. Therefore, while the AI can understand in principle that it is made up of transistors etc., its self-representation necessary has some blank areas.

This is strictly true if you're talking about the working memory that is part of a complete model of your "mind". But a mind can access an unbounded amount of externally stored data, where a complete self-representation can be stored.

A Turing Machine of size N can run on an unbounded-size tape. A von Neumann PC with limited main memory can access an unbounded-size disk.

Although we can only load a part of the data into working memory at a time, we can use virtual memory to run any algorithm written in terms of the data as a whole. If we had an AI program, we could run it on today's PCs, and while we might run out of disk space, we couldn't run out of RAM.
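
A minimal sketch of the point (the file name is invented): a program whose working memory is tiny can still sweep over a data set far larger than RAM by paging it in from disk a piece at a time.

```python
CHUNK = 1024 * 1024  # process one megabyte at a time, regardless of total size

def total_bytes(path):
    """Sum every byte of a file that may be far larger than available RAM."""
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)  # only CHUNK bytes are in memory at once
            if not chunk:
                break
            total += sum(chunk)
    return total

# Hypothetical usage; "self_model.bin" is a made-up file name.
# print(total_bytes("self_model.bin"))
```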

comment by Morendil · 2010-06-14T08:51:37.697Z · LW(p) · GW(p)

I'd just forget the majoritarian argument altogether, it's a distraction.

The second question does seem important to me, I too am skeptical that an AI would "obviously" have the capacity to recursively self-improve.

The counter-argument is summarized here: whereas we humans are stuck with an implementation substrate which was never designed for understandability, an AI could be endowed with both a more manageable internal representation of its own capacities and a specifically designed capacity for self-modification.

It's possible - and I find it intuitively plausible - that there is some inherent general limit to a mind's capacity for self-knowledge, self-understanding and self-modification. But an intuition isn't an argument.

Replies from: AlanCrowe, Will_Newsome
comment by AlanCrowe · 2010-06-14T12:34:07.441Z · LW(p) · GW(p)

I see Yoreth's version of the majoritarian argument as ahistorical. The US Government did put a lot of money into AI research and became disillusioned. Daniel Crevier wrote a book AI: The tumultuous history of the search for artificial intelligence. It is a history book. It was published in 1993, 17 years ago.

There are two possible responses. One might argue that time has moved on, things are different now, and there are serious reasons to distinguish today's belief that AI is around the corner from yesterday's belief that AI is around the corner. Wrong then, right now, because...

Alternatively one might argue that scaling died at 90 nanometers, practical computer science is just turning out Java monkeys, the low hanging fruit has been picked, there is no road map, theoretical computer science is a tedious sub-field of pure mathematics, partial evaluation remains an esoteric backwater, theorem provers remain an esoteric backwater, the theorem proving community is building the wrong kind of theorem provers and will not rejuvenate research into partial evaluation,...

The lack of mainstream interest in explosive developments in AI is due to getting burned in the past. Noticing that the scars are not fading is very different from being unaware of AI.

Replies from: SilasBarta, rwallace
comment by SilasBarta · 2010-06-14T13:21:54.163Z · LW(p) · GW(p)

There are two possible responses. One might argue that time has moved on, things are different now, and there are serious reasons to distinguish today's belief that AI is around the corner from yesterday's belief that AI is around the corner. Wrong then, right now, because...

I'm reminded of a historical analogy from reading Artificial Addition. Think of it this way: a society that believes addition is the result of adherence to a specific process (or a process isomorphic thereto), and understands part of that process, is closer to creating "general artificial addition" than one that tries to achieve "GAA" by cleverly avoiding the need to discover this process.

We can judge our own distance to artificial general intelligence, then, by the extent to which we have identified constraints that intelligent processes must adhere to. And I think we've seen progress on this in terms of more refined understanding of e.g. how to apply Bayesian inference. For example, the work by Sebastian Thrun on how to seamlessly aggregate knowledge across sensors to create a coherent picture of the environment, which has produced tangible results (navigating the desert).
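
As a toy illustration of what "seamlessly aggregating knowledge across sensors" means in the Bayesian setting (the numbers are invented, and this is of course vastly simpler than Thrun's actual systems): two noisy Gaussian sensors measuring the same quantity get combined by precision-weighting, and the fused estimate is sharper than either reading alone.

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Combine two independent Gaussian measurements of the same quantity.

    With Gaussian noise, the posterior mean is a precision-weighted average
    and the posterior variance is smaller than either sensor's variance.
    """
    precision = 1.0 / var_a + 1.0 / var_b
    mean = (mean_a / var_a + mean_b / var_b) / precision
    return mean, 1.0 / precision

# Hypothetical readings: lidar says 10.2 m (variance 0.04),
# camera says 9.8 m (variance 0.25).
mean, var = fuse(10.2, 0.04, 9.8, 0.25)
print(mean, var)  # ~10.14 m with variance ~0.034 -- tighter than either sensor
```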

Replies from: whpearson
comment by whpearson · 2010-06-14T13:35:01.261Z · LW(p) · GW(p)

Can you point me to an overview of this understanding? I would like to apply it to the problem of detecting different types of data in a raw binary file.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T13:51:05.247Z · LW(p) · GW(p)

I don't know of a good one. You could try this, but it's light on the math. I'm looking through Thrun's papers to find a good one that gives a simple overview of the concepts, and through the CES documentation.

I was introduced to this advancement in EY's Selling nonapples article.

And I'm not sure how this helps for detecting file types. I mean, I understand generally how they're related, but not how it would help with the specifics of that problem.

Replies from: whpearson
comment by whpearson · 2010-06-14T15:03:47.262Z · LW(p) · GW(p)

Thanks, I'll have a look. I'm looking for general-purpose insights. Otherwise you could use the same sort of reasoning to argue that the technology behind Deep Blue was on the right track.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T15:52:42.307Z · LW(p) · GW(p)

True, the specific demonstration of Thrun's that I referred to was navigating a terrestrial desert environment, but that's a much more general problem than chess, and it had to deal with probabilistic data and uncertainty. The techniques detailed in Thrun's papers generalize easily beyond robotics.

Replies from: whpearson
comment by whpearson · 2010-06-14T16:22:04.279Z · LW(p) · GW(p)

I've had a look, and I don't see anything much that will make the techniques easily generalize to my problems (or any problem that has similar characteristics to mine, such as very large amounts of possibly relevant data). Oh, I am planning to use Bayesian techniques. But "easy" is not how I would characterize translating the problem.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T16:28:32.628Z · LW(p) · GW(p)

Now that you mention it, one of the reasons I'm trying to get acquainted with the methods Thrun uses is to see how much they rely on advance knowledge of exactly how the sensor works (i.e. its true likelihood function). Then, I want to see if it's possible to infer enough relevant information about the likelihood function (such as through unsupervised learning) so that I can design a program that doesn't have to be given this information about the sensors.

And that's starting to sound more similar to what you would want to do.

Replies from: whpearson
comment by whpearson · 2010-06-14T16:47:26.082Z · LW(p) · GW(p)

That'd be interesting. More posts on the real-world use of Bayesian models would be good for Less Wrong, I think.

But I'm not sure how relevant it is to my problem. I'm in the process of writing up my design deliberations, and you can judge better once you have read them.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-15T00:01:05.878Z · LW(p) · GW(p)

Looking forward to it!

The reason I say that our problems are related is that inferring the relevant properties of a sensor's likelihood function looks like a standard case of finding out how the probability distribution clusters. Your problem, that of identifying a file type from its binary bitstream, is doing something similar -- finding which file types correspond to which probability-distribution clusters.
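
Something vaguely along these lines, as a minimal sketch (the feature, the reference blobs, and the scoring are all stand-ins I made up, not your actual design): model each known type by its byte-frequency distribution and score an unknown file against each one.

```python
import math
from collections import Counter

def byte_distribution(data, smoothing=1.0):
    """Smoothed frequency of each byte value 0..255 in a blob of data."""
    counts = Counter(data)
    total = len(data) + smoothing * 256
    return [(counts.get(b, 0) + smoothing) / total for b in range(256)]

def log_likelihood(data, dist):
    """Log-probability of the data under an independent-bytes model."""
    return sum(math.log(dist[b]) for b in data)

def guess_type(unknown, references):
    """Pick the reference type whose byte distribution best explains the data."""
    return max(references, key=lambda t: log_likelihood(unknown, references[t]))

# Hypothetical usage with made-up training blobs:
references = {
    "text": byte_distribution(b"the quick brown fox jumps over the lazy dog " * 50),
    "binary": byte_distribution(bytes(range(256)) * 10),
}
print(guess_type(b"hello world, this is plain text", references))  # likely "text"
```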

comment by rwallace · 2010-06-14T14:43:47.277Z · LW(p) · GW(p)

I know of partial evaluation in the context of optimization, but I hadn't previously heard of much connection between that and AI or theorem provers. What do you see as the connection?

Or, more concretely: what do you think would be the right kind of theorem provers?

Replies from: AlanCrowe, whpearson
comment by AlanCrowe · 2010-06-14T16:13:11.082Z · LW(p) · GW(p)

I think I made a mistake in mentioning partial evaluation. It distracts from my main point. The point I'm making a mess of is that Yoreth asks two questions:

If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI?

I read (mis-read?) the rhetoric here as containing assumptions that I disagree with. When I read/mis-read it I feel that I'm being slipped the idea that governments have never been interested in AI. I also pick up a whiff of "the mainstream doesn't know, we must alert them." But mainstream figures such as John McCarthy and Peter Norvig know and are refraining from sounding the alarm.

So partial evaluation is a distraction and I only made the mistake of mentioning it because it obsesses me. But it does! So I'll answer anyway ;-)

Why am I obsessed? My Is Lisp a Blub post suggests one direction for computer programming language research. Less speculatively, three important parts of computer science are compiling (ie hand compiling), writing compilers, and tools such as Yacc for compiling compilers. The three Futamura projections provide a way of looking at these three topics. I suspect it is the right way to look at them.

Lambda-the-ultimate had an interesting thread on the type-system feature-creep death-spiral. Look for the comment By Jacques Carette at Sun, 2005-10-30 14:10 linking to Futamura's papers. So there is the link to having a theorem proving inside a partial evaluator.

Now partial evaluating looks like it might really help with self-improving AI. The AI might look at its source, realise that the compiler that it is using to compile itself is weak because it is a Futamura projection based compiler with an underpowered theorem prover, prove some of the theorems itself, re-compile, and start running faster.

Well, maybe, but the overviews I've read of the classic text by Jones, Gomard, and Sestoft make me think that the state of the art only offers linear speed-ups. If you write a bubble sort and use partial evaluation to compile it, it stays order n squared. The theorem prover will never transform it to an n log n algorithm.
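
To make the flavour of that concrete, here is a toy specializer (not a real partial evaluator, just a hand-rolled illustration of the idea): the exponent is treated as static data and the residual program has the loop unrolled away. The win is a constant factor; nothing in this machinery will ever turn a quadratic sort into an n log n one.

    # Toy "partial evaluation": specialize power(x, n) on a statically known n,
    # emitting a residual program with the loop unrolled.
    def power(x, n):
        result = 1
        for _ in range(n):
            result *= x
        return result

    def specialize_power(n):
        """Generate and compile the residual program for a fixed exponent n."""
        body = " * ".join(["x"] * n) if n > 0 else "1"
        src = "def power_{0}(x):\n    return {1}\n".format(n, body)
        namespace = {}
        exec(src, namespace)
        return namespace["power_{0}".format(n)]

    cube = specialize_power(3)          # residual program: return x * x * x
    assert cube(5) == power(5, 3) == 125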

I'm trying to learn ACL2. It is a theorem prover and you can do things such as proving that quicksort and bubble sort agree. That is a nice result and you can imagine that fitting into a bigger picture. The partial evaluator wants to transform a bubble sort into something better, and the theorem prover can anoint the transformation as correct. I see two problems.

First, the state of the art is a long way from being automatic. You have to lead the theorem prover by the hand. It is really just a proof checker. Indeed the ACL2 book says

You are responsible for guiding it, usually by getting it to prove the necessary lemmas. Get used to thinking that it rarely proves anything substantial by itself.

It is a long way from proving (bubble sort = quick sort) on its own.

Second, that doesn't actually help. There is no sense of performance here. It only says that they agree, without saying which is faster. I can see a way to fix this. ACL2 can be used to prove that interpreters conform to their semantics. Perhaps it can be used to prove that an instrumented interpreter performs a calculation in fewer than n log n cycles. Thus lifting the proofs from proofs about programs to proofs about interpreters running programs would allow ACL2 to talk about performance.

This solution to problem two strikes me as infeasible. ACL2 cannot cope with the base level without hand holding, which I have not managed to learn to give. I see no prospect of lifting the proofs to include performance without adding unmanageable complications.

Could performance issues be built into a theorem prover, so that it natively knows that quicksort is faster than bubble sort, without having to pass its proofs through a layer of interpretation? I've no idea. I think this is far ahead of the current state of computer science. I think it is preliminary to, and much simpler than, any kind of self-improving artificial intelligence. But that is what I had in mind as the right kind of theorem prover.

There is a research area of static analysis and performance modelling. One of my Go playing buddies has just finished a PhD in it. I think that he hopes to use the techniques to tune up the performance of the TCP/IP stack. I think he is unaware of and uninterested in theorem provers. I see computer science breaking up into lots of little specialities, each of which takes half a life time to master. I cannot see the threads being pulled together until the human lifespan is 700 years instead of 70.

Replies from: rwallace
comment by rwallace · 2010-06-14T16:42:31.321Z · LW(p) · GW(p)

Ah, thanks, I see where you're coming from now. So ACL2 is pretty much state-of-the-art from your point of view, but as you point out, it needs too much handholding to be widely useful. I agree, and I'm hoping to build something that can perform fully automatic verification of nontrivial code (though I'm not focusing on code optimization).

You are right of course that proving quicksort is faster than bubble sort, is even considerably more difficult than proving it is equivalent.

But the good news is, there is no need! All we need to do to check which is faster, is throw some sample inputs at each and run tests. To be sure, that approach is fallible, but what of it? The optimized version only needs to be probably faster than the original. A formal guarantee is only needed for equivalence.
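
For illustration, a minimal sketch of that testing approach (the two sort implementations below are the obvious naive ones): check agreement on random inputs, then time one largish input to pick the probable winner.

    # Fallible-but-cheap check: agreement on random samples, plus a rough timing.
    import random, time

    def bubble_sort(xs):
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    def quicksort(xs):
        if len(xs) <= 1:
            return list(xs)
        pivot, rest = xs[0], xs[1:]
        return (quicksort([x for x in rest if x < pivot]) + [pivot] +
                quicksort([x for x in rest if x >= pivot]))

    def probably_equivalent(f, g, trials=200, size=50):
        return all(f(s) == g(s)
                   for s in (random.sample(range(1000), size) for _ in range(trials)))

    def faster(f, g, sample):
        t0 = time.perf_counter()
        f(sample)
        t1 = time.perf_counter()
        g(sample)
        t2 = time.perf_counter()
        return f if (t1 - t0) < (t2 - t1) else g

    sample = [random.randrange(10 ** 6) for _ in range(2000)]
    assert probably_equivalent(bubble_sort, quicksort)
    print(faster(bubble_sort, quicksort, sample).__name__)   # quicksort, almost surely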

Replies from: wnewman
comment by wnewman · 2010-06-16T14:14:10.668Z · LW(p) · GW(p)

"But the good news is, there is no need! All we need to do to check which is faster, is throw some sample inputs at each and run tests."

"no need"? Sadly, it's hard to use such simple methods as anything like a complete replacement for proofs. As an example which is simultaneously extreme and simple to state, naive quicksort has good expected asymptotic performance, but its (very unlikely) worst-case performance falls back to bubble sort. Thus, if you use quicksort naively (without, e.g., randomizing the input in some way) somewhere where an adversary has strong influence over the input seen by quicksort, you can create a vulnerability to a denial-of-service attack. This is easy to understand with proofs, not so easy either to detect or to quantify with random sampling. Also, the pathological input has low Kolmogorov complexity, so the universe might well happen give it to your system accidentally even in situations where your aren't faced by an actual malicious intelligent "adversary."

Also sadly, we don't seem to have very good standard technology for performance proofs. Some years ago I made a horrendous mistake in an algorithm preprint, and later came up with a revised algorithm. I also spent more than a full-time week studying and implementing a published class of algorithms and coming to the conclusion that I had wasted my time because the performance claimed in the publication is provably incorrect. Off and on since then I've put some time into looking at automated proof systems and the practicalities of proving asymptotic performance bounds. The original poster mentioned ACL2; I've looked mostly at HOL Light (for ordinary math proofs) and to a lesser extent Coq (for program/algorithm proofs). The state of the art for program/algorithm proofs doesn't seem terribly encouraging. Maybe someday it will be a routine master's thesis to, e.g., gloss Okasaki's Purely Functional Data Structures with corresponding performance proofs, but we don't seem to be quite there yet.

Replies from: JoshuaZ, rwallace
comment by JoshuaZ · 2010-06-17T02:48:31.675Z · LW(p) · GW(p)

Part of the problem with these is that there are limits to how much can be proven about correctness of programs. In particular, the general question of whether two programs will give the same output on all inputs is undecidable.

Proposition: There is no Turing machine which when given the description of two Turing machines accepts iff both the machines will agree on all inputs.

Proof sketch: Consider our hypothetical machine A that accepts descriptions iff they correspond to two Turing machines which agree on all inputs. We shall show how we can construct a machine H from A which would solve the halting problem. Note that for any given machine D we can construct a machine [D, s] which mimics D when fed input string s (simply append states to D so that the machine first erases everything on the tape, writes out s on the tape and then executes the normal procedure for D). Then, to determine whether a given machine T accepts a given input s, ask machine A whether [T, s] agrees with the machine that always accepts. Since we've now constructed a Turing machine which solves the halting problem, our original assumption, the existence of A, must be false.

There are other theorems of a similar nature that can be proven with more work. The upshot is that in general, there are very few things that a program can say about all programs.
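
The reduction, rendered as Python-flavoured pseudocode (agrees_on_all_inputs stands for the hypothetical machine A; the whole point of the argument is that no such function can actually exist):

    def agrees_on_all_inputs(machine1, machine2):
        """Hypothetical machine A: True iff the two machines accept the same inputs."""
        raise NotImplementedError("cannot exist, by the argument above")

    def always_accept(_input):
        return True

    def accepts(machine, s):
        """Would decide whether `machine` accepts input `s` (equivalent to the
        halting problem) if agrees_on_all_inputs existed."""
        def machine_on_s(_ignored_input):       # the machine [T, s]: ignore the
            return machine(s)                   # actual input and run T on s
        return agrees_on_all_inputs(machine_on_s, always_accept)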

Replies from: gwern
comment by gwern · 2010-06-17T03:36:31.834Z · LW(p) · GW(p)

Wouldn't it have been easier to just link to Rice's theorem?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-17T03:46:26.425Z · LW(p) · GW(p)

I didn't remember the name of the theorem and my Googlefu is weak.

comment by rwallace · 2010-06-17T02:34:41.151Z · LW(p) · GW(p)

True. Test inputs suffice for an optimizer that on average wins more than it loses, which is good enough to be useful, but if you want guaranteed efficiency, that comes back to proof, and the current state-of-the-art is a good way short of doing that in typical cases.

comment by whpearson · 2010-06-14T14:57:15.911Z · LW(p) · GW(p)

Partial evaluation is interesting to me in an AI sense. If you haven't, have a look at the three Futamura projections.

But instead of compilers and language specifications you have learning systems and problem specifications. Or something along those lines.

Replies from: rwallace
comment by rwallace · 2010-06-14T16:15:23.215Z · LW(p) · GW(p)

Right, that's optimization again. Basically the reason I'm asking about this is that I'm working on a theorem prover (with the intent of applying it to software verification), and if Alan Crowe considers current designs the wrong kind, I'm interested in ideas about what the right kind might be, and why. (The current state of the art does need to be extended, and I have some ideas of my own about how to do that, but I'm sure there are things I'm missing.)

comment by Will_Newsome · 2010-06-14T09:26:57.790Z · LW(p) · GW(p)

Why is the word obviously in quotes?

Replies from: Morendil
comment by Morendil · 2010-06-14T09:45:20.418Z · LW(p) · GW(p)

Because I am not just saying it's not obvious an AI would recursively self-improve, I'm also referring to Eliezer's earlier claims that such recursive self-improvement (aka FOOM) is what we'd expect given our shared assumptions about intelligence. I'm sort-of quoting Eliezer as saying FOOM obviously falls out of these assumptions.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-06-14T09:49:53.128Z · LW(p) · GW(p)

I'm worried about the "sort-of quoting" part. I get nervous when people put quote marks around things that aren't actually quotations of specific claims.

Replies from: Morendil
comment by Morendil · 2010-06-14T09:53:19.848Z · LW(p) · GW(p)

Noted, and thanks for asking. I'm also somewhat over-fond of scare quotes to denote my using a term I'm not totally sure is appropriate. Still, I believe my clarification above is sufficient that there isn't any ambiguity left now as to what I meant.

comment by Roko · 2010-06-14T13:49:25.929Z · LW(p) · GW(p)

Stephen Hawking, Martin Rees, Max Tegmark, Nick Bostrom, Michio Kaku, David Chalmers and Robin Hanson are all smart people who broadly agree that >human AI in the next 50-100 years is reasonably likely (they'd all give p > 10% to that with the possible exception of Rees). On the con side, who do we have? To my knowledge, no one of similarly high academic rank has come out with a negative prediction.

Edit: See Carl's comment below. Arguing majoritarianism against a significant chance of AI this century is becoming less tenable, as a significant set of experts come down on the "yes" side.

It is notable that I can't think of any very reputable nos. The ones that come to mind are Jaron Lanier and that Glenn Zorpette.

Replies from: CarlShulman, JoshuaZ, timtyler
comment by CarlShulman · 2010-06-14T14:41:36.308Z · LW(p) · GW(p)

10% is a low bar; it would require a dubiously high level of confidence to rule out AI over a 90 year time frame (longer than the time since Turing and Von Neumann and the like got going, with a massively expanding tech industry, improved neuroimaging and neuroscience, superabundant hardware, and perhaps biological intelligence enhancement for researchers). I would estimate the average of the group you mention as over 1/3rd by 2100. Chalmers says AI is more likely than not by 2100, I think Robin and Nick are near half, and I am less certain about the others (who have said that it is important to address AI or AI risks but not given unambiguous estimates).

Here's Ben Goertzel's survey. I think that Dan Dennett's median estimate is over a century, although at the 10% level by 2100 I suspect he would agree. Dawkins has made statements that suggest similar estimates, although perhaps with somewhat shorter timelines. Likewise for Doug Hofstadter, who claimed at the Stanford Singularity Summit to have raised his estimate of time to human-level AI from the 21st century to mid-late millennium, although he weirdly claimed to have done so for non-truth-seeking reasons.

comment by JoshuaZ · 2010-06-14T14:07:37.332Z · LW(p) · GW(p)

None of those people are AI theorists so it isn't clear that their opinions should get that much weight given that it is outside their area of expertise (incidentally, I'd be curious what citation you have for the Hawking claim). From the computer scientists I've talked to, the impression I get is that they see AI as such a failure that most of them just aren't bothering to do much in the way of research in it except for narrow purpose machine learning or expert systems. There's also an issue of a sampling bias: the people who think a technology is going to work are generally more loud about that than people who think it won't. For example, a lot of physicists are very skeptical of Tokamak fusion reactors being practical anytime in the next 50 years, but the people who talk about them a lot are the people who think they will be practical.

Note also that nothing in Yoreth's post actually relied on or argued that there won't be moderately smart AI so it doesn't go against what he's said to point out that some experts think there will be very smart AI (although certainly some people on that list, such as Chalmers and Hanson do believe that some form of intelligence explosion like event will occur). Indeed, Yoreth's second argument applies roughly to any level of intelligence. So overall, I don't think the point about those individuals does much to address the argument.

Replies from: Roko, MatthewW
comment by Roko · 2010-06-14T15:01:10.366Z · LW(p) · GW(p)

None of those people are AI theorists so it isn't clear that their opinions should get that much weight given that it is outside their area of expertise

I disagree with this, basically because AI is a pre-paradigm science. Having been at a big CS/AI dept, I know that the amount of accumulated wisdom about AI is virtually nonexistent compared to that for physics.

What does an average AI prof know that a physics graduate who can code doesn't know? I'm struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy.

The only examples of genuine scientific insight in AI I have seen are in the works of Pearl, Hutter, Drew McDermott and recently Josh Tenenbaum.

Replies from: JoshuaZ, SilasBarta, Daniel_Burfoot, Vladimir_Nesov, timtyler, whpearson
comment by JoshuaZ · 2010-06-14T15:07:49.065Z · LW(p) · GW(p)

That's a very good point. The AI theorist presumably knows more about avenues that have not done very well (neural nets, other forms of machine learning, expert systems) but isn't likely to have much general knowledge. However, that does mean the AI individual has a better understanding of how many different approaches to AI have failed miserably. But that's just a comparison to your example of the physics grad student who can code. Most of the people you mentioned in your reply to Yoreth are clearly people who have knowledge bases closer to that of the AI prof than to the physics grad student. Hanson certainly has looked a lot at various failed attempts at AI. I think I'll withdraw this argument. You are correct that these individuals on the whole are likely to have about as much relevant expertise as the AI professor.

Replies from: Roko
comment by Roko · 2010-06-14T15:21:08.121Z · LW(p) · GW(p)

I think I'll withdraw this argument.

Upvoted for honest debating!

comment by SilasBarta · 2010-06-14T21:19:45.439Z · LW(p) · GW(p)

What does an average AI prof know that a physics graduate who can code doesn't know? I'm struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy.

So people with no experience programming robots but who know the equations governing them would just be able to, on the spot, come up with comparable code to AI profs? What do they teach in AI courses, if not the kind of thing that would make you better at this?

Replies from: Roko, Roko
comment by Roko · 2010-06-14T21:50:45.060Z · LW(p) · GW(p)

What do they teach in AI courses

How to code, and rookie Bayesian stats/ML, plus some other applied stuff, like statistical Natural Language Processing (this being an application of the ML/stats stuff, but there are some domain tricks and tweaks you need).

comment by Roko · 2010-06-14T21:53:27.294Z · LW(p) · GW(p)

So people with no experience programming robots but who know the equations governing them would just be able to, on the spot, come up with comparable code to AI profs?

The point is that there would only be experience, not theory, separating someone who knew Bayesian stats, coding and how to do science from an AI "specialist". Yes, there are little shortcuts and details that a PhD in AI would know, but really there's no massive intellectual gulf there.

comment by Daniel_Burfoot · 2010-06-14T20:46:16.626Z · LW(p) · GW(p)

I disagree with this, basically because AI is a pre-paradigm science.

I am gratified to find that someone else shares this opinion.

What does an average AI prof know that a physics graduate who can code doesn't know?

A better way to phrase the question might be: what can an average AI prof. do that a physics graduate who can code, can't?

Replies from: Roko
comment by Roko · 2010-06-14T22:47:12.053Z · LW(p) · GW(p)

Each prof will, of course, have a niche app that they do well (in fact sometimes there is too much pressure to have a "trick" you can do to justify funding), but the key question is: are they more like a software engineer masquerading as a scientist than a real scientist? Do they have a paradigm and theory that enables thousands of engineers to move into completely new design-spaces?

I think that the closest we have seen is the ML revolution, but when you look at it, it is not new science, it is just statistics correctly applied.

I have seen some instances of people trying to push forward the frontier, such as the work of Hutter, but it is very rare.

Replies from: CarlShulman, SilasBarta
comment by CarlShulman · 2010-06-15T12:46:08.832Z · LW(p) · GW(p)

I think that the closest we have seen is the ML revolution, but when you look at it, it is not new science, it is just statistics correctly applied.

Statistics vs machine learning: FIGHT!

comment by SilasBarta · 2010-06-15T00:20:17.402Z · LW(p) · GW(p)

Could you clarify exactly what Hutter has done that has advanced the frontier? I used to be very nearly a "Hutter enthusiast", but I eventually concluded that his entire work is:

"Here's a few general algorithms that are really good, but take way too long to be of any use whatsoever."

Am I missing something? Is there something of his I should read that will open my eyes to the ease of mechanizing intelligence?

Replies from: Roko, timtyler
comment by Roko · 2010-06-15T08:33:33.365Z · LW(p) · GW(p)

I think that the way of looking at the problem that he introduced is the key, i.e. thinking of the agent and environment as programs. The algorithms (AIXI, etc) are just intuition pumps.

Replies from: timtyler
comment by timtyler · 2010-06-15T21:07:39.598Z · LW(p) · GW(p)

Surely everyone has been doing that from the beginning.

comment by timtyler · 2010-06-15T21:10:54.898Z · LW(p) · GW(p)

This seems like a fairly reasonable description of the work's impact:

"Another theme that I picked up was how central Hutter’s AIXI and my work on the universal intelligence measure has become: Marcus and I were being cited in presentations so often that by the last day many of the speakers were simply using our first names. As usual there were plenty of people who disagree with our approach, however it was clear that our work has become a major landmark in the area."

Replies from: SilasBarta
comment by SilasBarta · 2010-06-16T16:36:54.739Z · LW(p) · GW(p)

But why does it get those numerous citations? What real-world, non-academic consequences have resulted from this massive usage of Hutter's intelligence definition, which would distinguish it from a mere mass frenzy?

Replies from: timtyler
comment by timtyler · 2010-06-16T17:00:00.509Z · LW(p) · GW(p)

No time for a long explanation from me - but "universal intelligence" seems important partly since it shows how simple an intelligent agent can be - if you abstract away most of its complexity into a data-compression system. It is just a neat way to break down the problem.

comment by Vladimir_Nesov · 2010-06-14T15:07:50.433Z · LW(p) · GW(p)

What does an average AI prof know that a physics graduate who can code doesn't know?

Machine learning, more math/probability theory/belief networks background?

Replies from: Roko
comment by Roko · 2010-06-14T15:15:02.860Z · LW(p) · GW(p)

A good physics or math grad who has done bayesian stats is at no disadvantage on the machine learning stuff, but what do you mean by "belief networks background"?

Do you mean "deep belief networks"?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-14T15:33:51.451Z · LW(p) · GW(p)

There is a ton of knowledge about probabilistic processes defined by networks in various ways, numerical methods for inference in them, clustering, etc. All the fundamental stuff in this range has applications to physics, and some of it was known in physics before getting reinvented in machine learning, so in principle a really good physics grad could know that stuff, but it's more than the standard curriculum requires. On the other hand, it's much more directly relevant to probabilistic methods in machine learning. Of course both should have a good background in statistics and Bayesian probability theory, but probabilistic analysis of nontrivial processes in particular adds unique intuitions that a physics grad won't necessarily possess.

comment by timtyler · 2010-06-15T21:05:43.021Z · LW(p) · GW(p)

Re: "What does an average AI prof know that a physics graduate who can code doesn't know? I'm struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy."

A very odd opinion. We have 60 years of study of the field, and have learned quite a bit, judging by things like the state of translation and speech recognition.

comment by whpearson · 2010-06-14T15:08:14.206Z · LW(p) · GW(p)

The AI prof is more likely to know more things that don't work and the difficulty of finding things that do. Which is useful knowledge when predicting the speed of AI development, no?

Replies from: Roko
comment by Roko · 2010-06-14T15:15:39.865Z · LW(p) · GW(p)

Which things?

Replies from: whpearson
comment by whpearson · 2010-06-14T15:22:49.902Z · LW(p) · GW(p)

Trying to model the world as crisp logical statements a la block worlds for example.

Replies from: Roko
comment by Roko · 2010-06-14T16:13:51.260Z · LW(p) · GW(p)

That being in the "things that don't work" category?

Replies from: whpearson
comment by whpearson · 2010-06-14T16:25:51.459Z · LW(p) · GW(p)

Yup... which things were you asking for? Examples of things that do work? You don't actually need to find them to know that they are hard to find!

comment by MatthewW · 2010-06-14T19:10:44.952Z · LW(p) · GW(p)

I think Hofstadter could fairly be described as an AI theorist.

Replies from: Emile
comment by Emile · 2010-06-17T14:14:59.182Z · LW(p) · GW(p)

So could Robin Hanson.

comment by timtyler · 2010-06-15T21:01:05.380Z · LW(p) · GW(p)

Dan Dennett and Douglas Hofstadter don't think machine intelligence is coming anytime soon. Those folk actually know something about machine intelligence, too!

comment by timtyler · 2010-06-15T20:52:56.151Z · LW(p) · GW(p)

Re: "can a mind understand itself?"

That is no big deal: copy the mind a few billion times, and then it will probably collectively manage to grok its construction plans well enough.

comment by NancyLebovitz · 2010-06-15T13:34:28.921Z · LW(p) · GW(p)

Another argument against the difficulties-of-self-modeling point: It's possible to become more capable by having better theories rather than by having a complete model, and the former is probably more common.

It could notice inefficiencies in its own functioning, check to see if the inefficiencies are serving any purpose, and clean them up without having a complete model of itself.

Suppose a self-improving AI is too cautious to go mucking about in its own programming, and too ethical to muck about in the programming of duplicates of itself. It still isn't trapped at its current level, even aside from the reasonable approach of improving its hardware, though that may be a more subtle problem than generally assumed.

What if it just works on having a better understanding of math, logic, and probability?

comment by xamdam · 2010-06-14T15:33:00.343Z · LW(p) · GW(p)

In addition to theoretical objections, I think the majoritarian argument is factually wrong. Remember, 'future is here, just not evenly distributed'.

http://www.google.com/trends?q=singularity shows a trend

http://www.nytimes.com/2010/06/13/business/13sing.html?pagewanted=all - this week in NYT. Major MSFT and GOOG involvement.

http://www.acceleratingfuture.com/michael/blog/2010/04/transhumanism-has-already-won/

Replies from: timtyler
comment by timtyler · 2010-06-15T20:58:18.464Z · LW(p) · GW(p)

Re: "http://www.google.com/trends?q=singularity shows a trend"

Not much of one - and also, this is a common math term - while:

"Your terms - "technological singularity" - do not have enough search volume to show graphs."

comment by mindviews · 2010-06-14T11:04:31.767Z · LW(p) · GW(p)

Of course, it could just add complexity and hope that it works, but that’s just evolution, not intelligence explosion.

The critical aspect of a "major-impact intelligence-explosion singularity" isn't the method for improvement but the rate of improvement. If computer processing power continues to grow at an exponential rate, even an inefficiently improving AI will have the growth in raw computing power behind it.

So: do you know any counterarguments or articles that address either of these points?

I don't have any articles but I'll take a stab at counterarguments.

A Majoritarian counterargument: AI turned out to be harder and further away than originally thought. The general view is still tempered by the failure of AI to live up to those expectations. In short, the AI researchers cried "wolf!" too much 30 years ago and now their predictions aren't given much weight because of that bad track record.

A mind can't understand itself counterargument: Even accepting as a premise that a mind can't completely understand itself, that's not an argument that it can't understand itself better than it currently does. The question then becomes which parts of the AI mind are important for reasoning/intelligence and can an AI understand and improve that capability at a faster rate than humans.

comment by timtyler · 2010-06-15T20:50:52.575Z · LW(p) · GW(p)

Re: "If nothing else, you might think that someone could exaggerate the threat of the singularity and use it to scare people into giving them government funds. But we don’t even see that happening."

? I see plenty of scaremongering around machine intelligence. So far, few governments have supported it - which seems fairly sensible of them.

comment by NancyLebovitz · 2010-06-15T12:05:49.699Z · LW(p) · GW(p)

How do we know that governments aren't secretly working on AI?

Is it worth speculating about the goals which would be built into a government-designed AI?

comment by CarlShulman · 2010-06-14T15:35:54.985Z · LW(p) · GW(p)

Regarding majoritarianism:

imminent, so important, and furthermore so sensitive to initial conditions

To a risk-neutral consequentialist or to an elected or corporate official?

Crash programs in basic science because of speculative applications are very uncommon. Decades of experimentation with nuclear fission only brought a crash program with the looming threat of the Nazis, and after a practical demonstration of a chain reaction.

Over the short time spans over which governments make their plans, the probability of big advances in AI basic science coming is relatively small, even if substantial over the longer term. So you get all the usual issues with attending to improbable (in any given short period) dangers that no one has recent experience with. Note things like hurricane Katrina, the Gulf oil spill, etc. The global warming effects of fossil fuel use have been seen as theoretically inevitable since at least the Eisenhower administration, and momentum for action has only gotten mobilized after a long period of actual warming providing pretty irrefutable (and yet widely rejected anyway!) evidence.

comment by Kevin · 2010-06-16T20:47:29.151Z · LW(p) · GW(p)

IBM's Watson AI trumps humans in "Jeopardy!"

http://news.ycombinator.com/item?id=1436625

Replies from: cousin_it, cousin_it
comment by cousin_it · 2010-06-16T21:10:18.092Z · LW(p) · GW(p)

Thanks a lot for the link. I remember Eliezer arguing with Robin whether AI will advance explosively by using few big insights, or incrementally by amassing encoded knowledge and many small insights. Watson seems to constitute evidence in favor of Robin's position as it has no single key insight:

Ferrucci says his team will continue to fine-tune Watson, but improving its performance is getting harder. “When we first started, we’d add a new algorithm and it would improve the performance by 10 percent, 15 percent,” he says. “Now it’ll be like half a percent is a good improvement.”

comment by cousin_it · 2010-06-16T20:54:15.840Z · LW(p) · GW(p)

Direct link to printable (readable) single-page version of the article.

comment by timtyler · 2010-06-16T08:42:28.787Z · LW(p) · GW(p)

A question: Do subscribers think it would be possible to make an open-ended self-improving system with a perpetual delusion - e.g. that Jesus loves them?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-06-16T09:14:52.187Z · LW(p) · GW(p)

Yes, in that it could be open-ended in any "direction" independent of the delusion. However, that might require contrived initial conditions or cognitive architecture. You might also find the delusion becoming neutralized for all practical purposes, e.g. the delusional proposition is held to be true in "real reality" but all actual actions and decisions pertain to some "lesser reality", which turns out to be empirical reality.

ETA: Harder question: are there thinking systems which can know that they aren't bounded in such a way?

comment by JoshuaZ · 2010-06-15T04:12:58.438Z · LW(p) · GW(p)

I'm thinking of writing a top-post on the difficulties of estimating P(B) in real-world applications of Bayes' Theorem. Would people be interested in such a post?
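
(For concreteness, the P(B) I mean is the normalizing term in Bayes' Theorem,

    P(A|B) = P(B|A) P(A) / P(B),   where   P(B) = Σ_i P(B|A_i) P(A_i),

and that sum runs over every competing hypothesis A_i that could have produced the evidence B -- enumerating those, and assigning them sensible priors and likelihoods, is where real-world applications get hard.)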

Replies from: Christian_Szegedy, Kevin
comment by Christian_Szegedy · 2010-06-16T19:40:02.885Z · LW(p) · GW(p)

Funny, I've been entertaining the same idea for a few weeks.

Every time I read statements like "... and then I update the probabilities, based on this evidence ...", I think to myself: "I wish I had the time (or processing power) he thinks he has. ;)"

comment by Kevin · 2010-06-15T14:29:07.210Z · LW(p) · GW(p)

Sure

comment by h-H · 2010-06-15T02:11:17.417Z · LW(p) · GW(p)

yay! music composition AI

we've had them for a while though, but who knows, we might have our first narrowly focused AI band pretty soon.

good business opportunity there..maybe this is how the SIAI will guarantee unlimited funding in the future :)?

Replies from: NancyLebovitz, Blueberry, SilasBarta
comment by NancyLebovitz · 2010-06-16T09:09:18.472Z · LW(p) · GW(p)

Thanks for the link.

If a machine could write a Mozart sonata every bit as good as the originals, then what was so special about Mozart?

Mozart developed the Mozart sonata.

comment by Blueberry · 2010-06-16T08:48:01.182Z · LW(p) · GW(p)

Great article. Thanks for the link!

comment by SilasBarta · 2010-06-15T18:06:24.436Z · LW(p) · GW(p)

Good music isn't about good music. It's about which music authorities have approved of it.

Replies from: blogospheroid
comment by blogospheroid · 2010-06-18T06:47:34.955Z · LW(p) · GW(p)

What about saleable pop music?

comment by Morendil · 2010-06-18T07:29:12.902Z · LW(p) · GW(p)

Replicator constructed in Conway's Life

Replies from: NancyLebovitz, Blueberry, wedrifid
comment by NancyLebovitz · 2010-06-18T15:25:27.902Z · LW(p) · GW(p)

Wade's breakthrough came after his real-life child was born. The duties of fatherhood limited the time he could spend playing the game, so he replaced the "computer" with a much simpler pattern called an "instruction tape", made up of smaller patterns known as "gliders". By placing these at precise intervals, he created a program that feeds into the constructor and dictates its actions, much like the punched rolls of tape once used to control the first computers.

One of Eliezer's posts talks about realizing that conventional science is content with an intolerably slow pace. Here we have an example of less time leading to a better solution.

comment by Blueberry · 2010-06-18T07:46:52.805Z · LW(p) · GW(p)

Apparently it doesn't replicate itself any more than a glider does; the old copy is destroyed as it creates a new copy.

Replies from: Morendil
comment by Morendil · 2010-06-18T08:05:15.846Z · LW(p) · GW(p)

Reading the conwaylife.com thread gives a better sense of this thingie's importance than the comparison with a glider. ;)

comment by wedrifid · 2010-06-18T12:02:32.058Z · LW(p) · GW(p)

Now I'm wondering what screen resolution and how many potions of longevity would be required to evolve intelligent life while playing ADOM.

comment by simplicio · 2010-06-17T00:03:02.190Z · LW(p) · GW(p)

An idea I had: an experiment in calibration. Collect, say, 10 (preferably more) occasions on which a weather forecaster said "70% chance of rain/snow/whatever," and note whether or not these conditions actually occurred. Then find out if the actual fraction is close to 0.7.

I wonder whether they actually do care about being well calibrated? Probably not, I suppose their computers just spit out a number and they report it. But it would be interesting to find out.

I will report my findings here, if you are interested, and if I stay interested.
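
The bookkeeping itself is trivial; here's a minimal sketch (the forecast/outcome pairs are made up) of the calibration check I have in mind:

    # Bucket forecasts by the stated probability of rain, then compare the stated
    # number with the observed frequency in each bucket.  Data below is made up.
    from collections import defaultdict

    forecasts = [(0.7, True), (0.7, False), (0.7, True), (0.3, False), (0.3, True)]

    buckets = defaultdict(list)
    for stated_p, rained in forecasts:
        buckets[stated_p].append(rained)

    for stated_p, outcomes in sorted(buckets.items()):
        observed = sum(outcomes) / len(outcomes)
        print("said {:.0%}: rained {:.0%} of {} times".format(stated_p, observed, len(outcomes)))

The hard part is collecting enough forecast/outcome pairs, not the arithmetic.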

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-17T00:07:56.626Z · LW(p) · GW(p)

Note that this sort of thing has been done a bit before. See for example this analysis.

Edit: The linked analysis has a lot of problems. See discussion below.

Replies from: simplicio
comment by simplicio · 2010-06-17T00:22:31.288Z · LW(p) · GW(p)

Cool, but hold on a minute though. I quote:

In measuring precipitation accuracy, the study assumed that if a forecaster predicted a 50 percent or higher chance of precipitation, they were saying it was more likely to rain than not. Less than 50 percent meant it was more likely to not rain.

That prediction was then compared to whether or not it actually did rain...

Isn't something wrong here? If you say "60% chance of rain," and it doesn't rain, you are not necessarily a bad forecaster. Not unless it actually rained on less (or more!) than 60% of those occasions. It should rain on ~60% of occasions on which you say "60% chance of rain."

Am I just confused about this fellow's methodology?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-17T00:29:03.078Z · LW(p) · GW(p)

If I'm reading this correctly they are doing exactly what you want but only breaking into two categories "more likely to rain than not" and "less likely to rain than not." But I'm confused by the fact that 50 percent gets into the expecting rain category.

Replies from: simplicio, simplicio
comment by simplicio · 2010-06-17T01:28:55.838Z · LW(p) · GW(p)

Okay, this is like a sore tooth. Somebody's wrong, and I don't know if it's me. A queasy feeling.

Listen to this though:

The prize for the single most inconsistent forecast goes to Channel 5’s Devon Lucie who on Sunday, September 30th predicted a high temperature of 53 degrees for October 7th, and seven days later changed it to 84 degrees — a difference of 31 degrees! It turned out to be 81 that day.

A close second was Channel 4’s Mike Thompson’s initial prediction of 83 for October 15th, which he changed to 53 just two days later. It turned out to be 64 on the 15th.

Uhhh.... it's remarkable that a forecast changed significantly in SEVEN DAYS? What?!

The weather is the canonical example of mathematical chaos in an (in principle) deterministic system. Of course the forecasts will change, because Tuesday's weather sets the initial conditions for Wednesday, and chaotic systems are ultra-sensitive to initial conditions! The forecasters would be idiots if they didn't update their forecasts as much as possible.

The "close second," moreover, should be first! That change occurred in a two day period versus a seven! ARGGHHH.

comment by simplicio · 2010-06-17T00:48:01.085Z · LW(p) · GW(p)

To me it almost seems as though a scenario like this is happening:

You: "What is the chance that a 6-sided die will turn up one of the numbers 1-4?"

Me: "2/3, of course."

You: "I will take that to mean that it is more likely than not to be 1-4 - thus I will count 1-4 as a hit and 5 or 6 as a miss."

Me: "Um... okay(?)"

You: rolls a die 100 times. "Oh, it seems you were correct only about 66% of the time. You're not very good at this, are you?"

In other words, isn't the author misrepresenting the forecasters in throwing away their POPs, which could be interpreted as subjective beliefs about likelihoods?

I was also sort of confused by:

Have you ever noticed that the prediction for a particular day keeps changing from day to day, sometimes by quite a bit? The graph above shows how much the different stations change their minds about their own forecasts over a seven-day period.

On average, N.O.A.A. is the most consistent, but even they change their mind by more than six degrees and 23 percent likelihood of precipitation over a seven-day span.

The Kansas City television meteorologists will change their mind from 6.8 to nearly nine degrees in temperature and 30 percent to 57 percent in precipitation, showing a distinct lack of confidence in their initial predictions as time goes on.

Is changing the forecast as new information comes in a bad thing?? Or is it merely that they are changing the forecast too much?

Nota bene: I am also very tired and may just be being thickheaded - I rate that possibility at about 50%, and you're welcome to check my calibration. =)

Replies from: JoshuaZ, JoshuaZ, Mass_Driver
comment by JoshuaZ · 2010-06-17T01:39:59.929Z · LW(p) · GW(p)

Related thought: Maybe see if they will give you their data? That would save you some time, and I'm now very interested in whether a more careful analysis will substantially disagree with their results.

comment by JoshuaZ · 2010-06-17T01:38:15.868Z · LW(p) · GW(p)

Oh. I see. Yes, they aren't taking into account the accuracy estimations at all. Your criticism seems correct. Your complaints about the other aspects seem accurate also.

Huh. This is disturbing; most of the Freakonomics blog entries I've read have good analysis of data. It looks like this one really screwed the pooch. I have to wonder if others they've done have similar problems that I haven't noticed.

Replies from: simplicio
comment by simplicio · 2010-06-17T01:44:57.518Z · LW(p) · GW(p)

Yeah, I am a fan of Freakonomics generally too. I will write to them, I think. Will let you know how it goes. I want to confirm I am right about the probability stuff though, I still have a niggling doubt that I've just misunderstood something. But I think they are definitely wrong about the forecast updating.

comment by Mass_Driver · 2010-06-17T01:55:21.642Z · LW(p) · GW(p)

Is changing the forecast as new information comes in a bad thing?? Or is it merely that they are changing the forecast too much?

I think the criticism is that if they need to change their predictions so much between time 1 and time 2, then it is irresponsible to make any prediction at time 1. This is a hard case to make out for the temperature swings, since I think 8 degrees is only about one standard deviation for a prediction of a day's temperature in a city knowing only what day of the year it is, but it's an easy case to make out for the precipitation swings: if, on average, you are wrong by 40% objective probability (not even 40% error; 40% chance of rain, here), then a prediction of, e.g., 30% will on average convey virtually no information; that could easily mean 0% or it could easily mean 70%, and without too much implausibility it could even mean 90% -- so why bother saying 30% at all when you could (more honestly) admit your ignorance about whether it will rain next week.

In the meteorologists' defense, their medium-range predictions become useful when tested against broader time periods. Specifically, a 60% chance of rain on Thursday means you can be pretty sure that it will rain on Wednesday, Thursday, or Friday -- perhaps with 90% confidence. The reason for this is that predictions of rain generally come from tracking low-pressure pockets of air as they sweep across the continent; these pockets might speed up or slow down, or alter their course by a few degrees, but they rarely disappear or turn around altogether.

You: "I will take that to mean that it is more likely than not to be 1-4 - thus I will count 1-4 as a hit and 5 or 6 as a miss."

This is a much more reasonable testing method when one's predictions are based on an alleged causal process. For example, suppose I claim that I can predict how many cards Bob will draw in a game of blackjack by taking into consideration all of the variables in the game. A totally naive predictor might be "Bob will hit no matter what." That predictor might be right about 60% of the time. A slightly better predictor might be "Bob will hit if his cards show a total of 13 or less." That predictor might be right about 70% of the time. If I, as a skilled blackjack kibitzer, can really add predictive value to these simple predictors, then I should be able to beat their hit-miss ratio, maybe getting Bob's decision right 75% of the time. If I knew Bob quite well and could read his tells, maybe I would go up to 90%.

Anyway, 66% is pretty good for a blind guess that can't be varied from episode to episode. So the test with the die that you're using in your analogy is a fair test, but the bar is set too high. If you can get 66% on a hit-miss test with a one-sentence rule, you're doing pretty well.

Replies from: simplicio
comment by simplicio · 2010-06-17T02:56:26.737Z · LW(p) · GW(p)

Point taken about forecast updating - information changing that drastically may be merely worthless noise.

However, on the coin toss/blackjack thing...

In your blackjack example, the answer you give is binary - Bob will either say "hit me" or "[whatever the opposite is, I've never played]." The meteorologists are giving answers in terms of probabilities: "there is a 70% chance that it will rain."

If you did that in the Blackjack example; i.e., you said "I rate it as 65% likely that Bob will take another card," and then he DIDN'T take another card, that would not mean you were bad at predicting - we would have to watch you for longer.

My complaint is that the author interpreted forecasters' probabilities as certainties, rounding them up to 1 or down to 0. This was unfair as it ignored their self-stated levels of confidence.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-06-17T03:26:10.753Z · LW(p) · GW(p)

Sorry, I didn't communicate clearly.

If you did that in the Blackjack example; i.e., you said "I rate it as 65% likely that Bob will take another card," and then he DIDN'T take another card, that would not mean you were bad at predicting - we would have to watch you for longer.

Correct. However, suppose we repeat this experiment 100 times, each time reducing my probability estimate to a binary prediction of hit-stay. Suppose that Bob hits 60 times, 50 of which were on occasions when I assigned greater than 50% probability to Bob hitting, and Bob stays 40 times, 13 of which were on occasions when I assigned less than 50% probability to Bob hitting. Thus, my overall accuracy, when reduced to a hit-stay prediction, is 63%. This is worse than my claimed certainty level of 65%, but better than the naive predictor "Bob always hits," which only got 60% of the episodes right. Thus, the pass-fail test is one way of distinguishing my predictive abilities from the predictive abilities of a broad generalization.

To see this, suppose instead that I always predict, with 65% certainty, that Bob will hit or that Bob will stay. I might rate the chance of Bob hitting at 65%, or I might rate it at 35%. In this experiment, Bob hits 75 times, 50 of which were on occasions when I assigned a 65% probability that Bob would hit. Bob stays 25 times, 18 of which were on occasions when I assigned a 65% probability that Bob would stay. I correctly predicted Bob's action 68% of the time, which is better than my stated certainty of 65%. However, my accuracy is worse than the accuracy of the naive predictor "Bob always hits," which would have scored 75%. Thus, my predictions are not very good, by one relatively objective benchmark, despite the fact that they are, in a narrow Bayesian sense, fairly well-calibrated.
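
If it helps, here is that bookkeeping as a tiny sketch, using exactly the counts from the second scenario: accuracy of my predictions rounded to hit/stay, versus the naive "Bob always hits" baseline.

    def accuracy(predictions, outcomes):
        """predictions: P(Bob hits); outcomes: True iff Bob actually hit."""
        correct = sum((p >= 0.5) == actual for p, actual in zip(predictions, outcomes))
        return correct / len(outcomes)

    episodes = ([(0.65, True)] * 50 + [(0.65, False)] * 7 +
                [(0.35, True)] * 25 + [(0.35, False)] * 18)
    preds, outs = zip(*episodes)

    print(accuracy(preds, outs))                 # 0.68 -- my rounded predictions
    print(accuracy([1.0] * len(outs), outs))     # 0.75 -- the "Bob always hits" baseline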

Again, sorry for the confusion. I gave an incomplete example before.

Replies from: simplicio
comment by simplicio · 2010-06-17T05:54:01.071Z · LW(p) · GW(p)

So if I understand correctly, the issue is not that the meteorologists are poorly calibrated (maybe they are, maybe they aren't), but rather that their predictions are less useful than a simple rule like "it never rains" for actually predicting whether it will rain or not.

However, my accuracy is worse than the accuracy of the naive predictor "Bob always hits," which would have scored 75%. Thus, my predictions are not very good, by one relatively objective benchmark, despite the fact that they are, in a narrow Bayesian sense, fairly well-calibrated.

I think I am beginning to see the light here. Basically, in this scenario you are too ignorant of the phenomenon itself, even though you are very good at quantifying your epistemic state with respect to the phenomenon? If this is more or less right, is there terminology that might help me get a better handle on this?

Replies from: Mass_Driver
comment by Mass_Driver · 2010-06-17T06:29:39.997Z · LW(p) · GW(p)

Bingo! That's exactly what I was trying to say. Thanks for listening. :-)

My jargon mostly comes from political science. We'd say the meteorologists are using an overly complicated model, or seizing on spurious correlations, or that they have a low pseudo-R-squared. I'm not sure any of those are helpful. Personally, I think your words -- the meteorologists are too ignorant for us to applaud their calibration -- are more elegant.

The only other thing I would add is that the reason why it doesn't make sense to applaud the meteorologists' guess-level calibration is because they have such poor model-level calibration. In other words, while their confidence about any given guess seems accurate, their implicit confidence about the accuracy of their model as a whole is too high. If your (complex) model does not beat a naive predictor, social science (and, frankly, Occam's Razor) says you ought to abandon it in favor of a simpler model. By sticking to their complex models in the face of weak predictive power, the meteorologists suggest that either (1) they don't know or care about Occam's Razor, or (2) they actually think their model has strong predictive power.

Replies from: NancyLebovitz, simplicio
comment by NancyLebovitz · 2010-06-18T12:26:26.639Z · LW(p) · GW(p)

Here's a really crude indicator of improvement in weather forecasting: I can remember when jokes about forecasts being wrong were a cliche. I haven't heard a joke about weather forecasts for years, probably decades, which suggests that forecasts have actually gotten fairly good, even if they're not as accurate as the probabilities in the forecasts suggest.

Does anyone remember when weatherman jokes went away?

Replies from: wedrifid
comment by wedrifid · 2010-06-18T13:12:51.923Z · LW(p) · GW(p)

Can we conclude that the drop in the cliche's prevalence is related to the quality of weather forecasting? All else being equal, I expect a culture to develop a resistance to any given cliche over time. For example, the cliche "It's not you, it's me" has dropped in use and been somewhat relegated to 'second order cliche'. But it is true now at least as much as it has been in the past.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-18T13:46:00.450Z · LW(p) · GW(p)

A fair point, though if a cliche has lasted for a very long time, I think it's more plausible that its end is about changed conditions rather than boredom.

comment by simplicio · 2010-06-17T07:13:08.405Z · LW(p) · GW(p)

Gotcha. Thanks for the explanation, it's been very clarifying. =)

comment by cousin_it · 2010-06-14T21:59:06.182Z · LW(p) · GW(p)

Econ question: if a child is renting an apartment for $X, and the parents have a spare apartment that they are currently renting out for $Y, would it help or hurt the economy if the child moved into that apartment instead? Consider the cases X<Y, X=Y, X>Y.

Replies from: SilasBarta, James_K, MichaelBishop, cousin_it, AlephNeil, Vladimir_M, MichaelBishop, AlephNeil, MichaelBishop
comment by SilasBarta · 2010-06-15T15:02:22.928Z · LW(p) · GW(p)

If I moved into that apartment instead, would that help or hurt the country's economy as a whole?

Good question, not because it's hard to answer, but because of how pervasive the wrong answer is, and the implications for policy for economists getting it wrong.

  • If your parents prefer you being in their apartment to the forgone income, they benefit; otherwise they don't.

  • If you prefer being in their apartment to the alternative rental opportunities, you benefit; otherwise, you don't.

  • If potential renters or the existing ones prefer your parents' unit to the other rental opportunities and they are denied it, they are worse off; otherwise, they aren't.

ANYTHING beyond that -- anything whatsoever -- is Goodhart-laden economist bullsh**. Things like GDP and employment and CPI were picked long ago as a good correlate of general economic health. Today, they are taken to define economic health, irrespective of how well people's wants are being satisfied, which is supposed to be what we mean by a "good economy".

Today, economists equate growing GDP -- irrespective of measuring artifacts that make it deviate from what we want it to measure -- with a good economy. If the economy isn't doing well enough, well, we need more "aggregate demand" -- you see, people aren't buying enough things, which must be bad.

Never once has it occurred to anyone in the mainstream (and very few outside of the mainstream) that it's okay for people to produce less, consume less, and have more leisure. No, instead, we have come to define success by the number of money-based market exchanges, rather than whether people are getting the combination of work, consumption, and leisure (all broadly defined) that they want.

This absurdity reveals itself when you see economists scratching their heads, thinking how we can get people to spend more than they want to, in order to help the economy. Unpack those terms: they want people to hurt themselves, in order to hurt less.

Now, it's true there are prisoner's dilemma-type situations where people have to cooperate and endure some pain to be better off in the aggregate. But the corresponding benefit that economists expect from this collective sacrifice is ... um ... more pointless work that doesn't satisfy real demand .. but hey, it keeps up "aggregate demand", so it must be what a sluggish economy needs.

Are you starting to see how skewed the standard paradigm is? If people found a more efficient, mutualist way to care for their children rather than make cash payments to day care, this would be regarded as a GDP contraction -- despite most people being made better off and efficiency improving. If people work longer hours than they'd like, to produce stuff no one wants, well, that shows up as more GDP, and it's therefore "good".

How the **** did we get into this mindset?

Sorry, [/another rant].

Replies from: NancyLebovitz, James_K, Vladimir_Nesov, MichaelBishop, thomblake
comment by NancyLebovitz · 2010-06-16T08:54:25.454Z · LW(p) · GW(p)

What isn't reflected in the GDP is huge.

There's the underground economy-- I've seen claims about the size of it, but how would you check them?

There's everything people do for each other without it going through the official economy.

And there's what people do for themselves-- every time you turn over in bed, you are presumably increasing value. If you needed paid help, it would be adding to the GDP.

comment by James_K · 2010-06-16T05:28:40.145Z · LW(p) · GW(p)

I don't understand where you acquired this view of economists. I am an economist and I assure you economists don't subscribe to the "measured GDP is everything" view you attribute to them.

This absurdity reveals itself when you see economists scratching their heads, thinking how we can get people to spend more than they want to, in order to help the economy. Unpack those terms: they want people to hurt themselves, in order to hurt less.

This is not an accurate portrayal of what Keynesians believe. The Keynesian theory of depressions and recessions is that excessive pessimism leads people to avoid investing or starting businesses, which lowers economic activity further, which promotes more pessimism, and so on.

The goal of stimulus is effectively to trick people into thinking the economy is better than it is, which then becomes a self-fulfilling prophecy; low-quality spending by government drives high-quality spending by the private sector.

If you wish to be sceptical of this story (I'm fairly dubious about it myself), then fine, but Keynesians aren't arguing what you think they're arguing.

Replies from: SilasBarta, Vladimir_M, NancyLebovitz
comment by SilasBarta · 2010-06-16T14:44:51.623Z · LW(p) · GW(p)

If you wish to be sceptical of this story (I'm fairly dubious about it myself), then fine, but Keynesians aren't arguing what you think they're arguing.

No, that's precisely what I assumed they're arguing, and I believe my points were completely responsive. I will address the position you describe in the context of the criticism in my rant.

The Keynesian theory of depressions and recessions is that excessive pessimism leads people to avoid investing or starting businesses, which lowers economic activity further, which promotes more pessimism, and so on.

The goal of stimulus is effectively to trick people into thinking the economy is better than it is, which then becomes a self-fulfilling prophecy;

Now, unpack the meaning of all of those terms, back to the fundamentals we really care about, and what is all that actually saying? Well, first of all, have you played rationalist taboo with this and tried to phrase everything without economics jargon, so as to fully break down exactly what all the above means at the layperson level? To me, economists seem to talk as if they have not done so.

I would like for you to tell me whether you have done so in the past, and write up the phrasing you get before reading further. You've already tabooed a lot, but I think you need to go further, and remove the terms: recession, depression, stimulus, excessive, pessimism, invest, and economic activity. (What's left? Terms like prefer, satisfaction, wants, market exchange, resources, working, changing actions.)

Now, here's what I get: (bracketed phrases indicate a substitution of standard economic jargon)

"People [believe that future market interactions with others will be less capable of satisfyng their wants], which leads them to [allocate resources so as to anticipate lower gains from such activity]. As people do this, the combined effect of their actions is to make this suspicion true, [increasing the relative benefit of non-market exchanges or unmeasured market exchanges].

"The government should therefore [purchase things on the market] in order to produce a [false signal of the relative merit of selling certain goods], and facilitate production of [goods people don't want at current prices or that they previously couldn't justify asking their government to provide]. This, then, becomes a self-fulfilling prophecy: once people [sell unwanted goods due to this government action], it actually becomes beneficial for others to sell goods people do want on the market, [preventing a different kind of adjustment to conditions from happening]."

Phrased in these terms, does it even make sense? Does it even claim to do something people might want?

Replies from: James_K
comment by James_K · 2010-06-17T08:22:57.879Z · LW(p) · GW(p)

People [believe that future market interactions with others will be less capable of satisfying their wants]

That was a very useful exercise since it helped me identify the key point of disagreement between you and Keynesianism. If I'm right, you're coming at this from a goods-market perspective, i.e. "I, a typical consumer, am not interested in any of these goods at these prices, so I'm not going to buy so much", whereas the Keynesians are blaming this kind of attitude: "I, a typical consumer, am fearful of the future. While I want to buy stuff, I'd better start saving for the future instead in case I lose my job", and it's the saving that triggers the recession (money flows out of the economy into savings, this fools people into thinking they are poorer, and the death spiral begins).

A few other contextual points: 1) The fiscal stimulus that Keynes recommended was based on governments running deficits, not necessarily spending more. Cutting taxes works just as well.

2) Keynes was trying to reduce the magnitude of boom-bust swings, not increase trend economic growth rates. As such he prescribed the opposite behaviour in boom times: have government run surpluses to tamp down consumer exuberance. This is less widely known since politicians only ever talk about Keynes during recessions, when it gives them intellectual cover to spend lots of money.

3) The Keynesian consensus is not universal. Arnold Kling's "recalculation" story is much closer to your picture, and you'll notice he doesn't advocate stimulus, but rather waiting to see how people adjust to the new economic circumstances.

4) GDP is the preoccupation of macroeconomists. Microeconomists (like me) care much more about allocative efficiency, which is to say to what extent are things in the hands of the people who value them most? So there's a whole branch of the profession to which your initial GDP-centrism comment does not apply.

It's points 3 and 4 in particular that lead me to object to your claim that economists are obsessed with GDP. To my way of thinking, it's politicians that are obsessed with GDP because they believe their chances of re-election are tied to economic growth and unemployment figures. So they spend a lot of time asking economists how to increase GDP, and therefore economists, more often than not, end up discussing GDP when they appear in public.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-17T14:13:47.108Z · LW(p) · GW(p)

That was a very useful exercise since it helped me identify the key point of disagreement between you and Keynesianism. If I'm right, you're coming at this from a goods-market perspective, i.e. "I, a typical consumer, am not interested in any of these goods at these prices, so I'm not going to buy so much", whereas the Keynesians are blaming this kind of attitude: "I, a typical consumer, am fearful of the future. While I want to buy stuff, I'd better start saving for the future instead in case I lose my job", and it's the saving that triggers the recession (money flows out of the economy into savings, this fools people into thinking they are poorer, and the death spiral begins).

It's still not clear to me that you've done what I asked (taboo your model's predicates down to fundamentals laypeople care about), or that you have the understanding that would result from having done what I asked.

  • What's the difference between the "goods market" perspective and the "blaming this kind of attitude"/Keynesian perspective? Why is one wrong or less helpful, and what problems would result from using it?

  • Why is it bad for people to believe they are poorer when they are in fact poorer?

  • Why is it bad for more money to go into savings? Why does "the economy" entirely hinge on money not doing this?

Until you can answer (or avoid assuming away) those problems, it's not clear to me that your understanding is fully grounded in what we actually care about when we talk about a "good economy", and so you're making the same oversights I mentioned before.

Replies from: James_K
comment by James_K · 2010-06-18T20:50:43.704Z · LW(p) · GW(p)

you're making the same oversights I mentioned before.

No, I'm not making those oversights, because I am a) not a Keynesian and b) not a macroeconomist. My offering defences of this position should not be construed as fundamental agreement with that position.

This is quickly turning into a debate about the merits of Keynesianism which is not a debate I am interested in because stabilisation policy is not my field and I don't find it very interesting, I got enough of it at university. I'm going to touch on a few points here, but I'm not going to engage fully with your argument; you really need to talk to a Keynesian macroeconomist if you want to discuss most of this stuff. For one thing my ability to taboo certain words is affected by the fact I don't have a very solid grip on the theory and I don't spend much of my time thinking about high level aggregates like GDP.

Now here's the best I can do on your bullet-point questions; sorry if it doesn't help much, but it's all I've got: 1) The difference is that Keynesians believe savings reduce the money supply by taking money out of circulation; this makes people think they are poorer, which makes them act as if they're poorer, which makes other people poorer.

2) Because it starts with an illusion of poverty. The first cause of recessions in a Keynesian model is "animal spirits", or in layman's terms, irrational fear of financial collapse. Viewed from this perspective, stimulus is a hack that undoes the irrationality that caused the problem in the first place (and because it's caused by irrationality they can feel confident it is a problem).

3) This is actually one of my biggest problems with Keynesian theory. If it strikes you as counter-intuitive or silly, I'm not going to dissuade you.

One final point: The reason I replied to your initial comment in the first place, was your suggestion that all economists are obsessed with maximising measured GDP over everything else.

But many economists don't deal with GDP at all. When I was learning labour market theory we were taught that once people's wage rate gets high enough, one could expect them to work fewer hours since the demand for leisure time increases with income. There was never a suggestion that this was anything to be concerned about, the goal is utility, not income.

In environmental economics I recall reading a paper by Robert Solow (the seminal figure in the theory of economic growth) arguing that it was important to consider changes in environmental quality along with GDP, to get a better picture of how well off people really are.

I look at what I have been taught in economics, and I simply can't square it with your view of the profession. Some kinds of economists tend to be obsessed with growth, but they tend to be economists who specialise in economic growth. The rest of us have other pursuits, and other obsessions.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-18T21:08:01.395Z · LW(p) · GW(p)

Alright, I'll let anyone judge for themselves if the canonical Keynesian replies reveal a truly grounded understanding of what counts as "helping the economy".

you really need to talk to a Keynesian macroeconomist if you want to discuss most of this stuff. For one thing my ability to taboo certain words is affected by the fact I don't have a very solid grip on the theory ...

Forget Keynesian theory for a minute: I want to know if you have the understanding I expect of whatever theory it is you do endorse. Can you taboo that theory's terminology and ground it in layperson-level fundamentals? Can you force me to care about whatever jargon you do in fact use?

Because, at risk of sounding rude, I don't think you've acquired this "Level 2" understanding, and I don't think you're atypical among economists in lacking it -- from what I've read of Mankiw, Sumner, and Krugman, they don't have it either.

(btw, you call yourself an economist but don't have a grip on Keynesian theory? Isn't that pretty much required these days?)

One final point: The reason I replied to your initial comment in the first place, was your suggestion that all economists are obsessed with maximising measured GDP over everything else.

But many economists don't deal with GDP at all. ...I look at what I have been taught in economics, and I simply can't square it with your view of the profession. Some kinds of economists tend to be obsessed with growth, but they tend to be economists who specialise in economic growth.

Sure -- I only meant that economic policy advocates who are concerned about aggregate economic variables are obsessed with GDP as one of those variables, but that should be assumed from context. Obviously, you're not going to care about GDP in your capacity as a microeconomist of company behavior.

Replies from: James_K
comment by James_K · 2010-06-18T21:36:43.037Z · LW(p) · GW(p)

Because, at risk of sounding rude, I don't think you've acquired this "Level 2" understanding ... (btw, you call yourself an economist but don't have a grip on Keynesian theory? Isn't that pretty much required these days?)

On macro policy I doubt I have level 2 understanding. I had to take papers in macro at university, and I was able to get reasonable grades on them, but level 0 or 1 understanding is sufficient to do that.

My guess is that if you asked a Keynesian why they care, they would say that boom-bust cycles create uncertainty and fear in people because they don't know if they're going to lose their job (and they want their job, or they'd have already quit), and that by taming the boom-bust cycle people will have a more certain and therefore more pleasant life.

Equally if you asked a development economist, they would point to the misery in third world countries and for wealthy countries point out that productivity growth means being able to do more with less, and whether you want to have more, or want to do less, that's a win. Unemployed people are by definition people who want a job but don't have one, so concern about unemployment is easy to work out.

And as for me, well the reason I care about allocative efficiency is that allocative efficiency is the attempt to match reality to people's preferences as well as is possible under current constraints. How do we use our resources and knowledge to create the things people want and how do we get them to the people who want them the most?

The market does a pretty good job of this most of the time, but it does fail sometimes. And when it fails there are things government can do to improve matters, but the government can fail too, so you have to balance the imperfections of the market against the imperfections of government and work out which set of imperfections is more problematic. If I succeed, or if people like me succeed, then people will have more of what they want, be that flat-screen TVs, or cars, or clean air, or time with their families. Not everything falls within economics' purview, of course; love and truth and beauty are things I can't help with. But for everything else, my goal is to help the market match infinite wants with finite resources and imperfect information.

Sure -- I only meant that economic policy advocates who are concerned about aggregate economic variables are obsessed with GDP as one of those variables, but that should be assumed from context.

Perhaps it should have been, but I failed to assume this. And microeconomics is a lot wider than company behaviour; it covers pretty much everything but GDP and unemployment.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-18T22:42:15.648Z · LW(p) · GW(p)

And as for me, well the reason I care about allocative efficiency is that allocative efficiency is the attempt to match reality to people's preferences as well as is possible under current constraints. How do we use our resources and knowledge to create the things people want and how do we get them to the people who want them the most?

That wasn't the question or contributory thereto, though it shows you can ground one concept.

The question is, whatever model/theory you have of the economy, are its predicates fully grounded in what laypersons care about? You mentioned things people care about, but not how they fit into the model that you advocate.

Replies from: James_K
comment by James_K · 2010-06-18T23:12:03.616Z · LW(p) · GW(p)

Allocative efficiency is what I work with. If you asked me why I care about GDP, my response would be, "I don't, particularly".

As for my economic model, I can't give you a full rundown in a comment, but here's the short version: 1) Level 1 is the fully ideal version, unrealistic, but useful for grounding the whole thing in people's preferences. It basically rests on the notion that if you make a battery of assumptions, voluntary exchange will result in allocative efficiency: if person A values something more than person B, they will trade, either directly or through side trades, until person A has it (there's a toy sketch of this at the end of this comment). Yes, there are a lot of reasons this doesn't work in practice, but that's level 2.

2) Level 2 picks at all those assumptions in level 1. Things like externalities (like pollution), imperfect information, irrational behaviour, imperfect competition, transaction costs, and other grit in the gears. These things cause violations of the assumptions in 1, and therefore prevent potentially efficiency-enhancing trades from occurring. The academic work at level 2 is focused around identifying these problems and considering possible solutions a government could introduce to correct for them.

3) Level 3 looks at the ability of government to effectively implement the policies identified at level 2. Theories like social choice theory (the ability of voting systems to effectively aggregate votes into social preferences) and public choice theory (how well do governments act as agents of the voting public). The academic work at level 3 is focused around identifying the limitations of real world governments, and identifying the side-effects of badly implemented policies.

Level 1 is all about individual preferences, not attempting to measure them directly because you can't, but rather in setting up a system so people can sort it out themselves.

As for how GDP factors in, well, it doesn't directly. Macro and micro aren't integrated; they haven't been since Keynes. You learn about them in different courses, and people tend not to specialise in both, so there's a gap there. Hence the reason I don't care about GDP per se.

Now productivity I care about, because higher productivity means more resources for people to trade with and more preferences can be satisfied. I care about unemployment because it implies people are willing to make a trade, but unable to do so due to some bug in the system, either a level 2 problem (market failure), or a level 3 problem (government failure).
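
And here's the toy sketch of the level 1 idea promised above -- the names, valuations, and trading rule are invented purely for illustration, and none of the level 2 or 3 frictions are modelled:

```python
# A toy sketch of "level 1": with free exchange and none of the level 2
# frictions, an item drifts to whoever values it most.

import random

random.seed(1)
valuations = {name: round(random.uniform(1, 100), 1) for name in "ABCDEF"}
owner = "A"

# Offer the item around in a few passes; a trade happens whenever someone
# values it more than the current owner does (any price strictly between the
# two valuations leaves both sides better off).
for _ in range(3):
    for buyer in random.sample(list(valuations), k=len(valuations)):
        if valuations[buyer] > valuations[owner]:
            owner = buyer

print(owner == max(valuations, key=valuations.get))   # True: the highest valuer ends up holding it
```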

comment by Vladimir_M · 2010-06-16T20:12:29.809Z · LW(p) · GW(p)

James_K:

I am an economist and I assure you economists don't subscribe to the "measured GDP is everything" view you attribute to them.

Aside from the standard arguments about the shortcomings of GDP, my principal objection to the way economists use it is the fact that only the nominal GDP figures are a well-defined variable. To make sensible comparisons between the GDP figures for different times and places, you must convert them to "real" figures using price indexes. These indexes, however, are impossible to define meaningfully. They are produced in practice using complicated, but ultimately arbitrary number games (and often additionally slanted due to political and bureaucratic incentives operating in the institutions whose job is to come up with them).

In fact, when economists talk about "nominal" vs. "real" figures, it's a travesty of language. The "nominal" figures are the only ones that measure an actual aspect of reality (even if one that's not particularly interesting per se), while the "real" figures are fictional quantities with only a tenuous connection to reality.

Replies from: realitygrill, James_K
comment by realitygrill · 2010-06-17T04:14:17.073Z · LW(p) · GW(p)

It's pretty easy to get this sort of view just reading books. In my (limited) experience, there are a fair percentage of divergent types that are not like this - and they tend to be the better economists.

You may like Morgenstern's book On the Accuracy of Economic Observations. How I rue the day I saw this in a used bookstore in NY and didn't have the cash to buy it..

EDIT: fixed title name

Replies from: Vladimir_M, Vladimir_M, NancyLebovitz
comment by Vladimir_M · 2010-06-17T23:34:04.831Z · LW(p) · GW(p)

I'm going through Morgenstern's book right now, and it's really good. It's the first economic text I've ever seen that tries to address, in a systematic and no-nonsense way, the crucial question of whether various sorts of numbers routinely used by economists (and especially macroeconomists) make any sense at all. That this book hasn't become a first-rank classic, and is instead out of print and languishing in near-total obscurity, is an extremely damning fact about the intellectual standards of the economic profession.

I've also looked at some other texts by Morgenstern I found online. I knew about his work in game theory, but I had no idea that he was such an insightful contrarian on the issues of economic statistics and aggregates. He even wrote a scathing critique of the concept of GNP/GDP (a more readable draft is here). Unfortunately, while this article sets forth numerous valid objections to the use of these numbers, it doesn't discuss the problems with price indexes that I pointed out in this thread.

comment by Vladimir_M · 2010-06-17T06:29:19.920Z · LW(p) · GW(p)

realitygrill:

It's pretty easy to get this sort of view just reading books. In my (limited) experience, there are a fair percentage of divergent types that are not like this - and they tend to be the better economists.

Could you please list some examples? Aside from Austrians and a few other fringe contrarians, I almost always see economists talking about the "real" figures derived using various price indexes as if they were physicists talking about some objectively measurable property of the universe that has an existence independent of them and their theories.

You may like Morgenstern's book On the Accuracy of Economic Measurements. How I rue the day I saw this in a used bookstore in NY and didn't have the cash to buy it..

Thanks for the pointer! Just a minor correction: apparently, the title of the book is On the Accuracy of Economic Observations. It's out of print, but a PDF scan is available (warning -- 31MB file) in an online collection hosted by Stanford University.

I just skimmed a few pages, and the book definitely looks promising. Thanks again for the recommendation!

Replies from: realitygrill
comment by realitygrill · 2010-06-19T02:41:48.453Z · LW(p) · GW(p)

Could you please list some examples? Aside from Austrians and a few other fringe contrarians, I almost always see economists talking about the "real" figures derived using various price indexes as if they were physicists talking about some objectively measurable property of the universe that has an existence independent of them and their theories.

I meant personally - I did my undergrad in economics. I'm extremely skeptical of macroeconomics and currently throw in with the complex adaptive system dynamicists and the behavioral economists (and Hansonian cynicism; that's just me). But, to give an example, Krugman has done quite a bit of work in the complexity arena.

I just skimmed a few pages, and the book definitely looks promising. Thanks again for the recommendation!

Yeah, you're welcome! The first I heard of that book was someone using its example of calculating in-flows and out-flows of gold: each country's estimates differed by orders of magnitude or something like that, and sometimes even in sign.

comment by NancyLebovitz · 2010-06-18T12:17:10.950Z · LW(p) · GW(p)

There are a number of reasonably priced copies on amazon.

Replies from: realitygrill
comment by realitygrill · 2010-06-19T02:20:42.433Z · LW(p) · GW(p)

Oh good, they certainly weren't that reasonable the last I checked.

comment by James_K · 2010-06-17T08:50:23.488Z · LW(p) · GW(p)

It's not so much a matter of being overconfident as it is not listing the disclaimers at every opportunity. The Laspeyres Price Index (the usual type of price index) has well-understood limitations (specifically that it overestimates consumer price growth as it doesn't deal with technological improvement and substitution effects very well), but since we don't have anything better, we use it anyway.

"Real" is a term of art in economics. It's used to reflect inflation-adjusted figures because all nominal GDP tells you is how much money is floating around, which isn't all that useful. real GDP may be less certain, but it's more useful.

Bear in mind that everything economists use is an estimate of a sort, even nominal GDP. Believe it or not, they don't actually ask every business in the country how much they produced and/or received in income (which is why the income and expenditure methods of calculating GDP give slightly different numbers, although they should give exactly the same result in theory). The reason this may not be readily apparent is that most non-technical audiences start to black out the moment you talk about calculating a price index (hell, it makes me drowsy), and technical audiences already understand the limitations.
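
For anyone who hasn't seen the mechanics, here's a minimal sketch of a Laspeyres index and the substitution problem mentioned above; the goods, prices, and quantities are invented for illustration:

```python
# A minimal Laspeyres-index sketch with invented prices and quantities,
# showing the substitution problem: the fixed base-period basket overstates
# the rise in the cost of living once people substitute away from what got dearer.

base_prices = {"bread": 2.0, "rice": 1.0}
base_basket = {"bread": 10,  "rice": 10}    # what people bought in the base period

new_prices  = {"bread": 4.0, "rice": 1.0}   # bread doubles in price...
new_basket  = {"bread": 4,   "rice": 16}    # ...so people actually buy more rice instead

def basket_cost(prices, basket):
    return sum(prices[good] * qty for good, qty in basket.items())

# Laspeyres: price the *old* basket at new prices, relative to old prices.
laspeyres = basket_cost(new_prices, base_basket) / basket_cost(base_prices, base_basket)

# For comparison: the bundle people actually choose now, priced new vs. old.
chosen = basket_cost(new_prices, new_basket) / basket_cost(base_prices, new_basket)

print(round(laspeyres, 2))   # 1.67 -- the fixed basket says the cost of living rose 67%
print(round(chosen, 2))      # 1.33 -- the substituted bundle rose considerably less
```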

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-17T17:45:17.409Z · LW(p) · GW(p)

James_K:

"Real" is a term of art in economics. It's used to reflect inflation-adjusted figures because all nominal GDP tells you is how much money is floating around, which isn't all that useful. real GDP may be less certain, but it's more useful.

You're talking about the "real" figures being "less certain," as if there were some objective fact of the matter that these numbers are trying to approximate. But in reality, there is no such thing, since there exists no objective property of the real world that would make one way to calculate the necessary price index correct, and others incorrect.

The most you can say is that some price indexes would be clearly absurd (e.g. one based solely on the price of paperclips), while others look fairly reasonable (primarily those based on a large, plausible-looking basket of goods). However, even if we limit ourselves to those that look reasonable, there is still an infinite number of different procedures that can be used to calculate a price index, all of which will yield different results, and there is no objective way whatsoever to determine which one is "more correct" than others. If all the reasonable-looking procedures led to the same results, that would indeed make these results meaningful, but this is not the case in reality.

Or to put it differently, an "objective" price index is a logical impossibility, for at least two reasons. First, there is no objective way to determine the relevant basket of goods, and different choices yield wildly different numbers. Second, the set of goods and services available in different times and places is always different, and perfect equivalents are normally not available, so different baskets must be used. Therefore, comparisons of "real" variables invariably involve arbitrary and unwarranted assumptions about the relative values of different things to different people. Again, of course, different arbitrary choices of methodology yield different numbers here.

(By the way, I find it funny how neoclassical economists, who hold it as a fundamental axiom that value is subjective, unquestioningly use price indexes without stopping to think that the basic assumption behind the very notion of a price index is that value is objective and measurable after all.)
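
To make the first point concrete, here is a trivial sketch (with invented prices and quantities) of how two equally plausible baskets, tracked over the same period, report different inflation -- and nothing in the data tells you which figure is the "real" one:

```python
# Two plausible household baskets facing the same price changes yield
# noticeably different "inflation" figures. All numbers are invented.

old_prices = {"rent": 800, "petrol": 1.0, "food": 100}
new_prices = {"rent": 840, "petrol": 1.6, "food": 105}

basket_a = {"rent": 1, "petrol": 40,  "food": 3}   # a household that drives little
basket_b = {"rent": 1, "petrol": 400, "food": 3}   # a household that drives a lot

def index(basket):
    old = sum(old_prices[g] * q for g, q in basket.items())
    new = sum(new_prices[g] * q for g, q in basket.items())
    return new / old

print(round(index(basket_a), 3))   # 1.069 -- about 7% for this household
print(round(index(basket_b), 3))   # 1.197 -- nearly 20% for this one
```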

Replies from: Clippy, NancyLebovitz, James_K, None
comment by Clippy · 2010-06-18T20:24:59.410Z · LW(p) · GW(p)

The most you can say is that some price indexes would be clearly absurd (e.g. one based solely on the price of paperclips), while others look fairly reasonable (primarily those based on a large, plausible-looking basket of goods)

Very true. A good general measure in human economic systems should NOT merely look at the ease of availability of finished paperclips. It should also include, in the "basket", such things as extrudable metal, equipment for detecting and extracting metal, metallic wire extrusion machines, equipment for maintaining wire extrusion machines, bend radius blocks, and so forth.

Thank you for pointing this out; you are a relatively good human.

By the way, I find it funny how neoclassical economists, who hold it as a fundamental axiom that value is subjective

That is a very poor inference on their part.

comment by NancyLebovitz · 2010-06-18T13:17:36.060Z · LW(p) · GW(p)

Here's a crude metric I use for gauging the relative goodness of societies as places to live: Immigration vs. emigration.

It's obviously fuzzy-- you can't get exact numbers on illegal migration, and the barriers (physical, legal, and cultural) to relocation matter, but have to be estimated. So does the possibility that one country may be better than another, but a third may be enough better than either of them to get the immigrants.

For example, the evidence suggests that the EU and the US are about equally good places to live.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-18T19:16:19.606Z · LW(p) · GW(p)

I don't think that's a good metric. Societies that aren't open to mass immigration can have negligible numbers of immigrants regardless of the quality of life their members enjoy. Japan is the prime example.

Moreover, in the very worst places, emigration can be negligible because people are too poor to pay for a ticket to move anywhere, or are prohibited from leaving.

Replies from: None, NancyLebovitz
comment by [deleted] · 2010-06-19T11:58:23.495Z · LW(p) · GW(p)

But "given perfect knowledge of all market prices and individual preferences at every time and place, as well as unlimited computing power", you could predict how people would choose if they were not faced with legal and moving-cost barriers - e.g. imagine a philanthropist willing to pay the moving costs. So your objection to this metric seems to be a surmountable one, in principle, assuming perfect knowledge etc. The main remaining barrier to migration may be sentimental attachment - but given perfect knowledge etc. one could predict how the choices would change without that remaining barrier.

Applying this metric to Europa versus Earth, presumably Europans would choose to stay on Europa and humans would choose to stay on Earth even with legal, moving-cost, and sentimental barriers removed, indeed both would pay a great deal to avoid being moved.

In contrast to Europans versus humans, humans-of-one-epoch are not very different from humans-of-another-epoch.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-10-05T01:58:18.928Z · LW(p) · GW(p)

Excellent point -- although I would pay a good deal to move to Europa, given a few days worth of air and heat.

comment by NancyLebovitz · 2010-06-18T19:24:16.951Z · LW(p) · GW(p)

A fair point, though I think societies like that are pretty rare. Any other notable examples?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-18T19:42:35.449Z · LW(p) · GW(p)

Off the top of my head, I know that Finland had negligible levels of immigration until a few years ago. Several Eastern European post-Communist countries are pretty decent places to live these days (I have in mind primarily the Czech Republic), but still have no mass immigration. As far as I know, the same holds for South Korea.

Regarding emigration, the prime example were the communist countries, which strictly prohibited emigration for the most part (though, rather than looking at the numbers of emigrants, we could look at the efforts and risks many people were ready to undertake to escape, which often included dodging snipers and crawling through minefields).

comment by James_K · 2010-06-18T20:13:22.093Z · LW(p) · GW(p)

First, there is no objective way to determine the relevant basket of goods, and different choices yield wildly different numbers.

The basket used is based on a representation of what people are currently consuming. This means we don't have to second-guess people's preferences. Unique goods like houses pose a problem, but there's not really anything we can do about that, so the normal process is to take an average of existing houses.

Second, the set of goods and services available in different times and places is always different, and perfect equivalents are normally not available, so different baskets must be used.

Which is a well understood problem. Every economist knows this, but what would you have us do? It is necessary to inflation-adjust certain statistics, and if the choice is between doing it badly and not doing it at all, then we'll do it badly. Just because we don't preface every sentence with this fact doesn't mean we're not aware of it.

Replies from: SilasBarta, NancyLebovitz
comment by SilasBarta · 2010-06-18T21:15:47.773Z · LW(p) · GW(p)

Just to avoid confusion among readers, I want to distance myself from part of Vladimir_M's position. While I agree with many of the points he's made, I don't go so far as to say that CPI is a fundamentally flawed concept, and I agree with you that we have to pick some measure and go with it; and that the use of it does not require its caveats to be restated each time.

However, I do think that, for the specific purpose it is used for, it is horribly flawed in noticeable, fixable ways, and that economists don't make these changes because of lost purpose syndrome -- they get so focused on this or that variable that they're disconnected from the fundamental it's supposed to represent. They're doing the economic equivalent of suggesting to generals that their living soldiers be burned to ashes so that the media will stop broadcasting images of dead soldiers' bodies being brought home.

Replies from: James_K
comment by James_K · 2010-06-18T21:53:29.320Z · LW(p) · GW(p)

I wouldn't be in a good position to determine if it's lost purpose syndrome since I'm an insider, but I would suggest that path dependence has a lot to do with it.

Price indices are produced by governments, which are notoriously averse to change. And what's worse, the broad methodology is dictated by international standards, so if an economist or some other intelligent person comes up with a better price index, they have to convince the body of economists and statisticians that they have a good idea, and then convince the majority of OECD countries (at a minimum) that their method is worth the considerable effort of changing every country's methodology.

That's a high hurdle to cross.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-18T22:07:58.334Z · LW(p) · GW(p)

On my blog I suggested using insulin prices as a good proxy for inflation. That should be pretty easy for economists to find, even historical data. One economist could find the historical data for one country and use it as a competing measure. No collective action problem to solve there! Just a research paper to present.

(I can't find it through Google searches, but economists should be able to get access to the appropriate databases.)

Replies from: JoshuaZ, Blueberry
comment by JoshuaZ · 2010-06-18T22:14:20.153Z · LW(p) · GW(p)

The technology to manufacture insulin has been getting a lot cheaper since the late 1970s when bacteria were first used to synthesize insulin (before that it had to be extracted from animals). That process has become even easier since the process for growing E. coli has become much more efficient.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-18T22:38:46.859Z · LW(p) · GW(p)

True, that was just one layman's brief pondering of an alternate metric, and I hadn't realized the secular technology trend. I was mainly looking for something that can't be debased (because then people would die), but that also has minimal volatility in demand, supply, and speculation, and requires numerous inputs so as to smooth out the effect of local shocks.

And perhaps I'm running into a Goodhart trap myself -- today, the problem seems to be inflation being hidden via product degradation, but if I pick a metric mainly optimized for that, it will get worse over time. So finding a good (or basket of goods) that covers all those criteria would require more work -- but product debasement is pretty clearly being ignored today.

(Note that precious metals are sold in a way that prevents them from being secretly debased, but also are heavily influenced by global extraction rates, and are heavily speculated on and hoarded.)

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-18T23:15:09.913Z · LW(p) · GW(p)

True, that was just one layman's brief pondering of an alternate metric, and I hadn't realized the secular technology trend. I was mainly looking for something that can't be debased because then people will die, but that is also has minimal volatility in demand, supply, and speculation, and requires numerous inputs so as to smooth out the effect of local shocks.

Anything that has numerous inputs will likely be something which is complicated to manufacture and therefore will have increasing efficiency as the technology improves. I can't think of a single good that fits your criteria and hasn't had substantial technological advancement in how it is made in the last 30 years. This sort of approach might work if one had very steady data for some long historical period without much technological advancement.

comment by Blueberry · 2010-06-18T22:10:57.824Z · LW(p) · GW(p)

That's making your inflation rate strongly tied to one particular technology. A breakthrough making insulin synthesis easier, or increased diabetes rates, would affect insulin prices but not the rest of the economy.

comment by NancyLebovitz · 2010-06-18T20:19:35.623Z · LW(p) · GW(p)

Would error bars be a bad thing?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-18T20:49:24.623Z · LW(p) · GW(p)

Economists could calculate error bars that would say how closely the calculated aggregate figures approximate their exact values according to definitions. This is normally not done, and as Morgenstern noted in the book discussed elsewhere in the thread, the results would be quite embarrassing, since they'd show that economists regularly talk about changes in the second, third, or even fourth significant digit of numbers whose error bars are well into double-digit percentages.

However, when it comes to the more essential point I've been making, error bars wouldn't make any sense, since the problem is that there is no true value out there in the first place, just different arbitrary conventions that yield different results, neither of which is more "true" than the others.
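
For illustration, here is a quick Monte Carlo cartoon of the first point. The error magnitude is an assumption picked for the example, and errors in the two years are (unrealistically) treated as independent, so treat it as a sketch of the problem rather than a calculation about any real statistic:

```python
# If each year's level is only known to within a few percent, a growth rate
# quoted to a tenth of a point is mostly noise. The 3% error figure is assumed.

import random

random.seed(0)
true_last, true_now = 1000.0, 1020.0      # "true" growth of 2.0%
level_error = 0.03                        # assumed 3% (one sigma) error on each level

estimates = []
for _ in range(100_000):
    measured_last = true_last * (1 + random.gauss(0, level_error))
    measured_now  = true_now  * (1 + random.gauss(0, level_error))
    estimates.append(measured_now / measured_last - 1)

estimates.sort()
low, high = estimates[int(0.025 * len(estimates))], estimates[int(0.975 * len(estimates))]
print(f"95% of measured growth rates fall between {low:.1%} and {high:.1%}")
# Roughly -6% to +10%: the interval swamps any distinction between, say,
# 1.9% and 2.1% growth.
```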

Replies from: James_K
comment by James_K · 2010-06-18T21:09:08.564Z · LW(p) · GW(p)

Economists could calculate error bars that would say how closely the aggregate figures approximate the exact value as defined. This is normally not done, and as Morgenstern noted in the book discussed elsewhere in the thread, the results would be quite embarrassing, since they'd show that economists regularly talk about changes in the second, third, or even fourth significant digit of numbers whose error bars are well into double-digit percentages.

There's an old joke: "How can you tell macroeconomists have a sense of humour? They use decimal points." I'll admit spurious precision is a problem with quite a bit of economic reporting. Remember that these statistics are produced by governments, not academics, and politicians can have trouble grokking error bars.

the problem is that there is no true value out there in the first place, just different arbitrary conventions that yield different results, neither of which is more "true" than the others.

Actually, that's not really the case. There is an ideal; it's just that you can't do it. If you knew everyone's preferences and information and endowments of income, you could work out how people's consumption would change as real incomes and relative prices changed, and so figure out what the right basket of goods is to use for the index at every point in time (the right bundle is whatever bundle consumers would actually pick in a given situation).

But in practice you can't get the information you'd need to do this, and that information would be constantly changing anyway. In practice what statistical agencies do is develop a basket of goods based on current consumption and review it every decade or so. This means the index overestimates inflation (the estimates I've seen put it at about 1 percentage point per year) because when prices rise, people change their consumption patterns and we can't predict how until it's already happened.

This is a flawed procedure, but it's not arbitrary; it's an honest effort to approximate the ideal price index as well as we can, given the resources at our disposal.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-19T00:38:09.085Z · LW(p) · GW(p)

James_K:

There is an ideal; it's just that you can't do it. If you knew everyone's preferences and information and endowments of income, you could work out how people's consumption would change as real incomes and relative prices changed, and so figure out what the right basket of goods is to use for the index at every point in time (the right bundle is whatever bundle consumers would actually pick in a given situation). But in practice you can't get the information you'd need to do this, and that information would be constantly changing anyway.

To the best of my understanding, what you write above seems to concede that even under the assumption of omniscience, when we consider different times and/or places, with different prices, incomes, and preferences of individuals -- and different sets of goods available on the market, though this can be modeled by assigning infinite prices to unavailable goods -- there is, after all, no unique objectively correct way to define equivalent baskets of goods. You could calculate the baskets that would actually be consumed at each time and place, but not the ratio of their true values (whatever that might mean), which would be necessary for their use as the basis for a true and objective price index.

Am I wrong in this conclusion, and if I am, would you be so kind to explain how?

In practice what statistical agencies do is develop a basket of goods based on current consumption and review it every decade or so. [...] This is a flawed procedure, but it's not arbitrary; it's an honest effort to approximate the ideal price index as well as we can, given the resources at our disposal.

I would be really grateful if you could spell out what exactly you mean by "the ideal price index" when it comes to comparing different times and places, given my above observation. Also, you ignore the question of how exactly baskets are "reviewed," which is a step that requires an arbitrary choice of the new basket that will be declared as equivalent to the old.

Moreover, different kinds of "honest efforts" apparently produce very different figures. The procedures for calculating official price indexes have been changed several times in recent decades in ways that make the numbers look very different compared to what the older methods would yield. (And curiously, the numbers according to the new procedures somehow always end up looking better.) Would you say, realistically, that this is purely because we've been moving closer to the truth thanks to our increasing knowledge and insight?

Replies from: James_K
comment by James_K · 2010-06-19T05:14:43.609Z · LW(p) · GW(p)

You could calculate the baskets that would actually be consumed at each time and place, but not the ratio of their true values (whatever that might mean), which would be necessary for their use as the basis for a true and objective price index.

The concept of "true value" is incoherent, at least in my model of reality. The correct price to attach to a good at any time is its market price at that time. If you had the set of information I listed in my last comment, you'd have the market prices, since they're implied by the other stuff.

Also, you ignore the question of how exactly baskets are "reviewed," which is a step that requires an arbitrary choice of the new basket that will be declared as equivalent to the old.

I think we're using different definitions of arbitrary. To me, arbitrary means that there is no correct answer, and all options are equally valid. I don't accept that as a legitimate description of the process: there are judgement calls, but ambiguity is inevitable in the social sciences; you either get used to it or find something else to study. Now if you're using arbitrary in the way I'm using ambiguous, then I don't think we disagree, except that I think it's less problematic than you do, since as soon as you start dealing with people, things get so complex that ambiguity is inevitable.

And curiously, the numbers according to the new procedures somehow always end up looking better.

Now, here you have a point. The Laspeyres Index is biased upward; it may be an honest effort, but not one that's Bayes-correct. But Bayesian rationality has not penetrated the discipline at this time, and as such a biased estimate is allowed to remain, primarily because there's no methodologically clean way to remove the bias (you'd need to be able to predict things like quality changes and how people change their spending patterns in response to price changes), and without a background in Bayesian probability theory I think most economists would baulk at adding a fudge factor into the calculation.

Replies from: Pavitra, Vladimir_M
comment by Pavitra · 2010-06-19T06:06:50.892Z · LW(p) · GW(p)

The concept of "true value" is incoherent, at least in my model of reality. The correct price to attach to a good at any time is its market price at that time. If you had the set of information I listed in my last comment, you'd have the market prices, since they're implied by the other stuff.

It might be valuable to talk about a "true value" of a given good to a given agent. Yes, the correct price to buy or sell a good at is always the market price; but whether I want to sell at that price or buy at that price depends on how much I want the good. If I sell, then the "true value" of the good to me is less than the current market price; and if I buy, then the "true value" of the good to me is greater than the current market price. In general, the "true value" of a given good to a given agent is the price such that, if the market were trading at that price, that agent would be indifferent regarding whether to buy or sell that good.
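
A small numerical sketch of that definition, assuming a made-up quasilinear utility function (the functional form and numbers are purely illustrative):

```python
# Assume an invented quasilinear utility u(q, m) = 30*ln(1 + q) + m over
# quantity q of the good and money m. The agent's "true value" for the next
# unit is the price at which buying it would leave utility exactly unchanged.

import math

def utility(q, money):
    return 30 * math.log(1 + q) + money

def indifference_price(q_held, money_held, lo=0.0, hi=100.0):
    """Bisect for the price at which buying one more unit changes nothing."""
    base = utility(q_held, money_held)
    for _ in range(60):
        mid = (lo + hi) / 2
        if utility(q_held + 1, money_held - mid) > base:
            lo = mid      # still worth buying at this price, so the value is higher
        else:
            hi = mid
    return (lo + hi) / 2

# An agent holding 2 units values the next one at about 8.63; an agent holding
# 9 units values the next at only about 2.86 -- same good, different "true values".
print(round(indifference_price(2, 100), 2))
print(round(indifference_price(9, 100), 2))
```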

Replies from: James_K
comment by James_K · 2010-06-19T06:21:53.320Z · LW(p) · GW(p)

Yes that is a coherent definition of true value. It's not a concept that maps well to price indices though.

comment by Vladimir_M · 2010-06-19T07:50:53.067Z · LW(p) · GW(p)

James_K:

The concept of "true value" is incoherent, at least in my model of reality.

I heartily agree -- but what is a price index, other than an attempt at answering the question of what the "true value" of a unit of currency is? What are the fabled "real" values other than attempts at coming up with a coherent concept of "true value"?

The correct price to attach to a good at any time is its market price at that time. If you had the set of information I listed in my last comment, you'd have the market prices, since they're implied by the other stuff.

Yes, but even given perfect knowledge of all market prices and individual preferences at every time and place, as well as unlimited computing power, I still don't see how this solves the problem. We can find out the average basket consumed per individual (or household or whatever) and its price at each time and place, but what next? How do we establish the relative values of these baskets, whose composition will be different both quantitatively and qualitatively?

To clarify things further, I'd like to ask you a different question. Suppose the moon Europa is inhabited by intelligent jellyfish-like creatures floating in its inner ocean. The Europan economy is complex, technologically advanced, and money-based, but it doesn't have any goods or services in common with humans, except for a few inevitable ones like e.g. some basic chemical substances, and there is no trade whatsoever between Earth and Europa due to insurmountable distances. Would it make sense to define a price index that would allow us to compare the "real" values of various aggregate variables in the U.S. and on Europa?

If not, what makes the U.S./Europa situation essentially different from comparing different places and epochs on Earth? Or does the meaningfulness of price indexes somehow gradually fall as differences accumulate? But then how exactly do we establish the threshold, and make sure that the differences across decades and continents here on Earth don't exceed it?

I think we're using different definitions of arbitrary. To me, arbitrary means that there is no correct answer, and all options are equally valid. I don't accept that as a legitimate description of the process, there are judgement calls, but ambiguity is inevitable in the social sciences, you either get used to it, or find something else to study.

Well, if macroeconomists and other social scientists were just harmless and benign philosophers, I'd be happy to leave them to ponder their ambiguities in peace!

Trouble is, to paraphrase Trotsky's famous apocryphal quote, you may not be interested in social science, but social science is interested in you. In the present Western political system, whatever passes for reputable high-profile social science will be used as basis for policies of government and various powerful entities on its periphery, which can have catastrophic consequences for all of us if these ideas are too distant from reality. (And arguably already has.) Macroeconomics is especially critical in this regard.

Replies from: James_K
comment by James_K · 2010-06-19T08:36:10.076Z · LW(p) · GW(p)

but what is a price index, other than an attempt at answering the question of what the "true value" of a unit of currency is? What are the fabled "real" values other than attempts at coming up with a coherent concept of "true value"?

No, no. A price index is an attempt to work out how much things cost relative to what they used to cost. Real GDP is an attempt to measure how much stuff is being produced relative to how much stuff was being produced. GDP is not an attempt to determine what that stuff is worth in a metaphysical or personal sense; the production is merely valued at its market price (adjusted for inflation, in the case of real GDP). To a pacifist, the portion of GDP spent on the military is worth less than nothing, but it's still part of GDP because it was stuff that was produced.
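
In case the arithmetic helps, this is all that "adjusted for inflation" means here -- divide the nominal change by the change in whatever price index you've chosen (numbers invented for illustration):

```python
# The deflator arithmetic behind "real" figures, with invented numbers:
nominal_growth = 0.10        # nominal GDP rose 10%
index_growth = 0.06          # the chosen price index rose 6%
real_growth = (1 + nominal_growth) / (1 + index_growth) - 1
print(f"{real_growth:.1%}")  # 3.8% -- and it inherits whatever ambiguity the index has
```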

Or does the meaningfulness of price indexes somehow gradually fall as differences accumulate?

Yes, the closer the consumption patterns of the two economies being compared, the more useful the comparison is. If there were no common goods between two economies it would be impossible to compare them meaningfully. As to where to draw the line, well, I wish I had a good answer for you, but I don't. All I can say is that the value of the comparison decays over "distance" (meaning differences in consumption patterns).

Some economists have created more specialised indices for long-run comparisons; William Nordhaus created a price index for light (based on hours of work per candela-hour) from the stone age to modern times. This is a little unusual at the moment since macroeconomists don't usually do comparisons over long time periods (it's fiendishly hard to get data going back before the 20th Century on most indicators), but it shows you that we are aware of the limitations of our tools, including price indices.

In the present Western political system, whatever passes for reputable social science will be used as basis for policies of government and various powerful entities on its periphery, which can have catastrophic consequences for all of us if these ideas are too distant from reality. Macroeconomics is especially critical in this regard.

I agree wholeheartedly; good-quality policy advice is something I take very seriously. The social science we have has significant limitations, but right now we don't have anything better. I very much doubt the quality of our policy would improve if politicians paid less attention to their advisers than they do at the moment. So we do what we can, helping things along as much as our knowledge and the institutional frameworks in which decisions are made will permit. What else can you do?

Replies from: Vladimir_M, Vladimir_M
comment by Vladimir_M · 2010-06-19T21:26:09.297Z · LW(p) · GW(p)

James_K:

Some economists have created more specialised indices for long-run comparisons; William Nordhaus created a price index for light (based on hours of work per candela-hour) from the stone age to modern times. This is a little unusual at the moment since macroeconomists don't usually do comparisons over long time periods (it's fiendishly hard to get data going back before the 20th Century on most indicators), but it shows you that we are aware of the limitations of our tools, including price indices.

That's a very interesting paper (available here), thanks for the pointer!

As with nearly all papers addressing such topics, parts of it look as if they were purposefully written to invite ridicule, as when he presents estimates of 19th century prices calculated to six significant digits. (Sorry for being snide, but what was that about spurious precision in economics being the fault of politicians?) However, the rest of it presents some very interesting ideas. Here are a few interesting bits I got from skimming it:

  • The mathematical discussion in Section 1.3.2. seems to imply (or rather assume) that even assuming omniscience, a "true price index" (Nordhaus's term) can be defined only for a population of identical individuals with unchanging utility functions. This seems to support my criticisms, especially considering that the very notion of a human utility function is a giant spherical cow.

  • The discussion in the introduction basically says that the way price indexes are done in practice makes them meaningless over periods of significant technological change. But why do we then get all this supposedly scientific research that uses them nonchalantly, not to mention government policy based on them? Nordhaus is, unsurprisingly, reluctant to draw some obvious implications here.

  • Nordhaus considers only the fact that price indexes fail to account for the benefits of technological development, so he keeps insisting that the situation is more optimistic than what they say. But he fails to notice that the past was not necessarily worse in every respect. In many places, for example, it is much less affordable than a few decades ago to live in a conveniently located low-crime neighborhood, and this goal will suck up a very significant percentage of income of all but the wealthiest folks. Moreover, as people's preferences change with time, many things that today's folks value positively would have been valued negatively by previous generations. How to account for that?

  • More to the same point, unless I missed the part where he discusses it, Nordhaus seems oblivious to the fact that much consumption is due to signaling and status competition, not utility derived from inherent qualities of goods. I'm hardly an anti-capitalist leftie, but any realistic picture of human behavior must admit that much of the benefit from economic and technological development ultimately gets sucked up by zero-sum status games. Capturing that vitally important information in a price index is a task that it would be insulting to Don Quixote to call quixotic.

  • Finally, I can't help but notice that in the quest for an objective measure of the price of light, Nordhaus seems to have reinvented the labor theory of value! Talk about things coming back full circle.

Overall, I would ask: can you imagine a paper like this being published in physics or some other natural science, which would convincingly argue that widely used methodologies on which major parts of the existing body of research rest in fact produce spurious numbers -- with the result that everyone acknowledges that the author has a point, and keeps on doing things the same as before?

Replies from: James_K, wedrifid
comment by James_K · 2010-06-20T07:37:09.866Z · LW(p) · GW(p)

As with nearly all papers addressing such topics, parts of it look as if they were purposefully written to invite ridicule, as when he presents estimates of 19th century prices calculated to six significant digits. (Sorry for being snide, but what was that about spurious precision in economics being the fault of politicians?)

[facepalm] OK, I'm not making any excuse for that. Given the magnitude of his findings he doesn't even need them to make his point.

The mathematical discussion in Section 1.3.2. seems to imply (or rather assume) that even assuming omniscience, a "true price index" (Nordhaus's term) can be defined only for a population of identical individuals with unchanging utility functions. This seems to support my criticisms, especially considering that the very notion of a human utility function is a giant spherical cow.

Yes, you can't produce a true price index. But less-than-true price indices can still be useful.

Nordhaus considers only the fact that price indexes fail to account for the benefits of technological development, so he keeps insisting that the situation is more optimistic than what they say. But he fails to notice that the past was not necessarily worse in every respect. In many places, for example, it is much less affordable than a few decades ago to live in a conveniently located low-crime neighborhood, and this goal will suck up a very significant percentage of income of all but the wealthiest folks.

But houses keep getting bigger and you have to account for that too. Besides which, housing is no more than a third of most people's income, at least it is in my country. That is a significant percentage, but it's still less than half. And things keep getting better (or no worse) in the remaining two thirds.

More to the same point, unless I missed the part where he discusses it, Nordhaus seems oblivious to the fact that much consumption is due to signalling and status competition, not utility derived from inherent qualities of goods.

Assuming it's even possible to adjust for that, I'd really want to apply the adjustment to GDP, not prices. Signalling isn't a matter of cost but rather value.

Finally, I can't help but notice that in the quest for an objective measure of the price of light, Nordhaus seems to have reinvented the labor theory of value! Talk about things coming back full circle.

No, you're confusing cost and value. The labour theory of value is the theory that the value of a good derives from the labour taken to produce it. If Nordhaus were using this theory he'd be arguing that the value of light keeps falling. Measuring cost with labour is another thing entirely.

Overall, I would ask: can you imagine a paper like this being published in physics or some other natural science, which would convincingly argue that widely used methodologies on which major parts of the existing body of research rest in fact produce spurious numbers -- with the result that everyone acknowledges that the author has a point, and keeps on doing things the same as before?

No. I recognise this is a problem. I can only imagine they think it's too hard to correct for technological change robustly, but that's not really an excuse. If you can't do it well, it's generally still better to do it badly than not at all. And I didn't realise the research was that old (I've actually never read the paper, I read a summary in a much more recent book). Apparently macroeconomists have more catching up to do than I thought.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-20T09:21:31.408Z · LW(p) · GW(p)

This sentence of yours probably captures the heart of our disagreement:

If you can't do it well, it's generally still better to do it badly than not at all.

We don't seem to disagree that much about the limitations of knowledge in this whole area, epistemologically speaking. Where we really part ways is that I believe that historically, the whole edifice of spurious expertise produced by macroeconomists and perpetuated by gargantuan bureaucracies has been an active force giving impetus for bad (and sometimes disastrous) policies, and that it's overall been a step away from reality compared to the earlier much simpler, but ultimately more realistic conventional wisdom. Whereas you don't accept this judgment.

Given what's already been said, I think this would be a good time to conclude our discussion. Thanks for your input; your comments have, at the very least, made me learn some interesting facts and rethink my opinions on the subject, even if I didn't change them substantially at the end.

(Oh, and you're right that I confused cost and value in that point from my above comment. I was indeed trying to be a bit too much of a smartass there.)

Replies from: James_K
comment by James_K · 2010-06-21T05:26:57.465Z · LW(p) · GW(p)

This sentence of yours probably captures the heart of our disagreement:

If you can't do it well, it's generally still better to do it badly than not at all.

Yes, I think so. It's not that I think macroeconomics has covered itself in glory; it hasn't. But this really is literally the only way for those guys to learn. And I believe it's worth it in the short run, though I'm less sure of that than I was before we started this. Maybe those macro guys should go try micro or something.

Given what's already been said, I think this would be a good time to conclude our discussion. Thanks for your input; your comments have, at the very least, made me learn some interesting facts and rethink my opinions on the subject, even if I didn't change them substantially at the end.

Same here, it's been fun.

comment by wedrifid · 2010-06-19T22:10:10.756Z · LW(p) · GW(p)

William Nordhaus created a price index for light (based on hours of work per candela-hour) from the stone age to modern times.

How much did it cost a cave man to walk outside? Or are we including the time he spent digging renovations to put the sky-light in his roof?

Replies from: SilasBarta
comment by SilasBarta · 2010-06-19T23:15:23.541Z · LW(p) · GW(p)

Heh. Yeah, I'm going to go out on a limb and guess that Nordhaus didn't subtract off the previously-free sunlight lost to global dimming and the attenuation of natural sources of nightlight due to interference from artificial light.

This is NOT to say I'm endorsing some kind of greenie move toward a pre-industrial time just so we can see the undimmed sky or have less "light pollution". I'm just saying that ignoring natural and informal sources of wealth is a bad habit to get into.

Reading paper to see if I can guess them right...

ETA: Ohhhhhh! Can I call 'em or what?

comment by Vladimir_M · 2010-06-19T17:45:03.818Z · LW(p) · GW(p)

James_K:

A price index is an attempt to work out how much things cost relative to what they used to cost. Real GDP is an attempt to measure how much stuff is being produced relative to how much stuff was being produced. GDP is not an attempt to determine what that stuff is worth in a metaphysical or personal sense, the production is merely valued at its market price (adjusted for inflation, in the case of real GDP). To a pacifist, the portion of GDP spent on the military is worth less than nothing, but it's still part of GDP because it was stuff that was produced.

But now we're back to square one. Since different things are produced in different times and places, to produce these "real" figures for comparison, we need to come up with a way to compare apples and oranges (sometimes literally!). Now, if economists just said that they would consider an apple equivalent to an orange for some simple Fermi problem calculation, I'd have no problem with that.

However, what economists use in practice are profoundly complicated methodologies that will tell us that an orange is presently equivalent to 1.138 of an apple, and then we get subtle arguments and policy prescriptions based on the finding that this means an increase in the orange-apple index of 2.31% relative to last year. Here we enter the realm of pure nebulosity, where the indexes and "real" figures stop being vague heuristics where even the order of magnitude is just barely meaningful, and acquire a metaphysical existence of their own, as "real" variables to be calculated to multiple digits of precision, fed into complex mathematical models and policy guidelines, and used to measure reified true, objective value.
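For readers unfamiliar with the mechanics being criticized here, a minimal sketch of what one common construction (a Laspeyres index) actually computes; all prices and quantities below are invented for illustration, and official indexes add many layers of weighting and quality adjustment on top of this.

```python
# Minimal sketch of a Laspeyres-style price index for a two-good "economy".
# All prices and quantities are invented for illustration.

base_prices = {"apple": 0.50, "orange": 0.60}   # prices in the base year
new_prices  = {"apple": 0.55, "orange": 0.72}   # prices in the comparison year
base_basket = {"apple": 1000, "orange": 400}    # base-year consumption basket

def laspeyres_index(p_base, p_new, basket):
    """Cost of the base-year basket at new prices, relative to its cost at base prices."""
    cost_at_base = sum(p_base[g] * q for g, q in basket.items())
    cost_at_new = sum(p_new[g] * q for g, q in basket.items())
    return cost_at_new / cost_at_base

index = laspeyres_index(base_prices, new_prices, base_basket)
print(f"index = {index:.3f}, i.e. measured inflation of {100 * (index - 1):.2f}%")
```

A Paasche index, which holds the new basket fixed instead, generally gives a different number from the same data, which is one concrete way the choice of methodology, rather than anything in the world, determines the published figure.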

Yes, the closer the consumption patterns of the two economies being compared, the more useful the comparison is. If there were no common goods between two economies it would be impossible to compare them meaningfully. As to where to draw the line, well, I wish I had a good answer for you, but I don't. All I can say is that the value of the comparison decays over "distance" (meaning differences in consumption patterns).

So, here is a straightforward question then: how do we know that it is meaningful to make comparisons between, say, the U.S. in 2010 and the U.S. in 1960 or 1910? What argument supports the assumption that the differences between them are small enough?

The social science we have has significant limitations, but right now, we don't have anything better. [...] So we do what we can, help things along as much as our knowledge and the institutional frameworks within which decisions are made will permit. What else can you do?

Sometimes it's safer to just leave things alone if you don't know what you're doing. Presenting dubious conclusions and questionable expertise as scientific insight leads to the equivalent of dilettante surgery being performed on entire countries by their governments, sometimes with awful consequences, and with even worse ones threatening in the future. (Prominent macroeconomists will in fact agree with me, it's just that they'll claim that their professional rivals are the dilettantes, and only they are true experts who should be listened to.)

Replies from: James_K
comment by James_K · 2010-06-19T21:07:11.780Z · LW(p) · GW(p)

Here we enter the realm of pure nebulosity, where the indexes and "real" figures stop being vague heuristics where even the order of magnitude is just barely meaningful, and acquire a metaphysical existence of their own, as "real" variables to be calculated to multiple digits of precision, fed into complex mathematical models and policy guidelines, and used to measure reified true, objective value.

I happen to agree that macroeconomists are overdoing it on the level of precision they can provide. Arnold Kling (himself a macroeconomist) made this same point in a blog post last year: http://econlog.econlib.org/archives/2009/03/paragraphs_to_p.html

So, here is a straightforward question then: how do we know that it is meaningful to make comparisons between, say, the U.S. in 2010 and the U.S. in 1960 or 1910? What argument supports the assumption that the differences between them are small enough?

I would be careful about using a price index over that kind of time frame. I don't actually know how macroeconomists treat it, but I have read books that point out the inherent difficulty of making comparisons over long time periods (where long means more than about 20 years), and that if you're trying to capture differences in standard of living over a long period you should try to account for differences in product quality and product mix over time. Of course that's incredibly hard to do, and I don't know how seriously this issue is treated in macroeconomics, but it should be taken seriously.

Sometimes it's safer to just leave things alone if you don't know what you're doing.

I strongly agree. However, there are two limiting factors when applying this logic to policy advice: 1) If you don't give a politician any advice, their reaction won't be to do nothing, it will be to do whatever they think is a good idea. The average macroeconomist may not know a lot, but they know enough that their advice will probably help a little. I do think that macroeconomists should be less willing to offer active advice, as opposed to "we don't understand this problem, the best thing to do here is nothing", but politicians have a strong aversion to doing nothing in the face of a crisis, and if their advisers keep telling them to do politically unpalatable things, they'll find advisers who will tell them what they want to hear.

2) You can't run experiments in macroeconomics; the only way to acquire data on how well an intervention works is to try it (multiple times in multiple countries) and find out how it goes, and even then you end up arguing about what would have happened if you had done nothing. That means that if you don't try to fix and/or prevent macroeconomic problems, you don't get any better information on how to fix future ones. Maybe that's an acceptable trade-off, but I'm sure you can see why macroeconomists don't think so. Also bear in mind that what brought macro into its own as a discipline was the Great Depression. Maybe it's worth risking some bumps in the road to try to work out how to stop something like that happening again.

Prominent macroeconomists will in fact agree with me, it's just that they'll claim that their professional rivals are the dilettantes, and only they are true experts who should be listened to.

Yes, it's depressing how much a macroeconomist's opinion on what caused the recent troubles matches up with their political ideology. But it's a function of the low quality of evidence available: in Bayesian terms, when you only have access to weak evidence, your prior matters more than when the evidence is strong. The inevitable influence politics has on the discipline doesn't help either. Politicians are all too keen to build up economists who are telling them to do things they want to do anyway.
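To make that Bayesian point concrete, here is a minimal sketch with invented numbers: two observers start from opposite priors, and the strength of the evidence determines how far apart they remain after updating.

```python
# Toy illustration: posterior odds = prior odds * likelihood ratio.
# The priors and likelihood ratios are invented purely for illustration.

def posterior(prior_prob, likelihood_ratio):
    """Bayesian update in odds form; returns the posterior probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

for prior in (0.2, 0.8):            # two economists with opposite priors
    for lr in (1.5, 100.0):         # weak vs. strong evidence for the hypothesis
        print(f"prior={prior:.1f}, likelihood ratio={lr:>5}: "
              f"posterior={posterior(prior, lr):.2f}")
```

With the weak evidence (likelihood ratio 1.5) the posteriors stay near the priors (about 0.27 vs 0.86), so the disagreement persists; with the strong evidence (likelihood ratio 100) both end up above 0.95.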

comment by [deleted] · 2010-06-17T23:42:32.427Z · LW(p) · GW(p)

If some price indexes are "clearly absurd", then they apparently have some value to us - for if they were valueless, then why call any particular one "absurd"? If they yield different results, then so be it - let us simply be open about how the different indexes are defined and what result they yield. The absence of a canonical standard will of course not be useful to people primarily interested in such things as pissing contests between nations, but the results should be useful nonetheless.

We commonly talk about tradeoffs, e.g., "if I do this then I will benefit in one way but lose in another". We can do the same thing with price indexes. "In this respect things have improved but in this other respect things have gotten worse."

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-18T00:25:29.695Z · LW(p) · GW(p)

Constant:

We commonly talk about tradeoffs, e.g., "if I do this then I will benefit in one way but lose in another". We can do the same thing with price indexes. "In this respect things have improved but in this other respect things have gotten worse."

Sure, but such an approach would deny the validity of all these "real" economic variables that are based on a scalar price index. In particular, it would definitely mean discarding the entire concept of "real GDP" as incoherent. This would mean conceding the criticisms I've been expounding in this thread, and admitting the fundamental unsoundness of much of what passes for science in the field of macroeconomics.

Moreover, disentangling the complete truth about what various price indexes reveal and what they hide is an enormously complex topic that requires lengthy, controversial, and subjective judgments. This is inevitable because, after all, value is subjective.

Take for example two identically built houses located in two places that greatly differ in various aspects of the natural environment, society, culture, technological development, economic infrastructure, and political system. (It can also be the same place in two different time periods.) It makes no sense to treat them as equivalent objects of identical value; you'd have a hard time finding even a single individual who would be indifferent between the two. Now, if you want to discuss what exactly has been neglected by treating them as identical (or reducing their differences to a single universally applicable scalar factor) for the purposes of constructing a price index, you can easily end up writing an enormous treatise that touches on every aspect in which these places differ.

comment by NancyLebovitz · 2010-06-16T08:56:29.033Z · LW(p) · GW(p)

I've heard that the trick works less well each time it's used (perhaps within a limited time period). Is this plausible?

comment by Vladimir_Nesov · 2010-06-15T15:09:47.482Z · LW(p) · GW(p)

There could be indirect consequences of the decision in question, resulting from counter-intuitive effects on the existing economic process and on the lives of other people not directly involved in the decision. The relevant question is about the estimate of those indirect consequences. However imprecise economic indicators are, you can't just replace them with a presumption of a total lack of consequences, and only consider the obvious.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-15T15:12:33.161Z · LW(p) · GW(p)

I didn't ignore the indirect consequences:

If potential renters or the existing ones prefer your parents' unit to the other rental opportunities and they are denied it, they are worse off; otherwise, they aren't.

To the extent that the indirect effects go beyond this, standard mainstream metrics in economics don't measure them, because they are essentially independent of how well off others have become as a result of these rental decisions.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-15T15:36:07.027Z · LW(p) · GW(p)

To the extent that the indirect effects go beyond this, standard mainstream metrics in economics don't measure them, because they are essentially independent of how well off others have become as a result of these rental decisions.

Well, maybe there are no such consequences (which is not obvious to me), but that's what I meant.

comment by Mike Bishop (MichaelBishop) · 2010-06-15T16:23:47.265Z · LW(p) · GW(p)

Never once has it occurred to anyone in the mainstream (and very few outside of the mainstream) that it's okay for people to produce less, consume less, and have more leisure.

  1. Really? Because I hear economists talk about the value of leisure time quite frequently.
  2. IMO, most economists don't fetishize GDP the way you suggest they do.
  3. You seem to be denying the benefits of Keynesian stimulus in a downturn. That position is not indefensible, but you're not defending it, you're just claiming it.
Replies from: SilasBarta
comment by SilasBarta · 2010-06-15T16:42:39.185Z · LW(p) · GW(p)

Really? Because I hear economists talk about the value of leisure time quite frequently. ...IMO, most economists don't fetishize GDP the way you suggest they do.

Both of these are contradicted by the fact that no economist, in discussion of the recent economic troubles, has suggested that letting the economy adjust to a lower level of output/work would be an acceptable solution.

Yes, they recognize that leisure is good in the abstract, but when it comes to proposals for "what to do" about the downturn, the implicit, unquestioned assumption is that we must must must get GDP to keep going up, no matter how many make-work projects or useless degrees that involves.

You seem to be denying the benefits of Keynesian stimulus in a downturn. That position is not indefensible, but you're not defending it, you're just claiming it.

I most certainly am defending it -- by showing the errors in the classification of what counts as a benefit. If the argument is that stimulus will get GDP numbers back up, then yes, I didn't provide counterarguments. But my point was that the effect of the stimulus is to worsen that which we really mean by a "good economy".

The stimulus is getting people to blow resources doing (mostly) useless things. Whether or not it's effective at getting these numbers where they need to be, the numbers aren't measuring what we really want to know about. Success would mean the useless, make-work jobs eventually lead to jobs satisfying real demand, yet no metric that they focus on captures this.

Replies from: SilasBarta, CronoDAS, Unnamed
comment by SilasBarta · 2010-06-16T03:23:09.270Z · LW(p) · GW(p)

Downvote explanation requested. This looks like a reasoned reply to MichaelBishop's criticism, and I'm interested in knowing how it errs and how Michael's comment doesn't, and how this is so obvious.

Replies from: CarlShulman
comment by CarlShulman · 2010-06-16T03:25:34.977Z · LW(p) · GW(p)

Yes, they recognize that leisure is good in the abstract, but when it comes to proposals for "what to do" about the downturn, the implicit, unquestioned assumption is that we must must must get GDP to keep going up, no matter how many make-work projects or useless degrees that involves.

[Didn't downvote.] This is silly. The 'leisure' of unemployment is concentrated on a few, and comes with elevated rates of low status, depression, suicide, divorce, degradation of employability, etc.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-16T03:48:28.926Z · LW(p) · GW(p)

That's a misinterpretation of what I was suggesting as the alternative. Lower output + more leisure doesn't mean the "leisure" is concentrated entirely in a few workers, making them full-time leisurists who starve. Rather, it means that anyone who wants to work for money would work fewer hours and have a lower level of consumption, not zero consumption.

Furthermore, the lower consumption is only consumption of goods purchased with money; with significant restructuring, labor with predictable demand (like babysitting) can be handled by cooperatives that avoid the need to pay for it out of cash reserves.

I don't deny that make-work programs allow workers to show off and practice their skills, retaining employability. I criticize economists who miss this benefit. But if you're going to spend money to get this benefit, you should spend it in a way that directly targets the achievement of this benefit to the workers, rather than on make-work projects that only achieve this benefit as a side effect, and which waste capital goods and distort markets in the process.

Replies from: CronoDAS
comment by CronoDAS · 2010-09-22T00:50:39.991Z · LW(p) · GW(p)

That's a misinterpretation of what I was suggesting as the alternative. Lower output + more leisure doesn't mean the "leisure" is concentrated entirely in a few workers, making them full-time leisurists who starve. Rather, it means that anyone who wants to work for money would work fewer hours and have a lower level of consumption, not zero consumption.

Unfortunately, in the United States, you really would end up with much more of the former and less of the latter. Europe would be better off, though, thanks to different labor laws; would you suggest that the United States adopt something like France's maximum 35 hour workweek, or Germany's subsidies to part-time workers?

Currently, hours worked per week is positively correlated with hourly wages; one person working 80 hours a week usually makes more money than two people who both work 40 hours a week. Also, specifically wanting to do part-time work is a bad signal to employers. It signals that you're not committed to your job, that you're probably lazy, and that you're weird. So, absent government intervention, you probably won't see people voluntarily reducing their working hours.

comment by CronoDAS · 2010-09-22T00:41:54.176Z · LW(p) · GW(p)

Both of these are contradicted by the fact that no economist, in discussion of the recent economic troubles, has suggested that letting the economy adjust to a lower level of output/work would be an acceptable solution.

This is because it isn't. A "lower level of output/work" means that people, on average, are going to be poorer. And the way our economy is set up (in the United States at least), reducing output/work by 1% doesn't mean that each person works 1% less, produces 1% less, and consumes 1% less, it means that 1 in 100 people lose their job, can't find another one, and become poor, while the rest keep going on as they have been. So, when output/work falls, you don't get more leisure, you get more poverty.

And I disagree that most stimulus spending ends up being directed to "worthless" projects. Maybe they're not the best value for money, but even completely worthless make-work projects are still effective at wealth redistribution. Furthermore, if people are willing to lend the government money at really, really low interest rates (as demonstrated by prices of U.S. Treasury securities) then isn't that a signal that it's an unusually good time for the U.S. government to borrow and spend - that the economy wants more of what the government produces and less of what private industry produces?

Replies from: SilasBarta
comment by SilasBarta · 2010-09-22T18:30:12.960Z · LW(p) · GW(p)

This is because it isn't. A "lower level of output/work" means that people, on average, are going to be poorer. And the way our economy is set up (in the United States at least), reducing output/work by 1% doesn't mean that each person works 1% less, produces 1% less, and consumes 1% less,

This I think reflects a status-quo bias. When the per capita GDP was lower in 2000, or 1990, the economy managed to employ a higher percentage of people. While you're right that current institutions, inertia, and laws prevent shorter workweeks, that is an argument for removing these barriers, not an argument for trying to game the GDP numbers in the (false) hope that this will somehow translate into sustainable employment because of the historical correlation.

And I disagree that most stimulus spending ends up being directed to "worthless" projects. Maybe they're not the best value for money, but even completely worthless make-work projects are still effective at wealth redistribution.

Okay, but that still looks like a case of lost purposes and fake utility functions. If you're spending money to redistribute, then spend the money to redistribute! Don't spend it on a project that hogs up real resources just to get a small side-effect of transferring money to people you want to help. ("What's your real objection" and all.) If it's important that they feel they earn the paycheck, then require that they take job training.

And the reason I call the projects worthless is this (and it doesn't require an ideological commitment to being against government projects): people couldn't justify asking the government to provide these things before the recession. But if the recession is a contraction of productive capacity, then the projects we commit to should also contract -- it should look like an even worse deal.

The fact that the government can issue debt cheaper doesn't change this fact. The reduced productive capacity is a real (i.e. non-nominal) phenomenon. The greater ease with which government can procure resources does not mean our aggregate ability to produce them has increased; it just means the government can more easily increase its share of the shrinking pie. That still implies that our "choice set" is being reduced, and the newer, larger wastefulness of these projects will have to show up somewhere.

If the fundamental determinant of reduced unemployment is whether the economy has entered into (as Arnold Kling says) sustainable patterns of specialization and trade, then temporary stimulus projects can't accelerate this, because they're by definition not sustainable: after they're over, we'll just have to readjust again.

I must emphasize, as I did in this blog post, that this does not mean we should give suffering families the finger because "it would be inefficient and all" -- the fact that they (under a stimulus project) are working, feeling productive, and getting a paycheck is very significant, and definitely counts as a benefit. It's just that you should help them in a way that doesn't inhibit the economy's search for efficient use of factors of production, nor (significantly) favor these families over the ones that are going to be screwed again when the projects have to stop, and the hunt for re-coordination starts anew.

Replies from: CronoDAS
comment by CronoDAS · 2010-09-23T00:45:09.941Z · LW(p) · GW(p)

While you're right that current institutions, inertia, and laws prevent shorter workweeks, that is an argument for removing these barriers, not an argument for trying to game the GDP numbers in the (false) hope that this will somehow translate into sustainable employment because of the historical correlation.

Oh, definitely.

Okay, but that still looks like a case of lost purposes and fake utility functions. If you're spending money to redistribute, then spend the money to redistribute! Don't spend it on a project that hogs up real resources just to get a small side-effect of transferring money to people you want to help. ("What's your real objection" and all.) If it's important that they feel they earn the paycheck, then require that they take job training.

I basically agree with this; if you want to redistribute, then certainly it's better to just redistribute than to "employ" people to do completely useless things. (For example, extending unemployment benefits is a form of redistribution.)

And the reason I call the projects worthless is this (and it doesn't require an ideological commitment to being against government projects): people couldn't justify asking the government to provide these things before the recession. But if the recession is a contraction of productive capacity, then the projects we commit to should also contract -- it should look like an even worse deal.

Well, what matters is the opportunity cost. A project that wasn't worth doing before can become worth doing if the better alternatives aren't there anymore; a contraction of productive capacity doesn't have to affect all sectors of the economy equally. For example, people in a country experiencing an oil shortage may find that investing in more expensive, non-oil energy sources has become worthwhile; it's worse than what used to be possible, but it's the best remaining alternative. Given that people are willing to lend to the federal government more cheaply now than before the recession, the new equilibrium might end up involving more "investment in government", not because government has become more productive, but because the alternative investments have gotten worse.

And I'm not necessarily sure that absolute productive capacity went down all that much in the current recession. During the Great Depression, the factories were still there, there were people willing and able to operate the factories, and there were people who wanted the goods the factories could produce, yet the factories were idle, the would-be factory workers were unemployed, and the would-be consumers didn't have the goods they wanted. (The Keynesian position is that there was a collapse in aggregate demand, leading to a general glut, followed by a reduced output level.)

comment by Unnamed · 2010-06-21T08:32:37.603Z · LW(p) · GW(p)

Economists who argue for stimulus spending on Keynesian grounds understand that GDP is not a perfect measure and that the value produced by stimulus projects may be less than the value produced by ordinary spending. See, for instance, this Brad DeLong post, where he estimates the net benefit of the stimulus and counts the useful stuff produced using stimulus money as being only 80% as valuable as the dollar amount would suggest. Or, as he writes:

The extra people put to work produce $110,000 of useful stuff--that's a benefit. ... However, because we are pulling forward spending from the future into the present--spending the $92,000 now rather than in the future--we are buying stuff too soon, and because the government is all thumbs we are to some degree buying less valuable stuff than we would ordinarily be buying. Figure a 20% discount--that's an $18,000 cost.

Replies from: wedrifid
comment by wedrifid · 2010-06-21T08:47:07.999Z · LW(p) · GW(p)

See, for instance, this Brad DeLong post, where he estimates the net benefit of the stimulus and counts the useful stuff produced using stimulus money as being only 80% as valuable as the dollar amount would suggest.

Well, at least that is 20% closer to the mark!

comment by thomblake · 2010-06-15T16:07:03.339Z · LW(p) · GW(p)

Nice to see this kind of thinking from a capitalistish.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-16T03:18:24.040Z · LW(p) · GW(p)

I'll accept that compliment, backhanded though it might be :-) (I canceled out the downmod you got for that comment -- no offense taken.)

I would appreciate, though, if you could (as best you can) tell me what it was I said that led you to believe I'm capitalistish (in the sense that you meant), or that I would otherwise disagree with my above GDP rant. No need to dig up links, just tell me whatever you remember or can quickly find.

I'm not doing this to make you feel foolish for having said what you did (like I've been known to try with you ...), but because I want to know what it is that gives off these impressions of my views, and whether I should be using different terms to describe them.

As I've said before, I have a love-hate relationship with libertarianism. I believe largely what I did ten years ago about the proper role of government, but much of what self-described libertarians advocate is sharply contrary to what I considered to be my libertarian view.

comment by James_K · 2010-06-15T05:53:39.329Z · LW(p) · GW(p)

An interesting question. Here are some initial thoughts:

In terms of broad economic aggregates, it won't make any difference. If you rent the room off your parents for a market rate, GDP is exactly unaffected, people are paying the same money to different people. If you rent it for less than market rate, GDP is lower, but this reflects deficiencies in measured GDP since GDP uses market prices as a proxy for the value of a transaction (this is fine for the most part, but doing your child a favour is an exception conventional methodology can't deal with). So from a macroeconomic perspective I'd say it's a wash either way.

Microeconomically, there could be some efficiencies in you renting from your parents. If they trust you more than a random stranger (and let's hope they do) they will spend less time monitoring your behaviour (property inspections and the like) than they would a random stranger, but the value of your familial relationship should constrain you from taking advantage of that lax monitoring in the way a stranger would. This means that your parents save time (which makes their life easier) and no one should be worse off (I assume the current tenant of their room would find adequate accommodation elsewhere).

However, one note of caution. If you were to get into a dispute of some sort with your parents over the tenancy, this could damage your relationship with your parents. If you value this relationship (and I assume you do), this is a potential downside that doesn't exist under the status quo. Also, some people might see renting from your parents as little different to living with your parents which (depending on your age) may cost you status in your day-to-day life (even if you pay a market rate). If you value status, you should be aware of this drawback.

So in summary, the most efficient outcome depends on three variables: 1) How much time and effort do your parents spend monitoring their tenant at the moment? 2) How likely is it that your relationship with them could be strained as a result of you living there? 3) How many friends / acquaintances / colleagues do you have that would think less of you for renting from your parents (and how much do you care)?

I hope that helps.

comment by Mike Bishop (MichaelBishop) · 2010-06-15T03:30:32.318Z · LW(p) · GW(p)

I think that a majority of economists agree that in many downturns, it helps the economy if people, on the margin, spend a little more. This justifies Keynesian stimulus. Therefore, the economy would be helped if your choice increases the total amount of money changing hands, presumably if you rent the apartment for $X when X>Y. My impression is that in good economic times, marginal spending is not considered to improve economic welfare.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-16T15:48:18.704Z · LW(p) · GW(p)

I think that a majority of economists agree that in many downturns, it helps the economy if people, on the margin, spend a little more. This justifies Keynesian stimulus. Therefore, the economy would be helped if your choice increases the total amount of money changing hands ...

Imagine that the "economy" is sluggish, and that a widget maker currently profits $1 on each widget sale. Now, consider these two scenarios:

a) I buy 100 widgets that I don't want, in order "to help the economy".
b) I give the widget-maker $100. Then, I lie and say, "OMG!!! I just heard that demand for widgets is SURGING, you've GOT to make more than usual!" (Assume they trust me.)

In both cases, the widget-maker is $100 richer, the real resources in the economy are unchanged, and the widget-maker has gotten a false signal that more widgets should be produced. Yet one of those "helps the economy", while the other doesn't? How does that make sense?

If you believe that either one of those "helps the economy", your whole view of "the economy" took a wrong turn somewhere.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2010-06-17T16:36:39.914Z · LW(p) · GW(p)

I agree that both a) and b) would have a similar effect in that the widget manufacturer puts to work resources (labor, machines) which would otherwise not be utilized. I wouldn't recommend either a) or b) because there are many more efficient ways to stimulate the economy. One that my father, who happens to be an economist, has promoted is a temporary tax credit for new hires. More detail. If there are some roads you were going to build a couple years from now, speeding up that investment is probably a good idea in an economic downturn. I'm not defending legislation that actually got passed... I try not to pay too much attention.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-17T17:35:53.647Z · LW(p) · GW(p)

I agree that both a) and b) would have a similar effect in that the widget manufacturer puts to work resources (labor, machines) which would otherwise not be utilized. I wouldn't recommend either a) or b)

Then why did you say this, in the very comment I was replying to?

Therefore, the economy would be helped if your choice increases the total amount of money changing hands,

That's the same as recommending a)!

because there are many more efficient ways to stimulate the economy

It doesn't matter that you can think of better ways; the problem is with a view of the economy that regards either of a) or b) as "good for the economy". And you in fact hold that view.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2010-06-17T21:42:13.106Z · LW(p) · GW(p)

We were asked a somewhat odd question: which apartment choice would help the economy, setting aside the individuals' preferences about the apartments themselves. Those preferences in fact dominate the overall effect on the economy. I wouldn't recommend that anyone personally attempt Keynesian stimulus.

Increasing the amount of money changing hands only helps in certain circumstances, and even then it is not necessarily the dominant effect.

What about the examples of intelligent stimulus I offered?

comment by cousin_it · 2019-08-21T09:26:03.140Z · LW(p) · GW(p)

Coming back to this question after a few years, I was able to find a surprisingly simple Econ 101 answer in five minutes. To zeroth order, there's no change because the amount of goods and services in the economy stays the same. To first order, allowing a deal to be freely made usually increases total value in the economy, not just the value for those making the deal; so this deal is good for the economy iff both sides agree to it.

That sidesteps all complications like "the parents are happy to help their child", "the apartment might have facilities that the child doesn't need", etc. I guess reading an econ textbook has taught me to look for ways to estimate the total without splitting it up.
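To spell out that first-order accounting with a toy example (all numbers invented): the rent only determines how the gains are split, while the surplus created by the deal depends on the two sides' valuations and is positive exactly when both would agree to it.

```python
# Toy gains-from-trade calculation. All valuations are invented for illustration.

def surplus(buyer_value, seller_value, price):
    """Surplus to each side, and in total, if the deal happens at `price`."""
    buyer_gain = buyer_value - price      # what the tenant gains
    seller_gain = price - seller_value    # what the landlord gains
    return buyer_gain, seller_gain, buyer_gain + seller_gain

# Suppose the child values living in the parents' apartment at 900/month, and the
# parents' next-best use of it (renting to a stranger, net of hassle) is worth 700/month.
for rent in (750, 800, 850):
    b, s, total = surplus(900, 700, rent)
    print(f"rent {rent}: tenant +{b}, landlord +{s}, total +{total}")
```

The total (+200 in every row) doesn't depend on the rent charged; the rent just moves value between the parties, which is why the zeroth-order answer is "no change" and the first-order answer only asks whether the deal happens at all.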

comment by AlephNeil · 2010-06-15T11:29:04.120Z · LW(p) · GW(p)

Here's another question to chew on:

Suppose you're in a country that grows and consumes lots of cabbages, and all the cabbages consumed are home-grown. Suppose that one year people suddenly, for no apparent reason, decide that they like cabbages a lot more than they used to, and the price doubles. But at least to begin with, rates of production remain the same throughout the economy. Does this help or harm the economy, or have no effect?

In one sense it 'obviously' has no effect, because the same quantities of all goods and services are produced 'before' and 'afterwards'. So whether we're evaluating them according to the 'earlier' or the 'later' utility function, the total value of what we're producing hasn't changed. (Presumably the prices of non-cabbages would decline to some extent, so it's at least consistent that GDP wouldn't change, though I still can't see anything resembling a mathematical proof that it wouldn't.)
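One way to make the "obvious" sense precise is the textbook convention of valuing real GDP at a fixed set of base-year prices; here is a minimal sketch with invented numbers (and with the other price constructed, purely for illustration, so that nominal GDP also stays flat).

```python
# Toy two-good economy: cabbage demand shifts, the cabbage price doubles,
# but physical output is unchanged. All numbers are invented for illustration.

quantities    = {"cabbages": 100, "other": 500}   # same in both years
prices_before = {"cabbages": 1.0, "other": 2.0}
prices_after  = {"cabbages": 2.0, "other": 1.8}   # cabbages double, other falls somewhat

def gdp(prices, qty):
    return sum(prices[g] * qty[g] for g in qty)

print("nominal GDP before:", gdp(prices_before, quantities))
print("nominal GDP after: ", gdp(prices_after, quantities))
print("real GDP at base-year prices, either year:", gdp(prices_before, quantities))
```

Real GDP valued at fixed prices literally cannot move while quantities are unchanged, which is the closest thing to the "mathematical proof" being asked for; whether nominal GDP moves depends entirely on how far the non-cabbage prices fall, which the thought experiment leaves unspecified.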

comment by Vladimir_M · 2010-06-14T22:16:45.606Z · LW(p) · GW(p)

would that help or hurt the country's economy as a whole?

What exact metric do you have in mind?

Replies from: cousin_it
comment by cousin_it · 2010-06-14T22:22:18.127Z · LW(p) · GW(p)

I'd be about equally happy if offered a solution in terms of GDP or some more abstract metric like "sum of happiness".

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-14T23:06:11.394Z · LW(p) · GW(p)

Trouble is, all these macroeconomic metrics that can be precisely defined have only a vague and tenuous link to the actual level of prosperity and quality of life, which is impossible to quantify precisely in a satisfactory manner. Moreover, predicting the future consequences of economic events reliably is impossible, despite all the endless reams of macroeconomic literature presenting various models that attempt to do so.

Thus, if you want to ask how your choice will affect the nominal GDP for the current year or some such measure, that's a well-defined question (though not necessarily easy to answer). However, if you want to interpret the result as "helping" or "hurting" the economy, it requires a much more difficult, controversial, and often inevitably subjective judgment.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2010-06-15T03:30:39.202Z · LW(p) · GW(p)

Of course, GDP only measures goods and services sold, not "household production."

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-15T06:28:26.384Z · LW(p) · GW(p)

That's only one of the main problems with GDP. Here's a fairly decent critique of the concept written from a libertarian perspective (but the main points hold regardless of whether you agree with the author's ideological assumptions):
http://www.econlib.org/library/Columns/y2010/HendersonGDP.html

In addition to these criticisms, I would point out the impossibility of defining meaningful price indexes that would be necessary for sensible comparisons of GDP across countries, and even across different time periods in the same country. The way these numbers are determined now is a mixture of arbitrariness and politicized number-cooking masquerading as science.

Replies from: MichaelBishop, NancyLebovitz, SilasBarta
comment by Mike Bishop (MichaelBishop) · 2010-06-15T16:09:04.572Z · LW(p) · GW(p)

It is certainly true that some people make too much of GDP, but those numbers can be pretty helpful for answering certain research questions. Let's not throw the baby out with the bath water.

Replies from: Vladimir_M, SilasBarta
comment by Vladimir_M · 2010-06-16T03:07:32.663Z · LW(p) · GW(p)

To continue on your metaphor, it's not clear to me if there is a baby worth saving there at all. Even if there is, the baby is submerged in an enormous cesspool of filthy and toxic bathwater that's been poisoning us in very nasty ways for a long time.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2010-06-16T14:11:41.765Z · LW(p) · GW(p)

To be clear, you are suggesting we might not lose anything by giving up measuring and using GDP figures? I'll side with the majority of the economics profession... they aren't perfect but they mostly use GDP data in a reasonable way.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-16T14:53:20.003Z · LW(p) · GW(p)

Just so we're on the same page, could you explain what it would look like if economists' collective wisdom were actually so bad that you would agree they use GDP data in an unreasonable way?

Because you can't just look at the fact the top economists all agree -- they'd do that even if the field were collectively garbage. There has to be some real-world entanglement which would reveal the failure of their ideas, and I want to know what you expect such a failure to look like.

Replies from: MichaelBishop, realitygrill
comment by Mike Bishop (MichaelBishop) · 2010-06-17T16:55:55.840Z · LW(p) · GW(p)

I'm a sociologist*, and there is nothing sociologists like to do more than point out where economists go wrong. So if GDP were a worthless figure, I'd expect some real-world entanglement to have shown up by now, and one of my fellow sociologists to have convinced me of it already.

I'm not saying economists never overinterpret GDP figures, and I'm not saying the consensus of macroeconomists is always correct.

Though I think we might both be better served by quitting conversation and reading actual experts (I don't claim to be one) I would like to make sure we're on the same page about the implications of your criticism. Are you not saying that it is essentially worthless to attempt to study economic growth or business cycles empirically because the data is so poor?

*if you can be one without having completed your dissertation yet.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-17T17:29:53.302Z · LW(p) · GW(p)

I'm a sociologist, and there is nothing sociologists like to do more than point out where economists go wrong. So if GDP were a worthless figure, I'd expect some real-world entanglement to have shown up by now, and one of my fellow sociologists to have convinced me of it already.

This sounds to me like a case of mistakenly thinking "someone would have noticed!". What exactly would sociologists have noticed and hasn't happened? Remember, "my echo chamber in academia agrees with me" doesn't count as evidence!

And, FWIW, sociologists (and a lot of the left in general) do complain about GDP -- they're the ones spearheading the push to use alternate metrics like "Gross National Happiness" and other things. I think a lot of them are nutty, but at least they're identifying values that need to be looked at.

Though I think we might both be better served by quitting conversation and reading actual experts (I don't claim to be one) I would like to make sure we're on the same page about the implications of your criticism.

But I have read the experts! Top economists like Greg Mankiw, Paul Krugman, and Scott Sumner blog and lay out their arguments in detail, and their arguments (and the economic basis for making them) are exactly as I have portrayed them! Sumner in particular believes (mistakenly imo) that nominal GDP is a crucial measure.

Krugman certainly relies heavily on measuring real GDP growth and equates it with progress. And James_K, who claims to be an economist, just came out of the woodwork and endorsed exactly what I've accused economists of, though asserting (with a basis I'm shaking) that they don't really make that big of a deal out of GDP.

Are you not saying that it is essentially worthless to attempt to study economic growth or business cycles empirically because the data is so poor?

With the currently studied data, yes, though with different measures, better progress could be made. In the past I've suggested measuring non-cash and non-market production, subtracting certain "bad" activities from GDP (i.e. things which represent a response to destruction, as it's indicative of merely replacing some capital with other capital), measuring product degradation in calculating CPI, and using insulin as a better inflation gauge.

if you can be [a sociologist] without having completed your dissertation yet.

Hey, I'm fine with calling you one if you're fine with calling me an engineer despite just having a bachelors and years of field work but not a P.E. license.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2010-06-17T21:49:20.217Z · LW(p) · GW(p)

I agree that GDP is imperfect. If it were easy to perfect then it would have been done already. Should more resources be devoted to the issue? Probably. I support the use of multiple measures of wealth and well-being. But I do think that when GDP goes up, that usually indicates good things are happening. Other indicators usually track it.

I'm not trying to deny you've noticed a problem, I just think that you're overstating it because even though GDP is imperfect, there is still a lot to be learned from empirical research that uses it.

comment by realitygrill · 2010-06-20T19:42:53.199Z · LW(p) · GW(p)

Oh boy, we should bring Taleb in here.

comment by SilasBarta · 2010-06-15T16:42:35.852Z · LW(p) · GW(p)

If we're going to do metaphors, then yes, you're right, but we also have to make sure we're not drinking the bathwater. The bathwater is for bathing, not for drinking. GDP should be used as a very rough cross-country comparison, not as a measure of how the economy's general ability to satisfy wants changes over short intervals.

Interestingly enough, I was arguing roughly your position a few years ago. But now, seeing how economists deliberately prioritize GDP over the fundamentals it's supposed to measure, I can't even justify defending it for purposes other than, "The US economy is more productive than Uganda's."

comment by NancyLebovitz · 2010-06-16T23:49:14.569Z · LW(p) · GW(p)

The essay at the link talks about government waste. Is it meaningful to talk about waste in business, or should it all be considered to be at least educational?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-17T01:58:22.860Z · LW(p) · GW(p)

Regarding the end-products, one essential difference is that if a business can find private consumers who will purchase its product with their own money and of their own free will, this constitutes strong evidence that these customers assign some positive value to this product, so it can't be fairly described as "waste." In contrast, for many things produced by the government, no such clear evidence exists, and even if one is not of particularly libertarian persuasion, it seems pretty clear that many of them are wasteful in every reasonable sense of the term. Yet all consumer and (non-transfer) government spending is added to the GDP as equivalent.

When it comes to waste generated by inefficiencies, miscalculations, employee misbehavior, and perverse incentives, some amount of wasteful efforts and expenses is obviously inevitable in the internal functioning of any large-scale operation. It does seem pretty clear that in most cases, the incentives to minimize them are much stronger in private businesses than in governments, though unlike the previous point, this one is a matter of degree, not essence. However, when it comes to the GDP accounting, there are important differences here.

The reason is that all non-transfer spending by the government will be added to the GDP, whereas spending by businesses is added only if it constitutes investment (as opposed to mere procuring of the inputs necessary for production). As far as I know, the exact boundary in the latter case is a matter of accounting conventions, though in most cases, it does seem clear which is which (e.g. for a trucking company, buying fuel is not an investment, but buying new trucks is). Therefore, whatever the actual amount of wasteful spending by businesses might be, not all of it will be added to the GDP, unlike the wasteful spending by governments.

comment by SilasBarta · 2010-06-15T15:39:58.783Z · LW(p) · GW(p)

Thanks for that link. I hadn't realized Henderson had written that, let alone just a few months ago! Its recency means he could critique the stimulus arguments of the last two years, making basically the same arguments I do.

My only complaint is that he noted that leaving off non-market exchanges (i.e. maid becoming wife) causes GDP to be understated, when he should have discussed its impact on the rate of change in GDP, which is more important.

comment by Mike Bishop (MichaelBishop) · 2010-06-17T17:11:30.459Z · LW(p) · GW(p)

I recommend going to an econ textbook for good questions.

comment by AlephNeil · 2010-06-15T10:35:29.717Z · LW(p) · GW(p)

My (admittedly simple-minded) answer would be "other things being equal it has no effect at all".

Each day you and your parents do whatever it is you do, creating a given amount of wealth (albeit perhaps in such a way that it's impossible to say exactly how much of this wealth you personally created, rather than your colleagues, or the equipment you use). Then a bunch of wealth gets redistributed in a funny way (through wages and rents being paid). But changing the way that wealth is redistributed doesn't affect the 'total rate of wealth-generation' which is what GDP is trying (sometimes unsuccessfully, as James_K says) to measure. In just the same way, getting a pay rise doesn't in itself help the economy (but it may have been caused by you doing more valuable work, which does help).

Replies from: cousin_it
comment by cousin_it · 2010-06-15T11:25:53.958Z · LW(p) · GW(p)

I'm pretty sure this is wrong. If I have a spare apartment and start renting it out, I'm creating wealth, not just redistributing it. So changing the pattern of who rents from whom should influence the total amount of wealth created.

Replies from: AlephNeil, AlephNeil
comment by AlephNeil · 2010-06-15T11:51:32.449Z · LW(p) · GW(p)

Though I should clarify that when I talk about "the size of the economy" I'm talking about something intangible - the 'wealth of the nation', or more precisely the 'nation's rate of wealth-creation' - rather than simply GDP. Perhaps GDP will reflect the changing rents, perhaps not, depending on which type of GDP we're talking about (I seem to recall that there are several, including a 'spending' measure and an 'income' measure.)

comment by AlephNeil · 2010-06-15T11:43:57.736Z · LW(p) · GW(p)

But we're not talking about someone renting a previously empty apartment, we're talking about a change of occupier. The 'wealth' of the apartment is merely being 'consumed' by someone else.

Suppose without loss of generality (?) that the person who was previously in your parents' apartment is now in your old apartment. Then we can describe the change as follows:

  1. Two people have swapped apartments.
  2. They may be paying different rents from before.

Neither 1 nor 2 in itself changes the size of the economy. (Although, if a rent goes up because an apartment is more desirable then that changes the size of the economy.)

Replies from: cousin_it
comment by cousin_it · 2010-06-15T11:51:57.120Z · LW(p) · GW(p)

Apartments don't have a single intrinsic "desirability" value. Different people assign different values to the same apartment. If you think about it, the fact that different people can value a thing differently is the only reason any deals happen at all. The sum you agree to pay is a proxy for the value you place on the thing.

No, you can't assume without loss of generality that the person who was previously in my parents' apartment will be willing or able to move to mine. It depends on the relationship between X and Y.

Replies from: AlephNeil
comment by AlephNeil · 2010-06-15T12:07:21.606Z · LW(p) · GW(p)

No, you can't assume without loss of generality that the person who was previously in my parents' apartment will be willing or able to move to mine. It depends on the relationship between X and Y.

But the set of living spaces is the same as before. Can't we assume for simplicity that, even if it's not as simple as two people swapping places with each other, what we have is a 'permutation' such that all previously occupied houses and apartments remain occupied?

Then once again we can factor the change into (1) a permutation and (2) a change of rent, and ask whether either of them changes the wealth of the nation. I'm pretty sure that (2) in itself has no effect - it's just a 'redistribution' between landlords and their tenants. Whether (1) has an effect depends on whether or not we're including the fact that different people may make different assessments of desirability (i.e. whether different people have different preferences about the kind of apartment they'd like to live in.)

Of course you're quite right that different people do have different preferences - I was merely ignoring this for simplicity - but in any case the statement of the problem says nothing explicit about your or anyone else's preferences; it only talks about X and Y. Are your apartment-preferences supposed to change depending on the values of X and Y?
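To see how step (1) can matter once differing preferences are no longer ignored, here is a toy enumeration with invented valuations: the total value delivered by the same housing stock depends on who lives where, while the rents drop out of the total.

```python
# Toy assignment problem: two tenants, two apartments, differing valuations.
# All valuations are invented for illustration.
from itertools import permutations

tenants = ["child", "stranger"]
apartments = ["parents' flat", "old flat"]
value = {  # how much each tenant values living in each place, per month
    ("child", "parents' flat"): 950, ("child", "old flat"): 800,
    ("stranger", "parents' flat"): 820, ("stranger", "old flat"): 810,
}

for assignment in permutations(apartments):
    total = sum(value[(t, a)] for t, a in zip(tenants, assignment))
    print(", ".join(f"{t} -> {a}" for t, a in zip(tenants, assignment)),
          f"| total valuation {total}")
```

With these numbers the two permutations differ by 140 per month, and that difference comes entirely from (1); the rents in (2) cancel out of the total whatever is charged, as argued above.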

Replies from: cousin_it
comment by cousin_it · 2010-06-15T12:19:46.531Z · LW(p) · GW(p)

You're right that (2) has no effect, but (1) probably does have an effect. I thought we could somehow guess the effect of (1) by looking at X and Y, but now I see it's not easy.

comment by Mike Bishop (MichaelBishop) · 2010-06-14T22:15:03.810Z · LW(p) · GW(p)

There is other information you want to consider. Tax rates for example, and whether or not the economy is in the sort of downturn that would benefit from stimulus or not.

Regardless, the effects on aggregate supply and demand will be tiny. How much you and your parents value these alternatives is what matters most.

Replies from: cousin_it
comment by cousin_it · 2010-06-14T22:24:01.346Z · LW(p) · GW(p)

I'm not asking about what I should decide, I'm asking about the sign of those tiny effects on the country as a whole. Is it actually a difficult question in disguise? Why? I know next to nothing about economics, but the question sounds to me like it should be really easy for anyone qualified.

Replies from: Houshalter
comment by Houshalter · 2010-06-15T02:46:25.541Z · LW(p) · GW(p)

I think the only way to measure it meaningfully would be to consider the same scenario with millions of people doing it instead of just one, but even then it doesn't look like it makes much of a difference.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2010-06-15T16:33:02.605Z · LW(p) · GW(p)

This is a good point. What happens in this individual case would be dominated by random facts about the individuals directly involved. If you imagine the same situation repeated many times (100 should be plenty), the randomness cancels out.

Replies from: realitygrill
comment by realitygrill · 2010-06-19T03:34:23.107Z · LW(p) · GW(p)

So you might think. Sensitivity to initial conditions!

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2010-06-20T16:30:45.255Z · LW(p) · GW(p)

Care to explain why we should expect sensitivity to initial conditions to matter in the particular example being discussed here?

Replies from: realitygrill
comment by realitygrill · 2010-06-20T18:56:11.205Z · LW(p) · GW(p)

I am struggling to convey this, so I'll have to think about it more.

For now, though: I do think that differences in the initial conditions would be propagated by adaptive individuals and institutions (rather than smoothed away). That should lead to bifurcations and path dependencies that would generate drastically different outcomes. Enough that averaging them would be meaningless.

Why do you think repeating it many times would converge? Are the statistical limit theorem conditions really met? I don't think so...

None of this really explicitly says that you wouldn't be able to at least figure out the sign of the change. It might be computationally intractable but qualitatively determinable in special cases.

comment by MartinB · 2010-06-17T23:47:52.833Z · LW(p) · GW(p)

These days I sometimes bump into great new ideas[tm] that are at times well proven, or at least workable and useful -- only to remember that I already used that idea some years ago with great success and then dumped it for no good reason whatsoever. Simple example: in language-learning write-ups I repeatedly find the idea of an SRS, that is, a program which schedules repetitions at nicely spaced intervals and consistently helps in memorizing not only language items, but also all other kinds of facts. Programs and data collections are now freely available -- but I wrote my own program for that about 14 years ago as a nice entry-level programming exercise, used it quite extensively and successfully for about 2 years in school, and then suddenly stopped. That made me wonder which other great ideas I have already used and discarded, why my former self would do such a thing, and, to make it a public question: which great things LWers might have tried and discarded for no particular reason.
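
For concreteness, a minimal sketch of the kind of scheduler such a program needs (intervals that roughly double after each successful review; the names here are purely illustrative, not the program described above):

```python
import datetime

class Card:
    """One fact to memorize, with a review interval that grows over time."""
    def __init__(self, question, answer):
        self.question = question
        self.answer = answer
        self.interval_days = 1
        self.due = datetime.date.today()

    def review(self, remembered):
        # Double the interval on success, start over on a lapse.
        self.interval_days = self.interval_days * 2 if remembered else 1
        self.due = datetime.date.today() + datetime.timedelta(days=self.interval_days)

def due_today(cards):
    """Return the cards scheduled for review today or earlier."""
    return [c for c in cards if c.due <= datetime.date.today()]
```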

Another obvious example from my own stack would be the use of checklists to pack for holidays. Worked great for years and still does.

Replies from: gwern
comment by gwern · 2010-06-21T01:48:41.958Z · LW(p) · GW(p)

which great things LWers might have tried and discarded for no particular reason.

That's kind of hard - if they were so great, how could we remember they were great and also not immediately reinstate them?

comment by Morendil · 2010-06-17T06:00:14.249Z · LW(p) · GW(p)

Looks like LW briefly switched over to its backup server today, one with a database a week out of date. That, or a few of us suffered a collective hallucination. Or, for that matter, just me. ;)

Just in case you were wondering too.

Replies from: AndyWood
comment by AndyWood · 2010-06-17T07:31:55.849Z · LW(p) · GW(p)

I was wondering indeed. That was surreal.

comment by Psy-Kosh · 2010-06-17T04:00:30.213Z · LW(p) · GW(p)

The causal-set line of physics research has been (very lightly) touched on here before. (I believe it was Mitchell Porter who had linked to one or two things related to that, though I may be misremembering.) But recently I came across something that goes a bit farther: rather than embedding a causal set in a spacetime or otherwise handing it the spacetime structure, it basically just goes "here's a directed acyclic graph... we're going to add on a teensy weensy few extra assumptions... and out of it construct the Minkowski metric and relativistic transformations"

I'm slowly making my way through this paper (partly slowed by the fact that I'm not all that familiar with order theory), but the reason I mention the paper (A Derivation of Special Relativity from Causal Sets) is because I can't help but wonder if it might give us a hook to go in the other direction. That is, if this line of research might let us bring the mathematical machinery of much of physics to help us analyze stuff like Bayes nets and decision theory and give us a (potentially) really powerful mathematical tool.

Maybe I'm completely wrong and nothing interesting will come of trying to "reverse" the causal set line of research (but causal set stuff is neat anyways, so at least I get some fun from reading and thinking about it), but it does seem potentially worth looking into.

Besides, if this does end up being a useful tool, it would be perhaps one of the biggest and subtlest punchlines the universe pulled on us: since causal-sets are an approach to quantum gravity, if it ended up helping with the rationality/AI/etc stuff...

That would mean that Penrose was right about quantum gravity being a key to mind... BUT IN A WAY ENTIRELY DIFFERENT THAN HE INTENDED! bwahahahaha. :)

comment by Mike Bishop (MichaelBishop) · 2010-06-14T15:44:53.008Z · LW(p) · GW(p)

Whole Brain Emulation: The Logical Endpoint of Neuroinformatics? (google techtalk by Anders Sandberg)

I assume someone has already linked to this but I didn't see it so I figured I'd post it.

comment by NancyLebovitz · 2010-06-18T13:04:47.205Z · LW(p) · GW(p)

Creeping rationality: I just heard a bit on NPR about a proposed plan to distribute the returns from newly found mineral wealth in Afghanistan to the general population. This wasn't terribly surprising. What delighted and amazed me was the follow-up that it was hoped that such a plan would lead to a more responsive government, but all that was known was that such plans have worked in democratic societies, and it wasn't known whether causality could be reversed to use such a plan to make a society more democratic.

Replies from: knb
comment by knb · 2010-06-18T21:26:13.093Z · LW(p) · GW(p)

Such plans work in societies with rule of law, and fail miserably in societies that are clan-based and tribal. A quarter of Afghanistan's GDP may go to bribes and shakedowns. A more honest description from NPR would be that historically, mineral wealth, when controlled by deeply corrupt governments like Afghanistan's, is primarily used for graft and nepotism, benefiting a few elites in government and industry while funding the oppression of everyone else.

In other words, Afghanistan is more like Nigeria than Norway.

comment by Mass_Driver · 2010-06-17T19:30:14.458Z · LW(p) · GW(p)

Can anyone recommend a good book or long article on bargaining power? Note that I am NOT looking for biographies, how-to books, or self-help books that teach you how to negotiate. Biographies tend to be outliers, and how-to books tend to focus on the handful of easily changeable independent variables that can help you increase your bargaining power at the margins.

I am instead looking for an analysis of how people's varying situations cause them to have more or less bargaining power, and possibly a discussion of what effects this might have on psychology, society, or economics.

By "bargaining power" I mean the ability to steer transactions toward one's preferred outcome within a zone of win-win agreements. For example, if we are trapped on a desert island and I have a computer with satellite internet access and you have a hand-crank generator and we have nothing else on the island except that and our bathing suits and we are both scrupulously honest and non-violent, we will come to some kind of agreement about how to share our resources...but it is an open question whether you will pay me something of value, I will pay you something, or neither. Whoever has more bargaining power, by definition, will come out ahead in this transaction.

Replies from: Lonnen
comment by Lonnen · 2010-06-18T14:00:25.452Z · LW(p) · GW(p)

I'm currently reading Thomas Schelling's Strategy of Conflict and it sounds like what you're looking for here. From this Google Books Link to the table of contents you can sample some chapters.

comment by SilasBarta · 2010-06-17T15:48:02.948Z · LW(p) · GW(p)

Amanda Knox update: Someone claims he knows the real killer, and is being taken seriously enough to give Knox and Sollecito a chance of being released. Of course, he's probably lying, since Guede most likely is the killer, and it's not who this new guy claims. But what can you do against the irrational?

I found this on a Slashdot discussion as a result of -- forgive me -- practicing the dark arts. (Pretty depressing I got upmodded twice on net.)

Replies from: gwern, JoshuaZ, simplicio, kodos96
comment by gwern · 2010-06-21T02:01:02.767Z · LW(p) · GW(p)

"I know [he was involved] because my brother confessed to me that he had killed Meredith and he asked me to hide a blood-stained knife and set of keys," he said, according to an attachment to Knox's appeal documents.

"I had everything under a little wall behind my house," he said. "I am happy to stand up in court and confirm all this and wrote to the court several times to tell them but was never questioned."

Should be easy to test his claims...

We "can't simply investigate in the course of a trial every claim that comes up," Mignini told CNN.

I sometimes wonder: is the Italian judicial system really that lousy, or is there some sort of linguistic or cultural barrier there?

comment by JoshuaZ · 2010-06-18T22:16:35.673Z · LW(p) · GW(p)

Slashdot threads have a bad enough signal to noise ratio as is. Please don't do that sort of thing.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-19T00:07:56.475Z · LW(p) · GW(p)

Should I stop doing this too? Or at least wait until people start challenging the term "top theologian"?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-19T00:11:24.225Z · LW(p) · GW(p)

Yes, as a regular reader of Slashdot, I'd prefer if you didn't do that. I don't see what you are accomplishing from these remarks. It really does come across as simple trolling.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-19T00:17:13.728Z · LW(p) · GW(p)

You know what, bro? I'm not even going to ask your opinion about this.

Notable:

Mere Christianity has been superceded by much better, more solid theology.)

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-19T00:29:19.676Z · LW(p) · GW(p)

That's at least humorous, although I have to inquire if the other AC who replies to you about also being a Christian on Slashdot is also you.

Edit: Also, to be clear: my general response whenever these sorts of dark arts come up is very simple: if one needs to do this to get people convinced of your position, that's a cause to worry whether one's position is actually correct.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-19T00:33:33.740Z · LW(p) · GW(p)

Um, I don't believe the position I linked if that's what you're worried about...

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-19T00:36:53.368Z · LW(p) · GW(p)

No, I mean you are deliberately portraying an opposing position as stupid, apparently hoping that people will think that reversed stupidity is intelligence. That's a serious dark art. So if one is going to do that sort of thing one should worry that maybe one's position is really not correct.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-19T00:47:33.326Z · LW(p) · GW(p)

Hm, good point. I guess I am fake justifying. I'll admit, I like to troll, and I'm kinda let down that no one has ever objected to the term "top theologian", saying, "wait, what exactly do you have to do to count as a top theologian? What predictions, exactly?"

I actually participate as a "friendly troll" on a private board on gamefaqs.com. "Friendly troll" in that most everyone there knows I'm a troll and just makes fun of the people who make serious replies to my topics; and I casually chat with people there about what troll topics I should make. The easiest one is, "Isn't evolution still basically just a theory at this point?"

In high school (late 90s), I would troll chatrooms and print transcripts to share with my friends the next day. One of them was a real "internet paladin" type and said, "people like you should be banned from the internet". My crowning "achievement" was to say a bunch of offensive stuff in a gameroom on a card game site, which got a moderator called in; but by that point, everyone was yelling really offensive stuff at me, and got themselves banned. I was left alone because I made (mocking) apologies just in time, and the moderator couldn't scroll up enough to see most of my earlier comments.

I've mostly toned it down and gotten away from it but I still do it here and there. Well, not here, but you get the point.

Replies from: simplicio
comment by simplicio · 2010-06-19T01:48:25.459Z · LW(p) · GW(p)

I'll admit, I like to troll...

It can be fun, I will guiltily admit, but not nearly as much fun as trying to present what you actually believe in a clever enough way that somebody goes... click. (In which endeavour, by all means be sarcastic and use pathos).

You have to do some sort of calculus on what the upshot of this trolling is though... if the upshot is increased irrationality, well, there isn't much functional difference between you and your alter ego.

And all the Anonymous_Coward arguments I've seen that you listed are BETTER arguments (sad as that is) than most sincere ones in support of similar conclusions. The Good Soldier Švejk isn't actually supposed to be a good soldier. :P

Replies from: SilasBarta
comment by SilasBarta · 2010-06-19T04:55:15.045Z · LW(p) · GW(p)

Hm, so you're saying I should use my clever trolling skills to promote rationality, instead of to unsuccessfully satirize irrationality?

Because I used to do the reverse: whenever someone was making irritatingly stupid arguments, I would just add that technique to my trolling arsenal.

Replies from: simplicio, simplicio
comment by simplicio · 2010-06-19T05:24:14.982Z · LW(p) · GW(p)

Just to add to this: Goebbels was perfectly right about the phenomenon of the Big Lie. If you repeat an argument - even a TERRIBLE argument - enough times, people will start to believe it. Exempli gratia:

'Evolution is just a theory.' 'Where are the transitional forms?' 'Hurricane in a junkyard.'

There are the partisans of evolution by n.s. and then there are the partisans of creationism, and then there are the other 85% of people who are too busy getting their GED or feeding their kids or trying to make partner in the firm, to bother really thinking about these issues. A few exposures to an unchallenged, vaguely plausible-sounding meme are enough to put them in the ID camp (say), politically, for life. You are contributing to that irrationalist background noise!

Replies from: SilasBarta
comment by SilasBarta · 2010-06-19T05:34:05.983Z · LW(p) · GW(p)

Point taken. When forming a troll post, I make the arguments with the lowest ratio of length to "confusions one needs to disentangle in order to refute". I use "isn't evolution still basically just a theory at this point?" because it's a slightly improved variant by that metric.

As with my other response, perhaps I could find the good-rationalist analog of this technique and optimize for that? Perhaps minimize the ratio of argument length to "confusions one needs to detour into to refute"?

I think part of what made me stray from "the path" was a tendency to root for the rhetorical "underdog" and be intrigued -- excessively -- with brilliant arguments that could defend ridiculous positions. I think I can turn that around here.

Replies from: simplicio
comment by simplicio · 2010-06-19T05:55:41.517Z · LW(p) · GW(p)

I think part of what made me stray from "the path" was a tendency to root for the rhetorical "underdog"...

Oh, don't get me wrong, I enjoy arguing for the other side too, provided it's disclaimed afterward. It's a good way to see your rationalization machine shift into high gear. There is always a combination of lies, omissions, half-truths, special pleading and personal anecdotes that can convince at least a few people that you're right - or, MUCH better, that your position should be respected.

But... rationality is usually the rhetorical underdog. Tssk! :P

...and be intrigued -- excessively -- with brilliant arguments that could defend ridiculous positions. I think I can turn that around here.

Want a brilliant argument defending a silly position? Try Plantinga's evolutionary argument against naturalism. To ascend such lofty heights of obfuscation, bring lots of pressurized oxygen.

To wit:

-Evolution optimizes for survival value, not truth value in beliefs

-Beliefs are therefore adaptive but not necessarily true (you could, conceivably, believe that you should run away from a tiger because tigers like friendly footraces).

-Therefore, on naturalism, we should expect the reliability of our cognition to be low

-This means we should, if we accept naturalism, also accept that our cognitive apparatus is too flawed to have good reasons to accept naturalism. QED, atheist.

comment by simplicio · 2010-06-19T05:12:49.399Z · LW(p) · GW(p)

Hm, so you're saying I should use my clever trolling skills to promote rationality, instead of to unsuccessfully satirize irrationality?

More or less. I'm saying it's a successful and rather amusing satire for people here. But by the standards of internet discourse among the teeming multitudes, you're actually being fairly rhetorically effective. Case in point:

Are you just trying to insult sincere believers with this smear? Because that is NOT how praying works. No SERIOUS Christian actually tries to backup data by praying to God, okay? So I don't even know what you're trying to prove with your bigoted smear against sincere Christians... Let me just ask you this: have you ever read any serious works by any serious theologians about the nature of God and prayer?

The "serious theologians" line makes me smile. But that is actually the tack taken by many of the more 'sophisticated' goddites. It works rhetorically when you think about it. They are saying we are avoiding our belief's weak points.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-19T05:26:54.855Z · LW(p) · GW(p)

Hm, maybe I'll try to frame my real arguments as trolling, and see if that makes it easier to effectively convey them. Thanks for the idea.

Replies from: simplicio
comment by simplicio · 2010-06-19T05:30:47.474Z · LW(p) · GW(p)

It's your call... I didn't quite mean "troll with your actual beliefs," so much as "use the considerable rhetorical skills you have to advance your sincere position."

Replies from: SilasBarta
comment by SilasBarta · 2010-06-19T05:36:39.062Z · LW(p) · GW(p)

It's your call... I didn't quite mean "troll with your actual beliefs,"

Right, I meant that by framing it as a troll exercise I could come up with a better phrasing of my argument, not that I would necessarily slip in the angering jabs that make something a genuine troll post.

comment by simplicio · 2010-06-18T22:12:31.954Z · LW(p) · GW(p)

You were arguing against your real opinion as a fifth columnist? May I ask why?

(Well done, by the way, in a technical sense. Just the right amount of character assassination: "Sollecito and Knox were known to be practitioners of dangerous sex acts.")

Just don't kill the younglings, Anakin!

Replies from: SilasBarta
comment by SilasBarta · 2010-06-18T22:49:41.150Z · LW(p) · GW(p)

I thought it would get modded down and then provoke someone as well-informed as komponisto to thoroughly refute it, and make people realize how stupid those arguments were.

Damn ... now that's starting to sound like a fake justification!

Eh, I guess I just like trolling too :-/

Replies from: simplicio
comment by simplicio · 2010-06-18T23:13:37.924Z · LW(p) · GW(p)

...and make people realize how stupid those arguments were.

Internet, Silas. Silas, Internet. ;)

I think you will find an ample number of inspiringly bad arguments out there, without adding to their number. I believe this is called cutting off one's nose to spite one's face.

comment by kodos96 · 2010-06-17T21:56:18.661Z · LW(p) · GW(p)

FYI, this was discussed previously here

comment by Lonnen · 2010-06-17T14:39:23.582Z · LW(p) · GW(p)

Lately I've been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents. For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it? Would a FAI?

Replies from: Dagon, cousin_it, wedrifid
comment by Dagon · 2010-06-17T19:29:55.883Z · LW(p) · GW(p)

Calling them "dark arts" is itself a tactic for framing that only affects the less-rational parts of our judgement.

A purely rational agent will (the word "should" isn't necessary here) of course use rhetoric, outright lies, and other manipulations to get irrational agents to behave in ways that further its goals.

The question gets difficult when there are no rational agents involved. Humans, for instance, even those who want to be rational most of the time, are very bad at judging when they're wrong. For these irrational agents, it is good general advice not to lie or mislead anyone, at least if you have any significant uncertainty on the relative correctness of your positions on the given topic.

Put another way, persistent disagreement indicates mutual contempt for each others' rationality. If the disagreement is resolvable, you don't need the dark arts. If you're considering the dark arts, it's purely out of contempt.

Replies from: torekp
comment by torekp · 2010-06-17T23:45:20.663Z · LW(p) · GW(p)

If the disagreement is resolvable, you don't need the dark arts. If you're considering the dark arts, it's purely out of contempt.

If both parties are imperfectly rational, limited use of dark arts can speed things up. The question shouldn't be whether it's possible to present dry facts and logic with no spin, but whether it's efficient. There are certain biases that tend to prevent ideas from even being considered. Using other biases and heuristics to counteract those biases - just to get more alternative explanations to be seriously considered - won't impair or bypass the rationality of the listener.

comment by cousin_it · 2010-06-17T18:49:04.843Z · LW(p) · GW(p)

Dark arts, huh? Sometime ago I put forward the following scenario:

Bob wants to kill a kitten. The FAI wants to save the kitten because it's a good thing according to our CEV. So the FAI threatens Bob with 50 years of torture unless Bob lets the kitten go. The FAI has two distinct reasons why threatening Bob is okay: a) Bob will comply and there will be no need to torture him, b) the FAI is lying anyway. Expected utility reasoning says the FAI is doing the Right Thing. But do we want that?

(Yes, this is yet another riff on consequentialism, deontologism and lying. Should FAIs follow deontological rules? For that matter, should humans?)

Replies from: Nick_Tarleton, NancyLebovitz, Dagon
comment by Nick_Tarleton · 2010-06-17T18:54:13.870Z · LW(p) · GW(p)

Expected utility reasoning says the FAI is doing the Right Thing. But do we want that?

Expected utility reasoning with a particular utility function says the FAI is right. If we disagree, our preferences might be described by some other utility function.

comment by NancyLebovitz · 2010-06-18T13:39:06.740Z · LW(p) · GW(p)

Is that actually the FAI's only or best technique?

Off the top of my non-amplified brain:

Reward Bob for not torturing kittens.

Give Bob simulated kittens to torture and deny Bob access to real kittens.

Give Bob something harmless to do which he likes better than torturing kittens.

ETA Convince Bob that torturing kittens is wrong.

comment by Dagon · 2010-06-17T19:45:26.066Z · LW(p) · GW(p)

Our CEV is (and has to be) detailed enough to answer the question of "do we want that?". Saving a kitten is a good thing. Being truthful to Bob is a good thing. Not torturing Bob is a good thing. The relative weights of these good things determines the FAI's actions.

I'd say that the FAI should calculate some game-theoretic chance of torturing Bob for 50 years based on relative pain of kitten death and of having to inflict the torture. Depending on Bob's expected rationality level, we could tell him "you'll be tortured", or "you might be tortured", or the actual mechanism of determining whether he is tortured.

Actually, strike that. Any competent AI will find ways aside from possible torture to make Bob not want that. Either agree with Bob's reason for killing the kitten, or fix him so he only wants things that make sense. I'm not sure how friendly this is - I haven't seen a good writeup or come to any conclusions myself of what FAI does with internal contradictions in a CEV (that is, when a population's extrapolated volition is not coherent).

Replies from: cousin_it
comment by cousin_it · 2010-06-17T22:01:17.956Z · LW(p) · GW(p)

My thoughts about this problem are kind of a mess right now, but I feel there's more than meets the eye.

Ignore the torture, "possible torture" and all that. It's all a red herring. The real issue is lying, tricking humans into utility-increasing behaviors. It's almost certain that some combination of "relative weights of good things" will make the FAI lie to humans. Maybe not the Bob+kitten scenario exactly, but something is bound to turn up. (Unless of course our CEV places a huge disutility on lies, which I'm pretty sure won't be the case.) On the other hand, we humans quickly jump to distrusting anyone who has lied in the past, even if we know it's for our own good. So now the FAI has huge incentive to conceal its lies, prevent the news from spreading among humans. I don't have enough brainpower to model this scenario further, but it troubles me.

Replies from: Houshalter
comment by Houshalter · 2010-06-18T02:20:15.850Z · LW(p) · GW(p)

Lying is a form of manipulation, and humans don't want/like to be manipulated. If the CEV works, then it will understand human concepts like "trust" and "lying" and hopefully avoid them. The only situations where it will intentionally manipulate people are when it is trying to do what is best for humanity. In these cases, you don't have to worry because the CEV is smarter than you, but is still trying to do the "right thing" that you would do if you knew everything it knew.

Replies from: wedrifid
comment by wedrifid · 2010-06-18T03:48:04.296Z · LW(p) · GW(p)

Lying is a form of manipulation, and humans don't want/like to be manipulated.

Well... that depends...

In these cases, you don't have to worry because the CEV is smarter than you, but is still trying to do the "right thing" that you would do if you knew everything it knew.

Exactly.

comment by wedrifid · 2010-06-17T15:10:21.349Z · LW(p) · GW(p)

Lately I've been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents.

Yes.

For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it?

Yes. (When we say 'rational agent' or 'rational AI' we are usually referring to "instrumental rationality". To a rational agent, words are simply symbols to use to manipulate the environment. Speaking the truth, and even believing the truth, are only loosely related concepts.)

Would a FAI?

Almost certainly, but this may depend somewhat on who exactly it is 'friendly' to and what that person's preferences happen to be.

Replies from: Lonnen
comment by Lonnen · 2010-06-17T16:32:29.170Z · LW(p) · GW(p)

That agrees with my intuitions. I had a series of ideas developing around the notion that exploiting biases was sometimes necessary, and then I found:

Eliezer on Informers and Persuaders

I finally note, with regret, that in a world containing Persuaders, it may make sense for a second-order Informer to be deliberately eloquent if the issue has already been obscured by an eloquent Persuader - just exactly as elegant as the previous Persuader, no more, no less. It's a pity that this wonderful excuse exists, but in the real world, well...

It would seem that in trying to defend others against heuristic exploitation it may be more expedient to exploit heuristics yourself.

Replies from: wedrifid
comment by wedrifid · 2010-06-17T18:41:33.622Z · LW(p) · GW(p)

I'm not sure where Eliezer got the 'just exactly as elegant as the previous Persuader, no more, no less' part from. That seems completely arbitrary. As though the universe somehow decrees that optimal informing strategies must be 'fair'.

comment by xamdam · 2010-06-16T22:15:21.241Z · LW(p) · GW(p)

Q: What Is I.B.M.’s Watson?

http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html?pagewanted=all

A: what is Skynet?

Replies from: Morendil, cupholder, simplicio
comment by Morendil · 2010-06-17T06:18:52.058Z · LW(p) · GW(p)

Sounds a little like Shalmaneser.

comment by cupholder · 2010-06-16T23:34:36.146Z · LW(p) · GW(p)

And now it's time for the Daily Double!

comment by simplicio · 2010-06-16T22:26:59.334Z · LW(p) · GW(p)

In the video, I didn't understand whether that series of wrong answers was staged or actually happened.

Very impressive though. Class.

comment by cousin_it · 2010-06-15T22:12:44.982Z · LW(p) · GW(p)

Apologies for posting so much in the June Open Threads. For some reason I'm getting many random ideas lately that don't merit a top-level post, but still lead to interesting discussions. Here's some more.

  1. How to check that you aren't dreaming: make up a random number that's too large for you to factor in your head, factor it with a computer, then check the correctness by pen and paper. If the answer fits, now you know the computing hardware actually exists outside of you.

  2. How to check that you aren't a brain in a vat: inflict some minor brain damage on yourself. If it influences your mind's workings as predicted by neurology, now you know your brain is physically here, not in a vat somewhere.

Of course, both those arguments fall apart if the deception equipment is "unusually clever" at deceiving you. In that case both questions are probably hopeless.
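
For concreteness, a minimal sketch of test 1 (assuming Python with the sympy library available; the final multiplication is the part you would redo by pen and paper):

```python
import random
from sympy import factorint

# Make up a number too large to factor in your head.
n = random.randrange(10**12, 10**13)

# Let the (supposedly external) computer do the factoring.
factors = factorint(n)   # dict mapping each prime factor to its exponent

# "Pen and paper" step: multiply the factors back together and compare.
product = 1
for prime, exponent in factors.items():
    product *= prime ** exponent

print(n, factors, product == n)   # True if the hardware really did the work
```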

Replies from: JoshuaZ, zero_call, Mass_Driver, humpolec, wedrifid, Dagon
comment by JoshuaZ · 2010-06-15T22:21:24.266Z · LW(p) · GW(p)

The first one fails terribly. I've had dreams where I've thought I've proven some statement I'm thinking about and when waking up can remember most of the "proof" and it is clearly incoherent. No, subconscious, the fact that Martin van Buren was the 8th President of the United States does not tell me anything about zeros of L-functions. (I've had other proofs that were valid though so I don't want the subconscious to stop working completely).

The second one seems more viable. May I suggest using something like electromagnetic stimulation of specific areas of the brain rather than deliberately damaging sections? For that matter, the fact that drugs can alter thought processes, not just perception, also strongly argues against being a brain in a vat by the same sort of logic.

Replies from: cousin_it
comment by cousin_it · 2010-06-15T22:34:08.103Z · LW(p) · GW(p)

I like your idea way better than mine. Smoke dope to prove you're not in the Matrix!

Regarding the first point, yes, I guess dreams can hijack your reasoning in arbitrary ways. But maybe I'm atypical like that: whenever my dreams contain verse, music or math proofs, they always make perfect sense upon waking. They do sound "creatively weird", and I must take care to repeat them in my mind to avoid amnesia, but they work fine on real world terms.

comment by zero_call · 2010-06-17T02:17:34.088Z · LW(p) · GW(p)

How to check that you aren't a brain in a vat: inflict some minor brain damage on yourself. If it influences your mind's workings as predicted by neurology, now you know your brain is physically here, not in a vat somewhere.

No, there's no way of knowing that you're not being tricked. If your perception changes and your perception of your brain changes, that just means that the vat is tricking the brain to perceive that.

The "brain in the vat" idea takes its power from the fact that the vat controller (or the vat itself) can cause you to perceive anything it wants.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-17T02:21:39.136Z · LW(p) · GW(p)

If you are a brain in a vat then that should alter sensory perception. It shouldn't alter cognitive processes (say, the ability to add numbers, or to spell, or the like). You could posit a brain in a vat where the controllers also have lots of actual drugs or electromagnetic stimulants ready to go to duplicate those effects on the brain, but the point is that we have data about how the external world relates to us that isn't purely sensory.

Replies from: zero_call
comment by zero_call · 2010-06-17T05:14:24.481Z · LW(p) · GW(p)

You don't seem to be familiar with this concept.

You could posit a brain in a vat where the controllers also have lots of actual drugs or electromagnetic stimulants ready to go to duplicate those effects on the brain,

This is the entire point of the brain in the vat idea. It's not that "you could posit it", you do posit it. The external world as we experience it is utterly and completely controlled by the vat. If we correlate "experienced brain damage" (in our world) with "reduced mental faculties", that just means that the vat imposes that correlation on us through its brain life support system.

Although I don't claim to be an expert in philosophy, the brain in the vat example is widely known to be philosophically unresolvable. The only thing we can really know is that we are a thing that thinks. This is Descartes 101.

Replies from: cousin_it, JoshuaZ
comment by cousin_it · 2010-06-17T22:19:10.066Z · LW(p) · GW(p)

Hmm. Your comment has brought to my attention an issue I hadn't thought of before.

Are you familiar with Aumann's knowledge operators? In brief, he posits an all-encompassing set of world states that describe your state of mind as well as everything else. Events are subsets of world states, and the knowledge operator K transforms an event E into another event K(E): "I know that E". Note that the operator's output is of the same type as its input - a subset of the all-encompassing universe of discourse - and so it's natural to try iterating the operator, obtaining K(K(E)) and so on.

Which brings me to my question. Let E be the event "you are a thing that thinks", or "you exist". You have read Descartes and know how to logically deduce E. My question is, do you also know that K(E)? K(K(E))? These are stronger statements than E - smaller subsets of the universe of discourse - so they could help you learn more about the external world. The first few iterations imply that you have functioning memory and reason, at the very least. Or maybe you could take the other horn of the dilemma: admit that you know E but deny knowing that you know it. That would be pretty awesome!

Replies from: SilasBarta, Mass_Driver, zero_call
comment by SilasBarta · 2010-06-17T22:32:19.865Z · LW(p) · GW(p)

Or maybe you could take the other horn of the dilemma: claim that you know E but you don't know that you know it. That would be pretty awesome!

When I was younger, a group of my friends started teasing others because they didn't know the Hindu-Arabic number system. In reality, of course, they did know it, but they didn't know that they knew it -- that was the joke.

comment by Mass_Driver · 2010-06-17T22:27:17.875Z · LW(p) · GW(p)

My question is, do you also know that K(E)? K(K(E))?

I have a sensory/gut experience of being a thinking being, or, as you put it, E.

Based on that experience, I develop the abstract belief that I exist, i.e., K(E).

By induction, if K(E) is reliable, then so is K(K(K(K(K(K(K(E)))))))). In other words, there is no particular reason to doubt that my self-reflective abstract propositional knowledge is correct, short of doubting the original proposition.

So I like the distinction between E and K(E), but I'm not sure what insights further recursion is supposed to provide.

Replies from: zero_call
comment by zero_call · 2010-06-21T01:53:03.359Z · LW(p) · GW(p)

I just saw this and realized I basically just expanded on this above.

comment by zero_call · 2010-06-21T01:50:22.498Z · LW(p) · GW(p)

I wasn't familiar with this description of "world states", but it sounds interesting, yes. I take it that positing "I am a thing that thinks" is the same as asserting K(E). In asserting K(K(E)), I assert that I know that I know that I am a thing that thinks. If this understanding is incorrect, my following logic doesn't apply.

I would argue that K(K(E)) is actually a necessary condition for K(E), because if I don't know that I know proposition A, then I don't know proposition A.

Edit/Revised: I think all you have to do is realize that "K(K(A)) false" permits "K(A) false". At first I had a little proof but now it seems just redundant so I deleted it.

So I guess I disagree: I think the iterations K(K(...)) are actually weaker statements, which are necessary for K(A) to be achieved. Consequently, I don't see how you can learn anything beyond K(A).

Replies from: cousin_it
comment by cousin_it · 2010-06-21T03:12:49.258Z · LW(p) · GW(p)

K(A) is always a stronger statement than A because if you know K(A) you necessarily know A. (To get the terms clear: a "strong" statement corresponds to a smaller set of world states than a "weak" one.) It is debatable whether K(K(A)) is always equivalent to K(A) for human beings. I need to think about it more.

Replies from: red75
comment by red75 · 2010-06-21T05:03:06.038Z · LW(p) · GW(p)

The formal definition K(E) = {s \in S | P(s) \subseteq E}, where P(s) is the cell of the information partition P that contains s, ensures that K(K(E)) = K(E). It's easy to see: if s \in K(E) then P(s) \subseteq E; every t \in P(s) has P(t) = P(s) \subseteq E, so P(s) \subseteq K(E), and thus s \in K(K(E)). Similarly for s \notin K(E).

As for the informal sense, I don't see much use for K(K(E)) where E is a plain fact: if I am aware that I know E, introspecting on that awareness will provide as many K's as I like and little more. If I am not aware that I know E (a deeply buried memory?), I will become aware of it when I remember it. But if I know that I know some class of facts or rules, that is useful for planning. However, I can't come up with a useful example for K(K(K(...))) and higher.

Addition: Aumann's formalization has limitations: it can't represent false knowledge, memory glitches (when I know that I know something but can't remember it), meta-knowledge, or knowledge of rules of any kind (I'm not completely sure about rules).
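
For concreteness, a minimal sketch (a made-up finite example, not anything from Aumann's papers) of this operator on a toy partition, checking that iterating K adds nothing:

```python
# Toy model of Aumann's knowledge operator K on a finite state space.
# The partition encodes which states the agent cannot tell apart.

S = {1, 2, 3, 4, 5, 6}
partition = [{1, 2}, {3}, {4, 5, 6}]

def cell(s):
    """P(s): the partition cell containing state s."""
    return next(c for c in partition if s in c)

def K(E):
    """States in which the agent knows E, i.e. where P(s) lies inside E."""
    return {s for s in S if cell(s) <= E}

E = {1, 2, 3, 5}            # an arbitrary event
print(K(E))                 # {1, 2, 3}
print(K(K(E)) == K(E))      # True: K is idempotent, as argued above
```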

comment by JoshuaZ · 2010-06-17T14:20:52.416Z · LW(p) · GW(p)

This is the entire point of the brain in the vat idea. It's not that "you could posit it", you do posit it. The external world as we experience it is utterly and completely controlled by the vat. If we correlate "experienced brain damage" (in our world) with "reduced mental faculties", that just means that the vat imposes that correlation on us through its brain life support system.

When I've read about the brain-in-a-vat example before, they normally just talk about sensory aspects. People don't mention anything like altering the brain itself. So at minimum, cousin_it has picked up on a hole in how this is frequently described.

Although I don't claim to be an expert in philosophy, the brain in the vat example is widely known to be philosophically unresolvable.

Considering how much philosophy is complete nonsense, I'd think that LWers would be more careful about using the argument that something in philosophy is widely known to be unresolvable. I agree that if, when people talk about the brain in a vat, they mean one where the vat is able to alter the brain itself in the process, then this is not resolvable.

Replies from: zero_call
comment by zero_call · 2010-06-17T22:34:31.192Z · LW(p) · GW(p)

People don't mention anything like altering the brain itself.

Altering the brain itself? The brain itself is the only thing there is to alter. The only thing that exists in the brain in the vat example is the brain, the vat, and whatever controls the vat. The "human experiences" are just the outcome of an alteration on the brain, e.g., by hooking up electrodes. I really have no idea how else you imagine this is working.

Replies from: cousin_it
comment by cousin_it · 2010-06-17T22:48:22.317Z · LW(p) · GW(p)

FWIW, my original comment talked about a realistic version of brain in a vat, not the philosophical idealized model. But now that I thought about it some more, the idealized model is seeming harder and harder to implement.

The robots who take care of my vat must possess lots of equipment besides electrodes! A hammer, boxing gloves, some cannabis extract, a faster-than-light transmitter so I can't measure the round-trip signal delay... Think about this: what if I went to a doctor and asked them to do an MRI scan as I thought about stuff? Or hooked some electrodes to my head and asked a friend to stimulate my neurons, telling me which ones only afterward? Bottom line, I could be an actual human in an actual world, or a completely simulated human in a completely simulated world, but any in-between situations - like brains in vats - can be detected pretty easily.

Replies from: zero_call
comment by zero_call · 2010-06-18T02:23:00.967Z · LW(p) · GW(p)

Um, if you're a brain in a vat, then any "brain" you perceive in the real world, like on a "real world" MRI, is nothing but a fictitious sensory perception that the vat is effectively tricking you into thinking is your brain. If you're a brain in a vat, you have nothing to tell you that what you perceive as your brain is actually really your brain. It may be hard to implement the brain in the vat scenario, but when implemented, it's absolutely undetectable.

comment by Mass_Driver · 2010-06-17T05:18:57.189Z · LW(p) · GW(p)

How to check that you aren't dreaming: make up a random number that's too large for you to factor in your head, factor it with a computer,

Do you have access to the computer software of your choice in your dreams? That sounds unusually vivid to me, maybe even lucid. I'm lucky if I can find a working pen and a desk that obeys the laws of physics in my dreams.

Replies from: wedrifid, Morendil
comment by wedrifid · 2010-06-17T15:28:26.116Z · LW(p) · GW(p)

Do you have access to the computer software of your choice in your dreams?

I know I do. In the last couple of years I have gone from almost never remembering a dream to having dreams that are sometimes even more vivid than my memories of real life. I even had to check my computer one day to see whether what I remembered doing was 'real' or not.

comment by Morendil · 2010-06-17T06:49:27.150Z · LW(p) · GW(p)

Heck, I'm lucky if I can find trousers in my dreams.

Replies from: wedrifid
comment by wedrifid · 2010-06-17T15:20:46.200Z · LW(p) · GW(p)

Depends on how you define 'lucky' I guess. ;)

comment by humpolec · 2010-06-18T09:50:48.634Z · LW(p) · GW(p)

How to check that you aren't dreaming: make up a random number that's too large for you to factor in your head, factor it with a computer, then check the correctness by pen and paper. If the answer fits, now you know the computing hardware actually exists outside of you.

A similar method was used by the protagonist of Solaris to check whether he was hallucinating.

Replies from: cousin_it
comment by cousin_it · 2010-06-18T10:06:49.445Z · LW(p) · GW(p)

Ouch! I read Solaris long ago. It seems the idea stuck in my head and I forgot its origin. And it does make much more sense if you substitute "hallucinating" for "dreaming".

comment by wedrifid · 2010-06-17T15:33:33.373Z · LW(p) · GW(p)

How to check that you aren't dreaming: make up a random number that's too large for you to factor in your head, factor it with a computer, then check the correctness by pen and paper. If the answer fits, now you know the computing hardware actually exists outside of you.

The trick, then, is to instill in yourself a habit of regularly checking whether you are asleep (i.e. even when you are awake). A habit of thinking "am I awake, let me check" is the hard part, and without that habit your sleeping mind isn't likely to question itself. Literature on lucid dreaming talks a lot about such tests. In fact, combined with 'write dreams down as soon as you wake up' and 'consume X substance', it more or less summarizes the techniques.

Replies from: Risto_Saarelma, humpolec
comment by Risto_Saarelma · 2010-06-18T13:48:41.514Z · LW(p) · GW(p)

The odd thing is that despite reading stuff about reality tests and trying to build a habit from doing them while awake, on the rare occasions I've had a lucid dream I've just spontaneously become aware that I'm presently dreaming. I don't remember ever having a non-lucid dream where I've done a reality test.

Instead of fancy stuff like determining prime factors, one consistent dream sign I've had is utter incompetence in telling time from digital watches and clocks. This generally doesn't tip me off that I'm dreaming though, and doesn't occur often enough that I could effectively condition myself to recognize it.

comment by humpolec · 2010-06-18T09:43:15.244Z · LW(p) · GW(p)

In fact, combined with 'write dreams down as soon as you wake up' and 'consume X substance', it more or less summarizes the techniques.

There are also trance/self-hypnosis methods, like WILD; some people seem to be very successful with them.

Replies from: wedrifid
comment by wedrifid · 2010-06-18T11:32:42.291Z · LW(p) · GW(p)

Interesting. And personally I find experimenting with trance and self-hypnosis by themselves to be even more fascinating than vivid dreaming. If only I did not come with the apparent in-built feature of inoculating myself to any particular method of trance or self-hypnosis after a few successful uses.

comment by Dagon · 2010-06-16T19:14:14.990Z · LW(p) · GW(p)

I think "unusually clever" should be "sufficiently clever" in your caveat. I have very wide error bars on what I think would be usual, but I suspect that it's almost guaranteed to defeat those tests if it's defeated the overall test you've already applied of "have only memories of experiences consistent with a believable reality".

In which case both questions are indeed hopeless.

comment by Houshalter · 2010-06-15T18:16:34.491Z · LW(p) · GW(p)

Is anyone else concerned about the possibility of nuclear terrorist attacks? No, I don't mean what you usually hear on the news about dirty bombs or Iran/North Korea. I mean actual terrorists with an actual nuclear bomb. There are a surprising number of nuclear weapons on the bottom of the ocean. Has it occurred to anyone that someone with enough funding and determination could actually retrieve one of them? Maybe they already have?

In its campaign to discredit General Lebed’s revelations, the Russian government insisted that the loss of a nuclear weapon was unthinkable. No responsible party could lose something so important. But to the contrary, we know that not only the Soviet Union, but also the United States, lost numbers of nuclear weapons. At least four Soviet submarines, armed with a total of 40 nuclear weapons, sank during the Cold War. According to press reports, one of these was partially recovered from the Pacific Ocean floor by a unique deep-water submarine, the Glomar Explorer, owned by the reclusive billionaire Howard Hughes. Three nuclear missiles and two nuclear torpedoes were recovered. The Department of Defense has acknowledged a number of what it calls “Broken Arrows” (nuclear weapons lost by U.S. forces), although it has never said how many. The confirmed reports include a 1965 case where an aircraft loaded with a B43 nuclear bomb rolled off a carrier stationed near Japan. Neither the aircraft nor the weapon was ever recovered. A year later, the U.S. Air Force accidentally dropped a 20-megaton nuclear bomb in the Mediterranean Sea during a high-altitude refueling mission near Palomares, Spain. After three months of frantic searching, it was found. Given the sensitivity of such events, it is reasonable to infer that the few official confirmations are merely the tip of the iceberg.

And here is a public list of known nuclear accidents

Replies from: cupholder, gwern
comment by cupholder · 2010-06-15T23:59:31.873Z · LW(p) · GW(p)

Notice that many of the incidents mentioned at your link don't involve nuclear bombs at all: many involve leaks at research facilities and power stations. Here's a chronological list of radiation incidents that caused injury from the start of the 20th century onwards. The vast majority don't involve nuclear bombs.

Historically, unless you were in Hiroshima or Nagasaki, you would have been less likely to die from a nuclear bombing than you would have been to die from a radiation leak, picking up a lost radioactive source without recognizing it (or living with someone who's brought one into your home), being poisoned with radiation by a coworker, or medical overexposure. (Note also that the list is surely incomplete.) It is possible that this trend will reverse in the future, but it's not obvious that it will.

More generally, gwern sounds about right to me on the subject of terrorists putting together their own nuke. (Or hauling one up from the bottom of the ocean.)

Replies from: mattnewport
comment by mattnewport · 2010-06-16T00:27:05.118Z · LW(p) · GW(p)

Coincidentally I just the other day learned of the banana equivalent dose as a way of placing the risk of radiation leaks in context.

comment by gwern · 2010-06-15T21:09:31.405Z · LW(p) · GW(p)

I am not. To even suggest that this is a possibility anywhere near the level of a sovereign actor giving terrorists nukes is to dramatically overestimate terrorist groups' technical competence, and also to ascribe basic instrumental rationality to them (a mistake; see my Terrorism is not about Terror).

Even if a terrorist could marshal the interest, assemble in one place the millions necessary, actually hire a world-class submersible and, in the scant days they can afford, find the wreckage of a bomb, it would probably be useless. US nukes are designed to fail safe, so if the wiring has corroded or the explosives are misaligned, it simply won't go off. And that's ignoring issues with radioactive decay. (Was the bomb a tritium-pumped H-bomb? Well, given tritium's extremely short half-life, I'm afraid that bomb is now useless.)

Replies from: Houshalter
comment by Houshalter · 2010-06-15T22:40:42.701Z · LW(p) · GW(p)

Maybe, although remember there are a lot more players interested in obtaining nuclear weapons than just a few terrorists. And the best crimes are the ones no one knew were committed. Unsuccessful criminals are over-represented compared to the ones that got away; I suspect the same is true for terrorists. Blowing up a building isn't going to achieve your goals, but blowing up a city might. After all, it's ended a war once and just the threat stopped another from ever happening. Also, even if the bomb itself is useless, it is probably worth quite a bit of money, more than the millions it would take to retrieve it (maybe thousands as technology improves? There are some in shallower water. In 1958 the government was prepared to retrieve a lost bomb, but never located it.) I don't honestly know a lot about nuclear weapons, but the materials in it, maybe even the design itself, would be worth something to somebody. Maybe said organization has the resources to salvage it; after all, they already had enough money to get it in the first place.

Even if no bombs go off, I wouldn't be surprised if the government eventually gets around to searching for them and finds they're not there. And there are other nuclear threats too. Although I can't find anywhere to confirm it, it was floating around the internet that up to 80 "suitcase nukes" are missing. This quote from Wikipedia particularly disturbed me:

The highest-ranking GRU defector Stanislav Lunev claimed that such Russian-made devices do exist and described them in more detail. These devices, "identified as RA-115s (or RA-115-01s for submersible weapons)" weigh from fifty to sixty pounds. They can last for many years if wired to an electric source. In case there is a loss of power, there is a battery backup. If the battery runs low, the weapon has a transmitter that sends a coded message—either by satellite or directly to a GRU post at a Russian embassy or consulate.” According to Lunev, the number of "missing" nuclear devices (as found by General Lebed) "is almost identical to the number of strategic targets upon which those bombs would be used."

Lunev suggested that suitcase nukes might be already deployed by the GRU operatives at the US soil to assassinate US leaders in the event of war. He alleged that arms caches were hidden by the KGB in many countries for the planned terrorism acts. They were booby-trapped with "Lightning" explosive devices. One of such cache, which was identified by Vasili Mitrokhin, exploded when Swiss authorities tried to remove it from woods near Berne. Several others caches were removed successfully. Lunev said that he had personally looked for hiding places for weapons caches in the Shenandoah Valley area and that "it is surprisingly easy to smuggle nuclear weapons into the US" either across the Mexican border or using a small transport missile that can slip undetected when launched from a Russian airplane.

I will leave it at that for now; I'm not one of those paranoid people that goes around ranting about nuclear proliferation or whatever. If there really is a problem, there's not much we can do (except maybe try to get to those lost bombs first, or take anti-terrorism more seriously).

Replies from: NancyLebovitz, gwern
comment by NancyLebovitz · 2010-06-17T01:08:01.762Z · LW(p) · GW(p)

I prefer spending my precious mental CPUs on worrying about the US government going really bad.

Admittedly, a terrorist nuke (especially if exploded in the US) would be likely to cause the US government to take a lot more control.

comment by gwern · 2010-06-15T22:47:13.308Z · LW(p) · GW(p)

I don't take Lunev seriously. Defectors are notoriously unreliable sources of information (as I think Iraq should have proven. Again.).

The problem with nuclear terrorism is that atomic bombs come with return addresses - the US has always collected isotopic samples (eg. with aerial collecting missions in international airspace) precisely to make sure this is the case. (Ironically, invading Afghanistan and Iraq may've helped deter nuclear terrorism: 'If the US invaded both these countries over just a few thousand dead, then it's plausible they will nuke us even if we cry to the heavens that we just carelessly lost that bomb.')

comment by RobinZ · 2010-06-15T01:37:56.717Z · LW(p) · GW(p)

P. Z. Myers discusses the relevance of gender as a proxy for intelligence.

Related: Argument Screens Off Authority.

Replies from: Emile
comment by Emile · 2010-06-15T12:44:41.354Z · LW(p) · GW(p)

I don't know the ins and outs of the Summers case, but that article has a smell of straw man. Especially this (emphasis mine):

You see, there's a shifty little game that proponents of gender discrimination are playing. They argue that high SAT scores are indicative of success in science, and then they say that males tend to have higher math SAT scores, and therefore it is OK to encourage more men in the higher ranks of science careers…but they never get around to saying what their SAT scores were. Larry Summers could smugly lecture to a bunch of accomplished women about how men and women were different and having testicles helps you do science, but his message really was "I have an intellectual edge over you because some men are incredibly smart, and I am a man", which is a logical fallacy.

From what I understand (and a quick check on Wikipedia confirms this), what got Larry Summers in trouble wasn't that he said we should use gender as a proxy for intelligence, but merely his suggestion that gender differences in ability could explain the observed under-representation of women in science.

The whole article is attacking a position that, as far as I know, nobody holds in the West any more: that women should be discriminated against because they are less good at science.

Well, he also seems to be attacking a second group that does exist (those that say that there are less women in science because they are less likely to have high math ability), mostly by mixing them up with the first, imaginary, group.

Replies from: Douglas_Knight, Nick_Tarleton
comment by Douglas_Knight · 2010-06-23T00:21:08.436Z · LW(p) · GW(p)

The whole article is attacking a position that, as far as I know, nobody holds in the West any more : that women should be discriminated against because they are less good at science.

Well, I think PZ Myers is a liar who has never heard of such people, but they do exist. Robin Hanson, for one. More representative is conchis's claim early in the comments that

some [Oxford] admissions fellows were discounting female students’ grades on the basis that they were more likely to reflect conscientiousness than talent.

Rewritten: I've heard hints along these lines in America, where girls get better grades, in both high school and college, than boys with the same SATs. This is suggested to be about conscientiously doing homework. If American colleges don't want to reward conscientiousness, they could change their grading to avoid homework.

That would make them like my understanding of Oxford, where I believe grades are based on high-stakes testing, not on homework. But I also thought admissions was based only on high-stakes testing. That is, I don't even know what the quoted claim means by "grades," nor have I been able to track down people openly admitting anything like it.

Do British students get grades other than A-levels? Are there sex divergences between the grades and A-levels? A-levels and predictions? I hear that Oxbridge grades are lower variance for girls than boys. I also hear that boys do better on the math SATs than on the math A-levels, which seems like it should be a condemnation of one of the tests.

comment by Nick_Tarleton · 2010-06-16T17:06:09.952Z · LW(p) · GW(p)

Well, he also seems to be attacking a second group that does exist (those that say that there are less women in science because they are less likely to have high math ability), mostly by mixing them up with the first, imaginary, group.

Which makes a kind of instrumental sense, in that advocacy of this position aids the first group by innocently explaining away gender inequalities. (I think it's obvious that most people don't distinguish well, in political situations, between incidental aid and explicit support.) Also, if evaluating individual intelligence is costly and/or inevitably noisy, it is (selfishly) rational for evaluators to give significant weight to gender, i.e. discriminate. And given how little people understand statistics, and the extent to which judgments of status/worth are tied to intelligence and to group membership, it seems inevitable that belief in group differences will lead people to discriminate far more than would be rational.
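
A toy model makes the "costly/noisy evaluation" point concrete. The normal-normal setup below is just an illustration of mine; nothing in the comment commits to it:

```latex
% Toy model: ability \theta of an applicant from group g, and a noisy test score x.
%   \theta \sim \mathcal{N}(\mu_g, \tau^2), \qquad x \mid \theta \sim \mathcal{N}(\theta, \sigma^2)
% The Bayes-optimal estimate of ability given the score is a weighted average:
\mathbb{E}[\theta \mid x] \;=\; \frac{\tau^2}{\tau^2+\sigma^2}\,x \;+\; \frac{\sigma^2}{\tau^2+\sigma^2}\,\mu_g
% The noisier the individual signal (larger \sigma^2), the more weight the estimate
% puts on the group mean \mu_g -- which is the sense in which a purely "selfish"
% evaluator discriminates more as individual evaluation gets harder.
```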

Replies from: Emile
comment by Emile · 2010-06-17T10:16:05.809Z · LW(p) · GW(p)

Which makes a kind of instrumental sense, in that advocacy of this position aids the first group by innocently explaining away gender inequalities. (I think it's obvious that most people don't distinguish well, in political situations, between incidental aid and explicit support.)

Can't this be said of just about all straw men ? Yes, setting up a straw man may be instrumentally rational, but is it the kind of thing we should be applauding ?

Say we have two somewhat similar positions:

  • Position A, which is false and maybe evil (in this case "we should discriminate against women when hiring scientists, because they aren't as likely to be very smart")
  • Position B, which is maybe true (in this case "the lack of female scientists could be due to the fact that they aren't as likely to be very smart")

A straw man is pretending that people arguing B are arguing A, or pretending that there's no difference between the two - which seems to be what P.Z. Myers is doing.

You're saying that position B gives support for position A, and, yes, it does. That can be a good reason to attack people who support position B (especially if you really don't like position A), but that holds even if position B is true.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-06-22T18:44:49.889Z · LW(p) · GW(p)

Can't this be said of just about all straw men ? Yes, setting up a straw man may be instrumentally rational, but is it the kind of thing we should be applauding ?

Agreed. I don't necessarily approve of this sort of rhetoric, but I think it's worth trying to figure out what causes it, and recognize any good reasons that might be involved. (I also don't mean to say that people who use this rhetoric are calculating instrumental rationalists — mostly, I think they, as I alluded to, don't recognize the possibility of saying things representative of and useful to an outgroup without being allied with it.)

comment by Alexandros · 2010-06-14T18:10:17.951Z · LW(p) · GW(p)

Off That (Rationalist Anthem) - Baba Brinkman

More about skeptics than rationalists, but still quite nice. Enjoy

Replies from: magfrump
comment by magfrump · 2010-06-14T18:43:28.450Z · LW(p) · GW(p)

I could have sworn that I'd seen this posted somewhere before, for example in this thread. Maybe it was on StumbleUpon...

comment by NancyLebovitz · 2010-06-18T11:48:28.145Z · LW(p) · GW(p)

Sometimes I try to catch up on Recent Comments, but it seems as though the only way to do it is one page at a time. To make matters slightly worse, the link for the Next page going pastwards is at the bottom of the page, but the page loads at the top, so I have to scroll down for each page.

Is there any more efficient way to do it?

Replies from: Houshalter, wedrifid, rhollerith_dot_com
comment by Houshalter · 2010-06-18T19:19:48.703Z · LW(p) · GW(p)

Hmm... I don't know about recent comments; I just go to the posts I'm following. Hit control+F and then type (or copy/paste) "load more comments" and go through and hit each one. Then erase it and type the current date or yesterday's date in the format "date month" (18 June) and it will highlight all of those comments (if you use YouTube a lot, you might already use this method on the "see all comments" page, except there you have to type "hour" or "minute" instead of an exact time, which is actually more convenient). When you're done checking all of the new comments you can erase that and put in "continue this thread" (is that right? I forget what it is exactly).

Hope that helps.

comment by wedrifid · 2010-06-18T12:16:59.178Z · LW(p) · GW(p)

Use the RSS feed that appears on the recent comments page. I use reader.google.com to read my RSS feeds. This will allow you to scroll back in bulk using just the scrollbar then read at leisure. It also shows comments as 'read' or 'unread' based on where you are up to.

comment by RHollerith (rhollerith_dot_com) · 2010-06-18T14:03:10.561Z · LW(p) · GW(p)

The only measure I know of that might make it more efficient to catch up on recent comments is for you to go to your preferences page, and where it says "Display 50 comments by default," change the "50" to some larger number. I have been using "200" on a very slow (33.6 K bits/sec) connection.

Are there periods in your life when you read or at least skim every comment made on Less Wrong? The reason I ask is that I am a computer programmer, and every now and then I imagine ways of making the software behind Less Wrong easier to use. To do that effectively, I need to know things about how people use Less Wrong.

Replies from: NancyLebovitz, Blueberry, NancyLebovitz
comment by NancyLebovitz · 2010-06-18T16:42:33.264Z · LW(p) · GW(p)

Here's my wishlist:

As much trn functionality as it seems to be worth coding -- in particular:

  • the ability to default to only seeing unread comments (or at least a Recent Comments page for individual posts as well as for the whole site) while reading a post's comments, with easy access to the old comments
  • the ability to default to not seeing chosen threads and sub-threads
  • tree navigation

If you want to find out how people generally use the site, I think a top level post asking about it is the only way to get the questions noticed. If you post it, I'll upvote it.

Replies from: pengvado
comment by pengvado · 2010-06-21T17:17:43.467Z · LW(p) · GW(p)

Seconded.

And in the absence of such a feature, my current compromise is to not look at a post until people have mostly stopped commenting on it, so that I can read the comments with threading and without redundancy. This is not very conducive to conversations, and as such I have mostly stopped commenting since adopting this strategy.

comment by Blueberry · 2010-06-18T16:24:02.500Z · LW(p) · GW(p)

I also find this problem annoying and would like to see more recent comments on a page. I usually read through every comment on recent comments when I come to LW.

comment by NancyLebovitz · 2010-06-18T15:02:12.676Z · LW(p) · GW(p)

Thanks. I've got it set at 500 comments, but I don't think it actually shows 500-- and in any case, I think it's just for comment threads, not for recent comments.

It's akrasia, but yeah, I've been using Recent Comments to read or at least skim everything.

I don't even have clear ideas of the right questions to ask about how people use LW, but a survey would be interesting.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-06-18T16:17:09.019Z · LW(p) · GW(p)

I've got it set at 500 comments, but I don't think it actually shows 500-- and in any case, I think it's just for comment threads, not for recent comments.

I never noticed that before, but you are right: all the /comments/ pages I have asked for have 100 comments on them regardless of how I try to change that. (I tried setting the number in prefs to a smaller value, logging out and in again, following a "Next" link.)

(Oddly, although it will show me a page with 100 comments on it if I click it, the URL in the "Next" link at the bottom of a /comments/ page contains the string "count=50".)

comment by [deleted] · 2010-06-16T14:45:03.781Z · LW(p) · GW(p)

Does anyone happen to know the status of Eliezer's rationality book?

Replies from: Alicorn
comment by Alicorn · 2010-06-16T18:27:07.002Z · LW(p) · GW(p)

The first draft is in progress.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-06-16T20:56:03.160Z · LW(p) · GW(p)

Second draft, technically. The first draft was a rough outline of the contents.

Replies from: Alicorn
comment by Alicorn · 2010-06-16T21:02:46.102Z · LW(p) · GW(p)

I wasn't counting that as a "draft".

comment by cousin_it · 2010-06-15T18:03:55.953Z · LW(p) · GW(p)

Another idea for friendliness/containment: run the AI in a simulated world with no communication channels. Right from the outset, give it a bounded utility function that says it has to solve a certain math/physics problem, deposit the correct solution in a specified place and stop. If a solution can't be found, stop after a specified number of cycles. Don't talk to it at all. If you want another problem solved, start another AI from a clean slate. Would that work? Are AGI researchers allowed to relax a bit if they follow these precautions?

ETA: absent other suggestions, I'm going to call such devices "AI bombs".

Replies from: timtyler, Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-15T19:22:01.684Z · LW(p) · GW(p)

Are AGI researchers allowed to relax a bit if they follow these precautions?

If these precautions become necessary, end of the world will follow shortly (which is the only possible conclusion of "AGI research", so I guess the researchers should rejoice at the work well done, and maybe "relax a bit", as the world burns).

Replies from: cousin_it
comment by cousin_it · 2010-06-15T19:31:21.942Z · LW(p) · GW(p)

I don't understand your argument. Are you saying this containment scheme won't work because people won't use it? If so, doesn't the same objection apply to any FAI effort?

Replies from: khafra, Vladimir_Nesov
comment by khafra · 2010-06-15T19:41:22.832Z · LW(p) · GW(p)

If my Vladimir-modelling heuristic is correct, he's saying that you're postulating a world where humanity has developed GAI but not FAI. Having your non-self-improving GAI solve stuff one math problem at a time for you is not going to save the world quickly enough to stop all the other research groups at a similar level of development from turning you and your boxed GAI into paperclips.

Replies from: cousin_it
comment by cousin_it · 2010-06-15T19:49:02.793Z · LW(p) · GW(p)

An AI in a simulated world isn't prohibited from improving itself.

More to the point, I didn't imagine I would save the world by writing one comment on LW :-) My idea of progress is solving small problems conclusively. Eliezer has spent a lot of effort convincing everybody here that AI containment is not just useless - it's impossible. (Hence the AI-box experiments, the arguments against oracle AIs, etc.) If we update to thinking it's possible after all, I think that would be enough progress for the day.

Replies from: khafra
comment by khafra · 2010-06-15T20:44:02.779Z · LW(p) · GW(p)

I don't think it's really an airtight proof--there's a lot that a sufficiently powerful intelligence could learn about its questioners and their environment from a question; and when we can't even prove there's no such thing as a Langford Basilisk, we can't establish an upper bound on the complexity of a safe answer. Essentially, researchers would be constrained by their own best judgement in the complexity of the questions and of the responses.

Of course, all that's rather unlikely, especially as it (hopefully) wouldn't be able to upgrade its hardware--but you're right, software-only self-improvement would still be possible.

Replies from: cousin_it
comment by cousin_it · 2010-06-15T21:10:51.246Z · LW(p) · GW(p)

Yes, I agree. It would be safest to use such "AI bombs" for solving hard problems with short and machine-checkable solutions, like proving math theorems, designing algorithms or breaking crypto. There's not much point for the AI to insert backdoors into the answer if it only cares about the verifier's response after a trillion cycles, but the really paranoid programmer may also include a term in the AI's utility function to favor shorter answers over longer ones.
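
A minimal sketch of that scoring rule, with a toy factoring task standing in for "breaking crypto" (the task, the verifier, and the cycle cap are all stand-ins of mine, not part of the proposal):

```python
# Toy version of the bounded "AI bomb" objective: a machine-checkable verifier,
# zero utility for wrong answers, a length penalty for correct ones, and a hard
# cap on how long the search is allowed to run.

N = 10403  # toy stand-in for a "break this crypto" target (10403 = 101 * 103)

def verifier(answer: int) -> bool:
    """Machine-checkable test: is `answer` a non-trivial factor of N?"""
    return 1 < answer < N and N % answer == 0

def utility(answer: int) -> float:
    """Bounded utility: 0 unless the verifier accepts, then higher for shorter answers."""
    return 1.0 / (1 + len(str(answer))) if verifier(answer) else 0.0

MAX_CYCLES = 10**6  # the "bomb" stops after this many steps no matter what

best, best_u = None, 0.0
for candidate in range(2, MAX_CYCLES):  # stand-in for whatever search the boxed AI does
    u = utility(candidate)
    if u > best_u:
        best, best_u = candidate, u

print(best, best_u)  # -> 101 0.25
```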

comment by Vladimir_Nesov · 2010-06-15T19:50:21.593Z · LW(p) · GW(p)

What khafra said - also this sounds like propelling toy cars using thermonuclear explosions. How is this analogous to FAI? You want to let the FAI genie out of the bottle (although it will likely need a good sandbox for testing ground).

Replies from: cousin_it
comment by cousin_it · 2010-06-15T19:53:01.880Z · LW(p) · GW(p)

Yep, I caught that analogy as I was writing the original comment. Might be more like producing electricity from small, slow thermonuclear explosions, though :-)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-15T20:10:13.899Z · LW(p) · GW(p)

Not small explosions. Spill one drop of this toxic stuff and it will eat away the universe, nowhere to hide! It's not called "intelligence explosion" for nothing.

Replies from: cousin_it
comment by cousin_it · 2010-06-15T20:22:53.981Z · LW(p) · GW(p)

That's right - I didn't offer any arguments that a containment failure would not be catastrophic. But to be fair, FAI has exactly the same requirements for an error-free hardware and software platform, otherwise it destroys the universe just as efficiently.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-15T20:33:26.675Z · LW(p) · GW(p)

Sure, prototypes of FAI will be similarly explosive.

comment by Risto_Saarelma · 2010-06-18T06:37:21.616Z · LW(p) · GW(p)

Aaron Swartz: That Sounds Smart

comment by Paul Crowley (ciphergoth) · 2010-06-17T11:40:12.689Z · LW(p) · GW(p)

I recently read a fascinating paper that argued based on what we know about cognitive bias that our capacity for higher reason actually evolved as a means to persuade others of what we already believe, rather than as a means to reach accurate conclusions. In other words, rationalization came first and reason second.

Unfortunately I can't remember the title or the authors. Does anyone remember this paper? I'd like to refer to it in this talk. Thanks!

Replies from: Morendil
comment by Morendil · 2010-06-17T11:43:58.256Z · LW(p) · GW(p)

That would probably be "Why do humans reason" by Mercier and Sperber, which I covered in this post.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-06-17T11:51:28.826Z · LW(p) · GW(p)

The very one. Thanks - and wow, that was swift!

comment by Kevin · 2010-06-14T23:50:43.952Z · LW(p) · GW(p)

Feds under pressure to open US skies to drones

http://news.yahoo.com/s/ap/20100614/ap_on_bi_ge/us_drones_over_america

comment by magfrump · 2010-06-14T18:57:39.123Z · LW(p) · GW(p)

Looking through a couple of posts on young rationalists, it occurred to me to ask the question, how many murderers have a loving relationship with non-murderer parents?

Is there a way to get these kinds of statistics? Is there a way to filter them for accuracy? Accuracy both of 'loving relationship' and of 'guilty of murder' (i.e. plea bargains, false charges, etc.)

Replies from: Dagon
comment by Dagon · 2010-06-14T19:39:47.640Z · LW(p) · GW(p)

I started to write: The probabilities in my priors are so low that I don't expect any update to occur, even if you could accurately measure. Then I thought: Wait, that's what 'prior' means: of course I don't expect any update to occur! Rationality is hard.

So instead, I'll phrase my confusion this way: I have a hard time stating a belief for which even a surprising result to this measurement would matter. There are so many other reasons to recommend being raised by loving parents that "increased likelihood of murder from near-zero to still-near-zero" is unlikely to change such a preference.

And the overall murder rate is already so low that the reverse isn't true either: you shouldn't worry significantly less about an acquaintance murdering someone just because they have loving parents. Because in most cases you CANNOT worry less than you already should, which is near-zero.

Replies from: magfrump
comment by magfrump · 2010-06-15T03:51:07.483Z · LW(p) · GW(p)

I'm not really thinking in terms of particular issues, the more interesting questions in my mind are the issues that would arise in collecting such data.

comment by SilasBarta · 2010-06-18T20:56:46.198Z · LW(p) · GW(p)

Today is Autistic Pride Day, if you didn't know. Celebrate by getting your fellow high-functioning autistic friends together to march around a populated area chanting "Aspie Power!" Preferably with signs that say "Neurotypical = manipulative", "fake people aren't real", or something to that effect.

Kidding. (About everything after the first sentence, I mean.)

comment by apophenia · 2010-06-17T21:15:53.050Z · LW(p) · GW(p)

one five eight nine eight eight eight nine nine eight SEVEN wait. why seven. seven is the nine thousandth deviation. update. simplest explanation. all ones. next explanation. all ones and one zero. next explanation. random ones and zeros with probability point seven nine nine seven repeating. next explanation pi. gap. next explanation. decimal pi with random errors according to poisson distribution converted to binary. next explanation. one seven one eight eight five two decimals of pi with random errors according to poisson distribution converted to binary followed by eight five nine zero one digits of reflexive code. current explanation--

"Eric, you've got to come over and look at this!" Jerry explained excitedly into the phone. "It's not those damn notebooks again, is it? I've told you, I could just write a computer program and you'd have all your damn results for the last year inside a week," Eric explained sleepily for the umpteenth time. "No, no. Well... yes. But this is something new, you've got to take a look," Jerry wheedled. "What is it this time? I know, it can calculate pi with 99.9% percent accuracy, yadda yadda. We have pi to billions of decimal places with total accuracy, Jerry. You're fifty years too late." "No, I've been trying something new. Come over." Jerry hung up the phone, clearly upset. Eric rubbed his eyes. Fifteen minutes peering at the crackpot notebooks and nodding appreciatively would sooth his friend's ego, he knew. And he was a good friend, if a little nuts. Eric took one last longing look at his bed and grabbed his house key.

"And you see this pattern? The ones that are nearly diagonal here?" "Jerry, it's all a bunch of digits to me. Are you sure you didn't make a mistake?" "I double check all my work, I don't want to go back too far when I make a mistake. I've explained the pattern twice already, Eric." "I know, I know. But it's Saturday morning, I'm going to be a bit--let me get this straight. You decided to apply the algorithm to its old output." "No, not its own output, that's mostly just pi. The whole pad." "Jerry, you must have fifty of these things. There's no way you can--" "Yeah, I didn't go very far. Besides, the scratch pads grow faster than the output as I work through the steps anyway." "Okay, okay. So you run through these same steps with your scratch pad numbers, and you get correct predictions then too?" "That's not the point!" "Calm down, calm down. What's the point then?" "The point is these patterns in the scratch work--" "The memory?" "Yeah, the memory." "You know, if you'd just let me write a program, I--" "No! It's too dangerous." "Jerry, it's a math problem. What's it going to do, write pi at you? Anyway, I don't see this pattern..." "Well, I do. And so then I wondered, what if I just fed it ones for the input? Just rewarded it no matter what it did?" "Jerry, you'd just get random numbers. Garbage in, garbage out." "That's the thing, they weren't random." "Why the hell are you screwing around with these equations anyway? If you want to find patterns in the Bible or something... just joking! Oww, stop. I kid, kid!" "But, I didn't get random numbers! I'm not just seeing things, take a look. You see here in the right hand column of memory? We get mostly zeros, but every once in a while there's a one or two." "Okaaay?" "And if you write those down we have 2212221..." "Not very many threes?" "Ha ha. It's the perfect numbers, Eric. I think I stumbled on some way of outputting the perfect numbers. Although the digits are getting further spaced apart, so I don't know how long it will stay faster than factoring." "Huh. That's actually kinda cool, if they really are the perfect numbers. You have what, five or six so far? Let's keep feeding it ones and see what happens. Want me to write a program? I hear there's a cash prize for the larger ones." "NO! I mean, no, that's fine, Eric. I'd prefer you not write a program for this, just in case." "Geez, Jerry. You're so paranoid. Well, in that case can I help with the calculations by hand? I'd love to get my claim to fame somehow." "Well... I guess that's okay. First, you copy this digit from here to here..."

comment by simplicio · 2010-06-16T21:01:49.050Z · LW(p) · GW(p)

Episode of the show Outnumbered that might appeal to this community. The show in general is very funny, smart and well acted, children's roles in particular.

comment by whpearson · 2010-06-15T19:26:11.603Z · LW(p) · GW(p)

I'm looking for some concept which I am sure has been talked about before in stats but I'm not sure of the technical term for it.

Let's say you have a function you are trying to guess, with a certain range and domain. How would you talk about the amount of data you would need to be likely to recover the actual function from noisy data? My current thoughts are that the larger the cardinality of the domain, the more data you would need (in a simple relationship), and that the type of noise would determine how much the size of the range affects the amount of data needed.

Replies from: gwern, cupholder
comment by gwern · 2010-06-21T02:11:00.079Z · LW(p) · GW(p)

How would you talk about the amount of data you would need to be likely to recover the actual function from noisy data?

First, I would specify what set my 'function' is in. Are there 2 possibilities? 10? A million? log2(x) tells me how many bits of information I need. Then I would treat the data as coming to me through a noisy channel. How noisy? I assume you already know how noisy. Now I can plug in the noise level to Shannon's theorem, and that tells me how many noisy bits I need to get my log2(x) bits.

(This all seems like very layman information theory, which makes me wonder if something makes your problem harder than this.)
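
A back-of-the-envelope version of that calculation, assuming (my assumption, not stated above) that the noise is a binary symmetric channel with known flip probability p:

```python
from math import log2

def bsc_capacity(p: float) -> float:
    """Capacity in bits per use of a binary symmetric channel with flip probability p."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * log2(p) - (1 - p) * log2(1 - p)  # binary entropy of the noise
    return 1 - h

def noisy_bits_needed(num_hypotheses: int, p: float) -> float:
    """Shannon-style lower bound on noisy observations needed to single out one hypothesis."""
    return log2(num_hypotheses) / bsc_capacity(p)

# ~a million possible functions, 10% noise -> you need at least ~38 noisy bits.
print(noisy_bits_needed(2**20, 0.1))
```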

Replies from: whpearson
comment by whpearson · 2010-06-25T14:20:13.752Z · LW(p) · GW(p)

By data I meant training data (in a machine learning context), not information.

And it wasn't really the math I was after, it is quite simple, just whether it had been discussed before.

My thoughts on the math: if the cardinality of the input is X and of the output is Y, then the space of functions you are exploring is bounded by Y^X (Y possible outputs for each of the X inputs). E.g. there are 2^2 = 4 possible functions from 1 binary bit to another (set to 0, set to 1, invert, keep the same). I've come across this in simple category theory.

However, in order to fully specify which function it is (assuming no noise) you need a minimum of 2 pieces of training data (where training data is input-output pairs). If you have the training pair (0,0) you don't know if that means "keep the same" or "set to 0". Fairly obviously, you need as many unique samples of training data as the cardinality of the domain. You need more when you have noise.

This is a less efficient way of getting information about functions than just getting the Turing number or similar.

So I'm really wondering if there are guidelines for people designing machine learning systems that I am missing. So if you know you can only get 5000 training examples, you know that a system which tries to learn from the entire space of functions with a domain much larger than 5000 is not going to be very accurate unless you have put a lot of information into the prior/hypothesis space.
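
A small simulation of that point (my construction; the comment above only gives the counting argument): with binary outputs on a domain of size X there are 2^X candidate functions, and with noisy labels you need noticeably more than X samples before maximum likelihood reliably picks out the true one.

```python
import random
from itertools import product

X = 6        # domain size
NOISE = 0.2  # probability that a training label is flipped
TRIALS = 200

def identified(num_samples: int) -> float:
    """Fraction of trials in which ML over all 2**X candidate functions recovers the truth."""
    hits = 0
    for _ in range(TRIALS):
        truth = [random.randint(0, 1) for _ in range(X)]
        data = [(x, truth[x] ^ (random.random() < NOISE))
                for x in (random.randrange(X) for _ in range(num_samples))]
        def agreement(f):  # how many training pairs a candidate function matches
            return sum(f[x] == y for x, y in data)
        best = max(product([0, 1], repeat=X), key=agreement)
        hits += list(best) == truth
    return hits / TRIALS

for n in (6, 12, 24, 48):
    print(n, identified(n))  # accuracy keeps climbing well past n = X
```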

comment by cupholder · 2010-06-16T00:06:11.128Z · LW(p) · GW(p)

The closest thing I can think of is identifiability, although that's more about whether it's possible to identify a function given an arbitrarily large amount of data.

Replies from: whpearson
comment by whpearson · 2010-06-17T15:44:48.344Z · LW(p) · GW(p)

Hmm, not quite what I was looking for but interesting none the less.

Thanks.

comment by Liron · 2010-06-14T20:57:56.566Z · LW(p) · GW(p)

Physics question: Is it physically possible to take any given mass, like the moon, and annihilate the mass in a way that yields usable energy?

Replies from: timtyler, DanArmak
comment by timtyler · 2010-06-15T21:19:17.174Z · LW(p) · GW(p)

Fusion (mostly) does that. It works better with some elements, of course.

comment by DanArmak · 2010-06-14T21:00:11.440Z · LW(p) · GW(p)

Yes, if you collide it with the same mass of antimatter. Edit: I don't know enough to say if there are other ways.

This may not be very practical to do to the whole moon at once though :-)

Replies from: Liron, DanArmak
comment by Liron · 2010-06-15T08:36:13.014Z · LW(p) · GW(p)

Yeah but does it require a lot of energy/negentropy to get ahold of the necessary antimatter? I'm wondering whether the moon's mass makes it analogous to a charged capacitor or an uncharged capacitor.

Replies from: Mitchell_Porter, wedrifid
comment by Mitchell_Porter · 2010-06-15T09:19:55.773Z · LW(p) · GW(p)

Antimatter is expensive to make. It would require at least the whole world GDP to make one anti-Liron. Conservation of energy says that to make an antiparticle, you need a collision with kinetic energy at least equal to the rest energy of the antiparticle you're making (and in practice far more, since antiparticles come out of pair production along with their ordinary partners, and most of the collision energy goes elsewhere). Solar flares make some antimatter as they punch through the solar atmosphere, but good luck getting hold of it before it annihilates.
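
For a rough sense of scale, a Fermi sketch -- the production efficiency, electricity price, and GDP figure below are all rough assumptions of mine, not anything claimed above:

```python
c = 3.0e8                        # m/s
mass = 80.0                      # kg: one roughly person-sized lump of antimatter
rest_energy = mass * c**2        # J released on annihilation (and needed, at minimum, to create it)

efficiency = 1e-9                # assumed fraction of accelerator input energy ending up as antiprotons
price_per_joule = 0.10 / 3.6e6   # assumed ~$0.10 per kWh of electricity
world_gdp_2010 = 6e13            # assumed 2010 world GDP in dollars, order of magnitude

cost = rest_energy / efficiency * price_per_joule
print(f"rest energy: {rest_energy:.1e} J")                    # ~7e18 J
print(f"naive production cost: ${cost:.1e}")                  # ~1e20 dollars
print(f"in units of world GDP: {cost / world_gdp_2010:.1e}")  # 'whole world GDP' is, if anything, generous
```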

The standard cosmological model says that shortly after the big bang, matter and antimatter existed in equal quantities, but there were interactions which favored the production of matter, and so all the antimatter was annihilated, leaving an excess of matter, which then in the next stage formed the first atomic nuclei. Antimatter is therefore rare in the universe. There are probably no natural antistars, for example. So it is expensive to come by, but (for a cosmic civilization) it might be a good way to store energy.

Replies from: DanArmak, Liron
comment by DanArmak · 2010-06-15T14:03:08.657Z · LW(p) · GW(p)

There are probably no natural antistars, for example.

And if there are, we don't know how to identify them from far away, do we?

BTW, can there be antimatter black holes? My limited understanding of physics is that matter/antimatter falling into a black hole passes the event horizon before it can interact with anything that fell into the hole in the past; and once it passes the event horizon, even if it mutually annihilates with something already in the black hole, the results can't escape outside. So from the outside there's no difference between matter, antimatter, and mixed black holes.

Replies from: cupholder, wedrifid
comment by cupholder · 2010-06-15T18:43:23.718Z · LW(p) · GW(p)

So from the outside there's no difference between matter, antimatter, and mixed black holes.

I saw this and immediately thought of the no hair theorem, which says that the only distinguishing (reference frame-independent) characteristics of black holes are their mass, their charge and their angular momentum. Turns out that Wikipedia uses matter v. antimatter black holes as an example of the theorem's implications!

Suppose two black holes have the same masses, electrical charges, and angular momenta, but the first black hole is made out of ordinary matter whereas the second is made out of antimatter, then they will be completely indistinguishable to an observer outside the event horizon.

Replies from: DanArmak, wedrifid
comment by DanArmak · 2010-06-15T21:30:11.904Z · LW(p) · GW(p)

So if I find a natural antimatter star, and I'm afraid someone will use it as a weapon, the safest thing to do is to throw it into a black hole.

Suppose two black holes have the same masses, electrical charges, and angular momenta, but the first black hole is made out of ordinary matter whereas the second is made out of antimatter, then they will be completely indistinguishable to an observer outside the event horizon.

In other words, even if we collide a matter black hole and an antimatter black hole, we won't see any evidence of mutual annihilation - we'll just get a double-size black hole. Cool.

Replies from: Douglas_Knight, wedrifid
comment by Douglas_Knight · 2010-06-20T01:25:33.814Z · LW(p) · GW(p)

So if I find a natural antimatter star, and I'm afraid someone will use it as a weapon, the safest thing to do is to throw it into a black hole.

I'm sorry if I'm explaining the joke, but the rule of thumb is that this only saves you an order of magnitude of violence; 10% of the mass is released as radiation.

In fact, "throw it into a black hole" seems like a better answer to Liron's question than "collide it with equally much antimatter." It's not as efficient, but it's a lot easier to find black holes than antimatter. It may be easier in the annihilation case to actually use the energy, but I'm not sure.

Replies from: wedrifid
comment by wedrifid · 2010-06-20T02:58:32.317Z · LW(p) · GW(p)

It may be easier in the annihilation case to actually use the energy, but I'm not sure.

Probably. If nothing else, for a given amount of energy released you will probably be able to stand closer to collect it in the antimatter case.

comment by wedrifid · 2010-06-19T20:20:30.679Z · LW(p) · GW(p)

So if I find a natural antimatter star, and I'm afraid someone will use it as a weapon, the safest thing to do is to throw it into a black hole.

If an antimatter star (of one solar mass) were thrown at a matter star, how far away would they need to be for the ecosystem on Earth not to be seriously damaged?
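
A very rough upper-bound sketch (everything here is an assumption of mine: complete and prompt annihilation, isotropic radiation, spectrum ignored, and "not seriously damaged" read as "Earth intercepts less than about one extra day's worth of sunlight"):

```python
import math

c = 3.0e8                # m/s
M_sun = 2.0e30           # kg
R_earth = 6.4e6          # m
solar_constant = 1361.0  # W/m^2 at Earth
light_year = 9.46e15     # m

E = 2 * M_sun * c**2     # J if both stars annihilate completely (~3.6e47 J)

# Energy Earth intercepts at distance d: E * (pi R_earth^2) / (4 pi d^2).
# Solve for the d at which that equals ~one day of ordinary solar input.
one_day_of_sunlight = solar_constant * math.pi * R_earth**2 * 86400
d = R_earth * math.sqrt(E / (4 * one_day_of_sunlight))

print(f"total energy released: {E:.1e} J")
print(f"distance under these assumptions: {d / light_year:.0f} light years")  # ~1600-1700 ly
```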

If throwing antimatter stars around is difficult we may be able to resort to playing 'asteroids'. That is, throw actual asteroids at it, resulting in smaller blasts of annihilation and probably in the star being fragmented, allowing further asteroids to finish the clean up process.

Replies from: DanArmak
comment by DanArmak · 2010-06-19T21:38:42.816Z · LW(p) · GW(p)

That is, throw actual asteroids at it, resulting in smaller blasts of annihilation and probably in the star being fragmented, allowing further asteroids to finish the clean up process.

Wouldn't we need asteroids of total mass comparable to the anti-star? Where would you find enough? Any planets or asteroid belts around that star would be antimatter too, almost certainly.

Replies from: wedrifid
comment by wedrifid · 2010-06-19T22:05:55.994Z · LW(p) · GW(p)

Wouldn't we need asteroids of total mass comparable to the anti-star? Where would you find enough? Any planets or asteroid belts around that star would be antimatter too, almost certainly.

Yes, you would need that much matter to be annihilated. But finding one system's worth of mass is a (relatively) trivial part of the problem. It is a whole order of plausibility easier than trying to throw an antimatter star into a black hole. Taking apart the nearby systems and throwing the planets and asteroids at the offending star is just an engineering problem once you have that sort of tech. I could probably do it myself if you gave me 30,000 years to work out the finer details. You either push on the asteroid while standing on something bigger or you launch tiny things off the asteroid at large fractions of the speed of light in a suitable direction.

Throwing a whole antimatter star into a suitable black hole? I can't even do that one in principle (within 2 minutes of thought). Apart from being really big and too hot to put propulsion devices on... it's made out of F@#% antimatter. The obvious options for accelerating it are gravity and photons, neither of which care about the 'matter/antimatter' distinction. If you have enough gravity hanging about in the vicinity then the star is probably already falling into the black hole. And if you are planning on pushing a star about using only photons.... well, you may end up using more than just one star worth of matter to pull that off.

Then there is the problem of finding a suitably large black hole to throw it at. They tend to have stuff in their orbit (often the rest of the galaxy). Navigating an antimatter star to the black hole without it annihilating itself on the way there would be tricky. It isn't easy to steer these things.

What may be easier is to dedicate a year or two run time on a Jupiter Brain to work out just the right size rock to throw at just the right time at just the right place. The resulting explosion would be chosen to knock the star in the right direction, or in the right pieces in the right directions, or whatever it is that antimatter stars do when you throw rocks at them. Then most of the destruction would be from it hitting the other stars that you aimed for. You would dispose of the weapon by triggering it in a controlled manner.

comment by wedrifid · 2010-06-19T20:09:52.963Z · LW(p) · GW(p)

I saw this and immediately thought of the no hair theorem, which says that the only distinguishing (reference frame-independent) characteristics of black holes are their mass, their charge and their angular momentum.

Wait... black holes keep their electrical charge? As in... if I shoot enough electrons at a black hole it will start to repel any negatively charged matter rather than attract it? No, that'd allow me to scout out information past the event horizon. Hmmm...

... Apparently charged black holes have two horizons, an event horizon and a Cauchy horizon. But I am still not sure what would happen in the case of a constant stream of electrons. Could someone with physics knowledge fill me in? What does happen when the black hole reaches a critical charge?

(Similarly: with two suitably charged black holes, could I make them repel each other rather than attract? No, reject that -- it would also let me get information out.)

Replies from: cupholder
comment by cupholder · 2010-06-20T07:58:49.817Z · LW(p) · GW(p)

As in... if I shoot enough electrons at a black hole it will start to repel any negatively charged matter rather than attract it? No, that'd allow me to scout out information past the event horizon.

(Disclaimer: I'm not a physicist, so this may be BS.) This might not be a problem. If a black hole repels negative charges, all that tells you is the black hole's position and net charge, and AFAIK that kind of information is allowed to 'escape' the black hole: position is OK because that's frame-dependent, and the no-hair theorem says it's OK to know the net charge.

Replies from: wedrifid
comment by wedrifid · 2010-06-20T09:55:54.283Z · LW(p) · GW(p)

I am just speaking BS too but:

If a black hole can be charged sufficiently that it repels an electron rather than attracts via gravity then:

  • There will be a point at which the gravity is perfectly balanced by the repulsion of the negative charges.
  • Just after that point there will be a point where the electron is subject to a slight acceleration away from the center of the black hole.
  • If I shoot an electron at a suitable speed at such a black hole the electron will slow down and reverse in direction at a point determined by the initial speed and the acceleration. This point could be below the event horizon.
  • If such an electron hit something inside the event horizon it would not return to me.
  • This tells me something about things inside the event horizon.
  • The teacher says I am not allowed to discover things about the inside of the event horizon.
  • Something in the above scenario must not be right.
Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-06-20T10:05:57.686Z · LW(p) · GW(p)

I think you will find that the charge repulsion never exceeds the gravitational attraction in this way. The mass of a black hole places a bound on how much charge it can have; if the bound is exceeded, you get a naked singularity. You may actually be rediscovering this!

ETA: The two horizons you mentioned earlier merge when this bound is reached. I suppose this means that if you try to shoot a charge into one of these "extremal" black holes, the charge will be repelled outside the event horizon. That would be a consistent way for everything to work out, so that the bound can never be violated. But I will have to check.
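
For reference, the textbook Reissner-Nordström result being alluded to (this is the standard formula, not something derived in this thread):

```latex
% Horizon radii of a black hole with mass M and charge Q (Reissner-Nordstrom metric):
r_{\pm} \;=\; \frac{GM}{c^{2}} \;\pm\; \sqrt{\left(\frac{GM}{c^{2}}\right)^{2}
              - \frac{G\,Q^{2}}{4\pi\varepsilon_{0}\,c^{4}}}
% Both horizons are real only while
Q^{2} \;\le\; 4\pi\varepsilon_{0}\,G\,M^{2},
% and at equality they merge (the "extremal" case).  Pushing Q past this bound would
% expose a naked singularity, which is the usual reason to expect that further charge
% simply can't be forced in.
```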

Replies from: wedrifid, wedrifid, red75
comment by wedrifid · 2010-06-20T11:02:26.722Z · LW(p) · GW(p)

I suppose this means that if you try to shoot a charge into one of these "extremal" black holes, the charge will be repelled outside the event horizon.

You could be right, but how? I inject enough electrons into the black hole to maintain it at as high a charge as possible. Then I launch more electrons from a platform that is doing a slingshot pass right by the event horizon. And I dedicate the energy from a nearby star to shooting photons at it to force the extra particles in...

And even forgetting extreme options: just why? If the black hole is not charged to a level that will repel electrons it will attract them. Add more and they will just hover there without accelerating. Add more still and they will be repelled. This works unless weird math comes into play.

red75 suggests discharge via Hawking radiation. I would not be able to rule out some sort of asymptotic increase in Hawking radiation discharge toward my electron input rate. (Basically because I don't know how Hawking radiation works.)

comment by wedrifid · 2010-06-20T10:43:30.048Z · LW(p) · GW(p)

I had never heard the term before but that is just where my thoughts were leading me.

Looking a bit more closely it would seem that 'strong' forces would ensure there is always at least a tiny horizon at which even electrons couldn't escape no matter what the charge. (And if there wasn't the thing would fall apart). It just doesn't matter how big you make the charge. 'Squared' just doesn't cut it. So while the electrons would return from where even photons could not escape they would still get stuck if they went deep enough. But I don't know where things like strong forces start to break down...

comment by red75 · 2010-06-20T10:41:14.324Z · LW(p) · GW(p)

BTW, it seems that a charged black hole will discharge via Hawking radiation.

comment by wedrifid · 2010-06-19T19:49:07.317Z · LW(p) · GW(p)

And if there are, we don't know how to identify them from far away, do we?

Yes. Not from the star itself but rather from the interstellar dust (hydrogen atoms floating about, etc.). We would detect emissions from interactions at the boundary between 'mostly empty but with bits of matter' and 'mostly empty but with bits of antimatter'.

comment by Liron · 2010-06-19T19:11:39.373Z · LW(p) · GW(p)

So, uncharged capacitor?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-06-20T07:40:30.914Z · LW(p) · GW(p)

The analogy is indeterminate. The energy is there, but in a matter-antimatter "capacitor" or "fuel cell", you would need both ingredients to release it. So maybe it's like half a charged capacitor.

comment by wedrifid · 2010-06-19T20:58:21.213Z · LW(p) · GW(p)

I'm wondering whether the moon's mass makes it analogous to a charged capacitor or an uncharged capacitor.

There isn't an answer to that unless we specify how we intend to consider using the moon. For the most part it isn't analogous to either kind of capacitor, but we can construct scenarios for either case, I expect.

We could, for example, use the moon to store either gravitational or kinetic energy. That would make it fairly charged (but leaking charge over time...)

We could use the moon to store heat energy --> it's uncharged.

As for direct annihilation of the mass to release energy... would you consider that to be analogous to a 'capacitor'? Sounds like more of a 'battery' to me.

comment by DanArmak · 2010-06-14T21:18:18.854Z · LW(p) · GW(p)

This may not be very practical to do to the whole moon at once though :-)

Well, I shouldn't speak before checking. Taking numbers from Wikipedia (ETA fixed numbers):

  • The moon has a mass of 7.36e22 kg; converting it to energy would yield 6.624e39 J.
  • The Sun's total output is about 3.86e26 J/s, so this is the equivalent of roughly 540,000 years of the Sun's energy output (if you have a Dyson sphere; the division is spelled out in the sketch below).
  • A nova releases ~1e34-1e37 J over a few days - at most about 1/600 as much as converting the moon to energy. A core-collapse supernova bursts 1e44-1e46 J of energy in 10 seconds - a lot more. (Range is according to different Google results.)

ETA: the numbers were completely wrong before and I corrected them.
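
The arithmetic spelled out (same figures as in the bullets above; the Dyson-sphere and nova comparisons are just divisions):

```python
c = 3.0e8              # m/s
moon_mass = 7.36e22    # kg
E = moon_mass * c**2   # J from total conversion, ~6.6e39 J

sun_output = 3.86e26         # W (total solar luminosity)
seconds_per_year = 3.156e7

print(f"E = {E:.3e} J")
print(f"= {E / (sun_output * seconds_per_year):.2e} years of total solar output")  # ~5.4e5 years
print(f"= {E / 1e37:.0f} x the upper end of a nova (~1e37 J)")                     # ~660 novae
```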

Replies from: Christian_Szegedy
comment by Christian_Szegedy · 2010-06-14T22:46:35.821Z · LW(p) · GW(p)

Your numbers seem to be off (e.g. 4.26e9 J/sec would be truly minuscule). You probably meant 4.29e29 J/sec, but then 5.74e5 years are wrong. According to Wikipedia, the Sun's energy output is 1.2e34 J/s, which is still at odds with both of your numbers.

comment by SilasBarta · 2010-06-14T13:06:44.262Z · LW(p) · GW(p)

I'd like to pose a sort of brain-teaser about Relativity and Mach's Principle, to see if I understand them correctly. I'll post my answer in rot13.

Here goes: Assume the universe has the same rules it currently does, but instead consists of just you and two planets, which emit visible light. You are standing on one of them and looking at the other, and can see the surface features. It stays at the same position in the sky.

As time goes by, you gradually get a rotationally-shifted view of the features. That is, the longitudinal centerline of the side you see gradually shifts. This change in view could result from the other planet rotating, or from your planet revolving around it while facing it. (Remember, both planets emit light, so you don't see a different portion being in a shadow like the moon's phases.)

Question: What experiment could you do to determine whether the other planet is spinning, or your planet is revolving around it while facing it?

My answer (rot13): Gurer vf ab jnl gb qb fb, orpnhfr gurer vf ab snpg bs gur znggre nf gb juvpu bar vf ernyyl unccravat, naq vg vf yvgreny abafrafr gb rira guvax gung gurer vf n qvssrerapr. Gur bayl ernfba bar zvtug guvax gurer'f n qvssrerapr vf sebz orvat npphfgbzrq gb n havirefr jvgu zber guna whfg gurfr gjb cynargf, juvpu sbez n onpxtebhaq senzr ntnvafg juvpu bar bs gurz pbhyq or pbafvqrerq fcvaavat be eribyivat.

Replies from: prase, Vladimir_M, mkehrt, Wei_Dai, Roko, PeterS
comment by prase · 2010-06-14T18:41:31.003Z · LW(p) · GW(p)

Imagine a simplified scenario: only one planet. Is the planet rotating or not? You could construct a Foucault pendulum and see. It will show you a definite answer: either its plane of oscillation moves relatively to the ground or not. This doesn't depend on distant stars. If your planet is heavy and dense like hell, you could see the difference between a "rotating" Kerr metric and a "static" Schwarzschild metric.

Of course, general relativity is generally covariant, and any motion can be interpreted as free fall in some gravitational field; what's more, there is no absolute background spacetime with respect to which to measure acceleration. So you can likely find coordinates in which the planet is static and the pendulum's movement is explained by a changing gravitational field. The price paid is that it will be necessary to postulate weird boundary conditions at infinity. It is possible that more versions of the boundary conditions are acceptable in the absence of distant objects, and the question of whether the planet is rotating is then less well defined.

Carlo Rovelli in his Quantum Gravity (I once downloaded it from arXiv; now it seems unavailable, but it can probably still be found somewhere on the net) considers eight versions of Mach's principle (MP). This is what he says (he has discussed the parabolic water surface of a rotating bucket before, instead of two planets or Foucault pendula):

  • MP1: Distant stars can affect local inertial frame. True. Because matter affects the gravitational field.
  • MP2: The local inertial frame is completely determined by the matter content of the universe. False. The gravitational field has independent degrees of freedom.
  • MP3: The rotation of the inertial frame inside the bucket is in fact dragged by the bucket, and this effect increases with the mass of the bucket. True. This is the Lense-Thirring effect: a rotating mass drags the inertial frames in the vicinity.
  • MP4: In the limit in which the mass is large, the internal inertial reference frame rotates with the bucket. Depends on the details of the way the limit is taken.
  • MP5: There can be no global rotation of the universe. False. Einstein believed this to be true in GR, but Goedel's solution is a counter-example.
  • MP6: In the absence of matter, there would be no inertia. False. There are vacuum solutions of the Einstein equations.
  • MP7: There is no absolute motion, only motion relative to something else, therefore the water in the bucket does not rotate in absolute terms, it rotates with respect to some dynamical physical entity. True. This is the basic physical idea of GR.
  • MP8: The local inertial frame is completely determined by the dynamical fields of the universe. True. In fact, this is precisely Einstein key idea.

I think number 4 is especially relevant here. The boundary conditions or the global topology of the universe have to be taken into account, else the two-planet scenario is not entirely defined.

Edit: The last remark doesn't make much sense after all. The planets aren't thought to be too heavy and the dragging effect shouldn't be too big, and its relation to boundary conditions isn't straightforward. Nevertheless, the boundary conditions still play an important role (see my subcomment here).

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T20:37:38.351Z · LW(p) · GW(p)

Imagine a simplified scenario: only one planet. Is the planet rotating or not? You could construct a Foucault pendulum and see. It will show you a definite answer: either its plane of oscillation moves relatively to the ground or not. This doesn't depend on distant stars.

Sure it does. If the rest of the objects in the universe were rotating in unison around the earth while the earth was still, that would be observationally indistinguishable from the earth rotating. The GR equations (so I'm told[1]) account for this in that, if the rest of the universe were treated as rotating, that would send gravitational waves that would jointly cause the earth to be still in that frame of reference.

Remove that external mass, and you've removed the gravity waves. Nothing cancels the gravity wave generated by the motion of the planets.

It is possible that more versions of boundary conditions are acceptable in the absence of distant objects and the question whether the planet is rotating is then less defined.

Yes, I think that agrees with my answer to the question.

[1] See here:

Einstein's theory further had the property that moving matter would generate gravitational waves, propagating curvatures. Einstein suspected that if the whole universe was rotating around you while you stood still, you would feel a centrifugal force from the incoming gravitational waves, corresponding exactly to the centripetal force of spinning your arms while the universe stood still around you. So you could construct the laws of physics in an accelerating or even rotating frame of reference, and end up observing the same laws - again freeing us of the specter of absolute space.

Replies from: prase, prase
comment by prase · 2010-06-14T22:06:35.007Z · LW(p) · GW(p)

Let me write one more reply since I think my first one wasn't entirely clear.

Let's put all this into a thought experiment like this: Universe A contains only a light observer with a round bottle half full of water. Universe B contains all that, and moreover a lot of uniformly isotropically distributed distant massive stars. In both universes the spacetime region around the observer can be described by Minkowski metric. At the beginning, the observer sees that the water is spread near the walls of the bottle with a round vacuum bubble in the middle; this minimises the energy due to surface tension. Now, the observer gives the bottle some spin. Will the observation in universe A be different from that in universe B?

If GR is right, then no, they won't. In both, the observers will see the water concentrated in the regions most distant from a specific straight line, which it is reasonable to call the axis of rotation. To see that, it is enough to realise that the distant stars influence the bottle only by means of the gravitational field, and it remains almost the same in both cases - approximately Minkowskian, assuming that the bottle and the observer aren't of black hole proportions.

Of course one can then change the coordinates to those in which the bottle is static. With respect to these coordinates, the stars in universe B would rotate, and in universe A, well, nothing much can be said. But in both universes, we will find a gravitational field which creates precisely the effects of the rotation of the now static bottle. The stars are there only to distract the attention.

We can almost do the coordinate change in the Newtonian framework: it amounts to use of centrifugal force, which can be thought of as a gravitational force (it is universal in the same way as the gravitational force; of course, this is the equivalence principle). There are only two "minor" problems in Newtonian physics: first, orthodox Newtonianism recognises only gravitational force emanating from massive objects in the way described by Newton's gravitational law, which is why the centrifugal force has to be treated differently, and second, there is the damned velocity dependent Coriolis force.

Edit: some formulations changed

Replies from: SilasBarta
comment by SilasBarta · 2010-06-15T00:08:31.304Z · LW(p) · GW(p)

Okay, I give up. I don't know the math well enough to speak confidently on this issue. I was just taking the Machian principles in the article I linked and extrapolating them to the scenario I envisioned, using some familiarity with frame-dragging effects.

Still, I think it's an interesting exercise in finding the implications of a universe without the background mass, and not as easy to answer as some initially assumed.

Replies from: prase
comment by prase · 2010-06-15T05:51:29.252Z · LW(p) · GW(p)

Yes, it's interesting; I was confused for quite a while. Still, the answer is simpler than what I initially assumed, which makes it a good brain teaser.

comment by prase · 2010-06-14T21:18:02.558Z · LW(p) · GW(p)

if the rest of the universe were treated as rotating, that would send gravitational waves that would jointly cause the earth to be still in that frame of reference

This is not so simple. The force of the gravitational waves depends on the mass of the rest of the universe. One can easily imagine the same observable rest of the universe with a very different mass (just remove all the dark matter or so). Both can't generate the same gravitational waves, but there would be no significant observable effect on Earth. The metric around here would be still more or less Schwarzschild (or Kerr). The fact that steady state can be interpreted as rotation whose effects are cancelled by gravitational waves has not necessarily much to do with the existence of other objects in the universe. Even in empty space, the gravitational waves can come from infinity.

So, while it's true that there is no absolute space with respect to which one measures the acceleration, there are still Foucault pendula. Because there is no absolute space, to define what constitutes rotation using any particular coordinates would be absurd. But we can still quite reasonably define rotation (extend our present definition of rotation) by use of the pendulum, or bucket, or whatever similar device. Even in single-planet universes, there can be buckets with both flat and parabolic surfaces.

comment by Vladimir_M · 2010-06-14T18:35:53.928Z · LW(p) · GW(p)

I have only a superficial understanding of GR, but nevertheless, your question seems a bit unclear and/or confused. A few important points:

  • Whether GR is actually a Machian theory is a moot point, because it turns out that Mach's principle is hard to formulate precisely enough to tackle that question. See e.g. here for an overview of this problem: http://arxiv.org/abs/gr-qc/9607009

  • According to Mach's original idea -- whose relation with GR is still not entirely clear, and which is certainly not necessarily implied by GR -- a necessary assumption for the "normal" behavior of rotational and other non-inertial motions is the large-scale isotropy of the universe, and the fact that enormous distant masses exist in every direction. If the only other mass in the universe is concentrated nearby, you'd see only weak inertial forces, and they would behave differently in different directions.

  • The geometry of spacetime in GR is not uniquely determined by the distribution of matter. You can have various crazy spacetime geometries for any distribution of matter. (As a trivial example, imagine you're living in the usual Minkowski or Schwarzschild metric, and then a powerful gravitational wave passes by.) In this sense, GR is deeply anti-Machian.

  • That said, assuming nothing funny's going on, in the scenario you describe, the classical limit applies, and the planets would move pretty much according to Newton's laws. This means they'd both be orbiting around their common center of mass, so it's not clear to me that the observations you listed would be possible. [ETA: please ignore this last point, my typing was faster than my thinking here. See the replies below.]

Therefore, the only way I can make sense of your example would be to assume that the other planet is much heavier than yours, and that the Schwarzschild metric applies and gives approximately Newtonian results, so we get something similar to the Moon's rotation around the Earth. Is that what you had in mind?

Replies from: prase
comment by prase · 2010-06-14T19:29:08.146Z · LW(p) · GW(p)

it's not clear to me that the observations you listed would be possible. ... the only way I can make sense of your example would be to assume that the other planet is much heavier than yours

I don't understand. The listed observations are in accordance with Newton, whatever the masses of the planets.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-14T20:21:30.976Z · LW(p) · GW(p)

Yes, you're right. It was my failure of imagination. I thought about it again, and yes, even with similar or identical masses, the rotations of individual planets around their own axes could be set so as to provide the described view.

comment by mkehrt · 2010-06-14T16:08:46.244Z · LW(p) · GW(p)

Couldn't you tell whether your planet is revolving or rotating using a Foucault pendulum? I'm not sure whether you can get all the information about the planets' relations with a complex set of Foucault pendula or not, but you could get some.

Also, I think your answer is a map-territory confusion. While GR does not distinguish certain types of motion from each other, and while GR seems to be the best model of macroscopic behavior we have, to claim that this means that there is really no fact of the matter seems a little overconfident.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T17:12:45.652Z · LW(p) · GW(p)

Couldn't you tell whether your planet is revolving or rotating using a Foucault pendulum? I'm not sure whether you can get all the information about the planets' relations with a complex set of Foucault pendula or not, but you could get some.

The Foucault pendulum is able to measure earth's rotation in part because of the frame established by the rest of the universe. But in the scenario I described, the frame dragging effect of one or both planets blows up your ability to use the standard equations. Would the corrections introduced by including frame-dragging show a solution that varies depending on which of the planets is "really" moving?

Also, I think your answer is a map-territory confusion. While GR does not distinguish certain types of motion from each other, and while GR seems to be the best model of macroscopic behavior we have, to claim that this means that there is really no fact of the matter seems a little overconfident.

It's the other way around. The fact that there is no test that would distinguish your location along a dimension means that no such dimension exists, and any model requiring such a distinction is deviating from the territory.

Yes, GR could be wrong, but for it to be wrong in a way such that e.g. you actually can distinguish acceleration from gravity would require more than just a refinement of our models; it would mean the universe up to this point was a lie.

Replies from: Vladimir_M, mkehrt
comment by Vladimir_M · 2010-06-14T19:05:58.667Z · LW(p) · GW(p)

SilasBarta:

Yes, GR could be wrong, but for it to be wrong in a way such that e.g. you actually can distinguish acceleration from gravity would require more than just a refinement of our models; it would mean the universe up to this point was a lie.

This isn't really true. In GR, you can in principle always distinguish acceleration from gravity over finite stretches of spacetime by measuring the tidal forces. There is no distribution of mass that would produce an ideally homogeneous gravitational field free of tidal forces whose effect would perfectly mimic uniform acceleration in flat spacetime. The equivalence principle holds only across infinitesimal regions of spacetime.

See here for a good discussion of what the equivalence principle actually means, and the overview of various controversies it has provoked:
http://www.mathpages.com/home/kmath622/kmath622.htm
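For a rough sense of the magnitudes involved (my own back-of-the-envelope with assumed Earth-like numbers, not taken from the linked page): across a freely falling lab of height h at distance r from a mass M, the difference in gravitational acceleration between top and bottom is roughly 2GMh/r^3, whereas a uniformly accelerating rocket in flat spacetime shows exactly zero.

    # Back-of-the-envelope: tidal acceleration across a small falling lab.
    # All values below are assumptions (Earth-like mass and radius, 10 m lab).
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M = 5.97e24        # mass of the attracting body, kg
    r = 6.371e6        # distance from its center, m
    h = 10.0           # height of the lab, m

    tidal = 2 * G * M * h / r**3   # difference in free-fall acceleration, top vs. bottom
    print(f"Tidal acceleration across the lab: {tidal:.2e} m/s^2")
    # ~3e-5 m/s^2 -- tiny, but detectable with a good gravimeter; a uniformly
    # accelerating rocket in flat spacetime would show exactly zero.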

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T20:51:39.055Z · LW(p) · GW(p)

This isn't really true. In GR, you can in principle always distinguish acceleration from gravity over finite stretches of spacetime by measuring the tidal forces. ...

Yes, I was just listing an offhand example of an implication of GR and I didn't bother to specify it to full precision. My point was just that in order for a certain implication to be falsified (specifically, that there is no fact of the matter as to e.g. what the velocity of the universe is), you would need the laws of the universe to change, not just a refinement in the GR model.

comment by mkehrt · 2010-06-14T18:00:14.207Z · LW(p) · GW(p)

The Foucault pendulum is able to measure earth's rotation in part because of the frame established by the rest of the universe. But in the scenario I described, the frame dragging effect of one or both planets blows up your ability to use the standard equations. Would the corrections introduced by including frame-dragging show a solution that varies depending on which of the planets is "really" moving?

I must admit I'm a little baffled by this. I'm pretty ignorant of GR, but I was strongly under the impression that

(a) the frame dragging effect was minuscule, and

(b) that Foucault's pendulum works simply because there is no force acting on the pendulum to change the plane of its swing. Thus, a perfect polar pendulum on a planet in a universe with no other bodies in it will never have any force exerted on it other than gravity and will continue to swing in the same plane. If the planet is rotating, an observer on the planet will be able to tell this by observing the pendulum, even in the absence of any other body in the universe. Similarly, in the above paradox, an observer can tell whether their planet is revolving around the other planet while remaining oriented towards it, because the pendulum will rotate over the course of a "year".
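For reference, the standard Newtonian result (a sketch with assumed Earth-like numbers, nothing specific to the two-planet scenario): at latitude phi the swing plane appears to precess at Omega*sin(phi), where Omega is the planet's spin rate, so the pendulum alone lets the observer read off Omega.

    import math

    # Foucault precession at latitude phi: the swing plane turns at Omega * sin(phi),
    # where Omega is the planet's rotation rate.  Earth-like assumed values.
    Omega = 2 * math.pi / 86164          # rad/s, one sidereal day
    phi = math.radians(48.85)            # latitude of Paris, where Foucault did it

    precession_rate = Omega * math.sin(phi)             # rad/s
    period_hours = 2 * math.pi / precession_rate / 3600
    print(f"Swing plane completes a full turn in {period_hours:.1f} hours")  # ~31.8 h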

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T18:09:17.454Z · LW(p) · GW(p)

To appreciate how different things are when you remove the rest of the universe, consider this: what if the universe is just one planet with the people on it? How will a Foucault pendulum behave in that universe? Shouldn't it behave quite differently, given that rotation of the planet would mean rotation of the entire universe, which is meaningless?

Replies from: Vladimir_M, prase
comment by Vladimir_M · 2010-06-14T20:29:59.049Z · LW(p) · GW(p)

To appreciate how differently things are when you remove the rest of the universe, consider this: what if the universe is just one planet with the people on it?

As Prase said above, that depends on the boundary conditions. As the clearest example, if you imagine a flat empty Minkowski space and then add a lightweight sphere into it, then special relativity will hold and observers tied to the sphere's surface would be able to tell whether it's rotating by measuring the Coriolis and centrifugal forces. There would be a true anti-Machian absolute space around them, telling them clearly if they're rotating/accelerating or not. This despite the whole scenario being perfectly consistent with GR.

comment by prase · 2010-06-14T20:25:22.297Z · LW(p) · GW(p)

Rotation of the planets doesn't mean rotation of the universe; don't forget there are not only the planets but also the gravitational field.

comment by Wei Dai (Wei_Dai) · 2010-06-14T15:35:02.562Z · LW(p) · GW(p)

If the two planets aren't revolving around each other, wouldn't gravity pull them together? But maybe space is expanding at precisely the rate necessary to keep them at the same distance despite gravity? To test that, build a rocket on your planet and push it (the planet) slightly, either toward the other planet or away from it. If the planets are revolving around each other, you've just changed a circular orbit into an elliptical one, so you should see an oscillation in the distance between the two planets. If they are not revolving around each other, then they'll either keep getting closer together or further apart, depending on which direction you made the push.

(This is all based on my physics intuition. Somebody who knows the math should write down the two equations and check if they're isomorphic. :)
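In purely Newtonian terms the qualitative prediction is easy to check numerically (a toy integration with made-up masses and units, ignoring GR entirely; "mu" is just G times the total mass):

    import math

    # Newtonian sketch of the "push and watch the distance" test: start from a
    # circular two-body orbit (relative coordinate), give it a small radial kick,
    # and track the separation.  All numbers are made up; only the qualitative
    # behavior matters.
    mu = 1.0            # G*(M1+M2), arbitrary units
    r0 = 1.0            # initial separation
    v_circ = math.sqrt(mu / r0)

    x, y = r0, 0.0
    vx, vy = 0.05 * v_circ, v_circ   # small radial kick on top of circular velocity

    dt = 1e-4
    rmin, rmax = r0, r0
    for _ in range(300000):          # roughly five orbital periods
        r = math.hypot(x, y)
        ax, ay = -mu * x / r**3, -mu * y / r**3
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        rmin, rmax = min(rmin, r), max(rmax, r)

    # If the planets really are orbiting, the kick turns the circle into an ellipse
    # and the separation oscillates between rmin and rmax; in the static scenario
    # described above, the distance would instead change monotonically.
    print(f"separation oscillates between {rmin:.3f} and {rmax:.3f}")

(The velocity-then-position update is the semi-implicit Euler scheme, which keeps the toy orbit from artificially spiraling over several periods.)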

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T15:47:46.793Z · LW(p) · GW(p)

If the two planets aren't revolving around each other, wouldn't gravity pull them together?

Gravity would pull, yes, but the rotation of a body also distorts space in such a way as to produce another effect you have to consider.

ETA: Look at a similar scenario. Same as the one I proposed, but you always see the same portion of the other planet. How do you know how fast the two planets are revolving around each other? Isn't this the same as asking how fast the entire universe is rotating?

Replies from: prase, Wei_Dai
comment by prase · 2010-06-14T17:15:38.650Z · LW(p) · GW(p)

Exactly as fast as needed to keep them in a circular orbit (assuming you don't observe any change in the distance to the second planet). For this, you can quite safely use Newton's laws.

In general-relativistic language, what exactly do you mean by "how fast the entire universe is rotating"?

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T17:35:57.189Z · LW(p) · GW(p)

In general-relativistic language, what exactly do you mean by "how fast the entire universe is rotating"?

I mean nothing. In GR, the very question is nonsense. The universe does not have a position, just relative positions of objects.

The universe does not have a velocity, just relative velocities of various objects.
The universe does not have an acceleration, just relative accelerations of various objects.
The universe does not have a rotational orientation, just relative rotational orientations of various objects.
The universe does not have a rotational velocity, just relative rotational velocities of various objects.

There is no way in this universe to distinguish between a bucket rotating vs. the rest of the universe rotating around the bucket. There is also no such thing as how fast the universe "as a whole" is rotating.

Replies from: Vladimir_M, wnewman
comment by Vladimir_M · 2010-06-14T18:53:54.509Z · LW(p) · GW(p)

I'm not sure if what you write makes sense. Take one simple example: a flat Minkowski spacetime, empty except for a few light particles (so that their influence on the metric is negligible). This means that special relativity applies, and it's clearly consistent with GR.

Accelerated motions are not going to be relative in this universe, just like they aren't in Newton's theory. You can of course observe an accelerating particle and insist on using coordinates in which it remains in the origin (which is sometimes useful, as in e.g. the Rindler coordinates), but in this coordinate system, the universe will not have the above listed properties in any meaningful sense.

comment by wnewman · 2010-06-16T15:52:14.578Z · LW(p) · GW(p)

You write "In GR, the very question is nonsense. [0] The universe does not have a position, just relative positions of objects. [1] The universe does not have a velocity, just relative velocities of various objects. [2] The universe does not have an acceleration, just relative accelerations of various objects." This passage incorrectly appeals to GR to lump together three statements that GR doesn't lump together.

See http://en.wikipedia.org/wiki/Inertial_frames_of_reference and note the distinction there between "constant, uniform motion" and various aspects of acceleratedness. Your [0] and [1] describe changes within an inertial frame of reference, while [2] gets you to a non-inertial frame. Not coincidentally, your [0] and [1] are predicted by GR and are consistent with centuries of careful experiment, while [2] is not predicted by GR and is inconsistent with everyday observation with Mark I eyeballs. (With modern vehicles it's common to experience enough acceleration in the vicinity of some low-friction system to notice that acceleration causes conservation of momentum to break down in ways that a constant displacement and/or uniform motion doesn't.)

Replies from: SilasBarta
comment by SilasBarta · 2010-06-16T16:23:56.497Z · LW(p) · GW(p)

[2] is not predicted by GR and is inconsistent with everyday observation with Mark I eyeballs.

I ask, in return, that you read this. Eliezer Yudkowsky had argued that GR implies it's impossible to measure the acceleration of the universe, and no one had objected. Now, EY is not the pope of rationality, but I suggest things aren't as simple as you're making them.

(With modern vehicles it's common to experience enough acceleration in the vicinity of some low-friction system to notice that acceleration causes conservation of momentum to break down in ways that a constant displacement and/or uniform motion doesn't.)

Your point just seems to be a version of the bucket argument: "acceleration must be real, because it has real, detectable, frame-independent consequences like breakage and pain and fictitious forces". I think I posed the same challenge in an open thread a month or two ago. And as the link you gave says,

In Newton's time the fixed stars were invoked as a reference frame, supposedly at rest relative to absolute space. In reference frames that were either at rest with respect to the fixed stars or in uniform translation relative to these stars, Newton's laws of motion were supposed to hold. In contrast, in frames accelerating with respect to the fixed stars, an important case being frames rotating relative to the fixed stars, the laws of motion did not hold in their simplest form, but had to be supplemented by the addition of fictitious forces, for example, the Coriolis force and the centrifugal force. Two interesting experiments were devised by Newton to demonstrate how these forces could be discovered, thereby revealing to an observer that they were not in an inertial frame: the example of the tension in the cord linking two spheres rotating about their center of gravity, and the example of the curvature of the surface of water in a rotating bucket.

But under Mach's principle (the version that says only relative motion is meaningful, and which GR agrees with), these consequences of acceleration you describe only exist because of the frame against which to describe the acceleration, which is formed by the (relatively!) non-accelerating rest of the universe. Therefore, if all of the universe were to accelerate uniformly, there would be no relative motion and therefore no experimental consequences, and we should regard the very idea as nonsense.

So if the universe were only you and your vehicle, you would not be able to notice joint accelerations of you and the vehicle, only acceleration of yourself relative to the vehicle.

Now, you can disagree with this application of Mach's principle, but the observations you describe do not contradict it.

I should also add one of the great insights I got out of Barbour's book The End of Time (from which EY got his love of Mach's principle and timelessness). The insight is that the laws of physics do not change in a rotating reference frame. Rather, there is a way you can determine if any given object is not in uniform motion relative to the rest of the universe, and this method also allows you to define an "inertial clock" which gives you an appropriate measure of time.

Most importantly, if you are spinning around, and there's some other object accelerating relative to the rest of the universe, this method allows you to detect its acceleration, no matter how much or in what way your own frame is moving!

Replies from: wnewman
comment by wnewman · 2010-06-16T21:07:09.523Z · LW(p) · GW(p)

Perhaps the root of our disagreement is that you think (?) that the GR field equations constrain their solutions to conform to Mach's principle, while I think they admit many solutions which don't conform to Mach's principle, and furthermore that Vladimir_M is probably correct in his sketch of a family of non-Mach-principle solutions.

EY's article seems pretty clear about claiming not that Mach's principle follows deductively from the equations of GR, but that there's a sufficiently natural fit that we might make an inductive leap from observed regularity in simple cases to an expected identical regularity in all cases. In particular EY writes "I do not think this has been verified exactly, in terms of how much matter is out there, what kind of gravitational wave it would generate by rotating around us, et cetera. Einstein did verify that a shell of matter, spinning around a central point, ought to generate a gravitational equivalent of the Coriolis force that would e.g. cause a pendulum to precess." I think EY is probably correct that this hasn't been verified exactly --- more on that below. I also note that from the numbers given in Gravitation, if you hope to fake up a reasonably fast rotating frame by surrounding the experimenter with a rotating shell too distant to notice, you may need a very steep quantity discount at your nonlocal Black-Holes-R-Us (Free Installation At Any Velocity), and more generally that apparently solutions which locally hide GR's preferred rotational frame seem to be associated with very extreme boundary conditions.

You write "under Mach's principle (the version that says only relative motion is meaningful, and which GR agrees with), these consequences of acceleration you describe only exist because of the frame against which to describe the acceleration, which is formed by the (relatively!) non-accelerating the rest of the universe." I think it would be more precise to say not "which GR agrees with" but "which some solutions to the GR field equations agree with." Similarly, if I were pushing a Newman principle which requires that the number of particles in the universe be divisible by 2, I would not say "which GR agrees with" if there were any chance that this might be interpreted as a claim that "the equations of GR require an even number of particles." Solutions to the GR field equations can be consistent with Mach's principle, but I'm pretty sure that they don't need to be consistent with it. The old Misner et al. Gravitation text remarks on how a point of agreement with Mach's principle "is a characteristic feature of the Friedman model and other simple models of a closed universe." So it seems pretty clear that as of 1971, there was no known requirement that every possible solution must be consistent with Mach's principle. And (Bayes FTW!) if no such requirement was known in 1971, but such a requirement was rigorously proved later, then it's very strange that no one has brought up in this discussion the name of the mathematical physicist(s) who is justly famous for the proof.

(I'm unlikely to look at The End of Time 'til the next time I'm at UTDallas library, i.e., a week or so.)

Replies from: wnewman, SilasBarta
comment by SilasBarta · 2010-06-17T03:12:37.400Z · LW(p) · GW(p)

Refer to the Rovelli paper mentioned in this discussion:

MP7: There is no absolute motion, only motion relative to something else, therefore the water in the bucket does not rotate in absolute terms, it rotates with respect to some dynamical physical entity. True. This is the basic physical idea of GR.

This is a much stronger claim than the one you pretended I was making, that GR agrees with my selected Mach's principle -- rather, the pure relativity of motion is the basic idea of GR, not something simply shared between Mach's principle and GR (like with your modulo 2 example).

And (Bayes FTW!) if no such requirement was known in 1971, but such a requirement was rigorously proved later, then it's very strange that no one has brought up in this discussion the name of the mathematical physicist(s) who is justly famous for the proof.

I did -- Barbour.

comment by Wei Dai (Wei_Dai) · 2010-06-14T16:35:02.790Z · LW(p) · GW(p)

Here's another possible experiment. Send a robot to the other planet, cut that planet in half, and then build a beam to push the two halves apart. If that planet is rotating, then due to conservation of angular momentum, this should cause its rotation to slow down, and you'd see that. If the two planets are just revolving around each other, then you won't observe such a slowdown in the apparent rotation of the other planet.

ETA: I'm pretty curious what the math actually says. Do we have any GR experts here?
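In the meantime, here is the purely Newtonian version of the expected slowdown (a crude sketch with made-up, Earth-like numbers; it ignores the spin of each half about its own center and any GR corrections):

    # Crude Newtonian estimate: push the two halves of a spinning planet apart
    # and see how much the spin slows, using conservation of angular momentum
    # I*omega.  All inputs are assumptions.
    M = 6.0e24        # planet mass, kg
    R = 6.4e6         # planet radius, m
    omega0 = 7.3e-5   # initial spin, rad/s

    I0 = 0.4 * M * R**2            # uniform solid sphere
    d = 4 * R                      # final center-to-center separation of the halves
    I1 = 2 * (M / 2) * (d / 2)**2  # two point masses circling the common axis

    omega1 = omega0 * I0 / I1
    print(f"spin slows from {omega0:.2e} to {omega1:.2e} rad/s")
    # If the apparent rotation were really your own revolution around a fixed
    # planet, no such slowdown would be observed.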

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-15T18:32:29.476Z · LW(p) · GW(p)

Also, if you've asked the right question, would the stresses that would push the halves apart also show up as geological stresses?

comment by Roko · 2010-06-14T13:36:05.164Z · LW(p) · GW(p)

What experiment could you do to determine whether the other planet is spinning, or your planet is revolving around it while facing it?

check whether you are experiencing a centrifugal force.

Regarding your answer, standard physics seems to indicate that you can tell the difference, unless the laws of physics change to violate Newton's laws when there are fewer than 3 bodies. Mach proposed this (I think) but people seem to doubt him.

Replies from: SilasBarta, SilasBarta
comment by SilasBarta · 2010-06-14T13:43:49.213Z · LW(p) · GW(p)

The universe adheres to General Relativity, not Newton's laws. What does GR say about the effect of spinning and revolving bodies?

Replies from: wnewman
comment by wnewman · 2010-06-14T15:02:59.533Z · LW(p) · GW(p)

Relativity says that as motion becomes very much slower than the speed of light, behavior becomes very similar to what Newton's laws predict. Everyday materials (and planetary systems) and energies give rise to motions very, very much slower than the speed of light, so it tends to be very difficult to tell the difference. For a mechanical experimental design that can be accurately described in a nontechnical blog post and that you could reasonably imagine building for yourself (e.g., a Foucault-style pendulum), the relativistic predictions are very likely to be indistinguishable from Newton's predictions.

(This is very much like the "Bohr correspondence principle" in QM, but AFAIK this relativistic correspondence principle doesn't have a special name. It's just obvious from Einstein's equations, and those equations have been known for as long as ordinary scientists have been thinking about (speed-of-light, as opposed to Galilean) relativity.)

Examples of "see, relativity isn't purely academic" tend to involve motion near the speed of light (e.g., in particle accelerators, cosmic rays, or inner-sphere electrons in heavy atoms), superextreme conditions plus sensitive instruments (e.g., timing neutron stars or black holes in close orbit around each other), or extreme conditions plus supersensitive instruments (e.g., timing GPS satellites, or measuring subtle splittings in atomic spectroscopy).

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T15:15:18.921Z · LW(p) · GW(p)

And the example I posited is a superextreme condition: the two bodies in question make up the entire universe, which amplifies the effects that are normally only observable with sensitive instruments. See frame-dragging.

Replies from: prase
comment by prase · 2010-06-14T17:09:57.821Z · LW(p) · GW(p)

Amplifies? The Schwarzschild spacetime (which behaves like a Newtonian gravitational field in the large-distance limit) needs only one point-like massive object. What do you expect as a non-negligible difference made by (non-)existence of distant objects?

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T17:18:51.823Z · LW(p) · GW(p)

What do you expect as a non-negligible difference made by (non-)existence of distant objects?

The fact that there's no longer a frame against which to measure local rotation in any sense other than its rotation relative to the frame of the other body. So it makes a big difference what counts as "the rest of the universe".

Replies from: prase, wnewman
comment by prase · 2010-06-14T19:16:41.061Z · LW(p) · GW(p)

People believed for quite a long period of time that the distant stars don't provide a stable reference frame. That it is the Earth which rotates was shown by the Foucault pendulum or similar experiments, without referring to the outer stellar frame.

comment by wnewman · 2010-06-15T13:37:36.858Z · LW(p) · GW(p)

(two points, one about your invocation of frame-dragging upstream, one elaborating on prase's question...)

point 1: I've never studied the kinds of tensor math that I'd need to use the usual relativistic equations; I only know the special relativistic equations and the symmetry considerations which constrain the general relativistic equations. But it seems to me that special relativity plus symmetry suffice to justify my claim that any reasonable mechanical apparatus you can build for reasonable-sized planets in your example will behave in a way practically indistinguishable from Newtonian predictions.

It also seems to me that your cited reference to wikipedia "frame-dragging" supports my claim. E.g., I quote: "Lense and Thirring predicted that the rotation of an object would alter space and time, dragging a nearby object out of position compared with the predictions of Newtonian physics. The predicted effect is small --- about one part in a few trillion. To detect it, it is necessary to examine a very massive object, or build an instrument that is very sensitive."

You seem to be invoking the authority of standard GR to justify an informal paraphrase of a version of Mach's principle (which has its own wikipedia article). I don't know GR well enough to be absolutely sure, but I'm about 90% sure that by doing so you misrepresent GR as badly as one misrepresents thermodynamics by invoking its authority to justify the informal entropy/order/whatever paraphrases in Rifkin's Entropy or in various creationists' arguments of the form "evolution is impossible because the second law of thermo prevents order from increasing spontaneously."

point 2: I'll elaborate on prase's "What do you expect as a non-negligible difference made by (non-)existence of distant objects?" IIRC there was an old (monastic?) thought-experiment critique of the Aristotelian "heavy bodies fall faster": what happens when you attach an exceedingly thin thread between two cannonballs before dropping them? Similarly, what happens to the rotational physics of two bodies alone in the universe when you add a single neutrino very far away? Does the tiny perturbation cause the two cannonballs discontinuously to have doubly-heavy-object falling dynamics, or the rotation of the system to discontinuously become detectable?
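Returning to the magnitude claim in point 1, here is a rough order-of-magnitude sketch of frame dragging near an ordinary planet, using the standard weak-field Lense-Thirring rate, which near the spin axis is of order 2GJ/(c^2 r^3) with J the planet's spin angular momentum. All numbers are assumed Earth-like values.

    # Order-of-magnitude of frame dragging near an Earth-like planet.
    G = 6.674e-11
    c = 3.0e8
    M = 5.97e24          # kg
    R = 6.371e6          # m
    omega = 7.29e-5      # spin rate, rad/s
    I = 0.33 * M * R**2  # assumed moment of inertia
    J = I * omega        # spin angular momentum

    lt_rate = 2 * G * J / (c**2 * R**3)   # weak-field Lense-Thirring rate near the axis
    print(f"frame-dragging rate ~ {lt_rate:.1e} rad/s")
    print(f"planet's own spin     {omega:.1e} rad/s")
    # The dragging is about nine orders of magnitude slower than the spin that
    # causes it, which is why detecting it (e.g. Gravity Probe B) takes heroic effort.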

comment by SilasBarta · 2010-06-14T13:38:05.275Z · LW(p) · GW(p)

How would you measure the centrifugal force?

ETA: I'm not asking because I don't know the standard ways to measure centrifugal force; I'm asking because the standard measurement methods don't work when the universe is just two planets.

Replies from: prase
comment by prase · 2010-06-14T19:31:21.388Z · LW(p) · GW(p)

Calculate the gravitational force on the surface of a planet of the same size and mass as yours and compare with what you actually measure.
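Concretely (standard Newtonian formulas with assumed Earth-like numbers; this presumes you already know G and the planet's mass, which is exactly the point contested in the replies below):

    import math

    # Compare the gravity expected from the planet's mass alone with the effective
    # gravity actually measured.  On a rotating planet the centrifugal term reduces
    # the reading everywhere except at the poles.  Earth-like assumptions.
    G = 6.674e-11
    M = 5.97e24
    R = 6.371e6
    omega = 7.29e-5
    lat = math.radians(0.0)   # measure at the equator, where the effect is largest

    g_expected = G * M / R**2
    g_measured = g_expected - omega**2 * R * math.cos(lat)**2   # radial component only

    print(f"expected {g_expected:.4f} m/s^2, measured {g_measured:.4f} m/s^2")
    # The ~0.034 m/s^2 shortfall at the equator is the centrifugal signature of
    # rotation; on a non-rotating planet the two numbers would agree.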

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T20:41:39.255Z · LW(p) · GW(p)

What do you calibrate your equipment against?

Replies from: prase
comment by prase · 2010-06-14T20:55:46.871Z · LW(p) · GW(p)

The equipment is already calibrated. You have said that everything works in the same way as today, except the universe consists of two planets. I have interpreted this to mean that the observer already knows the value of the gravitational constant in units he can use. If the gravitational constant has to be independently measured first, then it is more complicated, of course.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-14T21:14:10.916Z · LW(p) · GW(p)

The equipment is already calibrated. You have said that everything works in the same way as today, except the universe consists of two planets.

Right: you know the laws of physics. You don't know your mass, though, and you don't know any object that has a known mass. I posit this because, in the history of science, people made certain measurements that aren't possible in a two-planet universe, and to assume you can calibrate to those measurements would assume away the problem.

Replies from: prase
comment by prase · 2010-06-14T22:27:20.670Z · LW(p) · GW(p)

But still, in the rotating scenario the apparent gravity (gravitational plus centrifugal force) wouldn't be perpendicular to the planet's surface, and this can be established without knowing the gravitational constant. If the planet is spherical and you already know what is perpendicular, of course.
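For scale (Earth-like numbers assumed; the tilt formula is the standard Newtonian approximation and needs neither G nor the planet's mass):

    import math

    # On a spherical rotating planet, the plumb line at latitude phi tilts away
    # from the true radial direction by roughly (omega^2 * R / g) * sin(phi)*cos(phi).
    omega = 7.29e-5      # assumed rotation rate, rad/s
    R = 6.371e6          # planet radius, m
    g = 9.81             # measured surface gravity, m/s^2
    phi = math.radians(45.0)

    tilt = (omega**2 * R / g) * math.sin(phi) * math.cos(phi)
    print(f"plumb-line deflection at 45 deg latitude: {math.degrees(tilt):.3f} degrees")  # ~0.1 deg
    # On a spherical, non-rotating planet the deflection would be zero everywhere.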

comment by PeterS · 2010-06-14T18:26:53.139Z · LW(p) · GW(p)

If you're revolving about the other planet, the direction of tidal forces on your planet should rotate as well. If both planets are fixed, the gradient on your planet should be constant.

edit: Never mind; I see now that you specified that the orbit is synchronous.

comment by Kevin · 2010-06-14T07:56:37.277Z · LW(p) · GW(p)

Kids experiment with 'video playdates'

http://www.cnn.com/2010/TECH/innovation/06/11/video.playdate/index.html?hpt=Sbin

Replies from: cupholder
comment by cupholder · 2010-06-14T08:24:47.138Z · LW(p) · GW(p)

Looking forward to the inevitable 'Could video playdates be making your child vulnerable to cyberpredators?' follow-up.

Replies from: Kevin
comment by Kevin · 2010-06-14T08:32:10.484Z · LW(p) · GW(p)

Chatrouletteforkids.com

comment by Lonnen · 2010-06-17T14:37:59.142Z · LW(p) · GW(p)

Lately I've been wondering if a rational agent can be expected to use the dark arts when dealing with irrational agents. For example: if a rational AI (not necessarily FAI) had to convince a human to cooperate with it, would it use rhetoric to leverage the human biases against it? Would an FAI?