Open Thread: April 2010, Part 2
post by Unnamed · 2010-04-08T03:09:18.648Z · LW · GW · Legacy · 202 comments
The previous open thread has already exceeded 300 comments – new Open Thread posts should be made here.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Comments sorted by top scores.
comment by Paul Crowley (ciphergoth) · 2010-04-14T16:39:31.654Z · LW(p) · GW(p)
It’s 1971: Archie Cochrane is partway through a randomised trial comparing Coronary Care Units against home care, and the time has come to share some results with the cardiologists.
I am not asking you to appreciate the results: this was a long time ago, and the findings will not be generalisable to modern CCUs.
I am inviting you to appreciate the mischief.
The results at that stage showed a slight numerical advantage for those who had been treated at home. I rather wickedly compiled two reports: one reversing the number of deaths on the two sides of the trial. As we were going into the committee, in the anteroom, I showed some cardiologists the results. They were vociferous in their abuse: “Archie,” they said, “we always thought you were unethical. You must stop this trial at once.”
I let them have their say for some time, then apologized and gave them the true results, challenging them to say, as vehemently, that coronary care units should be stopped immediately. There was dead silence and I felt rather sick because they were, after all, my medical colleagues.
comment by Kaj_Sotala · 2010-04-08T21:53:06.466Z · LW(p) · GW(p)
I'm taking part in the SIAI Visiting Fellows program, and have been keeping a diary of the trip. If anyone's interested in the details of what people actually do in the program, the two most recent entries contain some stuff.
Replies from: Matt_Simpson, andreas
↑ comment by Matt_Simpson · 2010-04-09T04:43:45.798Z · LW(p) · GW(p)
Very interesting. This makes me want to do the program even more. I look forward to times when I can just pursue whatever interests me (intellectually) at the moment instead of focusing on coursework.
comment by Strange7 · 2010-04-23T17:55:37.831Z · LW(p) · GW(p)
http://vigilantcitizen.com/?p=3563
When will anti-transhumanism become a serious political issue?
Replies from: Kevin, arundelo, RobinZ
↑ comment by RobinZ · 2010-04-23T18:13:30.388Z · LW(p) · GW(p)
When the population of transhumanists becomes a prominent demographic.
Replies from: Jack
↑ comment by Jack · 2010-04-23T18:32:15.532Z · LW(p) · GW(p)
Nah, transhumanists are weird enough that it will happen before then.
Replies from: RobinZ
↑ comment by RobinZ · 2010-04-23T19:29:14.247Z · LW(p) · GW(p)
We might have different definitions of "serious political issue" and "prominent demographic" - I'm talking about the level at which candidates for political office make demonizing you a part of their campaign.
Replies from: Jack
↑ comment by Jack · 2010-04-23T19:41:12.124Z · LW(p) · GW(p)
My prediction is that the demonization will begin long before transhumanists have the popularity, clout or resources to alter the established order in a significant way. Atheists are probably the best parallel.
Edit: How would you define prominent?
Replies from: RobinZ
↑ comment by RobinZ · 2010-04-23T19:52:58.660Z · LW(p) · GW(p)
Atheists became prominent (again) in the United States around the time that The End of Faith came out and became popular. I think the Stonewall riots brought homosexuals into public prominence in the United States, but I have a poor grasp of history.
comment by kpreid · 2010-04-22T11:00:12.054Z · LW(p) · GW(p)
Non Sequitur presents The Bottom Line literally.
ETA: Reposted to the Bottom Line thread, for better future findability.
comment by Stuart_Armstrong · 2010-04-11T07:50:29.645Z · LW(p) · GW(p)
Just an idea: what about putting a "number of votes" next to the "vote total" score for posts and comments? That would distinguish cases where a subject was highly controversial from those where no one really cares.
Replies from: RobinZ
↑ comment by RobinZ · 2010-04-11T14:19:55.576Z · LW(p) · GW(p)
That would be nice - better than raw #+/#-, actually, because it immediately gives you the score. Are there any programmers listening?
Replies from: Morendil
↑ comment by Morendil · 2010-04-11T17:03:24.465Z · LW(p) · GW(p)
Yep. I'm wondering how this should be formatted - something like "0 (2)" maybe?
The implementation looks relatively straightforward from what I've already seen of the code. But while working on other changes, namely an integrated Anti-Kibitz script that works under IE, I have discovered that it's non-trivial to write unit tests for things like how a single comment is rendered. The design of the Reddit codebase has some rough spots, like the use of globals for HTML rendering.
It's the sort of thing that could be done without tests but that I'd hate to do without tests because that would be adding to a technical debt which has already grown into the danger zone. That takes it from straightforward to moderately hard.
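A minimal sketch of the display logic under discussion (hypothetical helper name, ignoring the real codebase's templating and globals), just to pin down the "score (votes)" format:

    def format_score(upvotes, downvotes):
        # Net score followed by the total number of votes cast,
        # e.g. "0 (2)" for one upvote plus one downvote.
        return "%d (%d)" % (upvotes - downvotes, upvotes + downvotes)

    print(format_score(1, 1))  # "0 (2)" - controversial
    print(format_score(0, 0))  # "0 (0)" - nobody cares

The two example calls show the point of displaying both numbers: the net score is identical, but the vote count separates the controversial case from the ignored one.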
Replies from: AlanCrowe, RobinZ
↑ comment by RobinZ · 2010-04-11T17:30:01.576Z · LW(p) · GW(p)
I would say "Score: 2/6" (or whatever the numbers come to).
I wish I could help with the rest of it.
Replies from: wedrifid
comment by Mitchell_Porter · 2010-04-10T08:55:39.796Z · LW(p) · GW(p)
String theory derives entropy for astrophysical black holes. Some references here.
For physics, I think this news is of fundamental significance. This is a huge step towards describing the real world in terms of string theory. The backstory is that almost 40 years ago, Bekenstein and Hawking came up with a formula for black hole entropy, but it was based on macroscopic behavior (like the Hawking temperature) and not on a counting of microscopic states. In the mid-90s you had the first microscopic derivation of black hole entropy in string theory, but it was for supersymmetric black holes in five dimensions. Last year that research reached the point of describing an ordinary black hole spinning at the maximum possible rate, and now it's made contact with the rest of the real-world black holes.
The other important fact is that this is all done using the "AdS/CFT correspondence", which is the "holographic principle" applied to string theory. The holographic principle is the proposition that quantum gravity should be equivalent to a quantum theory without gravity and with one less spatial dimension. The inspiration is again the Bekenstein-Hawking entropy of a black hole, which depends on the surface area. This places an upper bound on the entropy (as a function of energy and volume), and therefore on the number of states, in any field theory containing gravity; if the theory contains more states, it can't describe black holes. In string theory, the first implementation of the holographic principle was achieved in an "AdS" (anti de Sitter) space, which is a type of spacetime with a boundary of one less dimension. String theory on AdS spaces appears to be equivalent to a certain field theory on their boundaries.
AdS is not the geometry of the real world - that appears to be de Sitter space. However, the geometry near a real-world rotating black hole is a product of an AdS geometry and a circle, and that permits a form of AdS/CFT to be used. The other feature of the real world remaining to be described is, well, ordinary matter outside black holes, and for that "AdS/QCD" should be relevant, though I don't yet see how to combine the two applications of the correspondence at once.
My excuse for posting this is that I see here and there the assumption that string theory is a dead end irrelevant to reality. This is wrong; the other unification frameworks you hear mentioned are just nothing by comparison. But because a proper understanding of string theory has been years in the making, in recent years the media celebration of string theory has turned into media skepticism. Don't be fooled; string theory is where the real progress is taking place.
Replies from: RobinZ
↑ comment by RobinZ · 2010-04-10T14:27:43.715Z · LW(p) · GW(p)
That is excellent news!
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2010-05-12T06:08:36.536Z · LW(p) · GW(p)
Actually, it's even better than I realized when I posted that comment. I hadn't yet grasped the "F-theory" model building program in string theory, which is about two years old. I've been studying that lately, and it was mentally apocalyptic to realize how all the details of the standard model could be expressed as a configuration of branes in hyperspace. The morning after, life went on, and I still have heaps to learn, but there's no turning back after an experience like that.
comment by khafra · 2010-04-13T18:13:54.693Z · LW(p) · GW(p)
A father talks cryonics with his two daughters.
comment by mattnewport · 2010-04-12T03:03:26.531Z · LW(p) · GW(p)
The always interesting Eric Falkenstein on Risk Taking.
Risk taking, I argue, is uncompensated on average. There is no simple form of risk taking such that, if you can tie yourself to some intellectual mast and bear this psychic pain you should expect a higher return. There is a mistaken syllogism at the bottom of portfolio theory, as just because you have to take risk to get rich, or if you take risk you might get rich, this does not mean if you take risk you will become richer on average.
comment by Emile · 2010-04-08T16:21:25.103Z · LW(p) · GW(p)
Any tips on efficiently gathering information on controversial, non-technical subjects, such as "how to raise your kids" or "pros and cons of spanking your kids"? (those are relatively good examples because a lot of people have a strong opinion on them)
I usually look on Wikipedia first, but while it's good at giving a basic overview of a question, it's quite bad at presenting evidence in a properly organized way (I learnt first hand that improving a controversial article is hard).
Research papers are more rigorous and more likely to contain actual useful and surprising information, but finding the right ones is quite a bit of work, and papers in non-technical fields don't have a huge
Then there are all kinds of opinion columns and blogs and books - but they tend to be of varying quality, and I don't know what's the best way to find those that honestly summarize the available evidence (as opposed to taking a somewhat extreme position to make the writing more interesting, or trying to present the ideas as new, etc.).
Any useful tips and heuristics?
Replies from: wedrifid, jimmy
↑ comment by wedrifid · 2010-04-14T02:09:44.929Z · LW(p) · GW(p)
It is a difficult question to answer. I can point to various studies but I must keep in mind that those that made me aware of such studies are not necessarily unbiased.
Any useful tips and heuristics?
That depends. You need to know just what you want your kids to be like. And that has to be what you really want your kids to be like, not what it sounds good to say you want your kids to be like. For example, spanking kids will make them more likely to be physically aggressive. But this may well benefit them in the long run, teaching them tactics for maintaining higher status and so improving their health and happiness.
There is a clear negative correlation between childhood spanking and IQ. But given that your child's genetic heritage is already determined, your own choice of behaviour quite possibly has no causal influence on the IQ outcome. Low IQ parents are more likely to be physically (rather than verbally or socially) aggressive and also more likely to pass on genes for low IQ so causal influence from the spanking is doubtful.
↑ comment by jimmy · 2010-04-08T17:04:05.800Z · LW(p) · GW(p)
I'm also interested in hearing other people's tricks, but I'll share mine.
The first thing I'd do is to check LW to see if I got lucky (google "spanking kids site:lesswrong.com" for example).
I don't really have any good tricks for finding good sources, but you might want to try adding some related technical words in your search to filter your results towards smarter people.
Once I find a source, the main thing I look for is "does this person understand the opposing arguments?". If they say something that suggests that they don't understand the idea that different forms of dis-utility might be interchangeable, or if they ever take the "it's bad because it's wrong!" stance, then I'll move on.
comment by ata · 2010-04-11T07:12:27.143Z · LW(p) · GW(p)
I noticed an apparently self-defeating aspect of the Boltzmann brain scenario.
Let's say I do find the Boltzmann brain scenario to be likely (specifically, that I find it likely that I myself am a Boltzmann brain), based on my knowledge of the laws of physics. Then my knowledge of the laws of physics is based on the perceptions and memories that I, as a Boltzmann brain, am arbitrarily hallucinating... in which case there is no reason for me to believe that the real universe (that is, whichever one houses the actual physical substrate of my mind) runs on those laws... the very laws that would provide the only evidence that I am indeed a Boltzmann brain.
So supposing that you are a Boltzmann brain is evidence against the possibility of your being a Boltzmann brain (or at least evidence against all of your evidence for it).
I'm still trying to wrap my own brain (Boltzmann or not) around anthropic reasoning, so I'm not sure if I accept the Boltzmann brain argument in the first place (I don't think I do), but this may serve as a specific argument against it.
Has this been discussed before?
Replies from: Jack
↑ comment by Jack · 2010-04-11T07:59:35.592Z · LW(p) · GW(p)
Let me see if I can formalize. This might not be quite what you had in mind, but I think it will be similar:
For clarity we can reduce the possible worlds to two: either there are many, many more Boltzmann brains than human brains (H1) or there are few if any Boltzmann brains (H2).
In H2 approximately everyone who learns of the Boltzmann brain hypothesis (and the evidence in favor) is not a Boltzmann brain.
In H1 very, very few Boltzmann brains will learn of the Boltzmann brain hypothesis (and the evidence in favor). A significantly larger percentage of the non-Boltzmann brains capable of conceiving the hypothesis will learn of it (and the evidence in favor).
So independent evidence of H1 means (1) H1 is more likely to whatever degree that evidence dictates, (2) if H1 you are more likely than most brains to be non-Boltzmann, (3) by the self-indication assumption H2 is more likely because in that world most or all brains are non-Boltzmann.
The inference from (2) to (3) seems problematic to me. I'm not sure.
Questions:
- How the hell do we evaluate the evidence, since any evidence of H1 is also evidence of H2 (if we like the SIA)?
- What the hell is the proper reference class?
- If new evidence came in against H1, would we have to say we were more likely to be Boltzmann brains?
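To make the reference-class sensitivity concrete, here is a toy calculation with entirely invented observer counts (a sketch only; the world labels and all numbers below are assumptions for illustration). Weighting worlds by the fraction of their brains that learn of the hypothesis strongly favors H2, while weighting by the absolute number of such brains (SIA-flavored) leaves the hypotheses roughly even:

    # Invented counts of brains in each candidate world.
    worlds = {
        "H1": {"bb": 1e12, "ordinary": 1e6, "bb_know": 1e3, "ord_know": 1e5},
        "H2": {"bb": 0.0, "ordinary": 1e6, "bb_know": 0.0, "ord_know": 1e5},
    }
    prior = {"H1": 0.5, "H2": 0.5}

    def posterior(weight):
        # Update on the datum "I have learned of the Boltzmann brain
        # hypothesis", using weight(w) as the likelihood-proportional term.
        unnorm = {h: prior[h] * weight(w) for h, w in worlds.items()}
        total = sum(unnorm.values())
        return {h: p / total for h, p in unnorm.items()}

    # Weight by the fraction of a world's brains that learn of the hypothesis...
    by_fraction = posterior(
        lambda w: (w["bb_know"] + w["ord_know"]) / (w["bb"] + w["ordinary"]))
    # ...or by the absolute number of such brains (SIA-flavored).
    by_count = posterior(lambda w: w["bb_know"] + w["ord_know"])

    print(by_fraction)  # H2 overwhelmingly favored
    print(by_count)     # roughly 50/50

The two weighting rules answer the first question differently: fraction-weighting makes learning of the hypothesis strong evidence for H2, while count-weighting nearly cancels the update, which is exactly the tension between (2) and (3) above.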
comment by Kevin · 2010-04-27T21:27:58.090Z · LW(p) · GW(p)
Don't Choke ("performing below skill level due to performance related anxieties"): http://scienceblogs.com/cortex/2010/04/dont_choke.php
comment by PhilGoetz · 2010-04-30T18:31:34.598Z · LW(p) · GW(p)
Today I heard a radio interviewer talking with a politician about House seats that could go to Republicans. It went like this:
Politician: "I think there may be 100 contested seats."
Reporter: "So you think 100 seats could go to the Republicans?"
Followed by confusion due to the fact that neither of them could work out how to use English to distinguish "there are 100 Democratic seats, each of which could be won by Republicans in the next election" from "the Republicans could gain 100 seats in the next election".
comment by SoullessAutomaton · 2010-04-28T03:44:08.115Z · LW(p) · GW(p)
A minor note of amusement: Some of you may be familiar with John Baez, a relentlessly informative mathematical physicist. He produces, on a less-than-weekly basis, a column on sundry topics of interest called This Week's Finds. The most recent of these mentions topics such as using icosahedra to solve quintic equations, an isomorphism between processes in chemistry, electronics, thermodynamics, and other domains described in terms of category theory, and some speculation about applications of category-theoretical constructs to physics.
Which is all well and good and worth reading, but largely off-topic. Rather, I'm mentioning this on LW because of the link and quotation Baez put at the end of the column, as it seemed like something people here would appreciate.
Go ahead and take a look, even if you don't follow the rest of the column!
comment by Nevin · 2010-04-18T06:06:39.054Z · LW(p) · GW(p)
The map image in the masthead confused me when I found LW, and might reduce the probability that casual Web-browsing would-be-rationalists would take the time to understand what LW actually is before moving on.
I'm new to the community; this post may not be structured like the ones you're used to. Bear with me.
If LW is anything like the few sites whose analytics numbers I've seen, a significant portion of traffic comes from Web searches (I would wildly guess 10-30% of their pageviews). According to the analytics I've seen on my own site, out of those landings from Google et al., many are likely to stay only for a few seconds, presumably trying to see if they've found what they're looking for.
[In my opinion] the name, small grey welcome box for new readers, and the tagline under the logo collectively do a good job of explaining what LW is, even for people who aren't familiar with any related terminology or concepts. The image of a map in the background, [in my opinion], does not. When I first arrived I thought for a few moments it was a site about maps. I ended up reading enough to stick around, but I wonder if some don't.
I would like to ask people who didn't understand what the site was about and didn't return to LW if the image was the reason... but we'll never hear from them. So instead, I invite people here to chime in about whether or not the image deterred them at first, and whether it is something worth re-thinking.
[Whether this potential deterrent is bad is a separate question; I'm just curious about whether it even is a deterrent. I can see arguments for trying to deter people (or certain types of people) intentionally, but I suppose that's irrelevant if the image doesn't affect the probability that first-time readers will return.]
Replies from: Rain, ata, Kevin, Jack, mattnewport
↑ comment by Rain · 2010-04-19T16:58:20.757Z · LW(p) · GW(p)
An anecdote:
When I've had people shoulder surf while I was visiting the site, everyone asked, "LessWrong? What's that supposed to mean?" (5+ people). When I explained that it was a rational community where people tried to improve their thinking, they immediately began status attacks against me. One used the phrase "uber-intellectual blog" in a derogatory context and another even asked, "Are you going to come into work with a machine gun?" They often laughed at the concept.
Nobody commented on the graphic.
Replies from: pjeby, pjeby
↑ comment by ata · 2010-04-18T07:30:23.360Z · LW(p) · GW(p)
It didn't deter me, but I didn't get it until someone explained it just recently. For a while, I was just thinking "What's that a map of? Is that where FHI is based? Is it the area in Santa Clara surrounding the SIAI House? Whatever it's a map of, is it relevant enough to put it at the top of every page?" (Actual answer from a minute googling street names: it's in San Francisco, but I don't know if there's any reason this particular location was chosen.)
O'course, even for those who get it, it may not be the best illustration of the map/territory distinction, because the lower half isn't the territory either. It's just a more detailed map than the top half. Ceci n'est pas le territoire!
Anyway, I doubt it will actively deter many people, but there are probably better possibilities.
Replies from: ata
↑ comment by ata · 2010-04-18T07:36:41.777Z · LW(p) · GW(p)
Actually, regarding "Ceci n'est pas...", The Treachery of Images is a pretty good illustration of the map/territory distinction. But it probably wouldn't make a great masthead.
↑ comment by Kevin · 2010-04-18T08:04:16.042Z · LW(p) · GW(p)
There's also a significant percentage of traffic that comes from Stumble Upon. Not sure how we can better optimize for people arriving from Stumble Upon, but certainly the current state is not ideal.
There is a possibility of presenting different pages to people depending on their referrers...
↑ comment by Jack · 2010-04-18T06:42:22.513Z · LW(p) · GW(p)
The map-territory metaphor is pretty central to what goes on here, so I kind of like it. I don't really know if it is a deterrent. Any alternatives in mind?
I do think the logo could be a map of somewhere more interesting than Candlestick Park! And maybe a cooler place would keep googlers around. Or make it look like a dojo.
Replies from: Nevin
↑ comment by Nevin · 2010-04-18T16:49:28.719Z · LW(p) · GW(p)
Any alternatives in mind?
The first thing that comes to mind is having no masthead image. Any image will presumably be misunderstood by some fraction of visitors, but the text alone is very clear. I can see why people like the current image; perhaps a solution is to replace it with a solid color for people arriving from Google or StumbleUpon.
↑ comment by mattnewport · 2010-04-18T06:37:14.211Z · LW(p) · GW(p)
I have to admit I'd never really consciously noticed the image until someone recently pointed out that it symbolizes the map/territory distinction. I guess that is evidence that it is not very eye-catching or distinctive, but neither is it particularly off-putting in my opinion.
comment by Kevin · 2010-04-09T05:30:18.220Z · LW(p) · GW(p)
Terence Tao on the relationship between classical and Bayesian reasoning:
http://news.ycombinator.com/item?id=1251183
Replies from: RobinZ
comment by Nick_Tarleton · 2010-04-12T18:21:49.393Z · LW(p) · GW(p)
Does the market for sperm and egg donors violate supply and demand?
“From compensation rates to the smallest details of donor relations, sperm donors are less valued than egg donors,” Almeling said. “Egg donors are treated like gold, while sperm donors are perceived as a dime a dozen.”
The inequities persist despite the fact that profiles of hundreds of potential egg donors languish on agency Web sites, far outstripping recipient demand, while suitable sperm donors are quite rare, Almeling found. In fact, only a tiny fraction of the male population possesses a sperm count consistently high enough to be considered donation-worthy, and more than 90 percent of sperm bank applicants are rejected for this and other reasons. As a result, sperm banks routinely resort to finder’s fees to meet the need.
Replies from: wedrifid
comment by ata · 2010-04-10T19:41:19.339Z · LW(p) · GW(p)
There's this upcoming meetup called Baloney Detection Workshop in Mountain View. It will probably be fairly basic compared to what's covered on LW, but I might go just for fun. Anyone else thinking of going? They're looking for people to give 10-minute talks on related subjects — maybe someone (possibly me, possibly not) could do one that introduces some of LW's material, something that can build off the usual skepticism repertoire and perhaps lead some people to LW. Maybe something on motivated/undiscriminating skepticism, really applying the techniques of critical thinking to your own beliefs, how to actually change your mind, etc.? Any other ideas?
comment by [deleted] · 2010-04-08T03:17:52.981Z · LW(p) · GW(p)
Around here, we seem to have a tacit theory of ethics. If you make a statement consistent with it, you will not be questioned.
The theory is that though we tend to think that we're selfless beings, we're actually not, and the sole reason we act selfless at all is to make other people think we really are selfless, and the reason we think we're selfless is because thinking we're selfless makes it easier to convince others that we're selfless.
The thing is, I haven't seen much justification of this theory. I might have seen some here, some there, but I don't recall any one big attempt at justifying this theory once and for all. Where is that justification?
Replies from: Tyrrell_McAllister, khafra, Morendil, JamesAndrix, Amanojack, knb, pjeby, Jonathan_Graehl
↑ comment by Tyrrell_McAllister · 2010-04-08T17:16:33.943Z · LW(p) · GW(p)
I agree with khafra. If "selfish" means "pursuing things if and only if they accord with one's own values", then most people here would say that every value-pursuing agent is selfish by definition.
But, for that very reason (among other things), that definition is not a useful one. A useful definition of "selfish" is closer to "valuing oneself above all other things." And this is not universally agreed to be good around here.
I might value myself a great deal, but it's highly unlikely that I would, upon reflection, value myself over all other things. If I had to choose between destroying either myself or the entire rest of the universe (beyond the bare minimum that I need to stay alive), I would obliterate myself in an instant. I expect that most people here would make the same choice in the same situation.
↑ comment by khafra · 2010-04-08T13:02:23.723Z · LW(p) · GW(p)
I think the general view is more nuanced. If there is a LW theory of selflessness/selfishness, Robin Hanson would be able to articulate it far better than I; but here's my shot:
"Selflessness" is an incoherent concept. When you think of being selfless, you think of actions to make other people better off by your own value system. Your own value system may dictate that fulfilling other people's value systems makes them better off, or yours may say that changing others' value systems to "believing in Jesus is good" makes them better off.
The latter concept is actually more coherent than the first, because if one of those other systems includes a very high utility for "everyone else dies," you cannot make everyone better off.
Many LW members place a high value on altruism, but they don't call themselves selfless; they understand that they're fulfilling a value system which places a high utility on, for lack of a better word, universal eudaimonia.
Replies from: Tyrrell_McAllister
↑ comment by Morendil · 2010-04-08T07:04:47.786Z · LW(p) · GW(p)
we tend to think that we're selfless beings
That's news to me.
the sole reason we act selfless at all is to make other people think we really are selfless
That doesn't describe me. I sometimes act in ways that are detrimental to me and beneficial to others, out of a broader conception of my own self-interest: I figure that those actions are beneficial to my own projects, properly conceived.
I most specifically don't want people to think I am exploitable (which is one interpretation of "selfless"). I do want people to think of me as someone with whom it is desirable to cooperate.
↑ comment by JamesAndrix · 2010-04-08T07:36:42.631Z · LW(p) · GW(p)
I don't think that's the tacit theory of ethics around here.
Genes may be selfish, but primates who had other related primates looking out for them, or who showed that they were caring, survived better. It could well be that some simple mutations led to primates that showed they were caring because they actually were caring. (Edit: It seems to me that this must be the case for at least part of our value system.)
This is relevant:
http://lesswrong.com/lw/uu/why_does_power_corrupt/
But the benefits to the genes can just as easily come from more subtle situational differences, and assistance by related others, rather than from a major status change and change in attitudes.
↑ comment by Amanojack · 2010-04-08T12:53:42.127Z · LW(p) · GW(p)
One would be hard-pressed to find a more perfect example of doublethink than the popular notion of selflessness.
Selflessness is supposed to be praiseworthy, but if we try to clarify the meaning of "selfless person" we either get:
1. A person whose greatest (or only) satisfaction comes from helping others, or
2. A person who derives no pleasure at all from helping others (not even anticipated indirect future pleasure), but does it anyway.
Neither of these are generally considered praiseworthy: (1) is clearly someone acting for purely selfish reasons, and (2) is just a robotic servant. Yet somehow a sort of "quantum superposition" of these two is held to be both possible and praiseworthy*.
*The common usage of "selfish" is an analogous kind of doublethink/newspeak
ETA: I, and probably many others, consider (1) praiseworthy, but if that's the definition of selfless then the standard LW argument you mentioned applies to it.
↑ comment by knb · 2010-04-08T07:33:40.041Z · LW(p) · GW(p)
- I don't think that people think they are selfless. They usually think they're more selfless than they actually are, though.
The theory is that though we tend to think that we're selfless beings, we're actually not, and the sole reason we act selfless at all is to make other people think we really are selfless, and the reason we think we're selfless is because thinking we're selfless makes it easier to convince others that we're selfless.
- I suspect most people at Less Wrong have a more complex view than this description. People also behave selflessly for reasons of inclusive fitness and reciprocal altruism. People also engage in "selfless" behavior for the same reason a "forgiving" tit-for-tat strategy wins in iterated prisoner's dilemmas.
↑ comment by Jonathan_Graehl · 2010-04-08T07:11:11.656Z · LW(p) · GW(p)
This seems obviously true, except that there are certain regimes where genuine cooperation isn't ruled out by selfish genes (typically requiring a sort of altruistic willingness to undertake costly detection and punishment of cheaters). So I would not at all rule out instances of genuine altruism if a case can be made that it's positive-sum enough and widespread enough.
comment by Vladimir_Nesov · 2010-04-20T20:48:30.373Z · LW(p) · GW(p)
From Hal Daume's blog:
If you believe A => B, then you have to ask yourself: which do I believe more? A, or not B?
Let's say a weak compressor is one that always reduces a (non-empty) file's size by one bit. A strong compressor is one that cuts the file down to one bit. I can easily prove to you that if you give me a weak compressor, I can turn it into a strong compressor by running it N-1 times on files of size N. Trivial, right? But what do you conclude from this? You're certainly not happy, I don't think. For what I've proved really is that weak compressors don't exist, not that strong compressors do. That is, you believe so strongly that a strong compressor is impossible, that you must conclude from (weak) => (strong) that (weak) cannot possibly exist.
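The construction in the quote is mechanical enough to write down (a sketch built around a hypothetical weak_compress function; the point is that no such function can exist):

    def strong_compress(data, weak_compress):
        # Iterate a "weak" compressor - one guaranteed to shrink any
        # non-empty bit-string by exactly one bit - until one bit is left.
        while len(data) > 1:
            shorter = weak_compress(data)
            assert len(shorter) == len(data) - 1  # the "weak" guarantee
            data = shorter
        return data  # a single bit: "strong" compression

Pigeonhole makes the premise impossible: there are 2^N distinct N-bit files but only two one-bit files, so no invertible weak_compress can exist, and the easy proof of (weak) => (strong) is really a proof that (weak) is false.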
comment by steven0461 · 2010-04-10T01:29:33.740Z · LW(p) · GW(p)
If Airedale and I organized a meetup in the Chicago area, would anyone come? If there's nontrivial interest and we decide on going through with it, we'll make a top-level post with a place and time.
Replies from: Unnamed, arundelo
↑ comment by Unnamed · 2010-04-12T06:09:46.195Z · LW(p) · GW(p)
I'd be interested, and you could PM these folks to find others who may not be reading this open thread.
comment by RobinZ · 2010-04-08T23:33:33.311Z · LW(p) · GW(p)
I have discovered myself to be in need of a statistical tool I do not possess. I am confident that a frequentist formula exists, based on the nature of the task to be executed, but it occurs to me that there may be people who would like to prove some point about Bayesianism vs. Frequentism - so here's a challenge for you all:
I am a mechanical engineer - numerate, literate, and reasonably intelligent - educated to the extent of one college course in basic probability and statistics. I have also been reading EY's essays for years, and am familiar (approaching level 3) with the introductory Bayes Law material he has written.
What I want is a handbook - preferably available from the University of Maryland, College Park library [edit: catalog link] or the Montgomery County, Maryland public library system [edit: library system homepage with link to catalogs] - but necessarily available for less than $30 U.S. (exact cutoff negotiable) - which is likely to include a procedure I can use to analyse my data and act. Optimally, it should be sufficiently clear that I can use my results to justify a course of action to someone else, if necessary. (Feel free to assume I am eloquent for purposes of this additional requirement.)
Should I have both Bayesian and frequentist methods available, I will employ both and report my results in summary form with analytical details in a separate post.
Replies from: Matt_Simpson
↑ comment by Matt_Simpson · 2010-04-09T06:19:38.071Z · LW(p) · GW(p)
What, in particular, is the tool you are looking for?
A First Course in Bayesian Methods is ~$50 used, and covers what I take to be the basics. I'm currently using it in a grad class in Bayesian statistics (with a companion text for computing in R) and have no complaints - well, other than that it's not an all-encompassing text.
The first edition of Gelman's text is going for ~$35 used (~$50 for the second edition) and has the added advantage of actually being in UM's library (both editions). I've not read either edition, but I hear it's the general Bayesian text to get.
Replies from: RobinZ
↑ comment by RobinZ · 2010-04-09T12:32:06.212Z · LW(p) · GW(p)
Thanks for the recommendations!
I am being intentionally vague about the nature of the task I need to perform, but it is not esoteric. I would expect the problem to be discussed in any good textbook and many undergraduate statistics courses.
Edit: I think I see the chapter in the table of contents of Gelman from Amazon's preview.
comment by Matt_Simpson · 2010-04-08T14:40:23.768Z · LW(p) · GW(p)
I'm looking for a good textbook or two on Bayesian design of experiments. Any suggestions?
While I'm on the topic of Bayesian textbooks, is the difference between the 1st and 2nd edition of Gelman's text big enough to be worth buying the 2nd edition over the 1st? (I have a couple of short texts already for one of my courses this semester, but I think the depth is lacking)
comment by PhilGoetz · 2010-04-30T18:27:16.003Z · LW(p) · GW(p)
Wikipedia page on causal decision theory says:
In a 1981 article, Allan Gibbard and William Harper explained causal decision theory as maximization of the expected utility U of an action A "calculated from probabilities of counterfactuals":
U(A) = Σj P(A > Oj) D(Oj),
where D(Oj) is the desirability of outcome Oj and P(A > Oj) is the counterfactual probability that, if A were done, then Oj would hold.
David Lewis proved that the probability of a conditional P(A > Oj) does not always equal the conditional probability P(Oj | A). If that were the case, causal decision theory would be equivalent to evidential decision theory, which uses conditional probabilities.
Can somebody explain this strange statement?
Replies from: JGWeissman
↑ comment by JGWeissman · 2010-04-30T18:36:21.270Z · LW(p) · GW(p)
An important aspect of a decision theory is how it defines counterfactuals. Anna Salamon wrote a good sequence on this topic.
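To see concretely how the counterfactual probability P(A > O) can differ from the conditional probability P(O | A), here is a toy common-cause example (all numbers invented): a hidden cause C promotes both the action A and the outcome O, while A itself is causally inert with respect to O.

    # Invented distribution: P(C), P(A|C), P(O|C); A and O are
    # conditionally independent given C, and A does not cause O.
    p_c = 0.5
    p_a_given_c = {True: 0.9, False: 0.1}  # C promotes doing A
    p_o_given_c = {True: 0.8, False: 0.2}  # C promotes outcome O

    def joint(c, a, o):
        pc = p_c if c else 1 - p_c
        pa = p_a_given_c[c] if a else 1 - p_a_given_c[c]
        po = p_o_given_c[c] if o else 1 - p_o_given_c[c]
        return pc * pa * po

    # Conditional probability P(O | A): observe A and update.
    p_a = sum(joint(c, True, o) for c in (True, False) for o in (True, False))
    p_o_given_a = sum(joint(c, True, True) for c in (True, False)) / p_a

    # Counterfactual probability P(A > O): set A by fiat; C is untouched,
    # and since A does not cause O, O just marginalizes over C.
    p_o_do_a = sum((p_c if c else 1 - p_c) * p_o_given_c[c]
                   for c in (True, False))

    print(p_o_given_a)  # 0.74 - A is evidence for O
    print(p_o_do_a)     # 0.50 - doing A does not change O

Evidential decision theory would plug 0.74 into the expected-utility sum where causal decision theory plugs in 0.50, which is how the two theories come apart on problems like this.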
comment by Strange7 · 2010-04-28T13:23:27.112Z · LW(p) · GW(p)
Are crush videos, as mentioned in http://www.overcomingbias.com/2010/04/truetoleranc.html , actually bad, and if so, why?
I theorize that they are, based on what I've read about sex addiction and serial killers, but I'm not really prepared to rigorously defend that position.
Replies from: Jack
↑ comment by Jack · 2010-04-28T14:08:25.458Z · LW(p) · GW(p)
Torturing animals is bad. But I wouldn't have a problem with, say, a CGI version.
Replies from: Strange7
↑ comment by Strange7 · 2010-04-28T14:12:02.690Z · LW(p) · GW(p)
Really? Even if, down the line, somebody credited some better-than-reality gorn as their inspiration for raping and murdering a dozen people?
Replies from: thomblake, Matt_Simpson, Jack
↑ comment by Matt_Simpson · 2010-04-28T15:39:52.582Z · LW(p) · GW(p)
The question isn't "will this cause one person to go on a murdering rampage?" but rather "what is the net effect on murdering rampages (and everything else we care about)?" There is some evidence that violent movies reduce violent crime in the short run, serving as a substitute for actually committing violence. I wouldn't be surprised if the same was true for crush videos.
Replies from: Strange7
↑ comment by Strange7 · 2010-04-28T16:00:10.587Z · LW(p) · GW(p)
So, if easy access to violent entertainment leads to desensitization which leads to increased demand, and real/simulated violence are substitutes for this purpose, what happens when the supply of simulated violence is interrupted? That's the situation ancient Rome had: politicians compelled to maintain the gladiator games under threat of mass rioting.
Replies from: thomblake, cupholder, Matt_Simpson
↑ comment by cupholder · 2010-04-29T14:02:38.436Z · LW(p) · GW(p)
what happens when the supply of simulated violence is interrupted?
Extremely unlikely, unless movie theaters, DVDs and the Internet were obliterated. And if that happened, the (theoretical) resulting uptick in violence would be the least of most people's worries.
Replies from: Strange7
↑ comment by Strange7 · 2010-04-29T14:22:19.219Z · LW(p) · GW(p)
DVD players and computers both depend on centralized power generation, and movie theaters don't show crush videos. It's not necessary that the supply be permanently eliminated, just unexpectedly cut back for some reason. Even if the supply is constant, desensitization means there will be ongoing problems as a result.
Replies from: cupholder
↑ comment by cupholder · 2010-04-29T14:38:11.741Z · LW(p) · GW(p)
DVD players and computers both depend on centralized power generation
A centralized power generation failure would probably be even more of a distraction from reenacting violent entertainment than the loss of DVD players and computers!
movie theaters don't show crush videos
My mistake - I had thought you'd broadened what you were talking about to 'violent entertainment' and 'simulated violence' in general as that's what your parent comment refers to.
It's not necessary that the supply be permanently eliminated, just unexpectedly cut back for some reason.
Fair enough - let's suppose that theaters/DVDs/computers are just temporarily inaccessible in some localized region. I suspect that most potential violent entertainment (or crush video, if we're staying specific) imitators in that region would be too concerned with regaining access to theaters/DVDs/computers to do violence themselves.
Even if the supply is constant, desensitization means there will be ongoing problems as a result.
A constant supply is inconsistent with an 'unexpectedly cut back' supply; I wouldn't expect a constant supply to boost violence if it's decreasing supply that's supposed to boost violence.
Replies from: Strange7
↑ comment by Strange7 · 2010-04-29T15:08:58.053Z · LW(p) · GW(p)
A constant supply is inconsistent with an 'unexpectedly cut back' supply; I wouldn't expect a constant supply to boost violence if it's decreasing supply that's supposed to boost violence.
In the short term, demand for violence is effectively fixed, so decreases in the supply of simulated violence lead to increases in actual violence as a substitution effect.
In the long term, exposure to violence leads to desensitization, so demand for simulated violence expands to meet the supply.
Given two otherwise-identical societies, in which one strictly limits the supply of violent imagery and the other does not, I predict that the latter will (eventually, due to desensitization) have a higher demand for violence, leading to more actual, physical violence during blackouts.
I've heard it argued that the one time when large-scale censorship would be morally justified is if a "Langford basilisk," that is, an image which kills the viewer, were found to exist. What if there were such an image, but it only killed a tiny percentage of the people who saw it, or required a long cumulative exposure to be effective? What if, rather than killing directly, it compelled the viewer to hurt others, or made those already considering such a course of action more likely to follow through on it?
This isn't a fully-general argument for censorship of any given subject that provokes disgust; it's quite specific to violent pornography.
Replies from: cupholder, mattnewport, thomblake, PhilGoetz
↑ comment by cupholder · 2010-04-30T12:42:55.028Z · LW(p) · GW(p)
In the short term, demand for violence is effectively fixed, so decreases in the supply of simulated violence lead to increases in actual violence as a substitution effect.
This is undoubtedly possible, though I'd expect far less of a substitution effect than you because of the distraction effects I suggested above. Ultimately I suppose this is an empirical issue.
In the long term, exposure to violence leads to desensitization, so demand for simulated violence expands to meet the supply.
I suspect that once the level of simulated violence in a real society is above some saturation point, further increases in its supply would not be met by increased demand. Ideally there'd be some way to empirically test this too.
Given two otherwise-identical societies, in which one strictly limits the supply of violent imagery and the other does not, I predict that the former will (eventually, due to desensitization) have a higher demand for violence, leading to more actual, physical violence during blackouts.
I smell a Freakonomics chapter!
Seriously, if there are any economists or sociologists reading this comment, I think something like this could make a cute topic for a paper. Some quick googling makes me think that the effect of blackouts in general on crime hasn't been researched rigorously - I'm mostly seeing offhand claims like 'looting during blackouts blah blah blah' or studies of individual blackouts like New York '77. I see even less about using blackouts to assess the effect of violent media specifically, but I'd be very interested in the results of such a study.
At any rate, your own prediction is an interesting one, if only in terms of thinking about how one could test it, or approximate testing it.
As for which variations on the Langford basilisk I'd be OK with banning: I'd work it out by putting on my utilitarian hat and plugging in numbers.
This isn't a fully-general argument for censorship of any given subject that provokes disgust; it's quite specific to violent pornography.
More than that; it's specific to media that (1) desensitizes some viewers and (2) have actual violence as a substitute good, which arguably includes violent non-porn as well as violent porn.
↑ comment by mattnewport · 2010-04-29T16:30:26.119Z · LW(p) · GW(p)
If the supply of virtual violence is increasing faster than demand so that real violence is going down would you still support banning virtual violence for fear of this potential uptick? Presumably you would want to try and determine the expected value of virtual violence given the relative effects and probabilities?
If it helps your estimates, evidence suggests that increasing exposure to virtual violence and pornography correlates with reduced rates of real world violence and sexual violence.
Replies from: Strange7
↑ comment by Strange7 · 2010-04-29T17:16:59.007Z · LW(p) · GW(p)
Presumably you would want to try and determine the expected value of virtual violence given the relative effects and probabilities?
In this case, my goal is to minimize the expected future amount of real violence, so yes I'd like to see the math. Including the long-term black-swan risks, that interruptions to non-critical infrastructure could create an unanticipated surge of sadism.
Other evidence, not of an increase in violence, but of hard-to-measure, slow-developing side effects.
Replies from: mattnewport, NancyLebovitz
↑ comment by mattnewport · 2010-04-29T17:46:40.597Z · LW(p) · GW(p)
Including the long-term black-swan risks, that interruptions to non-critical infrastructure could create an unanticipated surge of sadism.
This just sounds like one more potential reason near the bottom of an already long list of reasons to mitigate such interruptions. This argument looks analogous to the claim that making bullets out of lead is bad because someone who is shot multiple times will end up with an unhealthy dose of lead in their bloodstream.
↑ comment by NancyLebovitz · 2010-04-30T00:11:50.097Z · LW(p) · GW(p)
Very interesting link-- I'm not sure that avoiding superstimuli is part of rationality, but it might be part of the art of living well.
↑ comment by thomblake · 2010-04-30T17:42:12.061Z · LW(p) · GW(p)
This isn't a fully-general argument for censorship of any given subject that provokes disgust; it's quite specific to violent pornography.
I didn't notice this on the first read-through, but cupholder's comment brought this to my attention - the actual content seems to be an irrelevant factor in your general principle, especially the 'pornography' part. Surely we could say the same thing about non-pornographic violent media. Furthermore, if reading the Oxford English Dictionary or looking at Starry Night increases violent tendencies in the same way, then your argument works just as well.
Replies from: Strange7
↑ comment by Strange7 · 2010-05-03T19:29:55.779Z · LW(p) · GW(p)
Indeed it would. I am concerned about this because of the risks, not because of a moral objection to pornography (some kinds are rather pleasant).
For that matter, I think the moral revulsion evolved as a means to mitigate the risks associated with superstimuli, fascination with violence, etc.
↑ comment by PhilGoetz · 2010-04-30T18:29:06.465Z · LW(p) · GW(p)
Given two otherwise-identical societies, in which one strictly limits the supply of violent imagery and the other does not, I predict that the former will (eventually, due to desensitization) have a higher demand for violence, leading to more actual, physical violence during blackouts.
I think you mean "I predict the latter will", since desensitization occurs more in the society with more violent imagery.
Replies from: Strange7
↑ comment by Matt_Simpson · 2010-04-28T17:22:12.696Z · LW(p) · GW(p)
Well, perhaps, though I would expect the effect to be much smaller today - see, for example, parts of this post.
↑ comment by Jack · 2010-04-28T15:22:24.526Z · LW(p) · GW(p)
I mean for mental health reasons I wouldn't watch them and wouldn't let my children watch them. But they aren't morally wrong - which is the kind of thing that would lead me to want a state intervention.
Are you surprised to get the standard left-libertarian response, here?
Replies from: Strange7
↑ comment by Strange7 · 2010-04-28T16:05:42.070Z · LW(p) · GW(p)
Are you surprised to get the standard left-libertarian response, here?
By definition, standard responses shouldn't be surprising.
Disappointment is a separate issue. You've presented a canned NIMBY opinion, not a reasoned argument. Is the statement 'crush videos aren't morally wrong' falsifiable?
Replies from: Jack
↑ comment by Jack · 2010-04-28T16:51:58.720Z · LW(p) · GW(p)
You've presented a canned NIMBY opinion,
I guess you can call it that. In my own words, I say I am applying a general principle, namely the harm principle, to a specific case. I find the harm principle intuitively moral, and when applied to a society it describes the kind of place I would like to live in. I don't really go for unified normative theories, but the harm principle is consistent with most deontological ethics, it is an excellent rule of thumb for consequentialists (which is why Mill is the guy who named it), those who follow it possess the virtue of "tolerance", and it is the bedrock of the liberal political order. Edit: Oh, and contractualism. I might be someone with preferences that others will find obscene, so it is in my interest to agree to this principle. Indeed, I have preferences that others probably find obscene, so I don't have a lot of trouble thinking this way.
Is the statement 'crush videos aren't morally wrong' falsifiable?
I'm not a moral realist. I'm expressing my preference that people be free to fulfill their preferences so long as they don't hurt anyone.
Replies from: khafra
↑ comment by khafra · 2010-04-28T18:28:32.127Z · LW(p) · GW(p)
The harm principle is good in common cases, but I fear this may be an edge case, and the harm principle tends to break down when the meaning of "harm" or "hurt" is called into question. By the standards of Western Civilization, siphoning money from Joe's bank account is harm to Joe, although any physical effect on Joe is very indirect; making out with someone of the same gender is not harm to Joe, even if the sight of it makes him violently ill. By the standards of Islam, drawing certain pictures can be harm to everyone of their faith.
It seems to me that there's a narrow range of value congruence where the harm principle is applicable; go further and it is incoherent, closer and it is redundant.
Replies from: Jack
↑ comment by Jack · 2010-04-28T18:58:54.647Z · LW(p) · GW(p)
I agree with this. "Harm" is too vague to make the harm principle a fully general argument for the Western liberal order - and it certainly wouldn't do to try and program an AI with it. One thing a liberal society must wrestle with is what kinds of behavior are considered harmful. Usually, we define harm to include some behaviors beyond physical harm: like theft or slander. But watching computer generated images of any kind, in the privacy of your own home, is pretty solidly in the "doesn't harm anyone" category, as defined by the liberal/libertarian tradition.
Part of my point is that there isn't really much of an argument to be had. I suppose if someone demonstrated that the existence of computer generated snuff actually threatened our civilization or something, I could be swayed. But basically I think people should do things that make them happy so long as they avoid hurting others: if that isn't a terminal value it is awfully close.
comment by LucasSloan · 2010-04-27T21:49:44.449Z · LW(p) · GW(p)
I intend to start playing World of Warcraft when the summer break begins. Does anyone actually want to do this?
Replies from: Aleksei_Riikonen
↑ comment by Aleksei_Riikonen · 2010-04-27T22:09:10.803Z · LW(p) · GW(p)
Heh, that is a topic that is very relevant to an article I was intending to post to Less Wrong today.
I've written it, but then noticed I have only 17 of the required 20 karma points.
Any three people wanna upvote this comment of mine so I can post my article?
comment by byrnema · 2010-04-23T19:11:50.908Z · LW(p) · GW(p)
Is Pascal's wager terribly flawed and is this controversial?
Replies from: Vladimir_Nesov, cupholder
↑ comment by Vladimir_Nesov · 2010-04-23T22:24:02.942Z · LW(p) · GW(p)
Accepting God as a probable hypothesis has a lot of epistemic implications. This is not just one thing: everything is connected, and one thing being true implies other things being true, other things being false. You won't be seeing the world as you currently believe it to be; after accepting such a change, you will be seeing a strange magical version of it, a version you are certain doesn't correspond to reality. Mutilating your mind like this has enormous destructive consequences for your ability to understand the real world, and hence for your ability to make the right choices, even if you forget about the hideousness of doing this to yourself. This is the part that is usually overlooked in Pascal's wager.
Belief in belief is a situation where you claim to have a belief, and you believe in having the belief, but you act in a way that can only be explained by working from an understanding of reality in which the belief in question is wrong. Belief in belief keeps human believers out of most of the trouble, but that's not what Pascal's wager advocates! Not understanding this distinction may lead to underestimating the horror of the suggestion. You are being offered an option to actually believe, but this is not what people have experience observing in others. You only see other people believing in belief, which is not as bad as actually believing.
Hence, while you believe in belief that Pascal's wager offers you an option to believe in God, actually you believe that you are offered an option to believe in belief in God. (Phew!)
Replies from: byrnema
↑ comment by byrnema · 2010-04-24T00:45:21.714Z · LW(p) · GW(p)
Regarding the first paragraph, I don't see that Pascal's wager requires all these contortions. It only requires estimating the utility of belief in God, and then makes a positive assertion about what you should do with that utility.
Would you agree that your arguments are arguments for why the utility of believing in God should be low?
Regarding the second paragraph, I agree there is a weird double-think aspect to Pascal's Wager - just in that someone converted by PW would be admitting that they believe something simply because it was convenient to do so. Can you really believe something for that reason, knowing that is the reason? So this is an argument in the category 'you can't really choose your beliefs as an act of will'.
↑ comment by cupholder · 2010-04-23T19:44:34.467Z · LW(p) · GW(p)
(Edit - it's probably a good idea to avoid reading this comment until you try RobinZ's suggestion.)
I looked at the parent of that comment of yours, and I think I can see why you disagreed with MatthewB and JGWeissman about Pascal's Wager: the three of you may be thinking of PW differently from each other.
It sounds to me like MatthewB was evaluating PW in terms of how well it gets you to the truth, whereas you were evaluating PW in terms of whether it helps you win the +∞ reward for belief. PW is misguided for the first purpose, but could work for the second, depending on the situation.
And JGWeissman, I think, was considering PW-as-applied-to-theism whereas you were thinking of PW-in-general - but you identified that difference yourself.
comment by Jack · 2010-04-22T17:51:15.771Z · LW(p) · GW(p)
"Magic everywhere in this bitch."
(For those who aren't aware of this act, yes, they're sincere and have a very sizeable following [the album this track is from peaked at #4 on the Billboard 200.])
Replies from: thomblake
comment by byrnema · 2010-04-20T00:24:48.669Z · LW(p) · GW(p)
Same questions, new formulation.
It seems that here at Less Wrong, we discourage map/territory discrepancies and mind projection fallacies, etc.
However, "winning" is in the map not the territory.
In one extreme aesthetic, we could become agents that have no subjective beliefs about the territory. But then there would be no "winning"; we'd have to give up on that.
So instead we'd like to have our set of beliefs minimally include enough non-objectively-true stuff to make "winning" coherent. Given this, how can we draw a line about which beliefs are good to have? For example, we certainly don't want to have beliefs that are objectively false. But what about the entire set of beliefs that are objectively neither true nor false? Are they all equivalent? Is there any way to define an aesthetic for choosing from this set of beliefs?
I think physical materialism tries 'minimalism' as an aesthetic and fails, because there is a continuous trade-off between having fewer and fewer beliefs and a less and less well-defined sense of "win"; there seems to be no natural place to draw the line.
Instead you could choose beliefs that maximize the sense of winning, and that is what theists do.
Replies from: Jack, Morendil↑ comment by Jack · 2010-04-20T01:55:03.861Z · LW(p) · GW(p)
Same answer, new formulation.
However, "winning" is in the map not the territory.
Nah. Winning isn't determined by the map, it's like a highlighted endpoint (like drawing on a map with a marker). You win when you get there. Note that a little red x or circle on a map isn't really part of the map. There is nothing there that we expect to correspond to the territory (imagine arriving at your destination and everything turns the color of the marker you used!).
The theistic move is like not finding any destination on the map that you're happy with, so you draw in a really cool mountain and make it your endpoint.
Winning isn't in the map because winning conditions are defined by desires, not beliefs.
Replies from: byrnema↑ comment by byrnema · 2010-04-20T02:09:37.583Z · LW(p) · GW(p)
Thanks for responding.
I'm not sure about the other stuff, but you have to agree that winning is in the map. You can define your win as an objective fact about reality (winning = getting to the mountain) but deciding that any objective fact is a win is subjective.
The theistic move is like not finding any destination on the map that you're happy with, so you draw in a really cool mountain and make it your endpoint.
My problem is that I'm trying to identify any lasting, real difference between deciding that a feature of the territory indicated on your map is 'pretty cool' and deciding that aspects of your map are pretty cool in and of themselves, even if they don't map to real features in the terrain.
Winning isn't in the map because winning conditions are defined by desires, not beliefs.
OK. But just to check: are you pretty sure this is a real distinction?
Replies from: Jack↑ comment by Jack · 2010-04-20T02:44:26.586Z · LW(p) · GW(p)
I'm not sure about the other stuff, but you have to agree that winning is in the map. You can define your win as an objective fact about reality (winning = getting to the mountain) but deciding that any objective fact is a win is subjective.
It is subjective and it isn't in the territory... but that isn't the extent of our ontology. The map corresponds to your beliefs, the territory to external reality. Your desires are something else.
My problem is that I'm trying to identify any lasting, real difference between deciding that a feature of the territory indicated on your map is 'pretty cool' and deciding that aspects of your map are pretty cool in of themselves, even if they don't map to real features in the terrain.
Right. I don't think I have a new way of answering this question. :-). "Pretty cool" is at most an intersubjectively determined adjective. To say something is pretty cool in and of itself is a category error. Put it this way: what would it possibly mean for something to be pretty cool in a universe without anyone to find it cool? (Same goes for finding things moral, just so we're on the same page).
OK. But just to check: are you pretty sure this is a real distinction?
As certain as I get about anything. Beliefs are accountable to reality, if reality changes beliefs change. From the less wrong wiki on the map and territory:
Since our predictions don't always come true, we need different words to describe the thingy that generates our predictions and the thingy that generates our experimental results. The first thingy is called "belief", the second thingy "reality".
Desires don't generate predictions. In fact, they have exactly the opposite orientation of beliefs. If reality doesn't match our beliefs our beliefs are wrong and we have to change them. If reality doesn't match our desires reality is wrong and we have to change it.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-04-20T08:47:15.003Z · LW(p) · GW(p)
I think there are maps associated with rewards. The reason you want a reward is that you're expecting something good, whether it's a sensation or a chance at further rewards, to be associated with it.
If this has been a difficult question, it suggests that you didn't have your mind (or perhaps your map of your mind) as part of the territory.
Replies from: Jack↑ comment by Jack · 2010-04-20T11:44:39.956Z · LW(p) · GW(p)
Do you mind clarifying this?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-04-20T12:32:43.748Z · LW(p) · GW(p)
I can try, but I'm not sure exactly what's unclear to you, so this is an estimate of what's needed.
It looks to me as though the metaphor is a human looking at a road map, and what's being discussed is whether the human's destination is part of the landscape represented on the map. If you frame it that way, I'd say the answer is no.
However, the map in hand isn't the only representation the human has of the world. The human has a destination, and ideas about what will be accomplished by getting to the destination. I'm saying that the ideas about the goal are a map of how the world works.
From the root of this thread:
It seems that here at Less Wrong, we discourage map/territory discrepancies and mind projection fallacies, etc.
This is a means, not an end. The purpose of Less Wrong is to live as well as possible-- we can't live without maps because the world is very much larger than our minds, and very much larger than any possible AI.
The "extreme aesthetic" of eliminating as much representation as possible doesn't strike me as what we're aiming at, but I'm interested in other opinions on that.
If I understand The Principles of Effortless Power correctly, it's about eliminating (conscious?) representation in martial arts fighting, and thereby becoming very good at it. However, the author puts a lot of effort into representing the process.
Replies from: Jack↑ comment by Jack · 2010-04-20T12:49:25.793Z · LW(p) · GW(p)
I can try, but I'm not sure exactly what's unclear to you, so this is an estimate of what's needed.
Pretty much all of it, but that might just be me. It is a little clearer now. Was there something in my comment in particular you were responding to? My puny human brain might just be straining at the limitations of metaphorical reasoning.
However, the map in hand isn't the only representation the human has of the world. The human has a destination, and ideas about what will be accomplished by getting to the destination. I'm saying that the ideas about the goal are a map of how the world works.
I think we have maps for how to reach our goal but the fact that you have picked goal x instead of any other goal doesn't appear to me to be the product of any belief.
Your last three paragraphs still confuse me. In particular, while they all sound like cool insights I'm not entirely sure what they mean exactly and I don't understand how they relate to each other or anything else.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-04-20T13:12:26.879Z · LW(p) · GW(p)
What caught me was your idea that goals are completely unexaminable. Ultimate goals might be, but most of the goals we live with are subordinate to larger goals.
I was trying to answer the root post in this thread, and looking at the question of whether we're trying to eliminate maps. I don't think we are.
The last paragraph was the best example I could find of a human being using maps as little as possible.
Replies from: Jack↑ comment by Jack · 2010-04-20T13:34:55.308Z · LW(p) · GW(p)
What caught me was your idea that goals are completely unexaminable. Ultimate goals might be, but most of the goals we live with are subordinate to larger goals.
Got it. And you're right that my claim should be qualified in this way.
I was trying to answer the root post in this thread, and looking at the question of whether we're trying to eliminate maps. I don't think we are.
I see (I think). I guess my position is that a free-floating belief (that is, one that doesn't constrain anticipated experience) or a desire is like a map-inscription which doesn't correspond to anything in the territory. And there is a sense in which such things aren't really part of the map. They're more like an overlay than the map itself. You can take the compass rose off a map; it might make the map harder to use or less cool to stare at, but it doesn't make the map wrong. And not recognizing that this is the case is a serious error! There is no crazy four-pointed island in the middle of the South Pacific. Desires and free-floating beliefs are like this. I don't really want them gone; I just want people to realize that they aren't actually in the territory and so in some sense aren't really part of the ideal map (even if you keep them there because it is convenient).
Replies from: byrnema↑ comment by byrnema · 2010-04-20T16:31:25.470Z · LW(p) · GW(p)
This is as much a response to Morendil as a response to you and Nancy.
While it is certainly true that many or most of our desires "come with" the territory, these desires are 'base' or 'instinctual' goals that at times we would like to override. The desire to be free of pain, for example. So-called "ultimate goals" can be more cerebral (and perhaps more fictional) and depend much more on beliefs. For example, the desires to help humanity, avoid existential risks, and populate the universe are all based more upon beliefs than upon the territory.
Replies from: Jack↑ comment by Jack · 2010-04-22T02:06:43.860Z · LW(p) · GW(p)
So if we take the view from nowhere: there are brains which do this thing called being a mind. The minds have things called beliefs and things called desires, but all of this is just neuron activity. These minds have a metaphor for relating their neuronal activities called beliefs with the universe that they observe: the map-territory metaphor.
The map-territory distinction is only understandable from the subjective perspective. There is something "outside me" which generates sensory experiences. This is the territory. There is something that is somehow a part of me, or at least more proximate to me: my expectations about future sensory experiences, my beliefs. This is the map. Desire is a third thing (which of course is in the same universe as everything else, apropos the view from nowhere); it neither generates sensory experiences nor constrains our expectations about future sensory experiences. It isn't in the territory, or in the map. From the subjective perspective desires are simply given. Now of course there are actually complex causal histories for these things, but from the subjective perspective a desire just arises.
Now, through reasoning with our map, what are initially terminal desires throw off sub-desires (for example, if I desire food I will also desire getting a job to pay for food). Perhaps we can also have second-order desires: desires about our desires. Of course, like beliefs, desires exist in the territory as aspects of our brain activity. But in the perspective in which the map-territory metaphor is operative, desires are sui generis.
Replies from: byrnema↑ comment by byrnema · 2010-04-24T01:34:16.593Z · LW(p) · GW(p)
(Status: so what happened at this point is that I gave up. You think that desires are a third thing, which I understand, but I think desires (and beliefs) are things you choose and that you modify in order to be more rational. I didn't realize I gave up until I realized I had stopped thinking about this.)
↑ comment by Morendil · 2010-04-20T07:38:05.275Z · LW(p) · GW(p)
Our sense of "winning" isn't entirely up for grabs: we prefer sensory stimulation to its absence, we prefer novel stimulations to boring old ones, we prefer to avoid protracted pain, we generally prefer living in human company rather than on desert islands, and so on.
In one manner of thinking, our sense of "winning" - considered as a set of statistically reliable facts about human beings - is definitely part of the territory. It's a set of facts about human brains.
"Winning" more reliably entails accumulating knowledge about what constitutes the experience of winning, and it seems that it has to be actual knowledge - it's not enough to say "I will convince myself that my sense of winning is X", where X is some not necessarily coherent predicate which seems to match the world as we see it.
That may work temporarily and for some people, but be shown up as inadequate as circumstances change.
Replies from: byrnema, NancyLebovitz↑ comment by byrnema · 2010-04-20T12:27:12.685Z · LW(p) · GW(p)
Yeah, most desires are part of the territory, and not really influenced by our beliefs.
As a child I was very drawn to asceticism. I thought that by not qualifying any of my natural desires as 'winning', I could somehow liberate myself from them. I think that I did feel liberated, but I was also very religious and so I imagined there was something else (something transcendent) that I was fulfilling. In later years, I developed a sense that I needed to "choose" earthly desires in order to learn more about the world and cope with existential angst. I considered it a necessary 'selling-out' that I would try for 10 years. All this to explain why I don't tend to think of desires as a given, but as a choice. But I suppose desires are given after all, and in my ascetic years I just believed that being unhappy was winning.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-04-20T12:39:32.383Z · LW(p) · GW(p)
I believe asceticism is just another human drive, and possibly one not shared with other animals. In any case, it needs as much examination to see whether it fits into the context of a life as any other drive.
I have a similar take on the desire to help people.
↑ comment by NancyLebovitz · 2010-04-20T09:14:01.378Z · LW(p) · GW(p)
we prefer novel stimulations to boring old ones,
I think there's a lot of variation. Some people choose very stable lives, and I don't know of anyone who wants everything to change all the time.
comment by Kevin · 2010-04-18T22:28:30.568Z · LW(p) · GW(p)
In the Next Industrial Revolution, Atoms Are the New Bits
http://www.wired.com/magazine/2010/01/ff_newrevolution/all/1
comment by orthonormal · 2010-04-16T21:57:54.725Z · LW(p) · GW(p)
Since I don't generally consider myself better informed than the market, I usually invest in index funds. At the moment, though, I find Thiel's diagnosis of irrational exuberance to be pretty reasonable, and I'd like to shift away from stocks for the moment.
My question: Is there an equivalent to index funds for bond markets— i.e. an investing strategy (open to small investors) which matches market performance rather than trying to beat it (at the risk of black-swan blowups)? Or alternately, is there a better investment strategy that I can put into place now and not worry about?
Replies from: Rain, mattnewport↑ comment by Rain · 2010-04-19T13:37:50.412Z · LW(p) · GW(p)
Beware trying to time the market. Make sure you're taking this action, not because you feel that the time is right to switch, but because you've carefully analyzed your risk/reward preferences.
That said, yes, there are 'index fund' bond investment vehicles, outside of the ETFs mentioned by mattnewport. They generally track the time frame (short, medium, long term) and type of bond (corporate, state, federal). Here are some examples from Vanguard: VBISX (Short Term Index), VBIIX (Intermediate Term Index), VBLTX (Long Term Index), and VBMFX (Total Bond Market Index).
What you're talking about is Asset Allocation, and it's the number one predictor of your long term investment results. This generally involves determining your own risk profile and picking bonds vs. stocks appropriately. A rule of thumb is to pick 100 - (your age) as a percentage of stocks, since the younger you are, the more growth you'll need. If you have less tolerance for risk, then you could go lower.
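For what it's worth, the rule of thumb fits in a few lines of Python; the risk-adjustment parameter is my own illustrative addition, not part of the standard rule:

```python
# A minimal sketch of the "100 minus age" allocation rule of thumb.
def stock_allocation(age, risk_adjustment=0):
    """Percentage of the portfolio in stocks; the remainder goes to bonds."""
    pct = 100 - age + risk_adjustment
    return max(0, min(100, pct))  # clamp to a sensible 0-100 range

print(stock_allocation(30))       # 70 -> 70% stocks / 30% bonds
print(stock_allocation(30, -10))  # 60 -> a more risk-averse mix
```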
I'm currently invested 20% bonds and 80% stocks, but the bonds I have access to are the safest in the world (Federal employee G Fund, the same thing that Social Security invests in). Further breaking down AA, general categories include foreign vs. domestic, index vs. actively managed, taxable vs. non-taxed.
Example Asset Allocation:
20% bonds:
- 100% Medium-Term Securities (G Fund)
80% equities:
- 25-35% International Index Funds (EAFE, VEIEX)
- 55-75% Wilshire 5000 Index Funds (C/S Fund 3:1, VTSMX)
Tax efficient fund placement:
- Put your most tax-inefficient funds in TSP, 401ks, 403bs, Traditional IRAs and similar retirement accounts.
- Put your next most tax-inefficient funds in your Roth(s).
- Put what's left into your taxable account. Try to use only tax-efficient funds in taxable accounts.
List of securities from least to most tax efficient:
- Hi-Yield Bonds
- Taxable Bonds
- TIPS
- REIT Stocks
- Stock trading accounts
- Small-Value stocks
- Small-Cap stocks
- Large Value stocks
- International stocks
- Large Growth Stocks
- Most stock index funds
- Tax-Managed Funds
- EE and I-Bonds
- Tax-Exempt Bonds
Disclaimers: This advice is US-centric. I invest in Vanguard because they have very low Expense Ratios (ER), low or no purchase costs (loads), and have a large number of high quality index funds. Other investment firms such as Fidelity are also very good, and often have comparable funds.
Replies from: SilasBarta, RobinZ↑ comment by SilasBarta · 2010-04-19T15:23:37.481Z · LW(p) · GW(p)
You forgot to add:
- Everyone else is trying to do the same thing, so look at your actual expected real rate of return on all this saving you're planning (negative even before taxes on withdrawals or dividends, over the last 10 years, and with high volatility), and then hang your head and ask why you even bother.
I bring this up because I save a lot and use the tax-advantaged options, but when I look at the numbers, I have to ask, what's the point? After taxes (which will have to go up as the tidal wave of unfunded obligations comes due) and inflation, you barely get anything out of saving. (Yes, there's the no-tax Roth, but you get to invest very little in it.) Plus, if you save it for long enough not to be penalized on withdrawal, you have to put off consumption until waaaaay into the future, when it will do less for you.
It just seems like you'd be better off buying durable assets or investing in marketable job skills, which are more robust against the kinds of things that punish your savings.
I've been exploring the "infinite banking" option: mutual whole life insurance that you can borrow against, which gets a steady, relatively high rate of return, is tax-shielded, and has a long pedigree. Seems a lot better than following the herd into IRAs, which will probably have their promises violated at some point.
Replies from: Rain, mattnewport↑ comment by Rain · 2010-04-19T15:57:16.873Z · LW(p) · GW(p)
Everyone else is trying to do the same thing
I don't believe they are. The vast majority of people I see investing and saving do so in a reactive manner, choosing on a whim, and with a risk horizon of less than a year. They pull out when the market goes down and pile on when hot tips become common ("Real estate can't lose!"). Even the big firms are doing a significant amount of trading and reformulating on a daily basis (evidence: the financial "crisis").
I put my trust in the people who seem to understand what's really going on, like Warren Buffett, who says that a passively managed index fund is the way 99 percent of people should invest.
And if you're ready to say that IRA promises will be broken (which I also consider a good probability), then your "infinite banking" scheme is even less likely to remain stable, as they're backed by private companies rather than the US government.
Replies from: SilasBarta↑ comment by SilasBarta · 2010-04-19T17:12:37.421Z · LW(p) · GW(p)
I don't believe they are. The vast majority of people I see investing and saving do so in a reactive manner, choosing on a whim, and with a risk horizon of less than a year. They pull out when the market goes down and pile on when hot tips become common ("Real estate can't lose!"). Even the big firms are doing a significant amount of trading and reformulating on a daily basis (evidence: the financial "crisis").
Nice stereotype, but I didn't do any of that, and still lost a lot from the time I started investing (mid '06), despite concentrating on low-cost index funds (to the extent permitted by the 401k). As did anyone else who started in the decade before that.
Keep in mind, there's a certain cognitive capture going on here: in the popular mind, long-term saving is equated with using the 401k/Roth options, which require you to invest in a very specific class of assets. Even with all the whimsy you refer to, that's building in an unjustifiably low risk premium that has to change eventually.
And if you're ready to say that IRA promises will be broken (which I also consider a good probability), then your "infinite banking" scheme is even less likely to remain stable, as they're backed by private companies rather than the US government.
Wha? What "backing" are you referring to, and is your comparison apples-to-apples? The government doesn't "back" IRAs; it has just promised that they will have certain tax privileges. The assets in an IRA, which are where it gets its value, are managed by private companies, just as with mutual whole life insurance (which is member-owned, if that matters). Yes, the government could lift their tax privileges too, but this would require breaking an even stronger, longer tradition of not taxing life insurance benefits, which is the (ostensible) purpose of these plans.
ETA:
I put my trust in the people who seem to understand what's really going on, like Warren Buffett, who says that a passively managed index fund is the way 99 percent of people should invest.
Buffett hasn't actually worked out the nuts and bolts of how to get the meaningful diversification you need, starting with much smaller sums than he has, and adhering to account minimums and contribution limits. That advice seems like more of a vague pleasantry than something you can benefit from. And it's not what he does.
Replies from: Rain↑ comment by Rain · 2010-04-19T17:33:51.062Z · LW(p) · GW(p)
I don't like arguing with you, SilasBarta. It feels very combative, and sets off emotional responses in me, even when I think you have a valid point.
As such, I'm tapping out.
Replies from: Morendil, SilasBarta↑ comment by Morendil · 2010-04-19T17:45:59.895Z · LW(p) · GW(p)
In case it may help you to know, I've felt the same on a couple occasions when I engaged Silas in argument.
I've chalked it up to poor skill at positive-sum self-esteem transactions on Silas's part, at least when mediated by text. I don't think it's deliberate, since on some other occasions I concluded there was a genuine desire to help on his part.
↑ comment by SilasBarta · 2010-04-19T17:38:19.409Z · LW(p) · GW(p)
Could you please at least explain what you had in mind by your claim that infinite banking is backed by private companies rather than the US government (as you presumably meant to say IRAs are)? I promise not to reply to that comment.
Replies from: Rain↑ comment by Rain · 2010-04-19T17:47:40.141Z · LW(p) · GW(p)
I was incorrect about the impact the government has on each type of investment, considering that private companies manage both. At the time, I was thinking that the government created IRAs through law, and I didn't think that was the case with insurance, and thus the insurance plans seemed more likely to be subject to change by profit motive. However, I don't know enough about the particular form of life insurance you're suggesting to feel comfortable making further claims.
↑ comment by mattnewport · 2010-04-19T16:27:08.886Z · LW(p) · GW(p)
look at your actual expected real rate of return on all this saving you're planning (negative even before taxes on withdrawals or dividends, over the last 10 years, and with high volatility), and then hang your head and ask why you even bother.
As Rain said, asset allocation is important. The standard advice to put most of your savings in low cost index funds has the merit of simplicity and is not bad advice for most people but it is possible to do better by having a bit more diversification than that implies. Rain suggests a percentage allocated to international index funds which is a good start. US savers with exposure to foreign index funds, emerging market funds, commodities and foreign currencies (either directly or through foreign indexes) would have done better over the last 10 years than savers with all their exposure concentrated in US equities.
Diversification is the only true free lunch in investing, and by selecting an asset allocation that includes assets that are historically uncorrelated or negatively correlated with US equities, it is possible to get equal or better average returns with lower volatility over the long term.
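A minimal sketch of that claim, using the standard two-asset portfolio variance formula with made-up volatilities (illustrative numbers, not market estimates):

```python
# Portfolio volatility for a 50/50 mix of two assets with equal volatility.
# As correlation falls, volatility falls while expected return is unchanged.
import math

def portfolio_vol(w1, vol1, vol2, corr):
    w2 = 1 - w1
    var = (w1 * vol1) ** 2 + (w2 * vol2) ** 2 + 2 * w1 * w2 * vol1 * vol2 * corr
    return math.sqrt(var)

for corr in (1.0, 0.5, 0.0, -0.5):
    print(corr, round(portfolio_vol(0.5, 0.20, 0.20, corr), 3))
# corr 1.0 -> 0.20 (no benefit); corr 0.0 -> 0.141; corr -0.5 -> 0.10
```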
If I were in the US I would share your concerns about future tax increases and raids on currently tax protected retirement accounts but I'd argue that just suggests a broader view of diversification that includes non-traditional savings approaches that are less exposed to such risks.
I think most investors (and particularly US investors) are over-invested in their home countries. Since most people's individual economic circumstances are correlated with the performance of the economy as a whole, this is poor diversification. Similarly, I think it is unwise for people to have significant weighting in sectors or asset classes strongly correlated with the industry they personally earn a living in. Programmers should probably not be over-weighted in tech-related investments, for example, and for most employees it is probably a bad idea to retain significant stock in their own employer. I believe Rain is a government employee, so I would suggest that a lower than normal allocation to government-backed investments would be appropriate in that situation, for example.
Replies from: SilasBarta↑ comment by SilasBarta · 2010-04-19T17:02:18.698Z · LW(p) · GW(p)
Rain suggests a percentage allocated to international index funds which is a good start. US savers with exposure to foreign index funds, emerging market funds, commodities and foreign currencies (either directly or through foreign indexes) would have done better over the last 10 years than savers with all their exposure concentrated in US equities.
Okay, but in a 401k, you're stuck with the choices your employer gives you, which may not have those options. (usually, the choices are moronic and don't even include more than one index fund. Mine has just one, and I reviewed my cousin's and found that it didn't have any. Commodity trades? You jest.)
And if you're talking about a Roth, well, no mutual fund company, not even Vanguard, will let you start out your saving by dividing up that $4000 between five different funds; each one has a minimum investment. You'd have to be investing for a while first, complicating the whole process.
And if you mean taxable accounts, the taxable events incurred gore most of the gains.
If I were in the US I would share your concerns about future tax increases and raids on currently tax protected retirement accounts
The US is not alone in that respect -- other, long-developed countries have it even worse.
but I'd argue that just suggests a broader view of diversification that includes non-traditional savings approaches that are less exposed to such risks.
Right, that's what I was referring to: investing in job skills so you can high-tail it to another country if things become unbearable (and hope they don't seize your assets on the way out).
Replies from: mattnewport↑ comment by mattnewport · 2010-04-19T17:43:56.348Z · LW(p) · GW(p)
Okay, but in a 401k, you're stuck with the choices your employer gives you, which may not have those options. (usually, the choices are moronic and don't even include more than one index fund. Mine has just one, and I reviewed my cousin's and found that it didn't have any. Commodity trades? You jest.)
I'm not in the US so I'm not fully familiar with the retirement options available there. Here in Canada we have what seems to me a pretty good system whereby I can have a tax sheltered brokerage account for retirement savings. In many cases it is hard to argue with the 'free money' of employer matched retirement plans and the tax advantages of particular schemes but I think it is wise to be mindful of all the advantages and disadvantages of a particular scheme (including things like counterparty risk regarding who ultimately backs up your investments) and take that into consideration when weighing options.
And if you're talking about a Roth, well, no mutual funds, not even Vanguard, will let you start out your saving by dividing up that $4000 between five different funds; each one has a minimum limit. You'd have to be investing for a while first, complicating the whole process.
This is definitely an issue when starting out. Transaction costs can make broad diversification prohibitively expensive when your total assets are modest. I see it as something to aim for over time but you are absolutely right to be mindful of these issues. If you have a reasonable choice of mutual funds you can look for ones that are diversified at least internationally if not across asset classes outside of equities and fixed income.
And if you mean taxable accounts, the taxable events incurred gore most of the gains.
This is why I like the options available in Canada. Between self-directed RRSPs and the new TFSA the tax-friendly saving options are pretty good.
The US is not alone in that respect -- other, long-developed countries have it even worse.
Indeed, and this is one reason I'm working towards my Canadian citizenship. It has relatively healthy finances compared to the UK where I grew up. I don't think the necessity for some form of default on the obligations of most developed countries' governments is widely appreciated yet.
Right, that's what I was referring to: investing in job skills so you can high-tail it to another country if things become unbearable (and hope they don't seize your assets on the way out).
This is in line with the broader view of diversification I am advocating. Over the typical individual's expected lifespan this is an important consideration. I think it is a sensible long term goal to diversify in a broad sense so that you maintain options to take your capital (human and otherwise) wherever you can expect the best return on it. Assuming that this will always be the same country you happen to have been born in is short sighted in my opinion.
On a diversification-related note, this proposal from Ian Ayres and Barry Nalebuff is an interesting take on the merits of diversification across one's own lifespan.
↑ comment by RobinZ · 2010-04-19T14:40:32.227Z · LW(p) · GW(p)
- Put your most tax-inefficient funds in TSP, 401ks, 403bs, Traditional IRAs and similar retirement accounts.
- Put your next most tax-inefficient funds in your Roth(s).
Back up: you can make maximum Traditional & Roth IRA contributions in the same year? (I live in the U.S., and have only been putting funds into my traditional IRA.)
Replies from: Rain↑ comment by Rain · 2010-04-19T14:54:44.895Z · LW(p) · GW(p)
No, you cannot max both a Traditional and a Roth; it's either/or. Which one you choose depends on several factors, including length of investment and the income you predict you'll have during disbursement. Traditional is better if you expect low income in retirement, or a shorter time frame until retirement; Roth is better if you expect higher income or a longer time frame.
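A toy comparison may help (a sketch with hypothetical numbers, not tax advice). Since multiplication commutes, the basic math reduces to your tax rate now versus your expected rate at withdrawal:

```python
# Traditional: contribute pre-tax, pay tax at withdrawal.
# Roth: pay tax now, withdraw tax-free. All numbers are hypothetical.
def traditional(pretax, growth, tax_at_withdrawal):
    return pretax * growth * (1 - tax_at_withdrawal)

def roth(pretax, growth, tax_now):
    return pretax * (1 - tax_now) * growth

pretax, growth = 5000, 3.0  # e.g. a contribution that triples by retirement
print(traditional(pretax, growth, 0.15))  # low income in retirement: 12750.0
print(roth(pretax, growth, 0.25))         # taxed now at 25%:         11250.0
```

In this simple model the growth factor cancels out of the comparison, so the time-frame effect Rain mentions has to come through other channels, such as expectations about future tax rates or the fact that a Roth's contribution limit is denominated in after-tax dollars.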
Replies from: RobinZ↑ comment by RobinZ · 2010-04-19T15:01:48.699Z · LW(p) · GW(p)
How do you pick when to switch, then? I assume tax-efficiency, but how tax-efficient should income be before you put it into Roth rather than Traditional? And how do you measure tax-efficiency of income?
I apologize if this is overly off-topic, of course.
Replies from: Rain↑ comment by Rain · 2010-04-19T15:06:46.420Z · LW(p) · GW(p)
It's been a while since I did primary research on the topic; I decided on a Roth for my personal circumstances and dumped most of the other knowledge afterward, so I'll be deferring to references: here are a couple of articles about choosing between them, one of which links to a calculator.
You measure tax efficiency by what percentage of the money you get to keep after it's been taxed in the context of your other income and investments. Putting tax-inefficient funds in tax-efficient formats like an IRA lets you keep a (hopefully much) larger percentage.
And I don't see how it's off topic in an Open Thread.
↑ comment by mattnewport · 2010-04-16T22:09:15.119Z · LW(p) · GW(p)
ETFs offer small investors access to a number of alternative investments to stocks. There are lots of bond/fixed income index ETFs available. You can also use ETFs to diversify out of US stocks (assuming you're a US investor) through international indexes. It is also possible to invest in other asset classes such as commodities and foreign currencies through ETFs but there are a number of caveats and potential hidden costs to many of these so you should do some research before going that route.
comment by Cyan · 2010-04-13T22:23:13.738Z · LW(p) · GW(p)
I wanted to ask the LW commentariat what they thought of the morality of the "false time constraint" PU ploy. I'm hereby prefacing that discussion with a meta-inquiry as to whether that conversation should even be opened at all. (The contentious ongoing discussion I found when I came here to make the query has made me gun-shy.)
Replies from: Jack↑ comment by Jack · 2010-04-13T22:41:20.144Z · LW(p) · GW(p)
How about you ask this again once the present PUA-type discussion (which has already devolved into some flame warring in places) calms down?
Replies from: Cyan↑ comment by Cyan · 2010-04-13T22:50:31.259Z · LW(p) · GW(p)
OK.
Replies from: wedrifid↑ comment by wedrifid · 2010-04-13T23:21:52.343Z · LW(p) · GW(p)
And do so without asking for @#@#$ permission!
I'm hereby prefacing that discussion with a meta-inquiry as to whether that conversation should even be opened at all. (The contentious ongoing discussion I found when I came here to make the query has made me gun-shy.)
Less supplication!
Replies from: Cyan
comment by CannibalSmith · 2010-04-08T12:34:32.062Z · LW(p) · GW(p)
Help me, LessWrong. I want to build a case for
- Information is a terminal value without exception.
- All information is inherently good.
- We must gather and preserve information for its own sake.
These phrasings should mean the exact same thing. Correct me if they don't.
Elaboration: Most people readily agree that most information is good most of the time. I want to see if I can go all the way and build a convincing argument that all information is good all of the time, or as close to it as I can get. That misuse of information is a problem with the misuser and not the information ("guns don't kill people"). Specific cases include: endangered species (DNA is best stored in living organisms), viruses (all three kinds), forbidden books, child pornography and other shocking information, free speech, Archive.org, The Rosetta Project, and research on race.
Please post arguments and counterarguments in their own comments and separately from general discussion comments.
Replies from: Yvain, khafra, FAWS, wedrifid, NancyLebovitz, Rain, Document, Morendil, None, Morendil, Jack, fburnaby, Document, jimrandomh, Amanojack, CannibalSmith↑ comment by Scott Alexander (Yvain) · 2010-04-10T00:04:04.241Z · LW(p) · GW(p)
You probably don't mean trivial information, e.g. the position of every oxygen atom in my room at this exact moment. But if you eliminate trivial information and concentrate only on useful information, you've turned it into a circular argument - all useful information is inherently useful.
Further, saying that we "must" gather and preserve information ignores opportunity costs. Sure, anything might eventually turn out to be useful, but at some point we have to say the resources invested in disk space would be better used somewhere else.
It sounds more like you're trying to argue that information can never be evil, but you can't even state that meaningfully without making a similar error. Certainly giving information to certain people can be evil (for example, giving Hitler the information on how to make a nuclear bomb).
See this discussion for why I think calling something like "information" good is a bad idea.
↑ comment by khafra · 2010-04-08T12:53:52.924Z · LW(p) · GW(p)
One thing you may want to address is what you mean by "gather and preserve information." The maximum amount of information possible to know about the universe is presently stored and encoded as the universe. The information that's useful to us is reductions and simplifications of this information, which can only be stored by destroying some of the original set of information.
Replies from: Document, CannibalSmith↑ comment by Document · 2010-04-08T21:41:11.385Z · LW(p) · GW(p)
In other words, "information" in this case might be an unnatural category.
Replies from: khafra↑ comment by khafra · 2010-04-08T22:47:42.547Z · LW(p) · GW(p)
Yes. CannibalSmith's usage sounded to me somewhere indeterminately in between the information theoretic definition and the common meaning which is indistinct but similar to "knowledge." My request for clarification assumes the strictly information theoretic definition isn't quite what he wanted.
↑ comment by CannibalSmith · 2010-04-08T16:21:00.706Z · LW(p) · GW(p)
My mom complains I take things too literally. Now I know what she means. :)
Seriously though, I mean readable, usable, computable information: the kind which can conceivably be turned into knowledge. I could also say we want to lossily compress the Universe, like an mp3, with as good a ratio as possible.
↑ comment by FAWS · 2010-04-08T13:32:37.594Z · LW(p) · GW(p)
Do you mean that information already is a terminal value for (most) humans? Arguing that something should be a terminal value makes only a limited amount of sense, terminal values usually don't need reasons, though they have (evolutionary, cultural etc.) causes.
Replies from: CannibalSmith↑ comment by CannibalSmith · 2010-04-08T16:10:19.305Z · LW(p) · GW(p)
Neither. I guess I shouldn't have used the term "terminal value". See the elaboration - how do you think I should generalize and summarize it?
Replies from: Jack
↑ comment by wedrifid · 2010-04-08T15:27:48.149Z · LW(p) · GW(p)
I don't make arguments for terminal values. I assert them.
Arguments that make any (epistemic) sense in this instance would be references to evidence about something that represents the value system (e.g. neurological, behavioural, or introspective observations about the relevant brain).
Replies from: CannibalSmith↑ comment by CannibalSmith · 2010-04-08T16:28:52.985Z · LW(p) · GW(p)
Looks like I've been using "terminal values" incorrectly.
↑ comment by NancyLebovitz · 2010-04-08T14:53:55.575Z · LW(p) · GW(p)
Information takes work to produce, to filter, and to receive, and more work to evaluate it and (if genuinely new) to understand it. There's a strong case that information isn't a terminal value because it's not the only thing people need to do with their time.
You wouldn't want your inbox filled with all the things anyone believes might be information for you.
Another case of limiting information: rules about what juries are allowed to know before they come to a verdict.
There might be an important difference between forbidding censorship vs. having information as a terminal value.
↑ comment by Rain · 2010-04-08T14:50:10.270Z · LW(p) · GW(p)
I very much doubt that we have enough understanding of human values / preferences / utility functions to say that anything makes the list, in any capacity, without exception.
In this case, I think that information is useful as an instrumental value, but not as a terminal value in and of itself. It may lie on the path to terminal values in enough instances (the vast majority), and be such a major part of realizing those values, that a resource-constrained reasoning agent might treat it like a terminal value, just to save effort.
I look at it like a genie bottle: nearly anything you want could be satisfied with it, or would be made much easier with its use, but the genie isn't what you really want.
Replies from: CannibalSmith↑ comment by CannibalSmith · 2010-04-08T16:24:33.730Z · LW(p) · GW(p)
Well, all agents are resource-constrained. But I get what you mean.
↑ comment by Document · 2010-04-08T20:56:15.320Z · LW(p) · GW(p)
- Storing information has an inherent cost in resources, and some information might be so meaningless that no matter how abundant those resources are, there will always be a better or more interesting use for them. I'm not sure if that's true.
- "Information" might be an unnatural category in the way you're using it. Why are the bits encoded in an animal's DNA worth more than the bits encoded in the structure of a particular rock? Doesn't taking any action erase some information about the state the world was in before that action?
- EY might call bad any information that prevents pleasant surprise.
↑ comment by Morendil · 2010-04-08T17:19:03.387Z · LW(p) · GW(p)
A straightforward counter-argument is that forgetting, i.e. erasing information, is a valuable habit to acquire; some "information" is of little value and we would burden our minds uselessly, perhaps to the point of paralysis, by hanging on to every trivial detail.
If that holds for an individual mind, it could perhaps hold for a society's collective records; perhaps not all of YouTube as it exists now needs to be preserved for an indefinite future, and a portion of it may be safely forgotten.
Replies from: Document, NancyLebovitz↑ comment by Document · 2010-04-08T21:52:14.171Z · LW(p) · GW(p)
That's a good point, but rather than Youtube I'd suggest something like the exact down-to-the-molecule geography and internal structure of Mercury; or better yet, the output of a random number generator that you accidentally left running for a year.
For the record, the wording I came up with originally was "Storing information has an inherent cost in resources, and some information might be so meaningless that no matter how abundant those resources are (even if they seem to be unlimited), there will always be a better or more interesting use for them.".
(Edit 4/11: I was thinking of trying to come up with something like torture versus scrambling 3^^^3 bits of useless information, but that probably wouldn't be a good line of argument anyway.)
↑ comment by NancyLebovitz · 2010-04-09T07:49:09.188Z · LW(p) · GW(p)
Forgetting is crucial for my ability to do dual n-back.
Replies from: gwern↑ comment by gwern · 2010-04-19T14:57:21.448Z · LW(p) · GW(p)
That's a fact about the human mind, though; DNB is designed to stress fuzzy human WM's weaknesses. DNB is trivially doable by a computer (look at all the implementations).
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-04-19T15:12:27.103Z · LW(p) · GW(p)
Computers have memory limits. They're just much higher than human limits.
WM?
Replies from: gwern, cupholder↑ comment by gwern · 2010-04-19T16:28:28.979Z · LW(p) · GW(p)
They're just much higher than human limits.
It's not just quantity; it's quality. Human WM is qualitatively different from RAM.
Yes, you could invent a 'dual 4-gigabyte back', and the computer would do just as well. Bits don't change in RAM. If it needs to compare 4 billion rounds back, it will compare as easily as if it were 1 round back. Computer 'attention' doesn't drift, while a human can still make mistakes on D1B. And so on.
You could cripple a computer to make mistakes like a human, but the word 'cripple' is exactly what's going on and demonstrates that the errors and problems of human WM have nothing interesting to say about the theoretical value (if any) of forgetting.
You only need to forget in DNB because you have so little WM. If you could remember 1000 items in your WM, what value would forgetting have on D10B? It would have none; forgetting is a hack, a workaround your limits, an optimization akin to Y2K.
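To make "trivially doable" concrete, here is a sketch of an N-back checker with perfect recall; it's a single-stream version for brevity (a dual version would simply keep two such buffers):

```python
# N-back is trivial when memory neither decays nor drifts: perfect recall
# is just a fixed-length buffer.
from collections import deque

def n_back_matches(stream, n):
    """Yield True for each item that matches the one n steps earlier."""
    buffer = deque(maxlen=n)         # bits don't fade in RAM
    for item in stream:
        if len(buffer) == n:
            yield item == buffer[0]  # compare against exactly n rounds back
        buffer.append(item)

print(list(n_back_matches("ABABCB", 2)))  # [True, True, False, True]
```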
↑ comment by [deleted] · 2010-04-09T04:26:44.400Z · LW(p) · GW(p)
Reading what you have said in this thread, I was confident that you were committing the fallacy of rationalization. Your statement is simple, and it seems like reality can be made to fit it, so you do so. But your name looked familiar, so I clicked on it and found that your karma is higher than mine, which seems to be strong evidence that you would not commit such a fallacy, and yet you use phrases as revealing as "I want to build a case for . . .".
Your words say you are rationalizing; your karma says you are not. I am confused.
Replies from: Morendil↑ comment by Morendil · 2010-04-09T07:08:59.151Z · LW(p) · GW(p)
Argument screens off karma. ;)
I agree with you about "I want to build a case", the phrasing is unfortunate. However I note that the OP asked for arguments on both "sides".
Replies from: None↑ comment by [deleted] · 2010-04-10T23:09:38.222Z · LW(p) · GW(p)
The OP asked for a specific thing to be done with arguments on both sides. "Please place garbage in the bin in the corner" doesn't mean I want the bin to contain more garbage.
Or maybe you're not referring to "Please post arguments and . . ."
↑ comment by Morendil · 2010-04-08T17:09:25.162Z · LW(p) · GW(p)
May I suggest adding to your list of test cases the blueprints for a non-Friendly AI?
By that I mean any program which is expected to be a General Intelligence but which isn't formally or rigorously proven to be Friendly. (I still haven't come to definite conclusions about the plausibility of an AI intelligence explosion, therefore about the urgency of FAI research and that of banning or discouraging the dissemination of info leading to non-F, but given this blog's history it feels as if this test case should definitely be on the list.)
↑ comment by Jack · 2010-04-08T17:00:52.021Z · LW(p) · GW(p)
Some counter-arguments
child pornography
What exactly is the pro-information position here? Because I'm against this being produced, and I agree with bans on its distribution and possession as a way of hurting its purveyors. The way such laws are enforced, at least in America, is sometimes disgraceful. But I don't think it is an inherently bad policy.
viruses (all three kinds),
Biological, computer and memetic? The last one looks like an open and shut case to me. If learning information (being infected by a meme) can damage me then I should think that information should be destroyed. Maybe we want some record around so that we can identify them to protect people in the future? Maybe this stuff is too speculative to flesh out.
research on race
For the IQ issue: Here is my read of the status quo: most people believe the science says there is no innate racial difference in IQ. This is probably what it says, but if we really wanted to know for sure we'd need to gather more data. If we gathered more data there are three possible outcomes: (1) We find out conclusively there is no innate IQ difference. Most people's beliefs do not change. An impassioned minority continues to assert that there is an IQ difference and questions the science, perpetuating the controversy. This is socially the status quo, but some people paying attention have actually learned something. (2) We don't learn anything conclusive one way or the other. The status quo continues. (3) We learn there are innate racial differences in IQ. All hell breaks loose.
Replies from: Strange7↑ comment by Strange7 · 2010-04-08T17:10:35.970Z · LW(p) · GW(p)
What exactly is the pro-information position here? Because I'm against this being produced, and I agree with bans on its distribution and possession as a way of hurting its purveyors. The way such laws are enforced, at least in America, is sometimes disgraceful. But I don't think it is an inherently bad policy.
If the purveyors are revealed to the public, I think we'll find better ways to stop them, instead of creating a black-market environment which makes their product more valuable. There's also the non-negligible side benefit of turning fewer innocent people into lifelong pariahs.
Replies from: Jack↑ comment by Jack · 2010-04-08T17:30:10.173Z · LW(p) · GW(p)
If the purveyors are revealed to the public,
Well yes, that would be great information. But I don't see how letting people own and distribute child porn is going to reveal that information. The market is always going to be black in some respect if it is illegal to produce it. The reason I asked what the position was is that it isn't obvious to me that producing child pornography isn't gathering information.
instead of creating a black-market environment which makes their product more valuable.
If you legalize possession but not production you've lowered the cost of consuming (increased the demand) while not affecting the supply. This will drive up prices.
There's also the non-negligible side benefit of turning fewer innocent people into lifelong pariahs.
Just adjust the laws so that someone who decides to download a huge pornfile that happens to include a few illegal photos doesn't get convicted...
Replies from: Strange7↑ comment by Strange7 · 2010-04-08T17:49:09.694Z · LW(p) · GW(p)
If you legalize possession but not production you've lowered the cost of consuming (increased the demand) while not affecting the supply.
There is this thing called 'peer-to-peer file sharing.' If possession is legal, any possessor can also be a supplier by sharing what they've already got, but the original producers can't claim copyright without incriminating themselves. That drastically increases the supply, driving the price down close to zero.
Replies from: Jack↑ comment by Jack · 2010-04-08T18:36:16.355Z · LW(p) · GW(p)
Close to zero? Really? There is already negligible enforcement of copyright, and for a number of years there was zero enforcement of copyright. Media industries, porn and otherwise, have been doing fine. If necessary, the industry will start only streaming video and uploading decoy files. Not to mention that groups of people who just produce it for each other with no money changing hands will be able to operate unhindered. I'm not an expert, but I imagine it is drastically more difficult to put someone away for distribution than production, and that's how the industry would end up working: shielding the producers while legal distributors buy and sell.
Replies from: Strange7↑ comment by Strange7 · 2010-04-08T18:51:21.238Z · LW(p) · GW(p)
If producers work closely with specific distributors, it would be possible to get the distributors for 'aiding and abetting' or RICO sorts of things. Customers would also be more willing to cooperate with law enforcement if they knew they wouldn't be punished for doing so, and limited enforcement resources could be concentrated on the actual producers instead of randomly harassing anyone who happens to have it on their HD.
Groups of people who produce it for each other with no money involved would be hard to track down under any circumstances; I don't see how decriminalizing possession makes that worse.
Replies from: Jack↑ comment by Jack · 2010-04-08T19:14:26.645Z · LW(p) · GW(p)
If producers work closely with specific distributors, it would be possible to get the distributors for 'aiding and abetting' or RICO sorts of things.
A lot harder to prove than distribution and possession.
Customers would also be more willing to cooperate with law enforcement if they knew they wouldn't be punished for doing so
Well you've just taken away law enforcement's entire bargaining position. Right now customers have to cooperate under threat of prosecution.
and limited enforcement resources could be concentrated on the actual producers instead of randomly harassing anyone who happens to have it on their HD.
What we want is for law enforcement to concentrate their resources on the producers without taking away the tools they need to do so effectively. The key is structuring the law and incentives for law enforcement so that they have to go after the producers and not guys who accidentally download it. Maybe force prosecutors to demonstrate the possessor had intentionally downloaded it or has viewed it multiple times. Or offer institutional incentives for going after the big fish.
Groups of people who produce it for each other with no money involved would be hard to track down under any circumstances; I don't see how decriminalizing possession makes that worse.
Well again, it is a lot easier to prove possession and distribution than it is production.
Replies from: Strange7↑ comment by Strange7 · 2010-04-08T19:36:21.048Z · LW(p) · GW(p)
Well you've just taken away law enforcement's entire bargaining position. Right now customers have to cooperate under threat of prosecution.
So most of them avoid law enforcement entirely for fear of getting 'v&' instead of providing tips out of concern for the welfare of the children. I mean, once you've cooperated, what's law enforcement's incentive not to prosecute you?
Justice is not necessarily best served by making the cop's job easier. So long as law enforcement is rewarded by the conviction, they'll go for low-hanging fruit: that is, the people who aren't protecting themselves because they think they're not doing anything wrong. Broad laws that anyone could violate unwittingly, and which the police enforce at their own discretion? That's not a necessary tool for some higher purpose, it's overwhelming power waiting to be abused.
Replies from: Jack↑ comment by Jack · 2010-04-08T20:17:11.518Z · LW(p) · GW(p)
So most of them avoid law enforcement entirely for fear of getting 'v&' instead of providing tips out of concern for the welfare of the children. I mean, once you've cooperated, what's law enforcement's incentive not to prosecute you?
You know what prosecutorial immunity is, right? Also, I don't know why you think pedophiles are itching to come forward with tips on their porn suppliers. If they were there are always ways to make anonymous tips to the police.
Justice is not necessarily best served by making the cop's job easier. So long as law enforcement is rewarded by the conviction, they'll go for low-hanging fruit: that is, the people who aren't protecting themselves because they think they're not doing anything wrong.
For the third time: make prosecuting the low-hanging fruit more difficult and lower the incentives to do so. That is my position. You don't have to handcuff law enforcement's investigation of the producers to do this.
Edit: One other way to do this that I haven't mentioned: legalize possession of a small amount of child pornography, or make small amounts a misdemeanor.
↑ comment by fburnaby · 2010-04-08T15:05:49.526Z · LW(p) · GW(p)
I'll attempt a counter-example. It's not definitive, but at least makes me question your notion:
Does a spy want to know the purpose of his mission? What if (s)he gets caught? Is it easier for them to get through an interrogation not knowing the answers to the questions?
↑ comment by Document · 2010-04-08T21:31:46.413Z · LW(p) · GW(p)
Please post arguments and counterarguments in their own comments and separately from general discussion comments.
At first I thought you were saying that you wanted the comments to be flat rather than threaded; I figured that that was because you wanted inbox notification of each new reply. Then I saw you replying to replies yourself, so I was less sure. I take it you actually mean that (for example) I shouldn't include remarks on the main topic in this comment, or vice versa?
↑ comment by jimrandomh · 2010-04-08T14:00:50.641Z · LW(p) · GW(p)
Information is a terminal value without exception
What would an unfriendly superintelligence that wanted to hack your brain say to you? Does knowing the answer to that have positive value in your utility function?
That said, I do think information is a terminal value, at least in my utility function; but I think an exception must be made for mind-damaging truths, if such truths exist.
Replies from: FAWS↑ comment by FAWS · 2010-04-08T14:18:32.230Z · LW(p) · GW(p)
I don't think the idea of a conditional terminal value is very useful. If information were a terminal value for me, I'd want to know what the unfriendly superintelligence would say; but unless it were my only terminal value, and since I don't think the result would have any influence on other information gathering, there would be other considerations speaking against learning that particular piece of information, and they would probably outweigh it. No need to make any exceptions for mind-damaging truths, because to the extent that mind damage is a bad thing according to my terminal values, it will already be accounted for anyway.
↑ comment by Amanojack · 2010-04-08T13:24:13.930Z · LW(p) · GW(p)
First of all, I recommend clearing away the moral language (value, good, and must) unless you want certain perennial moral controversies to muddy the waters.
Example phrasings of the case you may be trying to make:
Bayesian predictions made from (100% certain) information set {N} ∪ {M} are usually more accurate than those made from {N} alone
I suppose this is true.
Bayesian predictions made from (100% certain) information set {N} ∪ {M} are always more accurate than those made from {N} alone
If you've ever done a jigsaw puzzle, you can probably think of a counterexample to this.
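For what it's worth, the "usually" version can be checked by simulation. Below is a minimal sketch (the two-composition urn and the prior weights are illustrative assumptions of mine, not anything specified in this thread): under a proper scoring rule, forecasts conditioned on an extra observation score better on average, even though they can score worse on any single trial.

```python
import math
import random

# Illustrative assumptions (not from the thread): the urn is either
# 20% red or 80% red, with prior probabilities 2/3 and 1/3.
THETAS = [0.2, 0.8]
PRIOR = [2 / 3, 1 / 3]

def update(dist, saw_red):
    """Bayes update of a distribution over urn compositions on one draw."""
    joint = [p * (t if saw_red else 1 - t) for p, t in zip(dist, THETAS)]
    z = sum(joint)
    return [j / z for j in joint]

def predictive(dist):
    """P(next draw is red) under a distribution over urn compositions."""
    return sum(p * t for p, t in zip(dist, THETAS))

def log_score(pred_red, outcome_red):
    """Log score of a forecast against the realized draw (higher is better)."""
    return math.log(pred_red if outcome_red else 1 - pred_red)

random.seed(0)
trials = 100_000
total_ignore = total_update = 0.0
for _ in range(trials):
    theta = random.choices(THETAS, weights=PRIOR)[0]  # nature picks an urn
    first_red = random.random() < theta               # the extra observation
    next_red = random.random() < theta                # the draw being predicted
    total_ignore += log_score(predictive(PRIOR), next_red)
    total_update += log_score(predictive(update(PRIOR, first_red)), next_red)

print("average log score, ignoring the first draw:", total_ignore / trials)
print("average log score, updating on it:         ", total_update / trials)
# The second average comes out higher: the extra observation helps in
# expectation, even though it worsens the forecast on some individual trials.
```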
Replies from: Nick_Tarleton, None↑ comment by Nick_Tarleton · 2010-04-08T18:35:06.935Z · LW(p) · GW(p)
Bayesian predictions made from (100% certain) information set {N} ∪ {M} are always more accurate than those made from {N} alone
If you've ever done a jigsaw puzzle, you can probably think of a counterexample to this.
You've never done a jigsaw puzzle using optimal Bayesian methods.
Replies from: wedrifid↑ comment by [deleted] · 2010-04-09T04:12:52.413Z · LW(p) · GW(p)
Here's a counterexample. There is an urn filled with lots of balls, each colored either red or blue. You think there's a 40% chance that the next ball you pull out will be red. You pull out a ball, and it's red; you put it back in and shake the urn. Now you think there's a 60% chance that the next ball you pull out will be red, and you announce this fact and bet on it. You pull out one more ball, and it's blue. If you hadn't seen that piece of evidence, your prediction would have been more accurate.
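The 40%-to-60% jump here is consistent with a simple prior. For instance (this prior is my own choice, picked to fit the numbers; the comment doesn't specify one), suppose the urn is either 20% red or 80% red, with prior weights 2/3 and 1/3:

```python
# One prior that reproduces the 40% -> 60% numbers (my assumption,
# not the commenter's): the urn is 20% red or 80% red, with prior
# weights 2/3 and 1/3.
thetas = [0.2, 0.8]
prior = [2 / 3, 1 / 3]

# Prior predictive probability that the next draw is red.
p_red = sum(p * t for p, t in zip(prior, thetas))
print(p_red)  # ~0.4, the 40% prediction before any draws

# Bayes update after seeing one red draw (with replacement).
posterior = [p * t / p_red for p, t in zip(prior, thetas)]

# Posterior predictive probability of red on the next draw.
p_red_next = sum(p * t for p, t in zip(posterior, thetas))
print(p_red_next)  # ~0.6, the 60% prediction after one red draw
```

On the trial described, the updated 60% forecast really does score worse on the single blue draw than the unconditioned 40% forecast would have: a particular piece of evidence can make a particular prediction less accurate.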
↑ comment by CannibalSmith · 2010-04-08T12:42:35.281Z · LW(p) · GW(p)
We cannot know what information we might need in the future; therefore, we must gather as much as we can and preserve all of it. Especially since much (most?) of it cannot be recreated on demand.
Replies from: Matt_Simpson↑ comment by Matt_Simpson · 2010-04-08T14:38:29.210Z · LW(p) · GW(p)
That's not an argument for information as a terminal value since it depends on the consequences of information, but it's a decent argument for gathering and preserving information.
Replies from: CannibalSmith↑ comment by CannibalSmith · 2010-04-08T16:26:40.009Z · LW(p) · GW(p)
If that distinction exists, my three formulations are not identical. Yes?
Replies from: Document, Document, Matt_Simpson↑ comment by Document · 2010-04-11T04:42:56.211Z · LW(p) · GW(p)
Not sure. "Inherently good" could mean "good for its own sake, not good for a purpose", but it seems like it could also mean "by its very nature, it's (instrumentally) good". And the fact that you said "gather or preserve" makes me want to come up with a value system that only cares about gathering or only cares about preserving.
I'm not sure one couldn't find similarly sized semantic holes in anything, but there they are regardless.
↑ comment by Document · 2010-04-11T04:35:04.608Z · LW(p) · GW(p)
I think so. "All information is inherently good" could mean "inherently instrumentally good", and the fact that you said "gather or preserve" makes me want to come up with a value system that only cares about gathering or only cares about preserving.
↑ comment by Matt_Simpson · 2010-04-08T16:34:43.429Z · LW(p) · GW(p)
Your 3 formulations should be identical. Here's your argument:
We cannot know what information we might need in the future, therefore we must gather as much as we can and preserve all of it
My first thought when I read this is: why are we gathering information? The answer? Because we may need it in the future. What will we need it for? Presumably to attain some other (terminal) end, since if information were a terminal end the argument wouldn't be "we may need it in the future," it would be "we need it."
Maybe I am just misunderstanding you?