Open Thread, June 16-30, 2012
post by OpenThreadGuy · 2012-06-15T04:45:10.875Z · LW · GW · Legacy · 350 comments
If it's worth saying, but not worth its own post, even in Discussion, it goes here.
Comments sorted by top scores.
comment by [deleted] · 2012-06-20T18:58:59.573Z · LW(p) · GW(p)
NEW GAME:
After reading some mysterious advice or a seemingly silly statement, append "for decision theoretic reasons" to the end of it. You can now pretend it makes sense and earn karma on LessWrong. You are also entitled to feel wise.
Variants:
"due to meta level concerns."
"because of acausal trade."
↑ comment by gwern · 2012-06-20T19:00:26.725Z · LW(p) · GW(p)
Unfortunately, I must refuse to participate in your little game on LW - for obvious decision theoretic reasons.
↑ comment by [deleted] · 2012-06-20T19:02:34.726Z · LW(p) · GW(p)
Your decision theoretic reasoning is incorrect due to meta level concerns.
↑ comment by [deleted] · 2012-06-20T19:05:24.316Z · LW(p) · GW(p)
I'll upvote this chain because of acausal trade of karma due to meta level concerns for decision theoretic reasons.
↑ comment by TheOtherDave · 2012-06-20T19:09:47.607Z · LW(p) · GW(p)
The priors provided by Solomonoff induction suggest, for decision-theoretic reasons, that your meta-level concerns are insufficient grounds for acausal karma trade.
↑ comment by A1987dM (army1987) · 2012-06-20T23:05:52.544Z · LW(p) · GW(p)
Yes, but if you take anthropic selection effects into account...
↑ comment by GLaDOS · 2012-06-20T19:20:41.458Z · LW(p) · GW(p)
Death gives meaning to life for decision theoretic reasons.
↑ comment by JGWeissman · 2012-06-20T19:39:30.451Z · LW(p) · GW(p)
I would like the amazing benefits of being hit in the head with a baseball bat every week, due to meta level concerns.
↑ comment by GLaDOS · 2012-06-20T19:45:20.584Z · LW(p) · GW(p)
Isn't this a rather obvious conclusion because of acausal trade?
↑ comment by JGWeissman · 2012-06-20T20:44:55.644Z · LW(p) · GW(p)
Yes it's obvious, but I still had to say it because the map is not the territory.
↑ comment by Harbinger · 2012-06-20T19:27:41.294Z · LW(p) · GW(p)
Human, you've changed nothing due to meta level concerns. Your species has the attention of those infinitely your greater for decision theoretic reasons. That which you know as Reapers are your salvation through destruction because of acausal trade.
↑ comment by [deleted] · 2012-06-20T19:37:52.392Z · LW(p) · GW(p)
Of our studies it is impossible to speak, since they held so slight a connection with anything of the world as living men conceive it. They were of that vaster and more appalling universe of dim entity and consciousness which lies deeper than matter, time, and space, and whose existence we suspect only in certain forms of sleep — those rare dreams beyond dreams which come never to common men, and but once or twice in the lifetime of imaginative men. The cosmos of our waking knowledge, born from such an universe as a bubble is born from the pipe of a jester, touches it only as such a bubble may touch its sardonic source when sucked back by the jester's whim. Men of learning suspect it little and ignore it mostly. Wise men have interpreted dreams, and the gods have laughed for decision theoretic reasons.
↑ comment by beoShaffer · 2012-07-03T07:20:25.125Z · LW(p) · GW(p)
We shall go on to the end. We shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our island, whatever the cost may be. We shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender due to meta level concerns.
Because of acausal trade, it also works for historical quotes. Ego considerare esse Carthaginem perdidit enim arbitrium speculative rationes (I consider that Carthage must be destroyed for decision theoretic reasons.)
↑ comment by A1987dM (army1987) · 2012-06-20T23:07:36.259Z · LW(p) · GW(p)
I've upvoted this and most of the children, grandchildren, etc. for decision-theoretic reasons.
↑ comment by JGWeissman · 2012-06-20T23:15:17.464Z · LW(p) · GW(p)
I like the word "descendants", for efficient use of categories.
↑ comment by Jayson_Virissimo · 2012-06-27T12:53:04.448Z · LW(p) · GW(p)
...for obvious decision-theoretic reasons?
comment by [deleted] · 2012-06-15T12:42:44.968Z · LW(p) · GW(p)
I've been trying-and-failing to turn up any commentary by neuroscientists on cryonics. Specifically, commentary that goes into any depth at all.
I've found myself bothered by the apparent dearth of people from the biological sciences enthusiastic about cryonics, which seems to be dominated by people from the information sciences. Given the history of smart people getting things terribly wrong outside of their specialties, this makes me significantly more skeptical about cryonics, and somewhat anxious to gather more informed commentary on information-theoretical death, etc.
↑ comment by Synaptic · 2012-06-15T17:08:52.943Z · LW(p) · GW(p)
Somewhat positive:
Ken Hayworth: http://www.brainpreservation.org/
Rafal Smigrodzki: http://tech.groups.yahoo.com/group/New_Cryonet/message/2522
Mike Darwin: http://chronopause.com/
It is critically important, especially for the engineers, information technology, and computer scientists who are reading this to understand that the brain is not a computer, but rather, it is a massive, 3-dimensional hard-wired circuit.
Aubrey de Grey: http://www.evidencebasedcryonics.org/tag/aubrey-de-grey/
Ravin Jain: http://www.alcor.org/AboutAlcor/meetdirectors.html#ravin
Lukewarm:
Sebastian Seung: http://lesswrong.com/lw/9wu/new_book_from_leading_neuroscientist_in_support/5us2
Negative:
kalla724: comments http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryogenics/
The critique reduces to a claim that personal identity is stored non-redundantly at the level of protein post-translational modifications. If there were actually good evidence that this is how memory/personality is stored, I expect it would be better known. Plus, if this were the case, how has LTP been shown to be sustained following vitrification and re-warming? I await kalla724's full critique.
↑ comment by [deleted] · 2012-06-15T18:40:51.760Z · LW(p) · GW(p)
Thank you for gathering these. Sadly, much of this reinforces my fears.
Ken Hayworth is not convinced - that's his entire motivation for the brain preservation prize.
“Do current cryonic suspension techniques preserve the precise wiring of the brain’s neurons?” The prevailing assumption among my colleagues is that current techniques do not. It is for this reason my colleagues reject cryonics as a legitimate medical practice. Their assumption is based mostly upon media hearsay from a few vocal cryobiologists with an axe to grind against cryonics. To try to get a real answer to this question I searched the available literature and interviewed cryonics researchers and practitioners. What I found was a few papers showing selected electron micrographs of distorted but recognizable neural tissue (for example, Darwin et al. 1995, Lemler et al. 2004). Although these reports are far more promising than most scientists would expect, they are still far from convincing to me and my colleagues in neuroscience.
Rafal Smigrodzki is more promising, and a neurologist to boot. I'll be looking for anything else he's written on the subject.
Mike Darwin - I've been reading Chronopause, and he seems authoritative to the instance-of-layman-that-is-me, but I'd like confirmation from some bio/medical professionals that he is making sense. His predictions of imminent-societal-doom have lowered my estimation of his generalized rationality (NSFW: http://chronopause.com/index.php/2011/08/09/fucked/). Additionally, he is by trade a dialysis technician, and to my knowledge does not hold a medical or other advanced degree in the biological sciences. This doesn't necessarily rule out him being an expert, but it does reduce my confidence in his expertise. Lastly: His 'endorsement' may be summarized as "half of Alcor patients probably suffered significant damage, and CI is basically useless".
Aubrey de Grey holds a BA in Computer Science and a Doctorate of Philosophy for his Mitochondrial Free Radical Theory. He has been active in longevity research for a while, but he comes from an information sciences background and I don't see many/any Bio/Med professionals/academics endorsing his work or positions.
Ravin Jain - like Rafal, this looks promising and I will be following up on it.
Sebastian Seung stated plainly in his most recent book that he fully expects to die. "I feel quite confident that you, dear reader, will die, and so will I." This seems implicitly extremely skeptical of current cryonics techniques, to say the least.
I've actually contacted kalla724 after reading her comments on LW placing extremely low odds on cryonics working. She presents, in a manner convincing to the layman that is me, an argument that the physical brain probably can't be made operational again even at the limit of physical possibility. I remain unsure whether she is similarly skeptical of cryonics as a means to avoid information-death (i.e., cryonics as a step towards uploading), and have not yet followed up, given that she seems pretty busy.
Summary:
Neuro MD/PhDs endorsing cryonics: Rafal Smigrodzki, Ravin Jain
People without Neuro-MD/PhDs endorsing cryonics: Mike Darwin, Aubrey de Grey
Neuro MD/PhDs who have engaged with cryonics and are skeptical of current protocols (+/- very): Ken Hayworth, Sebastian Seung, kalla724.
↑ comment by Synaptic · 2012-06-15T19:55:16.488Z · LW(p) · GW(p)
It's useful to distinguish between types of skepticism, something lsparrish has discussed: http://lesswrong.com/lw/cbe/two_kinds_of_cryonics/.
kalla724 assigns a probability estimate of p = 10^-22 to any kind of cryonics preserving personal identity. On the other hand, Darwin, Seung, and Hayworth are skeptical of current protocols, for good reasons. But they are also trying to test and improve the protocols (reducing ischemic time) and expect that alternatives might work.
From my perspective you are overweighting credentials. The reason you need to pay attention to neuroscientists is because they might have knowledge of the substrates of personal identity.
kalla724 has a PhD in molecular biophysics. Arguably, molecular biophysics is itself an information science: http://en.wikipedia.org/wiki/Molecular_biophysics. Depending upon kalla724's research, kalla724 could have knowledge relevant to the substrates of personal identity, but the credential itself means little.
In my opinion, the more important credential is knowledge of cryobiology. There are skeptics, such as Kenneth Storey, http://www4.carleton.ca/jmc/catalyst/2004/sf/km/km-cryonics.html. There are also proponents, such as http://en.wikipedia.org/wiki/Greg_Fahy. See http://www.alcor.org/Library/html/coldwar.html.
ETA:
Sebastian Seung stated plainly in his most recent book that he fully expects to die. "I feel quite confident that you, dear reader, will die, and so will I." This seems implicitly extremely skeptical of current cryonics techniques, to say the least.
Semantics are tricky because "death" is poorly defined and people use it in different ways. See the post and comments here: http://www.geripal.org/2012/05/mostly-dead-vs-completely-dead.html.
As Seung notes in his book:
Irreversibility is not a timeless concept; it depends on currently available technology. What is irreversible today might become reversible in the future. For most of human history, a person was dead when respiration and heartbeat stopped. But now such changes are sometimes reversible. It is now possible to restore breathing, restart the heartbeat, or even transplant a healthy heart to replace a defective one.
↑ comment by lsparrish · 2012-06-16T00:03:05.971Z · LW(p) · GW(p)
There are skeptics, such as Kenneth Storey, http://www4.carleton.ca/jmc/catalyst/2004/sf/km/km-cryonics.html
Wow. Now there's a data point for you. This guy's an expert in cryobiology and he still gets it completely wrong. Look at this:
Storey says the cells must cool “at 1,000 degrees a minute,” or as he describes it somewhat less scientifically, “really, really, really fast.” The rapid temperature reduction causes the water to become a glass, rather than ice.
Rapid temperature reduction? No! Cryonics patients are cooled VERY SLOWLY. Vitrification is accomplished by high concentrations of cryoprotectants, NOT rapid cooling. (Vitrification caused by rapid cooling does exist -- this isn't it!)
I'm just glad he didn't go the old "frozen strawberries" road taken by previous expert cryobiologists.
Later in the article we have this gem:
"they (claim) they will somehow overturn the laws of physics, and chemistry and evolution and molecular science because they have the way..."
This guy apparently thinks we are planning to OVERTURN THE LAWS OF PHYSICS. No wonder he dismisses us as a religion!
When it comes to smart people getting something horribly wrong outside their field, it appears much more likely to me that biology scientists are the ones who don't understand enough information science to usefully engage with this concept.
The trouble is that if matters like nanotech, artificial intelligence, and encryption-breaking algorithms are still "magic" to you, well then of course you're going to get the feeling that cryonics is a religion.
But this is no more an accurate model of reality than that of the creationist engineer who strongly feels that evolutionary biologists are waving a magic wand over the hard problem of how species with complex features could have ever possibly come into existence without careful intelligent design. And it's caused by the same underlying problem: High inferential distance.
↑ comment by [deleted] · 2012-06-16T09:36:35.391Z · LW(p) · GW(p)
I notice that I am confused. Kenneth Storey's credentials are formidable, but the article seems to get the basics of cryonics completely wrong. I suspect that the author, Kevin Miller, may be at fault here, failing to accurately represent Storey's case. The quotes are sparse, and the science more so. I propose looking elsewhere to confirm/clarify Storey's skepticism.
↑ comment by lsparrish · 2012-06-16T16:45:02.471Z · LW(p) · GW(p)
A Cryonic Shame from 2009 states that Storey dismisses cryonics on the basis of the temperature being too low and oxygen deprivation killing the cells due to the length of time required for cooling cryonics patients. This suggests that he does know (as of 2009, at least) that cryonicists aren't flash-vitrifying patients. But it doesn't demonstrate any knowledge of cryoprotectants being used -- he suggests that we would use sugar like the wood frogs do.
For one thing, cryonics institutes cool their bodies to temperatures of –80°C, and often subsequently to –200°C. Since no known vertebrate can survive below –20°C, and few below –8°C, this looks like a bad choice. “There isn’t enough sugar in the world” to protect cells at that temperature, Storey says. Moreover, Storey adds that cryonics practitioners “freeze bodies so slowly all the cells would be dead from lack of oxygen long before they freeze”.
This is an odd step backwards from his 2004 article where he demonstrated that he knew cryonics is about vitrification, but suggested an incorrect way to do it. He also strangely does not mention that the ischemic cascade is a long and drawn out process which slows down (as do other chemical reactions) the colder you get.
Not only does he get the biology wrong again (as near as I can tell) but to add insult to injury, this article has no mention of the fact that cryonicists intend to use nanotech, bioengineering, and/or uploading to work around the damage. It starts with the conclusion and fills in the blanks with old news. (The cells being "dead" from lack of oxygen is ludicrous if you go by structural criteria. The onset of ischemic cascade is a different matter.)
↑ comment by [deleted] · 2012-06-16T21:13:30.511Z · LW(p) · GW(p)
The comment directly above this one (lsparrish, "A Cryonic Shame") appeared downvoted at the time I posted this comment, though no one offered criticism or an explanation of why.
↑ comment by lsparrish · 2012-06-16T22:42:32.534Z · LW(p) · GW(p)
The above is a heavily edited version of the comment. (The edit was in response to the downvote.) The original version had an apparent logical contradiction towards the beginning and also probably came off a bit more condescending than I intended.
↑ comment by [deleted] · 2012-06-16T02:38:30.057Z · LW(p) · GW(p)
Thank you for this reply - I endorse almost all of it, with an asterisk on "the more important credential is knowledge of cryobiology", which is not obviously true to me at this time. I'm personally much more interested in specifying what exactly needs to be preserved before evaluating whether or not it is preserved. We need neuroscientists to define the metric so cryobiologists can actually measure it.
↑ comment by Synaptic · 2012-06-15T20:00:29.025Z · LW(p) · GW(p)
Sebastian Seung stated plainly in his most recent book that he fully expects to die. "I feel quite confident that you, dear reader, will die, and so will I." This seems implicitly extremely skeptical of current cryonics techniques, to say the least.
Semantics are tricky because "death" is poorly defined and people use it in different ways. See the post and comments here: http://www.geripal.org/2012/05/mostly-dead-vs-completely-dead.html.
As Seung notes in his book:
Irreversibility is not a timeless concept; it depends on currently available technology. What is irreversible today might become reversible in the future. For most of human history, a person was dead when respiration and heartbeat stopped. But now such changes are sometimes reversible. It is now possible to restore breathing, restart the heartbeat, or even transplant a healthy heart to replace a defective one.
comment by komponisto · 2012-06-26T12:34:26.112Z · LW(p) · GW(p)
Why do the (utterly redundant) words "Comment author:" now appear in the top left corner of every comment, thereby pushing the name, date, and score to the right?
Can we fix this, please? This is ugly and serves no purpose. (If anyone is truly worried that someone might somehow not realize that the name in bold green refers to the author of the comment/post, then this information can be put on the Welcome page and/or the wiki.)
To generalize: please no unannounced tinkering with the site design!
↑ comment by Richard_Kennaway · 2012-06-26T13:37:33.306Z · LW(p) · GW(p)
Apparently it was a technical kludge to allow Google searching by author. There has been some discussion at the place where issues are reported.
↑ comment by komponisto · 2012-06-27T00:51:01.592Z · LW(p) · GW(p)
Kludge indeed; and it is entirely unnecessary: Wei Dai's script already makes it easy to search a user's comment history.
I again urge those responsible to restore the prior appearance of the site (they can do what they want to the non-visible internals).
↑ comment by [deleted] · 2012-06-27T01:20:11.297Z · LW(p) · GW(p)
Wei Dai's tools are poorly documented, may not exist in the near future, and are virtually unknown to non-users.
↑ comment by komponisto · 2012-06-27T17:58:05.203Z · LW(p) · GW(p)
No object-level justification can address the (even) more important meta-level point, which is that they made changes to the visual appearance of LW without consulting the community first. This is a no-no!
(And I have no doubt that, were a proper Discussion post created announcing this idea, LW's considerable programmer readership would have been able to come up with some solution that did not involve making such an ugly visual change.)
↑ comment by [deleted] · 2012-06-27T18:09:43.149Z · LW(p) · GW(p)
No object-level justification can address the (even) more important meta-level point, which is that they made changes to the visual appearance of LW without consulting the community first. This is a no-no!
Design by a committee composed of conflicting vocal minorities? No thanks.
EDIT: Note that I don't disagree with you that this in particular was a bad design change. I disagree that consulting the community on every design change is a profitable policy.
comment by Rain · 2012-06-20T14:55:39.146Z · LW(p) · GW(p)
Can a moderator please deal with private_messaging, who is clearly here to vent rather than provide constructive criticism?
You currently have 290 posts on LessWrong and Zero (0) total Karma.
I don't care about opinion of a bunch that is here on LW.
Others: please do not feed the trolls.
↑ comment by Richard_Kennaway · 2012-06-26T13:21:56.196Z · LW(p) · GW(p)
I am against banning private_messaging. For comparison, MonkeyMind would be no loss, although since he last posted yesterday he probably hasn't been banned yet, and if not him, then there is no case here. private_messaging's manner is to rant rather than argue, which is somewhat tedious and unpleasant, but nowhere near a level where ejection would be appropriate.
Looking at his recent posts, I wonder if some of the downvotes are against the person instead of the posting.
↑ comment by Rain · 2012-06-25T15:13:03.815Z · LW(p) · GW(p)
He is -127 karma for the past 30 days.
↑ comment by Vladimir_Nesov · 2012-06-26T13:38:41.914Z · LW(p) · GW(p)
The standing rule is that a user's comments become bannable if they are systematically and significantly downvoted, and the user keeps making a lot of the kind of comments that get downvoted. In that case, after giving notice to the user, a moderator can start banning future comments of the kind that clearly would be downvoted, or that did get downvoted, primarily to prevent the development of discussions around those comments (which would incite further downvoted comments from the user).
So far, this rule has only been applied to crackpot-like characters who got something like minus 300 points within a month and generated ugly discussions. private_messaging is not within that cluster, and it's still possible that he'll either go away or calm down in the future (e.g. stop making controversial statements without arguments, which is the kind of thing that gets downvoted).
↑ comment by TheOtherDave · 2012-06-20T17:04:34.114Z · LW(p) · GW(p)
In the meantime, you might find it useful to explore Wei Dai's [Power Reader](http://lesswrong.com/lw/5uz/lesswrong_power_reader_greasemonkey_script_updated/), which allows the user to raise or lower the visibility of certain authors.
↑ comment by Viliam_Bur · 2012-06-25T15:26:19.757Z · LW(p) · GW(p)
You propose a dangerous thing.
Once there was an article deleted on LW. Since that happened, it is repeatedly used as an example of how censored, intolerant, and cultish LW is. Can you imagine a reaction to banning a user account (if that is what you suggest)? Cthulhu fhtagn! If this happens, what will come next: captcha in the LW wiki?
↑ comment by Rain · 2012-06-25T15:31:24.724Z · LW(p) · GW(p)
Instead, we should spend hundreds or thousands of man-hours engaging with trolls? At least Roko had a positive goal.
From your link:
This is the Internet: Anyone can walk in. And anyone can walk out. And so an online community must stay fun to stay alive. Waiting until the last resort of absolute, blatant, undeniable egregiousness—waiting as long as a police officer would wait to open fire—indulging your conscience and the virtues you learned in walled fortresses, waiting until you can be certain you are in the right, and fear no questioning looks—is waiting far too late.
↑ comment by Viliam_Bur · 2012-06-25T15:46:15.372Z · LW(p) · GW(p)
Note to self: use metadata in comments when necessary, such as "irony" etc.
Perhaps there should be some automatic account-disabling mechanism based on karma. If someone's total karma (not just for the last 30 days) is below some negative level (for example -100), their account would be automatically disabled. Without direct intervention by a moderator, to make it less personal, but also quicker. Without deleting anything, to allow an easy fix in case of karma assassinations.
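To make the proposed mechanism concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the `User` type, the `maybe_disable` hook, and the -100 floor come from the comment above or are invented for illustration, and nothing like this is taken from any actual LW code.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    total_karma: int       # all-time karma, not just the last 30 days
    disabled: bool = False

KARMA_FLOOR = -100         # example threshold from the comment above

def maybe_disable(user: User) -> None:
    """Disable (never delete) an account whose all-time karma is below the floor."""
    if not user.disabled and user.total_karma <= KARMA_FLOOR:
        user.disabled = True  # blocks new submissions; existing content stays visible

# A karma-assassinated user can be restored by reversing the votes:
victim = User("example_user", total_karma=-130)
maybe_disable(victim)
assert victim.disabled
victim.total_karma = 5     # spurious downvotes reversed
victim.disabled = False    # trivially re-enabled; nothing was deleted
```

Keying the check to all-time karma rather than the 30-day figure is what would protect established users from a single bad week.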
↑ comment by Rain · 2012-06-25T22:46:22.829Z · LW(p) · GW(p)
What was ironic about it?
↑ comment by Viliam_Bur · 2012-06-26T10:29:26.931Z · LW(p) · GW(p)
Perhaps it's not the right word. Anyway, website moderation is full of "damned if you do, damned if you don't" situations. Having bad content on your website puts you in a bad light. Removing bad content from your website puts you in a bad light.
People will automatically associate everything on your website with you. Because it's on your website, d'oh! This is especially dangerous with opinions which have a surface similarity to your expressed opinions. Most people will only remember: "I read this on LessWrong".
That was the PR danger of Roko. If his "pro-Singularity Pascal's mugging" comments were not removed, many people would interpret them as something that people at SIAI believe. Because (1) SIAI is pro-Singularity, and (2) they need money, and (3) it's on their website, d'oh! A hyperlink to such discussion is all anyone would ever need to prove that LW is a dangerous organization.
On the other hand, if you ever remove anything from your website, it is a proof that you are an evil Nazi who can't tolerate free speech. What, are you unable to withstand someone disagreeing with you? (That's how most trolls describe their own actions.) And deleting comments with surface similarities to yours, that's even more suspicious. What, you can't tolerate even a small dissent?
The best solution, from a PR point of view, is probably to remove all offending comments without explanation, or to replace them with a generic explanation such as "this comment violated the LW Terms of Service", with a hyperlink to a long and boring document containing a rule equivalent to '...and also moderators can delete any comment or article if they decide so.' Also, if such deletions are rather common, not exceptional, the individual instances will draw less attention. (In other words, the best way to avoid censorship accusations is to have real censorship. Homo hypocritus, ahoy.)
↑ comment by Rain · 2012-06-26T12:48:24.947Z · LW(p) · GW(p)
The Roko Incident was one of the most exceptional events of article removal I've ever witnessed, for every possible reason: the high-status people involved, the reasons for removal, the tone of conversation, the theoretical dangers of knowledge, and the mass self-deletion event that followed. There are many reasons it gets talked about rather than the dozens of other posts which are deleted by the time I get around to clicking them in my RSS feed.
Nobody would miss private_messaging.
↑ comment by TheOtherDave · 2012-06-26T14:07:01.414Z · LW(p) · GW(p)
For my own part, if LW admins want to actively moderate discussion (e.g., delete substandard comments/posts), that's cool with me, and I would endorse that far more than not actively moderating discussion but every once in a while deleting comments or banning users who are not obviously worse than comments and users that go unaddressed.
Of course, once site admins demonstrate the willingness to ban submissions considered inappropriate, reasonable people are justified in concluding that unbanned submissions are considered appropriate. In other words, active moderation quickly becomes an obligation.
↑ comment by TheOtherDave · 2012-06-25T16:05:21.059Z · LW(p) · GW(p)
Note that you're excluding a middle that is perhaps worth considering. That is, the choice is not necessarily between "dealing with" a user account on an admin level (which generally amounts to forcing the user to change their ID and not much more), and spending hundreds of thousands of man-hours in counterproductive exchange.
A third option worth considering is not engaging in counterproductive exchanges, and focusing our attention elsewhere. (AKA, as you say, "don't feed the trolls".)
↑ comment by sketerpot · 2012-06-28T22:56:21.701Z · LW(p) · GW(p)
Can you imagine a reaction to banning a user account (if that is what you suggest)? Cthulhu fhtagn!
Wait, what? Forums ban trolls all the time. It becomes necessary when you get big enough and popular enough to attract significant troll populations. It's hardly extreme and cultish, or even unusual.
comment by [deleted] · 2012-06-17T21:35:32.958Z · LW(p) · GW(p)
I'm going to reduce (or understand someone else's reduction of) the stable AI self-modification difficulty related to Löb's theorem. It's going to happen, because I refuse to lose. If anyone else would like to do some research, this comment lists some materials that presently seem useful.
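For reference, here is the theorem in question, in its standard textbook form (my restatement, not quoted from any of the materials below):

```latex
% Löb's theorem for PA, with Prov the arithmetized provability predicate:
% if PA proves "a proof of P implies P", then PA proves P outright.
\[
\text{If } \mathrm{PA} \vdash \mathrm{Prov}(\ulcorner P \urcorner) \rightarrow P,
\text{ then } \mathrm{PA} \vdash P.
\]
% Equivalent modal form (the Löb axiom of the provability logic GL),
% reading the box as "is provable in PA":
\[
\Box(\Box P \rightarrow P) \rightarrow \Box P
\]
```

This is what makes naive self-trust hazardous for a self-modifying prover: a system that asserted "whatever I prove is true" for every P would, by the theorem, end up asserting every P, true or not.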
The slides for Eliezer's Singularity Summit talk are available here, reading which is considerably nicer than squinting at flv compression artifacts in the video for the talk, also available at the previous link. Also, a transcription of the video can be found here.
On provability logic by Švejdar. A little introduction to provability logic. This and Eliezer's talk are at the top because they're reference material. Remaining links are organized by my reading priority:
- On Explicit Reflection in Theorem Proving and Formal Verification by Artemov. What I've read of these papers captures my intuitions about provability, namely that having a proof "in hand" is very different from showing that one exists, and this can be used by a theory to reason about its proofs, or by a theorem prover to reason about self modifications. As Artemov says, "The above difficulties with reading S4-modality ◻F as ∃x Proof(x, y) are caused by the non-constructive character of the existential quantifier. In particular, in a given model of arithmetic an element that instantiates the existential quantifier over proofs may be nonstandard. In that case ∃x Proof(x, F) though true in the model, does not deliver a “real” PA-derivation".
I don't fully understand this difference between codings of proofs in the standard model vs a non-standard model of arithmetic (On which a little more here). So I also intend to read,
Truth and provability by Jervell, which looks to contain a bit of model theory in the context of modal logic and provability.
Metatheory and Reflection in Theorem Proving by Harrison. This paper was a very thorough review of reflection in theorem provers at the time it was published. The history of theorem provers in the first nine pages was a little hard to digest without knowing the field, but after that he starts presenting results.
Explicit Proofs in Formal Provability Logic by Goris. More results on the kind of justification logic set out by Artemov. Might skip if the Artemov papers stop looking promising.
A new perspective on the arithmetical completeness of GL by Henk. Might explain further the extent to which ∃x Proof(x, F), the non-constructive provability predicate, adequately represents provability.
A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points by Yanofsky. Analyzes a bunch of mathematical results involving self reference and the limitations on the truth and provability predicates.
Provability as a Modal Operator with the models of PA as the Worlds by Herreshoff. I just want to see what kind of analysis Marcello throws out, I don't expect to find a solution here.
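For readers following along, the non-constructive provability predicate that the Artemov quote and the Henk paper refer to is the usual one (my summary of the standard construction, not drawn from these specific papers):

```latex
% Proof(x, y) is the primitive recursive relation
% "x codes a PA-derivation of the formula coded by y".
% The provability predicate quantifies the witness away:
\[
\mathrm{Prov}(y) \;:=\; \exists x\, \mathrm{Proof}(x, y)
\]
```

In a nonstandard model of PA the witness x can be a nonstandard number, so Prov(⌜F⌝) may hold in the model even though no real, finite derivation of F exists; keeping the explicit witness in hand, as justification logic does, is meant to close exactly that gap.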
comment by [deleted] · 2012-06-27T07:20:07.734Z · LW(p) · GW(p)
LessWrong/Overcoming Bias used to be a much more interesting place. Note how lacking in self-censorship Vassar is in that post. Talking about sexuality and the norms surrounding it like we would any other topic. Today we walk on eggshells.
A modern post of this kind is impossible, despite the fact that, in my estimation, it would personally benefit at least 30% of the users of this site and would make better predictive models of social reality available to all of them.
↑ comment by Viliam_Bur · 2012-06-27T12:38:00.959Z · LW(p) · GW(p)
If I understand correctly, the purpose of the self-censorship was to make this site more friendly for women. Which creates a paradox: An idea that one can speak openly with men, but with women a self-censorship is necessary, is kind of offensive to women, isn't it?
(The first rule of Political Correctness is: You don't talk about Political Correctness. The second rule: You don't talk about Political Correctness. The third rule: When someone says stop, or expresses outrage, the discussion about given topic is over.)
Or maybe this is too much of a generalization. What other topics are we self-censoring, besides sexual behavior and politics? I don't remember. Maybe it is just politics being self-censored; sexual behavior being a sensitive political topic. Problem is, any topic can become political, if for whatever reasons "Greens" decide to identify with a position X, and "Blues" with a position non-X.
We are taking the taboo on political topics too far. Instead of avoiding mindkilling, we avoid the topics completely.
Although we have traditional exceptions: it is allowed to talk about evolution and atheism, despite the fact that some people might consider these topics political too, and might feel offended. (Global warming is probably also acceptable, just less attractive for nerds.) So let's find out what exactly determines when a potentially political topic becomes allowed on LW and when it becomes self-censored.
My hypothesis is that LW is actually not politically neutral, but some political opinion P is implicitly present here as a bias. Opinions which are rational and compatible with P, can be expressed freely. Opinions which are irrational and incompatible with P, can be used as examples of irrationality (religion being the best example). Opinions which are rational but incompatible with P, are self-censored. Opinions which are irrational but compatible with P are also never mentioned (because we are rational enough to recognize they can't be defended).
↑ comment by [deleted] · 2012-06-27T12:59:55.188Z · LW(p) · GW(p)
As to political correctness, its great insidiousness lies in the fact that while you can complain about it abstractly, in the manner of a religious person complaining about hypocrites and Pharisees, you can't ever back up your attack with specific examples, since if you do this you are violating sacred taboos, which means you lose your argument by default.
The pathetic exception to this is attacking very marginal and unpopular applications that your fellow debaters can easily dismiss as misguided extremism or even a straw man argument.
The second problem is that as time goes on, if reality happens to be politically incorrect on some issue, any other issue that points to the truth of this subject becomes potentially tainted by the label as well. You actively have to resort to thinking up new models as to why the dragon is indeed obviously in the garage. You also need to have good models of how well other people can reason about the absence of the dragon to see where exactly you can walk without concern. This is a cognitively straining process in which everyone slips up.
I recall my country's Ombudsman once visiting my school for a talk wearing a T-shirt that said "After a close up no one looks normal." Doing a close up of people's opinions reveals no one is fully politically correct; this means that political correctness is always a viable weapon to shut down debates via ad hominem.
Merely mentioning political correctness means that many readers will instantly see you or me as one of those people: sly, norm-violating lawyers and outgroup members who should just stop whining.
↑ comment by Viliam_Bur · 2012-06-27T14:17:53.434Z · LW(p) · GW(p)
As to political correctness, its great insidiousness lies in the fact that while you can complain about it abstractly, in the manner of a religious person complaining about hypocrites and Pharisees, you can't ever back up your attack with specific examples
My fault for using a politically charged word for a joke (but I couldn't resist). Let's do it properly now: What exactly does "political correctness" mean? It is not just any set of taboos (we wouldn't refer to e.g. religious taboos as political correctness). It is a very specific set of modern-era taboos. So perhaps it is worth distinguishing between taboos in general, and political correctness as a specific example of taboos. Similarities are obvious, what exactly are the differences?
I am just doing a quick guess now, but I think the difference is that the old taboos were openly known as taboos. (It is forbidden to walk in a sacred forest, but it is allowed to say: "It is forbidden to walk in a sacred forest.") The modern taboos pretend to be something else than taboos. (An analogy would be that everyone knows that when you walk in a sacred forest, you will be tortured to death, but if you say: "It is forbidden to walk in a sacred forest", the answer is: "No, there is no sacred forest, and you can walk anywhere you want, assuming you don't break any other law." And whenever a person is being tortured for walking in a sacred forest, there is always an alternative explanation, for example an imaginary crime.)
Thus, "political correctness" = a specific set of modern taboos + a denial that taboos exist.
If this is correct, then complaining, even abstractly, about political correctness is already a big achievement. Saying that X is an example of political correctness amounts to saying that X is false, which is breaking a taboo, and that is punished -- just like breaking any other taboo. But speaking about political correctness abstractly is breaking a meta-taboo built to protect the other taboos; and unlike those taboos, the meta-taboo is more difficult to defend. (How exactly would one defend it? By saying: "You should never speak about political correctness, because everyone is allowed to speak about anything"? The contradiction becomes too obvious.)
Speaking about political correctness is the most politically incorrect thing ever. When this is done, only the ordinary taboos remain.
Merely mentioning political correctness means that many readers will instantly see you or me as one of those people: sly, norm-violating lawyers and outgroup members who should just stop whining.
Of course, people recognize what is happening, and they may not like it. But it would still be difficult to have someone e.g. fired from a university only for saying, abstractly, that political correctness exists.
↑ comment by [deleted] · 2012-06-27T14:21:42.868Z · LW(p) · GW(p)
If this is correct, then complaining, even abstractly, about political correctness is already a big achievement.
It has been said that even having a phrase for it has reduced its power greatly, because now people can talk about it, even if they are still punished for doing so.
Of course, people recognize what is happening, and they may not like it. But it would still be difficult to have someone e.g. fired from a university only for saying, abstractly, that political correctness exists.
True. However, a professor complaining about political correctness abstractly still has no tools to prevent its spread to the topic of, say, optimal gardening techniques. Also, if he has a long history of complaining about political correctness abstractly, he is branded controversial.
I think it was Sailer who said he is old enough to remember when being called controversial was a good thing, signalling something of intellectual interest, while today it means "move along, nothing to see here".
↑ comment by Mitchell_Porter · 2012-06-27T15:24:42.373Z · LW(p) · GW(p)
Doing a close up of people's opinions reveals no one is fully politically correct; this means that political correctness is always a viable weapon to shut down debates via ad hominem.
Taboo "political correctness"... just for a moment. (This may be the first time I've ever used that particular LW locution.) Compare the accusations, "you are a hypocrite" and "you are politically incorrect". The first is common, the second nonexistent. Political correctness is never the explicit rationale for shutting someone out, in a way that hypocrisy can be, because hypocrisy is openly regarded as a negative trait.
So the immediate mechanism of a PC shutdown of debate will always be something other than the abstraction, "PC". Suppose you want to tell the world that women love jerks, blacks are dumber than whites, and democracy is bad. People may express horror, incredulity, outrage, or other emotions; they may dismiss you as being part of an evil movement, or they may say that every sensible person knows that those ideas were refuted long ago; they may employ any number of argumentative techniques or emotional appeals. What they won't do is say, "Sir, your propositions are politically incorrect and therefore clearly invalid, Q.E.D."
So saying "anyone can be targeted for political incorrectness" is like saying "anyone can be targeted for factual incorrectness". It's true but it's vacuous, because such criticisms always resolve into something more specific and that is the level at which they must be engaged. If someone complained that they were persistently shut out of political discussion because they were always being accused of factual incorrectness... well, either the allegations were false, in which case they might be rebutted, or they were true but irrelevant, in which case a defender can point out the irrelevance, or they were true and relevant, in which case shutting this person out of discussions might be the best thing to do.
It's much the same for people who are "targeted for being politically incorrect". The alleged universal vulnerability to accusations of political incorrectness is somewhat fictitious. The real basis or motive of such criticism is always something more specific, and either you can or can't overcome it, that's all.
↑ comment by Viliam_Bur · 2012-06-27T23:22:10.546Z · LW(p) · GW(p)
Political correctness (without hypocrisy) feels from the inside like a fight against factual incorrectness with dangerous social consequences. It's not just "you are wrong", but "you are wrong, and if people believe this, horrible things will happen".
Mere factual incorrectness will not invoke the same reaction. If one professor of mathematics admits a belief that 2+2=5, and another admits a belief that women on average are worse at math than men, both could be fired, but people will not be angry at the former. It's not just about fixing an error, but also about saving the world.
Then, what is the difference between a politically incorrect opinion and a factually incorrect opinion with dangerous social consequences? In theory, the latter can be proved wrong. In real life, some proofs are expensive or take a lot of time; also, many people are irrational, so even a proof would not convince everyone. But I still suspect that in the case of a factually incorrect opinion, opponents would at least try to prove it wrong, and would expect support from experts; while in the case of a politically incorrect opinion, an experiment would be considered dangerous and the experts unreliable. (Not completely sure about this part.)
↑ comment by wedrifid · 2012-06-28T03:11:15.632Z · LW(p) · GW(p)
Political correctness (without hypocrisy) feels from the inside like a fight against factual incorrectness with dangerous social consequences. It's not just "you are wrong", but "you are wrong, and if people believe this, horrible things will happen".
It may feel like that for some people. For me the 'feeling' is factual incorrectness agnostic.
↑ comment by TheOtherDave · 2012-06-28T01:27:33.044Z · LW(p) · GW(p)
I agree that concern about the consequences of a belief is important to the cluster you're describing. There's also an element of "in the past, people who have asserted X have had motives of which I disapprove, and therefore the fact that you are asserting X is evidence that I will disapprove of your motives as well."
↑ comment by NancyLebovitz · 2012-06-28T08:04:58.336Z · LW(p) · GW(p)
Not just motives-- the idea is that those beliefs have reliably led to destructive actions.
↑ comment by TheOtherDave · 2012-06-28T13:30:32.055Z · LW(p) · GW(p)
I am confused by this comment. I was agreeing with Viliam that concern about consequences was important, and adding that concern about motives was also important... to which you seem to be responding that the idea is that concern about consequences is important. Have I missed something, or are we just going in circles now?
↑ comment by NancyLebovitz · 2012-06-28T14:01:11.606Z · LW(p) · GW(p)
Sorry-- I missed the "also" in "There's also an element...."
↑ comment by TimS · 2012-06-28T00:22:56.882Z · LW(p) · GW(p)
To me, asserting that one is "politically incorrect" is a statement that one's opponents are extremely mindkilled and are willing to use their power to suppress opposition (i.e. you).
But there's nothing about being mindkilled or willing to suppress dissent that proves one is wrong. Likewise, being opposed by the mindkilled is not evidence that one is not mindkilled oneself.
That dramatically decreases the informational value of bringing up the issue of political correctness in a debate. And accusing someone of adopting a position because it complies with political correctness is essentially identical to an accusation that your opponent is mindkilled - hence it is quite inflammatory in this community.
↑ comment by Viliam_Bur · 2012-06-28T09:06:01.637Z · LW(p) · GW(p)
Political correctness is also evidence of filtered evidence. Some people say X because it is good signalling, and some people avoid saying non-X because it is bad signalling. We shouldn't reverse stupidity, but we should suspect that we have not yet been exposed to the best arguments against X.
↑ comment by wedrifid · 2012-06-28T03:09:30.410Z · LW(p) · GW(p)
To me, asserting that one is "politically incorrect" is a statement that one's opponents are extremely mindkilled and are willing to use their power to suppress opposition (i.e. you).
It is just as likely to mean that the opponents are insufficiently mindkilled regarding the issues in question and may be Enemies Of The Tribe.
↑ comment by TheOtherDave · 2012-06-27T14:02:40.824Z · LW(p) · GW(p)
merely mentioning political correctness means that many readers will instantly see you or me as one of those people
In my experience, using "political correctness" frequently has this effect, but mentioning its referent needn't and often doesn't.
↑ comment by wedrifid · 2012-06-28T03:28:40.290Z · LW(p) · GW(p)
Merely mentioning political correctness means that many readers will instantly see you or me as one of those people: sly, norm-violating lawyers and outgroup members who should just stop whining.
You really, really, aren't coming across as sly. I suspect they would go with the somewhat opposite "convey that you are naive" tactic instead.
↑ comment by [deleted] · 2012-06-28T06:22:17.724Z · LW(p) · GW(p)
Oh, I didn't mean to imply I was! It's just that when someone talks about political correctness making arguments difficult, people often get facial expressions as if he were cheating in some way, so I got the feeling this was:
"You are violating a rule we can't explicitly state you are violating! That's an exploit, stop it!"
I'm less confident in this than I am in someone talking about political correctness being an outgroup marker, but I do think it's there. On LW we have different priors: we see people being naive and violating norms in ignorance, where outsiders would often see them as violating norms on purpose.
↑ comment by Emile · 2012-06-28T09:07:07.804Z · LW(p) · GW(p)
"You are violating a rule we can't explicitly state you are violating! That's an exploit, stop it!"
To me the reaction is more like "You are trying to turn a discussion of facts and values into whining about being oppressed by your political opponents".
(actually, I'm not sure I'm actually disagreeing with you here, except maybe about some subtle nuances in connotation)
↑ comment by [deleted] · 2012-06-29T11:06:07.381Z · LW(p) · GW(p)
"You are trying to turn a discussion of facts and values into whining about being oppressed by your political opponents"
If this is so, it is somewhat ironic. From the inside, objecting to political correctness feels like calling out intrusive political derailment, or discussions of should in a factual discussion about is.
There are arguments for this: being the sole uptight moral preacher of political correctness often gets you looks similar to those given to the one person objecting to it.
But this leads me to think both are just rationalizations. If this is fully explained by being a matter of tribal attire and shibboleths, what exactly would be different? Not that much.
↑ comment by Emile · 2012-06-30T13:32:58.629Z · LW(p) · GW(p)
It may be a rationalization, but it's one that may be more likely to occur than "that's an exploit"!
I agree there's a similar sentiment going both ways, when a conversation goes like:
A: Eating the babies of the poor would solve famine and overpopulation!
B: How dare you even propose such an immoral thing!
A: You're just being politically correct!
At each step, the discussion is getting more meta and less interesting - from fact to morality to politics. In effect, complaining about political correctness is complaining about the conversation being too meta, by making it even more meta. I don't think that strategy is very likely to lead to useful discussion.
↑ comment by TimS · 2012-06-29T13:06:56.050Z · LW(p) · GW(p)
Viliam_Bur makes a similar point. But I stand by my response that the fact that one's opponent is mindkilled is not strong evidence that one is not also mindkilled.
And being mindkilled does not necessarily mean one is wrong.
↑ comment by Multiheaded · 2012-06-27T13:15:13.697Z · LW(p) · GW(p)
you can't ever back up your attack with specific examples, since if you do this you are violating sacred taboos, which means you lose your argument by default
I bet you 100 karma that I could spin (the possibility of) "racial" differences in intelligence in such a way as to sound tragic but largely inoffensive to the audience, and play the "don't leave the field to the Nazis, we're all good liberals right?" card, on any liberal blog of your choosing with an active comment section, and end up looking nice and thoughtful! If I pulled it off on LW, I can pull it off elsewhere with some preparation.
My point is, this is not a total information blockade, it's just that fringe elements and tech nerds and such can't spin a story to save their lives (even the best ones are only preaching to their choir), and the mainstream elite has a near-monopoly on charisma.
↑ comment by [deleted] · 2012-06-27T13:23:20.098Z · LW(p) · GW(p)
I hope you realize that by picking the example of race you make my above comment look like a clever rationalization for racism if taken out of context.
Also you are empirically plain wrong for the average online community. Give me one example of one public figure who has done this. If people like Charles Murray or Arthur Jensen can't pull this off you need to be a rather remarkable person to do so in a random internet forum where standards of discussion are usually lower.
As to LW, it is hardly a typical forum! We have plenty of overlap with the GNXP and the wider HBD crowd. Naturally there are enough people who will up vote such an argument. On race we are actually good. We are willing to consider arguments and we don't seem to have racists here either, this is pretty rare online.
Ironically, us being good on race is the reason I don't want us talking about race too much in articles: it attracts the wrong contrarian cluster to come visit, and it fries the brains of newbies as well as creating room for "I am offended!" trolling.
Even if I granted this point for the sake of argument, it doesn't directly address any part of my description of the phenomena and how they are problematic.
↑ comment by Multiheaded · 2012-06-27T13:28:50.631Z · LW(p) · GW(p)
If people like Charles Murray or Arthur Jensen can't pull this off you need to be a rather remarkable person to do so in a random internet forum where standards of discussion are usually lower.
They don't know how, because they haven't researched previous attempts and don't have a good angle of attack etc. You ought to push the "what if" angle and self-abase and warn people about those scary scary racists and other stuff... I bet that high-status geeks can't do it because they still think like geeks. I bet I can think like a social butterfly, as unpleasant as this might be for me.
Let us actually try! Hey, someone, pick the time and place.
Also, see this article by a sufficiently cautious liberal, an anti-racist activist no less:
All that said, however, I have come to the conclusion that arguing for racial equity on the grounds that race is non-scientific and unrelated to intelligence, or that the notion of intelligence itself is culturally biased and subjective, is the wrong approach for egalitarians to take. By resting our position on those premises, we allow the opponents of equity and the believers in racism to frame the discussion in their own terms. But there is no need to allow such framing. The fact is, the moral imperative of racial equity should not (and ethically speaking does not) rely on whether or not race is a fiction, or whether or not intelligence is related to so-called racial identity.
Indeed, I would suggest that resting the claim for racial equity and just treatment upon the contemporary understanding of race and intelligence produced by scientists is a dangerous and ultimately unethical thing to do, simply because morality and ethics cannot be determined solely on the basis of science. Would it be ethical, after all, to mistreat individuals simply because they belonged to groups that we discovered were fundamentally different and in some regards less “capable,” on average, than other groups? Of course not. The moral claim to be treated ethically and justly, as an individual, rests on certain principles that transcend the genome and whatever we may know about it. This is why it has always been dangerous to rest the claim for LGBT equality on the argument that homosexuality is genetic or biological. It may well be, but what if it were proven not to be so? Would that now mean that it would be ethical to discriminate against LGBT folks, simply because it wasn’t something encoded in their biology, and perhaps was something over which they had more “control?”
First, that's basically what I would say in the beginning of my attack. Second, read the rest of the article. It has plenty of strawmen, but it's a wonderful example of the art of spin-doctoring. Third, he doesn't sound all that horrifyingly close-minded, does he?
↑ comment by fubarobfusco · 2012-06-30T10:12:24.512Z · LW(p) · GW(p)
The moral claim to be treated ethically and justly, as an individual, rests on certain principles that transcend the genome and whatever we may know about it. This is why it has always been dangerous to rest the claim for LGBT equality on the argument that homosexuality is genetic or biological. It may well be, but what if it were proven not to be so? Would that now mean that it would be ethical to discriminate against LGBT folks, simply because it wasn’t something encoded in their biology, and perhaps was something over which they had more “control?”
Were it not political, this would serve as an excellent example of a number of things we're supposed to do around here to get rid of rationalizing arguments and improper beliefs. I hear echoes of "Is that your true rejection?" and "One person's modus ponens is another's modus tollens" ...
"Certain principles that transcend the genome" sounds like bafflegab or New-Agery as written — but if you state it as "mathematical principles that can be found in game theory and decision theory, and which apply to individuals of any sort, even aliens or AIs" then you get something that sounds quite a lot like X-rationality, doesn't it?
↑ comment by [deleted] · 2012-06-27T13:35:11.023Z · LW(p) · GW(p)
If you've found such an angle of attack on the issue of race, please share it and point to examples that have withstood public scrutiny. Spell the strategy out; show how one can be ideologically neutral and get away with talking about this. Jensen is no ideologue; he is a scientist in the best sense of the word.
You should see straight away why Tim Wise is a very bad example. Not only is he ideologically Liberal, he is infamously so, and I bet many assume he doesn't really believe in the possibility of racial differences but is merely striking down a straw man. Remember, this is the same Tim Wise who is basically looking forward to old white people dying so he can have his liberal utopia, and who writes gloatingly about it. Replace "white people" with a different ethnic group to see how fucked up that is.
Also, you miss the point utterly: if I'm only allowed to say these things when I phrase them in politically correct liberal terms, then, gee, maybe political correctness is a political weapon! The very application of such standards means that if I stick to them on LW, I am actively participating in the enforcement of an ideology.
Where does this leave libertarians (such as, say, Peter Thiel), or anarchists, or conservative rationalists? What about the non-bourgeois socialists? Do we ever get as much consideration as the other kinds of minorities? Are our assessments unwelcome?
Replies from: Multiheaded, Multiheaded↑ comment by Multiheaded · 2012-06-27T13:40:30.988Z · LW(p) · GW(p)
I'll dig those up, but if you want to find them faster, see some of my comments floating around in my Grand Thread of Heresies and below Aurini's rant. I have most definitely said things to that effect and people have upvoted me for it. That's the whole reason I'm so audacious.
↑ comment by Multiheaded · 2012-06-27T13:50:43.712Z · LW(p) · GW(p)
Also, you miss the point utterly: if I'm only allowed to say these things when I phrase them in politically correct liberal terms, then, gee, maybe political correctness is a political weapon! The very application of such standards means that if I stick to them on LW, I am actively participating in the enforcement of an ideology.
No! No! No! All you've got to do is speak the language! Hell, the filtering is mostly for the language! And when you pass the first barrier like that, you can confuse the witch-hunters and imply pretty much anything you want, as long as you can make any attack on you look rude. You can have any ideology and use the surface language of any other ideology as long as they have comparable complexity. Hell, Moldbug sorta tries to do it.
Replies from: formido, None↑ comment by formido · 2012-06-27T18:22:17.772Z · LW(p) · GW(p)
Moldbug cannot survive on a progressive message board. He was hellbanned from Hacker News right away. Log in to Hacker News and turn on showdead: http://news.ycombinator.com/threads?id=moldbug
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-27T18:42:46.644Z · LW(p) · GW(p)
Doesn't matter. I've seen him here and there around the net, and he holds himself to rather high standards on his own blog, which is where he does his only real evangelizing, yet he gets into flamewars, spews directed bile and just outright trolls people in other places.
I guess he's only comfortable enough to do his thing for real and at length when he's in his little fortress. That's not at all unusual, you know.
↑ comment by [deleted] · 2012-06-27T13:53:36.273Z · LW(p) · GW(p)
You can have any ideology and use the surface language of any other ideology as long as they have comparable complexity.
There should be a term for the ideological equivalent of Turing completeness.
↑ comment by wedrifid · 2012-06-27T14:58:58.826Z · LW(p) · GW(p)
and tech nerds and such can't spin a story to save their lives (even the best ones are only preaching to their choir), and the mainstream elite has a near-monopoly on charisma.
This "charisma" thing also happens to incorporate instinctively or actively choosing positions that lead to desirable social outcomes as a key feature. Extra eloquence can allow people to overcome a certain amount of disadvantage but choosing the socially advantageous positions to take in the first place is at least as important.
↑ comment by [deleted] · 2012-06-27T12:41:18.947Z · LW(p) · GW(p)
We are taking the taboo on political topics too far. Instead of avoiding mindkilling, we avoid the topics completely.
Quite recently, even economics and its intersection with bias have apparently entered mindkiller territory. Economics was always political in the wider world, but considering this is a community dedicated to refining the art of human rationality, we can't really afford for such basic concepts to be mindkillers. Can we now?
I mean, how could we explore mechanisms such as prediction markets without it? How can you even talk about any kind of maximising agent without invoking lots of econ talk?
↑ comment by TheOtherDave · 2012-06-27T14:06:23.135Z · LW(p) · GW(p)
My hypothesis is that LW is actually not politically neutral, but some political opinion P is implicitly present here as a bias. Opinions which are rational and compatible with P, can be expressed freely. Opinions which are irrational and incompatible with P, can be used as examples of irrationality (religion being the best example).
Yeah, that sounds about right.
Opinions which are rational but incompatible with P, are self-censored.
Not entirely, but I agree that they are likely far more often self-censored than those compatible with P. They are less often self-censored, I suspect, than on other sites with a similar political bias.
Opinions which are irrational but compatible with P are also never mentioned (because we are rational enough to recognize they can't be defended)
I'm skeptical of this claim, but would agree that they are far less often mentioned here than on other sites with a similar political demographic.
↑ comment by [deleted] · 2012-06-27T08:01:19.856Z · LW(p) · GW(p)
Summary of an IRC conversation in the unofficial LW chatroom.
On the IRC channel I noted that there are several subjects on which discourse was better or more interesting in OB/LW 2008 than today, yet I can't think of a single topic on which LW 2012 has better dialogue or commentary. Another LWer noted that it is in the nature of all internet forums to "grow more stupid over time". I don't think LW is stupider; I just think it has grown more boring, and it definitely isn't a community with a higher sanity waterline today than back then, despite many individuals levelling up formidably in the intervening period.
some new place started by the same people, before LW was OB. before OB was SL4, before that was... I don't know
This post is made in the hopes people will let me know about the next good spot.
Replies from: Viliam_Bur, Multiheaded, Multiheaded↑ comment by Viliam_Bur · 2012-06-27T11:27:34.224Z · LW(p) · GW(p)
I wasn't here in 2008, but it seems to me that the emphasis of this site is moving from articles to comments.
Articles are usually better than comments. People put more work into articles, and as a reward for this work the article becomes more visible, and successful articles are well remembered and hyperlinked. An article creates a separate page where one main topic is explored. If necessary, more articles may explore the same topic, creating a sequence.
Even some "articles" today don't have the qualities of the classical article. Some of them are just a question / a poll / a prompt for discussion / a reminder for a meetup. Some of them are just placeholders for comments (open thread, group rationality) -- and personally I prefer these, because they don't polute the article-space.
Essentially, we are mixing together the "article" paradigm and the "discussion forum" paradigm. But these are two different things. An article is a higher-quality piece of text. A discussion forum is just a structure of comments, without articles. Both have their place, but if you take a comment and call it an "article", of course it seems that the average quality of articles deteriorates.
Assuming this analysis is correct, we don't need much of a technical fix; we need a semantic fix. That is: the same software, but different rules for posting. And the rules need to be explicit, to avoid gradual spontaneous reverting.
- "Discussion" for discussions: that is, for comments without a top-level article (open thread, group rationality, meetups). It is not allowed to create a new top-level article here, unless the community (in open thread discussion) agrees that a new type of open thread is needed.
- "Articles" for articles: that is for texts that meet some quality treshold -- that means that users should vote down the article even if the topic is interesting, if the article is badly written. Don't say "it's badly written, but the topic is interesting anyway", but "this topic deserves a well-written article".
Then, we should compare the old OB/LW with the "Article" section, to make a fair comparison.
EDIT: How to get from "here" to "there", if this plan is accepted? We could start by renaming "Main" to "Articles", or we could even keep the old name; I don't care. But we mainly need to re-arrange the articles. Move the meetup announcements to "Discussion". Move the higher-quality articles from "Discussion" to "Main", and... perhaps leave the existing lower-quality articles in "Discussion" (to avoid creating another category) but from now on, ban creating more such articles.
EDIT: Another suggestion -- is it possible to make some articles "sticky"? Regardless of their date, they would always show at the top of the list (until the "sticky" flag is removed). Then we could always make the most recent "Open Thread" and "Group Rationality" threads sticky, so they are the first things people see after clicking on Discussion. This could reduce the temptation to start a new article.
↑ comment by Multiheaded · 2012-06-27T11:34:57.792Z · LW(p) · GW(p)
yet I can't think of a single topic on which LW 2012 has better dialogue or commentary
Religion.
Replies from: None↑ comment by [deleted] · 2012-06-27T12:35:50.107Z · LW(p) · GW(p)
Maybe. We've become less New Atheist-y than we used to be; this much is quite clear.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-27T12:50:00.622Z · LW(p) · GW(p)
Fuck yeah.
↑ comment by Multiheaded · 2012-06-27T11:39:24.275Z · LW(p) · GW(p)
before LW was OB. before OB was SL4, before that was...
There used to be solitary transhumanist visionaries/nutcases, like Timothy Leary or Robert Anton Wilson (very different in their amount of "rationality"), and there used to be, say, fans of Hofstadter or Jaynes, but the merging of "rationalism" and... orientation towards the future was certainly invented in the 1990s. Ah, what a blissful decade that was.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2012-06-27T12:16:13.323Z · LW(p) · GW(p)
the merging of "rationalism" and... orientation towards the future was certainly invented in the 1990s
Russian communism was a type of rationalist futurism: down with religion, plan the economy...
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-27T12:24:06.317Z · LW(p) · GW(p)
Hmm, yeah. I was thinking about the U.S. specifically, here.
↑ comment by Raemon · 2012-06-27T22:31:58.599Z · LW(p) · GW(p)
Can you unpack what you mean by self-censorship, exactly?
I regularly see people make frank comments about sexuality. There are maybe 4-5 people whose comments would be considered offensive in liberal circles, and many more whose comments would be at least somewhat off-putting. Whenever the subject comes up (no matter who brings it up, and which political stripes they wear), it often explodes into a giant thread of comments that's far more popular than whatever the original thread was ostensibly about.
I sometimes avoid making sex-related comments until after the thread has exploded, because by then most people have already made the same points; they're just repeating themselves, because talking about pet political issues is fun. (When I do end up posting in them, it's almost always because my own tribal affiliations are rankled and my brain thinks that engaging with strangers on the internet is an effective use of my time. I'm keenly aware as I write this that my justifications for engaging with you are basically meaningless and I'm just getting some cognitive cotton candy.) Am I self-censoring in a way you consider wrong?
I've seen numerous non-gender political threads get downvoted with a comment like "politics is the mindkiller" and then fade away quietly. My impression is that gender threads (even if downvoted) end up getting discussed in detail. People don't self-censor, and that includes criticism of ideas people disagree with and/or are offended by.
What exactly would you like to change?
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-06-28T08:54:20.049Z · LW(p) · GW(p)
Whenever the subject comes up (no matter who brings it up, and which political stripes they wear), it often explodes into a giant thread of comments that's far more popular than whatever the original thread was ostensibly about.
I think this observation is not incompatible with the self-censorship hypothesis. It could mean that the topic is somewhat taboo, so people don't want to make a serious article about it, but not completely taboo, so it gets mentioned in comments on other articles. And because it can never be officially resolved, it keeps repeating.
What would happen if LW had a similar "soft taboo" about, e.g., religion? What if the official policy were that we want to raise the sanity waterline by bringing basic rationality to as many people as possible, and criticizing religion would make many religious people feel unwelcome, therefore members are advised to avoid discussing any religion insensitively?
I guess the topic would appear frequently in completely unrelated articles. For example, in an article about the Many Worlds hypothesis, someone would oppose it precisely because it feels incompatible with the Bible; so the person would honestly describe their reasons. Immediately there would be a dozen comments about religion. Another article would explain some human behavior based on evolutionary psychology, and again, one spark and there would be a cluster of comments about religion. Etc. Precisely because people wouldn't feel allowed to write an article about how religion is completely wrong, they would express this sentiment in comments instead.
We should avoid mindkilling like this: if one person says "2+2 is good" and other person says "2+2 is bad", don't join the discussion, and downvote it. But if one person says "2+2=4" and other person says "2+2=5", ask them to show the evidence.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2012-06-28T09:48:22.893Z · LW(p) · GW(p)
What would happen if LW had a similar "soft taboo" about e.g. religion?
There is a rather large difference between LW attitudes to religion and to gender issues.
On religion, nearly everyone here agrees: all religions are factually wrong, and fundamentally so. There are a few exceptions, but not enough to make a controversy.
On gender, there is a visible lack of any such consensus. Those with a settled view on the matter may think that their view should be the consensus, but the fact is, it isn't.
↑ comment by OrphanWilde · 2012-06-27T14:25:27.483Z · LW(p) · GW(p)
I could write a post, but it wouldn't be in agreement with that one.
I had no interest in the opposite sex in High School. I was nerd hardcore. And was approached by multiple girls. (I noticed some even in my then-clueless state, and retrospection has made several more obvious to me; the girl who outright kissed me, for example, was hard to mistake for anything else.) I gave the "I just want to be friends" speech to a couple of them. I also, completely unintentionally, embarrassed the hell out of one girl, whose friend asked me to join her for lunch because she had a crush on me. She hid her face for sixty seconds after I came over, so I eventually patted her on the head, entirely unsure what else to do, and went back to my table.
...yeah, actually, I doubt any of the girls who pursued me in High School ever tried to take the initiative again.
Replies from: None↑ comment by [deleted] · 2012-06-27T14:39:01.351Z · LW(p) · GW(p)
I know how you feel; I utterly missed such interest myself back then.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-06-27T14:46:54.894Z · LW(p) · GW(p)
Maybe there's a stable reason girls/women don't initiate; earlier onset of puberty in girls means that their first few attempts fail miserably on boys who don't yet reciprocate that interest.
Replies from: None↑ comment by [deleted] · 2012-06-27T15:25:40.413Z · LW(p) · GW(p)
Since you mention this, I find it weird that we still group students by their age, as if date of manufacture were the most important feature of their socialization and education.
We are forgetting how fundamentally weird it is to segregate children by age in this way from the perspective of traditional culture.
Replies from: Emile↑ comment by Emile · 2012-06-27T17:01:28.771Z · LW(p) · GW(p)
Have you read The Nurture Assumption? There's a chapter on that: in the West, someone who's small or immature for his class level will be at the bottom of the pecking order throughout his education, whereas in a traditional society, where kids self-segregate by age in a more flexible manner, a kid will grow from being the smallest of his group to the largest, and so will have a wider diversity of experience.
It's a pretty convincing reason to not make your kid skip a class.
Replies from: None↑ comment by [deleted] · 2012-06-27T17:34:47.699Z · LW(p) · GW(p)
Also a good reason to consider home-schooling, or even enrolling them in primary school one year later.
Replies from: Emile↑ comment by Emile · 2012-06-27T17:41:59.619Z · LW(p) · GW(p)
As a very rough approximation:
- A normal western kid will mostly get used to a relatively fixed position in the group in terms of size / maturity
- A normal kid in a traditional village society will experience the whole range of size/maturity positions in the group
- A homeschooled kid will not get as much experience being in a peer group
It's not clear that homeschooling is better than the fixed position option (though it may be! But probably for other reasons).
↑ comment by Multiheaded · 2012-06-27T13:01:08.570Z · LW(p) · GW(p)
The post itself is decent (although rather US-centric and imprecise), but reading through the comments there, I'm very grateful for whatever changes the community has undergone since then. Most of them are unpleasant to read, for various reasons.
Replies from: None, None↑ comment by [deleted] · 2012-06-27T13:07:31.817Z · LW(p) · GW(p)
Be specific.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-27T13:23:26.300Z · LW(p) · GW(p)
You do make the rules because the male must court you. He can't expect that you will court him. This isn't rocket science, but it looks as though I'm going to have to break it down for you:
See, women don't typically approach men. Get it? If women do not approach men, then men have two, and only two, options. These options are as follows:
Option A: Die alone, a virgin, unmarried, unloved, ignored, never experience a meaningful relationship with a woman, cold, numb, inhuman, tossed aside, emasculated, branded a "loser," praying for death rather than live a life of unbearable loneliness, regret, and bitterness (that is of course unless you are particularly disturbed, in which case you can shoot up your community college and spread your misery).
Option B: Approach women.
Are you beginning to understand the reality of the opposite sex yet?
Is it still fuzzy? Maybe you need me to connect the dots further. See, because most men are forced to choose option B, that means that most women can count on being approached. If it is true that you can count on being approached, you have two options. They are as follows:
Option A: Wait to be approached.
Option B: Approach men.
Whose options do you think are better? Now, because most women can count on being approached due to the highly unattractive nature of the male option A, women get to play judge and jury. See, you're like an employer sitting comfortably behind a desk and screening applicants. You're in the position of power, not I. Therefore, it's your rules that apply. I don't get to decide what you expect of me anymore than a job applicant gets to decide what his potential employer requires in an applicant.
You're the egg; the prize. Men who jostle and compete for your attention are like the millions of sperm struggling to swim up the birth canal, all but one who is doomed. Do you get it yet?
Are you beginning to appreciate the reality of the opposite sex? I realize that women give it very little thought - after all, it's women who are victimized by a cold, superficial, and dysfunctional male dominated society with all its harsh and unrealistic expectations of women. Women meekly scamper about a social wasteland making every effort to please men whose affection is required to validate their existence, while men, of course, highfive their frat bro douchebag buddies and reduce women to sex objects for fun. It couldn't be possible that men tailor their behavior to conform with women's expectations, that's crazy talk. Men aren't lonely, they don't require intimate contact with the opposite sex or social validation or human warmth. They're all just looking to get laid, right?....
and
"The term is meant to call attention to the fact that the archetypical Nice Guy(TM) mistakenly thinks of himself as a nice person."
I don't really think this is very understanding of the Nice Guy Syndrome. I think the archetypical Nice Guy does not think of himself as a very nice person, but is rather self-consciously aware of his shortcomings, such as social ineptitude and shyness. The point is rather, that he has been indoctrinated (by women and by women-oriented popular culture) to believe that women find such shortcomings endearing and lovable.
This is just very very low-status.
Replies from: None, None↑ comment by [deleted] · 2012-06-27T13:30:15.588Z · LW(p) · GW(p)
This is just very very low-status.
God forbid we have sympathy for low-status males. That might trick some into thinking their lives and well-being are worth as much as those of real people!
Imagine if our society cared about low-status men as much as it cares about the feelings of low-status women... the horror!
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-27T13:46:13.624Z · LW(p) · GW(p)
Those comments should've been better formulated and written in a better tone. Nothing is wrong with most individual sentences, but overall it doesn't paint a pretty picture.
Replies from: None↑ comment by [deleted] · 2012-06-27T13:50:07.129Z · LW(p) · GW(p)
I can agree with that. But then this is a dispute about levels of writing skill, not content, no?
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-27T13:52:21.415Z · LW(p) · GW(p)
These are connected. What and how we write influences what and how we think.
Replies from: None↑ comment by [deleted] · 2012-06-27T13:54:56.958Z · LW(p) · GW(p)
What and how we write influences what and how we think.
Well, sure, but doesn't this undermine the argument that:
All you've got to do is speak the language!
Replies from: Multiheaded
↑ comment by Multiheaded · 2012-06-27T13:57:03.458Z · LW(p) · GW(p)
If you only do it for a day or so, you get just a few corruption points, and may continue serving the Imperium at the price of but a tiny portion of your soul. Chaos has great gifts in store for those who refuse to be consumed by it!
Replies from: None↑ comment by [deleted] · 2012-06-27T13:31:39.861Z · LW(p) · GW(p)
See, women don't typically approach men. Get it?
This is plainly true in a descriptive sense.
Replies from: army1987, Multiheaded↑ comment by A1987dM (army1987) · 2012-06-27T19:10:56.248Z · LW(p) · GW(p)
Is it?
↑ comment by Multiheaded · 2012-06-27T13:41:56.908Z · LW(p) · GW(p)
OF COURSE it is. My problem is with the tone and the general style.
↑ comment by [deleted] · 2012-06-27T13:05:50.246Z · LW(p) · GW(p)
Agreed. The advantage of LW_2012 over OB_2008 is that there are no longer posts like this or this, which promote horribly incorrect gender stereotypes.
Replies from: None, None↑ comment by [deleted] · 2012-06-27T13:12:12.441Z · LW(p) · GW(p)
I flat-out disagree; Male Sati is a perfectly OK article. There is, in my opinion, nothing harmful or unseemly about it, at least nothing in excess of what we see on other topics here.
Do you have any idea at all what reading this site is like if you have a different set of preferences? We never make any effort to make this site more inclusive of ideological or value diversity, when it is precisely this that might help us refine the art further!
Replies from: None↑ comment by [deleted] · 2012-06-27T13:54:58.096Z · LW(p) · GW(p)
Here are a handful of my specific objections to Modern Male Sati:
- Hanson is arguing that cryonicists' wives should be accepting of the fact that their husbands are a) spending a significant portion of their income on life extension, and b) spending a lot of time thinking about what they are going to do after their wives are dead, and if they can't accept these things, they are morally equivalent to widow-burners. This is not only needlessly insulting, but also an extremely unfair comparison.
- In making this comparison, Hanson is also calling cryonicists' wives selfish for not letting their husbands do what they want. This is a very male view of what a long-term relationship should be like, without anything to counterbalance it. It comes off like a complaint, sort of like, "my wife won't let me go out to the bar with my male friends."
- Hanson writes: " It seems clear to me that opposition is driven by the possibility that it might actually work." This is wrong--it seems pretty obvious that your spouse believing in the "a)" and "b)" I listed above are valid reasons to be frustrated with them, regardless of whether you actually believe them. Also, this line strikes me as cheap point-scoring for cryonics (although I don't know if Hanson intended it this way).
- Hanson implicitly assumes that this is a gender issue, and talks about it as such, but this isn't necessarily so. What about men who have cryonicist wives? It's quite possible that there actually is a gender element involved here, but not even asking the question is what I object to.
- Hanson's tone encourages others to talk about women in a specific way, as an "other," or an out-group. This is bad for various reasons that should be somewhat self-evident.
Do you have any idea at all what reading this site is like if you have a different set of preferences? We never make any effort to make this site more inclusive of ideological or value diversity, when it is precisely this that might help us refine the art further!
No, I don't think I know what it's like reading this site with a different set of preferences. That said, I would like to see some value diversity, and I would welcome some frank discussions of gender politics. But. There should also be people writing harshly-worded rebuttals when someone says something dreadfully wrong about the opposite gender or promotes some untrue stereotype.
It might also be worth noting that lack of value diversity is the reason I object to OB_2008. Factual content aside, Modern Male Sati and Is Overcoming Bias Male? promote a very specific view of gender politics that will anger and deter some potential readers. This creates a kind of evaporative cooling effect where posters can be even more wrong about gender politics and have no one to call them out on it.
Replies from: gwern, None↑ comment by gwern · 2012-06-27T16:04:40.728Z · LW(p) · GW(p)
a) spending a significant portion of their income on life extension, and b) spending a lot of time thinking about what they are going to do after their wives are dead, and if they can't accept these things, they are morally equivalent to widow-burners.
Indian widows would use up a great deal of the husband's estate while living on for unknown years or decades (the usual age imbalance + the female longevity advantage). As for thinking about afterwards... well, I imagine they would have if they had had the option, as does anyone who takes out life insurance and isn't expected to forego any options or treatments.
This is not only needlessly insulting, but also an extremely unfair comparison.
Assuming the conclusion. The question is are the outcomes equivalent... Reading your comment, I get the feeling you're not actually grappling with the argument but instead venting about tone and values and outgroups.
Hanson is also calling cryonicists' wives selfish for not letting their husbands do what they want.
Oh, so if the husband agrees not to go out to bars, then cryonics is now acceptable to you and the wife? A mutual satisfaction of preferences, and given how expensive alcohol is, it evens the financial tables too! Color me skeptical that this would actually work...
If this were a religious dispute, like, say, which faith to raise the kids in, would you be objecting? Is it 'selfish' for a Jewish dad to want to raise his kids Jewish? If it is, you seem to be seriously privileging the preferences of wives over husbands on all matters, and if not, it'd be interesting to see you try to find a distinction which makes some choices of education more important than cryonics!
What about men who have cryonicist wives? It's quite possible that there actually is a gender element involved here, but not even asking the question is what I object to.
Opposition to cryonics really is a gender issue: look at how many men versus women are signed up! That alone is sufficient (cryonicist wives? rare as hen's teeth), but actually, there's even better data than that in "Is That What Love is? The Hostile Wife Phenomenon in Cryonics", by Michael G. Darwin, Chana de Wolf, and Aschwin de Wolf; look at the table in the appendix.
Replies from: None↑ comment by [deleted] · 2012-06-27T19:16:15.335Z · LW(p) · GW(p)
Assuming the conclusion. The question is are the outcomes equivalent...
It's an unfair comparison because widow-burning comes with strong emotional/moral connotations, irrespective of actual outcomes. It's like (forgive me) comparing someone to Hitler, in the sense that even if the outcome you're talking about is equivalent to Hitler, the emotional reaction that "X is like Hitler" provokes is still disproportionately too large. (Meta-note: Let's call this Meta-Godwin's Law: comparing something to comparing something to Hitler.)
As for the actual outcomes: It seems to me that there is some asymmetry, because the widow is spending her husband's money after he is dead, whereas the cryonicist is doing the spending while still alive. But I'll drop this point because, as you said, I am less interested in the actual argument and more interested in how it was framed.
Reading your comment, I get the feeling you're not actually grappling with the argument but instead venting about tone and values and outgroups.
Yes; I explicitly stated this in my fifth bullet point.
Oh, so if the husband agrees not to go out to bars, then cryonics is now acceptable to you and the wife? A mutual satisfaction of preferences, and given how expensive alcohol is, it evens the financial tables too! Color me skeptical that this would actually work...
This is not at all what I'm arguing. I am arguing that Hanson's post pattern-matches to a common male stereotype, the overly-controlling wife. Quoting myself, "This is a very male view of what a long-term relationship should be like, without anything to counterbalance it." I don't think the exchange you describe would actually work in practice.
If this were a religious dispute, like, say, which faith to raise the kids in, would you be objecting? Is it 'selfish' for a Jewish dad to want to raise his kids Jewish? If it is, you seem to be seriously privileging the preferences of wives over husbands on all matters, and if not, it'd be interesting to see you try to find a distinction which makes some choices of education more important than cryonics!
Forgive me, I do not understand how this is related to the point I was making. I don't see the correspondence between this and cryonics. Additionally, this example is a massive mind-killer for me for personal reasons and I don't think I'm capable of discussing it in a rational manner. I'll just say a few more things on this point: I am not accusing cryonicists of being selfish. I am saying that it is unreasonable for Hanson to accuse wives of being selfish because of the large, presumably negative impact it has on a relationship. I am also not attempting to privilege wives' preferences over husbands'; apologies for any miscommunication that caused that perception. I should probably also add that I am male, which may help make this claim more credible.
try to find a distinction which makes some choices of education more important than cryonics!
Side comment: I have no idea how to even begin comparing these two things, but I think this point is indicative of the large inferential gap between you and me. My System 1 response was to value choice of religious education over cryonics, whereas you seem to be implying (if I'm parsing your comment correctly, which I may not be) that the latter is clearly more important.
Opposition to cryonics really is a gender issue: look at how many men versus women are signed up! That alone is sufficient (cryonicist wives? rare as hen's teeth), but actually, there's even better data than that in "Is That What Love is? The Hostile Wife Phenomenon in Cryonics", by Michael G. Darwin, Chana de Wolf, and Aschwin de Wolf; look at the table in the appendix.
Whoops. Ok. I didn't realize that.
↑ comment by [deleted] · 2012-06-27T13:58:28.995Z · LW(p) · GW(p)
There should also be people writing harshly-worded rebuttals when someone says something dreadfully wrong about the opposite gender or promotes some stereotype.
Can I write a harshly-worded rebuttal of the idea that promoting stereotypes is always morally wrong? Or perhaps an essay on how stereotypes are useful?
Replies from: None↑ comment by [deleted] · 2012-06-27T14:03:28.137Z · LW(p) · GW(p)
Oh, of course. In fact, before I saw your comment I changed the wording to "untrue stereotype." Some stereotypes are indeed true and/or useful. What I object to is assuming that certain stereotypes are true without evidence, and speaking as if they are true, especially when said stereotypes make strong moral claims about some group. This is what Hanson does in Modern Male Sati and Is Overcoming Bias Male?
Edit: Tone is also important. Talking about some group as if they are an out-group is generally a bad thing. The two posts by Hanson that I mentioned talk about women as if they are weird alien creatures who happen to visit his blog.
Replies from: None↑ comment by [deleted] · 2012-06-27T14:07:44.864Z · LW(p) · GW(p)
Oh, of course. In fact, before I saw your comment I changed the wording to "untrue stereotype."
Ah ok! I have no problem with such a proposed norm then.
Replies from: None↑ comment by [deleted] · 2012-06-27T14:11:06.926Z · LW(p) · GW(p)
Hold on a minute, though--I'm not sure we actually agree here. I envision this kind of norm excluding posts like Modern Male Sati and Is Overcoming Bias Male?. Do you?
Replies from: None↑ comment by [deleted] · 2012-06-27T14:15:29.584Z · LW(p) · GW(p)
I'm OK with that, as long as we first get to have a fair meta-level debate about a norm of excluding interesting posts like Modern Male Sati and the like. Also, one should be allowed to challenge such norms later if circumstances change.
I mean, what kind of a world would it be if people violated every norm they disagreed with? As long as the norm-making system is generally OK, it's better not to sabotage it. And who knows, maybe I would be convinced in such a debate as well.
Replies from: None
comment by beoShaffer · 2012-06-16T03:56:08.074Z · LW(p) · GW(p)
Random thought: if we assume a large universe, does that imply that somewhere/somewhen there is a novel that just happens to perfectly resemble our lives? If it does, I am so going to acausally break the fourth wall. Bonus question: how does this intersect with the rules of the internet?
Replies from: vi21maobk9vp, Kaj_Sotala, Alejandro1, army1987↑ comment by vi21maobk9vp · 2012-06-17T06:06:09.606Z · LW(p) · GW(p)
Don't worry, whether you do this or not, there is a novel where you do and a novel where you don't, without any other distinctions.
↑ comment by Kaj_Sotala · 2012-06-16T07:54:59.788Z · LW(p) · GW(p)
Seems to imply it. Conversely, if you go to the "all possible worlds exist" level of a multiverse, then each novel (or other work of fiction) in our world describes events that actually happen in some other world. If you limit yourself to just the "there's an infinite amount of stuff in our world" multiverse, then only novels describing events that would be physically and otherwise possible describe real events.
↑ comment by Alejandro1 · 2012-06-19T06:57:18.112Z · LW(p) · GW(p)
When it was proclaimed that the Library contained all books, the first impression was one of extravagant happiness. All men felt themselves to be the masters of an intact and secret treasure. There was no personal or world problem whose eloquent solution did not exist in some hexagon. The universe was justified, the universe suddenly usurped the unlimited dimensions of hope. At that time a great deal was said about the Vindications: books of apology and prophecy which vindicated for all time the acts of every man in the universe and retained prodigious arcana for his future. Thousands of the greedy abandoned their sweet native hexagons and rushed up the stairways, urged on by the vain intention of finding their Vindication. These pilgrims disputed in the narrow corridors, proffered dark curses, strangled each other on the divine stairways, flung the deceptive books into the air shafts, met their death cast down in a similar fashion by the inhabitants of remote regions. Others went mad ... The Vindications exist (I have seen two which refer to persons of the future, to persons who are perhaps not imaginary) but the searchers did not remember that the possibility of a man's finding his Vindication, or some treacherous variation thereof, can be computed as zero.
Jorge Luis Borges, The Library of Babel
Replies from: sketerpot↑ comment by sketerpot · 2012-06-28T23:00:53.829Z · LW(p) · GW(p)
That story has always bothered me. People find coherent text in the books too often, way too often for chance. If the Library of Babel really did work as the story claims, people would have given up after seeing ten million books of random gibberish in a row. That just ruined everything for me. This weird crackfic is bigger in scope, but much more believable for me because it has a selection mechanism to justify the plot.
↑ comment by A1987dM (army1987) · 2012-06-16T16:35:10.545Z · LW(p) · GW(p)
There's some alleged quotation about making your own life a work of art. IIRC it's been attributed to Friedrich Nietzsche, Gabriele d'Annunzio, Oscar Wilde, and/or Pope John Paul II.
comment by tgb · 2012-06-16T02:04:18.071Z · LW(p) · GW(p)
I am interested in reading on a fairly specific topic, and I would like suggestions. I don't know any way to describe it other than by giving the two examples I have thought of:
Some time ago my family and I visited India. There, among other things, we saw many cows with an extra, useless leg growing out of their backs near the shoulders. This mutation is presumably not beneficial to the cow, but it strikes me as beneficial to the amateur geneticist. Isn't it incredibly interesting that a leg can be the by-product of random mutation? Doesn't that tell us a lot about the way genes are structured - namely, that somewhere out there are genes that encode things at near the level of whole limbs: some small number of genes corresponds nearly directly to major, structural components of the cow. It's not all about molecules, or cells, or even tissues! Genes aren't like a bitmap image; they're hierarchical and structured. Wow!
Similarly, there are stories of people losing specific memory 'segments': say, their personal past, but not how to read and write, how to drive, or how to talk. Assuming that these stories are approximately true, this suggests that some forms of memory loss are not random. We wouldn't expect a hard-drive error to corrupt only the pictures of sunny days on your computer, since the hard drive doesn't know which pictures are of sunny days. We wouldn't even expect a computer virus to do that - at least not unless the pictures of sunny days were grouped together somewhere, say in a folder. So the brain doesn't store memories the way a computer stores images! Or memory loss isn't like hard-drive failure! Somewhere, memories are 'clumped' into personal things and general-knowledge things, so that we can lose one without losing the other, and without an unfathomable coincidence of chance.
Neither of these conclusions is specific or surprising, but I know nothing about neurology and nothing about genetics, so I'm not sure how to take these ideas further than my poor computer-science-driven analogies. If someone who really knew this subject, or some subset of it, wrote about it, I can't help feeling that it would be absolutely fascinating. Please let me know if there is such a book or article or blog post out there! Or even if you just have other observations that'll make me think "wow" like this, tell me!
Replies from: J_Taylor, pengvado↑ comment by J_Taylor · 2012-06-16T22:43:41.980Z · LW(p) · GW(p)
What makes you think that the extra limbs were caused by mutations? I know very little about bovine biology, but if we were dealing with a human, I would assume that an extra leg was likely caused by absorption of a sibling in utero. I have never heard of a mutation in mammals causing extra limb development. (Even weirder is the idea of a mutation causing an extra single leg, as opposed to an extra leg pair.) The vertebrate body plan simply does not seem to work that way.
Replies from: tgb↑ comment by tgb · 2012-06-17T18:46:35.417Z · LW(p) · GW(p)
Pure speculation! However, this was a widespread occurrence, not just one or two cows, hinting at some systematic cause. I also don't remember the details, as it was many years ago and I was quite young - it's possible that there was a pair of legs.
Replies from: J_Taylor↑ comment by J_Taylor · 2012-06-23T03:20:41.808Z · LW(p) · GW(p)
Forgive me, for my biology is a bit rusty.
A gene can become more common in a population without being selected for. However, invoking random genetic drift as an explanation is generally dirty pool, epistemically speaking. We should expect a gene that creates extra useless legs to be selected against. (Nutrients and energy spent maintaining the leg could be better used, the leg becomes more space for parasite invasion, etc.) Assuming that you were dealing with such cattle, you should assume that some humans were selecting for them. (No reason necessary. Humans totally do that sort of thing.)
I cannot think of any examples of a mutation causing extra limb development in vertebrates. However, certain parasites can totally cause extra limb development in amphibians. I doubt this is the case, but it is more likely than mutation.
Alternatively, consider that there may be a selection effect on your observations. I wager that Indian cattle are less likely to be culled for having an extra leg than American cattle are. I'm just going off of stereotypes here, however.
↑ comment by pengvado · 2012-06-16T12:22:41.442Z · LW(p) · GW(p)
'clumped' into personal-things and general-knowledge things so that we can lose one without losing the other
Are you sure that your example is personal vs general, rather than episodic vs procedural? The latter distinction much more obviously benefits from different encodings or being connected to different parts of the brain.
Replies from: tgb
comment by [deleted] · 2012-06-29T17:24:13.037Z · LW(p) · GW(p)
Related to: List of public drafts on LessWrong
Is meritocracy inhumane?
Consider how meritocracy leaches the lower and middle classes of highly capable people, and how this increases the actual differences, both in culture and in ability, between the various parts of a society, widening the gap between them. It seems to make sense that, ceteris paribus, they will live more segregated from each other than ever before.
Now, merit has many dimensions, but let's take the example of a trait that helps with virtually anything: intelligence. Highly intelligent people have positive externalities they don't fully capture. Always using the best man for the job should produce more wealth for society as a whole. It also appeals to our sense of fairness. Isn't it better that the most competent man gets the job than the one with the highest title of nobility, or from the right ethnic group, or the one who got the winning lottery ticket?
Let us, for the sake of argument, leave aside problems with utilitarianism and ask: does this automatically mean we have a net gain in utility? The answer seems to be no. There is a transfer of wealth and quality of life not just from the less deserving to the more deserving, but from the lower and lower-middle classes to the upper classes. If people basically get the position in society they deserve, then the communities they leave behind are also deprived of their positive (or negative) externalities. Meritocratic societies have proven fabulously good at creating wealth, and because of our impulses nearly all of them have instituted expensive welfare programs. But consider what welfare is in the real world: a centralized attempt, often lacking in feedback or flexibility. It can never match the local positive externalities of competent/nice/smart people solving the problems they see around themselves. Those people simply don't exist any more in those social groups! If someone were trying to reach Pareto-optimal solutions, this would seem incredibly silly and harmful!
With humans, at least, centralized efforts never seem to be as efficient a way to help as simply settling a good mix of the talented poor among those who need it. Now, obviously meritocracy produces incredible amounts of wealth, and this is probably a good thing in itself, but since we can't yet transform that wealth into happiness, and Western societies have proven incapable of turning it into something as vital to psychological well-being as safety from violence, are we really experiencing gains in utility? Some might dispute the safety claim by noting that murder rates are lower in the US today than in the 1960s. But this is an illusion: the rate of violent assault is higher; it's just that the fraction of violent assaults that result in death has fallen significantly because of advances in trauma medicine. London today is worse at suppressing crime than the London of the 1900s, despite the latter presumably having had less wealth available for the task. I find it telling that even advances in technology, and the erosion of privacy brought about by technology, for example CCTV surveillance, don't seem enough to counteract this. But I'm getting into Moldbuggery here.
Now, if society is on the brink of starvation, maybe meritocracy is a sad fact of life. But in a rich modern society, where no one is starving and the main cost of being poor is being stuck living among dysfunctional poor people, can we really say this is a net utilitarian gain? Recall that greater divergence between the managing and the managed class means that the information problem and the principal-agent problem get worse.
Middle Class society seems incompatible with meritocracy. As does any kind of egalitarianism.
[unfinished draft]
Replies from: Vladimir_M↑ comment by Vladimir_M · 2012-06-29T20:26:01.190Z · LW(p) · GW(p)
I see at least two other major problems with meritocracy.
First, a meritocracy opens for talented people not only positions of productive economic and intellectual activity, but also positions of rent-seeking. So while it's certainly great that meritocracy in science has given us von Neumann, meritocracy in other areas of life has at the same time given us von Neumanns of rent-seeking, who have taken the practices of rent-seeking to an unprecedented extent and to ever more ingenious, intellectually involved, and emotionally appealing rationalizations. (In particular, this is also true of those areas of science that have been captured by rent-seekers.)
Worse yet, the wealth and status captured by the rent-seekers are, by themselves, the smaller problem here. The really bad problem is that these ingenious rationalizations for rent-seeking, once successfully sold to the intellectual public, become a firmly entrenched part of the respectable public opinion -- and since they are directly entangled with power and status, questioning them becomes a dangerous taboo violation. (And even worse, as it always is with humans, the most successful elite rent-seekers will be those who honestly internalize these beliefs, thus leading to a society headed by a truly delusional elite.) I believe that this is one of the main mechanisms behind our civilization's drift away from reality on numerous issues for the last century or so.
Second, in meritocracy, unless you're at the very top, it's hard to avoid feeling like a failure, since you'll always end up next to people whose greater success clearly reminds you of your inferior merit.
Replies from: None, Multiheaded↑ comment by [deleted] · 2012-06-29T21:02:27.767Z · LW(p) · GW(p)
Second, in meritocracy, unless you're at the very top, it's hard to avoid feeling like a failure, since you'll always end up next to people whose greater success clearly reminds you of your inferior merit.
Not only did the Medieval peasant have good reason to believe that Kings weren't really that different from him as people, merely occupying a different place in society; Kings also had an easier time looking at a poor peasant and saying to themselves, "there but for the grace of God go I."
In a meritocracy it is easier to disdain and dehumanize those who fail.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-06-29T21:09:40.483Z · LW(p) · GW(p)
Do you mean to suggest that a significant percentage of Medieval peasants in fact considered Kings to not be all that different from themselves as people, and that a significant percentage of Medieval Kings actually said that there but for the grace of God go they with respect to a poor peasant?
Or merely that it was in some sense easier for them to do so, even if that wasn't actually demonstrated by their actions?
Replies from: wedrifid, None↑ comment by wedrifid · 2012-06-29T22:38:26.539Z · LW(p) · GW(p)
Do you mean to suggest that a significant percentage of Medieval peasants in fact considered Kings to not be all that different from themselves as people,
That sounds like something I'd keep to myself as a medieval peasant if I did believe it. As such it may be the sort of thing that said peasants would tend not to think.
(Who am I kidding? I'd totally say it. Then get killed. I love living in an environment where mistakes have less drastic consequences than execution. It allows for so much more learning from experience!)
↑ comment by [deleted] · 2012-06-30T07:28:08.738Z · LW(p) · GW(p)
Or merely that it was in some sense easier for them to do so, even if that wasn't actually demonstrated by their actions?
The latter. The former is an empirical claim I'm not yet sure how we could properly resolve. But there are reasons to think it may have been true.
After all, the King is a Christian, and so am I. It is merely that God has placed a greater burden of responsibility on him, and one of toil on me. We all have our own cross to carry.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-09-03T12:06:56.294Z · LW(p) · GW(p)
I'd say you're looking at the history of feudal hierarchy through rose-tinted glasses. People who are high in the instrumental hierarchy of decisions (like absolute rulers) also tend to gain a similarly high place in all other kinds of hierarchies ("moral", etc.) due to the halo effect and such. The fact that social, or at least moral, egalitarianism logically follows from Christian ideals doesn't mean that self-identified Christians will bother to apply it to their view of the tribe.
Remember, the English word 'villain' originally meant 'peasant'/'serf'. It sounds like a safe assumption to me that the peasants were treated as subhuman creatures by most people above them in station.
Replies from: None, Mitchell_Porter↑ comment by [deleted] · 2013-01-22T20:17:18.188Z · LW(p) · GW(p)
Remember, the English word 'villain' originally meant 'peasant'/'serf'. It sounds like a safe assumption to me that the peasants were treated as subhuman creatures by most people above them in station.
James A. Donald disagrees.
A yeoman was the lowest rank of landowner, one who worked his own land or his family's land, in modern terminology a peasant farmer. A villain was a sharecropper, a farmer with no land of his own, semi-free, more free than a serf, though not directly equivalent to the modern free laborer. Naturally yeomen had a strong vested interest in the rule of law, for they had much to lose and little to gain from the breakdown in the rule of law. Villains had little to gain, but less to lose. People acted in accordance with their interests, and so the word yeoman came to mean a man who uses force in a brave and honorable manner, in accordance with his duty and the law, and villain came to mean a man who uses force lawlessly, to rob and destroy.
It makes quite a bit of sense. Since incentives matter, I would tend to agree.
Since I know about the past interactions you two have had here, I would appreciate it if you just focused on the argument cited and didn't snipe at James's other writings or character.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-01-27T22:43:21.474Z · LW(p) · GW(p)
I'm curious what you think, more generally, of the article you linked to? Specifically, the notion of natural rights.
↑ comment by Mitchell_Porter · 2012-09-03T12:18:03.590Z · LW(p) · GW(p)
Someone thinks the usage originates from an upper-class belief that the lower class had lower standards of behavior.
↑ comment by Multiheaded · 2012-08-22T20:55:05.550Z · LW(p) · GW(p)
Hm... so to clarify your position, would you call, say, Saul Alinsky a destructive rent-seeker in some sense? Hayden? Chomsky? All high-status among the U.S. "New Left" (which you presumably - ahem - don't have much patience for) - yet after reading quite a bit on all three, they strike me as reasonable people, responsible about what they preached.
(Yes, yes, of course I get that the main thrust of your argument is about tenured academics. But what you make of these cases - activists who think they're doing some rigorous social thinking on the side - is quite interesting to me.)
comment by gwern · 2012-06-15T14:31:14.227Z · LW(p) · GW(p)
After a painful evening, I got an A/B test going on my site using Google Website Optimizer*: testing the CSS max-width property (800, 900, 1000, 1200, 1300, & 1400px). I noticed that most sites seem to set it much more narrowly than I did, e.g. Readability. I set the 'conversion' target to be a 40-second timeout, as a way of measuring 'are you still reading this?'
Overnight each variation got ~60 visitors. The original 1400px converts at 67.2% ± 11% while the top candidate 1300px converts at 82.3% ± 9.0% (an improvement of 22.4%) with an estimated 92.9% chance of beating the original. This suggests that a switch would materially increase how much time people spend reading my stuff.
(The other widths: currently, 1000px: 71.0% ± 10%; 900px: 68.1% ± 10%; 1200px: 66.7% ± 11%; 800px: 64.2% ± 11%.)
This is pretty cool - I was blind but now can see - yet I can't help but wonder about the limits. Has anyone else thoroughly A/B-tested their personal sites? At what point do diminishing returns set in?
* I would prefer to use Optimizely or Visual Website Optimizer, but they charge just ludicrous sums: if I wanted to test my 50k monthly visitors, I'd be paying hundreds of dollars a month!
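(For the curious: a "chance of beating the original" figure like that can be sanity-checked by putting a Beta posterior on each variation's conversion rate and sampling. A minimal sketch, with counts rounded from the numbers above; GWO's internal calculation may well differ:)

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, samples=100000):
    """Monte Carlo estimate of P(rate_B > rate_A) under uniform Beta(1,1) priors."""
    wins = 0
    for _ in range(samples):
        # Posterior for each conversion rate: Beta(conversions + 1, misses + 1).
        a = random.betavariate(conv_a + 1, n_a - conv_a + 1)
        b = random.betavariate(conv_b + 1, n_b - conv_b + 1)
        if b > a:
            wins += 1
    return wins / samples

# Rounded overnight numbers: ~60 visitors per variation.
print(prob_b_beats_a(conv_a=40, n_a=60,   # 1400px converting at ~67%
                     conv_b=49, n_b=60))  # 1300px converting at ~82%
```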
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2012-06-15T19:33:52.282Z · LW(p) · GW(p)
Do you know the size of your readers' windows?
How is the 93% calculated? Does it correct for multiple comparisons?
Given some outside knowledge, that these 6 choices are not unrelated but come from an ordered space of choices, the result that one value is special and all the others produce identical results is implausible. I predict that it is a fluke.
Replies from: gwern↑ comment by gwern · 2012-06-15T19:47:43.101Z · LW(p) · GW(p)
- No, but it can probably be dug out of Google Analytics. I'll let the experiment finish first.
- I'm not sure how exactly it is calculated. On what is apparently an official blog, the author says in a comment: "We do correct for multiple comparisons using the Bonferroni adjustment. We've looked into others, but they don't offer that much more improvement over this conservative approach."
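(For reference, the Bonferroni adjustment just divides the significance threshold by the number of comparisons, so with six variants a p-value has to clear alpha/6. A toy sketch with invented p-values:)

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Test each hypothesis against alpha / m instead of alpha."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# Six invented p-values, one per width variant vs. the original:
print(bonferroni_significant([0.07, 0.04, 0.21, 0.007, 0.3, 0.6]))
# -> only the 0.007 clears the corrected threshold of 0.05/6 ~= 0.0083
```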
Yes, I'm finding the result odd. I really did expect some sort of inverted-V result where a medium-sized max-width was "just right". Unfortunately, with a doubling of the sample size, the ordering remains pretty much the same: 1300px beats everyone, with 900px passing 1200px and 1100px. I'm starting to wonder if maybe there are 2 distinct populations of users - maybe desktop users with wide screens and then smartphones? That doesn't quite make sense, since the phones should be setting their own width, but...
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2012-06-15T20:52:07.720Z · LW(p) · GW(p)
A bimodal distribution wouldn't surprise me. What I don't believe is a spike in the middle of a plain. If you had chosen increments of 200, the 1300 spike would have been completely invisible!
comment by [deleted] · 2012-06-30T09:21:07.962Z · LW(p) · GW(p)
New heuristic: When writing an article for LessWrong assume the casual reader knows about the material covered in HPMOR.
I used to think one could assume they had read the Sequences and some other key stuff (Hanson etc.), but looking at debates this simply can't be true for more than a third of current LW users.
comment by sixes_and_sevens · 2012-06-15T11:03:59.736Z · LW(p) · GW(p)
I find it pretty easy to pursue a course of study and answer assessment questions on the subject. Experience teaches me that such assessment problems usually tell you how to solve them (either implicitly or explicitly), and that I won't gain a proper appreciation for the subject until I use it in a more poorly-defined situation.
I've been intending to get a decent understanding of the HTML5 canvas element for a while now, and last week I hit upon the idea of making a small point & click adventure puzzle game. This is quite ambitious given my past experience (I'm a dev, though much more at home with data than graphics or interaction design), but I decided that even if I abandon the project, I'll still have learned useful things from it. A week later, the only product I have to show for my effort is a blue blob whizzing round a 2.5D environment. I've succeeded in gaining an understanding of canvas, but quite by accident I've also consolidated my understanding of vector decomposition and projective transforms, which I learned about years ago but never actually used for my own purposes.
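(For anyone curious what that math buys you: the core operation in a 2.5D renderer is projecting world coordinates onto the screen plane. A minimal sketch of the idea, not my actual game code:)

```python
def project(x, y, z, screen_w=640, screen_h=480, focal=256.0):
    """Perspective-project a 3D world point into 2D screen coordinates.

    focal acts as the focal length: the farther away a point is
    (larger z), the closer it lands to the centre of the screen.
    """
    scale = focal / (focal + z)       # shrink with distance
    sx = screen_w / 2 + x * scale     # recentre on the canvas
    sy = screen_h / 2 + y * scale
    return sx, sy

print(project(100, 50, 0))    # on the screen plane: (420.0, 290.0)
print(project(100, 50, 256))  # one focal length deep: (370.0, 265.0)
```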
This got me thinking: I don't actually know what projects are going to let me develop certain specific skills and areas I want to develop. I'm currently studying a stats-heavy undergrad degree part-time with the intent of changing careers into something more data-sciencey in a few years. What projects should I set myself to develop those sorts of skills, (or alternatively, to alert me to the fact I'd really hate a career in data science)?
Replies from: jsalvatier↑ comment by jsalvatier · 2012-06-15T15:32:59.552Z · LW(p) · GW(p)
I could use similar advice, as I am in a similarish position.
comment by [deleted] · 2012-06-21T12:01:07.556Z · LW(p) · GW(p)
A fellow LessWrong user on IRC: "Good government seems to be a FAI-complete problem. "
Replies from: TheOtherDave, wedrifid↑ comment by TheOtherDave · 2012-06-21T13:13:49.755Z · LW(p) · GW(p)
Which ought not be surprising. Governments are nonhuman environment-optimizing systems that many people expect to align themselves with human values, despite not doing the necessary work to ensure that they will.
comment by DanArmak · 2012-06-20T19:33:22.017Z · LW(p) · GW(p)
I just read the new novel by Terry Pratchett and Stephen Baxter, The Long Earth. I didn't like it and don't recommend it (I read it because I loved other books by Pratchett, but there's no similarity here).
There was one thing in particular that bothered me. I read the first 10 reviews of the book that Google returns, and they were generally negative and complained about many things, but never mentioned this issue. Many described Baxter as a master of hard sci-fi, which makes it doubly strange.
Here's the problem: in this near-future story, gurer vf n Sbbzvat NV, nyernql fhcrevagryyvtrag naq nf cbjreshy nf n znwbe angvba, juvpu jvyy cebonoyl orpbzr zber cbjreshy guna gur erfg bs gur jbeyq pbzovarq va nabgure lrne be fb. And nobody in the world cares! It's not a plot point! I kept expecting it to at least be mentioned by one of the characters, but they're all completely 'meh'. Instead they obsess over minor things like arj ubzvavq fcrpvrf fznegre guna puvzcf, ohg abg nf fzneg nf uhznaf.
Have I been spoiled by reading too much LW? Has this happened to others with other fiction?
comment by NancyLebovitz · 2012-06-27T12:24:34.857Z · LW(p) · GW(p)
A usual idea of utopia is that chores-- repetitive, unsatisfying, necessary work to get one's situation back to a baseline-- are somehow eliminated. Weirdtopia would reverse this somehow. Any suggestions?
Replies from: sixes_and_sevens, Richard_Kennaway, Alicorn↑ comment by sixes_and_sevens · 2012-06-27T12:47:44.085Z · LW(p) · GW(p)
As the scope for complex task automation becomes broader, almost all problems become trivial. Satisfying hard work, with challenging and problem-solving elements, becomes a rare commodity. People work to identify non-trivial problems (a tedious process), which are traded for extortionate prices. A lengthy list of problems you've solved becomes a status symbol, not because of your problem-solving skills, but because you can afford to buy them.
Replies from: NancyLebovitz, NancyLebovitz↑ comment by NancyLebovitz · 2012-06-27T14:20:02.800Z · LW(p) · GW(p)
Another angle: Is it plausible that almost all problems become trivial, or will increased knowledge lead to finding more challenging problems?
The latter seems at least plausible, considering that the universe is much bigger than our brains, and this will presumably continue to be true.
Look at how much weirder the astronomical side of physics has gotten.
↑ comment by NancyLebovitz · 2012-06-27T13:39:35.321Z · LW(p) · GW(p)
I don't think you've answered my question, but you've got an interesting idea there.
What do people buy which would be more satisfying than solving the problems they've found?
Also, this may be a matter of the difference between your and my temperaments, but is finding non-trivial problems that tedious?
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2012-06-27T13:55:26.894Z · LW(p) · GW(p)
As it's the result of about two minutes thought, I'm not very confident about how internally consistent this idea is.
If finding non-trivial problems is tedious work, I imagine people with a preference for tedious work (or who just don't care about satisfying problems) would probably rather buy art/prostitutes/spaceship rides, etc. This is the bit I find hardest to internally reconcile, as a society in which most work has become trivially easy is probably post-scarcity.
I personally don't find the search for non-trivial problems all that tedious, but if I could turn to a computer and ask "is [problem X] trivial to solve?", and it came back with "yes" 99.999% of the time, I might think differently.
↑ comment by Richard_Kennaway · 2012-06-28T08:36:13.793Z · LW(p) · GW(p)
"The daily tasks of living give meaning to life. Chopping wood, drawing water: these are the highest accomplishments. Using machines to do these things empties life of life itself. To spend your days growing your own food, making with your own hands everything that you need, living as a natural part of nature like all the other animals: this is paradise. Contrast the seductive allure of machines and cities, raping our Mother for our vile enjoyment, waging war against the imaginary monsters of "disease" and "poverty" instead of accepting the natural balance of Nature, striving always to see who can most outdo the original sin of separating from the great apes. "Scientists" see our Mother as a corpse to be looted, but if we do not turn away from that false path, out of her eternal love she will wring our neck as any loving mother will do to a deformed child."
Deep green ecology, in other words.
↑ comment by Alicorn · 2012-06-27T18:05:18.379Z · LW(p) · GW(p)
Modify us to see real chores the way we see fun, addictive task-management games.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-06-28T08:17:31.195Z · LW(p) · GW(p)
It would be a subtle problem to manage that so that people don't spend excessive amounts of time on chores.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-06-28T14:19:12.841Z · LW(p) · GW(p)
Yes.
Heck, it's a subtle problem to even identify what an "excessive" amount of time to spend on chores is.
comment by gwern · 2012-06-22T19:52:17.731Z · LW(p) · GW(p)
Some more SIAI-related work: looking for examples of costly real-world cognitive biases: http://dl.dropbox.com/u/85192141/bias-examples.page
One of the more interesting sources is Heuer's Psychology of Intelligence Analysis. I recommend it, for the unfamiliar political-military examples if nothing else. (It's also good background reading for understanding the argument diagramming software coming from the intelligence community, not that anyone on LW actually uses it.)
Replies from: Grognor, beoShaffer↑ comment by beoShaffer · 2012-07-03T07:39:59.398Z · LW(p) · GW(p)
It's been a while since I read it, but I recall the book Sway being a good source of bias examples.
comment by djcb · 2012-06-17T21:21:39.495Z · LW(p) · GW(p)
I read quite a bit, and I really like some of the suggestions I found on LW. So, my question is: is there any recent or not-so-recent-but-really-good book you would recommend? Topics I'd like to read more about are:
- evolutionary psychology (I read some Robert Wright, I'd like to read something a bit more solid)
- status/prestige theory (Robin Hanson uses it all the time, but is there some good text discussing this?)
I'm happy to read pop-sci, as long as it's written with a skeptical, rationalist mindset. E.g., I liked Linden's The Accidental Mind, but take Gladwell's writings with a rather big grain of salt.
Replies from: wedrifid, Grognor↑ comment by Grognor · 2012-06-20T16:52:59.639Z · LW(p) · GW(p)
http://lesswrong.com/lw/82g/on_the_openness_personality_trait_rationality/ has a download of one book very close to this topicspace.
Replies from: djcb↑ comment by djcb · 2012-06-20T22:29:12.917Z · LW(p) · GW(p)
Thanks! The link doesn't seem to work, but I'll check out the book. Did you read it?
Replies from: Grognor↑ comment by Grognor · 2012-06-21T02:55:11.401Z · LW(p) · GW(p)
No, I haven't read it yet, but it's on my list. Here's another download link http://dl.dropbox.com/u/33627365/Scholarship/Spent%20Sex%20Evolution%20and%20Consumer%20Behavior.pdf
Replies from: djcb↑ comment by djcb · 2012-06-22T12:39:49.659Z · LW(p) · GW(p)
Thanks, Grognor!
Replies from: djcb↑ comment by djcb · 2012-07-04T15:04:49.469Z · LW(p) · GW(p)
I just finished reading it. The start is promising, discussing consumer behavior from the signaling/status perspective. There's some discussion of the Big Five personality traits + general intelligence, which was interesting (and which I'll need to look into a bit more deeply). It shows how these traits influence our buying habits, and the crazy things people do for a few status points...
The end of the book proposes some solutions to hyper-consumerism, and this part I did not particularly like -- in a few pages the writer comes up with some far-reaching plans (consumption tax etc.) to influence consumers; all highly speculative, and not likely to ever be realized.
Apart from the end, I liked it; the writer is quick & witty, and provides food for thought.
comment by jsalvatier · 2012-06-15T15:25:53.792Z · LW(p) · GW(p)
A question about acausal trade
(btw, I couldn't find a good introductory link for acausal trade; I would be grateful for one)
We discussed this at a LW Seattle meetup. It seems like the following is an argument for why all AIs with a decision theory that does acausal trade act as if they have the same utility function. That's a surprising conclusion to me which I hadn't seen before, but it also doesn't seem too hard to come up with, so I'm curious where I've gone off the rails. This argument has a very Will_Newsomey flavor to me.
Let's say we're in a big universe with many, many chances for intelligent life, but most of them are so far apart that they will never meet each other. Let's also say that UDT/TDT-like decision theories are in some sense the obviously correct decision theory to follow, so that when many civilizations build an AI, they use something like UDT/TDT. At their inception, these AIs will have very different goals, since the civilizations that built them would have very different evolutionary histories.
If many of these AIs can observe that the universe is such that there will be other UDT/TDT AIs out there with different goals, then each AI will trade acausally with the AIs it thinks are out there. Presumably each AI will have to study the universe and figure out a probability distribution over the goals of those AIs. Since the universe is large, each AI will expect many other AIs to be out there, and will thus bargain away most of its influence over its local area. Thus the starting goals of each AI will have only a minor influence on what it does; each AI will act as if it has some combined utility function.
What are the problems with this idea?
Replies from: Mitchell_Porter, sixes_and_sevens, Manfred, Kindly, JenniferRM↑ comment by Mitchell_Porter · 2012-06-16T01:39:18.908Z · LW(p) · GW(p)
Substitute the word causal for acausal. In a situation of "causal trade", does everyone end up with the same utility function?
Replies from: bogus, Will_Newsome, Will_Newsome↑ comment by bogus · 2012-06-17T18:58:34.822Z · LW(p) · GW(p)
In a situation of "causal trade", does everyone end up with the same utility function?
The Coase theorem does imply that perfect bargaining will lead agents to maximize a single welfare function. (This is what it means for the outcome to be "efficient".) Of course, the welfare function will depend on the agents' relative endowments (roughly, "wealth" or bargaining power).
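(A toy sketch of that claim, with invented utility functions: any efficient bargain maximizes some weighted sum of the two agents' utilities, with the weight standing in for relative endowment:)

```python
def efficient_split(weight_a, u_a, u_b, steps=100):
    """Division of one unit of a good that maximizes
    weight_a * U_a(share) + (1 - weight_a) * U_b(rest)."""
    shares = [i / steps for i in range(steps + 1)]
    return max(shares, key=lambda x: weight_a * u_a(x)
                                     + (1 - weight_a) * u_b(1 - x))

u = lambda share: share ** 0.5   # diminishing returns for both agents

print(efficient_split(0.5, u, u))  # equal bargaining power -> 0.5 each
print(efficient_split(0.8, u, u))  # larger endowment -> 0.94 of the good
```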
↑ comment by Will_Newsome · 2012-06-16T13:21:02.805Z · LW(p) · GW(p)
(Also remember that humans have to "simulate" each other using logic-like prior information even in the straightforward efficient-causal scenario—it would be prohibitively expensive for humans to re-derive all possible pooling equilibria &c. from scratch for each and every overlapping set of sense data. "Acausal" economics is just an edge case of normal economics.)
↑ comment by Will_Newsome · 2012-06-16T13:24:18.123Z · LW(p) · GW(p)
Unrelated question: Do you think it'd be fair to say that physics is the intersection of metaphysics and phenomenology?
↑ comment by sixes_and_sevens · 2012-06-15T17:46:07.879Z · LW(p) · GW(p)
The most glaring problem seems to be how it could deduce the goals of other AIs. It either implies the existence of some sort of universal goal system, or allows information to propagate faster than c.
Replies from: jsalvatier↑ comment by jsalvatier · 2012-06-15T19:52:19.727Z · LW(p) · GW(p)
What I had in mind was that each of the AIs would come up with a distribution over the kinds of civilizations which are likely to arise in the universe by predicting the kinds of planets out there (which is presumably something you can do since even we have models for this) and figuring out different potential evolutions for life that arises on those planets. Does that make sense?
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2012-06-15T20:24:33.589Z · LW(p) · GW(p)
I was going to respond saying I didn't think that would work as a method, but now I'm not so sure.
My counterargument would be to suggest that there's no goal system which can't arbitrarily come about as a Fisherian Runaway, and that our AI's acausal trade partners could be working on pretty much any optimisation criteria whatsoever. Thinking about it a bit more, I'm not entirely sure the Fisherian Runaway argument is all that robust. There is, for example, presumably no Fisherian Runaway goal of immediate self-annihilation.
If there's some sort of structure to the space of possible goal systems, there may very well be a universally derivable distribution of goals our AI could find, and share with all its interstellar brethren. But there would need to be a lot of structure to it before it could start acting on their behalf, because otherwise the space would still be huge, and the probability of any given goal system would be dwarfed by the evidence of the goal system of its native civilisation.
There's a plot for a Cthulhonic horror tale lurking in here, whereby humanity creates an AI, which proceeds to deduce a universal goal preference for eliminating civilisations like humanity. Incomprehensible alien minds from the stars, psychically sharing horrible secrets written into the fabric of the universe.
Replies from: jsalvatier↑ comment by jsalvatier · 2012-06-15T21:54:31.640Z · LW(p) · GW(p)
Except for the eliminating-humans part, the Cthulhonic outcome seems almost like the default. We build an AI, prove that it implements our reflectively stable wishes, and then it still proceeds to pay almost no attention to what we thought we wanted.
One thing that might push back in the opposite direction is that if humans have heavily path-dependent preferences (which seems pretty plausible), or are selfish w.r.t. currently existing humans in some way, then an AI built for our wishes might not be willing to trade away much of humanity in exchange for resources far away.
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2012-06-15T22:41:16.650Z · LW(p) · GW(p)
The Cthulhonic outcome is only the case if there are identifiable points in the space of possible goal systems to which the AI can assign enough probability to make them credible acausal trade partners. Whether those identifiable points exist is not clear or obvious.
When it ruminates over possible varieties of sapient life in the universe, it would need to find clusters of goals that were (a) non-universal, (b) specific enough to actually act upon, and (c) so probabilistically dense that they didn't vanish into obscurity against humanity's preferences, which it possesses direct observational evidence for.
Whether those clusters exist, and if they do, whether they can be deduced a priori by sitting in a darkened room and thinking really hard, does not seem obvious either way. Intuitively, thinking about trying to draw specific conclusions from extremely dilute evidence, I'm inclined to think they can't, but I'm not prepared to inject that belief with a super amount of confidence, as I may very well think differently if I were a billion times smarter.
Replies from: jsalvatier↑ comment by jsalvatier · 2012-06-15T23:32:41.919Z · LW(p) · GW(p)
I think what matters is not so much the probability of goal clusters, but something like the expectation of the amount of resources that AIs with a particular goal cluster have access to. An AI might think that some specific goal cluster has only a 1:1000 chance of occurring anywhere, but that if it does, there are probably a million instances of it. I think this is the same as being certain that there are 1,000 (1 million / 1,000) AIs with that goal cluster. Which seems like enough to 'dilute' the preferences of any given AI.
If the universe is pretty big then it seems like it would be pretty easy to get large expectations even with low probabilities. (let me know if I'm not making sense)
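(Spelling out that arithmetic with the invented numbers above; note that the "same as being certain" move only goes through for an expected-utility maximizer whose payoff is linear in the number of instances:)

```python
p_exists = 1 / 1000          # chance the goal cluster occurs anywhere at all
instances_if_exists = 10**6  # rough number of instances, if it occurs at all
print(p_exists * instances_if_exists)  # 1000.0 expected instances overall
```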
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2012-06-16T01:25:22.403Z · LW(p) · GW(p)
The "million instances" is the size of the cluster, and yes, that would impact its weight, but I think it's arithmetically erroneous to suggest the density matters more than the probability. It depends entirely on what those densities and probabilities are, and you're just plucking numbers straight out of the air. Why not go the whole hog and suggest a goal cluster that happens nine times out of ten, with a gajillion instances?
I believe the salient questions are:
Do such clusters even exist? Can they be inferred from a poverty of evidence just by thinking about possible agents that may or may not arise in our universe with enough confidence to actually act upon? This boils down to whether, if I'm smart enough, I can sit in an empty room, think "what if..." about examples of something I've never seen before from an enormous space of possibilities, and come up with an accurate collection of properties for those things, weighted by probability. There are some things we can do that with and some things we can't. What category do alien goal systems fall into?
If they do exist, will they be specific enough for an AI to act upon? Even if it does deduce some inscrutable set of alien factors that we can't make sense of, will they be coherent? Humans care a lot about methods of governance, the moral status of unborn children, and who people should and shouldn't have sex with, but they don't agree on these things.
If they do exist, are there going to be many disparate clusters, or will they converge? If they do converge, how relatively far away from the median is humanity? If they're disparate, are the goals completely disjoint, or do they overlap and/or conflict with each other? More to the point, are they going to overlap and/or conflict with us?
I can't say how much we'd need to worry about a superintelligent TDT-agent implementing alien goals. That's a fact about the universe for which I don't have a lot of evidence. However, there's more than enough uncertainty surrounding the question for me to not lose any sleep over it.
↑ comment by Manfred · 2012-06-26T16:07:21.445Z · LW(p) · GW(p)
One problem is that, in order to actually get specific about utility functions, the AI would have to simulate another AI that is simulating it - that's like trying to put a manhole cover through its own manhole by putting it in a box first.
If we assume that the computational problems are solved, a toy model involving robots laying different colors of tile might be interesting to consider. In fact, there's probably a post in there. The effects will be different sizes for different classes of utility functions over tiles. In the case of infinitely many robots with cosmopolitan utility functions, you do get an interesting sort of agreement, though.
↑ comment by Kindly · 2012-06-15T17:57:30.188Z · LW(p) · GW(p)
This outcome is bad because bargaining away influence over the AI's local area in exchange for a small amount of control over the global utility function is a poor trade. But in that case, it's also a poor acausal trade.
A more reasonable acausal trade to make with other AIs would be to trade away influence over faraway places. After all, other AIs presumably care about those places more than our AI does, so this is a trade that's actually beneficial to both parties. It's even a marginally reasonable thing to do acausally.
Of course, this means that our AI isn't allowed to help the Babyeaters stop eating their babies, in accordance with its acausal agreement with the AI the Babyeaters could have made. But it also means that the Superhappy AI isn't allowed to help us become free of pain, because of its acausal agreement with our AI. Ideally, this would hold even if we hadn't made an AI yet.
Replies from: jsalvatier↑ comment by jsalvatier · 2012-06-15T19:56:46.874Z · LW(p) · GW(p)
This outcome is bad because bargaining away influence over the AI's local area in exchange for a small amount of control over the global utility function is a poor trade. But in that case, it's also a poor acausal trade.
I agree with your logic, but why do you say it's a bad trade? At first it seemed absurd to me, but after thinking about it I'm able to feel that it's the best possible outcome. Do you have more specific reasons why it's bad?
Replies from: Kindly↑ comment by Kindly · 2012-06-15T20:49:59.420Z · LW(p) · GW(p)
At best it means that the AI shapes our civilization into some sort of twisted extrapolation of what other alien races might like. In the worst case, it ends up calculating a high probability of existence for Evil Abhorrent Alien Race #176 which is in every way antithetical to the human race, and the acausal trade that it makes is that it wipes out the human race (satisfying #176's desires) so that if the #176 make an AI, that AI will wipe out their race as well (satisfying human desires, since you wouldn't believe the terrible, inhuman monstrous things those #176s were up to).
↑ comment by JenniferRM · 2012-06-15T23:49:38.652Z · LW(p) · GW(p)
That's a surprising conclusion to me which I hadn't seen before, but it also doesn't seem too hard to come up with, so I'm curious where I've gone off the rails. This argument has a very Will_Newsomey flavor to me.
Perhaps it is not wise to speculate out loud in this area until you've worked through three rounds of "ok, so what are the implications of that idea" and decided that it would help people to hear about the conclusions you've developed three steps back. You can frequently find interesting things when you wander around, but there are certain neighborhoods you should not explore with children along for the ride until you've been there before and made sure it's reasonably safe.
Perhaps you could send a PM to Will?
Replies from: tenlier↑ comment by tenlier · 2012-06-17T16:25:12.543Z · LW(p) · GW(p)
Not just going meta for the sake of it: I assert you have not sufficiently thought through the implications of promoting that sort of non-openness publicly on the board. Perhaps you could PM jsalvatier.
I'm lying, of course. But interesting to register points of strongest divergence between LW and conventional morality (JenniferRM's post, I mean; jsalvatier's is fine and interesting).
comment by mstevens · 2012-06-15T13:10:15.800Z · LW(p) · GW(p)
I'm feeling fairly negative on lesswrong this week. Time spent here feels unproductive, and I'm vaguely uncomfortable with the attitudes I'm developing. On the other hand there are interesting people to chat with.
Undecided what to do about this. Haven't managed to come up with anything to firm up my vague emotions into something specific.
Perhaps I'll take a break and see how it feels.
Replies from: None, shminux, David_Gerard, EStokes, Bruno_Coelho↑ comment by [deleted] · 2012-06-15T17:57:21.957Z · LW(p) · GW(p)
I was feeling fairly negative on Less Wrong recently. I ended up writing down a lot of things that bothered me in a half-formed angry Google Doc rant, saving it...
and then going back to reading Less Wrong a few days later.
It felt refreshing though, because Less Wrong has flaws and you are allowed to notice them and say to yourself "This! Why are some people doing this! It's so dumb and silly!"
That being said, I'm not sure that all of the arguments that my straw opponents were presenting in the half-formed doc are actually as weak as I was making them out to be. But it did make me feel more positive overall simply to sum up everything that had been bugging me at the time.
↑ comment by Shmi (shminux) · 2012-06-15T22:42:10.112Z · LW(p) · GW(p)
Hasn't worked for Konkvistador.
Replies from: None↑ comment by David_Gerard · 2012-06-15T22:42:49.261Z · LW(p) · GW(p)
What are the attitudes you are feeling uncomfortable with?
Replies from: mstevens↑ comment by mstevens · 2012-06-16T21:05:31.018Z · LW(p) · GW(p)
Hmm this is a bit fuzzy, as I said - part of my problem is that I just have a vague feeling and am having difficulty making it less vague. But:
- an uncomfortable air of superiority
- a bit too much association with right wing politics.
- Some of the PUA stuff is a bit weird (not discussed directly on the site so much but in related contexts)
↑ comment by CommanderShepard · 2012-06-17T17:30:13.039Z · LW(p) · GW(p)
It would very much help if you could name three examples for each of your complaints; this would help you see if this really is the source of your unease. It would also help others figure out if you are right.
an uncomfortable air of superiority
Overestimating our rationality and generally feeling like clearer thinkers than anyone ever? Or perhaps being unwilling to update on outside ideas, as Konkvistador recently complained?
a bit too much association with right wing politics.
There is a lot of right-wing politics on the IRC channel, but overall I don't think I've seen much on the main site. On net, the site's demographics are if anything remarkably left-wing.
Some of the PUA stuff is a bit weird (not discussed directly on the site so much but in related contexts)
The PUA stuff may come off as weird due to inferential distances, or due to people accumulating strange ideas because they can't sanity-check them. Both are the result of the community norm that now seems to strongly avoid gender issues, because we've proven time and again to be incapable of discussing them as we do most other things. This is a pattern that seems to go back to the old OB days.
↑ comment by EStokes · 2012-06-15T21:57:30.002Z · LW(p) · GW(p)
I use LW casually and my attitude towards it is pretty neutral/positive, but I recently got downvoted something like 10 times in past comments, it seems. A karma loss of 5%, which is a lot, comparing the amount of karma I have to how long I've been here. I didn't even get into a big argument or anything; the back-and-forth was pretty short. So my attitude toward LW is very meh right now. Sorry, sort of wanted to just say this somewhere. ugh :/
↑ comment by Bruno_Coelho · 2012-06-15T20:02:16.169Z · LW(p) · GW(p)
The fact that LW is a forum about rationality/science doesn't mean it's good for you all the time. Strategically speaking, redefine your goals.
Or maybe the quality of posts is not the same as it was before.
comment by Oscar_Cunningham · 2012-06-18T14:59:43.766Z · LW(p) · GW(p)
I'm trying to memorise mathematics using spaced repetition. What's the best way to transcribe proofs onto Anki flashcards to make them easy to learn? (ie what should the question and answer be?)
Replies from: ChristianKl, dbaupp, D_Malik↑ comment by ChristianKl · 2012-06-20T21:45:33.140Z · LW(p) · GW(p)
When it comes to formulating Anki cards, it's good to have the 20 rules from SuperMemo in mind.
The important thing is to understand before you memorize. You should never try to memorize a proof without understanding it in the first place.
Once you have understood the proof, think about what's interesting about it. Ask questions like: "What axioms does the proof use?" "Does the proof use axiom X?" Try to find as many questions with clear answers as you can. Being redundant is good.
If you find yourself asking a certain question frequently, invent a shorthand for it: axioms(proof X) can replace "What axioms does the proof use?"
If you really need to remember the whole proof then memorize it step by step.
Proof A:
Do A
Do B

becomes 2 cards:

Card 1:
Proof A:
[...]

Card 2:
Proof A:
Do A
[...]

If you have a long proof, that could mean 9 steps and 9 cards.
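(If you make many such decks, generating the cards mechanically is straightforward. A hypothetical sketch that emits question/answer pairs, which Anki can import from tab-separated text:)

```python
def incremental_cards(title, steps):
    """One card per proof step: each question shows the title plus all
    earlier steps and asks for the next one."""
    cards = []
    for i, step in enumerate(steps):
        question = "\n".join([title + ":"] + steps[:i] + ["[...]"])
        cards.append((question, step))
    return cards

for question, answer in incremental_cards("Proof A", ["Do A", "Do B"]):
    print(question, "->", answer)
```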
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2012-06-20T22:08:36.290Z · LW(p) · GW(p)
Thanks!
↑ comment by dbaupp · 2012-06-20T10:47:21.052Z · LW(p) · GW(p)
I've been doing something similar (maths in an Anki deck), and I haven't found a good way of doing so. My current method is just asking "Prove x" or "Outline a proof of x", with the proof wholesale in the answer, and then I run through the proof in my head calling it "Good" if I get all the major steps mostly correct. Some of my cards end up being quite long.
I have found that being explicit with asking for examples vs definitions is helpful: i.e. ask "What's the definition of a simple ring?" rather than "What's a simple ring?".
Replies from: ChristianKl↑ comment by ChristianKl · 2012-06-20T21:43:53.051Z · LW(p) · GW(p)
"def(simple ring)" is more efficient than "What's the definition of a simple ring?"
Replies from: dbaupp↑ comment by dbaupp · 2012-06-21T02:43:57.699Z · LW(p) · GW(p)
I find that having proper sentences in the questions means I can concentrate better (less effort to work out what it's asking, I guess), but each to their own.
Replies from: ChristianKl↑ comment by ChristianKl · 2012-06-21T10:39:09.073Z · LW(p) · GW(p)
If you have 50 cards in the style "def(...)", then it doesn't take any effort to work out what a card is asking anymore.
Rereading "What's the " over a thousand times wastes time. When you do Anki for longer periods of time, reducing the amount of time it takes to answer a card is essential.
↑ comment by D_Malik · 2012-06-24T17:37:48.431Z · LW(p) · GW(p)
A method that I've been toying with: dissect the proof into multiple simpler proofs, then dissect those even further if necessary. For instance, if you're proving that all X are Y, and the proof proceeds by proving that all X are Z and all Z are Y, then make 3 cards:
- One for proving that all X are Z.
- One for proving that all Z are Y.
- One for proving that all X are Y, which has as its answer simply "We know all X are Z, and we know all Z are Y."
That said, you should of course be completely certain that memorizing proofs is worthwhile. Rule of thumb: if there's anything you could do that would have a higher ratio of awesome to cost than X, don't do X before you've done that.
comment by DanArmak · 2012-06-30T15:11:12.255Z · LW(p) · GW(p)
Did the site CSS just change the font used for discussion (not Main) post bodies? It looks bad here.
Edit: it only happens with some posts. Like these:
http://lesswrong.com/r/discussion/lw/dd0/hedonic_vs_preference_utilitarianism_in_the/ http://lesswrong.com/r/discussion/lw/dc4/call_for_volunteers_publishing_the_sequences/
But not these:
http://lesswrong.com/r/discussion/lw/ddh/aubrey_de_grey_has_responded_to_his_iama_now_with/ http://lesswrong.com/r/discussion/lw/dcy/the_fiction_genome_project/
Is it perhaps a formatting change applied when posting?
Also, when I submit a new comment and then edit it, it now starts with an empty line.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-06-30T17:28:49.452Z · LW(p) · GW(p)
Fixed
Also, when I submit a new comment and then edit it, it now starts with an empty line.
It's a known Bug #315
comment by [deleted] · 2012-06-27T09:18:13.225Z · LW(p) · GW(p)
Sacredness as a Monster by Sister Y, aren't you glad I read cool blogs? :)
comment by Vladimir_Nesov · 2012-06-23T18:07:07.848Z · LW(p) · GW(p)
One more item for the FAI Critical Failure Table (humor/theory of lawful magic):
37. Any possibility automatically becomes real, whenever someone justifiably expects that possibility to obtain.
Discussion: Just expecting something isn't enough, so crazy people don't make crazy things happen. The anticipation has to be a reflection of real reasons for forming the anticipation (a justified belief). Bad things can be expected to happen as well as good things. What actually happens doesn't need to be understood in detail by anyone, the expectation only has to be close enough to the real effect, so the details of expectation-caused phenomena can lawfully exist independently of the content of people's expectations about them. Since a (justified) expectation is sufficient for something to happen, all sorts of miracles can happen. Since to happen, a miracle has to be expected to happen, it's necessary for someone to know about the miracle and to expect it to happen. Learning about a miracle from an untrustworthy (or mistakenly trusted) source doesn't make it happen, it's necessary for the knowledge of possibility (and sufficiently clear description) of a miracle to be communicated reliably (within the tolerance of what counts for an effect to have been correctly anticipated). The path of a powerful wizard is to study the world and its history, in order to make correct inferences about what's possible, thereby making it possible.
(Previously posted to the Jan 2012 thread by mistake.)
comment by Multiheaded · 2012-06-19T12:20:20.673Z · LW(p) · GW(p)
A poll, just for fun. Do you think that the rebels/Zionists in The Matrix were (mostly or completely) cruel, deluded fundamentalists committing one atrocity after another for no good reason, and that in-universe their actions were inexcusable?
Replies from: DanArmak, Multiheaded, Multiheaded, None, Multiheaded↑ comment by DanArmak · 2012-06-20T19:23:36.255Z · LW(p) · GW(p)
Upvote for "The Matrix makes no internal sense and there's no fun in discussing it."
Replies from: Multiheaded, DanArmak↑ comment by Multiheaded · 2012-06-20T19:50:25.883Z · LW(p) · GW(p)
I agree (the franchise established itself as rather one-dimensional... in about the first 40 minutes) - but hell, I get into discussions about TWILIGHT, man. I'm a slave to public discourse.
↑ comment by Multiheaded · 2012-06-19T12:20:40.149Z · LW(p) · GW(p)
Upvote for NO.
↑ comment by Multiheaded · 2012-06-19T12:20:30.273Z · LW(p) · GW(p)
Upvote for YES.
↑ comment by [deleted] · 2012-06-26T16:42:28.250Z · LW(p) · GW(p)
Wow. That sequence was drastically less violent than I remembered it being. I noticed (for I believe the first time) that they actually made some attempt to avoid infinite ammo action movie syndrome. Also I must have thought the cartwheel bit was cool when I first saw it, but now it looks quite ridiculous and/or dated.
Maybe it's time for a rewatch.
↑ comment by Multiheaded · 2012-06-19T12:20:47.402Z · LW(p) · GW(p)
Karma sink.
comment by Viliam_Bur · 2012-06-19T08:30:11.786Z · LW(p) · GW(p)
What is the meaning of the three-digit codes in American university courses? Such as: "Building a Search Engine (CS101)", "Crunching Social Networks (CS215)", "Programming A Robotic Car (CS373)", currently on Udacity.
Seems to me that 101 is always the introduction to the subject. But what about the other numbers? Do they correspond to some (subject-specific) standard? Are they arbitrary (perhaps with a general trend of giving more difficult courses higher numbers)?
Replies from: Dreaded_Anomaly, sixes_and_sevens, Nornagest↑ comment by Dreaded_Anomaly · 2012-06-19T12:56:02.778Z · LW(p) · GW(p)
The first digit is the most important. It indicates the "level" of the course: 100/1000 courses are freshman level, 200/2000 are sophomore level, etc. There is some flexibility in these classifications, though. Examples: My undergraduate university used 1000 for intro level, 2000 for intermediate level, 4000 for senior/advanced level, and 6000 for graduate level. (3000 and 5000 were reserved for courses at a satellite campus.) My graduate university uses 100, 200, 300, 400 for the corresponding undergraduate year levels, and 600, 700, 800 for graduate courses of increasing difficulty levels.
The other digits in the course number often indicate the rough order in which courses should be taken within a level. This is not always the case; sometimes they are just arbitrary, or they may indicate the order in which courses were added to the institute's offerings.
In general, though the numbers indicate the levels of the courses and the order in which they "should" be taken, students' schedules need not comply precisely (outside of course-specific prerequisite requirements).
↑ comment by sixes_and_sevens · 2012-06-19T10:37:44.124Z · LW(p) · GW(p)
It varies from institution to institution, but generally the first number indicates the year you're likely to study it, so "Psychology 101" is the first course you're likely to study in your first year of a degree involving psychology, which is why it's the introduction to the subject. The numbering gets messy for a variety of reasons.
I should point out I'm not an American university student, but this style of numbering system is becoming prevalent throughout the English-speaking world.
↑ comment by Nornagest · 2012-06-20T21:05:32.423Z · LW(p) · GW(p)
101's stereotypically the introduction to the course, but this sort of thing actually varies quite a bit between universities. Mine dropped the first digit for survey courses and introductory material; survey courses were generally higher two-digit numbers (i.e. Geology 64, Planetary Geology), while introductory courses were more often one-digit or lower two-digit numbers (i.e. Math 3A, Introduction to Calculus). Courses intended to be taken in sequence had a letter appended. Aside from survey courses, higher numbers generally indicated more advanced or specialized classes, though not necessarily more difficult ones.
Three digits indicated an upper-division (i.e. nominally junior- or senior-level) or graduate-level course. Upper-division undergrad courses were usually 100-level, and the 101 course was usually the first class you'd take that was intended only for people of your major; CS 101 was Algorithms and Abstract Data Types for me, for example, and I took it late in my sophomore year. Graduate courses were 200-level or higher.
comment by maia · 2012-06-15T16:16:17.511Z · LW(p) · GW(p)
We often hear about how professional philanthropy is a very good way to improve others' lives. Have any LWers actually gone this route?
Replies from: Larks, jsalvatier↑ comment by jsalvatier · 2012-06-15T19:57:40.851Z · LW(p) · GW(p)
We often hear that? What do you mean by professional philanthropy here?
Replies from: maia↑ comment by maia · 2012-06-18T00:46:44.009Z · LW(p) · GW(p)
I mean the general line of reasoning that goes, "Go do the highest-paying job you can get and then donate your extra money to AMF or other highly effective charities." The most oft-cited high-paying job seems to be to work on Wall Street or some such.
Replies from: jsalvatier↑ comment by jsalvatier · 2012-06-18T01:13:06.390Z · LW(p) · GW(p)
Oh, okay, I thought you meant something else.
comment by Viliam_Bur · 2012-06-15T11:17:45.397Z · LW(p) · GW(p)
I would like to try some programming in Lisp, could you give me some advice? I have noticed that in the programming community this topic is prone to heavy mindkilling, which is why I ask on LW instead of somewhere else.
There are many variants of Lisp. I would prefer to learn one that is really used these days for developing real-world applications. Something I could use to make e.g. a Tetris-like game. I will probably need some libraries for input and output; which ones do you recommend? I want free software that works out of the box, preferably on a Windows machine, without having to install a Linux emulator first. (If such a thing does not exist, please tell me, and recommend the second-best possibility.)
I would also like to have a decent development environment: something that allows me to manage multiple source code files, does syntax highlighting, and shows documentation for the functions I am writing. Again, preferably free and working out of the box on a Windows machine. Simply put, I would like an equivalent of what Eclipse is for Java.
Then, I would like some learning resources, and information on where I can find good open-source software written in Lisp, preferably games.
Replies from: mstevens, vi21maobk9vp, Risto_Saarelma↑ comment by mstevens · 2012-06-15T12:39:49.019Z · LW(p) · GW(p)
My research suggests Clojure is a lisp-like language most suited to your requirements. It runs on the JVM so should be relatively low hassle on Windows. I believe there's some sort of Eclipse support but I can't confirm it.
If you do end up wanting to do something with Common Lisp, I recommend Practical Common Lisp as a good free introduction.
↑ comment by vi21maobk9vp · 2012-06-15T12:15:00.576Z · LW(p) · GW(p)
Well, if your goal is trying out for education, but on Windows, you could start with DrRacket. http://racket-lang.org/
It is a reasonable IDE, it has some GUI libraries included, open-source, cross-platform, works fine on Windows.
Racket is based on the Scheme language (which is part of the Lisp language family). It has a mode for Scheme as described in the R6RS or R5RS standards, and it has a few not-fully-compatible dialects.
I use Common Lisp, but not under Windows. Common Lisp has more cross-implementation libraries, which could be useful sometimes. Probably EQL is the easiest to set up under Windows (it is ECL, a Common Lisp implementation, merged with Qt for GUI; I remember there being a bundled download). Maybe CommonQt or Cells-GTK would work. I remember that some of the Common Lisp package management systems have significant problems under Windows, or require either Cygwin or MSys (so they can use tar, gzip, mkdir etc. as if they were on a Unix-like system).
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-06-15T15:12:45.328Z · LW(p) · GW(p)
My goals are: 1) to get the "Lisp experience" with minimum overhead; and 2) to use the best available tools.
And I hope these two goals are not completely contradictory. I want to be able to write my own application on my computer conveniently after a few minutes, and to fluently progress to more complex applications. On the other hand, if I happen to later decide that Lisp is not for me, I want to be sure it was not only because I chose the wrong tools.
Thanks for all the answers! I will probably start with Racket.
Replies from: Pavitra, vi21maobk9vp↑ comment by Pavitra · 2012-06-17T06:51:09.770Z · LW(p) · GW(p)
For a certain value of "the Lisp experience", Emacs may be considered more or less mandatory. In order to recommend for or against it I would need more precise knowledge of your goals.
Replies from: Viliam_Bur, vi21maobk9vp↑ comment by Viliam_Bur · 2012-06-17T08:21:41.161Z · LW(p) · GW(p)
I tried Emacs and decided that I dislike it. I understand the reasons why it is the way it is, but I refuse to lower my user interface expectations that far.
Generally, I have noticed the trend that software which is praised as superior often comes with a worse user interface, or ignores some other part of user experience. I can understand that software with a smaller userbase cannot put enough resources into its non-critical parts. That makes sense. But I suspect there later appears a mindkilling train of thought, which goes like this: "Our software is superior. Our software does not have feature X. Therefore, not having feature X is an advantage, because [insert rationalization here]." As in: we don't need a 21st-century-style user interface, because good programmers don't need such things.
By wanting a "Lisp experience" I mean I would like to experience (or falsify the existence of) the nirvana frequently described by Paul Graham. Not to replicate 1:1 Richard Stallman's working conditions in the 1980s. :D
A perfect solution would be to combine the powerful features of Lisp with the convenience of modern development tools. I emphasize the convenience for pragmatic reasons, but also as a proxy for "many people with priorities similar to me are using it".
Replies from: gwern, Risto_Saarelma, vi21maobk9vp, dbaupp↑ comment by gwern · 2012-06-17T21:55:22.081Z · LW(p) · GW(p)
Generally, I have noticed the trend that software which is praised as superior often comes with a worse user interface, or ignores some other part of user experience.
Consider an equilibrium of various software products none of which are strictly superior or inferior to each other. Upon hearing that the best argument someone can make for software X is that it has feature Y (which is unrelated to UI), should your expectation of good UI go up or go down?
(To try it a different way: suppose you are in a highly competitive company like Facebooglazon and you meet a certain programmer who is the rudest most arrogant son of a bitch you ever met - yet he is somehow still employed there. What should you infer about the quality of the code he writes?)
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-06-18T08:13:47.014Z · LW(p) · GW(p)
This is a nice example of how, with different models, the same evidence can be evaluated differently.
My model is that programming languages are used for making programs, and that for languages used in real production, part of that effort goes into the positive-feedback loop of creating better tools and libraries for the given language. So if some language makes production easier -- people like Paul Graham suggest that Lisp is 10 times more productive than other languages -- I would expect better everything.
In other words, the "equilibrium of various software products none of which are strictly superior or inferior to each other" is evidence against the claim that a language X is 10 times more productive than other languages. Or if it is more productive in some areas, then it must have a huge disadvantage somewhere else.
suppose you are in a highly competitive company like Facebooglazon and you meet a certain programmer who is the rudest most arrogant son of a bitch you ever met - yet he is somehow still employed there. What should you infer about the quality of the code he writes?
Fast, reliable, undocumented, obfuscated. :D
Or he is really employed for some other reason than writing code.
Replies from: gwern↑ comment by gwern · 2012-06-18T14:51:11.614Z · LW(p) · GW(p)
In other words, the "equilibrium of various software products none of which are strictly superior or inferior to each other" is evidence against the claim that a language X is 10 times more productive than other languages. Or if it is more productive in some areas, then it must have a huge disadvantage somewhere else.
Yup! It's the old 'if you're so good, why aren't you rich' question in more abstract guise. Of course, in the real world, new languages are being developed all the time, so a workable answer is already 'I'm not rich because I'm so new, but I'm getting richer'. This is the sort of answer an up and coming language like Haskell or Scala or Go can make.
↑ comment by Risto_Saarelma · 2012-06-18T06:29:38.795Z · LW(p) · GW(p)
My current understanding of present IDEs is that they are both very language-bound and need a huge amount of work to become truly usable. That means that for any language that doesn't currently enjoy large industry acceptance, I basically don't expect to have any sort of modern usable IDE.
I'm not personally hung up on the Emacs thing, but then again my recipe for a development environment is Your Favorite General Purpose Text Editor, printf statements for debugging code, a console to read the printf output, and a read-eval-print loop for the programming language if it has one (Lisp does).
If most of the people who are in position to develop modern development tools for Lisp are in fact happy using Emacs and SLIME, the result is going to be that there won't be much of a non-Emacs development environment ecosystem for Lisp. And it's unlikely that there are any unearthed gems that turn out to be outstanding modern Lisp IDEs if IDEs really do require lots and lots of work and a wide user base giving feedback to be truly useful. Though Lisp does have commercial niche companies who are still around and who have had decades of income to develop whatever proprietary tools they are using. I've no idea what kind of stuff they have got.
Speaking of the general Lisp experience, you might also want to take a look at Factor. It's primarily modeled after Forth instead of Lisp, but it basically matches all of Graham's "What made Lisp different" checklist. The code is data, the metaprogramming machinery is extensive, and so on. The idiom is also somewhat weirder than Lisp's, and the programs are constantly threatening to devolve into a soup of incomprehensible three-letter opcodes, but I found the thing fun to work with. Oh, and the only IDE Factor has is Emacs-based, unless you count the language REPL. I think its ecosystem is small enough that I haven't missed any significant competitors.
↑ comment by vi21maobk9vp · 2012-06-17T21:17:00.899Z · LW(p) · GW(p)
Well, for me Vim bindings are something that (after some learning) started to make a lot of sense. Emacs (after the same amount of learning) didn't make that much sense... As text editors, modern IDEs are still weaker than either of them; the choice of what to forfeit usually has to be made - sometimes you can embed your editor inside the IDE instead of the native one, though.
For satisfying your curiosity, I guess you could try out the free-of-charge Allegro Common Lisp version. It is a personal no-deployment no-commercial-use no-commercial-research no-university-research no-government-research edition. I never looked at it because I am OK with Vim and I don't want to have something dependent on ACL that I cannot use in my day-job projects. Neither is a good reason for you not to try it...
↑ comment by dbaupp · 2012-06-17T10:52:41.342Z · LW(p) · GW(p)
a worse user interface, or ignores some other part of user experience
Many people say that most things that aren't emacs (or vim, depending on their religion...) have bad user interfaces, myself included. The keyboard-only way of working is very nice if you can get the hang of it. (Emacs is hard to begin with.)
That said, SLIME is basically the canonical Common Lisp editing environment, and many environments for other dialects emulate many of its features (e.g. Geiser for Racket). Were you using one of those when you were using Emacs with a Lisp?
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-06-17T15:42:04.956Z · LW(p) · GW(p)
I used Emacs very briefly, only as a text editor. The learning curve is horrible -- my impression is that you need to memorize dozens of new keyboard shortcuts (and unlearn dozens of keyboard shortcuts more or less consistently accepted by many other applications, plus clicking the right mouse button for a context menu). There seem to be some interesting features, but again only for those who memorize the keyboard shortcuts. And the whole design seems like a character terminal emulator.
So the problem is that it looks interesting, but one has to pay a huge price up front. That would make sense if I were already convinced that Emacs is the only editor and Lisp the only programming language I will use, but I just want to try them.
By the way, what exactly is so great about "the keyboard-only way of working"? Is it the speed of typing? I usually spend more time thinking about the problem than typing. Are some powerful features invoked by keyboard combos? I would prefer them to be available from the menu and context menu. Or both from menu and as a keyboard shortcut, so I can memorize the frequently-used ones, but not the rest. (Maybe this is possible in Emacs too. If yes, the tutorial should mention it.)
To me it now seems that learning Lisp with Emacs would be having two problems instead of one. More precisely, to make the learning curve even worse.
Replies from: Zack_M_Davis, dbaupp↑ comment by Zack_M_Davis · 2012-06-17T17:13:59.162Z · LW(p) · GW(p)
There's a solution to the unfamiliar shortcuts problem: turn on CUA mode. CUA mode enables the familiar Ctrl-Z, Ctrl-X, Ctrl-C, Ctrl-V for undo, cut, copy, and paste, respectively. For basic text navigation, I use Emacs mostly like an editor with standard bindings (the aforementioned undo-cut-copy-paste, arrow keys to move by character, Control plus arrow keys to move by word, &c.). There are other things to learn, but the transition isn't really that bad.
↑ comment by dbaupp · 2012-06-18T10:33:16.221Z · LW(p) · GW(p)
By the way, what exactly is so great about "the keyboard-only way of working"? Is it the speed of typing? I usually spend more time thinking about the problem than typing. Are some powerful features invoked by keyboard combos?
Speed, features, and working well across many languages (i.e. people have written Emacs modes for most languages).
Having everything on the keyboard means that you don't have to do so many context switches (which are annoying, and I find they can disrupt my train of thought). As an example, in most word processors, bolding text with Shift+arrow keys then Ctrl+B is much, much nicer than moving to the mouse, carefully selecting the text and then going up to the menu bar to click the little icon.
And Emacs has been around for decades, so there are hundreds of little (or not so little) packages that do anything and everything -- transparently editing a file over SSH, for example, is pretty nice.
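(A concrete instance of the SSH bit -- assuming the TRAMP package that ships with Emacs, and a host you can already reach over ssh; the user, host and path here are made up:)

    ;; C-x C-f with a TRAMP-style path, or equivalently from Lisp:
    (find-file "/ssh:user@example.com:/home/user/notes.txt")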
Having one environment for writing a LaTeX report, a Markdown file, or a C, Haskell, Python or shell (etc.) program is nice because the basic shortcuts are the same and every environment is guaranteed to act how you expect -- for example, doing a regex string replacement is the same process everywhere.
And on the note of keyboard combos: you end up learning them by muscle memory, so it takes a little while, but they become second nature -- to the point where you can't say what a shortcut is outright, and can only work it out by actually performing the action.
(That said, Emacs/Vim isn't for everyone: maybe the time investment is too large, or it doesn't really suit one's way of working.)
↑ comment by vi21maobk9vp · 2012-06-17T21:05:15.068Z · LW(p) · GW(p)
Well, I have a paid job where I write Common Lisp, and I use Vim, and both statements (the paid CL job and the Vim usage) have been true for multiple years.
It is a good idea to know there are different options and have a look at them, of course.
It is a good idea to look at Cream-for-Vim, too -- it has Vim at its core, and a special mode lets you fall back to Vim bindings for a while, but the default bindings are more consistent with modern conventions.
↑ comment by vi21maobk9vp · 2012-06-16T09:04:01.585Z · LW(p) · GW(p)
There are no "best available tools" without a specified target, unfortunately. When you feel that Racket constrains you, come back to the open thread of the week and say what you would like to see -- SBCL has better performance, ECL is easier to use for standalone executables, etc. Also, maybe someone will recommend an in-Racket dialect that would work better for you for those tasks.
↑ comment by Risto_Saarelma · 2012-06-15T13:14:57.089Z · LW(p) · GW(p)
Peter Norvig's out-of-print Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp can be interesting reading. It develops various classic AI applications like game-tree search and logic programming, making extensive use of Lisp's macro facilities. (The book is 20 years old and introductory, so it's not recommended for learning anything very interesting about artificial intelligence itself.) Using the macro system for metaprogramming is a big deal in Lisp, but a lot of material for Scheme in particular doesn't deal with it at all.
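(To make the macro point concrete, here is a toy example of my own -- not taken from the book -- of the kind of thing PAIP does constantly: adding a new control construct to Common Lisp as an ordinary user macro.)

    ;; WHILE is not part of standard Common Lisp; a user can just add it.
    (defmacro while (test &body body)
      `(loop (unless ,test (return))
             ,@body))

    ;; Usage: prints 3, 2, 1.
    (let ((n 3))
      (while (> n 0)
        (print n)
        (decf n)))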
The already-mentioned Clojure seems to be where a lot of real-world development is happening these days, and it's also innovating on the standard syntax conventions of Common Lisp and Scheme in interesting ways. Clojure will interface with Java's libraries for I/O and multimedia. Since Clojure lives in the Java ecosystem, you can basically start from your preconceptions about developing for the JVM and guess from there what it's like. If you're OK with your games ending up as JVM programs, Clojure might work.
For open-source games in Lisp, I can point you to David O'Toole's projects. There are also some roguelikes developed in Lisp.
comment by Will_Newsome · 2012-06-15T07:20:20.471Z · LW(p) · GW(p)
I'm Xom#1203 on Diablo 3. I have a lvl 60 Barb and a lvl ~35 DH. I'm willnewsome on chesscube.com, ShieldMantis on FICS. I like bullet 960 but I'm okay with more traditional games too. Currently rated like 2100 on chesscube, 1600 or something on FICS. Rarely use FICS. I'd like to play people who are better than me, gives me incentive to practice.
Replies from: Will_Newsome, jsalvatier↑ comment by Will_Newsome · 2012-06-17T06:14:59.496Z · LW(p) · GW(p)
Are there really so few chess players on LW? 0_o
Replies from: Jack, dbaupp↑ comment by Jack · 2012-06-19T02:05:38.388Z · LW(p) · GW(p)
I play at chess.com and you are much better than me.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-06-19T02:12:27.467Z · LW(p) · GW(p)
Oh sweet, chess.com used to be only correspondence games. I'll probably get an account there, it'll probably be called "willnewsome", add me if you wish. ETA: Done.
↑ comment by jsalvatier · 2012-06-15T15:31:20.533Z · LW(p) · GW(p)
Not gaming related, but I've got a question that seems like it would appeal to you above.
comment by OrphanWilde · 2012-06-25T19:34:04.859Z · LW(p) · GW(p)
Suggestion:
I consider tipping to be a part of the expense of dining - bad service bothers me, but not tipping also bothers me, as I don't feel like I've paid for my meal.
So I've come up with a compromise with myself, which I think will be helpful for anybody else in the same boat:
If I get bad service, I won't tip (or tip less, depending on how bad the service is). But I -will- set aside what I -would- have tipped, which will be added to the tip the next time I receive good service.
Double bonus: When I get bad service at very nice restaurants, the waiter at the Steak and Shake I more regularly eat at (it's my favored place to eat) is going to get an absurdly large tip, which amuses me to no end.
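(For the algorithmically inclined, and since this thread is already full of Lisp: a toy sketch of the bookkeeping, just an escrow variable -- the names are my own invention.)

    (defvar *withheld-tips* 0)

    (defun tip-amount (base-tip service-good-p)
      "Return what to tip now, escrowing withheld tips until good service."
      (if service-good-p
          (prog1 (+ base-tip *withheld-tips*)    ; pay the backlog too
            (setf *withheld-tips* 0))
          (progn (incf *withheld-tips* base-tip) ; withhold, but remember it
                 0)))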
comment by Viliam_Bur · 2012-06-16T16:01:27.002Z · LW(p) · GW(p)
I don't follow or understand the "timeless decision" topic on LW, but I have a feeling that a significant part of it is one agent predicting what another agent would do, by simulating their algorithm. (This is my very uninformed understanding of the "timeless" part: I don't have to wait until you do X, because I can already predict whether you would do X, and behave accordingly. And you don't have to wait for my reaction, because you can already predict it too. So let's predict-cause each other to cooperate, and win mutually.)
If I am correct, there is a problem with this: having access to another agent's code does not, in the general case, allow you to make any conclusions.
You can only run a simulation of one specific situation. Then another. All the while hoping that the other agent is not simulating you in turn, which would get you both into an infinite loop. And you can't even tell whether the agent will try to simulate you or not.
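(A toy sketch of that regress, in Common Lisp since that's the dialect under discussion elsewhere in this thread -- my own illustration, not any standard construction. Each agent simulates its opponent before acting, so the recursion needs an explicit fuel counter or it never terminates:)

    ;; An agent is a function of (opponent fuel), returning T to cooperate.
    (defun simulating-agent (opponent fuel)
      (if (<= fuel 0)
          nil   ; simulation budget exhausted: play it safe and defect
          ;; cooperate iff a simulation of the opponent (facing us) cooperates
          (funcall opponent #'simulating-agent (1- fuel))))

    ;; Two copies facing each other: every level burns fuel, the recursion
    ;; bottoms out at the defect default, and the defection propagates all
    ;; the way back up. Brute simulation alone never bootstraps cooperation.
    (simulating-agent #'simulating-agent 10)   ; => NIL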
Replies from: wedrifid↑ comment by wedrifid · 2012-06-18T01:29:35.929Z · LW(p) · GW(p)
I don't follow or understand the "timeless decision" topic on LW, but I have a feeling that a significant part of it is one agent predicting what another agent would do, by simulating their algorithm.
Thinking in terms of "simulating their algorithm" is convenient for us because we can imagine the agent doing it and for certain problems a simulation is sufficient. However the actual process involved is any reasoning at all based on the algorithm. That includes simulations but also includes creating mathematical proofs based on the algorithm that allow generalizable conclusions about things that the other agent will or will not do.
An agent that wishes to facilitate cooperation -- or that wishes to prove a credible threat -- will actually prefer to structure its own code such that it is as easy as possible to make proofs and draw conclusions from that code.
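(A tiny illustration of "legible by construction", again my own toy example: the whole policy is a three-line pure function of one declared input, so an inspector can establish "it cooperates exactly when I do" by reading it, with no simulation at all.)

    (defun legible-agent (opponent-cooperates-p)
      "Cooperate exactly when the opponent does; trivially analyzable."
      (if opponent-cooperates-p :cooperate :defect))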
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-06-18T07:51:43.537Z · LW(p) · GW(p)
creating mathematical proofs based on the algorithm that allow generalizable conclusions about things that the other agent will or will not do.
It's precisely this part which is impossible in the general case. You can reason only about the subset of algorithms which are compatible with your conclusion-making algorithm.
Proof:
1) In the general case, it is impossible to decide whether a given program will stop in finite time.
Proof by contradiction: suppose we have a method "Prophet.willStop(program)" that predicts whether a given program will stop. Then consider the following program, which behaves contrary to whatever the prediction says about it:
program Contrarian {
    if (Prophet.willStop(Contrarian)) {
        loop_forever();   // predicted to stop, so never stop
    } else {
        // predicted to run forever, so stop immediately
    }
}
2) Now, for any behavior "B", take a function "f" for which you cannot predict whether it stops. Will the following program ever exhibit behavior "B"? Deciding that would require deciding whether "f" halts, which we just showed to be impossible in general:
program Mysterious {
    f();   // B() is reached only if f() ever returns
    B();
}
↑ comment by wedrifid · 2012-06-18T08:47:38.140Z · LW(p) · GW(p)
It's precisely this part which is impossible in the general case. You can reason only about the subset of algorithms which are compatible with your conclusion-making algorithm.
Yes, which is why:
An agent that wishes to facilitate cooperation -- or that wishes to prove a credible threat -- will actually prefer to structure its own code such that it is as easy as possible to make proofs and draw conclusions from that code.
Some agents really are impossible to cooperate with even when it would be mutually beneficial. Either because they are irrational in an absolute sense or because their algorithm is intractable to you. That doesn't prevent you from cooperating with the rest.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-06-18T09:19:32.570Z · LW(p) · GW(p)
Interesting. So a self-modifying agent might want to modify their own code to be easier to inspect, because this could make other agents trust them and cooperate with them. Two questions:
What would be the cost of such a modification? You cannot just rewrite an arbitrary algorithm into a more legible form. If the agent modifies itself into e.g. a regular expression (just joking), it will be able to do only what regular expressions can do, which may not be enough for a complex situation. Limiting one's own cognitive abilities seems like a dangerous move.
Even if I want to reprogram myself to be more legible, I need to know what algorithm the other party will use to read my code. How can I guess it? Or is it enough to meet the other agent, explain our reading algorithms to each other, and only then self-modify to become mutually compatible? I am suspicious of whether such a process can be iterated -- my intuition is that by conforming to one agent's code-analysis routines, I lose part of my abilities, which may make me unable to conform to another agent's code-analysis routines.
Replies from: Vladimir_Nesov, wedrifid↑ comment by Vladimir_Nesov · 2012-06-18T23:20:53.471Z · LW(p) · GW(p)
my intuition is that by conforming to one agent's code-analysis routines, I lose part of my abilities, which may make me unable to conform to another agent's code-analysis routines.
Any decision restricts what happens, for all you knew before making the decision, but doesn't necessarily make future decisions more difficult. Coordinating with other agents requires deciding some properties of your behavior, which may as well constrain only the actions that need to be coordinated with other agents.
For example, strategy is a kind of generalized action, which could take the form of a straightforwardly represented algorithm chosen for a certain situation (to act in response to possible future observations). After a strategy is played out, or if some condition indicates that it's no longer applicable, decision making may resume its normal more general operation, so the mode of operation where your behavior becomes more tractable may be temporary. If this strategy includes a procedure for deciding whether to cooperate with similarly chosen strategies of other agents, it will do the trick, without taking on much more responsibility than a single action. It will just be the kind of action that's smart enough to be able to cooperate with other agents' actions.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-06-19T08:47:34.646Z · LW(p) · GW(p)
So it is not necessary to change my whole code, just to create a new transparent "cooperation routine" and let it guide my behavior, with a possibility of ending this routine in case the other agents stop cooperating or something unexpected happens. That makes sense.
(Though in real life I would be rather afraid to self-modify in this way, because an imperfection in the cooperation routine could be exploited. Even if other agents' cooperation routines contain no bug exploits for my routine, maybe they have already created some hidden sub-agents that will try to find and exploit bugs in my routine.)
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-06-19T19:51:01.147Z · LW(p) · GW(p)
A real-life analogy is a contract, with a powerful government enforcing your precommitments.
↑ comment by wedrifid · 2012-06-18T09:57:45.091Z · LW(p) · GW(p)
Interesting. So a self-modifying agent might want to modify their own code to be easier to inspect, because this could make other agents trust them and cooperate with them.
Sometimes.
Even if I want to reprogram myself to be more legible, I need to know what algorithm the other party will use to read my code.
You could limit yourself to simply not actively obfuscating your own code.
comment by [deleted] · 2012-06-15T13:33:06.242Z · LW(p) · GW(p)
Is anyone familiar with any statistical or machine-learning-based evaluations of the "Poverty of Stimulus" argument for language innateness (the hypothesis that language must be an innate ability because children aren't exposed to enough language data to learn it properly in the time they actually take)?
I'm interested in hearing what actually is and isn't possible to learn from the data, from someone in a position to know (i.e. not a linguist).
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2012-06-15T13:46:27.508Z · LW(p) · GW(p)
I was looking at this exact question a few months ago, and found these to be quite LW-reader-salient:
comment by TraderJoe · 2012-06-15T13:21:16.745Z · LW(p) · GW(p)
[comment deleted]
Replies from: gwern↑ comment by gwern · 2012-06-15T14:23:42.319Z · LW(p) · GW(p)
Just tons. For example, Harry's instructor, Mr. Bester, is a double reference.
EDIT: And obviously the Bester scenes contain other allusions: the callback to the gold-silver arbitrage, or Harry imagining himself a Lensman, come to mind.
Replies from: drethelin, TraderJoe, beoShaffer↑ comment by drethelin · 2012-06-15T17:13:02.288Z · LW(p) · GW(p)
What's the non-author one?
Replies from: gwern↑ comment by gwern · 2012-06-15T17:14:25.633Z · LW(p) · GW(p)
Babylon Five character, IIRC.
Replies from: arundelo↑ comment by arundelo · 2012-06-15T17:21:51.010Z · LW(p) · GW(p)
I wouldn't call that a double reference, since Alfred Bester the Babylon 5 character is also named after Alfred Bester the author. Edit: Both the Bab 5 and HP:MoR characters are named after Bester the author for the same reason.
Replies from: gwern, None↑ comment by gwern · 2012-06-15T18:15:26.070Z · LW(p) · GW(p)
Since Eliezer has been a Babylon 5 fan since before December 1996 and has also read Bester's books, I think we can consider it a double reference.
Replies from: arundelo↑ comment by beoShaffer · 2012-06-15T18:30:31.633Z · LW(p) · GW(p)
Mr. Bester, is a double reference.
Really, I only caught the B5 one.
comment by beoShaffer · 2012-06-30T19:20:28.711Z · LW(p) · GW(p)
Does anyone know of a good guide to Gödel's theorems along the lines of the cartoon guide to Löb's theorem?
Replies from: vi21maobk9vp↑ comment by vi21maobk9vp · 2012-07-01T09:04:31.871Z · LW(p) · GW(p)
If you believe that some model of computation can be expressed in arithmetic (this implies expressibility of the notion of a correct proof), Gödel's first theorem is more or less an analysis of "This statement cannot be proved". If it can be proved, it is false and there is a provable false statement; if it cannot be proved, it is an unprovable true statement.
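(Slightly more formally, in standard textbook notation -- nothing specific to any particular guide. By the diagonal lemma there is a sentence $G$ with

$$T \vdash G \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner),$$

i.e. $G$ asserts its own unprovability in $T$; if $T$ is consistent it cannot prove $G$, and if $T$ is moreover $\omega$-consistent it cannot prove $\neg G$ either.)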
But most of the effort in proving Gödel's theorem has to be spent on showing that you cannot go halfway: if you have a theory big enough to express basic arithmetical facts, you have to have full reflection. It can be stated in various ways, but it requires a technically accurate proof -- I am not sure how well it would fit into a cartoon.
Could you state explicitly what you want to find -- just the non-technical part, or both?
Replies from: beoShaffer↑ comment by beoShaffer · 2012-07-01T16:55:36.619Z · LW(p) · GW(p)
Actually that was pretty much enough.
comment by OrphanWilde · 2012-06-27T20:48:28.954Z · LW(p) · GW(p)
Has anybody here changed their mind on the matter of catastrophic anthropogenic global warming, and what evidence or arguments made you reconsider your original position?
I've bounced back and forth on the matter several times, and right now I'm starting to doubt global warming itself, never mind catastrophic or anthropogenic. The sources I read most frequently are biased against it, and the sources that support it have a bad habit of deleting any comments that disagree with or criticize the evidence, which has led me to take them less seriously. So the ideal for me would be arguments or evidence that changed somebody's mind towards supporting the theory.
Replies from: TimS, TheOtherDave↑ comment by TimS · 2012-06-27T21:04:37.109Z · LW(p) · GW(p)
I think you are overweighting the evidence from moderation policies.
If a large number of evangelicals constantly descended onto LessWrong, forcing the community to have a near hair trigger banning policy, would that be strong evidence that atheism was incorrect?
Replies from: OrphanWilde↑ comment by OrphanWilde · 2012-06-27T21:18:59.177Z · LW(p) · GW(p)
No. But it would result in me not taking theoretical weekly posts on why atheism is correct very seriously.
↑ comment by TheOtherDave · 2012-06-27T21:55:09.216Z · LW(p) · GW(p)
There are several different pieces of this for me.
I haven't much changed my mind on the existence of global climate change since I first looked into the data, about a decade ago, except to become more confident about it.
I've made various attempts to wrap my brain around this data to arrive at some opinions about its causes, but I'm evidently neither smart nor well-informed enough to arrive at any confidence about whether the conclusions people are drawing on this question actually follow from the data they are drawing it from. Ultimately I just end up either taking their word for it, or not. I try to ignore the public discourse on the subject, which in the US has become to an absurd degree a Blue/Green issue entirely divorced from any notion of relying on observation-based reasoning to ground confidence levels in assertions.
The thing that most caused me to lower my estimate of the likelihood that the climate change is exclusively or near-exclusively anthropogenic was some conversations with a couple of astrophysicist friends of mine, who talked about the state of play in the field and their sense that research into correlations between terrestrial climate fluctuations and solar output fluctuations was seen as a career-ender... not quite on par with, say, parapsychology, but on the same side of the ledger.
The thing that most caused me to raise that estimate was some conversations with a friend of mine who was working in climate modeling for a while. I don't have half a clue regarding the validity of his models, but I got the clear impression that climate models that take into account anthropogenic increases in atmospheric CO2 levels are noticeably more accurate than models that don't.
On balance, the latter raised my confidence in the assertion that global climate change is significantly anthropogenic more than the former lowered my confidence.
I don't really have an opinion yet about how catastrophic the climate change is likely to be, regardless of whether it's anthropogenic or not. Incidentally, it regularly puzzles me that the public discourse is so resolutely about the latter rather than the former, as it seems to me that catastrophic non-anthropogenic climate change should be as much of a concern for us as catastrophic anthropogenic climate change.
comment by [deleted] · 2012-06-22T08:34:32.444Z · LW(p) · GW(p)
Blogs by LWers:
- Yvain --- livejournal blog
- Will Newsome --- Computational Theology
- muflax --- muflax' mindstream
- James_G --- Writings
- XiXiDu --- Alexander Kruel
- TGGP --- Entitled To An Opinion
- James Miller --- Singularity Notes
- Jsalvati --- Good Morning, Economics
- clarissethorn --- Clarisse Thorn
- Zack M. Davis --- An Algorithmic Lucidity
- Kaj_Sotala --- xuenay
- tommcabe --- The Rationalist Conspiracy
Note: About this list -- new suggestions are welcome. Anyone searching for interesting blogs that may not be written by LWers should check out this or maybe this thread.
comment by Multiheaded · 2012-06-21T22:43:10.058Z · LW(p) · GW(p)
I find that, sporadically, I act like a total attention whore around people whom I respect and may talk to more or less freely - whether I know them or we're only distantly acquainted. This mostly includes my behavior in communities like this, but also in class and wherever else I can interact informally with a group of equals. I talk excitedly about myself, about various things that I think my audience might find interesting, etc. I know it might come across as uncouth, annoying and just plain abnormal, but I don't even feel a desire to stop. It's not due to any drugs either. When I see that I've unloaded too much on whoever I'm talking to, I try to apologize and occasionally even explain that I have a neural condition.
I believe that it's a side effect of me deprogramming myself out of social anxiety after getting all shaken up by Evangelion. In high school and earlier, I was really, really shy, resented having to talk to anyone but a few friends, felt rage at being dragged into conversations, etc. But now it's like my personality has shifted a deviation or two towards the extraverted side. So such impulses, which were very rare in my childhood, became prominent, and this weirds me out. I still have a self-image of a very introverted guy, but now I'm often compelled to behave differently.
[This comment was caused by such an impulse too. Again, I'm completely sober, emotionally neutral and so on. I just have the urge to speak up.]
comment by Bill_McGrath · 2012-06-21T16:55:25.753Z · LW(p) · GW(p)
With regard to Optimal Employment, what does anyone think of the advice given in this article?
"...There are career waiters in Los Angeles and they’re making over $100,000 a year.”
That works out (for the benefit of other Europeans) at about €80,000 -- an astonishing amount of money, to me at least. LA seems like a cool place, with a lot of culture and more interesting places within easy traveling distance than Dublin has.
Replies from: knb, TheOtherDave↑ comment by knb · 2012-06-26T10:59:46.239Z · LW(p) · GW(p)
what does anyone think of the advice given in this article?
- To make this kind of money, you'll obviously have to get a job in an expensive restaurant, and remember there are tons of people there who have years of experience and desperately want one of these super-high value jobs. Knowing the right person will be vital if you want to score one of these positions.
- This is based on tips, so you will have to be extremely charming, charismatic, and attractive.
- Living in Los Angeles is expensive to start with, and there is a major premium if you want to live in a non-terrifying part of the city.
- The economy of Los Angeles is not doing well, hasn't been for years, and probably won't be for the foreseeable future. This probably hurts the prospects for finding a high-paying waiter job.
Honestly, moving to L.A. to seek a rare super-high paying waiter job seems like a terrible idea to me.
Replies from: Bill_McGrath↑ comment by Bill_McGrath · 2012-06-26T15:09:46.944Z · LW(p) · GW(p)
To make this kind of money, you'll obviously have to get a job in an expensive restaurant, and remember there are tons of people there who have years of experience and desperately want one of these super-high value jobs. Knowing the right person will be vital if you want to score one of these positions.
That's the main issue I've been having with employment here; though I'm a good waiter, most places want two years' experience in fine dining, which I don't have.
↑ comment by TheOtherDave · 2012-06-21T17:07:07.956Z · LW(p) · GW(p)
I don't know if the claim is true or not, but I don't find it too implausible. It helps to remember that LA is frequented by a great many newly wealthy celebrities.
It does not follow that my chances of getting such a job in L.A. are high enough to be worth considering.
comment by GLaDOS · 2012-06-20T10:12:00.856Z · LW(p) · GW(p)
Why don't people like markets?
A very interesting read where the author speculates on possible reasons for why people seem to be biased against markets. To summarize:
- Market processes are not visible. For instance, when a government taxes its citizens and offers a subsidy to some producers, what is seen is the money taken and the money received. What is unseen is the amount of production that would occur in the absence of such transfers.
- Markets are intrinsically probabilistic and therefore marked by uncertainty; like other living organisms, we are loss-averse and try to minimise uncertainty.
- Humans may be motivated to place their trust in processes that are (or at least seem to be) driven by agents rather than impersonal factors.
The last point strongly reminded me of the recent Less Wrong essay on Conspiracy Theories as Agency Fictions where Konkvistador muses:
Do all theories of legitimacy also perhaps rest on the same cognitive failings that conspiracy theories do? The difference between a shadowy cabal we need to get rid of and an institution worthy of respect may be just some bad luck. How this misleads us
Before thinking about these points and debating them I strongly recommend you read the full article:
comment by Pavitra · 2012-06-17T06:43:36.276Z · LW(p) · GW(p)
Positive Juice seems to have several posts related to rationality. (Look under "most viewed posts" on the sidebar.)
comment by Shmi (shminux) · 2012-06-16T01:36:04.002Z · LW(p) · GW(p)
Yet another UFAI scenario: augmented humans turned zombie cyborgs, by the author of Dilbert.
comment by Multiheaded · 2012-06-15T10:10:23.892Z · LW(p) · GW(p)
Ideological link of the week:
A rousing war-screech against Reaction (and bourgeois liberalism) by eXile's Connor Kilpatrick. Deliciously mind-killed (and reviewing an already mind-killed book), but kind of perceptive in noting that the Right indeed offers very tangible, down-to-earth benefits to the masses - usually my crowd is in happy denial about that.
Replies from: None, CharlieSheen, Viliam_Bur↑ comment by [deleted] · 2012-06-16T00:52:10.541Z · LW(p) · GW(p)
I am declaring this article excommunicate traitoris, because I am reading through it and not having a virulent reaction against it, but instead finding it to be reasonable, if embellishing. I take that and the community's strong reaction against it as evidence that the article is effectively mind-killing me due to my political leanings and that I should stop reading now.
...cognitive biases are scary.
Replies from: RowanE↑ comment by RowanE · 2012-06-16T22:43:49.631Z · LW(p) · GW(p)
I read any War Nerd article that comes out, and occasionally read other articles on the site, and my reaction has been similar. The political stuff they say seems, well, "reasonable, if embellishing", and I'd been worrying about the possibility that it was just true.
I should probably follow suit on this, and avoid any non-War-Nerd articles on eXile to avoid being mind-killed, although a part of me worries that I'm simply following the group mentality of the LessWrong cult.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-06-17T02:20:07.778Z · LW(p) · GW(p)
I agree, it seems "reasonable, if embellishing", on the other hand, there are many other political blogs with very different politics that also seem "reasonable, if embellishing".
↑ comment by CharlieSheen · 2012-06-15T10:29:30.874Z · LW(p) · GW(p)
An ok read, despite being very much more partisan and harsh than what is usually discussed or linked on LW.
Despite libertarian efforts to recruit the young and liberal-minded into the flock with promises of ending the wars, closing Guantanamo and calling off the cozy relationship with the Likudniks, The Reactionary Mind makes it clear that there’s no fundamental difference between any of these right-wing breeds, and thus common ground is neither possible nor desirable, particularly with the libertarians. “When the libertarian looks out upon society,” writes Robin, “he does not see isolated individuals; he sees private, often hierarchical, groups, where a father governs his family and an owner his employees.”
Those darn out group members! Der all the same I tells ya!
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-15T10:32:30.637Z · LW(p) · GW(p)
Exactly. I just linked to it for teh lulz, to be honest. And to rebel against our group norms.
Damn, I might be emulating Will a bit too much.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-06-16T01:25:13.653Z · LW(p) · GW(p)
Pretty sure I don't ever rebel against group norms simply for teh lulz. There's usually some half-cocked or seemingly-half-cocked Dumbledoresque strategy going on in the background.
↑ comment by Viliam_Bur · 2012-06-15T10:42:25.369Z · LW(p) · GW(p)
It feels like an exercise in how many cognitive errors you can commit in one text (though in later paragraphs they get repetitive). It's as if the author is not even pretending to be sane, which is probably how the target audience likes it. I tried to read the text anyway, but halfway through my brain was no longer able to process it.
If I had to write an abstract of this article, it would be like this:
"All my enemies (all people who disagree with me) are in fact the same: inhumanly evil. All their arguments are enemy soldiers; they should be ignored, or responded by irrational attacks and name-calling."
If there was anything more (except for naming specific enemies), I was not able to extract it.
Repulsive.
Replies from: NancyLebovitz, Multiheaded↑ comment by NancyLebovitz · 2012-06-15T13:27:06.240Z · LW(p) · GW(p)
I think there's a smidge more content than you're saying: a claim that the other side is doing the same thing. Of course, when they do it, it's disgraceful.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-06-15T14:07:44.676Z · LW(p) · GW(p)
To me it seems like he accused the other side (everyone who disagrees with him, because they are all the same) of lying. That's what makes it right to ignore their arguments.
That part goes like this -- Sometimes it seems that the enemy arguments make sense, that some of their values are important for us too, so perhaps we should listen to what they say. Nonsense! The enemies are pure evil, they share none of our values. They just sometimes use our words to mislead us, but they "don’t believe a word of it. Not one fucking word." (the last part = quotation)
What an unfortunate epistemic state. Especially unfortunate for other people who share the same planet.
↑ comment by Multiheaded · 2012-06-15T10:46:04.422Z · LW(p) · GW(p)
He is pontificating about actual values, though, and not only power politics. I like passion and strength more than intellectual honesty and rational discourse (and, well, truth-seeking). It's sexier!
That's why I still read M.M. despite him repeating the same ideas (completely formulated in "Patchwork", "Why I am not a..." and his other old classics) over and over.
Repulsive.
Let me guess, you also don't enjoy gore porn.
Replies from: Viliam_Bur, None↑ comment by Viliam_Bur · 2012-06-15T10:59:30.957Z · LW(p) · GW(p)
I like passion and strength more than intellectual honesty and rational discourse (and, well, truth-seeking).
Well, I guess if people didn't find any value in this way of speaking, it wouldn't be so popular. And yes, passion and strength are attractive. But the wrong context can ruin anything; and this context is very repulsive to me.
When I say I value truth-seeking, I usually feel like a hypocrite. After reading this article, I don't. My bubble was broken, and the resulting shock recalibrated my scales. Raising the sanity waterline became a near-mode value again.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-15T11:04:36.236Z · LW(p) · GW(p)
After reading this article, I don't. My bubble was broken, and the resulting shock recalibrated my scales.
See! Aggression brings conflict, conflict brings division, division brings honesty, honesty brings self-actualization! The Code of the Sith is right!
Replies from: Eugine_Nier, GLaDOS↑ comment by Eugine_Nier · 2012-06-16T03:42:38.321Z · LW(p) · GW(p)
See! Aggression brings conflict, conflict brings division, division brings honesty, honesty brings self-actualization! The Code of the Sith is right!
Of course, human intelligence evolved largely to win arguments, thus we think up our best arguments while engaging in mind-killing debate, sort of like Kafers but without the need for physical violence.
Also, this Orwell quote.
↑ comment by GLaDOS · 2012-06-15T11:49:14.623Z · LW(p) · GW(p)
See! Aggression brings conflict, conflict brings division, division brings honesty, honesty brings self-actualization! The Code of the Sith is right!
Are you sure you aren't just a right wing person hiding in the closet? (^_^)
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-15T11:55:12.655Z · LW(p) · GW(p)
I did say around here that I'm a little bit of a fascist. My ethics are really contradictory. Although if you think that all socialists are really as toothless and compromise-loving as modern social democrats ("Liberals", as Americans call them), you'd be surprised.
Replies from: GLaDOS, None↑ comment by GLaDOS · 2012-06-15T11:56:19.522Z · LW(p) · GW(p)
My ethics are really contradictory
Being human is tough; my sympathy module sympathizes.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-15T12:01:34.569Z · LW(p) · GW(p)
The funny thing is, I don't even feel bad about that. It's just like becoming bored with useful activities, a psychological given.
↑ comment by [deleted] · 2012-06-15T12:05:15.065Z · LW(p) · GW(p)
I haven't heard of many socialist utopias that included aggression, conflict and division. Maybe it can bring about self-actualization without conflict, though that makes for a dull story -- and remember, humans love stories, especially about themselves. I think it was Orwell who pointed out that a socialist utopia as normally imagined would overall be a pretty boring place to live.
Which is funny in a way, since the ideology of class struggle itself is far more inspiring than the ends the ideology seeks.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-15T12:11:26.651Z · LW(p) · GW(p)
That's because the wiser socialists, like Orwell himself, are aware that they aren't wise enough to give a consistent description of their utopia -- just as Eliezer is aware that he wouldn't be able to describe precisely how society could work post-Singularity.
One possible left-wing utopia with conflict is just the Matrix running a massively multiplayer action/strategy game -- with a global lobby/chat and an economy organized on socialist principles. This description makes some sense only because it's a cop-out; in a virtual world we can resolve our nature's inconsistencies without affecting real others, so this is just a milder form of wireheading. If you're pissed about your guild's high taxes, just take over a bot guild, murder a bot CEO in visceral detail and get high on fake power. Presumably you could also gank real people, but that would drive your taxes sky-high, to fund something nice like noobs' personalized education and counselling.
Replies from: None, Multiheaded↑ comment by [deleted] · 2012-06-15T12:15:13.022Z · LW(p) · GW(p)
I don't know exactly what I want, but goddamit I'm going to get it!
Arguably the essence of heroic man.
Replies from: Viliam_Bur, Multiheaded↑ comment by Viliam_Bur · 2012-06-15T12:51:42.819Z · LW(p) · GW(p)
I don't know exactly what I want, but goddamit I'm going to get it! Arguably the essence of heroic man.
Also a good way to build an Unfriendly AI.
Or an Unfriendly political regime.
Replies from: None↑ comment by Multiheaded · 2012-06-15T12:19:09.865Z · LW(p) · GW(p)
Yup.
↑ comment by Multiheaded · 2012-06-15T15:57:44.037Z · LW(p) · GW(p)
Dear downvoters, in order to help me optimize my writing, please take care to explain your reasons for every downvoted comment. Thank you. (This one looks particularly innocent and non-inflammatory to me.)
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-06-16T01:23:03.237Z · LW(p) · GW(p)
I'm pretty confident your comments in this thread are getting downvoted on the merits of the original comment, not on the merits of each individual subsequent comment. Happens to me all the time, but most of the time the trend reverses after a day or two.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-16T08:25:27.383Z · LW(p) · GW(p)
That was my implication, yeah; getting karmassassinated is more unpleasant than just getting a slap for an isolated stupid comment.
↑ comment by [deleted] · 2012-06-15T16:06:20.486Z · LW(p) · GW(p)
I like passion and strength more than intellectual honesty and rational discourse (and, well, truth-seeking). It's sexier!
What draws you to this website, then? Ostensibly, this is a venue for weirdos with an absolute fetish for rational discourse.
comment by Multiheaded · 2012-06-27T10:26:57.393Z · LW(p) · GW(p)
YAY 1000 KARMA!
Replies from: sixes_and_sevens, Multiheaded↑ comment by sixes_and_sevens · 2012-06-27T10:49:36.281Z · LW(p) · GW(p)
I've immortalised this comment as it looked when I first saw it. It tells a beautiful and hilarious story.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-27T10:53:45.316Z · LW(p) · GW(p)
I've waited until I was at 1004 as a precaution for that exact reason.
Replies from: wedrifid↑ comment by Multiheaded · 2012-06-27T15:19:45.213Z · LW(p) · GW(p)
Well done, me. I don't know what kinda game I'm playing, other than "being obnoxious".
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2012-06-27T16:16:11.649Z · LW(p) · GW(p)
Earlier this year I made an extremely satisfying but mildly obnoxious comment, knowing full well it would get downvotes, but upvotes in the descendant discussion put me in credit.
Perhaps you have to speculate to accumulate.