Open Thread, June 1-15, 2012
post by OpenThreadGuy · 2012-06-01T04:01:13.236Z · LW · GW · Legacy · 258 comments
If it's worth saying, but not worth its own post, even in Discussion, it goes here.
comment by [deleted] · 2012-06-03T15:16:13.858Z · LW(p) · GW(p)
Moldbug on Cancer (and medicine in general)
I'm going to be a heretic and argue that the problem with cancer research is institutional, not biological. The biological problem is clearly very hard, but the institutional problem is impossible.
You might or might not be familiar with the term "OODA loop," originally developed by fighter pilots:
http://en.wikipedia.org/wiki/OODA_loop
If the war on cancer was a dogfight, you'd need an order from the President every time you wanted to adjust your ailerons. Your OODA loop is 10-20 years long. If you're in an F-16 with Sidewinder missiles, and I'm in a Wright Flyer with a Colt .45, I'm still going to kill you under these conditions. Cancer is not (usually) a Wright Flyer with a Colt .45.
Lots of programmers are reading this. Here's an example of what life as a programmer would be like if you had to work with a 10-year OODA loop. You write an OS, complete with documentation and test suites, on paper. 10 years later, the code is finally typed in and you see if the test suites run. If bug - your OS failed! Restart the loop. I think it's pretty obvious that given these institutional constraints, we'd still be running CP/M. Oncology is still running CP/M.
Most cancer researchers are not even in the loop, really. For one thing, 90% of your research is irreproducible:
http://www.pharmalot.com/2012/03/many-cancer-studies-are-act...
Even when the science is reproducible, your cell lines and mouse models are crap and bear little or no resemblance to real tumors. You know this, of course. But you keep on banging your heads against the wall.
What would a tight OODA loop look like? Imagine I'm Steve Jobs, with infinite money, and I have cancer. Everyone's cancer is its own disease (if not several), so the researchers are fighting one disease (or several), instead of an infinite family of diseases. They are not trying to cure pancreatic cancer - they are trying to cure Steveoma.
Second, they operate with no rules. They can find an exploit in Steve's cancer genome on Wednesday, design a molecule to hack it on Thursday, synthesize it on Friday and start titrating it into the patient on Saturday. Pharmacokinetics? Just keep doubling the dose until the patient feels side effects. Hey, it worked for Alexander Shulgin.
Moreover, Steve isn't on just one drug. He's got thirty or forty teams attacking every vulnerability, theoretical or practical, that may exist in his cancer cells. Why shouldn't he be attacking his cancer in 30 ways at the same time? He's a billionaire, after all.
Not everyone is a billionaire. But if you do this for enough billionaires, the common elements in the problem will start repeating and the researchers will learn a repertoire of common hacks. Eventually, the unusual becomes usual - and cheap. This is the way all technology is developed.
Of course, someone might screw up and a patient might die. You'll note that a lot of cancer patients die anyway. Steve got a lot, but he didn't get this - why not? It would be illegal, that's why. Sounds like something the Nazis would do. Nazis! In our hospitals! Oh noes!
The entire thrust of our medical regulatory system, from the Flexner Report to today, is the belief that it's better for 1000 patients to die of neglect, than 1 from quackery. Until this irrational fear of quack medicine is cured, there will be no real progress in the field.
The entire process we call "drug development" is an attempt to gain six-sigma confidence that we are not practicing quack medicine. Especially for cancer, do we need all these sigmas? And are we obtaining them in an efficient way? I can't imagine how anyone would even begin to argue the point.
What is the source of this phobia? It is ultimately a political fear - based on public opinion. Its root is in the morbid, irrational fear of poisoning. But it also has a political constituency - all the people it employs. In that it has much in common with other "anti-industries," like the software patent mafia.
He is right of course.
Edit: I didn't think I would have to clarify this, but the "He is right of course" comment was referring to the bolded text.
Replies from: othercriteria, shminux, drethelin, witzvo, Multiheaded, Athrelon, steven0461, TimS, JoshuaZ↑ comment by othercriteria · 2012-06-04T18:39:12.411Z · LW(p) · GW(p)
Following JoshuaZ, I also don't think this remark should stand unchallenged.
Why shouldn't he be attacking his cancer in 30 ways at the same time?
Off-target effects, which are difficult to predict even for a single, well-understood drug. Also, the CYPs in your liver can turn a safe drug into a much scarier metabolite. And the drugs themselves can also modify the activity of the CYPs. Combined with dynamic dosing ("keep doubling the dose until the patient feels side effects") the blood levels of the 30 drugs will be all over the place.
But if you do this for enough billionaires, the common elements in the problem will start repeating and the researchers will learn a repertoire of common hacks.
What are the common elements present when the patient has been dosed with varying amounts of 30 different drugs? If the cancer is cured, how should the credit be split among the teams? If the patient dies, who gets the blame?
The anti-quackery property of the current research regime is not just to prevent patients from being hoodwinked. It's epistemic hygiene to keep the researchers from fooling themselves into thinking they're helping when they're really doing nothing or causing active harm.
Replies from: None↑ comment by [deleted] · 2012-06-04T18:41:27.664Z · LW(p) · GW(p)
I was talking about the bolded part (though I happen to approve of the text that follows it too) when I said that he is of course right. Our dealings with medicine seem tainted by an irrational risk aversion.
Replies from: othercriteria↑ comment by othercriteria · 2012-06-04T19:35:08.028Z · LW(p) · GW(p)
Fair enough. Only my last point sort of engages with the bolded text.
I think there are much sounder ways to buy fewer undertreatment deaths from those extra sigmas of confidence than the plan that Moldbug proposes.
↑ comment by Shmi (shminux) · 2012-06-03T23:11:01.986Z · LW(p) · GW(p)
I'm wondering if the comparison with a dogfight is fair, though. With only the conservative treatment Steve has months or years to live, while a single wrong move kills him quickly. Dogfights are the opposite: the conservative approach (absence of a single right move) has a significant chance of doing you in.
In other words, the effect on expected lifetime of doing something versus doing nothing is reversed between a dogfight and a treatment.
↑ comment by drethelin · 2012-06-03T20:20:07.228Z · LW(p) · GW(p)
A doctor in Australia wanted to use the product our company makes to try in vivo treatment of cancer, and we were unable to let him because of how insanely liable we would be; the high cost of a GMP facility (which would in no actual way improve the product) means it's unlikely ever to happen.
↑ comment by witzvo · 2012-06-08T05:07:28.638Z · LW(p) · GW(p)
The ethical principles of experimenting on human beings are pretty subtle. It's not just about protecting from quackery, though he is right that there is a legacy of Nuremberg involved. Read, for example, the guides that the Institutional Review Boards that approve scientific research must follow:
- Respect for persons involves a recognition of the personal dignity and autonomy of individuals, and special protection of those persons with diminished autonomy.
- Beneficence entails an obligation to protect persons from harm by maximizing anticipated benefits and minimizing possible risks of harm.
- Justice requires that the benefits and burdens of research be distributed fairly.
The most relevant principle here is "beneficence". Unless the experimenter can claim to be in equipoise about which of two procedures will be more beneficial, they're obligated to use the presumed better option (which means no randomization). You can get away with more in pursuit of practice than you can in pursuit of research, but practice is deliberately restricted to prevent obtaining generalizable knowledge.
Roughly put, society has decided that it would rather the only experiments we perform be ones with no appreciable possibility of harm to the participants, than allow that the progress of science sometimes requires noble volunteers to try things we can't be sure are good, and which might be expected to be a bit worse, so that society can learn when they turn out to be better, or when they teach us things that suggest the better option. In a more rational society, everyone would have to accept that their treatment might not be the best possible for them (according to our current state of ignorance), but would require that the treatment be designed so as to lead to generalizable knowledge for the future.
↑ comment by Multiheaded · 2012-06-04T07:21:28.486Z · LW(p) · GW(p)
I'm going to be a heretic
Shocking! Why, who'd expect it from such a pillar of society!
(Sure, he's 110% right in this isolated argument, and the medical industry is indeed a blatant, painfully obvious mafia. But one could make a bit of a case against this by arguing disproportionately risky outliers: e.g. what if we try to make AIDS attack itself but instead make it air- and waterborne, and then it slips away from the lab? What if we protect the AI industry from intrusive regulation early on when it's still safe, then suddenly it's an arms race of several UFAI projects, each hoping to be a little less bad than the others?)
Replies from: None, None↑ comment by [deleted] · 2012-06-04T09:33:24.998Z · LW(p) · GW(p)
What if we protect the AI industry from intrusive regulation early on when it's still safe, then suddenly it's an arms race of several UFAI projects, each hoping to be a little less bad than the others?
imagines US congress trying to legislate friendliness or regulate AI safety
ಠ_ಠ
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-04T09:58:59.137Z · LW(p) · GW(p)
Weirder, absurder stuff has happened - and certainly has been speculated about. In fact, Stanislaw Lem has written several novels and stories that largely depict Western bureaucracy trying to cope with AIs and other emerging technologies, and the insane disreality produced by that (His Master's Voice, Peace on Earth, Golem XIV and the unashamed Dick ripoff... er, homage Memoirs Found in a Bathtub). I've read the first three, they're great.
For that matter, check out Dick's short stories and essays too.
↑ comment by steven0461 · 2012-06-03T23:28:42.027Z · LW(p) · GW(p)
I see he's commented about LessWrong, also.
Replies from: gwern, JoshuaZ↑ comment by JoshuaZ · 2012-06-04T00:38:40.318Z · LW(p) · GW(p)
Frankly, that bit came across as more or less projection. Although he is marginally correct that there does seem, on occasion, to be an unhealthy attitude here that we're the only smart people.
Replies from: faul_sname↑ comment by faul_sname · 2012-06-05T21:35:21.891Z · LW(p) · GW(p)
On occasion? (Note that the "people in my group are smarter/better/otherwise superior" idea is not at all unique to LW.)
↑ comment by JoshuaZ · 2012-06-04T17:26:23.652Z · LW(p) · GW(p)
He's not right. He's marginally correct: First, he ignores that even under current circumstances a lot of people die from quackery (and in fact, the example he uses of Steve Jobs is arguably an example, since Jobs used various ineffective alternative medicines until it was too late). Moreover, cancer mortality rates are declining, so the system isn't as ineffective as he makes it out to be. His basic thrust may be valid - there's no question that the FDA has become more bureaucratic, and that some laws and regulations are preventing research that might otherwise go ahead. But he is massively overstating the strength of his case.
Replies from: None↑ comment by [deleted] · 2012-06-04T18:34:01.133Z · LW(p) · GW(p)
First, he ignores that even under current circumstances a lot of people die from quackery (and in fact, the example he uses of Steve Jobs is arguably an example, since Jobs used various ineffective alternative medicines until it was too late)
Steve Jobs sought out quackery. You seem to be confused about what is meant by quackery here:
The entire thrust of our medical regulatory system, from the Flexner Report to today, is the belief that it's better for 1000 patients to die of neglect, than 1 from quackery. Until this irrational fear of quack medicine is cured, there will be no real progress in the field.
People who die because they rely on alternative medicine aren't going to be helped in the slightest by an additional six or five or four sigmas of certainty within the walled garden of our medical regulatory system. Medical malpractice and incompetence are also not the correct meaning of "death by quackery" in the above text. Death by quackery quite clearly refers to deaths caused by experimental treatments while figuring out what the hell is happening.
You indeed miss a far better reason to criticize Moldbug here. A good reason for Moldbug being wrong is that even with those expensive six sigmas of certainty many people end up distrusting established medicine enough to seek alternatives. If you reduce the sigmas of certainty, more people will wander into the wild weeds outside the garden. These people seem more likely to be hurt than not.
Not only that: even controlling for these people, the six sigmas of certainty might also be buying us placebo for the masses. But this is easy to overestimate, since it is easy to forget how very ignorant people really are. They accept the "doctor's orders" and trust them with their lives not because the doctor is right or extremely likely to be right, but because he is high status and it is expected of people to follow "doctor's orders". The reasons doctors are high status in our society have little to do with them being good at what they do. Doctors have been respected in the West for a long time, and not so ancient is a time when it is plausible to argue that they killed more people than they saved. The truth of that last question matters far less than the fact that it can be raised at all! Leaving aside doctors in particular, it seems a near-universal among humans that healers, or at least one class of healers, are high status regardless of their efficacy.
Replies from: DanArmak↑ comment by DanArmak · 2012-06-08T13:08:18.721Z · LW(p) · GW(p)
Nevertheless, today I believe doctors save many more than they kill. I want doctors to treat me, and I want them to become much better at treating me. And if there's no better choice, I will cheerfully pay the price of more people turning to quackery, because I won't do it myself.
comment by lsparrish · 2012-06-01T05:30:07.003Z · LW(p) · GW(p)
I've been getting the feeling lately that LW-main is an academic ghetto where you have to be sophisticated and use citations and stuff. My model of what it should be is more like a blog with content that is interesting and educational, but shorter and easier to understand. These big 50-page "ebooks" that seem to get a lot of upvotes aren't obviously better to me than shorter and more to-the-point posts with the same titles would be.
Are we suffering sophistication inflation?
Replies from: wgd, beoShaffer↑ comment by wgd · 2012-06-01T06:32:19.119Z · LW(p) · GW(p)
I feel like the mechanism probably goes something like:
- People are generally pretty risk-averse when it comes to putting themselves out in public in that way, even when the only risk is "Oh no my post got downvoted"
- Consequently, I'm only likely to post something to main if I personally believe that it exceeds the average quality threshold.
An appropriate umeshism might be "If you've never gotten a post moved to Discussion, you're being too much of a perfectionist."
The problem, of course, is that there are very few things we can do to reverse the trend towards higher and higher post sophistication, since it's not an explicit threshold set by anyone but simply a runaway escalation.
One possible "patch" which comes to mind would be to set it up so that sufficiently high-scoring Discussion posts automatically get moved to Main, although I have no idea how technically complicated that is. I don't even think the bar would have to be that high. Picking an arbitrary "nothing up my sleeve" number of 10, at the moment the posts above 10 points on the first page of Discussion are:
- Low Hanging Fruit -- Basic Bedroom Decorating
- Only say 'rational' when you can't eliminate the word
- Short Primers on Crucial Topics
- List of underrated risks?
- [Link] Reason: the God that fails, but we keep socially promoting….
- A Protocol for Optimizing Affection
- Computer Science and Programming: Links and Resources
- The rational rationalist's guide to rationally using "rational" in rational post titles
- Funding Good Research
- Expertise and advice
- Posts I'd Like To Write (Includes Poll)
- Share Your Checklists!
- A Scholarly AI Risk Wiki
Which means that in the past week (May 25 - June 1) there have been 13 discussion posts gaining over 10 points. If all of these were promoted to Main, this would be an average post rate of just under 2 per day, which is potentially just around the level which some might consider "spammy" if they get the Less Wrong RSS.
Personally, though, I would be fully in favor of getting a couple of moderately-popular posts in my feed reader every morning.
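To make the proposed patch concrete, here is a minimal sketch of the promotion check in Python. Everything in it (the Post fields, the function name, the threshold constant) is invented for illustration, and it assumes nothing about LessWrong's actual codebase:

```python
from dataclasses import dataclass

PROMOTION_THRESHOLD = 10  # the arbitrary "nothing up my sleeve" number above


@dataclass
class Post:
    title: str
    score: int
    section: str  # "discussion" or "main"


def promote_high_scoring(posts):
    """Move every Discussion post at or above the threshold into Main.

    Returns the list of posts that were promoted, e.g. so a periodic job
    could log them or push them into the RSS feed.
    """
    promoted = []
    for post in posts:
        if post.section == "discussion" and post.score >= PROMOTION_THRESHOLD:
            post.section = "main"
            promoted.append(post)
    return promoted
```

Run periodically, this would have promoted the 13 posts listed above over the past week, i.e. just under 2 per day.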
Replies from: maia↑ comment by maia · 2012-06-01T20:36:28.159Z · LW(p) · GW(p)
I'd be concerned about posts like "the rational rationalist's guide" being moved to main. It's an amusing post, but I really don't think it meets the standards I would want for the main blog. And it is quite highly upvoted. I think this shows that just going by upvotes may be insufficient.
Replies from: wgd↑ comment by wgd · 2012-06-02T01:55:19.272Z · LW(p) · GW(p)
I'm not particularly attached to that metric, it was mostly just an example of "here's a probably-cheap hack which could help remedy the problem". On the other hand, I'm not convinced that one post means that a "Automatically promote after a score of 10" policy wouldn't improve the overall state of affairs, even if that particular post is a net negative.
Replies from: vi21maobk9vp↑ comment by vi21maobk9vp · 2012-06-02T11:16:27.234Z · LW(p) · GW(p)
Well, if the general idea is that Main-blog posts should be a good read per se, even without reading comments or any Discussion threads, I'd say that of the 13 posts in the list, there are:
- Low Hanging Fruit -- Basic Bedroom Decorating
- Expertise and advice
- A Protocol for Optimizing Affection
Information-conveying medium-length posts liked by the community.
- Only say 'rational' when you can't eliminate the word
- The rational rationalist's guide to rationally using "rational" in rational post titles
Very relevant in Discussion, out of context in Main.
- Short Primers on Crucial Topics
- Funding Good Research
- A Scholarly AI Risk Wiki
Discussion of low-level strategy. Will be useful for a general audience after we know how it turned out, maybe; currently it is a status update shown to those interested in the inner workings of the community.
- List of underrated risks?
- Share Your Checklists!
Questions, not blog posts.
- Posts I'd Like To Write (Includes Poll)
Between a question and a strategy discussion.
- [Link] Reason: the God that fails, but we keep socially promoting….
An interesting external link.
- Computer Science and Programming: Links and Resources
A set of external links.
All in all, I would say that 3 of the 13 clearly match my perception of the idea of "Main" and 2 more match my perception of the supposed reading pattern of "Main". For the majority of posts, moving them to Main means somewhat redefining Main. I don't have an opinion on whether that is a good or a bad idea (I read both for fun and don't believe in LW core values), but I do think that the majority of the high-voted Discussion posts cited had a respectable reason to be in Discussion.
↑ comment by beoShaffer · 2012-06-01T05:43:37.332Z · LW(p) · GW(p)
Can you give a specific example of recent posts that you object to? Because that is not the impression I've been getting.
Replies from: lsparrish↑ comment by lsparrish · 2012-06-01T06:06:13.453Z · LW(p) · GW(p)
I don't mean that I object to it completely, but the types that seem a bit overrepresented are like these:
- http://lesswrong.com/lw/cej/general_purpose_intelligence_arguing_the/
- http://lesswrong.com/lw/b7w/decision_theories_a_semiformal_analysis_part_iii/
- http://lesswrong.com/lw/bzy/crowdsourcing_the_availability_heuristic/
I did recently have an article of my own promoted which is less sophisticated, so I'm not complaining. I'm just wondering if people might be choosing to hold back out of fear that their less academic writing style is not good enough for main, and/or dressing it up more than need be.
Replies from: beoShaffer↑ comment by beoShaffer · 2012-06-01T06:25:47.371Z · LW(p) · GW(p)
I see what you're saying with the last one, but I don't think the middle one was overdressed. The first one is a tricky case, as the author was explicitly workshopping an academic piece in the making and had good reason to use LW for that purpose.
↑ comment by gwern · 2012-06-12T17:11:49.846Z · LW(p) · GW(p)
Let me guess it was one of the top posters
Yes.
who thought your recent criticism of the direction of the community got too much karma.
Yes; his criticism was trivially wrong, as could be seen just by looking at posts systematically.
Or maybe someone who didn't like your responses here.
Actually, I laid out exactly what was wrong with the post: it was a good idea which hadn't been developed anywhere to the extent that it would be worth reading or referring back to, and I gave pointers to the literature he could use to develop it.
The reason I told Konk that his contributions were slightly net negative - when he specifically asked for my opinion on the matter - was exactly what Vladimir_Nesov guessed: he was flinging around and contributing all sorts of things, and just generally increasing the noise to signal ratio. I suggested he simply develop his ideas better and post less; Konk was the one who decided that he should leave/take a long break, saying that he had a lot of academic work coming up as well.
Replies from: TimS, CharlieSheen, CharlieSheen↑ comment by TimS · 2012-06-12T17:52:43.176Z · LW(p) · GW(p)
I'm not convinced his criticism is wrong. Lukeprog listed lots of substantive recent articles, but I question whether they were progress, given the current state of the community (for example, I'd like more historical analysis à la James Q. Wilson).
Given the karma, it appears that the community is not convinced the criticism is wrong. Even if Konkvistador is wrong, he isn't trivially wrong.
Replies from: gwern↑ comment by gwern · 2012-06-12T18:50:03.945Z · LW(p) · GW(p)
Lukeprog listed lots of substantive recent articles, but I question whether they were progress, given the current state of the community (for example, I'd like more historical analysis à la James Q. Wilson).
I think you're shifting goalposts. 'Progress', whatever that is, is different from being insular, and ironically enough, genuine progress can be taken as insularity. (For example, Rational Wiki mocks LW for being so into TDT/UDT/*DT which don't yet have proper academic credentials and insinuates they represent irrational cult-like markers, even though those are some of the few topics I think LW has made clear-cut progress on!)
Given the karma, it appears that the community is not convinced the criticism is wrong. Even if Konkvistador is wrong, he isn't trivially wrong.
I don't like to appeal to karma. Karma is changeable, does change, and should change as time passes, the karma at any point being only a provisional estimate: I have, here and on Reddit, on occasion flipped a well-upvoted (or downvoted) comment to the other sign by a well-reasoned or researched rebuttal to some comment that is flat-out wrong.
Perhaps people simply hadn't looked at the list of recent posts to notice that the basic claim of insularity was obviously wrong, or perhaps they were being generous and like you, read him as claiming something more interesting or subtle or not so obviously wrong like 'LW is not working on non-LW material enough'.
Replies from: TimS↑ comment by TimS · 2012-06-12T18:56:20.315Z · LW(p) · GW(p)
Fair enough about karma. But the first sentence of Konkvistador's post (after the rhetorical question) says:
we very seldom seem to adopt useful vocabulary or arguments or information from outside of LessWrong.
And the second paragraph of the post begins:
The community seems to not update on ideas and concepts that didn't originate here.
That looks a lot like saying, "LW is not working on non-LW material enough".
Replies from: gwern↑ comment by gwern · 2012-06-12T19:05:23.879Z · LW(p) · GW(p)
Well, look through the examples, or heck, posts since then. Do you see people refusing to update? 'No, I refuse to believe the Greeks could have good empirical grounds for rejecting heliocentrism! I defy your data! And ditto for the possibility Glenn Beck wrote anything flattering to our beliefs!'
Replies from: TimS↑ comment by TimS · 2012-06-12T19:09:38.276Z · LW(p) · GW(p)
What I mean is that certain methodological approaches are heavily disfavored. Slightly longer version of my point here.
Edit: And who is moving the goalposts now? You said "position X" is not trivially wrong. I said, "Here's an example of Konkvistador articulating position X."
Replies from: gwern↑ comment by gwern · 2012-06-12T19:16:50.557Z · LW(p) · GW(p)
Since history is so often employed for political purposes ("It is a principle that shines impartially on the just and unjust that once you have a point of view, all history will back you up"), it's not surprising we don't discuss it much. If, even with this disfavoring, people still think posts like http://lesswrong.com/lw/cuk/progress/ are worth posting and inspiring pseudohistory like this - then this is not a disfavoring I can disfavor.
Not that excluding one area is much evidence of insularity. If one declares one will eat only non-apples, is one an insular and picky eater?
Replies from: TimS↑ comment by TimS · 2012-06-12T19:24:21.465Z · LW(p) · GW(p)
I absolutely agree that history is filled with politically motivated bias. But there are actual historical facts (someone won the Siege of Vienna of 1529, and it wasn't the Ottoman Empire). There are historical theories that actually fit most of the facts and pseudo-historical theories that fit carefully selected sets of facts. Being able to tell the difference is a valuable skill that members of this community should try to develop.
To put it differently, the falsity of the theory of moral progress has implications for assessing the difficulty of building a Friendly AI, doesn't it?
Replies from: gwern↑ comment by gwern · 2012-06-12T19:54:27.215Z · LW(p) · GW(p)
There are historical theories that actually fit most of the facts and pseudo-historical theories that fit carefully selected sets of facts. Being able to tell the difference is a valuable skill that members of this community should try to develop.
And how does one do that? The problem is that most historical facts are publicly available, so how does one distinguish a theory produced by data mining and overfitting from one that wasn't? The only historian I can think of who has anything close to an answer to that is Turchin, via the usual statistics method of holding back data to test the extrapolations.
Turchin and Carrier are discussed occasionally, but not that much; why should I think this is not the right amount of discussion?
Replies from: TimS, Eugine_Nier↑ comment by TimS · 2012-06-12T20:06:52.196Z · LW(p) · GW(p)
The bigger problem with most historical analysis takes the following form:
1) Pick a historical thesis (usually because it supports one's pre-existing moral positions)
2) Find all historical evidence that supports that theory
3) Throw any remaining historical evidence in the trash
If you have successfully avoided that trap, congratulations. Society as a whole has not, and this community is not noticeably better than the greater societies we are drawn from.
↑ comment by Eugine_Nier · 2012-06-14T00:33:12.258Z · LW(p) · GW(p)
There are historical theories that actually fit most of the facts and pseudo-historical theories that fit carefully selected sets of facts. Being able to tell the difference is a valuable skill that members of this community should try to develop.
And how does one do that? The problem is that most historical facts are publicly available, so how does one distinguish a theory produced by data mining and overfitting from one that wasn't?
This is a thick problem.
↑ comment by CharlieSheen · 2012-06-12T18:52:42.393Z · LW(p) · GW(p)
Apologies for the harsh language gwern. I shouldn't have used it. I will edit and retract to correct that.
↑ comment by CharlieSheen · 2012-06-12T17:40:24.839Z · LW(p) · GW(p)
Yes; his criticism was trivially wrong, as could be seen just by looking at posts systematically.
I didn't think so. Neither did the many posters who publicly endorsed the post.
Actually, I laid out exactly what was wrong with the post: it was a good idea which hadn't been developed anywhere to the extent that it would be worth reading or referring back to, and I gave pointers to the literature he could use to develop it.
Also, Lukeprog thought the article you found so clearly deficient worthy of inclusion on his productivity list. Either you are wrong and his article isn't crap, or Luke's standards on what counts as productivity are too low, in which case your argument against his criticism - that we aren't making proper progress - is that much weaker.
Also, we have different styles of writing. Have you noticed how people are getting bored of Main? Guess what, maybe that's because it's becoming a wannabe academic ghetto dominated by only your style, where new posters don't dare contribute.
It may seem natural to a natural systematizing, archiving outlier like you to spend a whole lot of time polishing your stuff to perfection, but all this will result in is a small bag of boring posts of uniformly decent but not extraordinary quality. Isn't it funny that hardly any old Eliezer sequence post lives up to the citation-heavy, research-made-explicit standards you set? Such an article would be upvoted by the common poster, make no mistake, but l33t busybodies like you would home in on the technicalities.
The reason I told Konk that his contributions were slightly net negative
First off, the community obviously disagrees: aside from the positive comments on his contributions that I could dig up, he has received more karma in the past 30 days than any other single poster, and ~7k overall isn't bad at all. And no, this wasn't due to mass spamming; his average post has like 5 karma or something. Fracking Nerdling on a stick, he's even currently like 50 points ahead of Eliezer "HPMOR" Yudkowsky, who descended from his throne to write an article answering criticism threatening his funding.
The reason I told Konk that his contributions were slightly net negative - when he specifically asked for my opinion on the matter
If Konkvistador flat out asked you whether it would be overall better than the current situation for him to stop posting at all, and you responded with a yes, then you lack a social brain, because the right answer is not "yes" but "no, but you should work harder on improving" - especially since he apparently hero-worships you.
I suggested he simply develop his ideas better and post less; Konk was the one who decided that he should leave/take a long break, saying that he had a lot of academic work coming up as well.
Have you heard about "saving face"? There is probably an added language and cultural barrier, misunderstandings are common even with those superficially well versed in English.
Also, your dark-artsily referring to him as "Konk" with faux affection to manipulate the crowd doesn't impress me.
Edit:
2nd Edit: Toned it down.
Replies from: ArisKatsaris, TimS, gwern↑ comment by ArisKatsaris · 2012-06-12T18:13:34.860Z · LW(p) · GW(p)
Tell me, downvoters, did you even read my comment
I read your comment, and I downvoted you because it was rude towards gwern, calling him a "damn robot". And I'm one of the guys that urged Konkvistador to stay, in a comment above. That doesn't excuse your rudeness. So you get properly downvoted by me (and gwern got upvoted because I like that he spoke up and declared he was the "top poster" in question and also gave a clear explanation of his reasons).
That Konkvistador gave gwern's criticism more weight than he should have isn't gwern's fault; it's Konkvistador's.
Replies from: CharlieSheen↑ comment by CharlieSheen · 2012-06-12T18:22:25.147Z · LW(p) · GW(p)
I guess you are right. OK, I'll edit away the "damn robot" part. My points, however, haven't been addressed.
↑ comment by gwern · 2012-06-12T19:00:11.072Z · LW(p) · GW(p)
Also, Lukeprog thought the article you found so clearly deficient worthy of inclusion on his productivity list. Either you are wrong and his article isn't crap, or Luke's standards on what counts as productivity are too low, in which case your argument against his criticism - that we aren't making proper progress - is that much weaker.
Yeah, maybe. Other possibilities include being ironic: if he objects to his inclusion on the list...
Also, we have different styles of writing. Have you noticed how people are getting bored of Main? Guess what, maybe that's because it's becoming a wannabe academic ghetto dominated by only your style, where new posters don't dare contribute. It may seem natural to a natural systematizing, archiving outlier like you to spend a whole lot of time polishing your stuff to perfection, but all this will result in is a small bag of boring posts of uniformly decent but not extraordinary quality. Isn't it funny that hardly any old Eliezer sequence post lives up to the citation-heavy, research-made-explicit standards you set? Such an article would be upvoted by the common poster, make no mistake, but l33t busybodies like you would home in on the technicalities.
People are getting bored of Main because the best contributors like Yvain or Eliezer have other things to do, and the standard topics are hard to go over again without either repetition or going into depth beyond most readers. It happens: wells run dry or the material becomes too advanced. And everyone else isn't stepping up to the plate. So, things become less interesting.
I don't criticize those posts because Eliezer uses cites all the time in the sequences, and where he doesn't, I often know the citations anyway from past discussions on SL4, standard transhumanist reading materials, the old SIAI Bookshelf, book & paper recommendations, etc.
Also, your dark-artsily referring to him as "Konk" with faux affection to manipulate the crowd doesn't impress me.
I'm glad that you were able to explain why I and other chatters in #lesswrong sometimes called him by that shortcut: we were just manipulating the IRC crowd.
like they pulled with K and Roko?
Good grief. Maybe I should just put up IRC logs for the past few days so people can see for themselves what was said...
Replies from: CharlieSheen↑ comment by CharlieSheen · 2012-06-12T19:03:53.282Z · LW(p) · GW(p)
Yeah, maybe. Other possibilities include being ironic: if he objects to his inclusion on the list...
That's not very nice. Apparently LW is big on being nice. See, I'm learning.
I'm glad that you were able to explain why I and other chatters in #lesswrong sometimes called him by that shortcut: we were just manipulating the IRC crowd.
This is the first time I heard about this conversation occurring on IRC. OK, so I'm assuming Konk is a nick people use for him over there. But why use it on LW in this context? Come now, you were trying to communicate "oh look, I'm socially near to him".
I don't criticize the posts because Eliezer uses cites all the time in the sequences, and where he isn't, I often know the citations anyway from past discussions on SL4, standard transhumanist reading materials, the old SIAI Bookshelf, book & paper recommendations, etc.
You aren't always the intended audience. Criticism from the perspective of those unfamiliar with Yudkowsky's arguments is more valuable, don't you agree? The point of the sequences is to bring people up to speed.
Replies from: gwern↑ comment by gwern · 2012-06-12T19:08:34.876Z · LW(p) · GW(p)
That's not very nice.
It's both clever and a dilemma which teaches a relevant point; it may not be nice, but that doesn't matter.
This is the first time I heard about this conversation occurring on IRC.
Does it matter that it was IRC as opposed to a separate forum website? If it does matter, then perhaps you were jumping to conclusions in interpreting 'off-site'...
You aren't always the intended audience. Criticism from the perspective of those unfamiliar with Yudkowsky's arguments is more valuable, don't you agree?
Sure. But that's by definition criticism I am unable to give and an audience I am not in. Am I to be blamed for preferring the material I learn more from?
comment by Grognor · 2012-06-01T13:57:17.963Z · LW(p) · GW(p)
My brain came up with this thought:
All else being equal, a murder is better than an accidental death, because a murder at least satisfies someone's preferences.
I was very tempted to take this as a reductio ad absurdum of consequentialism, to find all the posts where I advocated consequentialism and edit them, saying I'm not a consequentialist anymore, and to rethink my entire object-level ethics from the ground up.
And then my brain came up with other thoughts that defeated the reductio and I'm just as consequentialist as before.
For some reason, this was all very scary to me. This is the third data point now in examples of, "Grognor's opinion being changed by arguments way too easily". I think I'm gullible.
Three things: 1) I'm curious if other consequentialists will find the same knockdown for the reductio that I did; 2) Should I increase my belief in consequentialism since it just passed a strenuous test, decrease it because it just barely survived a bout with a crippling illness, or leave it the same because they cancel out or some other reason? 3) I can't seem to figure out when not to change my mind in response to reasonable-looking arguments. Help
Replies from: gwern, wedrifid, Ezekiel, Oscar_Cunningham, djcb, Alicorn↑ comment by gwern · 2012-06-01T14:39:12.676Z · LW(p) · GW(p)
Maybe you need to pay more attention to the ceteris paribus. When you include that, it seems perfectly sensible to me.
Consider a world in which in 1945 Adolf Hitler will either choke to death on a piece of spaghetti or will be poisoned by a survivor of the death camps that bribed his way into Hitler's bunker...
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-04T07:08:08.496Z · LW(p) · GW(p)
Pop psych states that murder, especially first-time murder, induces lifelong psychological trauma in neurotypical adult people - and that, therefore, most of them lose more (I'm not saying "more utility") than they gain.
Clearly, that wouldn't be the case with the death camp survivor [1], but I can see a sane, relatively untraumatized civilian who'd volunteer for Hitler's post-war execution regretting their loss of innocence afterwards.
[1] I've heard that this was what happened with the commandant of Dachau and some of the SS guards there, who were turned over to the liberated prisoners by American soldiers, and presumably torn apart by them.
http://en.wikipedia.org/wiki/Dachau_massacre
↑ comment by [deleted] · 2012-06-04T09:51:08.877Z · LW(p) · GW(p)
Not being a murderer, regardless of the utilitarian pay-offs, is an important part of people's identity and reasoning about morality.
It is fascinating just how damn crushing such tags can get once you start multiplying them. Even in the eyes of other people. Consider someone who has killed for the greater good. Now consider someone who has killed, raped and pillaged for the greater good (by historical standards this is the regular war hero pack).
Now consider someone who has killed, raped, blackmailed and tortured for the greater good. One may be glad that such people do exist and that they are on one's "side". But wouldn't you feel uneasy around such a person? Especially if you couldn't abstract away their acts but had to watch, say, videos of them being performed.
Imagine carrying those memories: what is your self-conception? The only tale of virtue you have left when you are alone at night is that you possess the virtue of being the kind of person who is capable of suspending moral inhibitions based on long chains of reasoning. Maybe you are just that good at reasoning. Maybe.
Replies from: wedrifid, Multiheaded↑ comment by wedrifid · 2012-06-04T10:47:03.173Z · LW(p) · GW(p)
Now consider someone who has killed, raped and pillaged for the greater good (by historical standards this is the regular war hero pack).
The parenthetical is true, but the raping and (for the most part) the pillaging was for personal gain, not the public good. It takes much more effort to contrive scenarios with folks who "rape for the public good".
Replies from: None, TheOtherDave↑ comment by [deleted] · 2012-06-04T12:08:31.223Z · LW(p) · GW(p)
No more than torture for the public good, since rape can be used as a form of torture. It also has been used as a form of psychological warfare. Also pillaging can be vital to easing logistic difficulties of your side.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-05T06:29:11.703Z · LW(p) · GW(p)
Also pillaging can be vital to easing logistic difficulties of your side.
Indeed, if the good guys are murdering whom they want and extorting stuff from the populace, it's called a resistance movement, and in a generation there's hardly anyone who thinks ill of them. See Russia, Spain, China etc.
↑ comment by TheOtherDave · 2012-06-04T13:39:34.130Z · LW(p) · GW(p)
By implying by omission that the killing was not mostly for personal gain, do you mean to suggest that it was for the public good, or to invoke a non-excluded middle?
Replies from: wedrifid↑ comment by wedrifid · 2012-06-04T19:29:25.580Z · LW(p) · GW(p)
By implying by omission that the killing was not mostly for personal gain, do you mean to suggest that it was for the public good, or to invoke a non-excluded middle?
I make no claim about the killing - that is at least arguable and inclusion would distract from the main point that the raping in the example given (historic war bands) was not.
↑ comment by Multiheaded · 2012-06-04T10:05:00.735Z · LW(p) · GW(p)
Also - and primarily - this. Damn, of course I considered those aspects too, I'm not so psychologically blind as to not understand them. I just was too lazy to hammer that idea into shape for my comment. So you deserve all the karma for it.
Um, sorry, I'm just frustrated by how often I neglect to mention some facet of an argument due to it being difficult to communicate through written word.
Replies from: None↑ comment by [deleted] · 2012-06-04T10:25:54.086Z · LW(p) · GW(p)
I was merely elaborating on an argument that I thought was already there but deserved some more attention. Particularly in this line:
regretting their loss of innocence afterwards
Replies from: Multiheaded, Multiheaded
↑ comment by Multiheaded · 2012-06-04T10:38:37.294Z · LW(p) · GW(p)
Yep, but you see, there's a difference between "mere" emotional anguish, which is, after all, biologically constrained in a way, and identity-related problems (which, as I understand it, can ruin a person precisely by using their "normal" state, with conscious voiced thoughts, as a carrier). It's mostly bad to feel bad about yourself, but to know bad about yourself - seemingly in the empirical sense, just like you know there's a monitor in front of you - is even worse.
Not everyone would see this in my phrase; I should've elaborated.
↑ comment by Multiheaded · 2012-06-04T10:42:17.695Z · LW(p) · GW(p)
BTW, I've sent you a few PMs with interesting (IMO) questions over the last few weeks/months, and none have been answered! I don't wish to embarrass you, I'm just curious if they might've been simply eaten by the mail gremlins. :) Might I just copy them and re-send them, so you could share an opinion or two at your leisure?
Replies from: None↑ comment by [deleted] · 2012-06-04T12:00:33.147Z · LW(p) · GW(p)
Oh, don't worry, I'm going to respond to all of those in order; if you remember, I did send you a PM explaining that I was going to respond to them eventually (that dreadful word). Quite honestly, though, I really dislike LW's PM system; for starters, my inbox contains both PMs and regular public responses, and they get kind of lost in the mail, so there is that trivial barrier to responding.
I think I've already mentioned that I'd like to move our correspondence to email, so if you wouldn't mind sending the text of your previous unanswered PMs in that format, or me sending you an email quoting them, I would much prefer that mode of communication. I'm also very much open to communicating live via Skype or other IM programs.
Though obviously we'll probably PM such contact data instead of disclosing it publicly.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-04T12:14:23.049Z · LW(p) · GW(p)
Kthx.
↑ comment by wedrifid · 2012-06-04T11:00:43.256Z · LW(p) · GW(p)
All else being equal, a murder is better than an accidental death, because a murder at least satisfies someone's preferences.
I was very tempted to take this as a reductio ad absurdum of consequentialism, to find all the posts where I advocated consequentialism and edit them, saying I'm not a consequentialist anymore, and to rethink my entire object-level ethics from the ground up.
It can't be a reductio ad absurdum of consequentialism, because the quoted claim isn't even implied by consequentialism. It is implied by some forms of utilitarianism. Consequentialism cares (directly) only about one set of preferences, and the fact that the murderer has a preference for successfully murdering doesn't get a positive weighting unless the specific utility function arbitrarily happens to give it one. It is just as easy to have a consequentialist utility function that prefers the accident to the murder as the reverse.
↑ comment by Ezekiel · 2012-06-02T18:09:48.598Z · LW(p) · GW(p)
3) I allot a reasonable-seeming amount of time to think before deciding to drastically change something important. The logic is that the argument isn't evidence in itself - the evidence is the fact that the argument exists, and that you're not aware of any flaws in it. If you haven't thought about it for a while, the probability of having found flaws is low whether or not those flaws exist - so not having found them yet is only weak evidence against your current position.
So basically, "Before you've had time to consider them".
↑ comment by Oscar_Cunningham · 2012-06-01T16:02:40.126Z · LW(p) · GW(p)
3) I can't seem to figure out when not to change my mind in response to reasonable-looking arguments. Help
See this quote. Presumably you already had strong arguments in favour of consequentialism. So when you came across a knock-down counterexample your first reaction should have been confusion. When you encounter a convincing argument against a position you hold strongly, bring to mind the arguments that first convinced you of that position and try to bring the opposing arguments into direct conflict with them. It should then be clear that one of the arguments has a logical flaw in it. Find out which one.
Replies from: Ezekiel↑ comment by Ezekiel · 2012-06-02T18:02:24.699Z · LW(p) · GW(p)
That seems like it could easily slip into rehearsing the evidence, which can be disastrous. Watch out for that.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2012-06-02T18:54:21.276Z · LW(p) · GW(p)
Yes, I only felt okay about recommending it because Grognor was complaining of exactly the opposite problem.
↑ comment by djcb · 2012-06-23T20:35:36.615Z · LW(p) · GW(p)
Hmmm. Murder decreases 'expected utility' (cf. life expectancy), so I think it would still be considered bad in some forms of consequentialism. The corner case - where expected utility would not change (much) - would be e.g. shooting somebody who is falling off a cliff and will certainly not survive.
More generally, it seems ethical systems are usually post-hoc organizing principles for our messy ethical intuitions. However, those intuitions are so messy that for every simple set of rules we can find some exception. Hence we get things like the trolley problem...
comment by Sniffnoy · 2012-06-04T05:23:09.095Z · LW(p) · GW(p)
Taken straight from the top of Hacker News: Eulerian Video Magnification for Revealing Subtle Changes in the World.
In short, some people have found an algorithm for amplifying periodic changes in a video. I suggest watching the video, the examples are striking.
The primary example they use is that of being able to measure someone's pulse by detecting the subtle variations of color in their face.
The relevance here, of course, is that it's a very concrete illustration of the fact that there's a hell of a lot of information out there to be extracted (That Alien Message, etc.). Makes a nice companion example to the AI box experiment -- "Suppose you didn't even know this was possible, because the AI had figured it out first?"
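For the curious, the core trick reduces to a temporal bandpass filter applied to every pixel, with the filtered signal amplified and added back. Here is a minimal sketch of that idea in Python; the function name, parameters, and toy data are all invented for illustration, and the published method additionally decomposes each frame into a spatial pyramid and filters each level, which this skips.

```python
import numpy as np
from scipy.signal import butter, filtfilt


def magnify(frames, fps, lo=0.8, hi=1.2, alpha=50.0):
    """Amplify subtle periodic changes (e.g. a ~1 Hz pulse) in a video.

    frames: float array of shape (time, height, width, channels) in [0, 1].
    A Butterworth bandpass keeps only per-pixel variation between lo and
    hi Hz; that tiny signal is scaled by alpha and added back.
    """
    b, a = butter(2, [lo, hi], btype="bandpass", fs=fps)
    subtle = filtfilt(b, a, frames, axis=0)  # filter each pixel over time
    return np.clip(frames + alpha * subtle, 0.0, 1.0)


# Toy check: 10 s of 30 fps noise hiding an invisible 1 Hz flicker.
rng = np.random.default_rng(0)
t = np.arange(300) / 30.0
flicker = 0.001 * np.sin(2 * np.pi * t)[:, None, None, None]
frames = 0.5 + flicker + 0.01 * rng.standard_normal((300, 8, 8, 3))
out = magnify(frames, fps=30)  # the 1 Hz component is now ~50x stronger
```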
comment by sixes_and_sevens · 2012-06-01T11:15:33.697Z · LW(p) · GW(p)
I've recently started Redditing, and would like to take the opportunity to thank the LW readership for generally being such a high calibre of correspondent.
Thank you.
Replies from: faul_sname↑ comment by faul_sname · 2012-06-01T17:44:02.510Z · LW(p) · GW(p)
Subscribe to the smaller subreddits, and also Depthhub. This will drastically improve your procrastination experience.
comment by Paul Crowley (ciphergoth) · 2012-06-02T10:09:07.587Z · LW(p) · GW(p)
Convergent instrumental goal: Kill All Humans
Katja Grace lists a few more convergent instrumental goals that an AGI would have absent some special measure to moderate that goal. It seems to me that the usual risk from AI can be phrased as a CIV of "kill all humans". Not just because you are made of atoms that can be used for something else, but because if our goals differ, we humans are likely to act to frustrate the goals of the AGI and even to destroy it, in order to maximize our own values; killing us all mitigates that risk.
comment by vi21maobk9vp · 2012-06-01T05:44:15.973Z · LW(p) · GW(p)
Do you consider the Stupid Questions Open Thread a useful thing? Do you want new iterations to appear more regularly? How often?
Even though I didn't ask anything in it, I enjoyed reading it and participating in discussions, and I think that it could reduce the "go to Sequences as in go to hell" problem and sophistication inflation.
I would like it to reoccur with approximately the regularity of usual Open Threads; maybe not on calendar basis, but after a week of silence in the old one or something like that.
Replies from: beoShaffer, Tuxedage↑ comment by beoShaffer · 2012-06-01T06:27:14.034Z · LW(p) · GW(p)
I consider them useful and roughly agree with the time interval you suggest.
Replies from: maia↑ comment by Tuxedage · 2012-06-01T14:06:40.793Z · LW(p) · GW(p)
I, too, enjoy Open Threads, and I feel that they should occur with higher frequency, around every week or so.
Replies from: vi21maobk9vp↑ comment by vi21maobk9vp · 2012-06-01T18:33:35.535Z · LW(p) · GW(p)
Stupid Questions Open Threads are not simply Open Threads like this one. They are special threads where people come to ask questions that are probably already answered in multiple discussions on LW. The most recent such thread is: http://lesswrong.com/lw/bws/stupid_questions_open_thread_round_2/
comment by Kaj_Sotala · 2012-06-04T12:09:24.779Z · LW(p) · GW(p)
If there's still somebody who thinks that the word "Singularity" hasn't lost all meaning, look no further than this paper:
We agree with Vinge's suggestion for naming events that are “capable of rupturing the fabric of human history” (or leading to profound societal changes) as a “singularity” [...] In this paper, we consider two past singularities (arguably with important enough social change to qualify) [...]. The globalization occurring under Portuguese leadership of maritime empire building and naval technological progress is characterized by a metric describing diffusion. The revolution in time keeping, on the other hand, is characterized by a technological capability metric.
comment by [deleted] · 2012-06-03T09:30:54.065Z · LW(p) · GW(p)
Relevant to thinking about Moldbug's argument that decline in the quality of governance is masked by advances in technology and Pinker's argument on violence fading.
Murder and Medicine: The Lethality of Criminal Assault 1960-1999
Despite the proliferation of increasingly dangerous weapons and the very large increase in rates of serious criminal assault, since 1960, the lethality of such assault in the United States has dropped dramatically. This paradox has barely been studied and needs to be examined using national time-series data. Starting from the basic view that homicides are aggravated assaults with the outcome of the victim's death, we assembled evidence from national data sources to show that the principal explanation of the downward trend in lethality involves parallel developments in medical technology and related medical support services that have suppressed the homicide rate compared to what it would be had such progress not been made. We argue that research into the causes and deterrability of homicide would benefit from a "lethality perspective" that focuses on serious assaults, only a small proportion of which end in death.
A blogger commenting on the study, and summarizing the bottom line pretty well:
"2010 homicide rate approximately the same as 1960 rate, despite around 3x the amount of aggravated assault. If you look in the paper's discussion on motor vehicle fatalities, you will see almost exactly the same story. Nearly all of the improvement in recent years is the result of technology, not governance."
comment by cousin_it · 2012-06-02T17:49:14.517Z · LW(p) · GW(p)
Here's a math problem that came up while I was cleaning up some decision theory math. Oh mighty LW, please solve this for me. If you fail me, I'll try MathOverflow :-)
Prove or disprove that for any real number $p$ between 0 and 1, there exist finite or infinite sequences $(x_m)$ and $(y_n)$ of positive real numbers, and a finite or infinite matrix $(\varphi_{mn})$ of numbers each of which is either 0 or 1, such that:
1) $\sum_m x_m = 1$
2) $\sum_n y_n = 1$
3) $\forall n\colon \sum_m x_m \varphi_{mn} = p$
4) $\forall m\colon \sum_n y_n \varphi_{mn} = p$
Right now I only know it's true for rational $p$.
ETA Now I also know that finite sequences can only yield rational $p$. It was quite fun to prove.
ETA 2 Asked it on MathOverflow several hours ago, no good answers yet.
ETA 3 Gerald Edgar on MathOverflow has solved it. This is the third math problem I posted to MO, each one had stumped me for more than a day, and each one was solved in less than a day.
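For reference, one simple construction for the rational case (a sketch I believe works; it is not necessarily the argument cousin_it or Gerald Edgar had in mind): for $p = a/b$ with $0 < a < b$, take $b$ equal weights $x_m = y_n = 1/b$ and let $\varphi$ be a circulant 0/1 matrix with exactly $a$ ones in every row and column, so each sum in conditions 3 and 4 equals $a/b$. A quick Python check of that construction:

```python
from fractions import Fraction


def rational_witness(a, b):
    """Construct x, y, phi satisfying conditions 1-4 for p = a/b, 0 < a < b."""
    x = [Fraction(1, b)] * b  # condition 1: b equal weights summing to 1
    y = [Fraction(1, b)] * b  # condition 2: likewise
    # Circulant matrix: row m has ones in columns m, m+1, ..., m+a-1 (mod b),
    # so every row and every column contains exactly a ones.
    phi = [[1 if (n - m) % b < a else 0 for n in range(b)] for m in range(b)]
    return x, y, phi


a, b = 3, 7
x, y, phi = rational_witness(a, b)
p = Fraction(a, b)
assert sum(x) == 1 and sum(y) == 1
assert all(sum(x[m] * phi[m][n] for m in range(b)) == p for n in range(b))  # 3
assert all(sum(y[n] * phi[m][n] for n in range(b)) == p for m in range(b))  # 4
```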
comment by NancyLebovitz · 2012-06-01T09:30:54.570Z · LW(p) · GW(p)
The risk of supervolcanos looks higher than previously thought, though none are imminent.
Is there anything conceivable which can be done to ameliorate the risk?
Replies from: gwern↑ comment by gwern · 2012-06-01T14:37:06.959Z · LW(p) · GW(p)
The only suggestion I've heard is self-sustaining space colonies, which obviously is not doable anytime soon. Depending on the specifics, buried bunkers might work, as long as they're not on the same continent, where they'd be buried in the ash or lava.
Replies from: faul_sname, NancyLebovitz↑ comment by faul_sname · 2012-06-01T17:51:03.617Z · LW(p) · GW(p)
How long will it actually take for a self-sustaining colony in LEO to be plausible? We have the ISS and Biosphere 2, and have for quite some time. Zero G poses some problems, but certainly not insurmountable ones. It looks like we have at least a few hundred years of advance notice, which would likely be enough time to set up an orbital colony even with only current technology.
Besides, it looks like past a couple hundred miles, the eruption would be survivable without any special measures, though agriculture would be negatively impacted.
↑ comment by NancyLebovitz · 2012-06-01T17:24:59.357Z · LW(p) · GW(p)
Sounds like something best left for the future, which I hope will have much better tech. Tectonic engineering? Forcefields? Everyone uploaded?
Replies from: gwern↑ comment by gwern · 2012-06-01T17:28:15.223Z · LW(p) · GW(p)
Some existential risks may simply be intractable, and the bullet must be bitten. It's not like we can do anything about a vacuum collapse either.
comment by [deleted] · 2012-06-08T07:45:49.185Z · LW(p) · GW(p)
Many articles don't use tags at all; others often misuse or underuse them. Too bad only article authors and editors can edit tags. I can't count the times I was researching a certain topic on LW and felt a micro-annoyance when I found an article that clearly should be tagged but isn't.
Could we perhaps make a public list of possible missing or poor tags by author, and then ask the author or an editor to fix it?
Replies from: Alicorn
comment by Mitchell_Porter · 2012-06-01T06:34:49.186Z · LW(p) · GW(p)
Could someone involved with TDT justify the expectation of "timeless trade" among post-singularity superintelligences? Why can't they just care about their individual future light-cones and ignore everything else?
Replies from: wedrifid↑ comment by wedrifid · 2012-06-01T08:54:56.178Z · LW(p) · GW(p)
Could someone involved with TDT justify the expectation of "timeless trade" among post-singularity superintelligences?
People (with the exception of Will) have tended not to be forthcoming with public declarations that the extreme kinds of "timeless trade" that I assume you are referring to are likely to occur.
Why can't they just care about their individual future light-cones and ignore everything else?
(There are a few reasons of various levels of credibility, but allow me to speak to the most basic application.)
If an agent really doesn't care about anything else, then they can do that. Note that just caring about their individual future light-cones and ignoring everything else means:
- You would prefer having one extra dollar to having an entire galaxy just on the other side of your future light cone transformed from being tiled with tortured humans into a paradise.
- If the above galaxy were one galaxy closer - just this side of your future light cone - then you will care about it fully.
- Your preferences are not stable. They are constantly changing. At time t you assign x utility to a state of (galaxy j at time t+10). At time t+1 you assign exactly 0 utility to the same state of (galaxy j at time t+10).
- Such an agent would be unstable and would self modify to be an agent that constantly cares about the same thing. That is, the future light cone of the agent at time of self modification.
Those aren't presented as insurmountable problems, just as implications. It is not out of the question that some people really do have preferences that assign literally zero weight to stuff across some arbitrary threshold. It's even more likely that many people have preferences that care only a very limited amount about stuff across some arbitrary threshold. Superintelligences trying to maximize those preferences would engage in no, or little, acausal trade with drastically physically distant superintelligences. Trade - including acausal trade - occurs when both parties have something the other guy wants.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2012-06-01T09:49:07.948Z · LW(p) · GW(p)
So it seems that selfish agents only engage in causal trade but that altruists might also engage in acausal trade.
Replies from: wedrifid↑ comment by wedrifid · 2012-06-01T13:05:05.960Z · LW(p) · GW(p)
So it seems that selfish agents only engage in causal trade but that altruists might also engage in acausal trade.
We can't quite say that. It is certainly much simpler to imagine scenarios where acausal trade between physically distant agents occurs if those agents happen to care about things aside from their own immediate physical form. But off the top of my head "acausal teleportation" springs to mind as something that would result in potential acausal trading opportunities. Then there are things like "acausal life insurance and risk mitigation" which also give selfish agents potential benefits through trade.
comment by beoShaffer · 2012-06-01T05:37:07.564Z · LW(p) · GW(p)
I am doing a study on pick-up artistry. Currently I'm doing exploratory work to develop/find an abbreviated pick-up curriculum and operationalize pick-up success. I've been able to find some pretty good online resources*, but would appreciate any suggestions for further places to look. As this is undergraduate research I'm on a pretty nonexistent budget, so free stuff is greatly preferred. That said, I can drop some out-of-pocket cash if necessary. If anyone with pick-up experience can talk to me, especially to give feedback on the completed materials, that would be great.
*Seduction Chronicles and Attractology have been particularly useful
Replies from: James_Miller↑ comment by James_Miller · 2012-06-02T06:17:49.249Z · LW(p) · GW(p)
If you will need to convince a professor to someday give you a passing grade on this work I hope you are taking into account that most professors would consider what you are doing to be evil. Never, ever describe this kind of work on any type of graduate school application. Trust me, I know a lot about this kind of thing.
Replies from: Kaj_Sotala, beoShaffer, KPier↑ comment by Kaj_Sotala · 2012-06-02T07:43:15.489Z · LW(p) · GW(p)
Trust me, I know a lot about this kind of thing.
I'd be curious to hear more about the details of that episode.
Replies from: James_Miller↑ comment by James_Miller · 2012-06-02T16:58:46.129Z · LW(p) · GW(p)
I wrote up what happened for Forbes. I later found out that it was Smith's president, not its Board of Trustees, that finally decided to give me tenure.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2012-06-02T20:48:29.102Z · LW(p) · GW(p)
Huh. I knew that academia had a liberal bias, but I didn't know it was quite that bad.
↑ comment by beoShaffer · 2012-06-02T06:41:33.938Z · LW(p) · GW(p)
The professor who will be grading this has actively encouraged this research topic and multiple other professors at my school have expressed approval, with none expressing disapproval.
Replies from: James_Miller↑ comment by James_Miller · 2012-06-02T06:43:31.376Z · LW(p) · GW(p)
Impressive. What school?
Replies from: beoShaffer↑ comment by beoShaffer · 2012-06-02T06:50:12.387Z · LW(p) · GW(p)
↑ comment by KPier · 2012-06-02T21:00:12.751Z · LW(p) · GW(p)
Your article describes the consequences of being perceived as "right-wing" on American campuses. Is pick-up considered "right wing"? Or is your point more generally that students do not have as much freedom of speech on campus as they think?
I'm specifically curious about the claim that most professors would consider what you are doing to be evil. Is that based on personal experience with this issue?
Replies from: James_Miller↑ comment by James_Miller · 2012-06-02T22:39:53.664Z · LW(p) · GW(p)
Racism, sexism and homophobia are the three primary evils for politically correct professors. From what I've read of pick-up (i.e. Roissy's blog), it is in part predicated on a negative view of women's intelligence, standards and ethics, making it indeed sexist.
See this to get a feel for how feminists react to criticisms of women. Truth is not considered a defense for this kind of "sexism". (A professor suggested I should not be teaching at Smith College because during a panel discussion on free speech I said Summers was probably correct.)
I've never discussed pick-up with another professor, but systematically manipulating women into having sex by convincing them that you are something that you are not (alpha) would be considered by many feminists, I suspect, as a form of non-consensual sex.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-06-04T06:22:09.153Z · LW(p) · GW(p)
How come they describe that in terms of 'convincing them that you are something you are not' rather than 'becoming something you didn't use to be'? Do they think people have an XML tag attached that reads 'beta' or something, independent of how they behave and interact? To me, the idea of convincingly faking being alpha makes as much sense as that of convincingly faking being fluent in English, and sounds like something a generalization of the anti-zombie principle would dismiss as utterly meaningless.
Replies from: Viliam_Bur, James_Miller↑ comment by Viliam_Bur · 2012-06-04T12:09:02.052Z · LW(p) · GW(p)
How come they describe that in terms of 'convincing them that you are something you are not' rather than 'becoming something you didn't use to be'?
The given "something" is a package consisting of many parts. Some of them are easy to detect, some of them are difficult to detect. In real life there seems to be a significant correlation between the former and the latter, so people detect the former to predict the whole package.
After understanding this algorithm, other people learn the former parts, with the intention of giving a false impression that they have the whole package. The whole topic is difficult to discuss, because most package-detectors have a taboo against speaking about the package (especially admitting that they want it), and most package-fakers do not want to admit they actually don't have the whole package.
Thus we end up with very vague discussions about whether it is immoral to do ...some unspecified things... in order to create an impression of ...something unspecified... when ...something unspecified... is missing; usually rendered as "you should be yourself, because pretending otherwise is creepy". Which means: "I am scared of you hacking my decision heuristics, so I would like to punish you socially."
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-06-05T07:31:10.451Z · LW(p) · GW(p)
What is it that is difficult to detect in a person but that people still care about potential partners having? Income? (But I don't get the impression that the typical PUA is poverty-stricken, and I can't think of reasons for people to care about that in very-short-term relationships, which AFAIK are those most PUAs are after.) Lack of STDs? (But, if anything, I'd expect that to anticorrelate with alpha behaviour.) Penis size? (But why would that correlate with behaviour at all?)
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-06-05T08:37:56.334Z · LW(p) · GW(p)
What is it that is difficult to detect in a person but that people still care about potential partners having?
I guess it is how the person will behave in the future, and in exceptional situations. We can predict it based on the person's behavior here and now, unless that behavior is faked to confuse our algorithms.
Humans are not automatically strategic and nature is not anthropomorphic, but if I tried to translate nature's concerns behind "I want an alpha male", it would be: "I want to be sure my partner will be able to protect me and our children in case of conflict."
This strategy is calibrated for an ancient environment, so it sometimes fails, but often it works; some traits are also useful now, and even the less useful traits still make an impression on other people, so they give a social bonus. (For example, taller people earn more on average, even if their height is not necessary for their work.)
Of course there is a difference between what our genes "want" and what we want. I guess a typical human female does not rationally evaluate a male's capacity for protecting her in combat; it's more like unconscious evaluation + halo effect. A male unconsciously evaluated as an alpha male seems superior in almost everything; he will seem at the same time stronger, wiser, more witty, nicer, more skilled, spiritually superior, whatever. Conflicting information will be filtered away. ("He beat the shit out of those people, because they looked at him the wrong way, and he felt the need to protect me. He loves me so much! No, he is not aggressive; he may give that impression, but only because you don't really know him. In fact he is very gentle, he has a good heart and wouldn't wish any harm on anyone. He is just a bit different, because he is such a strong personality. Don't judge him, because you don't know him as much as I do! And he did not really murder that guy in 2004, he was just framed by the corrupt police; he explained everything to me, because he trusts me. And by the way, the dead guy deserved it.")
Anyway, a preference is a preference; you cannot explain it away. (Analogously, if a male prefers females with big breasts, you can't change his preference by explaining that bigger breasts are not necessary to feed children.)
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-06-06T19:30:51.461Z · LW(p) · GW(p)
Anyway, a preference is a preference; you cannot explain it away. (Analogously, if a male prefers females with big breasts, you can't change his preference by explaining that bigger breasts are not necessary to feed children.)
What I meant to ask was what kind of information about someone is hard to detect in a few hours of face-to-face interaction but would still affect someone else's willingness to have a (usually very-short-term) sexual relationship with them, regardless of whatever evolutionary reasons caused such a preference to exist. (So, in your example, the equivalent of “men like women with big breasts” would be a valid answer, but the equivalent of “men like women who could produce lots of milk for their children” wouldn't.) And I didn't mean that as a rhetorical question.
(FWIW, and I think I've read this argument before, it would make evolutionary sense for women to have different preferences for one-night stands than for marriage, because if you had to choose between a healthy man and a wealthy one you'd rather your child was raised by the latter but (unbeknownst to him) had half the genes of the former.)
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-06-07T07:57:01.290Z · LW(p) · GW(p)
I think humans have a general preference for "real values" as opposed to faking their reward signals (a.k.a. "wireheading"). Of course sometimes we fake the reward signals, because it is pleasant and we are programmed to seek pleasure; but if we did it without restraints, our survival value would go down. So when someone enjoys "fake values" too much, they will get negative social feedback, because by putting themselves in danger they also decrease their value as an ally.
So part of the mechanism that warns women against "fake alpha males" may be a general negative response against "fake values", not necessarily related to the specific real risks of having one-night sex on birth control with a fake alpha versus a real alpha.
Another part could be this: it is good for a woman to have sex with a man whom other women consider attractive. (If the man is unattractive to other women, perhaps he has some negative trait that you did not notice, so it is better to avoid him anyway, because you don't want to risk your child inheriting a negative trait.) On the level of feelings -- not being a woman, I can only guess here -- the information, or just a suspicion, that a man is unattractive to other women probably makes the man less attractive. (It is a perception bias.) Simply said: "women like men liked by other women"; and they honestly like them, not just pretend that they do.
The idea of a "fake alpha male" (a PUA) probably evokes an image of a man who was unattractive to the women he met yesterday, and who is unattractive even today in moments when he stops playing by the PUA rules and becomes his old self. Therefore he is an unattractive man who just uses some kind of reality-distortion technique to appear attractive. -- An analogy would be an ugly woman using hypnosis to convince men that she is a supermodel. The near-mode belief in the existence of such women would make many men feel very uncomfortable, and they would consider "speed hypnosis" lessons unethical. (For a better analogy, let's assume that this "speed hypnosis" cannot be used to break someone's will, only to alter their perceptions.)
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-06-07T11:04:36.716Z · LW(p) · GW(p)
"fake alpha males"
I mean, what's the difference between a fake alpha male and someone who didn't use to be an alpha male but has since become one? Is someone who didn't grow up speaking English but now does a “fake English speaker”?
only to alter their perceptions
Don't lots of men drink alcohol in order for women to look more attractive to them? :-)
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-06-07T12:01:27.175Z · LW(p) · GW(p)
what's the difference between a fake alpha male and someone who didn't use to be an alpha male but has since become one?
Congruency. If someone became an alpha male by PUA training, they will probably have the "visible" traits of alpha male, but lack the "invisible" traits (where "invisible" is a short for "not easy to detect during the first date"), because the training will focus on the "visible" traits.
Unless it is a PUA training that explicitly focuses on teaching the "invisible" traits (because they believe that this is the best way to learn and maintain the "visible" traits in long term).
At this point people usually begin to discuss the definition of the term "PUA". People who like PUA will insist that such trainings belong under the PUA label, and perhaps are the ultimate PUA trainings, the results of decades of field research. People who dislike PUA will insist that the label "PUA" should be used only for those surface trainings that create "fake alpha males", which is a bad thing, and that any complex personality improvement program is just old-fashioned "manning up", which is a good thing, and should not be confused with the bad thing. This battle for the correct definition is simply the battle for attaching the bad or good label to the whole concept of PUA.
Good story. Someone who didn't use to be an alpha male but has become one often has a good story that explains why it happened. A story like "first I was pathetic, but then I paid a lot of money to people who taught me to be less pathetic, so I could get laid" is not a good story. A good story involves your favorite dog dying, and you being lost in a jungle after the helicopter crash, feeding for weeks on scorpions and venomous snakes. Or spending a few years in the army. If a miracle transformed you into an alpha male, your past is forgiven, because there is a clean line between the old you and the new you. Also if the shock was enough to wake you up, then you probably had good potential, you just didn't use it fully; you had to be pushed forward, but you found the way instinctively.
There is a fear that if someone gained a trait too easily, they can also lose it easily. (Imagine the shame of being known as Joe's former girlfriend, if Joe returns to his previous pathetic behavior, because the PUA lessons did not stick.) And if their gaining the trait was based on education, not genetics, then what is the point of getting their genes? :D
Is someone who didn't grow up speaking English but now does a “fake English speaker”?
A proper analogy would be someone who memorizes a list of English phrases frequently used in some context, with a perfect accent, and then meets you in that context to make a good impression. Only when you ask something unexpected does it turn out that the person does not understand most English words.
Of course there is a continuum between fake English knowledge and real English knowledge, but people are expected to cross the continuum in a predictable manner (gradually getting better in all topics). When people speak about "natural" and "fake", they often mean "predictable and reliable" and "optimized for cheap first impression". If someone knows 20% of English words in any context, then 50%, then 80%, then 98%, that is learning; if someone knows 100% in one context while knowing 5% in other contexts, that is cheating -- this path might finally take you to the same goal, but the mere fact that someone is using this path suggests that they are too lazy to finish it.
Don't lots of men drink alcohol in order for women to look more attractive to them? :-)
I am not sure if this describes the real behavior, but supposing that they do, they do it voluntarily.
Replies from: army1987, army1987↑ comment by A1987dM (army1987) · 2012-06-07T13:52:46.694Z · LW(p) · GW(p)
And if their gaining the trait was based on education, not genetics, then what is the point of getting their genes? :D
At least for short-term relationships, people don't actually want good genes; they want things which correlated with good genes in the ancestral environment. (Not all men would be outraged by the possibility that a woman has undergone breast enlargement surgery, for example.)
↑ comment by A1987dM (army1987) · 2012-06-07T12:56:00.358Z · LW(p) · GW(p)
(Why was that downvoted? It didn't explicitly answer my question, but it also contains lots of interesting points. Upvoted back to zero)
Congruency. If someone became an alpha male by PUA training, they will probably have the "visible" traits of alpha male, but lack the "invisible" traits (where "invisible" is a short for "not easy to detect during the first date"), because the training will focus on the "visible" traits.
My question was what those invisible traits are.
Also if the shock was enough to wake you up, then you probably had good potential, you just didn't use it fully; you had to be pushed forward, but you found the way instinctively.
Well, I guess if someone just didn't have "good potential" it'd be hardly possible for them to learn PUA stuff anyway, much like it'd be hardly possible for someone with an IQ of 75 to learn computational quantum chromodynamics (or even to convincingly fake knowledge thereof). I'm not terribly familiar with PUAs, but I was under the impression that most of their disciples are healthy, non-poor people who for some reason just didn't have a chance to learn alpha behaviour before (say, they weren't as interested in relationships as they are now, or they've just broken up from a ten-year-long relationship they had started when they were 14, or something).
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-06-07T13:23:34.748Z · LW(p) · GW(p)
My question was what the invisible traits are.
- Staying alpha when the situation becomes more intense. (A fake alpha may behave like a real alpha while in the bar, but lose his coolness when alone with the girl in her room. Or may behave like a real alpha the first night, but lose his coolness if they fall in love and a long-term relationship develops.)
- Heroic reaction in case of a real threat. (A fake alpha is only trained to overcome and perhaps overcompensate for shyness in social situations where real danger is improbable.)
- Other kinds of consistency. (A fake alpha may forget some parts of alpha behavior when he is outside of the bar, in a situation his PUA teachers did not provide him a script for. For example, he does not fear to say "Hello" to a nice unknown girl, but still fears to ask his boss for a higher salary.)
Rationally, this should not be a problem for a one-night stand, if the probability of a real threat or falling in love is small. However, thinking that someone might have this kind of problem can reduce his attractiveness anyway.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-06-07T13:56:35.397Z · LW(p) · GW(p)
Thanks.
↑ comment by James_Miller · 2012-06-04T07:01:05.044Z · LW(p) · GW(p)
XML tag attached that reads 'beta'
Yes, in our DNA.
Replies from: wedrifid, army1987↑ comment by wedrifid · 2012-06-04T07:22:02.555Z · LW(p) · GW(p)
Yes, in our DNA.
Our DNA can give different degrees of bias towards different competitive strategies. It doesn't determine status or behavior in a given situation.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2012-06-04T12:20:22.112Z · LW(p) · GW(p)
I think that was the point.
Replies from: wedrifid↑ comment by A1987dM (army1987) · 2012-06-04T07:46:20.607Z · LW(p) · GW(p)
I'd guess that, except as far as physical attractiveness is concerned¹, socialization is much much much more relevant than DNA. A clone of Casanova raised in a poor, devoutly religious family in an Islamic country wouldn't become terribly good at picking up girls.
¹And even there, grooming does a lot.
comment by [deleted] · 2012-06-07T07:58:14.284Z · LW(p) · GW(p)
People here generally agree that reading a big part of the sequences is important for participating in debate. Yet I see a large influence on the thinking of people on LessWrong from non-sequence and indeed non-LW writing, such as Paul Graham's essays Keeping Your Identity Small and What You Can't Say. Why don't we include these in the promotion of material aspiring rationalists should ideally read?
Now consider building such a list. Don't include entire books. While a required reading list of books might complement the sequences nicely, especially when Eliezer finally gets around to writing his rationality book, that is, I think, a different worthy goal. Pick material that is broken up into digestible chunks available online (blog posts, sites, etc.), much as the original sequences were. Don't feel constrained to keep a sequence on a single subject; there are plenty of great LW posts that get less attention than they deserve because they aren't included in a sequence. Let us not repeat that mistake with non-LW material.
If you had to come up with one or several Sequences consisting of material that isn't on LessWrong what would you include?
Feel free to create miscellaneous lists as long as they are not off topic for LW.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-06-11T07:07:29.876Z · LW(p) · GW(p)
I'm pretty sure that one could extract a full sequence from Patrick McKenzie's blog.
comment by Paul Crowley (ciphergoth) · 2012-06-02T13:54:42.028Z · LW(p) · GW(p)
Disturbed to see two people I know linking to Dale Carrico on Twitter. Is there a standalone article somewhere that tries to explain the perils of trying to use literary criticism to predict the future? [EDIT: fixed name, thanks for the private message!]
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-06-03T00:28:28.283Z · LW(p) · GW(p)
I found Charlie Stross' recent blog post endorsing Carrico strange. Carrico has his baggage, but why is Stross suddenly so intent on painting transhumanism as the evil mutant enemy?
Replies from: JoshuaZ↑ comment by JoshuaZ · 2012-06-03T00:42:48.218Z · LW(p) · GW(p)
I think what we're seeing there is pretty easy to explain (Stross says so himself): he's annoyed at the current economic situation, which punctured the ridiculously optimistic sort of thing he had in Accelerando. This is to some extent getting wrapped up in his short-term political concerns.
comment by bramflakes · 2012-06-01T08:45:13.376Z · LW(p) · GW(p)
What does it mean for a hypothesis to "have no moving parts"? Is that a technical thing or just a saying?
Replies from: dbaupp↑ comment by dbaupp · 2012-06-01T09:53:49.635Z · LW(p) · GW(p)
Not really either, it's a neat way of saying that a hypothesis doesn't actually explain anything: it doesn't provide a deeper explanation for the phenomenon in question (this explanation is the "moving parts").
A hypothesis allows you to make predictions; a good one will clearly express how and why various factors are combined to make the prediction, while a bad one will at best give the "how" without providing any deeper understanding. So a bad hypothesis is a little like a black box where the internal mechanism is hidden (sometimes, "no moving parts" might be better expressed as "unknown moving parts").
This idea occurs in the sequences, but the best explanation of the meaning I can find there is (source):
[...] the hypothesis has no moving parts—the secret sauce is not a specific complex mechanism, but a blankly solid substance or force.
(The "secret sauce" refers to the deeper explanation.)
Replies from: billswift↑ comment by billswift · 2012-06-01T16:16:15.175Z · LW(p) · GW(p)
Interesting, I hadn't encountered that in any of my studying, just seen it in passing, but with my mechanical and other technical experience (limited though it is) I automatically interpreted "no moving parts" as a good thing. Another case where sloppy writers should have thought things through a little further.
ADDED: Anyone with an engineering background would have thought the same; my experience is limited, but every engineering design book stresses reducing or eliminating moving parts as a good thing.
For anyone interested, Ferguson's Engineering and the Mind's Eye is a wonderful, comprehensive look at engineering design for general audiences.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2012-06-01T18:44:19.775Z · LW(p) · GW(p)
I think the analogy holds. Hypotheses with too many "moving parts" can predict anything and so tell you nothing (they overfit the data). Hypotheses with too few moving parts aren't really hypotheses at all, just passwords like "phlogiston" that fail to explain anything (they underfit the data).
Analogously, a mechanism with too many parts takes a lot of effort to get right, and its weaknesses are hidden by its complexity. But if someone tried to sell you a car with no moving parts, you might be suspicious that it didn't work at all.
As Einstein said, things should be as simple as possible, but no simpler.
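To make the analogy concrete, here is a tiny sketch (my own toy example with made-up data, not anything from the thread) contrasting the failure modes: a degree-9 polynomial has enough moving parts to thread all ten noisy points exactly, a constant has too few to explain anything beyond the mean, and a line recovers the actual trend.

```python
import numpy as np

# Ten noisy samples from a simple linear trend.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)

# Too many moving parts: degree 9 interpolates every point, noise included.
overfit = np.polynomial.Polynomial.fit(x, y, deg=9)

# Too few moving parts: a constant can only report the mean.
underfit = np.polynomial.Polynomial.fit(x, y, deg=0)

# The right number of moving parts recovers the trend.
line = np.polynomial.Polynomial.fit(x, y, deg=1)

# Compare predictions at a point between the samples; the true value is
# 2 * 0.55 = 1.1, and the line should come closest on average.
for name, model in [("overfit", overfit), ("underfit", underfit), ("line", line)]:
    print(name, model(0.55))
```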
comment by [deleted] · 2012-06-11T14:51:59.939Z · LW(p) · GW(p)
Meta
Guys, I'd like your opinion on something.
Do you think LessWrong is too intellectually insular? What I mean by this is that we very seldom seem to adopt useful vocabulary or arguments or information from outside of LessWrong. For example, all I can think of is some of Robin Hanson's and Paul Graham's stuff. And I don't think Robin Hanson really counts, as Overcoming Bias used to be LessWrong.
For the most part, the LessWrong community has not updated on ideas and concepts that didn't grow here. The only major examples fellow LWers brought up in conversation were works that Eliezer cited as great or influential. :/
Another thing, and I could be wrong about this naturally, but it seems clear that LessWrong has NOT grown. I can't put my finger on major progress made in the past 2 years. I recently realized this when I saw LessWrong listed in a blogroll as Eliezer's blog about rationality. I realized that essentially it is. And worse, this makes it a very crappy blog, since EY doesn't post new updates any more. Originally the man had high hopes for the site, something that could keep going and growing without him, but honestly it's mostly just a community dedicated to studying the scrolls he left behind. We don't even seem to do a good job of getting others to read the scrolls.
Overall I think I'm not seeing enthusiasm for actually reading the old material systematically. I think I've come to notice a symptom of this recently. I was debating what to call my first ever original-content Main article (it was previously titled "On Conspiracy Theories") and came up with what at first felt like a joke, but then took on a horrible ring of truth:
"Over time the meaning of an article will tend to converge with the literal meaning of its title."
We like linking articles, and while people may read a link the first time, they don't tend to read it the second or third time it is linked. People also eventually pick up the phrase and start using it out of context. Eventually a phrase that was supposed to be a shorthand for a nuanced argument starts to mean exactly what it says. Well, not exactly: people still recall it as a vague applause light. Which is actually worse.
I cited precisely "Politics is the Mind-Killer" as an example of this. In the original article Eliezer basically argues that gratuitous politics, political thinking whose costs aren't outweighed by its value to the art of rationality, is to be avoided. This soon came to mean that it is forbidden to discuss politics in Main and Discussion articles, though politics does live on in the comment sections.
Now, the question of whether LessWrong is growing and intellectually productive is separate from the question of whether it is insular. Both, I feel, need to be discussed. If LW were not insular, then even without growing it could at least remain relevant. This site has a wonderful ethos for discussion and thought. Why do we seem to be wasting it?
comment by Multiheaded · 2012-06-06T06:03:52.742Z · LW(p) · GW(p)
Post some insightful, LW-relevant image macros.
http://www.quickmeme.com/meme/3plk5v/
http://www.quickmeme.com/meme/3p86uy/
Replies from: albeola
comment by NancyLebovitz · 2012-06-02T18:01:32.473Z · LW(p) · GW(p)
Very difficult words to spell, arranged for maximum errors-- the discussion includes descriptions of flash recognition of errors.
comment by Jack · 2012-06-01T19:04:53.075Z · LW(p) · GW(p)
Theories of Big Universes or Multiverses abound -- Inflation, Many Worlds, mathematical universes, etc. Given a certain plausible, naturalistic account of personal identity (that for you to exist merely requires there to be something psychologically continuous with earlier stages of your existence), if any of these theories is true, we are immortal (though not necessarily in the pleasant sense).
Questions: Is the argument valid? What are the chances that none of the multiverse theories are true? What, if anything, can we say about the likely character of this afterlife? Are we likely to simply find ourselves in futures where everything has gone right and scientific progress has granted us immortality? Or is our future most likely one of chaotic, disjointed and unintelligible sensory experiences in a Boltzmann-fugue state? Do some big universe theories promise better futures than others?
Obviously I'm not expecting anyone to have conclusive thoughts. Speculation and hypothesizing are fine.
Replies from: wedrifid, Nisan↑ comment by wedrifid · 2012-06-07T11:32:19.825Z · LW(p) · GW(p)
Questions: Is the argument valid?
Only with a usage of "immortal" that abandons the cached thinking and preferences that we usually associate with the term.
↑ comment by Nisan · 2012-06-02T17:39:47.109Z · LW(p) · GW(p)
I think if you fully taboo the concepts of "personal identity" and "existence", the argument evaporates. Before tabooing, your argument looks like this:
1. Alternate universes, containing persons psychologically continuous with me, exist.
2. Persons psychologically continuous with me are me.
3. Therefore I am immortal.
4. Therefore I should anticipate never dying.
5. Therefore I should make plans that rely on never dying.
On the face of it, it seems sound. But after tabooing, we're left with something like:
1. Our best, most parsimonious models of reality refer to unseen alternate universes and unseen persons psychologically continuous with each other.
2. In such models, alternate futures of a person-moment have no principled way of distinguishing one of themselves as "real".
3. Therefore, our models of reality refer to arbitrarily long-lived versions of each person.
...
4. Therefore I have reason to anticipate never dying.
5. Therefore I have reason to act as if I will never die.
Edit: My #4 and #5 were nonsense.
Replies from: Jack↑ comment by Jack · 2012-06-03T19:19:40.389Z · LW(p) · GW(p)
I either don't understand your re-write or don't understand how it dissolves the argument.
Replies from: Nisan↑ comment by Nisan · 2012-06-03T20:25:20.808Z · LW(p) · GW(p)
Oh, I should have been more explicit: I think there's a big logical leap between steps 3 and 4 of the rewritten argument, as indicated by the ellipsis. (Why is our models of reality referring to arbitrarily long-lived versions of each person a reason to act as if I will never die?) It's far from clear that this gap can be bridged. That's why I said the argument evaporates.
comment by Tuxedage · 2012-06-02T17:10:53.086Z · LW(p) · GW(p)
A disproportionate number of people involved with AI risk mitigation and the Singularity Institute have graduated from "Elite Universities" such as Princeton, Harvard, Yale, Berkeley, and so on and so forth. How important are elite universities, aside from signalling status and intelligence? How important is signalling status by going to an elite university? Are they worth the investment?
Replies from: D_Malik
comment by smk · 2012-06-12T16:38:42.282Z · LW(p) · GW(p)
Liron's post about the Atkins Diet got me thinking. I'd often heard that the vast majority of people who try to lose weight end up regaining most of it after 5 years, making permanent weight loss an extremely unlikely thing to succeed at. I checked out a few papers on the subject, but I'm not good at reading studies, so it would be great to get some help if any of you are interested. Here are the links (to pdfs) with a few notes. Anyone want to tell me if these papers really show what they say they do? Or at any rate, what do you think about the feasibility of permanent weight loss?
Medicare's search for effective obesity treatments: Diets are not the answer.
Mann, Traci; Tomiyama, A Janet; Westling, Erika; Lew, Ann-Marie; Samuels, Barbra; Chatman, Jason
American Psychologist, Vol 62(3), Apr 2007, 220-233.
"In sum, the potential benefits of dieting on long-term weight outcomes are minimal, the potential benefits of dieting on long-term health outcomes are not clearly or consistently demonstrated, and the potential harms of weight cycling, although not definitively demonstrated, are a clear source of concern."
Meta-analysis: the effect of dietary counseling for weight loss.
Dansinger, ML; Tatsioni, A; Wong, JB; Chung, M; Balk, EM
Annals of Internal Medicine, 2007;147:41-50.
"All methods indicated that weight loss continued for approximately 6 to 12 months during the active phase of counseling and that participants steadily regained weight during the maintenance phase."
This meta-analysis did not include any low-carb diets, though it did mention a different analysis which did.
Dietary Therapy for Obesity: An Emperor With No Clothes.
Mark, Allyn L
Hypertension, 2008; 51: 1426-1434.
"Over 5 decades, it has been demonstrated repeatedly that dietary therapy fails to achieve weight loss maintenance."
This paper talks a lot about leptin.
Long-term weight-loss maintenance: a meta-analysis of US studies
Anderson, James W; Konz, Elizabeth C; Frederich, Robert C; Wood, Constance L
The American Journal of Clinical Nutrition, November 2001; vol. 74 no. 5:579-584.
"Five years after completing structured weight-loss programs, the average individual maintained a weight loss of >3 kg and a reduced weight of > 3% of initial body weight."
This is the most optimistic one. It compares VLEDs (very low energy diets) to HBDs (hypoenergetic balanced diets) and concludes that VLEDS are significantly better. After an average of 4.5 years, those who used VLEDs were still an average of 15.5 lbs lighter than their initial weight, while those who used HBDs were an average of 4.4 lbs lighter than initially.
comment by Shmi (shminux) · 2012-06-11T06:23:30.363Z · LW(p) · GW(p)
It occurred to me that on this forum QM/MWI discussions are a mind-killer, for the same reasons as religion and politics are:
As a rule, any mention of religion on an online forum degenerates into a religious argument. Why? Why does this happen with religion and not with Javascript or baking or other topics people talk about on forums?
What's different about religion is that people don't feel they need to have any particular expertise to have opinions about it. All they need is strongly held beliefs, and anyone can have those. No thread about Javascript will grow as fast as one about religion, because people feel they have to be over some threshold of expertise to post comments about that. But on religion everyone's an expert.
Then it struck me: this is the problem with politics too. Politics, like religion, is a topic where there's no threshold of expertise for expressing an opinion. All you need is strong convictions.
Do religion and politics have something in common that explains this similarity? One possible explanation is that they deal with questions that have no definite answers, so there's no back pressure on people's opinions. Since no one can be proven wrong, every opinion is equally valid, and sensing this, everyone lets fly with theirs.
The part in bold is due to people having read the QM sequence and believing that they are now experts in the ontology of quantum physics.
Replies from: wedrifid, Nornagest↑ comment by wedrifid · 2012-06-11T07:29:11.938Z · LW(p) · GW(p)
It occurred to me that on this forum QM/MWI discussions are a mind-killer, for the same reasons as religion and politics are:
Not particularly. To the extent that it is a mind-killer, it is a mind-killer in the way discussions of FAI, SIAI capabilities, cryonics, Bayesianism or theories like this are. Whenever any keyword suitably similar to one of these subjects appears, one of the same group of people can be expected to leap in and launch into an attack on LessWrong, its members, Eliezer, SingInst or all of the above - they may even try to include something on the subject matter as well.
The thing is, most people here aren't particularly interested in talking about those subjects - at least they aren't interested in rehashing the same old tired arguments and posturing yet again. They have moved on to more interesting topics. This leads to the same abysmal quality of discussion - and belligerent and antisocial interactions - every time.
Replies from: vi21maobk9vp↑ comment by vi21maobk9vp · 2012-06-12T06:21:47.738Z · LW(p) · GW(p)
Any FAI discussions are mindkilling unless they are explicitly conditional on "assuming FOOM is logically possible". After all, we don't have enough evidence to bridge the difference in priors, and neither side (AI is a risk/AI is not a risk) explicitly acknowledges that fact (and this problem makes them sides more than partners).
↑ comment by Nornagest · 2012-06-11T06:42:53.553Z · LW(p) · GW(p)
I'm not sure I agree with Graham on the exact mechanics there. There are a number of mindkilling topics where empirically supportable answers should in principle exist: the health effects of obesity, for example. Effects of illegal drugs. Expected outcomes of certain childrearing practices.
Expertise exists on all these topics, and you can prove people wrong pretty conclusively with the right data set, but people -- at least within certain subcultures, and often in general -- usually feel free to ignore the data and expertise and expound their own theories. This is clearly not because these questions lack definite answers. I think it's more because social approval rides on the answers, and because of the importance of the social proof heuristic and its relatives.
QM interpretation may or may not fall into that category around here.
Replies from: taelor↑ comment by taelor · 2012-06-11T09:58:58.490Z · LW(p) · GW(p)
Graham actually agrees with you; the essay quoted above continues:
But this isn't true. There are certainly some political questions that have definite answers, like how much a new government policy will cost. But the more precise political questions suffer the same fate as the vaguer ones. I think what religion and politics have in common is that they become part of people's identity, and people can never have a fruitful argument about something that's part of their identity. By definition they're partisan.
comment by Shmi (shminux) · 2012-06-11T06:06:50.059Z · LW(p) · GW(p)
How anthropomorphizing penguins held up research by some 100 years: link.
comment by Tuxedage · 2012-06-01T13:30:09.259Z · LW(p) · GW(p)
I'm thinking of writing a series of essays on applied rationality in politics and utilitarianism, and the ways we can apply instrumental rationality to better fight the mind-killing nature of political arguments, but I'd like to make sure that LessWrong is open to this kind of thing first. Is there any interest? Is this against the no-politics rule?
Replies from: None, NancyLebovitz↑ comment by [deleted] · 2012-06-01T14:25:48.773Z · LW(p) · GW(p)
In other words, you want to write essays on how people can have political conversations without being mindkilled?
I don't think it would violate the "no politics" rule. Just please keep it sufficiently meta to avoid mindkilling people reading the essay. So if you give an example of a non-mindkilling political conversation, don't talk about a current-day issue. Instead, talk about 17th-century France and its use of mercantilism. How might a king and his advisers come to the best policy decision regarding whether or not to allow free trade with England? If they so choose, people can apply those principles to actual political discussions somewhere other than LW.
That said, I don't think I'd read the essay. I try to avoid politics, meta or otherwise. It does sound interesting, though. Best of luck with it.
(I hope it's obvious, but just in case... This is just my view, not necessarily an expression of LW policy or consensus.)
↑ comment by NancyLebovitz · 2012-06-01T15:47:04.542Z · LW(p) · GW(p)
I'm definitely interested. It's probably worth distinguishing between avoiding getting mind-killed oneself and trying to get other people out of mind-killed mode.
comment by James_Miller · 2012-06-01T04:37:37.588Z · LW(p) · GW(p)
Anyone else try the Bulletproof diet? Michael Vassar seems to have a high opinion of Dave Asprey, the diet's creator.
Replies from: Micaiah_Chang, Jayson_Virissimo, wmorgan, drethelin↑ comment by Micaiah_Chang · 2012-06-02T02:32:06.122Z · LW(p) · GW(p)
I have. I had something like 50-30% adherence to it during June-September, adherence something like 4 days out of a week from October until the start of December, one month off during December because of family obligations, and then mostly bulletproof from the new year onwards, with me ordering the coffee and everything. I would say that I go "off" the diet about one meal per two weeks, but as much as possible only "slightly" (for example, a single meal with enough rice in it to take me out of ketosis).
Since I've started the diet, I've also made other adjustments, such as striving for at least seven hours of sleep per night, scheduling my time better to reduce stress and not going hungry because I'm too unmotivated to make food. Keep these in mind on top of all the other 'wacky' self-experiment biases. (Alternatively I CAN PRIME YOU TO ASSOCIATE ME WITH LOW STATUS BY TYPING IN ALL CAPS!!!1)
- Last year, I weighed about 177 +/- 3 pounds. Now I weigh about 145 +/- 2 lb (Edit: actually 135 at the time of the post. Reweighed myself and that was the new weight.). I reached 145 lb during late September while not fully on the diet and maintained it, except for an unknown period in late December - mid January where I went up to about 152 lb. I think that implies something back home caused me to gain weight, although whether it's food or vacation loafing is hard to say.
- I used to have consistent problems staying awake in morning classes, and large amounts of brain fog after meals. These problems have largely vanished, except when I take a meal off the diet or have had insufficient sleep for other reasons such as (see 3).
- I, unlike wmorgan, have been food poisoned. I think I narrowed the likely causes down to either eating factory eggs from chickens without antibiotics used on them or improperly preparing chicken livers. Since then, I have replaced the factory eggs with pastured ones with cooked whites (which also happen to taste better) and replaced chicken livers with beef.
- I feel a lot more focused on days where I have bulletproof coffee to pull me through, until the coffee wears off anyway. Note that days where I take the coffee are also days where I feel I've had insufficient sleep.
- I took a blood test during the latter part of December, which indicates that my numbers are stellar. Now, I realize that I was off the diet by then and not adhering particularly well before, but at the very least, short term consumption of this diet wouldn't damage you more than half a month's worth of a standard diet could fix you. At most, it can heal you for more than half a month's worth of a standard diet!
- On occasions where I deviate from the diet (social events, dashing out without breakfast, cravings for pizza), the effect is very noticeable: About 20 minutes after eating, I get steadily increasing brain fog, then get nearly knocked out for the next 15 minutes. A headache then persists for two hours after. Sometimes I wake up with headaches the next day. This is either an effect of the diet, or me simply noticing I have a gluten allergy (I haven't been tested yet). Either way, if you value gluten products, pizza and soda more than you expect to gain from this diet, it's something you should be aware of.
Notes: My meals consist of one of the following: 5 cups of steamed broccoli + 3-4 tbsp grass-fed butter; 4 soft-boiled or sunny-side-up eggs + Bulletproof coffee; ~2/3 of a pound of liver + 2 sunny-side-up eggs + 2-3 tbsp butter; or two baked sweet potatoes + 4 tbsp butter. Carb cravings are stopped with a handful of berries at night when I get them (once a month). I eat only twice per day, with no substantial pangs from hunger unless I skip breakfast or end up staying 8+ hours at school (ah, the wonderful experience of a physics undergraduate on a quarter system). It's not hard to stick to the diet when you know that going off it costs you two hours of the day and makes you feel terrible to boot.
The one weekend where I made the bulletproof ice cream was a pretty damn decadent weekend. If time and the spirit of adventure are available, I'd say it's a go.
↑ comment by Jayson_Virissimo · 2012-06-01T11:15:12.550Z · LW(p) · GW(p)
I drink a concoction loosely based on Bulletproof Coffee almost every day and have been following a mostly Paleo diet for over half a year now. I don't want to oversell it, but these kinds of dietary changes have been extremely valuable to me. I originally planned to run a series of about 3-5 diet experiments, but I had so much improvement on the first one I tried that I just stuck with it and made it a lifestyle change (although I still make minor tweaks here and there). I made much of my data publicly available here. Let me know if you have any questions.
Replies from: drethelin↑ comment by drethelin · 2012-06-01T22:07:12.239Z · LW(p) · GW(p)
Tell me more about your concoction. At the moment I drink one of these every morning as the best balance I've found between being palatable and having high protein and low carbs, but I'm thinking about changing. I'm lactose intolerant, so I don't know if I can follow the butter advice, but MCT might have potential.
Replies from: Jayson_Virissimo, James_Miller↑ comment by Jayson_Virissimo · 2012-06-02T13:06:51.243Z · LW(p) · GW(p)
I add and subtract new ingredients every few weeks, but this morning I had some Tierra Del Sol medium roast coffee, low-carb sugar-free vanilla protein powder, creatine powder, chia seeds, MCT oil, coconut milk, organic grass-fed butter, with a pinch of stevia.
↑ comment by James_Miller · 2012-06-02T02:53:57.407Z · LW(p) · GW(p)
I'm also lactose intolerant but I have no trouble consuming large quantities of grass fed butter, MCT oil and coconut oil.
↑ comment by wmorgan · 2012-06-01T05:34:26.464Z · LW(p) · GW(p)
I've been trying to adhere to it for a year or so. My main point of departure is that I drink a lot of diet soda and beer. My results:
- I lost five pounds in the first two months and the weight didn't come back, despite consuming slightly more calories, and a lot more calories from fat.
- It's easy for me not to graze on simple carbohydrates, because I feel fuller. Regardless of your nutritional philosophy, most of us agree that potato chips and cookies do nothing for you.
- I haven't gotten or given anyone food poisoning or any other indication that my food is too undercooked. Especially for beef and lamb, I strongly suspect that I could eat it raw and be OK. Similarly, but not so much, for eggs, fish, and pork. I still cook the pink out of chicken, but I'm eating much less chicken anyway.
↑ comment by drethelin · 2012-06-01T21:51:53.130Z · LW(p) · GW(p)
I can say that grass-fed roast beef is hugely tastier than regular grocery store sandwich meat, and is a big boon to my low-carb diet. I started out doing the 4-Hour Body diet in December and lost 15 or so pounds fairly fast, and then started to stagnate. I ended up cutting beans also and am now going back down again. I don't strictly follow the bulletproof diet, but mine may be a data point for similar diets.
comment by [deleted] · 2012-06-01T04:09:52.397Z · LW(p) · GW(p)
Anything non-obvious in job searching? I'm using my university's job listings and monster.com, but I welcome any and all advice as this is very new to me. While I won't ask, "What is the rational way of looking for jobs?" I will ask, "How can I look for jobs more effectively than with just online job postings?"
Replies from: Viliam_Bur, wmorgan, Alicorn, James_Miller, Kaj_Sotala, jsalvatier↑ comment by Viliam_Bur · 2012-06-01T13:40:39.067Z · LW(p) · GW(p)
Anything non-obvious in job searching?
This depends on what you consider obvious. (Many things that seem obvious to me now would have been great advice 10 or 15 years ago; sometimes even 1 year ago.) Also there is a difference between knowing something and feeling it; or less mysteriously: between being vaguely aware that something "could help" and having the experience that something trivial and easy to miss caused a 50% improvement in results. So at the risk of saying obvious things:
Don't be needy. Search for a job before you have to; that is, before you run out of money. Some employers will take a lot of time: first interview, a week or two waiting, second interview, another week or two, third interview... be sure you have enough time to play this game. If a month later you get an offer that is not what you wanted, be sure to have the freedom to say "no".
Speak with more companies. If you get two great offers, you can always take one and refuse the other. If you get two bad offers (or one bad offer and one rejection), your only choices are to take a bad offer, or start the whole process again, losing a month of your time. How many companies is enough? You probably don't want to go through repeated interviews at 50 companies; but if you contact 10 companies, and only 4 respond, then 2 of them reject you and 2 of them give you a bad offer, that's also not what you want. This may depend on the region where you live and the position you seek, but I would recommend getting to a first interview (not just sending a job application) with 7 or 10 companies. (Alternatively, if you are satisfied with your current job and don't mind staying there, and you are just curious whether there is a better opportunity, then contacting just 1 company is OK.)
Make a good CV. An afternoon or a weekend well spent on your CV can bring you years of increased income. Try to get feedback on what kind of information your employers want. -- For example, as a programmer, I used to write a list of companies I had previously worked at, which is almost worthless to the next employers. My next iteration focused on the technologies I have used, because this is what they often asked me about at interviews. This is better, but only for junior positions; for senior positions it is also necessary to focus on my responsibilities in given projects. Essentially, your CV should describe why you are perfect for the position you apply for; it should make your whole life seem like a track towards this goal. You have enough time to describe things from the angle that shows them in the best light.
Know what you want (and what you want to avoid). There comes a moment in the interview where you are supposed to ask questions. Be prepared for this moment (prepare your questions in advance). -- For example, I would ask how many testers the company has, whether people work in open spaces (how many in one room; I might want to see it), who will give me instructions (one person or more?), whether I am supposed to work on more projects at the same time (how many?), what the company policy on overtime is, and what benefits the company provides: could I get more vacation days, or work from home? I want to know exactly, in a near mode, what will happen to me if I accept that job. Know your utility function, so you can make reasonable trade-offs, e.g. money for overtime.
Write down everything. Memory is unreliable.
Listen to your intuition, and if something feels bad, don't ignore the feeling and try to find out why you have it. -- For example in one interview the employer's first question was "What is your opinion about overtime?". It's a legitimate question, but it still felt weird that they started with this question. I responded diplomatically, but later I turned down the offer. Then I met an employee from that company, desperately searching for another job, and their description confirmed my suspicions: total chaos in the company, high-ranking employees neglecting their responsibilities, leading to a lot of overtime for low-ranking employees.
Know what you are worth. Try to find out the salaries and benefits of people in similar positions. Yes, there is a taboo against sharing this information; and it is no accident that this taboo hurts your negotiating position. Find a way to overcome it (rich people are those who know when and how to bend rules). For example, you could ask people about their salary in their previous job. Or you could use someone else (not working in the same sector) to ask for this information for you; they will not feel like a competitor. If you have absolutely no way to get this info, just ask for 50% more than you are making now, at 10 different companies (to avoid drawing statistics from 1 data point).
Have a long-term strategy and choose your job according to it. Some jobs only give you money. Some jobs give you money and increase your long-term worth on the job market. (You at least get the money even if you are wrong about the long-term consequences.) The strategy should fit your personality. Also, try to get a long-term perspective on what makes you happy; it's not as obvious as it seems. Beware of other people's advice: they don't know what makes you happy. (For example, some people love work-related travel; I hate it. But even after I tell them, they recommend me a job that seems cool to them because it includes a lot of travel.) Sometimes they don't even know what makes them happy; they only tell you what they think they are supposed to say.
Think outside the box. Is "having a job" your only way to make money? And even if it is today, what about tomorrow? In addition to choosing between a few jobs, it might be useful to also have the option of choosing something else entirely.
Don't be afraid to ask for too much. There are some things you would (almost) never do, such as things morally unacceptable to you. Then there are things you wouldn't like to do under normal circumstances, but if someone offered you $1,000,000 a month, you would be happy to do them (if only for one month or one year, and then quitting). In the latter situations, try to estimate the amount of money that would make you satisfied. Is it too high? Ask anyway. You lose nothing by being rejected. And you might be surprised. Generally, most "no" answers can be replaced by naming a very high number.
↑ comment by wmorgan · 2012-06-01T05:56:48.537Z · LW(p) · GW(p)
I notice you have a STEM degree. Since the job market is in your favor, I'll assume you will find multiple employers interested in hiring you. Learn about salary negotiation now, before you go into an interview. If you're as clueless as I was when I got my first job, then you can pick up thousands of dollars for a few hours of research.
Recommended reading: http://www.kalzumeus.com/2012/01/23/salary-negotiation/
↑ comment by Alicorn · 2012-06-01T04:46:59.794Z · LW(p) · GW(p)
If you intend to donate income acquired via job to reducing x-risk, there is a network thing.
Replies from: None↑ comment by [deleted] · 2012-06-01T05:17:27.372Z · LW(p) · GW(p)
I need to step away and think about that for a while before I decide whether or not it's a good thing and whether it would work for me. I'm pattern-matching heavily to phygishness and tithing, but flinching isn't fair without really examining the issue. Besides ideological differences (if any): is charitable donation to x-risk organizations tax deductible? Also, there is little evidence in my post history that I would donate a significant portion of income to reducing x-risk, and any attempt on my part to establish that now could just be self-interest and so should not be evidence.
Thanks for linking this, though, I'm astonished that it exists because it means that people here are taking an abstract idea very seriously, which is rare and dangerous (see the section "Explicit reasoning is often nuts").
↑ comment by James_Miller · 2012-06-01T04:41:14.868Z · LW(p) · GW(p)
Network. Write down the names and numbers of all the adults you know in a notebook. Call each and ask if they know of anyone who might be helpful to you in a job search. Iterate until employed.
↑ comment by Kaj_Sotala · 2012-06-01T16:27:52.784Z · LW(p) · GW(p)
See also the Job Search Advice thread for some suggestions.
↑ comment by jsalvatier · 2012-06-01T04:14:52.867Z · LW(p) · GW(p)
I've heard good things about the book What Color Is Your Parachute?. The section on negotiating and asking for a raise seemed pretty useful.
comment by Multiheaded · 2012-06-13T19:13:46.868Z · LW(p) · GW(p)
For someone who hopes for lots of medical/bionic wonders going on the market within the next 2-3 decades, how stupid/costly is it really to start smoking a little bit today? I'm only asking because I tried it for the 1st time this week, and right now I'm sitting here smoking Dunhills, browsing stuff and listening to Alice in Chains, having a great night.
I insist on doing some light drug as I have an addictive personality that longs for a pleasant daily routine anyway - and I quit codeine this winter before it was made prescription-only (and not a moment too soon; withdrawal was already becoming a problem, to be honest), and marijuana is hard to get here, and MMOs take too much time & socializing, and I kind of hate alcohol. Anyone else trying to run a "controlled habit"?
comment by khafra · 2012-06-06T15:12:05.071Z · LW(p) · GW(p)
Reddit "ask me anything" coming tomorrow from the Stanford Prison Experiment guy.
comment by Shmi (shminux) · 2012-06-05T15:01:17.706Z · LW(p) · GW(p)
Post-Singularity you might BE a hoverboard.
(Of course, the premise of the comic is incompatible with the Singularity, since human-level AIs are widespread as companions, without ever going FOOM.)
comment by khafra · 2012-06-05T10:53:30.836Z · LW(p) · GW(p)
Does a Tegmark Level IV type Big World completely break algorithmic probability? Is there any sort of probability that's equipped to deal with including a Big World as a possibility in your model?
Replies from: wedrifid↑ comment by wedrifid · 2012-06-07T11:27:49.837Z · LW(p) · GW(p)
Does a Tegmark Level IV type Big World completely break algorithmic probability?
No. Why would it?
Replies from: khafra↑ comment by khafra · 2012-06-07T12:03:21.180Z · LW(p) · GW(p)
Caveat: I can not math good.
But, if "all mathematical structures are real," and possible universes, that must include structures like the diophantine equations isomorphic to Chaitin's Omega, and other noncomputable stuff, right? Can algorithmic probability tell me what mathematical structure generated a string, when some of the possible mathematical structures are not computable?
Replies from: None↑ comment by [deleted] · 2012-06-07T12:54:51.675Z · LW(p) · GW(p)
Can algorithmic probability tell me what mathematical structure generated a string, when some of the possible mathematical structures are not computable?
Presumably you'll only ever attempt to infer from a finite prefix of such a string, which is guaranteed to have a computable description. (Worst case scenario: "the string whose first character is L, whose second character is e, ...").
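A toy illustration of that worst case (a couple of lines of Python, purely to make the point concrete): the literal-print program always exists for any finite prefix, and its length, and hence its penalty under a 2^-length prior, grows linearly with the prefix.

    def literal_program(prefix):
        # Worst-case computable description of any finite string:
        # a program that prints it verbatim. It always exists, but its
        # length grows linearly with the prefix, so the universal prior
        # penalizes it by roughly 2^-len(prefix).
        return "print(" + repr(prefix) + ")"

    print(literal_program("Le"))  # -> print('Le')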
Replies from: khafra, khafra↑ comment by khafra · 2012-06-08T15:47:40.560Z · LW(p) · GW(p)
But we're trying to infer the actual generator, right? If, big-worldily, the actual generator is in some set of incomputable generators, it doesn't help us at all to come up with an n-parameter generating program for the first n bits - although no computable predictor can do better. If the set of possible generators is itself incomputable, how do we set a probability distribution over it?
comment by Alejandro1 · 2012-06-04T21:45:42.016Z · LW(p) · GW(p)
Sean Carroll has a nice post at Cosmic Variance explaining how Occam's razor, properly interpreted, does not weigh against Many Worlds or multiverse theories. Sample quote:
When it comes to the cosmological multiverse, and also the many-worlds interpretation of quantum mechanics, many people who are ordinarily quite careful fall into a certain kind of lazy thinking. The hidden idea seems to be (although they probably wouldn’t put it this way) that we carry around theories of the universe in a wheelbarrow, and that every different object in the theory takes up space in the wheelbarrow and adds to its weight, and when you pile all those universes or branches of the wave function into the wheelbarrow it gets really heavy, and therefore it’s a bad theory.
That’s not actually how it works.
It is in response to some remarks by philosopher Craig Callender, who comes to join the discussion in the comments.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-06-04T22:10:23.726Z · LW(p) · GW(p)
Another quote:
By these standards, the ontological commitments of the multiverse or the many-worlds interpretation are actually quite thin. This is most clear with the many-worlds interpretation of quantum mechanics, which says that the world is described by a state in a Hilbert space evolving according to the Schrodinger equation and that’s it. It’s simpler than versions of QM that add a completely separate evolution law to account for “collapse” of the wave function. That doesn’t mean it’s right or wrong; but it doesn’t lose points because there are a lot of universes. We don’t count universes, we count elements of the theory, and this one has a quantum state and a Hamiltonian. A tiny number!
I agree with all that (except for "and that’s it" part for MWI, given that the Born rule is still a separate assumption).
Counting worlds or universes towards complexity of a quantum theory is as silly as counting species towards complexity of the theory of evolution.
comment by CWG · 2012-06-04T07:08:02.974Z · LW(p) · GW(p)
I was annoyed when I first heard the Monty Hall problem. It wasn't made clear that the host must always open a door, an ambiguity which fundamentally changes the problem. Glad to see that it's a recognized problem.
"The problem is not well-formed," Mr. Gardner said, "unless it makes clear that the host must always open an empty door and offer the switch. Otherwise, if the host is malevolent, he may open another door only when it's to his advantage to let the player switch, and the probability of being right by switching could be as low as zero." Mr. Gardner said the ambiguity could be eliminated if the host promised ahead of time to open another door and then offer a switch. - http://www.nytimes.com/1991/07/21/us/behind-monty-hall-s-doors-puzzle-debate-and-answer.html?pagewanted=5&src=pm
More discussion on the previous page of that article: http://www.nytimes.com/1991/07/21/us/behind-monty-hall-s-doors-puzzle-debate-and-answer.html?pagewanted=4&src=pm
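A quick Monte Carlo sketch makes Gardner's point vivid (Python; the door numbering and the rule that the malevolent host only offers a switch when you have picked the car are my own modeling assumptions):

    import random

    def switch_outcome(malevolent):
        # Returns True/False if a switch was offered and won/lost, None if no offer.
        car = random.randrange(3)
        pick = random.randrange(3)
        if malevolent and pick != car:
            return None  # malevolent host stays silent unless switching would hurt you
        # host opens a goat door; switching wins iff the first pick was wrong
        return pick != car

    def switch_win_rate(malevolent, trials=100000):
        offers = [r for r in (switch_outcome(malevolent) for _ in range(trials)) if r is not None]
        return sum(offers) / len(offers)

    print(switch_win_rate(False))  # ~0.667: rule-bound host, switching wins 2/3
    print(switch_win_rate(True))   # 0.0: malevolent host, switching always loses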
comment by Multiheaded · 2012-06-04T06:55:27.419Z · LW(p) · GW(p)
Hypothetical: what do you think would happen if, in a Western country with a more or less "average" culture of litigation - whether using trial by jury, by judge, or a mix of both - all courts were allowed to judge not just the interpretation, applicability, spirit, etc. of a law, but also the constitutional merit of any law in every case (without any decision below the Supreme Court becoming precedent)?
Say, someone is arrested and brought to trial for illegal possession of firearms, but the judge just decides that the country's Constitution allows anyone to own any weapons they want, so ve just releases the suspect and gives them some notice that they can't be tried a second time on the same charges. I'm not saying it's a good idea; my knowledge of the law is limited to a few novels by John Grisham. I'm just asking your opinion: would things be very horrible if every court was its own authority on constitutional matters? (Of course, I realise that none of the unitary, control-hungry modern states would ever willingly agree to such decentralization.)
Replies from: None↑ comment by [deleted] · 2012-06-04T12:26:12.403Z · LW(p) · GW(p)
This would mess horribly with jurisprudence constante, the principle that the legal system must above all be predictable, so that people can execute contracts or choose behaviors knowing the legal implications. But since we are already, despite our claims to the contrary, OK with using things like retroactive laws, how problematic this would be in practice is hard to ascertain.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-06-05T06:23:01.339Z · LW(p) · GW(p)
I know about that principle, yeah - even EY mentioned it around here somewhere - but in practice the modern judiciary branch (to my eye) is very chaotic and even corrupt anyway (enforcement of copyright is, in practice, enforcement of insanity, mass murderers can more or less walk away if their fans are threatening enough, the policy towards juvenile offenders seems to be designed for destroying their lives and making them more dangerous, etc, etc), so indeed it's largely only predictable in its consistent badness. (That applies to more or less all modern nations with reasonably independent courts.)
If we've got to have such chaos - if installing a formal tyrant, etc are our only alternatives - at least it should be a chaos made of better, more commonsensical decisions; today's hierarchy of courts filters away common sense.
comment by Will_Newsome · 2012-06-13T23:44:50.587Z · LW(p) · GW(p)
I'm Xom#1203 on Diablo 3. I have a lvl 60 Barb and a hilariously OP lvl ~30 DH. I'm willnewsome on chesscube.com, ShieldMantis on FICS. I like bullet 960 but I'm okay with more traditional games too. Currently rated like 2100 on chesscube, 1600 or something on FICS. Rarely use FICS. I'd like to play people who are better than me, gives me incentive to practice. (ETA: I'll probably re-post this next Open Thread, I hope people don't mind too much.)
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-06-15T03:03:51.667Z · LW(p) · GW(p)
My Playstation Network name is Thomas_Bayes. One of my goals for the year was to learn Chess, but I've been very busy recently and haven't allocated much time to it.
comment by DanArmak · 2012-06-02T20:34:47.967Z · LW(p) · GW(p)
Unless Clippy has been brainwashing some humans, the joys of paperclipping are not as alien to the human mind as we had thought:
Replies from: Grognor
comment by r_claypool · 2012-06-02T20:33:07.458Z · LW(p) · GW(p)
To those who know Sam Harris' views on free will, how do they compare to the LW solution?
I'll get around to reading his eBook eventually, but it's not the highest priority in my backlog unless a few people say, "Yeah, read that. It's awesome."
comment by billswift · 2012-06-01T16:00:28.709Z · LW(p) · GW(p)
Our key insight is a pessimistic one: this is the sort of situation which, though individuals and markets don’t handle it well, isn’t actually handled well by governments either. The fundamental mistake of statist thinking is to juxtapose the tragically, inevitably flawed response of individuals and markets to large collective-action problems like this one against the hypothetical perfection of idealized government action, without coping with the reality that government action is also tragically and inevitably flawed.
Or, more simply, most statists, including those here, are fantasists. I strongly recommend that the people here who have actually been advocating the regulation of AI research, for example, read Hayek's The Fatal Conceit, with a view to how politicians and bureaucrats, those who would actually implement such regulation, would really use it, rather than your fantasies about how it should be enforced.
And that is entirely beside the fact that it would require a world-wide totalitarian dictatorship to enforce.
Replies from: CronoDAS↑ comment by CronoDAS · 2012-06-02T01:04:53.480Z · LW(p) · GW(p)
Well, as Robin Hanson said, coordination is hard.
Replies from: timtyler↑ comment by timtyler · 2012-06-02T01:24:46.819Z · LW(p) · GW(p)
It seems easy enough - among close relatives. Check out eukaryotic cells, for instance.
We might not all be genetically related - but we could easily become more memetically related.
Replies from: CronoDAS↑ comment by CronoDAS · 2012-06-02T08:56:11.583Z · LW(p) · GW(p)
I made that same criticism to Robin Hanson - that the cells in an organism don't seem to have that kind of trouble - but within-organism coordination is clearly at least somewhat imperfect, considering that many animals do end up developing cancer during their lifetimes. And I certainly have no idea what kinds of costs the body's various anti-cancer mechanisms end up imposing.
Replies from: timtyler↑ comment by timtyler · 2012-06-02T11:24:20.224Z · LW(p) · GW(p)
Yes, but we understand that cancer is a 'disposable soma' tradeoff, and that there are large, complex organisms that never get cancer. So any idea that cancer-like effects will necessarily prevent large-scale coordination seems pretty ridiculous. If empires get cancer, it will be because they are prepared to pay the costs of chopping out the occasional tumor. There are some costs to coordination, but it isn't that hard. Even ants manage it.
Replies from: CronoDAS↑ comment by CronoDAS · 2012-06-02T13:34:31.505Z · LW(p) · GW(p)
there are large, complex organisms that never get cancer.
Well, I know that plants don't get cancer the same way animals do, but it's even possible for insects to get cancer, although they don't usually live long enough to accumulate enough mutations to produce a tumor. Which complex organisms in particular did you have in mind? ("Sharks don't get cancer" is a myth invented by people who were trying to make money by selling shark cartilage.)
Replies from: timtyler
comment by Risto_Saarelma · 2012-06-01T10:25:45.290Z · LW(p) · GW(p)
Is anyone else occasionally browsing LW on Android devices and finding the image-based vote up, reply, parent, etc. links on the comments much more difficult to hit correctly than regular links?
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-06-01T11:09:03.173Z · LW(p) · GW(p)
The edit button used to be almost impossible for me to push, but now seems to be working. I don't know what changed, so have no idea how to help. Sorry. BTW, what version of Android are you using?
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-06-01T13:56:34.744Z · LW(p) · GW(p)
I'm getting the problem on an Acer A500 tablet with Android 3.2.1 (the version it came with, after the updates it pushed on me) and the default browser, and on an HTC Desire with Cyanogenmod, Android 2.3.7 and Cyanogenmod's default browser.
The edit button is indeed pretty much unpressable. I also can't seem to navigate to it using HTC Desire's trackpad, which can be used to highlight the other comment control links.
comment by [deleted] · 2012-06-12T16:06:00.642Z · LW(p) · GW(p)
A user whose judgement I deeply admire has told me off-site that my posts are harmful to the community and that it is better that I stop posting. I will respect his opinion and discontinue posting until further notice.
Please downvote this post if I make responses after it.
Thanks for all the fun and cool conversation! It was a great ride while it lasted, I will try to live up to the spirit of LW in the future.
First Checkpoint
I delayed the break from LW because of some of the feedback to this post, as well as plain force of habit. I did some posts I considered clearly valuable to the community. As of August 8th, it's been exactly a month since my last entry. I don't think much has changed so far, so I'm going to stick it out until the next checkpoint, which will be at the 3-month mark.
Second Checkpoint
The breaks I took were somewhat useful. Currently resuming normal participation.
Replies from: shokwave, wedrifid, drethelin, ArisKatsaris, TimS, Vladimir_Nesov, thomblake, None, None, lsparrish↑ comment by shokwave · 2012-06-12T20:14:41.274Z · LW(p) · GW(p)
until further notice.
You should note this on a calendar or something: two months from now, re-evaluate your position. It seems to me there's a chance you'll change to the point where you're a net positive; re-evaluation is cheap; that small chance should be allowed for, not discarded.
Replies from: None↑ comment by wedrifid · 2012-06-13T02:32:51.688Z · LW(p) · GW(p)
I'm sorry to see you go.
I do agree with gwern that your recent critical lamentations have been a negative contribution, particularly because I find it is too easy to be influenced towards cynicism. However, your recent dissatisfaction aside, your contributions in general are fine, making you a valuable community member. I never see the name "Konkvistador" and think "Oh damn, that moron is commenting again", which puts you ahead of rather a lot of people and almost constitutes high praise!
I can perhaps empathise with becoming disgruntled with intellectual standards on lesswrong. People are stupid and the world is mad - including most people here and everywhere else I have interacted with humans. I recently took a whole 30 days off, getting my score down to '0', weakening the addiction and also relieving a lot of frustration. I enjoy lesswrong much more after doing that. Hopefully you decide to return some time in the future as well.
Replies from: TimS↑ comment by TimS · 2012-06-13T03:10:34.765Z · LW(p) · GW(p)
Honestly, I think you were too easily mollified by lukeprog - for the reasons I said to him there.
Replies from: wedrifid↑ comment by wedrifid · 2012-06-13T03:54:34.928Z · LW(p) · GW(p)
I tend to agree with Shokwave's reply. Lesswrong users not learning a bunch of history is not a big deal. The subject is fairly boring. Someone else can learn it.
Lesswrong isn't supposed to be a site where all users must learn arbitrary amounts of information about arbitrary subjects. Most people have better things to do.
↑ comment by drethelin · 2012-06-12T18:23:18.093Z · LW(p) · GW(p)
I find your style of commenting both fun to read and interesting. I think your posts are valuable even if they're more "thinking out loud" than "I have studied ALL THE LITERATURE". As a community I think we can and SHOULD be able to talk about things in ways that don't involve 50 citations at the bottom of the page, even though I think those posts are valuable. I don't know who you're scaring away with your amount of commenting, but I don't miss them.
↑ comment by ArisKatsaris · 2012-06-12T17:20:37.190Z · LW(p) · GW(p)
Jeez.
You've been the top contributor in the past 30 days.
This departure of yours is the most harmful thing you've ever done to the community. I wish you'd stay.
This is bloody stupid.
↑ comment by Vladimir_Nesov · 2012-06-12T16:20:44.566Z · LW(p) · GW(p)
It's somewhat plausible that 20 comments a day may be too much (in someone's perception), or that it's better to develop certain kinds of posts more carefully, maybe even to avoid certain topics (that would shift the focus of conversations on LW in an undesirable direction), but it's not a case for not posting at all.
(That is, the questions of whether Konkvistador's posts are slightly harmful for the community (in what specific way) and whether the best intervention in response to that hypothetical is to discontinue posting entirely don't seem to me clearly resolved, and low rate of posting seems like a better alternative for the time being, absent other considerations.)
Replies from: CharlieSheen, CharlieSheen↑ comment by CharlieSheen · 2012-06-12T16:54:36.540Z · LW(p) · GW(p)
whether Konkvistador's posts are slightly harmful for the community
It is ridiculous to argue that an eloquent and prolific poster who actually seems to have read the motherfucking sequences and doesn't get tired of trying to help new people access them (a rare trait these days) is causing harm.
Even if that were so for every single thing he wrote. And note that when Lukeprog cited evidence against his argument that productivity and openness to outside ideas on LW are lower than they should be, the bundle included many of Konkvistador's posts as examples of openness and productivity! Imagine that!
At the very least, his excellent taste in the outside links he regularly shares with the community makes him definitely a signal man, not a noise man.
But please, let's pile on him. I bet soon someone will bring up how he violated the "mindkilling" taboo, or even accuse him of getting "mindkilled".
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-06-12T17:02:27.381Z · LW(p) · GW(p)
Your rhetoric amounts to dismissing the question as ridiculous. I suggest actually considering it. I expect that the answer accepted by Konkvistador is wrong on both levels: their contributions don't seem harmful on net; there are multiple meanings of "harmful" that should be addressed separately, with different interventions; and stopping participation entirely doesn't seem to be the best response to the hypothetical of their contributions being harmful in some of these senses.
(For example, it's likely that for most posters, there is some aspect of their participation that is harmful, and the appropriate response is to figure out what that aspect is and fix it. So it's useful to consider these questions.)
Replies from: CharlieSheen↑ comment by CharlieSheen · 2012-06-12T17:08:08.292Z · LW(p) · GW(p)
My rhetoric is what it is, I'm pissed. Feel free to make an argument for why Konkvistador's output is on net "harmful", I will try to consider it properly.
Though naturally we are left at a disadvantage here, since we will likely only ever hear one side of the story. The man himself has probably already scrambled his password or something and won't be putting up a defence.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-06-12T17:11:30.435Z · LW(p) · GW(p)
My rhetoric is what it is, I'm pissed. Feel free to make an argument for why Konkvistador's output is on net "harmful", I will try to consider it properly.
This is not my argument, please re-read the discussion when you calm down.
↑ comment by CharlieSheen · 2012-06-12T16:27:59.666Z · LW(p) · GW(p)
I generally love his 20 comments a day.
↑ comment by [deleted] · 2012-06-12T17:21:14.011Z · LW(p) · GW(p)
I'm not sure why Gwern's and Nesov's replies are being downvoted to the point that they are hidden. Surely there is disagreement, but I see the quality of their posts as high. I urge voters to vote on the quality of the posts, not whether you agree/disagree with them.
Replies from: CharlieSheen, CharlieSheen↑ comment by CharlieSheen · 2012-06-12T17:54:31.917Z · LW(p) · GW(p)
But even if I'm wrong in this case, it seems obvious we have a split community on this.
I'm betting the subconscious parts of the brains of the "Top Poster" clique are running their little hamster wheels, trying to find clever reasons to associate with the high-status gwern rather than the low-status absent underdog.
Replies from: TimS↑ comment by TimS · 2012-06-12T18:16:14.651Z · LW(p) · GW(p)
Two points:
I don't know what's going on inside your head, but this looks like motivated cognition from the outside.
Regardless of why you are saying this, it doesn't help change the community norm in the direction that you seem to want.
↑ comment by CharlieSheen · 2012-06-12T18:19:49.735Z · LW(p) · GW(p)
I don't know what's going on inside your head
I am on a drug. It’s called Charlie Sheen. It’s not available because if you try it you will die. Your face will melt off and your children will weep over your exploded body.
but this looks like motivated cognition from the outside
Regardless of why you are saying this, it doesn't help change the community norm in the direct that you seem to want.
I'm not sure there is any hope for this community. But OK, you seem reasonable; I'll quit and let the conformist contrarian wolves tear at K's corpse.
Replies from: TimS↑ comment by TimS · 2012-06-12T18:23:15.186Z · LW(p) · GW(p)
Quitting doesn't advance your goals either. If your goal isn't posturing for your own emotional satisfaction, stop posturing and do some real work. Do the impossible and try to actually convince the community that gwern's advice was bad.
Replies from: CharlieSheen↑ comment by CharlieSheen · 2012-06-12T18:23:55.203Z · LW(p) · GW(p)
That isn't a job for CharlieSheen. Though I think if you look past the rude language you will find the arguments sound. But yes, I was in the wrong here on tone.
↑ comment by CharlieSheen · 2012-06-12T17:47:39.019Z · LW(p) · GW(p)
Gwern is wrong. It's that simple.
edit Corrected typo.
2nd edit Haters gonna hate. I'd love to hear some actual arguments though, clique men. So predictable: on LW, someone bitches about karma and you insta-upvote him and downvote the opponent. In a prisoner's dilemma with the options of defect or cooperate, LWers always pick CONTRARIAN.
Replies from: TimS↑ comment by TimS · 2012-06-12T17:54:49.370Z · LW(p) · GW(p)
Gwern is being flippant, but what's wrong with Nesov's statements?
Replies from: CharlieSheen↑ comment by CharlieSheen · 2012-06-12T17:58:46.360Z · LW(p) · GW(p)
Yeah I guess I can agree.
I originally misread him. He apparently doesn't think K has been on net "hurting" the community. I've edited my posts to reflect this. So, apologies to Nesov.
↑ comment by [deleted] · 2012-06-21T17:29:06.665Z · LW(p) · GW(p)
Ok, this didn't work. Please downvote the parent and this comment to punish me for posting.
Edit: Ok, who upvoted this? Not funny. :/
Replies from: wedrifid, TimS, None↑ comment by wedrifid · 2012-06-21T18:35:20.576Z · LW(p) · GW(p)
I don't downvote on request (for reasons I have occasionally expressed; I consider that self-control strategy to be poor).
Go edit C:\Windows\System32\drivers\etc\hosts or /etc/hosts to point lesswrong to 127.0.0.1. Works for me. In fact, whenever I get the slightest impulse to go look at lesswrong I deliberately and actively type in lesswrong.com, anticipating somewhat eagerly the Server Not Found message and giving myself a mental reward. This was amazingly effective in achieving extinction in an excessively reinforced behavioral pattern.
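(The entries themselves are one-liners, and the syntax is the same in both files; the www line is only needed if you also visit that variant:)

    127.0.0.1    lesswrong.com
    127.0.0.1    www.lesswrong.com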
Replies from: None↑ comment by TimS · 2012-06-21T17:58:48.289Z · LW(p) · GW(p)
Konkvistador,
For some of us, your behavior looks like evaporative cooling from the inside. For those of us who don't want cooling, this is not a good thing.
But I respect you too much to upvote something you don't want upvoted.
comment by witzvo · 2012-06-11T15:19:34.353Z · LW(p) · GW(p)
English is a viciously ambiguous language.
1) The preceding is not a quote, really, it's just a sentence I made up and want to analyze.
2) I think the sentence has more than an element of truth to it. While also being self-referential. This can be amusing in poetry, I guess, but I'm getting pretty sick of it right now.
3) I do not know what to do about this. I do not know how we even manage to talk to each other at all some times (!). Shades of meaning. Tones of voice running all out of sync to spoken words in order to hint at things that are better left unsaid.
4) I mean it's not like ambiguity isn't useful. Consider this delightfully clever piece of rhetoric due to Eliezer that I bumbled into while trying to find this OpenThreadGuy thing. [Meta: Where's the FAQ on things like this? (edit: I mean the OpenThread. Damn ambiguity! Although a FAQ of FAQs could be handy too.) ]
Just because research shows that human beings are insane, does not mean that turning power over to a government composed of human beings will cause it to fix the problem.
5) Help.
Edit: PS: No. I haven't read the sequences yet. Yes, English is my native language, I just don't feel like it is.
! Edit: I think that we communicate by something cognitive that resembles Bayesian updating. Something effectively like a particle system. I have no evidence for this. When I want to be especially clear, I run a "simulation" in my head of someone else reading my writing, and think of all the places their particles might go wrong.
Edit: Braces added as an experiment below. At first I interspersed them above, but this way you can tell me if you understood me accurately the first time.
English is a viciously ambiguous language.
1) The preceding is not a quote, really, it's just a sentence I made up and want to analyze.
2) I think the sentence has more than an element of truth to it {understatement}. While also being self-referential {deliberate fragment; sometimes extra periods are helpful, I think}. This can be amusing in poetry, I guess, but I'm getting pretty sick of it right now. {garden variety ambiguous; "this" refers to self-reference and to ambiguity itself, with intended emphasis on the latter, though here, I think the ambiguity was more a subconscious resonance than any deliberate poetry}
3) I do not know what to do about this. {plain honest truth, but plenty of elliptical bits and a dangling this pointer left to the reader as an exercise} I do not know how we even manage to talk to each other at all some times {deliberate exaggeration, !}. Shades of meaning. {ellipsis} Tones of voice running all out of sync to spoken words in order to hint at things that are better left unsaid. {imprecise; deal with it.}
4) I mean it's not like ambiguity isn't useful. Consider this delightfully clever piece of rhetoric due to Eliezer that I bumbled into while trying to find this OpenThreadGuy thing. [Meta: Where's the FAQ on things like this? {accidentally ambiguous pronoun} (edit: I mean the OpenThread. Damn ambiguity! Although a FAQ of FAQs could be handy too.) ]
Just because research shows that human beings are insane, does not mean that turning power over to a government composed of human beings will cause it to fix the problem.
5) Help.
Edit^2: I notice that I never really analyzed the lead sentence. Briefly:
- English: I mean this noun not "The people of England."
- Is: Oh don't get me started. The range of metaphor! When I hear "is," I hope that it means "is a" or "is about to" and only go back if these fail. Ah. I guess you cross off the "people of England" case here if you even gave it much prior mass.
- a: Meh.
- viciously: Here we go! I mean a WHOLE mess of these ALL at "once." I'm not ready to go through and figure out exactly which. Ok fine. Wow! I think I mean ALL of them literally. I'm surprised. I didn't realize I "meant" all of them.
- ambiguous: gosh is this ambiguous too? I dunno. I guess so
- language: probably ambiguous by itself, but now you know for sure what I meant by English, well "for sure".
Edit^3: I shouldn't have to run a huge simulation just to speak or to listen! {motivated thinking} expletive!
I'm sure this has been discussed before. Thanks in advance.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-06-11T15:42:19.072Z · LW(p) · GW(p)
What sort of help do you want?
Or, put a different way: how would you recognize something as helpful, if such help were provided?
↑ comment by witzvo · 2012-06-11T17:15:13.853Z · LW(p) · GW(p)
What sort of help do you want? Or, put a different way: how would you recognize something as helpful, if such help were provided?
Excellent question! I didn't even think of that kind of ambiguity. I like the way you phrased it and then clarified it. Already helpful!
I consider a link to something really good (and preferably brief) helpful. Or the right sentence.
I would recognize something as helpful if I perceived that it would change my future behavior so that I communicate better. Better means: with less effort, more persuasive, not feeling left behind in a conversation, more accurate empathy. Heck, asking the right question, as you just did. Although I'm probably too "Socratic" for most people already.
Since this is too lofty, here's a limited goal. I would like to know how to communicate with like-minded folks on this site as well as possible. E.g. I didn't know the "friends" option existed, or what it does, or that I wasn't seeing all posts until I finally clicked on preferences.
I feel that there's too much unsaid wisdom here, or rather that it's scattered to the winds, so an effort to fix that would be great. (Or at least I haven't found the succinct/definitive source, and no, I haven't read the sequences. Frankly, I've been put off by the wordiness and colloquiality. I'll get over it eventually, I guess, because I'm picking at it already.)
Also, in the rant, I noticed ambiguity as an asset and a liability and as a CPU sink. Are there alternatives that I don't know about for coping with this? And/or comments on my brace elaboration?
By the way, I started reading Cialdini's Influence and I judge it as helpful, though not for English per se yet. Honestly, I found HPMOR more amusing than helpful, but yes, it got me here, I suppose, so points!
Edit^2: I think that this line is my most immediate pain point:
I shouldn't have to run a huge simulation just to speak or to listen! {motivated thinking}
Anything that helps me figure out workarounds or accept the necessity is great!
PS I just noticed that your question anticipated and resolved the answer "I don't know." Very slick. But let me say "I don't know" too.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-06-11T17:44:18.592Z · LW(p) · GW(p)
Regarding your most immediate pain point... if there's a way around that, I don't know it. Humans are complicated, understanding natural language requires a huge amount of pre-existing knowledge, and understanding it well enough to carry on a conversation requires building some sort of model of my interlocutor. I recognize that this is a more difficult task for some people than others, and that this is essentially unfair.
Replies from: witzvo↑ comment by witzvo · 2012-06-11T17:51:51.520Z · LW(p) · GW(p)
Well, if there's not a workaround, there are coping strategies, surely. Here's what I do {idealization}:
- Ignore most of it
- Wait until something catches my attention. Something worth thinking about or responding to
- Try to think of what to say, and hope/dream on that there's a way to fit it into the conversation by the time I've figured it out.
(Let's see, how can I reverse what you did to me, on you?)
Replies from: TheOtherDave
TheOtherDave, is coping with the complexity of communication a problem for you? If so, how do you deal with it? What would you recognize as a step forward on this point? And where do we go / who do we go to to get help on this?
↑ comment by TheOtherDave · 2012-06-11T18:20:00.513Z · LW(p) · GW(p)
It's not a problem for me, in that I enjoy it. Communication is a complex puzzle, and I succeed at it regularly enough to find it rewarding. But I acknowledge that that approach isn't available to everyone.
That said, I think it's a skill worth mastering.
As for how to master it... yeah, that's a great question. The best technique I know of is to make explicit predictions (typically private ones) of other people's reactions, and when they are wrong, pay a lot of attention to exactly how they were wrong... what that tells me about what I thought was true about that person that turns out not to be true.
Replies from: witzvo↑ comment by witzvo · 2012-06-11T18:46:32.508Z · LW(p) · GW(p)
Good advice. Thanks!
Edit: Yes, I am one of those look at the floor types. I'm trying to break that habit. Some improvement, maybe.
Replies from: witzvo↑ comment by witzvo · 2012-06-13T09:39:53.354Z · LW(p) · GW(p)
I am beginning to Embrace Constructive ambiguity, and think that I might enjoy Communication after all.
My current Stylistic plan: capitalize the letters of words where you intend the reader to notice a potential for ambiguity that you intend constructively.
the capitals above are in draft status; written by instinct. I like that I and My happen to come out capital, though.
e.g.
english is a Viciously ambiguous language.
... would get the emphasis more right. (and I notice that starting sentences clearly is going to be a bit of a problem)
Replies from: witzvo↑ comment by witzvo · 2012-06-13T09:49:43.520Z · LW(p) · GW(p)
wow. how do I give more Attention to your advice? it's great! I have not learned the explicit predictions part yet. I'm still just reacting. My cognition has been so overloaded I never had time for that and I haven't figured out where to fit it in. help? all I can predict right now is that what you right will be helpful.
PS have I mentioned how much I admire Your Pseudonym?
edit: hah! I wrote "right". I never Grokked Puns. (although some were clear enough I did find them funny, of course; and I've been sensitive to Irony for Forever) edit^2: {the pun was unintentional{conciously}, but awesome} PPS I sent a link to my dad about this. I think he'll get it. beyond that? also, notation matters (link?) but I'm sure you know that in your own way. so much happening by Accident these days. It seems to be coming from Communication. neat-o.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-06-13T14:22:08.111Z · LW(p) · GW(p)
You might also want to know that when you reply to yourself, as you do above, you get notifications but I don't. (I just happened to notice this in Recent Comments.)
I find lists useful for keeping track of things I want to get to later but don't have time/capacity for now.
In principle, I endorse the idea of typographic markers for particular meta-level emphasis -- what in speech I would use tone of voice for -- but in practice I find it distracts me more than it helps, and I pattern-match it to crankishness. I sometimes use italics for the purpose, but I find even that more and more distasteful as time goes by. This all seems arbitrary and even unfair of me, but there it is anyway.
Re: pseudonym... thanks.
Two lines of text without an intervening blank line will get parsed as a single line, unless you put two blank spaces at the end of the first line.
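For example (where [space][space] stands for two literal trailing spaces, which the Markdown here turns into a line break):

    roses are red[space][space]
    violets are blue

Without the trailing spaces, those two lines would render as a single line.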
Replies from: witzvo
comment by DanArmak · 2012-06-08T13:24:01.031Z · LW(p) · GW(p)
I want to talk about human intelligence amplification (IA), including things like brain-machine interfaces, brain/CNS mods, and perhaps eventually brute-force uploading/simulation. There are parallels between the dangers of AI and IA.
IA powerful enough to be or create an x-risk might be created before AGI. (E.g., successful IA might jump-start AGI development.) IA is likely to be created without a complete understanding of the human brain, because the task is just to modify existing brains, not to design one from scratch. We will then need FIA - the IA equivalent of Friendliness theory. When a human self-modifies using IA, how do we ensure value stability?
Are there organizations, forums, etc. dedicated to building FIA the way SIAI etc. are dedicated to building FAI?
Reposted from here hoping more people will read and respond.
Replies from: vi21maobk9vp, TimS↑ comment by vi21maobk9vp · 2012-06-09T08:48:41.711Z · LW(p) · GW(p)
IA is likely to progress in big steps, but an IA FOOM makes even less sense than an AI one, because of the human in the loop. Also, it would probably give humans a lot of improvements before solving the problem of the low number of independent simultaneous attention threads. So it is not clear that any IA direction would produce a situation with a single unstoppable entity.
If IA simply greatly increases the thinking power of a thousand people, each by a different amount, I would not be sure that the medium-term existential threat of this field is greater than the overall short-term existential threat created by something existing here and now, like Sony...
↑ comment by TimS · 2012-06-08T14:32:05.170Z · LW(p) · GW(p)
My sense is that explicit technological modification of humans is already heavily concerned with the question "Will we still be human after this modification?", which at least gestures at the problems you identify. It is exactly the lack of this type of concern in the AI field that motivates SIAI's Friendliness activism. But the sorts of technological advances you are pointing towards seem more likely to arise in part from medical research methodologies, which seem more concerned with potential negative psychological and sociological effects than some other forms of technological research.
In short, if every AI researcher was already worried about safety to the extent that medical researchers seem to be worried, then there would be no need for SIAI to exist - all AI researchers worrying about the Friendliness problem is what winning looks like for SIAI. Since medical researchers are already worried about these types of problems, an SIAI-equivalent is not necessary. Consider all the different medical ethics councils - which are much more powerful than their institutional equivalents in AI research.
comment by [deleted] · 2012-06-07T19:24:50.225Z · LW(p) · GW(p)
I just read the following comment from Ben123:
http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/6rzn
And it mentioned a chain of words I had not thought of before: "Multiplied by the reciprocal of the probability of this statement being true." At that point, I felt like I couldn't get much past "I notice I am confused." without feeling like I was making a math error. (And that it was possible the math error was in assuming the statement could be evaluated at all, which I wasn't sure of.)
In general, how should I assess that statement or others like it? Does its recursion simply make it impossible to evaluate under some circumstances?
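The closest I can get to making the confusion concrete (my own reading of the construction, so possibly not what Ben123 meant): if a claim C promises a payoff of c divided by P(C), then

    E[payoff] = P(C) * (c / P(C)) = c

so the expected value is independent of the probability I assign; and since the payoff is defined in terms of that very probability, assigning P(C) in the first place looks like a fixed-point problem rather than an ordinary credence assignment.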
comment by Richard_Kennaway · 2012-06-06T07:31:01.307Z · LW(p) · GW(p)
A Transit of Venus finished a few hours ago. (100% overcast where I am, alas.)
The next one is in 2117. How many of us expect to see it?
ETA: So far, one nitpick and one Singularitarian prediction.
Personally, I expect to be dead in the usual way by the middle of this century at the latest, and even if I had myself frozen, I don't expect cryonic revival to be possible by 2117. I am not expecting a Singularity by then either. Twenty-year-olds today might reasonably have a hope of life extension of the necessary amount.
ETA2: A little sooner than that, there's a transit of the Earth from Mars in 2084. Anyone up for that?
Replies from: Thomas
comment by khafra · 2012-06-05T14:23:26.771Z · LW(p) · GW(p)
Would we lose much by not letting new, karmaless accounts post links? Active moderation is never going to be fast enough to keep stuff like this off the first page of http://lesswrong.com/comments, and it diminishes my enjoyment.
Or we could use some AI spam detection, I guess.
comment by [deleted] · 2012-06-05T05:28:41.647Z · LW(p) · GW(p)
I've just realized that my information diet is well characterized as an eating disorder. Unfortunately, I'm not able to read about eating disorders (to see if their causes could plausibly result in information-diet analogs and whether their treatments can be used/adapted for information consumption disorders), because I get "sympathy attacks" (pain in my abdomen, weakness in my arms, mild nausea) when I see, hear salient descriptions of, or read about painful or very uncomfortable experiences.
I don't know what to do at this point. I'd like to have a moderate, preplanned, "3 meals a day" type research schedule. My first thought was to dive into social cognition literature to understand what's going on behind my sympathy attacks, but then I immediately realized where diving into tangential research usually gets me. What does CBT prescribe in situations like this? Is gradually exposing myself to eating disorder related imagery while in a safe, comfortable environment going to help me extinguish my aversion enough that I can read an article? And not so much that I lose most of my useful negative reaction to eating disorders?
comment by Shmi (shminux) · 2012-06-03T22:49:17.647Z · LW(p) · GW(p)
To rationalize dust specks over torture, one can construct a utility function where utility of dust specks in n people is of the Zeno type, -(1-1/2^n), and the utility of torture is -2. Presumably, something else goes wrong when you do that. What is it?
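(Spelling out what the construction buys, in the notation above:

    U_specks(n) = -(1 - 1/2^n) > -1 > -2 = U_torture, for every n

so no number of dust-speck victims, however large, ever aggregates to anything as bad as one torture. The question is what this boundedness costs elsewhere.)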
Replies from: Zack_M_Davis, CuSithBell, wedrifid↑ comment by Zack_M_Davis · 2012-06-04T00:28:37.408Z · LW(p) · GW(p)
As commenter Unknown pointed out in 2008, there must then exist two events A and B, with B only worse than A by an arbitrarily small amount, such that no number of As could be worse than some finite number of Bs.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-06-04T07:10:32.295Z · LW(p) · GW(p)
Thanks, that's a valid point, pretty formal, too. I wonder if it invalidates the whole argument.
↑ comment by CuSithBell · 2012-06-04T00:22:57.433Z · LW(p) · GW(p)
Many find that sort of discounting contrary to intuition and to desired results: it implies, e.g., that the suffering of some particular person is more or less significant depending on how many other people are suffering in a similar enough way.
↑ comment by wedrifid · 2012-06-04T04:21:23.268Z · LW(p) · GW(p)
To rationalize dust specks over torture, one can construct a utility function where utility of dust specks in n people is of the Zeno type, -(1-1/2^n), and the utility of torture is -2. Presumably, something else goes wrong when you do that. What is it?
Nothing. If those are your actual preferences, then that is the choice you should make. Not because you can rationalize it, but because it is, in fact, what you want to do, all things considered.
comment by cousin_it · 2012-06-02T17:25:59.657Z · LW(p) · GW(p)
Here's a little math problem that came up while I was cleaning up some decision theory math. Oh mighty LW, please solve this for me. If you fail me, I'll try MathOverflow :-)
Prove or disprove that for any real number p between 0 and 1, there exist finite or infinite sequences (x_m) and (y_n) of positive reals, and a finite or infinite matrix (\varphi_{mn}) of numbers each of which is either 0 or 1, such that:
1) \sum_m x_m = 1
2) \sum_n y_n = 1
3) \forall m,n \; \varphi_{mn} = \varphi_{nm}
4) \forall n \; \sum_m x_m \varphi_{mn} = p
5) \forall m \; \sum_n y_n \varphi_{mn} = p
Right now I only know it's true for rational p.
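For what it's worth, one construction for the rational case p = k/N (Python with numpy; this particular circulant-style matrix is just one choice that happens to work):

    import numpy as np

    def rational_case(k, N):
        # p = k/N: take x = y uniform on N points, and the symmetric 0/1
        # matrix phi[i][j] = 1 iff (i+j) mod N < k, which has exactly
        # k ones in every row and column.
        x = np.full(N, 1.0 / N)
        phi = np.array([[1.0 if (i + j) % N < k else 0.0 for j in range(N)]
                        for i in range(N)])
        return x, phi

    x, phi = rational_case(3, 7)        # p = 3/7
    assert np.allclose(phi, phi.T)      # condition 3: symmetry
    assert np.allclose(phi @ x, 3 / 7)  # conditions 4 and 5, with y = x
    assert abs(x.sum() - 1) < 1e-12     # conditions 1 and 2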
comment by Jayson_Virissimo · 2012-06-02T12:20:00.000Z · LW(p) · GW(p)
I just added a new post on my blog about some of my experiences with PredictionBook. It may be of interest to some here, but understand that the level of discourse is meant to be exactly in between Less Wrong and my family and friends. It is very awkward for me to write this way and I don't really have the hang of it yet, so go easy. It is a very delicate balance between saying things imprecisely (even knowingly wrongly or incompletely) and keeping things jargon-free and understandable to a wider audience.
comment by CronoDAS · 2012-06-02T13:41:54.163Z · LW(p) · GW(p)
Amusing link: Supervillain lair for sale, $17.3 million.