Open Thread, March 16-31, 2012
post by OpenThreadGuy · 2012-03-16T04:53:33.878Z · LW · GW · Legacy · 117 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
117 comments
Comments sorted by top scores.
comment by thescoundrel · 2012-03-16T21:17:22.693Z · LW(p) · GW(p)
In the notes for the current chapter of HPMOR, we have the following:
General P/S/A: If you were good at algebra and are presently making less than $120,000/year, you should test yourself to see if you enjoy computer programming. Demand for programmers far outweighs supply, and if you have high talent it's an extremely easy and well-paying career to enter. I expect that at least 1% of the people reading this could be better employed as programmers than in their present occupations.
I greatly enjoy programming, and am currently employed at about half that doing tech support, where my only time to actively program is in bash scripts. I followed the link to the Quixey challenge, and while I was not solving them in under a minute, I am consistently solving the practice problems. My question is this: now what?
I have no experience in actual development, beyond the algorithm analysis classes I took 6 years ago. I have a family of 6, and live in the KCMO area. How do I make the jump into development from no background? Does anyone have experience with that transition?
Replies from: maia↑ comment by maia · 2012-03-31T20:20:16.228Z · LW(p) · GW(p)
I don't, but you might want to check out communities like Slashdot (http://slashdot.org) or Stack Overflow (http://stackoverflow.com) if you don't get responses here.
comment by Grognor · 2012-03-16T05:29:55.758Z · LW(p) · GW(p)
A meta-anthropic explanation for why people today think about the Doomsday Argument: observer moments in our time period have not solved the doomsday argument yet, so only observer moments in our time period are thinking about it seriously. Far-future observer moments have already solved it, so a random sample of observer moments that think about the doomsday argument and still are confused are guaranteed to be on this end of solving it.
(I don't put any stock in this. [Edit: this may be because I didn't put any stock in the Doomsday argument either.])
Replies from: Thomas, Oscar_Cunningham, orthonormal, steven0461, syzygy↑ comment by Thomas · 2012-03-16T10:27:05.210Z · LW(p) · GW(p)
You have reduced the DA to an absurdity, which comes from the DA itself. Clever.
Self-reference is quite a dangerous thing for a statement. If something can refer to itself, it is often prone to paradoxical consequences that invalidate it.
↑ comment by Oscar_Cunningham · 2012-03-16T10:03:00.360Z · LW(p) · GW(p)
If the conditions of this argument were true, it would annul the Doomsday Argument, thus bringing about its own conditions!
Replies from: Grognor↑ comment by orthonormal · 2012-07-31T15:18:46.726Z · LW(p) · GW(p)
The moon and sun are almost exactly the same size as seen from Earth, because in worlds where this is not the case, observers pick a different interesting coincidence to hold up as non-anthropic in nature.
Replies from: Grognor↑ comment by Grognor · 2012-08-01T22:33:22.451Z · LW(p) · GW(p)
What?
Replies from: orthonormal↑ comment by orthonormal · 2012-08-04T11:54:10.848Z · LW(p) · GW(p)
Meta-anthropics is fun!
↑ comment by steven0461 · 2012-07-31T06:42:13.502Z · LW(p) · GW(p)
But if even a tiny fraction of future observers thinks seriously about the hypothesis despite knowing the solution...
Replies from: Grognor↑ comment by syzygy · 2012-03-16T08:20:36.664Z · LW(p) · GW(p)
Isn't this true about any conceivable hypothesis?
Replies from: Grognor↑ comment by Grognor · 2012-03-16T08:50:08.946Z · LW(p) · GW(p)
Yes, but most hypotheses don't take the form, "Why am I thinking about this hypothesis?" and so your comment is completely irrelevant.
To elaborate: the doomsday argument says that the reason we find ourselves here rather than in an intergalactic civilization of trillions is that such a civilization never appears. I give a different explanation which relies on the nature of anthropic arguments in general.
comment by NancyLebovitz · 2012-03-18T13:24:04.243Z · LW(p) · GW(p)
A notion I got from reading the game company discussion-- how much important invention comes from remembering what you wanted before you got used to things?
comment by sixes_and_sevens · 2012-03-16T14:49:31.959Z · LW(p) · GW(p)
I didn't want to put this as a discussion post in its own right, since it's not really on topic, but I suspect it might be of use to people. I'd like a "What the hell do you call this?" thread. It's hard to Google a concept, even when it might be a well-established idea in some discipline or other. For example:
Imagine you're playing a card game, and another player accidentally exposes their cards just before you make some sort of play. You were supposed to make that play in ignorance, but you now can't. There are several plays you could make which would have been beyond suspicion had you made them in ignorance, but if you make them now, they will be seen as suspect and opportunist in light of your opponent's blunder, so you feel obliged to make a less favourable play that is at least beyond reproach.
Is there a term to describe this? It, and various other social analogues, seem to crop up quite a lot in various guises, but I don't have a satisfactory label for it.
Alternatively, does anyone else have a "what the hell do you call this?" they want to throw out to the crowd?
Replies from: VincentYu, Grognor, spqr0a1↑ comment by VincentYu · 2012-03-17T19:21:02.741Z · LW(p) · GW(p)
The English Stack Exchange is a great site for getting answers to "what is a word or short phrase for ... ?" questions.
↑ comment by Grognor · 2012-03-17T02:26:07.257Z · LW(p) · GW(p)
That heuristic where, to make questions of fact easier to process internally, you ask "what does the world look like if X is true? what are the consequences and testable predictions of X?" rather than just "is X true?", which tends to just query your inner Google and return the first result, oftentimes after a wait that feels like thinking but isn't.
I want to know what to call that heuristic.
Replies from: folkTheory↑ comment by folkTheory · 2012-03-19T14:50:34.137Z · LW(p) · GW(p)
Making beliefs pay rent?
comment by Grognor · 2012-03-16T05:27:02.999Z · LW(p) · GW(p)
Because the number of quotes already used is increasing, and the number of LW users is increasing, I propose that the next quotes thread should include a new rule: use the search feature to make sure your quote has not already been posted.
Replies from: Viliam_Bur, Oscar_Cunningham↑ comment by Viliam_Bur · 2012-03-16T10:16:18.710Z · LW(p) · GW(p)
For a balance, once every two years there could be a thread for already posted quotes. Like "choose the best quote ever", to filter the best from the best.
Then the winning quotes could randomly appear on the LW homepage.
↑ comment by Oscar_Cunningham · 2012-03-16T10:01:01.354Z · LW(p) · GW(p)
It's already considered bad form to repeat a quote. I thought this was one of the listed rules, but since it isn't (at least in the current thread) I agree that it should be added.
Replies from: TimS↑ comment by TimS · 2012-03-16T15:54:42.944Z · LW(p) · GW(p)
No repeats should be in the rules, but a posting on the rationality quotes pages is not and should not be a certification that the poster has investigated and is confident that there is no repeat.
If I had to investigate that hard before posting in that thread, I'd never do it because it wouldn't be worth the investment of time. And the real consequences of repeating a quote are so low. In short:
Avoid repeating quotes.
Good rule.
Use the search feature to make sure your quote has not already been posted.
Bad rule, as phrased.
Replies from: wedrifid↑ comment by wedrifid · 2012-03-16T17:13:36.490Z · LW(p) · GW(p)
No repeats should be in the rules, but a posting on the rationality quotes pages is not and should not be a certification that the poster has investigated and is confident that there is no repeat.
It certainly should be a certification that the poster copied some keywords from the quote into the search box and pressed enter.
Use the search feature to make sure your quote has not already been posted.
Bad rule, as phrased.
If you are referring specifically to the literal meaning of 'sure' then fine. If you refer to the more casual meaning of "yeah, I checked this with search" then I disagree and would suggest that you implement the "it's not worth it for you" contingency.
Replies from: TimS↑ comment by TimS · 2012-03-16T17:24:34.753Z · LW(p) · GW(p)
I've always found the search engine quite clunky, and of questionable reliability. I think an actually explicit social norm will solve most of the problem. That said, I won't be put out if posting rationality quotes is not worth my effort.
Replies from: NancyLebovitz, Grognor↑ comment by NancyLebovitz · 2012-03-17T11:48:03.552Z · LW(p) · GW(p)
So far as I know, the rule is just that a quote shouldn't have appeared in a quotes thread, but if it's appeared elsewhere, it's ok to post it in a quotes thread.
A cached thought: We need a decent search engine, and the more posts and comments accumulate, the more we need it.
↑ comment by Grognor · 2012-03-17T00:59:12.639Z · LW(p) · GW(p)
I think an actually explicit social norm will solve most of the problem.
I don't. Posting rationality quotes is one of the few things new members can do effectively, and new members are the least liable to know of any social norms. That's why I said make the search feature explicit. Also, it's good at finding quotes, since exact words are used, if at all possible (which is why it's not called "Rationality Paraphrases").
Replies from: TimS↑ comment by TimS · 2012-03-17T01:04:03.563Z · LW(p) · GW(p)
I suspect most of our disagreement is about how bad it is for there to be repeats. At the level of badness I assign, making the norm explicit is enough to diminish the problem sufficiently. You think the downside is a bit worse, so you support a more intrusive, but more effective, solution.
comment by cousin_it · 2012-03-19T22:32:26.390Z · LW(p) · GW(p)
I want to post some new decision theory math in the next few days. The problem is that it's a bit much for one post, and I don't like writing sequences, and some people don't enjoy seeing even one mathy post, never mind several. What should I do? Compress it into one post, make it a sequence, keep it off LW, or something else?
Replies from: arundelo, GLaDOS, WrongBot↑ comment by arundelo · 2012-03-19T23:34:37.157Z · LW(p) · GW(p)
I for one often don't do more than skim mathy posts, but I think they're important and I'm glad people make them. (So my vote is for either one post or a sequence, and it sounds like you're leaning towards the former.)
Edit:
The reasons I often skim mathy posts (probably easy to guess, but included for completeness):
- The math is often above my level.
- They take more time and attention to read than non-mathy ones.
"What do you know about magma?" he asked.
She turned slightly, looked at him sidelong. "More than you, I would guess."
"You can do heat flow simulations. What about magma flow simulations?"
"The capability is out there," she said.
"Tensors?" Richard had no idea what a tensor was, but he had noticed that when math geeks started throwing the word around, it meant that they were headed in the general direction of actually getting something done.
"I suppose," she said nervously, and he knew that his question had been ridiculous.
-- Neal Stephenson, Reamde
↑ comment by GLaDOS · 2012-03-24T09:51:19.746Z · LW(p) · GW(p)
and some people don't enjoy seeing even one mathy post, never mind several
Those people need to learn to live with seeing math if they want to be on a site trying its best to refine human rationality.
Post it please.
Replies from: cousin_it
comment by lucent · 2012-03-18T17:29:31.682Z · LW(p) · GW(p)
Hi. Long time reader, first time poster (under a new name). I posted once before, then quit because I am not good at math and this website doesn't offer many examples of worked out problems of Bayes' theorem.
I have looked for a book or website that gives algebraic examples of basic Bayesian updates. While there are many books that cover Bayes, all require calculus, which I have not taken.
In a new article by Kaj_Sotala, fallacies are interpreted in the light of Bayes' theorem. I would like to participate in debates and discussions where I can identify common fallacies and try to calculate them using Bayesian methods, which may require not calculus but only simple algebra and the basic logic of probability.
However, if someone could simply create an article with a few worked examples of Bayesian updating, that would still be very helpful. I have read the explanations but I am just not very good at math. I have passed (A's and B's) in college trig, algebra, and precal, but I flunked out of calculus. Maybe in the future when I am more financially secure I could spend the time to really understand more complicated Bayesian updates.
Right now, I feel like there is a real need to simply have some basic worked out examples. Not long explanations, just problems with the math worked out. Preferably non calculus based problems.
Replies from: KPier↑ comment by KPier · 2012-03-19T04:04:48.837Z · LW(p) · GW(p)
My favorite explanation of Bayes' Theorem barely requires algebra. (If you don't need the extended explanation, just scroll to the bottom, where the problem is solved.)
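For readers who just want the bare arithmetic, here is a minimal worked sketch of the kind of example being asked for, in Python. The 1% / 80% / 9.6% figures are the illustrative numbers from the classic mammography problem, not data from any particular study:
```python
# One Bayesian update: P(cancer | positive mammogram), worked with plain arithmetic.
prior = 0.01            # P(cancer): 1% base rate
true_positive = 0.80    # P(positive | cancer)
false_positive = 0.096  # P(positive | no cancer)

# Total probability of a positive result, from either source.
p_positive = prior * true_positive + (1 - prior) * false_positive

# Bayes' theorem: posterior = prior * likelihood / evidence.
posterior = prior * true_positive / p_positive

print(round(posterior, 3))  # 0.078 -- about a 7.8% chance of cancer given a positive test
```
Every update of this kind has the same shape: multiply the prior by the likelihood, and divide by the total probability of the observed evidence.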
Replies from: lucent
comment by XiXiDu · 2012-03-16T10:42:02.768Z · LW(p) · GW(p)
I decided to finally start reading The Hanson-Yudkowsky AI-Foom Debate. I am not sure how much time I will have, but I will post my thoughts along the way as replies to this comment. This is also an opportunity for massive downvotes :-)
Replies from: XiXiDu, XiXiDu, XiXiDu, XiXiDu↑ comment by XiXiDu · 2012-03-16T11:37:07.531Z · LW(p) · GW(p)
In The Weak Inside View Eliezer Yudkowsky writes that it never occurred to him that his views about optimization ought to produce quantitative predictions.
Eliezer further argues that we can't use historical evidence to evaluate completely new ideas.
Not sure what he means by "loose qualitative conclusions".
He says that he can't predict how long it will take an AI to solve various problems.
One thing which makes me worry that something is "surface", is when it involves generalizing a level N feature across a shift in level N-1 causes.
Argh...I am getting the impression that it was a really bad idea to start reading this at this point. I have no clue what he is talking about.
Now, if the Law of Accelerating Change were an exogenous, ontologically fundamental, precise physical law, then you wouldn't expect it to change with the advent of superintelligence.
I don't know what the law of 'Accelerating Change' is and what exogenous means and what ontologically fundamental means and why not even such laws can break down beyond a certain point.
Oh well...I'll give up and come back to this when I have time to look up every term and concept and decrypt what he means.
Replies from: Grognor↑ comment by Grognor · 2012-03-16T11:52:28.770Z · LW(p) · GW(p)
Not sure what he means by "loose qualitative conclusions".
Some context:
In this case, the best we can do is use the Weak Inside View - visualizing the causal process - to produce loose qualitative conclusions about only those issues where there seems to be lopsided support.
He means that, because the inside view is weak, it cannot predict exactly how powerful an AI would foom, exactly how long it would take for an AI to foom, exactly what it might first do after the foom, exactly how long it will take for the knowledge necessary to make a foom, and suchlike. Note how three of those things I listed are quantitative. So instead of strong, quantitative predictions like those, he sticks to weak general qualitative ones: "AI go foom."
One thing which makes me worry that something is "surface", is when it involves generalizing a level N feature across a shift in level N-1 causes.
Argh...I am getting the impression that it was a really bad idea to start reading this at this point. I have no clue what he is talking about.
He means, in this example anyway, that the reasoning "historical trends usually continue" applied to Moore's Law doesn't work when Moore's Law itself creates something that affects Moore's Law. In order to figure out what happens, you have to go deeper than "historical trends usually continue".
I don't know what the law of 'Accelerating Change' is and what exogenous means and what ontologically fundamental means and why not even such laws can break down beyond a certain point.
I didn't know what exogenous means when I read this either, but I didn't need to in order to understand. (I deigned to look it up. It means generated by the environment, not generated by organisms. Not a difficult concept.) Ontologically fundamental is a term we use on LW all the time; it means at the base level of reality, like quarks and electrons. The Law of Accelerating Change is one of Kurzweil's inventions; it's his claim that technological change accelerates itself.
Oh well
Indeed, if you're not even going to try to understand, this is the correct response, I suppose.
Incidentally, I disapprove of your using the open thread as your venue for this rather than commenting on the original posts asking for explanations. And giving up on understanding rather than asking for explanations.
Replies from: khafra, XiXiDu, XiXiDu, XiXiDu↑ comment by khafra · 2012-03-16T15:02:10.503Z · LW(p) · GW(p)
Incidentally, I disapprove of your using the open thread as your venue for this rather than commenting on the original posts asking for explanations. And giving up on understanding rather than asking for explanations.
He's not really giving up, he's using a Roko algorithm again.
↑ comment by XiXiDu · 2012-03-16T15:43:54.739Z · LW(p) · GW(p)
In retrospect I wish I would have never come across Less Wrong :-(
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-16T17:07:40.121Z · LW(p) · GW(p)
This is neither a threat nor a promise, just a question: do you estimate that your life would be improved if you could somehow be prevented from ever viewing this site again? Similarly, do you estimate that your life would be improved if you could somehow be prevented from ever posting to this site again?
↑ comment by XiXiDu · 2012-03-16T15:34:36.539Z · LW(p) · GW(p)
I didn't know what exogenous means when I read this either, but I didn't need to to understand. (I deigned to look it up.
My intuitive judgement of the expected utility of reading what Eliezer Yudkowsky writes is low enough that I can't get myself to invest a lot of time on it. How could I change my mind about that? It feels like reading a book on string theory: there are no flaws in the math, but you also won't learn anything new about reality.
ETA That isn't the case for all people. I have read most of Yvain's posts for example because I felt that it is worth it to read them right away. ETA2 Before someone is going to nitpick, I haven't read posts like 'Rational Home Buying' because I didn't think it would be worth it. ETA3 Wow I just realized that I really hate Less Wrong, you can't say something like 99.99% and mean "most" by it.
Incidentally, I disapprove of your using the open thread as your venue for this rather than commenting on the original posts asking for explanations.
I thought it might help people to see exactly how I think about everything as I read it and where I get stuck.
Indeed, if you're not even going to try to understand, this is the correct response, I suppose.
I do try, but I got the impression that it is wrong to invest a lot of time on it at this point when I haven't even learnt basic math yet.
Now you might argue that I invested a lot of time into commenting here, but that was rather due to a weakness of will and psychological distress than anything else. Deliberately reading the Sequences is very different here, because it takes an effort that is high enough to make me think about the usefulness of doing so and decide against it.
When I comment here it is often because I feel forced to do it. Often because people say I am wrong etc. so that I feel forced to reply.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-03-16T15:40:30.091Z · LW(p) · GW(p)
I don't know if it's something you want to take public, but it might make sense to do a conscious analysis of what you're expecting the sequences to be.
If you do post the analysis, maybe you can find out something about whether the sequences are like your mental image of them, and even if you don't post, you might find out something about whether your snap judgement makes sense.
↑ comment by XiXiDu · 2012-03-16T11:01:47.641Z · LW(p) · GW(p)
In Engelbart As UberTool? Robin Hanson talks about a dude who actually tried to apply recursive self-improvement to his company. He is still trying (wow!).
It seems humans, even groups of humans, are not capable of fast recursive self-improvement. That they didn't take over the world might be partly due to strong competition from other companies that are constantly trying the same.
What is it that is missing that doesn't allow one of them to prevail?
Robin Hanson further asks what would have been a reasonable probability estimate to assign to the possibility of a company taking over the world at that time.
I have no idea how I could possibly assign a number to that. I would just have said that it is unlikely enough to be ignored. Or that there is not enough data to make a reasonable guess either way. I don't have the resources to take every idea seriously and assign a probability estimate to it. Some things just get discounted by my intuitive judgment.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-03-16T16:48:59.000Z · LW(p) · GW(p)
It seems humans, even groups of humans, are not capable of fast recursive self-improvement. What is it that is missing that doesn't allow one of them to prevail?
I would guess that the reason is that people don't work with exact numbers, only with approximations. If you make a very long equation, the noise kills the signal. In mathematics, if you know "A = B" and "B = C" and "C = D", you can conclude that "A = D". In real life your knowledge is more like "so far it seems to me that under usual conditions A is very similar to B". A hypothetical perfect Bayesian could perhaps assign some probability and work with it, but even our estimates of probabilities are noisy. Also, the world is complex; things do not add to each other linearly.
I suspect that when one tries to generalize, one gets a lot of general rules with maybe 90% probabilities. Try to chain a dozen of them together, and the result is pathetic. It is like saying "give me a fixed point and a lever and I will move the world" only to realize that your lever is too floppy and you can't move anything that is too far and heavy.
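A toy calculation makes the decay concrete (Python; the 90% figure and chain length are purely illustrative):
```python
# Toy calculation: a chain of a dozen inferences, each of which holds with 90% probability.
p_single = 0.9
chain_length = 12

p_whole_chain = p_single ** chain_length
print(round(p_whole_chain, 2))  # 0.28 -- the full chain holds barely more than a quarter of the time
```
So even modestly long chains of "usually true" rules give conclusions that are more likely wrong than right.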
↑ comment by XiXiDu · 2012-03-16T10:51:28.495Z · LW(p) · GW(p)
In Fund UberTool?, Robin Hanson talks about a hypothetical company that applies most of its resources to its own improvement until it would burst out and take over the world. He further asks what evidence it would take to convince you to invest in them.
This post goes straight to the heart of Pascal's mugging, vast utilities that outweigh tiny probabilities. I could earn a lot by investing in such a company if it all works as promised. But should I do that? I have no idea.
What evidence would make me invest money in such a company? I am very risk averse. Given my inability to review mathematical proofs and advanced technical proofs of concept, I'd probably be hesitant and fear that they were bullshitting me.
In the end I would probably not invest in them.
Replies from: faul_sname↑ comment by faul_sname · 2012-03-16T20:43:58.167Z · LW(p) · GW(p)
By "a hypothetical company that applies most of its resources to its own improvement" do you mean a tech company? Because that's exactly what tech companies do, and they seem to be pretty powerful, if not "take over the world" powerful. And I do invest in those companies.
↑ comment by XiXiDu · 2012-03-16T11:16:57.529Z · LW(p) · GW(p)
In Friendly Teams Robin Hanson talks about the guy who tried to get his company to undergo recursive self-improvement and how he was a really smart fellow who saw a lot of things coming.
Robin Hanson further argues that key insights are not enough but that it takes many small insights that are the result of a whole society of agents.
Robin further asks what it is that makes the singleton AI scenario more reasonable if it does not work out for groups of humans, not even remotely. Well, I can see that people would now say that an AI can directly improve its own improvement algorithm. I suppose the actual question that Robin asks is how the AI will reach that point in the first place. How is it going to acquire the capabilities that are necessary to improve its capabilities indefinitely?
comment by daenerys · 2012-03-25T06:49:56.516Z · LW(p) · GW(p)
f.lux and sleep aid follow-up: About a month or two ago, I posted on the open thread about some things I was experimenting with to get to bed regularly at a decent hour. Here are the results:
f.lux: I installed f.lux on my computer. This is a program that, through the course of the day, changes your display from blue light to red light, on the theory that the blue light from your computer keeps you awake. When I first installed it, the red tint to my screen was VERY noticeable. Anecdotally, I ended up feeling EXTREMELY tired right after installing it, and fell asleep within about half an hour, despite the fact that it was only 8:30p.
Now, the red tint to my screen at night is barely noticeable, which is good, but it doesn't make me feel super-tired like it did the first night I installed it. I didn't keep any quantitative measurements of my sleep patterns, so I can't tell if f.lux has helped or not, but I think its possible effects may be negated by the extremely bright lights I have in my room. I definitely feel more tired when I turn those off, which I don't normally do. (But I should...)
Verdict: I would recommend installing f.lux if you are having trouble getting yourself to bed on time, especially if you do a lot of computer usage at night. The results are iffy, but the cost is minimal (maybe 2 minutes of your time to download, and then it's on your computer and you never have to worry about it again.)
Sleep Aid: I also tried taking a sleep aid (Diphenhydramine HCl- it's non-addictive and OTC) about half an hour before I thought I should start going to bed. This worked great! It is much easier to think "I should go to bed soon-ish. I'll take a sleep aid now," than to think "I should stop everything I'm doing and get ready for bed NOW." Once you've taken the pill, you're pretty much committed to going to bed soon, plus it makes you nice and tired.
Unfortunately, I also ended up starting to have really bad headaches within a couple of days, so I stopped using them. When I used the pill at night, I would have a headache the next day. If I didn't use the pill, I wouldn't have a headache. I intend to try this again at some point with melatonin. I might also try again with the same pills, just to make sure that the first trial didn't just happen to coincide with me being sick.
Verdict: It's definitely effective, so give it a try, but recognize that it might cause headaches.
comment by Bobertron · 2012-03-20T01:50:16.857Z · LW(p) · GW(p)
I have been wondering for a while: what's the source for the A Human's Guide to Words sequence? I mean EY had to come up with that somehow and unlike with the probability and cognitive science stuff, I have no idea what kind of books inspired A Human's Guide to Words. What are the keywords here?
Replies from: Grognor, beoShaffer↑ comment by Grognor · 2012-03-24T22:58:14.447Z · LW(p) · GW(p)
Eliezer had read Language in Thought and Action prior to writing this sequence, and he might have gotten some of it from Steven Pinker or the MIT Encyclopedia of the Cognitive Sciences as well.
↑ comment by beoShaffer · 2012-03-22T18:19:42.679Z · LW(p) · GW(p)
If I understand correctly, it was partially inspired by general semantics.
comment by OpenThreadGuy · 2012-03-16T05:23:48.123Z · LW(p) · GW(p)
Activity on these seems to be dying down, so my own reply to this comment is a poll.
Replies from: OpenThreadGuy↑ comment by OpenThreadGuy · 2012-03-16T05:24:37.240Z · LW(p) · GW(p)
Upvote this comment if you prefer the status quo of two open threads per month. Downvote it if you prefer to go back to one open thread per month.
Replies from: OpenThreadGuy↑ comment by OpenThreadGuy · 2012-03-16T05:25:01.591Z · LW(p) · GW(p)
Karma balance: do the opposite of whatever you did for the parent comment. (Not that it matters much, since this is a sock-puppet account!)
comment by GLaDOS · 2012-03-24T09:49:48.332Z · LW(p) · GW(p)
The Unreasonable Effectiveness of Data talk by Peter Norvig.
comment by oliverbeatson · 2012-03-23T22:17:25.702Z · LW(p) · GW(p)
I'm often walking to somewhere and I notice that I have a good amount of thinking time, but that I find my head empty. Has anyone any good ideas on useful things to occupy my mind during such time? Visualisation exercises, mental arithmetic, thinking about philosophy?
It depresses me a little how much easier it is to make use of nothing but a pen and paper than it is to make use of one's time when even that is removed and one has only one's own mind.
Replies from: Crux↑ comment by Crux · 2012-03-24T00:34:11.615Z · LW(p) · GW(p)
How often do you think in words, and how often in visuals, sounds, and so on? Do you normally think by picturing things, or engaging in an internal monologue, or what? Or is the distribution sort of even?
Replies from: oliverbeatson↑ comment by oliverbeatson · 2012-03-24T02:09:30.502Z · LW(p) · GW(p)
I'd say something like internal monologue, for thinking anyway (this may be internally sounded, I know that I think word-thoughts in my own voice, but I regularly think much faster than I could possibly speak, until I realise that fact, when the voice becomes slow and I start repeating myself, and then get annoyed at my brain for being so distracting).
For calculating or anything vaguely mathematical I use abstractly spatial/visual sorts of thoughts -- abstract meaning I don't have sufficient awareness of the architecture of my brain to tell you accurately what I even mean. Generally I'm not very visual, but I would say I use a spatial sort of visual awareness quite often in thought. If this makes sense.
Does this imply something about the sorts of tasks I could do that were most useful? I'm intrigued by the reasons you have for requesting the data you did. :)
Replies from: Crux↑ comment by Crux · 2012-03-24T17:12:46.983Z · LW(p) · GW(p)
I requested that data because for some reason, in my own experience, I've noticed the tendency you mentioned in your previous post as being strongest when I'm trying to avoid the internal monologue way of thinking.
If I try to avoid using words in my thought process, I often find myself walking around empty-headed for some reason. It's as if it's a lot harder to start a non-verbal thought, or something. I don't know.
When walking around with a lot of thinking time on my hands, I've found a lot of success keeping myself occupied by simply saying words to myself and then seeing where it takes me. For example, I may vocalize in my head "epistemology", or "dark arts", or something like that, and then see where it takes me (making sure to start verbalizing my thought process if I stall at any point).
Maybe I'm on a different topic though. Are you simply asking what you should spend your time thinking about, and I'm going into the topic of how to start a thought process (whatever it is)? This seems like an unlikely interpretation though, because you said the problem is not having a pen and paper, which suggests to me that you know what to think about, but end up not doing anything if you can't write or draw.
Sorry if this is pretty messy. I wanted to respond to this, but didn't have much time.
Replies from: oliverbeatson↑ comment by oliverbeatson · 2012-03-25T02:53:31.797Z · LW(p) · GW(p)
I see, that's interesting. That feels recognisable: I think when I hear my own voice/internal monologue, it brings to memory things I've already said or talked about, so I dwell on those things rather than think of fresh topics. So I think of the monologue itself as being the source of the stagnant thinking, and shut it down hoping insight will come to me wordlessly. Having said all that about having an internal monologue, I now think I do have a fair number of non-verbal thoughts, but these still use some form of mental labelling to organise concepts as I think about them.
That sounds an interesting experiment to do, next time I need to travel bipedally I'll get on to checking out those default conceptual autocompletes* that I get from different words. Thanks!
*Hoping I haven't been presumptuous in my use of technical metaphors -- in the course of writing this I've had to consciously rein in my desire to use programming metaphors for how my brain seems to work.
I suppose among the questions I was interested in, was indeed what I should spend my time thinking about. I had the idea that there must be high-computational-requiring and low-requisite-knowledge-requiring mental tasks, akin to how one learning electronics might spend time extrapolating the design of a one-bit adder with a pen and paper and requisite knowledge of logic gates. But crucially, without a pen and paper. So in what area can I use my pre-existing knowledge to productively generate new ideas or thoughts without a pen and paper. Possibly advancing in some sense my 'knowledge' of those areas at the same time.
Sidenote: I like reading detailed descriptions of people's thought-processes like this, because of the interleaved data on what they pay attention to when thinking; and especially when there isn't necessarily a point to it in the sequences-/narrative-/this post has a lesson related to this anecdote-style, and when it's just describing the mechanics of their thought stream for the sake of understanding another brain. For some reason it feels like a rich source of data for me, and I would like to see more of it. Particularly because it feels to give insight on a slightly lower level than cognitive biases themselves. I sometimes think I use my micro-thought processes to evade or disrupt the act of changing my mind simply because they have the advantage of being on a lower level. A level that interacts with feelings, of which I seem to have many. Alternately, my desire for detailed descriptions of people's thought-processes might be down to my personality and not be something generally useful.
comment by timtyler · 2012-03-30T20:03:18.814Z · LW(p) · GW(p)
Did you folk see this one?
The Problem with ‘Friendly’ Artificial Intelligence - Adam Keiper and Ari N. Schulman.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2012-04-01T08:23:04.886Z · LW(p) · GW(p)
Though Friendly AI researchers seem only dimly aware of this, they are actually not the first to argue over which system of ethics is best — and those prior efforts have hardly met with consensus.
[...]
Simply picking certain outcomes — like pain, death, bodily alteration, and violation of personal environment — and asserting them as absolute moral wrongs does nothing to resolve the difficulty of ethical dilemmas in which they are pitted against each other (as, fully understood, they usually are). Friendly AI theorists seem to believe that they have found a way to bypass all of the difficult questions of philosophy and ethics, but in fact they have just closed their eyes to them.
Wow, something has gone horribly wrong if this is outsiders' perception of FAI researchers.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-04-01T11:18:12.173Z · LW(p) · GW(p)
The article Tim linked is a reply to another article that only quotes some of CFAI, so it's possible that the author was only exposed to the quotations from CFAI in that article.
comment by glunkthunker · 2012-03-24T13:08:43.308Z · LW(p) · GW(p)
Universal power switch symbols are counter-intuitive. A straight line ends. It doesn't go anywhere. It should mean "stop." A circle is continuous and should mean "on". A line penetrating a circle has certain connotations that means keep it going (or coming) but definitely not "standby". How can we change this?
comment by gwern · 2012-03-22T03:55:27.651Z · LW(p) · GW(p)
Polyamory: if anyone is interested in my notes ( http://dl.dropbox.com/u/5317066/2012-gwern-polyamory.txt ), I've updated them with a big extract from Anapol 2010 - apparently she noticed a striking frequency of Asperger's in polyamory circles. Of course LW has never been accused of hosting very many of those...
Replies from: David_Gerard↑ comment by David_Gerard · 2012-03-30T17:59:42.491Z · LW(p) · GW(p)
The Poly-English Dictionary may need updating.
Replies from: wedrifid↑ comment by wedrifid · 2012-03-30T18:59:14.683Z · LW(p) · GW(p)
Poly Phrase: "So, which conventions do you like to attend, what kind of books do you like to read, what are your spiritual beliefs, and what is your ideal occupation?" English Translation: "Which science fiction conventions do you like to attend, who is your favorite fantasy author, what form of neo-paganism do you ascribe do, and where in the computer industry would you like to work?"
I think I just got converted. I'm willing to sleep with lots of people so long as it means I get to hang out with lots of nerds and discuss fantasy books. Hang on... how many females are there in this community? 3?
comment by sixes_and_sevens · 2012-03-21T15:14:37.518Z · LW(p) · GW(p)
Are there any good examples of what would be considered innate human abilities (cognitive or otherwise) that are absent or repressed in an entire culture?
For example, are there examples of culture-wide face-blindness/prosopagnosia? Are there examples of cultures that can't apply the Gaze heuristic, or can't subitize?
This is for reasoning about criticisms of universal grammar, in particular the lack of recursion in the Pirahã language, so that one is kind of begging the question. The closest I can come up with at the moment (which really isn't very close at all) is the high incidence of perfect pitch amongst native speakers of tonal languages.
comment by bentarm · 2012-03-21T00:17:23.038Z · LW(p) · GW(p)
A vague discussion of AI risks has just broken out at http://marginalrevolution.com/marginalrevolution/2012/03/amazing-bezos.html#comments Marginal Revolution gets a lot of readers who are roughly in the target demographic for LW - anyone fancy having a go at making a sensible comment in that thread that points people in the right direction?
comment by David_Gerard · 2012-03-30T17:58:00.268Z · LW(p) · GW(p)
Richard Carrier's book looks like it's going to spread the word of Bayes. To the theists, too. And there's a media-friendly academic fight in progress. Just the thing!
comment by Will_Newsome · 2012-03-21T10:00:42.386Z · LW(p) · GW(p)
Any recommendations for books/essays on contemporary hermeneutics whose authors are aware of Schellingian game theory and signalling games? Google Scholar has a few suggestions but not many and they're hard to access.
comment by Viliam_Bur · 2012-03-20T12:15:44.036Z · LW(p) · GW(p)
Would it be useful to make a compressed version of the Sequences, at the ratio of one Sequence into one article, which is approximately one article into one paragraph? It would provide initial information for people who would like to read the Sequences, but do not have enough time. Each paragraph would be followed by a "read more" hyperlink to the original article.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-03-20T12:22:13.512Z · LW(p) · GW(p)
There are summary posts like this, but if you're thinking about a more coherent presentation, "one article into one paragraph" probably won't work.
comment by sixes_and_sevens · 2012-03-19T13:38:20.432Z · LW(p) · GW(p)
A proposal: make public an anonymised dataset of all Karma activity over an undisclosed approximate three-month period from some point in the past 18 months.
What I would like is a list of anonymised users, a list of posts and comments in the given three-month period (stripped of content and ancestry, but keeping a record of authorship), and all incidents of upvotes and downvotes between them that took place in the given period. This is for purposes of observing trends in Karma behaviour, and also sating my curiosity about how some sort of graph-theoretic-informed equivalent of Karma (kind of like Google PageRank) might work. I would also be curious to see what other data-types might make of it.
What good reasons are there for not making this data available?
Someone has to go to the trouble of pulling it from the database: I would personally be prepared to pay up to $13.50 for your time and effort. I would also be surprised if someone hasn't at least sneaked a peek at this data already, because it's kind of interesting.
Violation of LW user privacy: The biggie, really. It's possible that a tenacious individual could use this data to deduce the voting habits of specific users. I've been thinking about how I might go about doing this if given the data in question, which informed the "approximate three months at some point in the past eighteen months" time frame. Without timestamps or details of comment ancestry, and without knowing the exact length of the snapshot period, I suspect anyone trying to extract this information would struggle enormously.
I am fascinated by how people would try to accomplish this, though, so please tell me how you'd go about it. My personal method would be to scrape the site to build up a record of post and comment authorship over time. Any given period would then have a "fingerprint" of authors to number of posts that you could compare against the dataset. This becomes harder, but not impossible, with a time period of unspecified length. This could be mitigated by the data being deliberately sabotaged prior to publication, in such a way that confounds this method while still keeping the broader trends available for analysis.
Any other concerns people would have with this? Alternatively, any awesome things they'd like to do with the data?
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2012-03-20T12:10:28.756Z · LW(p) · GW(p)
Is the LW database structure available? If yes, you could prepare some SELECT queries and ask admins to run them for you and send you the result.
Anonymization: Replace user ids with "f(id+c)" where "f" is a hash function and "c" is a constant that will be modified by the admin before running your script. Replace times of karma clicks with "ym(time+r)" where "r" is a random value between 0 and 30 days, and "ym" is a function that returns only month and year. Select only data from the recent year and only from users who were active during the whole year (made at least one vote in the first and last months of the time period). Would such data still be useful to you?
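A rough sketch of what that anonymization could look like, in Python. The function and field names here are purely illustrative and assume nothing about LW's actual schema; whether the random shift "r" is drawn per vote or once globally is also an assumption in this sketch:
```python
import hashlib
import random
from datetime import datetime, timedelta

SECRET_CONSTANT = "changed-by-admin-before-running"  # the constant "c"

def anonymize_id(user_id: int) -> str:
    """f(id + c): an opaque hash that stays consistent across rows but hides the real id."""
    return hashlib.sha256(f"{user_id}:{SECRET_CONSTANT}".encode()).hexdigest()[:12]

def fuzz_timestamp(vote_time: datetime) -> str:
    """ym(time + r): shift by a random 0-30 days, then keep only year and month."""
    shifted = vote_time + timedelta(days=random.uniform(0, 30))
    return shifted.strftime("%Y-%m")

# Example: the same user always maps to the same opaque id,
# while the exact vote time is reduced to a fuzzed year-month value.
print(anonymize_id(12345), fuzz_timestamp(datetime(2012, 3, 16, 21, 17)))
```
The point of the salted hash is that identities stay linkable within the dataset (so voting patterns can be analyzed) without being reversible to real user ids.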
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2012-03-20T13:45:24.621Z · LW(p) · GW(p)
My day job is DB admin and development. In the unlikely event of LW back-end admin-types being comfortable running a query sent in by some dude off the site, I wouldn't be comfortable giving it to them. The effort of due diligence on a foreign script is probably greater than that required to put it together.
The data I want correspond to:
- the IDs (i.e. primary key, not the username) of all the users
- the IDs (PK) and authorship (user ID) of all posts and comments in a contiguous ~3 month period
- the adjacency of users and posts as upvotes and downvotes over this period (I assume this is a single junction table)
If I were providing this data, I would also scramble the IDs in some fashion while maintaining the underlying relationships, as consecutive IDs could provide some small clue as to the identity and chronology of users or posts. While this is pretty straightforward, the mechanism for such scrambling should not be known to recipients of the data.
comment by billswift · 2012-03-16T18:59:39.681Z · LW(p) · GW(p)
Is there a term in many-party game theory for a no-win, no-lose scenario; that is, one where by sacrificing a chance of winning you can prevent losing (neutrality or draw)?
Replies from: TimS↑ comment by TimS · 2012-03-16T23:47:04.241Z · LW(p) · GW(p)
I don't know any game theory terms, but in law, there's the high-low agreement, where the plaintiff agrees that the maximum exposure is X, and the defendant agrees that the minimum exposure is Y (a lower number). It aims to reduce the volatility of trial.
comment by curiousepic · 2012-03-16T14:34:36.999Z · LW(p) · GW(p)
Jane McGonigal's new project SuperBetter may be useful to you as an incentive framework for self-improvement.
Replies from: curiousepic↑ comment by curiousepic · 2012-03-16T14:39:13.606Z · LW(p) · GW(p)
I've been using the Epic Win iPhone app as an organizer, task reminder and somewhat effective akrasia-defeater for about a year now, and think it has helped me quite a bit. SuperBetter is similar, but has more aspects, and is not portable (for now). I anticipate that I will prefer Epic Win's simplicity and accessibility to SuperBetter.
comment by [deleted] · 2012-05-19T20:25:36.404Z · LW(p) · GW(p)
The Essence Of Science Explained In 63 Seconds
A one minute piece of Feynman lecture candy wrapped in reasonable commentary. Excellent and most importantly brief intro level thinking about our physical world. Apologies if it has been linked to before, especially since I can't say I would be surprised if it was.
Here it is, in a nutshell: The logic of science boiled down to one, essential idea. It comes from Richard Feynman, one of the great scientists of the 20th century, who wrote it on the blackboard during a class at Cornell in 1964. YouTube
Think about what he's saying. Science is our way of describing — as best we can — how the world works. The world, it is presumed, works perfectly well without us. Our thinking about it makes no important difference. It is out there, being the world. We are locked in, busy in our minds. And when our minds make a guess about what's happening out there, if we put our guess to the test, and we don't get the results we expect, as Feynman says, there can be only one conclusion: we're wrong.
The world knows. Our minds guess. In any contest between the two, The World Out There wins. It doesn't matter, Feynman tells the class, "how smart you are, who made the guess, or what his name is, if it disagrees with the experiment, it is wrong."
This view is based on an almost sacred belief that the ways of the world are unshakeable, ordered by laws that have no moods, no variance, that what's "Out There" has no mind. And that we, creatures of imagination, colored by our ability to tell stories, to predict, to empathize, to remember — that we are a separate domain, creatures different from the order around us. We live, full of mind, in a mindless place. The world, says the great poet Wislawa Szymborska, is "inhuman." It doesn't work on hope, or beauty or dreams. It just...is.
comment by sixes_and_sevens · 2012-03-19T14:24:43.549Z · LW(p) · GW(p)
Something I would quite like to see after looking at this post: a poll of LW users' stances on polarised political issues.
There are a whole host of issues which we don't discuss for fear of mindkilling. While I would expect opinion to be split on a lot of politically sensitive subjects, I would be fascinated to see if the LW readership came down unilaterally on some unexpected issue. I'd also be interested to see if there are any heavily polarised political issues that I currently don't recognise as such.
Why would this be a bad idea?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-19T15:06:34.863Z · LW(p) · GW(p)
I would be astonished if one result of such a poll was not quite a lot of discussion of the polarized political issues that we don't discuss for fear of mindkilling. Whether that's a bad thing or not depends on your beliefs about such discussion, of course.
Also, if what you're interested in is (a) issues where we all agree, and (b) issues you don't think of as polarized political issues in the first place, it seems a poll is neither necessary nor sufficient for your goals. For any stance S, you can find out whether S is in class (a) by writing up S and asking if anyone disagrees. And no such poll will turn up results about any issue the poll creator(s) didn't consider controversial enough to include in the poll.
That said, I'd be vaguely interested (not enough to actually do any work to find out) in how well LW users can predict how popular various positions are on LW, and how well/poorly accuracy in predicting the popularity of a position correlates with holding that position among LW users.
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2012-03-19T15:48:50.385Z · LW(p) · GW(p)
How I imagined it going:
0) Prohibit actual discussion of the subjects in question, with the understanding that comments transgressing this rule would be downvoted to oblivion by a conscientious readership (as they generally are already)
1) Request suggestions for dichotomies that people believe would split popular opinion. Let people upvote and downvote these on the basis of whether they'd be fit for the purpose of the poll.
2) Take the most popular dichotomies and put them in a poll, with a "don't care" and "wrong dichotomy" option, which I hope are fairly self-explanatory.
2) a) To satisfy your curiosity on how well LW users can predict the beliefs of other LW users, also have a "what do you think most LW users would pick as an answer to this question?" option.
3) Have people vote, and see what patterns emerge.
comment by beoShaffer · 2012-03-18T21:56:06.166Z · LW(p) · GW(p)
Does anyone know much about general semantics? Given the very strong outside-view similarities between it and Less Wrong, not to mention the extent to which it directly influenced the Sequences, it seems like its history could provide some useful lessons. Unfortunately, I don't really know that much about it.
Replies from: erratio, NancyLebovitz↑ comment by erratio · 2012-03-18T23:08:43.610Z · LW(p) · GW(p)
EDIT: disregard this comment, I mistook general semantics for, well, semantics.
I'm no expert on semantics but I did take a couple of undergrad courses on philosophy of language and so forth. My impression was that EY has already taken all the good bits, unless you particularly feel like reading arguments about whether a proposition involving "the current king of France" can have a truth value or not. (actually, EY already covered that one when he did rubes and bleggs).
In a nutshell, the early philosophers of language were extremely concerned about where language gets its meaning from. So they spent a lot of time talking about what we're doing when we refer to people or things, eg. "the current king of France" and "Sherlock Holmes" both lack real-world referents. And then there's the case where I think your name is John and refer to you as such, but your name is really Peter, so have I really succeeded in referring to you? And at some point Tarski came up with "snow is white" is a true proposition if and only if snow is white. And that led into the beginning of modern day formal/compositional semantics, where you have a set of things that are snow, and a set of things that are white, and snow is white if and only if the set of things that are snow overlaps completely with the set of things that are white.
Replies from: beoShaffer↑ comment by beoShaffer · 2012-03-19T00:38:14.447Z · LW(p) · GW(p)
I see. Do you know much about the history of it as a movement? While I do have some interest in the actual content of the area, I was mostly looking at it as a potential member of the same reference class as LW. Specifically, I was wondering if its history might contain lessons that are generally useful to any organization that is trying to improve people's thinking abilities. Particularly those that have formed a general philosophy based off of insights gained from cross-disciplinary study.
Replies from: erratio↑ comment by erratio · 2012-03-19T01:02:22.935Z · LW(p) · GW(p)
My apologies, I went off in completely the wrong direction there. I don't know too much of it as a movement, other than that all the accounts of it I've seen make it sound distinctly cultish, and that the movement was carried almost entirely by Korzybski and later by one of his students.
↑ comment by NancyLebovitz · 2012-03-19T07:58:42.712Z · LW(p) · GW(p)
I was and am very influenced by Stuart Chase's The Tyranny of Words-- what I took away from it is to be aware that you never have the complete story, and that statements frequently need to be pinned down as to time, place, and degree of generality.
Cognitive psychology has a lot of overlap with general semantics-- I don't know whether there was actual influence or independent invention of ideas.
comment by TimS · 2012-03-16T23:57:47.101Z · LW(p) · GW(p)
I just thought of a way to test one of my intuitions about meta-ethics, and I'd appreciate others thoughts.
I believe that human morality is almost entirely socially constructed (basically an anti-realist position). In other words, I think that the parts of the brain that implement moral decision-making are incredibly plastic (at least at some point in life).
Independently, I believe that behaviorism (i.e. the modern psychological discipline descended from classical conditioning and operant conditioning) is just decision theory plus an initially plastic punishment/reward system.
In short, if behaviorism makes false predictions of human behavior - the same error in different eras and cultures - then that seems like evidence that my plasticity based meta-ethics theory is wrong.
Does anyone see any holes in that logic? Is anyone aware of examples in which behaviorism has failed to accurately predict in the ways I have described?
comment by beoShaffer · 2012-03-22T18:24:22.623Z · LW(p) · GW(p)
I think I have seen offers to help edit LW posts, but can't remember where. Does anyone know what I may be thinking of?
comment by Nectanebo · 2012-03-17T17:37:43.327Z · LW(p) · GW(p)
People have irrational beliefs. When people come to lesswrong and talk about them, many say "oops" and change their mind. However, often they keep their decidedly irrational beliefs despite conversation with other Lesswrongers who often point out where they went wrong, and how they went wrong, and perhaps a link to the Sequence post where the specific mistake is discussed in more detail.
Some examples:
http://lesswrong.com/user/Jake_Witmer/
This guy was told he was being Mindkilled. Many people explained to him what was wrong with his thinking, and why it was wrong, and how it was wrong, and what he could do, and all manner of helpful advice and discussion. He rejected it, left the site and hasn't been seen since.
Another: http://lesswrong.com/user/911truther/
Not much to say. Eliezer the Wise and Always Correct himself declared him a troll.
Another: http://lesswrong.com/user/sam0345/
Generally pretty irrational dude. Asked to leave lesswrong by the powerful and great Eliezer because his comments were so bad.
Another, different example:
http://lesswrong.com/lw/1lv/the_wannabe_rational/
MrHen had great insights into rationality and seemed to be a well upvoted member of lesswrong. He also believed in God. He was around in 2010 and again in 2011 for a bit, and hasn't posted in a while now.
Perhaps a more controversial example:
http://lesswrong.com/user/Mitchell_Porter/?count=50&after=t1_5tl5
Around Feb 3rd Mitchell Porter brought a debate about colour, the mind, dualism, and similar thoughts. I'm actually not sure if this was resolved, but there seemed to be some small consensus (kinda) that he was taking the Crackpot Offer. This was suggested to him.
Mitchell Porter is still around, and is an active user who seems to have lots of useful insights into many things. He is very well upvoted.
To any of these people, I am sorry for mentioning you guys like this if you are offended or anything like that.
So why am I bringing this up?
Well, people fail at being rational all the time. However, there are countless examples like these, from people who turned up, got insanely downvoted, then left, and regular users who otherwise get lots of karma and are very rational.
The main thing I wanted to do was just POINT IT OUT and see if anyone wants to comment on the fact that this happens, in LessWrong, surely the place where they are MOST likely to see why and how they are wrong.
What does this mean that so many people do not? What does it mean that such failures happen so often that I could choose random examples off the top of my head? I mean, some of the things it means are obvious, but this pains me, and I need it to be discussed somewhere, because I find it important and I think more people should be aware that this happens and should make more concerned, perhaps vapid, comments about it.
Also, I'm thinking of upgrading this to a Discussion post. Tell me if that's a bad idea.
If you have read this, please tell me what you think.
Replies from: ArisKatsaris, Grognor, GLaDOS↑ comment by ArisKatsaris · 2012-03-19T01:44:06.597Z · LW(p) · GW(p)
Half the people you listed were insanely rude in pretty much every single comment they posted.
Jake Witmer was pretty much accusing everyone who downvoted him of communism.
911truther deliberately chose a provocative name and kept wailing in every single post about the downvotes he received (which of course caused him to get more downvotes).
sam0345's main problem wasn't that he was irrational, it was that he was an ass all the time.
But I don't even know why you chose to list the above in the same category as decent people like Mitchell_Porter and MrHen, people who don't follow assholish tactics and are therefore generally well received and treated as proper members of the community, even if occasionally downvoted (whether rightly or wrongly). As you yourself saw.
The main thing I wanted to do was just POINT IT OUT and see if anyone wants to comment on the fact that this happens, in LessWrong, surely the place where they are MOST likely to see why and how they are wrong. What does this mean that so many people do not?
The main problem with half the people you listed was that they were assholes, not that they were wrong. If people enjoy being assholes, if their utility function doesn't include a factor for being nice to people, how do you change that with mere debiasing? Not caring whether you treat others nicely or nastily is a matter of empathy, not of intellectual power.
Replies from: Nectanebo↑ comment by Nectanebo · 2012-03-19T10:50:29.228Z · LW(p) · GW(p)
The rudeness wouldn't help with the downvotes, I can understand that.
But the factor I was pointing out, and the common reason for grouping them together, was the inability to say "oops". I'm sorry, I didn't make that very clear. That's why I listed the assholes alongside the nice people.
MrHen left LessWrong believing in a God, and Mitchell_Porter (as far as I can tell) still believes dualism needs to be true if colour exists (or whatever his argument was, I'm embarrassing myself by trying to simplify it when I had a poor understanding of what he was trying to say). They were/are also great rationalists apart from that, and they both make sure to be very humble in general while on the site.
The other three were often rude, but the main reason I pointed them out was their inability to say "oops" when their rational failings were pointed out to them. Unlike those two, two of them then proceeded to act very douchey until they were driven from the site, but their first posts are much less abrasive and rude.
In general though, if they aren't going to work out that they are wrong at LessWrong, where are they going to?
Some of these people may work it out with time, and it may be unreasonable to expect them to change their mind straight away.
But this should show at least how difficult it is for an irrational person to attempt to become more rational; it's like having to know the rules to play the rules.
What does it take to commit to wanting rationality from a beginning of irrationality?
These examples show the existence of people on LessWrong who aren't rational, and while that isn't a surprise, I feel like the LessWrong community should perhaps learn from the failings of some of these people, in order to better react to situations like this in the future, or something. I don't know.
In any case, thank you for replying.
Replies from: GLaDOS, Mitchell_Porter, TheOtherDave, Viliam_Bur↑ comment by GLaDOS · 2012-03-24T10:31:25.074Z · LW(p) · GW(p)
MrHen left LessWrong believing in a God, and Mitchell_Porter (as far as I can tell) still believes dualism needs to be true if colour exists (or whatever his argument was, I'm embarrassing myself by trying to simplify it when I had a poor understanding of what he was trying to say). They were/are also great rationalists apart from that, and they both make sure to be very humble in general while on the site.
Bold statement that somehow still seems true: Most LessWrongers probably have a belief of comparable wrongness. MrHen is just unlucky.
↑ comment by Mitchell_Porter · 2012-03-24T10:47:11.123Z · LW(p) · GW(p)
Mitchell_Porter (as far as I can tell) still believes dualism needs to be true if colour exists (or whatever his argument was, I'm embarrassing myself by trying to simplify it when I had a poor understanding of what he was trying to say)
The argument is that for dualism not to be true, we need a new ontology of fundamental quantum monads that no-one else quite gets. :-) My Chalmers-like conclusion that the standard computational theory of mind implies dualism is an argument against the standard theory.
↑ comment by TheOtherDave · 2012-03-19T15:32:28.172Z · LW(p) · GW(p)
What does it take to commit to wanting rationality from a beginning of irrationality?
Deciding that being less wrong than I am now is valuable, realizing that doing what I've been doing all along is unlikely to get me there, and being willing to give up familiar habits in exchange for alternatives that seem more likely to get me there. These are independently fairly rare and the intersection of them is still more so.
This doesn't get me to wanting "rationality" per se (let alone to endorsing any specific collection of techniques, assumptions, etc., still less to the specific collection that is most popular on this site), it just gets me looking for some set of tools that is more reliable than the tools I have.
I've always understood the initial purpose of LW to be to present a specific collection of tools such that someone who has already decided to look can more easily settle on that specific collection (which, of course, is endorsed by the site founder as particularly useful), at-least-ostensibly in the hope that some of them will subsequently build on it and improve it.
Getting someone who isn't looking to start looking is a whole different problem, and more difficult on multiple levels (practical, ethical, etc.).
↑ comment by Viliam_Bur · 2012-03-19T13:38:35.124Z · LW(p) · GW(p)
But this should show at least how difficult it is for an irrational person to attempt to become more rational; it's like having to know the rules to play the rules. What does it take to commit to wanting rationality from a beginning of irrationality?
You need some initial luck. It's as if the human mind is a self-modifying system, where the rules can change the rules, and again, and again. Thus the human mind is floating around in a mindset space. The original setting is rather fluid, for evolutionary reasons -- you should be able to join a different tribe if it becomes essential for your survival. On the other hand, the mindset space contains some attractors; if you happen to have a certain set of rules, those rules keep preserving themselves. Rationality could be one of these attractors.
Is the inability to update one's mind really so exceptional on LW? One way of not updating is "blah, blah, blah, I don't listen to you". This happens a lot everywhere on the internet, but LW is probably not attractive to these people. The more interesting case is "I listen to you, and I value our discussion, but I don't update". This seems paradoxical. But I think it's actually not unusual... the only unusual thing is the naked form -- people who refuse to update, and recognize that they refuse to update. The usual form is that people pretend to update... except that their updates don't fully propagate. In other words, there is no update, only belief in update. Things like: yeah, I agree about the Singularity and stuff, but somehow I haven't signed up for cryopreservation; and I agree human lives are valuable and there are charities which can save a human life for every few hundred dollars sent to them, but somehow I haven't sent a single dollar yet; and I agree that rationality is very important and being strategic can increase one's utility, and then I procrastinate on LW and other websites and my everyday life goes on without any changes.
We are so irrational that even our attempts to become rational are horribly irrational, and that's why they often fail.
↑ comment by Grognor · 2012-03-17T23:06:49.192Z · LW(p) · GW(p)
What does this mean that so many people do not? What does it mean that such failures happen so often that I could choose random examples off the top of my head?
Absolutely nothing. Your sample suffers from selection bias: it's all the worst examples you could think of. Please don't make a discussion post about this.
↑ comment by GLaDOS · 2012-03-24T10:26:50.422Z · LW(p) · GW(p)
Another: http://lesswrong.com/user/sam0345/
Generally pretty irrational dude.
Not really. He had major problems with his tone though.
comment by Will_Newsome · 2012-03-16T06:18:36.856Z · LW(p) · GW(p)
Recommendations for a book/resource on comparative religion/mythology, ideally theory-laden and written by someone with good taste for hermeneutics? Preferably something that doesn't assume that gods aren't real. (I'm approaching the subject from the Gaimanian mythological paradigm, i.e. something vaguely postmodern and vaguely Gods Need Prayer Badly, but that perspective is only provisional and I value alternative perspectives.)
Replies from: None, Incorrect, NancyLebovitz↑ comment by [deleted] · 2012-03-16T06:22:12.233Z · LW(p) · GW(p)
I mean, the classic is Joseph Campbell and The Hero with a Thousand Faces. There's also The Masks of God and other books by him.
Replies from: khafra, Will_Newsome↑ comment by khafra · 2012-03-16T15:05:23.349Z · LW(p) · GW(p)
It's not book-length, but Eric S. Raymond's Dancing With the Gods treats them as, at least, intersubjectively real.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-17T11:34:13.964Z · LW(p) · GW(p)
I've read it. ESR is... a young soul, hard for me to learn from.
↑ comment by Will_Newsome · 2012-03-17T11:21:55.832Z · LW(p) · GW(p)
Thanks yo, will read.
↑ comment by NancyLebovitz · 2012-03-17T11:57:29.540Z · LW(p) · GW(p)
Not what you're asking for, but possibly interesting: A World Full of Gods: An Inquiry into Polytheism, a polytheistic theology. The author said it was the first attempt at such.
This review has enough quotes that you should be able to see whether you want to read it.
comment by Multiheaded · 2012-03-28T22:15:55.738Z · LW(p) · GW(p)
[Weird irrational rant]
A week and a half ago, I either caught some bug or went down with food poisoning. Anyway, in the evening I suddenly felt like shit and my body temperature jumped to 40C. My mom gave me some medicine and told me to try and get some sleep. My state of mind felt a bit altered, and I started praying fervently to VALIS. My Gnostic faith has been on and off for the last few years, but in that moment, I was suddenly convinced that it was a test of some sort, and that the fickle nature of reality would be revealed to me if I wouldn't waver in my belief. I felt that it's a point where my life could change, possibly for the better.
Therefore, I thought of and swore three oaths: an oath of scholarship - to obtain both rational and subjective ("spiritual") knowledge, and use it to search for truth; an oath of compassion - to treat all deserving beings with kindness and fairness, and to oppose evil with a healing word rather than hatred; and an oath of evangelism - to seek out fellow nutjobs who would be interested in this woo, and try to convert them. Hence this comment.
I kept praying for two hours or so, then slept for two more hours, and when I woke up I felt completely normal. The doctor came by later that day and found nothing wrong with me. I need to reflect on the whole thing more thoroughly. Anyway, I now believe with more certainty than before that there's a benevolent entity (which I'll keep calling VALIS, although she's better known as St. Sophia) acting in the simulation around us, and that it influences minds and events subtly, helping a fallen spark from outside the simulation that is within us to break free of its bondage.
If you felt that this comment is worthless, yeah, I guess it's hardly in line with LW's goals. But maybe someone will feel sympathetic. Hmm, perhaps I should really have a serious discussion with Will_Newsome about it all. From what I've seen of his posts on Catholicism, he seems to hold the opposing view, worshipping what the Gnostics would call the Demiurge. But at least he'd ponder these matters seriously.