Welcome to Less Wrong! (7th thread, December 2014)
post by Gondolinian · 2014-12-15T02:57:01.853Z · LW · GW · Legacy · 639 comments
A few notes about the site mechanics
A few notes about the community
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
- The Worst Argument in the World
- That Alien Message
- How to Convince Me that 2 + 2 = 3
- Lawful Uncertainty
- Your Intuitions are Not Magic
- The Planning Fallacy
- The Apologist and the Revolutionary
- Scope Insensitivity
- The Allais Paradox (with two followups)
- We Change Our Minds Less Often Than We Think
- The Least Convenient Possible World
- The Third Alternative
- The Domain of Your Utility Function
- Newcomb's Problem and Regret of Rationality
- The True Prisoner's Dilemma
- The Tragedy of Group Selectionism
- Policy Debates Should Not Appear One-Sided
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes <180 seconds.)
If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.
Finally, a big thank you to everyone that helped write this post via its predecessors!
639 comments
Comments sorted by top scores.
comment by BayesianMind · 2015-07-20T02:25:15.630Z · LW(p) · GW(p)
Hi,
I am Falk. I am a PhD student in the computational cognitive science lab at UC Berkeley. I develop and test computational models of bounded rationality in decision making and reasoning. I am particularly interested in how we can learn to be more rational. To answer this question I am developing a computational theory of cognitive plasticity. I am also very interested in self-improvement, and I am hoping to develop strategies, tools and interventions that will help us become more rational.
I have written a blog post on what we can do to accelerate our cognitive growth that I would like to share with the LessWrong community, but it seems that I am not allowed to post it yet.
Replies from: James_Miller↑ comment by James_Miller · 2015-07-20T02:29:39.042Z · LW(p) · GW(p)
I look forward to reading your post.
comment by vernvernvern · 2014-12-16T00:12:44.463Z · LW(p) · GW(p)
New to the site. LW came to my attention today in a Harper's Magazine article, "Come With Us If You Want To Live (Among the apocalyptic libertarians of Silicon Valley)," January 2015. I hope to learn about rationalism. My background includes psychology, psychometrics, mechanics, and history, but my interests are best described as eclectic. I value clarity of expression but also like creativity and humor. I view the world skeptically, sometimes cynically. For amusement I often speak ironically, and this at times offends my listeners when I fail to adequately signal it. I do not hesitate to apologize when I see that I have offended someone. Hello.
Replies from: dxu, Capla↑ comment by dxu · 2014-12-16T03:52:13.908Z · LW(p) · GW(p)
Welcome to Less Wrong!
(Wow. So you came here after reading the Harper's article, huh? That's actually pretty surprising to me. It's only one data point, but I feel as though I should significantly weaken what I said here about the article. Color me impressed.)
comment by Acty · 2015-06-28T15:21:34.227Z · LW(p) · GW(p)
Hey! <retracted because I changed my mind about the sensibleness of putting personal info on the internet and more people started recognising my name than I'm happy with>
Replies from: John_Maxwell_IV, Gram_Stone, None, Squark, ChristianKl↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-07-05T15:24:36.494Z · LW(p) · GW(p)
I think it's a bit of a shame that society seems to funnel our most intelligent, logical people away from social science. I think social science is frequently much more helpful for society than, say, string theory research.
Replies from: John_Maxwell_IV, btrettel, James_Miller, VoiceOfRa↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-07-21T08:21:17.309Z · LW(p) · GW(p)
Note: I do find it plausible that doing STEM in undergrad is a good way to train oneself to think, and the best combo might be a STEM undergrad and a social science grad degree. You could do your undergrad in statistics, since statistics is key to social science, and try to become the next Andrew Gelman.
Replies from: Acty↑ comment by Acty · 2015-07-21T08:36:00.403Z · LW(p) · GW(p)
As advice for others like me, this is good. For me personally it doesn't work too well; my A level subjects mean that I won't be able to take a STEM subject at a good university. I can't do statistics, because I dropped maths last year. The only STEM A level I'm taking is CompSci, and good universities require maths for CompSci degrees. I could probably get into a good degree course for Linguistics, but it isn't a passionate adoration for linguistics that gets me up in the mornings. I adore human and social sciences.
I don't plan to be completely devoid of STEM education; the subject I actually want to take is quite hard-science-ish for a social science. If I get in, I want to do biological anthropology and archaeology papers, which involve digging up skeletons and chemically analysing them and looking at primate behaviour and early stone tools. It would be pretty cool to do some kind of PhD involving human evolution. From what I've seen, if I get onto the course I want to get onto, it'll teach me a lot of biology and evolutionary psychology and maybe some biochemistry and linguistics.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-21T16:36:27.271Z · LW(p) · GW(p)
I want to do biological anthropology and archaeology papers, which involve digging up skeletons and chemically analysing them and looking at primate behaviour and early stone tools.
While archaeology certainly seems fun, do you think it will help you understand how to build a better world?
Replies from: Acty↑ comment by Acty · 2015-07-21T16:51:05.320Z · LW(p) · GW(p)
--
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-21T17:47:44.832Z · LW(p) · GW(p)
But to an extent, the biggest problems - coordination problems, how-do-we-build-a-half-decent-state problems - have been around since the very beginning.
No. The problem of building a state out of 10,000 people whose fastest way of transport is the horse and who have no math is remarkably different from the problem of building a state of tens of millions of people in the age of the internet, cellphones, fast airplanes, and cars that allow people to travel fast.
The Ancient Egyptians didn't have the math to even think about running a randomized trial to find out whether a certain policy will work. Studying them doesn't tell you anything about how to get our current political system to be more open to making policy based on scientific research.
Evolutionary psychology is incredibly useful for understanding our own biases and fallacies.
I think cognitive psychologists who actually did well-controlled experiments were a lot more useful for learning about biases and fallacies than evolutionary psychology.
rather than just carrying my magnifying glass straight over to political science and becoming the three gazillionth and fourth person to ever look for a better more ideal way to do politics.
Most people in political science don't do it well. I don't know of a single student body that changed to a new political system in the last decade.
I did study at the Free University of Berlin, which has a very interesting political structure that came out of '68. At the time there was a rejection of representative democracy, and thus even though the government of Berlin wants the student bodies of universities in Berlin to be organised according to representative democracy, our university effectively isn't. Politics students thought really hard around '68 about how to create a more soviet style democracy, and the system is still in operation today.
Compared to designing a system like that, today's politics students are slacking. They aren't practically oriented.
I'm interested in doing work on rationality problems and cooperation problems, and looking at the origins of the problems and how our current solutions came into being over the course of human history seems worthwhile as part of understanding the problems and figuring out more/better solutions.
If you are interested in rationality problems, there is the field of decision science. It's likely to yield more than anthropology. Having a good grasp of academic decision science would be helpful when it comes to designing political systems, and likely not enough people in political science deal with that subject.
Are you aware that the American Anthropological Association dropped science from their long-range plan 5 years ago?
Replies from: Richard_Kennaway, Acty, Vaniver↑ comment by Richard_Kennaway · 2015-07-21T18:25:52.403Z · LW(p) · GW(p)
soviet style democracy
Is that the system where everyone can vote, but there's only one candidate?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-21T18:51:31.014Z · LW(p) · GW(p)
No, that's not the meaning of the word soviet. Soviet translates into something like "council" in English.
Reducing elections to a single candidate also wouldn't fly legally. You can't just forbid people from being a candidate without producing a legal attack surface.
As I said, it's actually a complex political system that needs smart people to set it up.
It's like how British democracy also happens to be "democracy", where there's a queen and the prime minister went to Eton and Oxford and wants to introduce barriers on free communication that are in some ways more totalitarian than what the Chinese government dares to do.
Democracy always gets complicated when it comes to the details ;).
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-07-21T19:13:52.987Z · LW(p) · GW(p)
No, that's not the meaning of the word soviet. Soviet translates into something like "council" in English.
In English, "Soviet" is the adjectival form of "USSR".
Never mind the word. What is the actual structure at the Free University of Berlin that you're referring to? And in 1968, did they believe that this was how things were done in the USSR?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-21T19:51:30.600Z · LW(p) · GW(p)
In English, "Soviet" is the adjectival form of "USSR".
Because Soviets are a central part of how the USSR was organised.
And in 1968, did they believe that this was how things were done in the USSR?
Copying how things were in the USSR wasn't the point. The point is certain Marxist ideas about the value of soviets for political organisation.
What is the actual structure at the Free University of Berlin (FU) that you're referring to?
A system of soviets, as I said above. There are a lot of ideas involved. On the left you had a split between people who believe in social democracy and people who are Marxists. The FU Asta is Marxist.
The people sitting in it are still Marxist even though the majority of the student population of the FU isn't, and they don't have a problem with that as they don't believe in representative democracy. They also defend their right to use their printing press to print whatever they want by not disclosing what they are printing. By law they are only allowed to print for university purposes and not for general political activism.
↑ comment by Acty · 2015-07-21T18:49:54.627Z · LW(p) · GW(p)
--
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-21T19:22:08.732Z · LW(p) · GW(p)
The problem of studying people in the first villages is not only that their problems don't map directly to today's. It's also that it gets really hard to obtain concrete data. It's much easier to do good science when you have good, reliable data.
With 10,000 people you can solve a lot via tribal bonds and clans. Families stick together. You can also do a bit of religion and everyone follows the wise local priest. Those solutions don't scale well.
It seems mostly irrelevant to me though, since I am aware that rubbish social scientists exist and I just want to try and improve and be a good social scientist.
You are likely to become like the people who surround you when you go to university. You also build relationships with them.
Going to Cambridge is good. Cambridge draws a lot of intelligent people together and also provides you with very useful contacts for a political career. On the other hand, that means that you have to go to those places in Cambridge where the relevant people are. Find out which professors at Cambridge actually do good social science. Then go to their classes.
Just make sure that you don't get lost and go on a career of digging up old stuff and not affecting the real world. A lot of smart people get lost in programs like that. It's like smart people who get lost in theoretical physics.
↑ comment by Vaniver · 2015-07-21T19:00:24.163Z · LW(p) · GW(p)
I think cognitive psychologists who actually did well-controlled experiments were a lot more useful for learning about biases and fallacies than evolutionary psychology.
The two are not mutually exclusive.
↑ comment by btrettel · 2015-07-12T03:38:05.287Z · LW(p) · GW(p)
I agree wholeheartedly. A field like theoretical physics is much more glamorous to a large number of intelligent people. I think it's partly signaling, but I'm not sure that explains everything.
What makes the least sense to me are people who seem to believe (or even explicitly confirm!) that they are only interested in things which have no applications. Especially when these people seem to disparage others who work in applied fields. I imagine this teasing might explain a bit of why so many smart people work in less helpful fields.
Replies from: Acty↑ comment by Acty · 2015-07-20T14:37:39.831Z · LW(p) · GW(p)
I think to an extent, physics is more intellectually satisfying to a lot of smart people. It's much easier to prove things for definite in maths and physics. You can take a test and get right answers, and be sure of your right answers, so when you're sufficiently smart it feels like a lot of fun to go around proving things and being sure of yourself. It feels much less satisfying to debate about which economics theories might be better.
Knowing proven facts about high level physics makes you feel like an initiate into the inner circles of secret powerful knowledge, knowing a bunch about different theories of politics (especially at first) just makes you feel confused. So if you're really smart, 'hard' sciences can feel more fun. I know I certainly enjoy learning computer science and feeling the rush of vague superiority when I fix someone's computer for them (and the rush of triumph when my code finally compiles). When I attempt to fix people's sociological opinions for them, there's no rush of vague superiority, just a feeling of intense frustration and a deeply felt desire to bang my head against the wall.
Then there's the Ancient Greek cultural thing where sitting around thinking very hard is obviously superior to going out and doing things - cool people sit inside their mansions and think, leaving your house and mucking around in the real world actually doing things is for peasants - which has somehow survived to this day. The real world is dirty and messy and contains annoying things that mess up your beautiful neat theories. Making a beautiful theory of how mechanics works is very satisfying. Trying to actually use the theory to build a bridge when you have budget constraints and a really big river is frustrating. Trying to apply our built up knowledge about small things (molecules) to bigger things (cells) to even bigger things (brains) to REALLY BIG AND COMPLICATED things (lots and lots of brains together, eg a society) is really intensely frustrating. And the intense frustration and higher difficulty (more difficult to do it right, anyway) means there's more failure and less conclusive results / slower progress, which leads some people to write off social science as a whole. The rewarding rush of success when your beautifully engineered bridge looks shiny and finished is not something you really get in the social sciences, because it will be a very long time before someone feels the rewarding rush of success that their beautiful preference-satisfying society is shiny and perfect.
I do think that the natural sciences are hopelessly lost without the social sciences, but for most super-clever people, is studying natural science more fun than doing social science? Definitely - I mean, while the politics students are busy reading books and banging their heads against walls and yelling at each other, physics students are putting liquid nitrogen in barrels of ping pong balls so that the whole thing explodes! (I loved chemistry in secondary school for years, right up until I finally caught on that coloured flames were the closest we were going to get to scorching our eyebrows off. Something about health and safety, thirteen year olds, and fire. I wish I hadn't stopped loving chemistry, because I hear once you're at university they do actually let you set things on fire sometimes.)
Replies from: btrettel↑ comment by btrettel · 2015-07-21T15:52:42.954Z · LW(p) · GW(p)
I don't think that something being (more) mathematically rigorous explains all of what we see. Physicists at one time used to study fluid dynamics. Rayleigh, Kelvin, Stokes, Heisenberg, etc., all have published in the field. You can do quite a lot mathematically in fluids, and I have felt like part of some inner circle because of what I know about fluid dynamics.
Now the field has been basically displaced by quantum mechanics, and it's usually not considered part of "physics" in some sense, and is less popular than I think you might expect if a subject being amenable to mathematical treatment is attractive to some folks. Physicists are generally taught only the most basic concepts in the field. My impression is that the majority of physics undergrads couldn't identify the Navier-Stokes equations, which are the most basic equations for the movement of a fluid.
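For reference, a standard incompressible form of those equations, in the usual notation, is

\[ \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu\nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0, \]

where \(\mathbf{u}\) is the velocity field, \(p\) the pressure, \(\rho\) the density, \(\mu\) the dynamic viscosity, and \(\mathbf{f}\) any body force per unit volume such as gravity.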
It could also be that fluids have obvious practical applications (aerodynamics, energy, etc.) and this makes the subject distasteful to pedants. That's just speculation, however. I'm really not sure why fields like physics, etc., are so attractive to some people, though I think you've identified parts of it.
You do make a good point about the sense of completion being different in engineering vs. social science. I suppose the closest you could get in social science is developing some successful self-help book or changing public policy in a good way, but I think these are much harder than building things.
Replies from: Acty↑ comment by Acty · 2015-07-21T16:58:54.493Z · LW(p) · GW(p)
I think there's also definitely a prestige/coolness factor which isn't correlated with difficulty, applicability, or usefulness of the field.
Quantum mechanics is esoteric and alien and weird and COOL and saying you understand it whilst sliding your glasses down your nose makes you into Supergeek. Saying "I understand how wet stuff splashes" is not really so... high status. It's the same thing that makes astrophysics higher status than microbiology even though the latter is probably more useful and saves more lives / helps more people - rockets spew fire and go to the moon, bacteria cells in a petri dish are just kind of icky and slimy. I am quite certain that, if you are smart enough to go for any field you want, there is a definite motivation / social pressure to select a "cool" subject involving rockets and quarks and lasers, rather than a less cool subject involving water and cells or... god forbid... political arguments.
And, hmm, actually, not quite true on the last point - a social scientist could develop an intervention program, like a youth education program, that decreases crime or increases youth achievement/engagement, and it would probably feel awesome and warm and fuzzy to talk to the youths whose lives were improved by it. So you could certainly get closer than "developing some successful self-help book". It is certainly harder, though, I think, and there's certainly a higher rate of failure for crime-preventing youth education programs than for modern bridge-building efforts.
Replies from: btrettel↑ comment by btrettel · 2015-07-21T19:47:48.343Z · LW(p) · GW(p)
Quantum mechanics is esoteric and alien and weird and COOL
To be honest, I found QM to be the least interesting subject of all physics which I've learned about.
Also, I don't think the features you highlighted work either. Fluid dynamics has loads of counterintuitive findings, perhaps even more so than QM, e.g., streamlining can increase drag at low Reynolds numbers, increasing speed can decrease drag in certain situations ("drag crisis"). Fluids also has plenty of esoteric concepts; very few people reading the previous sentence likely know what the Reynolds number or drag crisis is.
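For anyone curious, the Reynolds number is the standard dimensionless ratio of inertial to viscous forces in a flow,

\[ \mathrm{Re} = \frac{\rho V L}{\mu} = \frac{V L}{\nu}, \]

where \(\rho\) is the fluid density, \(V\) a characteristic velocity, \(L\) a characteristic length (a sphere's diameter, say), \(\mu\) the dynamic viscosity, and \(\nu\) the kinematic viscosity. Low Re means viscosity dominates (dust settling in air); high Re means inertia dominates and the flow is typically turbulent.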
Physicists, even astrophysicists, know little more about how rockets work than educated laymen. Rocketry is part of aerospace engineering, of which the foundation is fluid dynamics. Maybe rocketry is a counterexample, but I don't really think so, as there are a lot more people who think rockets are interesting than who know what a de Laval nozzle is. Even that has some counterintuitive effects; the fluid accelerates in the expansion!
Replies from: Acty↑ comment by Acty · 2015-07-25T03:01:27.765Z · LW(p) · GW(p)
You make me suddenly, intensely curious to find out what a Reynolds number is and why it can make streamlining increase drag. I am also abruptly realising that I know less than I thought about STEM fields, given I just kind of assumed that astrophysicists were the official People Who Know About Space and therefore rocketry must be part of their domain. I don't know whether I want to ask if you can recommend any good fluid dynamics introductions, or whether I don't want to add to the several feet high pile of books next to my bed...
Okay - so why do you think quantum mechanics became more "cool" than fluid dynamics? Was there a time when fluid dynamics held the equivalent prestige and mystery that quantum mechanics has today? It clearly seems to be more useful, and something that you could easily become curious about just from everyday events like carrying a cup of tea upstairs and pondering how near-impossible it is not to spill a few drops if you've overfilled it.
Replies from: btrettel, Good_Burning_Plastic↑ comment by btrettel · 2015-07-25T14:06:50.288Z · LW(p) · GW(p)
The best non-mathematical introduction I have seen is Shape and Flow: The Fluid Dynamics of Drag. This book is fairly short; it has 186 pages, but each page is small and there are many pictures. It explains some basic concepts of fluid dynamics like the Reynolds number, what controls drag at low and high Reynolds numbers, why golf balls (or roughened spheres in general) have less drag than smooth spheres at high Reynolds number (this does not imply that roughening always reduces drag; it does not on streamlined bodies as is explained in the book), how drag can decrease as you increase speed in certain cases, how wind tunnels and other similar scale modeling works, etc.
You could also watch this series of videos on drag. They were made by the same person who wrote Shape and Flow. There is also a related collection of videos on other topics in fluid dynamics.
Beyond that, the most popular undergraduate textbook by Munson is quite good. I'd suggest buying an old edition if you want to learn more; the newer editions do not add anything of value to an autodidact. I linked to the fifth edition, which is what I own.
I'll offer a few possibilities about why fluids is generally seen as less attractive than QM, but I want to be clear that I think these ideas are all very tentative.
This study suggests that in an artificial music market, the popularity charts are only weakly influenced by the quality of the music. (Note that I haven't read this beyond the abstract.) Social influence had a much stronger effect. One possible application of this idea to different fields is that QM became more attractive for social reasons, e.g., the Matthew effect is likely one reason.
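As a toy illustration of that kind of cumulative-advantage dynamic, here is a minimal sketch in Python; the model, its parameters, and the numbers are illustrative assumptions, not the study's actual setup. Each simulated listener downloads one song, chosen with a probability that mixes intrinsic quality with current popularity.

```python
import random

def simulate_market(n_songs=50, n_listeners=10000, social_weight=0.8, seed=0):
    """Toy cumulative-advantage market: each listener downloads one song,
    chosen with a probability mixing intrinsic quality and current popularity."""
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_songs)]   # hidden "true" quality
    downloads = [1] * n_songs                          # every song starts with one download
    for _ in range(n_listeners):
        total_d = sum(downloads)
        total_q = sum(quality)
        weights = [social_weight * d / total_d + (1 - social_weight) * q / total_q
                   for d, q in zip(downloads, quality)]
        pick = rng.choices(range(n_songs), weights=weights)[0]
        downloads[pick] += 1
    return quality, downloads

quality, downloads = simulate_market()
best = max(range(len(quality)), key=quality.__getitem__)      # highest-quality song
hit = max(range(len(downloads)), key=downloads.__getitem__)   # chart-topping song
print("highest quality:", best, "| most downloaded:", hit)
```

With a high social_weight, the chart-topper is often not the highest-quality song, and different random seeds reshuffle the chart far more than they reshuffle the quality ranking, which matches the qualitative pattern described above.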
The vast majority of the field of fluid mechanics is based on classical mechanics, i.e., F = m a is one of the fundamental equations used to derive the Navier-Stokes equations. Maybe because the field is largely based on classical effects, it's seen as less interesting. This could be particularly compelling for physicists, as novelty is often valued over everything else.
I've also previously mentioned that fluid dynamics is more useful than quantum mechanics, so people who believe useless things are better might find QM more interesting.
There also is the related issue that a wide variety of physical science is lumped into the category "physics" at the high school level, so someone with a particular interest might get the mistaken impression that physics covers everything. I majored in mechanical engineering in college, and basically did it because my father did. My interest even when I was a teenager was fluids, but I hadn't realized that physicists don't study the subject in any depth. I was lucky to have picked the right major. I suppose this is a social effect of the type mentioned above.
(Also, to be clear, I don't want to give the impression that more people do QM than fluids. I actually think the opposite is more likely to be true. I'm saying that QM is "cooler" than fluids.)
Fluid mechanics used to be "cooler" back in the late 1800s. Physicists like Rayleigh and Kelvin made seminal contributions to the subject, but neither received their Nobel for fluids research. I recall reading that two very famous fluid dynamicists in the early 20th century, Prandtl and Taylor, were recommended for the prize in physics, but neither received it. These two made foundational contributions to physics in the broadest sense of the word. Taylor speculated the lack of Nobels for fluid mechanics was due to how the Nobel prize is awarded. I also recall reading that there were indications that the committee found the mathematical approximations used to be distasteful even when they were very accurate. Unfortunately those approximations were necessary at the time, and even today we still use approximations, though they are different. Maybe the lack of Nobels contributes to fluids not being as "cool" today.
Replies from: Acty, Gram_Stone↑ comment by Acty · 2015-07-25T20:05:16.602Z · LW(p) · GW(p)
Ooh, yay, free knowledge and links! Thank you, you're awesome!
The linked study was a fun read. I was originally a bit skeptical - it feels like songs are sufficiently subjective that you'll just like what your friends like or what's 'cool', but what subjects you choose to study ought to be the topic of a little more research and numbers - but after further reflection the dynamics are probably the same, since often the reason you listen to a song at all is because your friend recommended it, and the reason you research a potential career in something is because your careers guidance counselor or your form tutor or someone told you to. And among people who've not encountered 80k hours or EA, career choice is often seen as a subjective thing. It'd be like Asch's conformity experiments, where participants aren't even aware that they're conforming because it's subconscious, except even worse because it's subconscious and seen as subjective...
That seems like a very plausible explanation. There could easily be a kind of self-reinforcing loop, as well, like, "I didn't learn fluid dynamics in school and there aren't any fluid dynamics Nobel prize winners, therefore fluid dynamics isn't very cool, therefore let's not award it any prizes or put it into the curriculum..."
At its heart, this is starting to seem like a sanity-waterline problem like almost everything else. Decrease the amount that people irrationally go for novelty and specific prizes and "application is for peasants" type stuff, and increase the amount they go for saner things like the actual interest level and usefulness of the field, and prestige will start being allocated to fields in a more sensible way. Fluid dynamics sounds really really interesting, by the way.
↑ comment by Gram_Stone · 2015-07-25T19:41:37.626Z · LW(p) · GW(p)
Also perhaps worth noting that the effect within the LW subculture in particular may have to do with lots of LW users knowing a lot about ideas or disciplines where there are a lot of popular but wrong positions so they know how not to go astray. Throughout the Sequences, before you figure out how to do it right, you hear about how a bunch of other people have done it wrong: MWI, p-zombies, value theory, evolutionary biology, intellectual subcultures, etc. I don't know that there are any sexy controversies in fluid mechanics.
Replies from: btrettel↑ comment by btrettel · 2015-07-26T01:14:40.495Z · LW(p) · GW(p)
Interesting points. There are controversies in fluid mechanics, and they are discussed at great length in the field, but I don't know of any popular treatments of them.
In particular, there are a large number of debates centering around turbulence modeling which actually are extremely relevant to modeling in general. The LES vs. RANS debate is interesting, and while in some sense LES has "won", this does not mean that LES is entirely satisfactory. A lot of turbulence theory is also quite controversial. I recall reading a fair bit about isotropic turbulence decay in 2012 and I was surprised by the wide variety of results different theoretical and experimental approaches give. Isotropic turbulence decay, by the way, is among the easiest turbulence problems you could devise.
The debate in turbulence about the log law vs. power law is a waste of time, and should be recognized as such. Both basically give you the same result, so which you use is inconsequential. There are some differences in interpretation that I don't think are important or even remember to be honest.
Thinking about it, things like QM are a fair bit easier to explain than turbulence. To actually explain these things in detail beyond what I've mentioned would take a considerable amount of time.
↑ comment by Good_Burning_Plastic · 2015-07-25T08:11:27.503Z · LW(p) · GW(p)
"I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic." (Horace Lamb)
(Indeed, today quantum electrodynamics makes correct predictions within one part per billion and fluid dynamics has an open million-dollar question.)
↑ comment by James_Miller · 2015-07-20T16:07:23.138Z · LW(p) · GW(p)
If you consider finance a subset of social science then the U.S. puts a lot of its best and brightest there.
Replies from: ChristianKl, Lumifer↑ comment by ChristianKl · 2015-07-20T16:21:45.001Z · LW(p) · GW(p)
Hedge funds do manage to employ the best and brightest, on the other hand I'm not sure whether the same is true for the academic subject of finance.
↑ comment by Lumifer · 2015-07-20T16:10:34.286Z · LW(p) · GW(p)
Finance is not social science. I think it's more similar to engineering: you need to have a grasp of the underlying concepts and be able to do the math, but the real world will screw you up on a very regular basis and so you need to be able to deal with that.
Replies from: James_Miller↑ comment by James_Miller · 2015-07-20T17:22:27.538Z · LW(p) · GW(p)
Behavioral finance is supposedly a big thing.
Replies from: Lumifer↑ comment by VoiceOfRa · 2015-07-12T03:01:15.477Z · LW(p) · GW(p)
The bigger shame is the kind of BS that passes for humanities/social science these days.
Replies from: TheAncientGeek, John_Maxwell_IV↑ comment by TheAncientGeek · 2015-07-21T07:25:56.549Z · LW(p) · GW(p)
Is that a fact? I've seen social scientists complain that social science is trying too hard to emulate the hard sciences.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-21T07:41:08.611Z · LW(p) · GW(p)
Yes, most social science is cargo cult science. That's perfectly consistent with it being BS.
Replies from: hg00↑ comment by hg00 · 2015-07-21T08:55:52.295Z · LW(p) · GW(p)
Look, it may very well be that social science is low-quality. But your comments in this thread are not at all up to LW standards. You need to cite evidence for your positions and stop calling people names.
Replies from: Good_Burning_Plastic↑ comment by Good_Burning_Plastic · 2015-07-21T09:03:44.930Z · LW(p) · GW(p)
calling people names
Well, to be pedantic he's called social sciences names but AFAICT he hasn't called social scientists names.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-07-19T20:40:56.606Z · LW(p) · GW(p)
I think there may be a self-reinforcing spiral where highly logical people aren't impressed by social science, leading them to avoid it, leading to social science being unimpressive to highly logical people because it's done by people who aren't highly logical. But I could be wrong--maybe highly logical people are misperceiving.
Replies from: VoiceOfRa, Lumifer↑ comment by VoiceOfRa · 2015-07-20T20:33:32.504Z · LW(p) · GW(p)
It's not just a self-reinforcing spiral. There is also a driver: since social science has more political implications and there is a lot of political control over science funding, social science selects for people willing to reach the "correct" conclusions even if they have to torture logic and the evidence to do so.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-07-21T04:19:09.694Z · LW(p) · GW(p)
Well that's a self-reinforcing spiral of a different type. In general, I see a number of forces pushing newcomers to a group towards being similar to whoever the folks already in the group are:
* The Iron Law of Bureaucracy, insofar as it's accurate.
* Self-segregation. It's less aversive to interact with people who agree with you and are similar to you, which nudges people towards forming social circles of similar others.
* Reputation effects. If Google has a reputation for having great programmers, other great programmers will want to work there so they can have great coworkers.
This is why it took someone like Snowden to expose NSA spying. The NSA was the butt of jokes in the crypto community for probably doing illicit spying long before Snowden... which meant people who cared about civil liberties didn't apply for jobs there (who wants to work for the evil empire?) (Note: just my guess as someone outside crypto; could be totally wrong on this one.)
Edit: evaporative cooling should probably be considered related to the bullet points above.
↑ comment by Lumifer · 2015-07-20T01:24:48.513Z · LW(p) · GW(p)
You're assuming that "intelligent" == "logical". That just ain't so and especially ain't so in social sciences.
"The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function." -- F. Scott Fitzgerald
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-20T10:45:31.276Z · LW(p) · GW(p)
Is there data about the average IQ of PhDs or professors in the social sciences?
Replies from: Acty↑ comment by Acty · 2015-07-20T15:22:26.772Z · LW(p) · GW(p)
I did a bit of googling, and it really surprised me. I thought the social science IQs would be lower on average than the STEM IQs, but I found a lot of conflicting stuff. Most sources seem to put physics and maths at the top of the ranks, but then there's engineering, social science and biology and I keep seeing those three in different orders. If you split up 'social science' and 'humanities', then humanities stays at the top and social science drops a few places, presumably because law is a very attractive profession for smart people (high prestige and pay) and law is technically a humanity. I'm not very confident in any of my Google results, though - they all looked slightly dodgy - so I'm not linking to any and would love it if someone else could find some better data.
I don't think it's an argument for disregarding social science, even if we did find data that showed all social scientists are stupider than STEM scientists. I mean, education came last for IQ on almost all of the lists I looked up. Education. Nobody is going to say that this means we should scrap education. If education really does attract a lot of stupid people, I think that is cause to try and raise the prestige and pay of education as a profession so that more smart people do it - not to cut funding for schools. (Though the reason education is so lowly ranked for IQ could be that a lot of countries don't require teachers to have education degrees, you get a different degree and then a teaching certificate, so you only take Education as a bachelor's if you want to do Childhood Studies and go into social care/work.)
It's clearly very important that our governments are advised by smart social scientists who can do experiments and tell them whether law X or policy Y will decrease the crime rate or just annoy people, or we're just letting politicians do whatever their ideology tells them to do. So, even though the IQ of people in social sciences is lower on average than the IQ of people in physics, we shouldn't conclude that social science is worthless - I think we should conclude that efforts must be made to get more smart people to consider becoming social scientists.
I also don't think you necessarily need a high IQ to be a successful social scientist. Being a successful mathematician requires a lot of processing power. Being a successful social scientist requires a lot of rationality and a lot of carefulness. If you're trying to do some problems with areas of circles, then you will not be distracted by your religious belief that pi is an evil number and cannot be the answer, nor will you have to worry about the line your circle is drawn with being a sentient line and deliberately mucking up your results. Social scientists don't need as much processing power to throw at problems, but it takes a lot of care and ability to change one's mind to do good social science, because you're doing research on really complicated high-level things with sentient agents who do weird things and you were probably raised with an ideology about it. Without a good amount of rationality, you will just end up repeatedly "proving" whatever your ideology says.
To make physics worthwhile you need high IQ; without that, you'd produce awful physics. To make social science worthwhile, you need to be very very careful and ignore what your ideology is telling you in the back of your mind; without that, you produce awful social science. Unfortunately, our society's ability to test for IQ is much better than our society's ability to test for rationality, which could explain why more people get away with BS social science than they do with BS physics. (The other explanation is that there are both awful social science papers and awful physics papers, but awful physics papers get ignored by everyone, whereas awful social science papers are immediately picked up by whatever group whose ideology they support and linked to on facebook with accompanying comments in all-caps.)
Replies from: CCC, Lumifer, ChristianKl, VoiceOfRa↑ comment by CCC · 2015-07-21T09:17:20.657Z · LW(p) · GW(p)
If you're trying to do some problems with areas of circles, then you will not be distracted by your religious belief that pi is an evil number and cannot be the answer,
That might actually have been a problem once. Apparently the Pythagoreans had serious problems with irrational numbers...
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-22T22:28:35.646Z · LW(p) · GW(p)
And current mathematicians have them with infinitesimally small numbers ;)
Replies from: None, CCC, nyralech↑ comment by Lumifer · 2015-07-20T15:34:43.098Z · LW(p) · GW(p)
I don't think it's an argument for disregarding social science
It's not an argument for disregarding social science, but it is an argument to be more sceptical of its claims.
I also don't think you necessarily need a high IQ to be a successful social scientist.
I disagree, but let me qualify that. If we define "successful" as "socially successful", that is, e.g., you have your tenure and your papers are accepted in reasonable peer-reviewed journals, then yes, you do not need a high IQ to be a successful social scientist.
However, if we define "successful" as "actually advancing the state of human knowledge", then I feel fairly confident in thinking that a high IQ is even more of a necessity for a social scientist than it is for someone who does hard sciences.
As you pointed out yourself, hard sciences are easier :-)
Replies from: Acty↑ comment by Acty · 2015-07-20T22:50:58.797Z · LW(p) · GW(p)
Ah, I'm sorry - I actually agree with everything you just wrote. I fear I may have miscommunicated slightly in the comment you're replying to.
You're right, I did point that out. And I do think that it can be harder in social science to weed out the good stuff from the bad stuff, and as such, you can get reasonably far in social science terms by being well-spoken and having contacts with a similar ideology even if your science isn't great. This is an undesirable state of affairs, of course, but I think it's just because doing good social science is really difficult (and in order to even know what good social science looks like, you've gotta be smart enough to do good social science). It's part of the reason I think I can be useful and make a difference by doing social science, if I can do good rational social science and encourage others to do more rational social science.
My point isn't that you don't need to be as smart to do social science; doing it well is actually harder, so you'd expect social scientists to be at least as smart as hard scientists. I think that social science and hard science require slightly different kinds of intelligence, and IQ tests better for the hard science kind rather than the social science kind.
It's really difficult to make a formula that calculates how to get a rocket off the ground. You have to crunch a lot of numbers. However, once you've come up with that formula, it is easy to test it; when you fire your rocket, does it go to the moon or does it blow up in your face?
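To pick one classic illustration of such a formula: the Tsiolkovsky rocket equation,

\[ \Delta v = v_e \ln\frac{m_0}{m_f}, \]

relates the velocity change a rocket can achieve to its effective exhaust velocity \(v_e\) and the ratio of its initial mass \(m_0\) to its final (dry) mass \(m_f\). Plug in the numbers and fire the rocket; the prediction gets checked directly.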
It's really easy to come up with a social science intervention/hypothesis. You just say "people from lower classes have worse life outcomes because of their poor opportunities (so we should improve opportunities for poor people)" or "people from lower classes are in the lower class because they're not smart, and their parents were not smart and gave them bad genes, so they have worse life outcomes because they're not smart (so we should do nothing)" or "people from lower classes have a culture of underachievement that doesn't teach them to work hard (so we should improve life/study skills education in poor areas)". I mean, coming up with one of those three is way easier than designing a rocket. However, once you've come up with them... how do you test it? How do you design a program to get people to achieve higher? Run an intervention program involving education and improved opportunities for years, carefully guarding against all the ideological biases you might have and the mess that might be made by various confounding factors, and still not necessarily have a clear outcome? There's not as much difficulty in hypothesis-generation or coming-up-with-solutions, but there's a lot more difficulty in hypothesis-testing and successful-solution-implementing.
Hard science requires more raw processing power to come up with theories; social science requires more un-biased-ness and carefulness in testing your theories. They're subtly different requirements and I think IQ is a better indicator of the former than the latter.
↑ comment by ChristianKl · 2015-07-20T15:38:12.008Z · LW(p) · GW(p)
I mean, education came last for IQ on almost all of the lists I looked up. Education. Nobody is going to say that this means we should scrap education.
Given that teachers who have a master's in education don't do better than teachers who don't, I think there's a good case for stripping the current professors in that field of their titles.
Replies from: Acty↑ comment by Acty · 2015-07-20T23:28:17.575Z · LW(p) · GW(p)
This fact gives very good support to an argument like "we should scrap Masters programs in education". But it could also give very good support to "we should try out a few variations on Masters programs in education to see if any of them would do better than the current one, and if we find one that actually works, we should change our current one to that thing. If and only if we try a bunch of different variations and none of them work, we should scrap Masters programs in education."
I mean, if we could create a program that consistently made people better teachers, that would be a very worthwhile endeavour. If our current program aiming to make people better teachers is utterly failing, maybe we should scrap that particular program, but surely we should also have a go at doing a few different programs and seeing if any of those succeed?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-20T23:50:06.105Z · LW(p) · GW(p)
Who's responsible for creating such a program? The current professors. Given that they don't do so, we need different people.
Replies from: Acty↑ comment by Acty · 2015-07-21T00:50:09.426Z · LW(p) · GW(p)
Very true. We should task them with creating a better program, and if they don't produce results, we should fire them and find new professors. Just the same as firing any employee who is incapable of doing their job, really.
The thing I disagree with would be if we scrapped the positions and programs entirely; I am entirely on board with the idea of firing the people currently holding the positions and running the programs, and finding new people to hold the positions and run the programs differently. I think that I now understand your position better and you're advocating the latter, not the former, in which case I entirely agree with you.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-21T15:20:49.548Z · LW(p) · GW(p)
There are many different ways to teach knowledge. Academia isn't the only way. You could have an education system where teachers don't go to university to learn how to teach but instead do apprenticeship programs: they sit in the classrooms of experienced teachers and help.
You could also decrease the amount of time that teachers spend in the classroom to allow for time where they discuss with their colleagues what works best.
Replies from: Acty↑ comment by Acty · 2015-07-21T15:53:34.167Z · LW(p) · GW(p)
Different people learn in different ways. I'm really good at textbook learning and hate hands on learning (and suspect that is common among introverted intellectual people). Ideally, why not offer both a university course that qualifies you as a teacher and an apprenticeship system that qualifies you as a teacher, and allow prospective teachers to decide which best suits their learning style? We could even do cognitive assessments on the prospective teachers to recommend to them which program would be best for what their strengths seem to be.
Although, as someone who lives with a teacher - we definitely don't need to reduce the time they spend in the classroom, we need to change the fact that they spend double that time marking and planning and doing pointless paperwork.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-21T17:37:26.849Z · LW(p) · GW(p)
The job of being a teacher is not ideal for introverts. At its core, teaching is about social interaction.
You can't learn charisma through reading textbooks. Textbooks don't teach you to be an authority in the classroom and get the children to pay attention to what you are saying.
They don't teach empathy either. Empathy is a strong predictor for success of psychologists in therapy session and likely also useful for teachers.
"Learning styles" are a popular concept but there no good research that suggests that giving different students different training based on learning style is helpful.
Although, as someone who lives with a teacher - we definitely don't need to reduce the time they spend in the classroom, we need to change the fact that they spend double that time marking and planning and doing pointless paperwork.
I agree. Get rid of the whole business of giving students grades outside of automatically graded tests to allow a teacher to focus on teaching.
↑ comment by VoiceOfRa · 2015-07-20T20:36:21.133Z · LW(p) · GW(p)
It's clearly very important that our governments are advised by smart social scientists who can do experiments and tell them whether law X or policy Y will decrease the crime rate or just annoy people, or we're just letting politicians do whatever their ideology tells them to do.
Unfortunately, what is actually happening is that the politicians and bureaucrats decide which policy they prefer for ideological reasons and then fund social scientists willing to produce "science" to justify the decision.
Replies from: Acty↑ comment by Acty · 2015-07-20T23:25:32.834Z · LW(p) · GW(p)
I'm not sure this is necessarily always true. There are absolutely certainly instances of this happening, but more and more governments are adopting "evidence-led policy" policies, and I'd hope that at least sometimes those policies do what they say on the tin. The UK has this: https://www.gov.uk/what-works-network and I'm going to try and do more reading up on it to see whether it looks like it's doing any good or just proving what people want it to prove.
It would certainly be preferable to live in a world where social scientists did good unbiased social science and then politicians listened to them. The question is, how do we change our current world into such a world? It certainly isn't by disparaging social science or assigning it low prestige. We need to make it so that science>ideology in prestige terms, which will be really tricky.
Replies from: CCC, VoiceOfRa↑ comment by CCC · 2015-07-21T09:19:56.681Z · LW(p) · GW(p)
I'm not sure this is necessarily always true.
Yes; you'll get some politicians who actually want to reduce the crime rate and are willing to look for advice on how to do that effectively.
They're hard to spot, because all politicians want to look like that sort of politician, leaving the genuine ones hidden in a crowd of lookalikes...
Replies from: Acty↑ comment by Acty · 2015-07-21T09:52:12.699Z · LW(p) · GW(p)
There could be solutions to this, I'm sure, or at least ways of minimising the problems. Maybe an independent-from-current-ruling-party research institute that ran studies on all proposed laws/policies put forward by both the in-power and opposition power, which required pre-registration of studies, and then published its findings very publicly in an easy-for-public-to-read format? Then it would be very obvious which parties were saying the same things as the science and which were ignoring the science, and it would be hard for the parties to influence the social scientists to just get them to say what they want them to say.
Replies from: CCC↑ comment by CCC · 2015-07-21T10:45:30.077Z · LW(p) · GW(p)
There could be solutions to this, I'm sure, or at least ways of minimising the problems.
I'm sure there could be. It's not an easy problem to solve - after all, right now, there are professors in social sciences, economics, and other subjects who can tell pretty quickly whether or not a given policy is at least vaguely sensible or not. But how often are they listened to?
Also, it's not always easy to see which option is the best. If Policy A might or might not reduce crime but makes it look like everyone's trying; Policy B will reduce crime but also reduce civil liberties; Policy C will reduce the amount of crime but increase its potential lethality... then how can one tell which policy is the best?
Having said that... there should be solutions. Your proposed institute is an improvement on the status quo, and would be a good thing to set up in many countries (assuming that they can be funded).
↑ comment by VoiceOfRa · 2015-07-21T01:10:46.744Z · LW(p) · GW(p)
We need to make it so that science>ideology in prestige terms, which will be really tricky.
People tried this in the late 19th/early 20th century (look up "technocracy" if you want to learn more). That's how we got into the mess we are in now.
Replies from: skeptical_lurker, TheAncientGeek↑ comment by skeptical_lurker · 2015-07-21T22:49:16.592Z · LW(p) · GW(p)
My understanding is that the technocracy movement was more engineers than social scientists, and was not an influential movement anyway.
Anyway, the problem isn't that scientists are inherently biased; it's that if they mention certain hypotheses publicly they will be fired because of journalists.
Incidentally, I know neuro/cognitive scientists at a very left-wing university, and they believed in certain gender/racial cognitive differences, despite ideology.
↑ comment by TheAncientGeek · 2015-07-21T07:17:18.032Z · LW(p) · GW(p)
The mess where we're wealthier, living longer, etc?
Replies from: VoiceOfRa↑ comment by Gram_Stone · 2015-06-30T00:22:09.098Z · LW(p) · GW(p)
I am learning (or have learnt and am now struggling to keep up with) Spanish, German, French, Mandarin, and Ancient Greek.
I've studied Spanish for some time and would be happy to converse with you. I'm not sure if you only want to converse with native speakers. I've been wanting to learn how to talk about LessWrongian stuff in Spanish.
Replies from: Acty↑ comment by [deleted] · 2015-07-21T04:06:49.894Z · LW(p) · GW(p)
I've been in the #lesswrong IRC channel for the past couple weeks, arguing with people and delighting in the fact that I can present my arguments without first having to explain the empiricist utilitarian framework they rely upon. It's a wonderful feeling which is the exact opposite of the hair-tearing-out frustration I feel in Philosophy class, where the inferential distance is too great for me to really argue with my classmates (and when I retreat to the basics and attempt to close the inferential gap before addressing the issue, the teacher penalises or silences me for being off topic).
You seem legit. Also, wait, the #lesswrong IRC channel stopped being dead?
Replies from: Acty↑ comment by Squark · 2015-07-20T18:50:00.762Z · LW(p) · GW(p)
Hi Act, welcome!
I will gladly converse with you in Russian if you want to.
Why do you want a united utopia? Don't you think different people prefer different things? Even if we assume the ultimate utopia is uniform, wouldn't we want to experiment with different things to get there?
Would you feel "dwarfed by an FAI" if you had little direct knowledge of what the FAI is up to? Imagine a relatively omniscient and omnipotent god taking care of things on some (mostly invisible) level but never coming down to solve your homework.
Replies from: Acty↑ comment by Acty · 2015-07-21T00:31:19.051Z · LW(p) · GW(p)
--
Replies from: Squark, Squark, hairyfigment↑ comment by Squark · 2015-07-24T18:39:42.522Z · LW(p) · GW(p)
P.S.
I am dismayed that you were ambushed by the far right crowd, especially on the welcome thread.
My impression is that you are highly intelligent, very decent and admirably enthusiastic. I think you are a perfect example of the values that I love in this community and I very much want you on board. I'm sure that I personally would enjoy interacting with you.
Also, I am confident you will go far in life. Good dragon hunting!
Replies from: Acty, Lumifer, VoiceOfRa↑ comment by Squark · 2015-07-22T19:38:58.844Z · LW(p) · GW(p)
I value unity for its own sake...
I sympathize with your sentiment regarding friendship, community etc. The thing is, when everyone is friends the state is not needed at all. The state is a way of using violence or the threat of violence to resolve conflicts between people in a way which is as good as possible for all parties (in the case of egalitarian states; other states resolve conflicts in favor of the ruling class). Forcing people to obey any given system of law is already an act of coercion. Why magnify this coercion by forcing everyone to obey the same system rather than allowing any sufficiently big group of people to choose their own system?
Moreover, in the search for utopia we can go down many paths. In the spirit of the empirical method, it seems reasonable to allow people to explore different paths if we are to find the best one.
I would not actually be awfully upset if the FAI did my homework for me...
I used "homework" as a figure of speech :)
Being told "you're not smart enough to fight dragons, just sit at home and let Momma AI figure it out" would make me sad.
This might be so. However, you must consider the tradeoff between this sadness and efficiency of dragon-slaying.
So really, once superintelligence is possible and has been made, I would like to become a superintelligence.
The problem is, if you instantly go from human intelligence to far superhuman, it looks like a breach in the continuity of your identity. And such a breach might be tantamount to death. After all, what makes tomorrow's you the same person as today's you, if not the continuity between them? I agree with Eliezer that I want to be upgraded over time, but I want it to happen slowly and gradually.
Replies from: Acty↑ comment by Acty · 2015-07-25T03:18:39.889Z · LW(p) · GW(p)
I do think that some kind of organisational cooperative structure would be needed even if everyone were friends - provided there are dragons left to slay. If people need to work together on dragonfighting, then just being friends won't cut it - there will need to be some kind of team, and some people delegating different tasks to team members and coordinating efforts. Of course, if there aren't dragons to slay, then there's no need for us to work together and people can do whatever they like.
And yeah - the tradeoff would definitely need to be considered. If the AI told me, "Sorry, but I need to solve negentropy and if you try and help me you're just going to slow me down to the point at which it becomes more likely that everyone dies", I guess I would just have to deal with it. Making it more likely that everyone dies in the slow heat death of the universe is a terribly large price to pay for indulging my desire to fight things. It could be a tradeoff worth making, though, if it turns out that a significant number of people are aimless and unhappy unless they have a cause to fight for - we can explore the galaxy and fight negentropy and this will allow people like me to continue being motivated and fulfilled by our burning desire to fix things. It depends on whether people like me, with aforementioned burning desire, are a minority or a large majority. If a large majority of the human race feels listless and sad unless they have a quest to do, then it may be worthwhile letting us help even if it impedes the effort slightly.
And yeah - I'm not sure that just giving me more processor power and memory without changing my code counts as death, but simultaneously giving a human more processor power and more memory and not increasing their rationality sounds... silly and maybe not safe, so I guess it'll have to be a gradual upgrade process in all of us. I quite like that idea though - it's like having a second childhood, except this time you're learning to remember every book in the library and fly with your jetpack-including robot feet, instead of just learning to walk and talk. I am totally up for that.
Replies from: Squark↑ comment by Squark · 2015-07-27T18:30:00.015Z · LW(p) · GW(p)
I do think that some kind of organisational cooperative structure would be needed even if everyone were friends...
We don't need the state to organize. Look at all the private organizations out there.
It could be a tradeoff worth making, though, if it turns out that a significant number of people are aimless and unhappy unless they have a cause to fight for...
The cause might be something created artificially by the FAI. One idea I had is a universe with "pseudodeath" which doesn't literally kill you but relocates you to another part of the universe which results in loss of connections with all people you knew. Like in Border Guards but involuntary, so that human communities have to fight with "nature" to survive.
Replies from: g_pepper↑ comment by g_pepper · 2015-07-27T18:34:27.620Z · LW(p) · GW(p)
One idea I had is a universe with "pseudodeath" which doesn't literally kill you but relocates you to another part of the universe which results in loss of connections with all people you knew.
Sort of a cosmic witness relocation program! :).
↑ comment by hairyfigment · 2015-07-22T07:10:14.872Z · LW(p) · GW(p)
The following is pure speculation. But I imagine an FAI would begin its work by vastly reducing the chance of death, and then raising everyone's intelligence and energy levels to those of John_von_Neumann. That might allow us to bootstrap ourselves to superhuman levels with minimal guidance.
↑ comment by ChristianKl · 2015-06-29T20:12:17.372Z · LW(p) · GW(p)
I'll then be heading off to university in September 2016, unless applications go so badly that I take a gap year and reapply next year. I am dreaming of going to Cambridge to read Human, Social and Political Sciences.
Why do you dream of doing Human, Social and Political Sciences?
Replies from: Acty↑ comment by Acty · 2015-07-05T00:30:28.428Z · LW(p) · GW(p)
--
Replies from: None, ChristianKl, Lumifer, VoiceOfRa↑ comment by ChristianKl · 2015-07-05T07:44:54.740Z · LW(p) · GW(p)
Politics 1 is about democracy and how it works and whether it actually works and whether the alternatives might work.
You assume that studying politics at university gives you a good answer to that question. To me that doesn't seem true.
Look at a figure like Julian Assange, who actually plays and makes meaningful moves: Assange didn't study politics at university.
Studying politics at Cambridge on the other hand will make it easier to become an elected politician in the UK. But that's not necessarily because of the content of lectures but because of networking.
It quite often happens that young people don't speak to older, more experienced people when making their decisions about what to study. As your goal is making a difference in the world, it could be very useful to ask 80,000 Hours for coaching to make that choice: https://80000hours.org/career-advice/ You might still come out of that wanting to go to the same program at Cambridge, but you will likely have better reasons for doing so and will be less naive.
Replies from: Acty↑ comment by Acty · 2015-07-10T11:42:20.395Z · LW(p) · GW(p)
--
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-10T13:36:07.038Z · LW(p) · GW(p)
Getting elected in the UK is certainly a valid move, but it comes with buying into the status quo to the extent that you hold opinions that make you fit into a major party.
And a very good way to improve in the direction of actually having decent ideas about alternatives to representative first-past-the-post democracy might be to spend a lot of time explaining those ideas to people who subsequently describe all of their flaws at length.
I think the substantial discussion about Liquid Democracy doesn't happen inside the politics departments of universities but outside of them. A lot of 20th century and earlier political philosophy just isn't that important for building something new. It exists to justify the status quo and a place like Cambridge exists to justify the status quo.
Even inside Cambridge you likely want to spend time in student self-governance and its internal politics.
Replies from: Acty↑ comment by Acty · 2015-07-21T00:44:06.684Z · LW(p) · GW(p)
--
Replies from: Journeyman↑ comment by Journeyman · 2015-07-21T01:18:24.223Z · LW(p) · GW(p)
To some degree, the idea of a "Friendship and Science Party" has already been tried. The Mugwumps wanted to get scholars, scientists and learned people more involved in politics to improve its corrupt state. It sounds like a great idea on paper, but this is what happened:
So the Mugwumps believed that, by running a pipe from the limpid spring of academia to the dank sewer of American democracy, they could make the latter run clear again. What they might have considered, however, was that there was no valve in their pipe. Aiming to purify the American state, they succeeded only in corrupting the American mind.
When an intellectual community is separated from political power, as the Mugwumps were for a while in the Gilded Age, it finds itself in a strange state of grace. Bad ideas and bad people exist, but good people can recognize good ideas and good people, and a nexus of sense forms. The only way for the bad to get ahead is to copy the good, and vice pays its traditional tribute to virtue. It is at least reasonable to expect sensible ideas to outcompete insane ones in this "marketplace," because good sense is the only significant adaptive quality.
Restore the connection, and the self-serving idea, the meme with its own built-in will to power, develops a strange ability to thrive and spread. Thoughts which, if correct, provide some pretext for empowering the thinker, become remarkably adaptive. Even if they are utterly insane. As the Latin goes: vult decipi, decipiatur. Self-deception does not in any way preclude sincerity.
...
In particular, when the power loop includes science itself, science itself becomes corrupt. The crown jewel of European civilization is dragged in the gutter for another hundred million in grants, while journalism, our peeking impostor of the scales, averts her open eyes.
Science also expands to cover all areas of government policy, a task for which it is blatantly unfit. There are few controlled experiments in government. Thus, scientistic public policy, from economics ("queen of the social sciences") on down, consists of experiments that would not meet any standard of relevance in a truly scientific field.
Bad science is a device for laundering thoughts of unknown provenance without the conscious complicity of the experimenter.
According to this account, the more contact science has with politics, the more corrupted it becomes.
Replies from: Acty↑ comment by Acty · 2015-07-21T01:56:34.358Z · LW(p) · GW(p)
--
Replies from: ErikM↑ comment by ErikM · 2015-07-21T16:20:57.042Z · LW(p) · GW(p)
I think you missed what I see as the main point in "What they might have considered, however, was that there was no valve in their pipe. Aiming to purify the American state, they succeeded only in corrupting the American mind." Not surprising, because Moldbug (the guy quoted about the Mugwumps) is terribly long-winded and given to rhetorical flourishes. So let me try to rephrase what I see as the central objection in a format more amenable to LW:
The scientific community is not a massive repository of power, nor is it packed to the gills with masters of rhetoric. The political community consists of nothing but. If you try to run your new party by listening to the scientific community without first making the scientific community far more powerful and independent, what's likely to happen is that the political community makes a puppet of the scientific community, and then you wind up running your politics by listening to a puppet of the political community.
To give a concrete relatable figure: The US National Science Foundation receives about 7.5 billion dollars a year from the US Congress. (According to the NSF, they are the funding source for approximately 24 percent of all federally supported basic research conducted by America's colleges and universities, which suggests around 30 billion federal dollars are out there just for basic research.)
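(A quick back-of-the-envelope check of that inference, taking the two quoted figures at face value; the variable names below are just for illustration:)

```python
# Sanity check of the "roughly 30 billion" inference above, taking the two
# quoted figures at face value (~$7.5B NSF budget covering ~24% of federally
# supported basic research at US colleges and universities).
nsf_budget = 7.5e9   # dollars per year
nsf_share = 0.24     # NSF's fraction of federal basic-research funding

implied_total = nsf_budget / nsf_share
print(f"Implied federal basic-research total: ${implied_total / 1e9:.1f}B per year")
# -> about $31B per year, consistent with the "30 billion" figure in the text
```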
The more you promote "Do what the NSF says", the more Congress is going to be interested in using some of those billions of dollars to lean on the NSF and other similar organizations, so that you will be promoting "Do what Congress says" at arm's remove. No overt dishonesty need be involved. Just little things like hiring sympathetic scientists, discouraging controversial research, asking for a survey of a specific metric, etc.
Suppose you make a prediction that a law will decrease the crime rate. You pass the law. You wait a while and see. Did the crime rate go down? Well, how are you measuring crime rate? Which crimes are you counting? To take an example discussed on Less Wrong a while ago, if you use the murder rate as proxy for crime rate over the past few decades, you are going to severely undercount crime because of improvements in medical technology that make worse wounds more survivable.
Obviously you can fix this particular metric now that I've pointed it out. But can you spot and fix such issues in advance faster and better than people throwing around 30 billion dollars and with a massive vested interest in retaining policy control?
When trying to solve something like whether P=NP, you can throw more and brighter scientists at the problem and trust that the problem will remain the same. But the problem of trying to establish science-based policy, particularly when "advocating loads of funding for science", gets harder as it gets more important and you throw more people at it. This is a Red Queen's Race where you have to keep running just to stay in place, because you're not dealing with a mindless question that has an objective answer floating out there, you're dealing with an opposed social force with lots of minds and money that learns from its own mistakes and figures out how to corrupt better, and with more plausible deniability.
Replies from: Acty↑ comment by Acty · 2015-07-21T16:34:41.946Z · LW(p) · GW(p)
Thank you - this statement of the idea was much, much clearer to me. :)
It seems like the solution - well, a possible part of one possible solution - is to make the social science research institute that everyone listens to have some funding source which is completely independent from the political party in power. That would hopefully make the scientific community more independent. We now need to make it more powerful, which is... more difficult. I think a good starting point would be to try and raise the prestige associated with a social science career (and thus the prestige given to individual social scientists and the amount of social capital they feel they have to spend on being controversial) and possibly give some rhetoric classes to the social science research institute's spokesperson. Assuming the scientists are rational scientists, this gives them politician-power with which to persuade people of their correct conclusions. (Of course, if they have incorrect conclusions influenced by their ideologies, this is... problematic. How do we fix this? I dunno yet. But this is the very beginning of a solution; I've not been thinking about the problem very long, and I am just one kid with a relatively high IQ. If multiple people work together on a solution, I'm sure they'll come up with much more and much better stuff.)
↑ comment by Lumifer · 2015-07-10T14:43:40.743Z · LW(p) · GW(p)
"The more you believe you can create heaven on earth the more likely you are to set up guillotines in the public square to hasten the process." -- James Lileks
Replies from: Acty↑ comment by Acty · 2015-07-19T18:25:50.647Z · LW(p) · GW(p)
--
Replies from: Lumifer↑ comment by Lumifer · 2015-07-20T01:19:04.780Z · LW(p) · GW(p)
Which of these seems like it will inevitably lead to setting up guillotines in the public square?
That thing:
The reason I want to fix the world is, well, the world contains stuff like war, and poverty, and people who buy plasma TVs for their dog's kennel instead of donating to charity, and kids who can't get an education because they're busy fetching filthy water and caring for their siblings who are dying from drinking the dirty water, and people who abuse kids or rape people or blow up civilians, and malaria and cancer and dementia, and lack of funding for people who are trying to cure diseases and stop ageing, and sexism and racism and homophobia and transphobia, and preachers who help spread AIDS by trying to limit access to contraception, and all of those things make me REALLY REALLY ANGRY.
Besides, we're talking about "more likely", not "inevitably".
Replies from: Acty↑ comment by Acty · 2015-07-20T23:09:46.997Z · LW(p) · GW(p)
--
Replies from: Journeyman, Lumifer↑ comment by Journeyman · 2015-07-21T00:55:47.678Z · LW(p) · GW(p)
There is historical precedent for groups advocating equality, altruism, and other humanitarian causes to do a lot of damage and start guillotining people. You would probably be horrified and step off the train before it got to that point. But it's important to understand the failure modes of egalitarian, altruistic movements.
The French Revolution and the Russian Revolution / Soviet Union ran into these failure modes, where they started killing lots of people. After slavery was abolished in the US, around one quarter of the freed slaves died.
These events were all horrible disasters from a humanitarian perspective. Yet I doubt that the original French Revolutionaries planned from the start to execute the aristocracy, and then execute many of their own factions for supposedly being counter-revolutionaries. I don't think Marx ever intended for the Russian Revolution and Soviet Union to have a high death toll. I don't think the original abolitionists ever expected the bloody Civil War followed by 25% of the former slaves dying.
Perhaps, once a movement for egalitarianism and altruism got started, an ideological death spiral caused so much polarization that it was impossible to stop people from going overboard and extending the movement's mandate in a violent direction. Perhaps at first, they tried to persuade their opponents to help them towards the better new world. When persuasion failed, they tried suppression. And when suppression failed, someone proposed violence, and nobody could stop them in such a polarized environment.
Somehow, altruism can turn pathological, and well-intentioned interventions have historically resulted in disastrous side-effects or externalities. That's why some people are cynical about altruistic political attitudes.
Replies from: Acty, Username↑ comment by Acty · 2015-07-21T01:04:24.253Z · LW(p) · GW(p)
--
Replies from: Journeyman, Lumifer↑ comment by Journeyman · 2015-07-21T01:40:26.940Z · LW(p) · GW(p)
You yourself are unlikely to start the French Revolution, but somehow, well-intentioned people seem to get swept up in those movements. Even teachers, doctors, and charity workers can contribute to an ideological environment that goes wrong; this doesn't mean that they started it, or that they supported it every step of the way. But they were part of it.
The French Revolution and guillotines is indeed a rarer event. But if pathological altruism can result in such large disasters, then it's quite likely that it can also backfire in less spectacular ways that are still problematic.
As you point out, many interventions to change the world risk going wrong and making things worse, but it would be a shame to completely give up on making the world a better place. So what we really want is interventions that are very well-thought out, with a lot of care towards the likely consequences, taking into account the lessons of history for similar interventions.
Replies from: Acty↑ comment by Acty · 2015-07-21T02:02:10.465Z · LW(p) · GW(p)
"So what we really want is interventions that are very well-thought out, with a lot of care towards the likely consequences, taking into account the lessons of history for similar interventions."
That is exactly why I want to study social science. I want to do lots of experiments and research and reading and talking and thinking before I dare try and do any world-changing. That's why I think social science is important and valuable, and we should try very hard to be rational and careful when we do social science, and then listen to the conclusions. I think interventions should be well-thought-through, evidence-based, and tried and observed on a small scale before implemented on a large scale. Thinking through your ideas about laws/policies/interventions and gathering evidence on whether they might work or not - that's the kind of social science that I think is important and the kind I want to do.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-21T02:18:05.431Z · LW(p) · GW(p)
You're ignoring the rather large pachyderm in the room which goes by the name of Values.
Differences in politics and policies are largely driven not by disagreements over the right way to reach the goal, but by decisions about which goals to pursue and what trade-offs are acceptable as the price. Most changes in the world have both costs and benefits; you need to balance them to decide whether a change is worth it, and the balancing necessarily involves deciding what is more important and what is less important.
For example, imagine a trade-off: you can decrease the economic inequality in your society by X% by paying the price of slowing down the economic growth by Y%. Science won't tell you whether that price is acceptable -- you need to ask your values about it.
Replies from: Username, Acty↑ comment by Username · 2015-07-25T21:16:41.521Z · LW(p) · GW(p)
Differences in politics and policies are largely driven not by disagreements over the right way to reach the goal, but by decisions about which goals to pursue and what trade-offs are acceptable as the price.
Disagreements including this one? It sounds as though you are saying in a conversation such as this one, you are more focused on working to achieve your values than trying to figure out what's true about the world... like, say, Arthur Chu. Am I reading you correctly in supporting something akin to Arthur Chu's position, or do I misunderstand?
Given how irrational people can be about politics, I'd guess that in many cases apparent "value" differences boil down to people being mindkilled in different ways. As rationalists, the goal is to have a calm, thoughtful, evidence-based discussion and figure out what's true. Building a map and unmindkilling one another is a collaborative project.
There are times when there is a fundamental value difference, but my feeling is that this is the possibility to be explored last. And if you do want to explore it, you should ask clarifying values questions (like "do you give the harms from a European woman who is raped and a Muslim woman who is raped equal weight?") in order to suss out the precise nature of the value difference.
Anyway, if you do agree with Arthur Chu that the best approach is to charge ahead imposing your values, why are you on Less Wrong? There's an entire internet out there of people having Arthur Chu style debates you could join. Less Wrong is a tiny region of the internet where we have Scott Alexander style debates, and we'd like to keep it that way.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-27T15:58:25.345Z · LW(p) · GW(p)
you are more focused on working to achieve your values than trying to figure out what's true about the world
That's a false dichotomy. Epistemic rationality and working to achieve your values are largely orthogonal and are not opposed to each other. In fact, epistemic rationality is useful to achieving your values because of instrumental rationality.
I'd guess that in many cases apparent "value" differences boil down to people being mindkilled in different ways.
So you do not think that many people have sufficiently different and irreconcilable values?
I wonder how you are going to distinguish "true" values from "mindkill-generated" values. Take some random ISIS fighter in Iraq: what are his "true" values?
my feeling is that this is the possibility to be explored last.
I disagree, I think it's useful to figure out value differences before spending a lot of time on figuring out whether we agree about how the world works.
...where we have...
Who's that "we"? It is a bit ironic that you felt the need to use the pseudonymous handle to claim that you represent the views of all LW... X-)
↑ comment by Acty · 2015-07-21T02:55:56.616Z · LW(p) · GW(p)
In my (admittedly limited, I'm young) experience, people don't disagree on whether that tradeoff is worth it. People disagree on whether the tradeoff exists. I've never seen people arguing about "the tradeoff is worth it" followed by "no it isn't". I've seen a lot of arguments about "We should decrease inequality with policy X!" followed by "But that will slow economic growth!" followed by "No it won't! Inequality slows down economic growth!" followed by "Inequality is necessary for economic growth!" followed by "No it isn't!" Like with Obamacare - I didn't hear any Republicans saying "the tradeoff of raising my taxes in return for providing poor people with healthcare is an unacceptable tradeoff" (though I am sometimes uncharitable and think that some people are just selfish and want their taxes to stay low at any cost), I heard a lot of them saying "this policy won't increase health and long life and happiness the way you think it will".
"Is this tradeoff worth it?" is, indeed, a values question and not a scientific question. But scientific questions (or at least, factual questions that you could predict the answer to and be right/wrong about) could include: Will this policy actually definitely cause the X% decrease in inequality? Will this policy actually definitely cause the Y% slowdown in economic growth? Approximately how large is X? Approximately how much will a Y% slowdown affect the average household income? How high is inflation likely to be in the next few years? Taking that expected rate of inflation into account, what kind of things would the average family no longer be able to afford / not become able to afford, presuming the estimated decrease in average household income happens? What relation does income have to happiness anyway? How much unhappiness does inequality cause, and how much unhappiness do economic recessions cause? Does a third option (beyond implement this policy / don't implement it) exist, like implementing the policy but also implementing another policy that helps speed economic growth, or implementing some other radical new idea? Is this third option feasible? Can we think up any better policies which we predict might decrease inequality without slowing economic growth? If we set a benchmark that would satisfy our values, like percentage of households able to afford Z valuable-and-life-improving item, then which policy is likely to better satisfy that benchmark - economic growth so that more people on average can afford Z, or inequality reduction so that more poor people become average enough to afford an Z?
But, of course, this is a factual question. We could resolve this by doing an experiment, maybe a survey of some kind. We could take a number of left-wing policies, and a number of right-wing policies, and survey members of the "other tribe" on "why do you disagree with this policy?" and give them options to choose between like "I think reducing inequality is more important than economic growth" and "I don't think reducing inequality will decrease economic growth, I think it will speed it up". I think there are a lot of issues where people disagree on facts.
Like prisons - you have people saying "prisons should be really nasty and horrid to deter people from offending", and you have people saying "prisons should be quite nice and full of education and stuff so that prisoners are rehabilitated and become productive members of society and don't reoffend", and both of those people want to bring the crime rate down, but what is actually best at bringing crime rates down - nasty prisons or nice prisons? Isn't that a factual question, and couldn't we do some science (compare a nice prison, nasty prison, and average-kinda-prison control group, compare reoffending rates for ex-inmates of those prisons, maybe try an intervention where kids are deterred from committing crime by visiting nasty prison and seeing what it's like versus kids who visit the nicer prison versus a control group who don't visit a prison and then 10 years later see what percentage of each group ended up going to prison) to see who is right? And wouldn't doing the science be way better than ideological arguments about "prisoners are evil people and deserve to suffer!" versus "making people suffer is really mean!" since what we actually all want and agree on is that we would like the crime rate to come down?
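(To make the shape of that prison comparison concrete, here is a minimal sketch with entirely invented numbers; the group sizes, reoffending counts, and the simple pooled z-test are illustrative only, not a claim about how such a study would actually be run:)

```python
# Hypothetical reoffending comparison across three prison regimes.
# All counts are made up; a real study would need random assignment,
# long follow-up, and far more careful controls.
from math import sqrt

groups = {
    "nice prison":    (500, 140),  # (ex-inmates followed up, number who reoffended)
    "average prison": (500, 190),
    "nasty prison":   (500, 230),
}

def two_proportion_z(n1, x1, n2, x2):
    """Pooled two-proportion z statistic for comparing reoffending rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

for name, (n, x) in groups.items():
    print(f"{name}: {x / n:.0%} reoffended")

z = two_proportion_z(*groups["nice prison"], *groups["nasty prison"])
print(f"nice vs nasty prison: z = {z:.2f}")
# |z| greater than about 1.96 would be conventionally "significant" at the 5% level.
```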
So we should ask the scientific question: "Which policies are most likely to lead to the biggest reductions in inequality and crime and the most economic growth, keep the most members of our population in good health for the longest, and provide the most cost-efficient and high-quality public services?" If we find the answer, and some of those policies seem to conflict, then we can consult our values to see what tradeoff we should make. But if we don't do the science first, how do we even know what tradeoff we're making? Are we sure the tradeoff is real / necessary / what we think it is?
In other words, a question of "do we try an intervention that costs £10,000 and is 100% effective, or do we do the 80% effective intervention that costs £80,000 and spend the money we saved on something else?" is a values question. But "given £10,000, what's the most effective intervention we could try that will do the most good?" is a scientific question and one that I'd like to have good, evidence-based answers to. "Which intervention gives most improvement unit per money unit?" is a scientific question and you could argue that we should just ask that question and then do the optimal intervention.
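(And as a trivial illustration of the "improvement per money unit" framing, using the hypothetical costs and effectiveness figures from the paragraph above:)

```python
# Cost-effectiveness of the two hypothetical interventions mentioned above.
interventions = {
    "A": {"cost_gbp": 10_000, "effectiveness": 1.00},
    "B": {"cost_gbp": 80_000, "effectiveness": 0.80},
}

for name, i in interventions.items():
    per_10k = i["effectiveness"] / i["cost_gbp"] * 10_000
    print(f"Intervention {name}: {per_10k:.2f} effectiveness units per £10,000")
# Computing this ratio is the scientific part; deciding which trade-offs
# between cost and effectiveness are acceptable is still the values part.
```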
Replies from: Lumifer↑ comment by Lumifer · 2015-07-21T03:08:41.857Z · LW(p) · GW(p)
In my (admittedly limited, I'm young) experience, people don't disagree on whether that tradeoff is worth it. People disagree on whether the tradeoff exists.
The solution to this problem is to find smarter people to talk to.
We could resolve this by doing an experiment
Experiment? On live people? Cue in GLaDOS :-P
Replies from: Username
This was a triumph!
I'm making a note here:
"Huge success!!"
It's hard to overstate
My satisfaction.
Aperture science:
We do what we must
Because we can.
For the good of all of us.
Except the ones who are dead.
But there's no sense crying
Over every mistake.
You just keep on trying
Till you run out of cake.
And the science gets done.
And you make a neat gun
For the people who are
Still alive.
↑ comment by Username · 2015-07-25T22:04:45.739Z · LW(p) · GW(p)
Experiment? On live people?
It sounded to me like she recommended a survey. Do you consider surveys problematic?
Replies from: Lumifer↑ comment by Lumifer · 2015-07-27T16:07:24.620Z · LW(p) · GW(p)
Surveys are not experiments and Acty is explicitly talking about science with control groups, etc. E.g.
compare a nice prison, nasty prison, and average-kinda-prison control group, compare reoffending rates for ex-inmates of those prisons, maybe try an intervention where kids are deterred from committing crime by visiting nasty prison and seeing what it's like versus kids who visit the nicer prison versus a control group who don't visit a prison and then 10 years later see what percentage of each group ended up going to prison
Replies from: Vaniver
↑ comment by Vaniver · 2015-07-27T16:51:09.903Z · LW(p) · GW(p)
Surveys are not experiments
According to every IRB I've been in contact with, they are. Here's Cornell's, for example.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-27T17:01:13.806Z · LW(p) · GW(p)
I'm talking common sense, not IRB legalese.
According to the US Federal code, a home-made pipe bomb is a weapon of mass destruction.
Replies from: advael↑ comment by advael · 2015-07-27T17:41:56.661Z · LW(p) · GW(p)
A survey can be a reasonably designed experiment that simply gives us a weaker result than lots of other kinds of experiments.
There are many questions about humans that I would expect to be correlated with the noises humans make when given a few choices and asked to answer honestly. In many cases, that correlation is complicated or not very strong. Nonetheless, it's not nothing, and might be worth doing, especially in the absence of a more-correlated test we can do given our technology, resources, and ethics.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-27T17:48:35.555Z · LW(p) · GW(p)
What I had in mind was the difference between passive observation and actively influencing the lives of subjects. I would consider "surveys" to be observation and "experiments" to be or contain active interventions. Since the context of the discussion is kinda-sorta ethical, this difference is meaningful.
Replies from: advael↑ comment by advael · 2015-07-27T19:00:36.006Z · LW(p) · GW(p)
What intervention would you suggest to study the incidence of factual versus terminal-value disagreements in opposing sides of a policy decision?
Replies from: Lumifer↑ comment by Lumifer · 2015-07-27T19:14:09.837Z · LW(p) · GW(p)
I am not sure where this question is coming from. I am not suggesting any particular studies or ways of conducting them.
Maybe it's worth going back to the post from which this subthread originated. Acty wrote:
If we set a benchmark that would satisfy our values ... then which policy is likely to better satisfy that benchmark...? But, of course, this is a factual question. We could resolve this by doing an experiment, maybe a survey of some kind.
First, Acty is mistaken in thinking that a survey will settle the question of which policy will actually satisfy the value benchmark. We're talking about real consequences of a policy and you don't find out what they are by conducting a public poll.
And second, if you do want to find the real consequences of a policy, you do need to run an intervention (aka an experiment) -- implement the policy in some limited fashion and see what happens.
Replies from: advael↑ comment by advael · 2015-07-27T21:01:09.250Z · LW(p) · GW(p)
Oh, I guess I misunderstood. I read it as "We should survey to determine whether terminal values differ (e.g. 'The tradeoff is not worth it') or whether factual beliefs differ (e.g. 'There is no tradeoff')"
But if we're talking about seeing whether policies actually work as intended, then yes, probably that would involve some kind of intervention. Then again, that kind of thing is done all the time, and properly run, can be low-impact and extremely informative.
Replies from: Acty↑ comment by Lumifer · 2015-07-21T01:50:06.070Z · LW(p) · GW(p)
But the choice is between trying your best, accepting that you might fail, and doing nothing.
Failure often comes with worse consequences than just an unchanged status quo.
↑ comment by Username · 2015-07-25T18:11:37.847Z · LW(p) · GW(p)
My model is that these revolutions created a power vacuum that got filled up. Whenever a revolution creates a power vacuum, you're kinda rolling the dice on the quality of the institutions that grow up in that power vacuum. The United States had a revolution, but it got lucky in that the institutions resulting from that revolution turned out to be pretty good, good enough that they put the US on the path to being the world's dominant power a few centuries later. The US could have gotten unlucky if local military hero George Washington had declared himself king.
Insofar as leftist revolutions create worse outcomes, I think it's because the leftist creed is so anti-power that leftists don't carefully think through the incentives for institutions to manage that power. So the stable equilibrium they tend to drift towards is a sociopathic leader who can talk the talk about egalitarianism while viciously oppressing anyone who contests their power (think Mao or Stalin). Anyone intelligent can see that the sociopathic leader is pushing cartoon egalitarianism, and that's why these leaders are so quick to go for the throats of society's intellectuals. Pervasive propaganda takes care of the rest of the population.
Leftism might work for a different species such as bonobos, but human avarice needs to be managed through carefully designed incentive structures. Sticking your head in the sand and pretending avarice doesn't exist doesn't work. Eliminating it doesn't work because avaricious humans gain control of the elimination process. (Or, to put it another way, almost everyone who likes an idea like "let's kill all the avaricious humans" is themselves avaricious at some level. And by trying to put this plan in to action, they're creating a new "defect/defect" equilibrium where people compete for power through violence, and the winners in this situation tend not to be the sort of people you want in power.)
↑ comment by Lumifer · 2015-07-21T01:43:46.943Z · LW(p) · GW(p)
Okay, if other altruists aren't motivated by being angry about pain and suffering and wanting to end pain and suffering, how are they motivated?
Ask them, I'm not an altruist. But I heard it may have something to do with the concept of compassion.
I genuinely don't see how wanting to help people is correlated with ending up killing people.
Historically, it correlates quite well. You want to help the "good" people and in order to do this you need to kill the "bad" people. The issue, of course, is that definitions of "good" and "bad" in this context... can vary, and rather dramatically too.
I think setting up guillotines in the public square is much more likely if you go around saying "I'm the chosen one and I'm going to singlehandedly design a better world".
If we take the metaphor literally, setting up guillotines in the public square was something much favoured by the French Revolution, not by Napoleon Bonaparte.
If I noticed myself causing any death or suffering I would be very sad, and sit down and have a long think about a way to stop doing that.
Bollocks. You want to change the world and change is never painless. Tearing down chunks of the existing world, chunks you don't like, will necessarily cause suffering.
Replies from: Acty, ChristianKl↑ comment by Acty · 2015-07-21T02:16:29.422Z · LW(p) · GW(p)
--
Replies from: None, Lumifer↑ comment by [deleted] · 2015-07-21T03:09:10.195Z · LW(p) · GW(p)
Don't mind Lumifer. He's one of our resident Anti-Spirals.
But, here's a question: if you're angry at the Bad, why? Where's your hope for the Good?
Of course, that's something our culture has a hard time conceptualizing, but hey, you need to be able to do it to really get anywhere.
Replies from: Username, Acty, John_Maxwell_IV↑ comment by Username · 2015-07-21T10:11:36.092Z · LW(p) · GW(p)
And yet he's consistently one of the highest karma earners in the 30-day karma leaderboard. It seems to be mainly due to his heavy participation... his 80% upvote rate is not especially high. I find him incredibly frustrating to engage with (though I try not to let it show). I can't help but think that he is driving valuable people away; having difficult people dominate the conversation can't be a good thing.
(To clarify, I'm not trying to speak out against the perspectives people like Lumifer and VoiceOfRa offer, which I am generally sympathetic to. I think their perspectives are valuable. I just wish they would make a stronger effort to engage in civil & charitable discussion, and I think having people who don't do this and participate heavily is likely to have pernicious effects on LW culture in the long term. In general, I agree with the view that Paul Graham has advanced re: Hacker News moderation: on a group rationality level, in an online forum context, civility & niceness end up being very important.)
Replies from: None↑ comment by [deleted] · 2015-07-21T12:33:46.026Z · LW(p) · GW(p)
To clarify, I'm not trying to speak out against the perspectives people like Lumifer and VoiceOfRa offer, which I am generally sympathetic to. I think their perspectives are valuable.
Really? Their "perspective" appears to consist in attempting to tear down any hopes, beliefs, or accomplishments someone might have, to the point of occasionally just making a dumb comment out of failure to understand substantive material.
Of course, I stated that a little too disparagingly, but see below...
In general, I agree with the view that Paul Graham has advanced re: Hacker News moderation: on a group rationality level, in an online forum context, civility & niceness end up being very important.
Not just civility and niceness, but affirmative statements. That is, if you're trying to achieve group epistemic rationality, it is important to come out and say what you actually believe. Statistical learning from a training set of entirely positive or entirely negative examples is known to be extraordinarily difficult, in fact nigh impossible (modulo "blah blah Solomonoff") to do in efficient time.
I think a good group norm is, "Even if you believe something controversial, come out and say it, because only by stating hypotheses and examining evidence can we ever update." Fully General Critique actually induces a uniform distribution across everything, which means one knows precisely nothing.
Besides which, nobody actually has a uniform distribution built into their real expectations in everyday life. They just adopt that stance when it comes time to talk about Big Issues, because they've heard of how Overconfidence Is Bad without having gotten to the part where Systematic Underconfidence Makes Reasoning Nigh-Impossible.
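(For what it's worth, the "uniform distribution means knowing precisely nothing" point can be made quantitative in a few lines; the hypothesis count and the "opinionated" probabilities below are arbitrary:)

```python
# Shannon entropy is maximal for the uniform distribution: refusing to commit
# to any hypothesis leaves you with the most possible remaining uncertainty.
from math import log2

def entropy_bits(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

n = 8
uniform = [1 / n] * n  # fully agnostic across 8 hypotheses
opinionated = [0.65, 0.15, 0.10, 0.04, 0.03, 0.02, 0.005, 0.005]

print(f"uniform:     {entropy_bits(uniform):.2f} bits (maximum is log2(8) = 3)")
print(f"opinionated: {entropy_bits(opinionated):.2f} bits")
# Stating hypotheses and updating on evidence is what brings the entropy down.
```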
↑ comment by Acty · 2015-07-21T05:09:17.263Z · LW(p) · GW(p)
I think that anger at the Bad and hope for the Good are kind of flip sides of the same coin. I have a vague idea of how the world should be, and when the world does not conform to that idea, it irritates me. I would like a world full of highly rational and happy people cooperating to improve one another's lives, and I would like to see the subsequent improvements taking effect. I would like to see bright people and funding being channeled into important stuff like FAI and medicine and science, everyone working for the common good of humanity, and a lot of human effort going towards the endeavour of making everyone happy. I would like to see a human species which is virtuous enough that poverty is solved by everyone just sharing what they need, and war is solved because nobody wants to start violence. I want people to work together and be rational, basically, and I've already seen that work on a small scale so I have a lot of hope that we can upgrade it to a societal scale. I also have a lot of hope for things like cryonics/Alcor bringing people back to life eventually, MIRI succeeding in creating FAI, and effective altruism continuing to gain new members until we start solving problems from sheer force of numbers and funding.
But I try not to be too confident about exactly what a Good world looks like; a) I don't have any idea what the world will look like once we start introducing crazy things like superintelligence, b) that sounds suspiciously like an ideology and I would rather do lots of experiments on what makes people happy and then implement that, and c) a Good world would have to satisfy people's preferences and I'm not a powerful enough computer to figure out a way to satisfy 7 billion sets of preferences.
Replies from: CCC, None↑ comment by CCC · 2015-07-21T10:49:35.397Z · LW(p) · GW(p)
I would like a world full of highly rational and happy people cooperating to improve one another's lives
If you can simply improve the odds of people cooperating in such a manner, then I think that you will bring the world you envision closer. And the better you can improve those odds, the better the world will be.
Replies from: Acty↑ comment by Acty · 2015-07-21T10:58:12.988Z · LW(p) · GW(p)
--
Replies from: CCC↑ comment by CCC · 2015-07-22T11:45:36.173Z · LW(p) · GW(p)
Do those goals sound like world-improving ones?
Let us consider them, one by one.
I want to figure out ways to improve cooperation between people and groups.
This means that the goals of the people and groups will be more effectively realised. It is world-improving if and only if the goals towards which the group works are world-improving.
A group can be expected, on the whole, to work towards goals which appear to be of benefit to the group. The best way to ensure that the goals are world-improving, then, might be to (a) ensure that the "group" in question consists of all intelligent life (and not merely, say, Brazilians) and (b) ensure that the group's goals are carefully considered and inspected for flaws by a significant number of people.
(b) is probably best accomplished by encouraging voluntary cooperation, as opposed to unquestioning obedience of orders. (a) simply requires ensuring that it is well-known that bigger groups are more likely to be successful, and punishing the unfair exploitation of outside groups.
On the whole, I think this is most likely a world-improving goal.
I want to do research on cultural attitudes towards altruism and ways to get more people to be altruistic/charitable
Altruism certainly sounds like a world-improving goal. Historically, there have been a few missteps in this field - mainly when one person proposes a way to get people to be more altruistic, but then someone else implements it and does so in a way that ensures that he reaps the benefit of everyone else's largesse.
So, likely to be world-improving, but keep an eye on the people trying to implement your research. (Be careful if you implement it yourself - have someone else keep a close eye on you in that circumstance).
I want to try and get LW-style critical thinking classes introduced in schools from an early age so as to raise the sanity waterline
Critical thinking is good. However, again, take care in the implementation; simply teaching students what to write in the exam is likely to do much less good than actually teaching critical thinking. Probably the most important thing to teach students is to ask questions and to think about the answers - and the traditional exam format makes it far too easy to simply teach students to try to guess the teacher's password.
If implemented properly, likely to be world-improving.
...that's my thoughts on those goals. Other people will likely have different thoughts.
↑ comment by [deleted] · 2015-07-21T13:14:27.625Z · LW(p) · GW(p)
But I try not to be too confident about exactly what a Good world looks like; a) I don't have any idea what the world will look like once we start introducing crazy things like superintelligence, b) that sounds suspiciously like an ideology and I would rather do lots of experiments on what makes people happy and then implement that, and c) a Good world would have to satisfy people's preferences and I'm not a powerful enough computer to figure out a way to satisfy 7 billion sets of preferences.
And these are all very virtuous things to say, but you're a human, not a computer. You really ought to at least lock your mind on some positive section of the nearby-possible and try to draw motivation from that (by trying to make it happen).
Replies from: Acty↑ comment by Acty · 2015-07-21T14:13:28.146Z · LW(p) · GW(p)
--
Replies from: Richard_Kennaway, ErikM, ChristianKl, Lumifer↑ comment by Richard_Kennaway · 2015-07-25T08:00:19.404Z · LW(p) · GW(p)
My intuitions say that specialism increases output, so we should have an all-controlling central state with specialist optimal-career-distributors and specialist psychologist day-planners who hand out schedules and to-do lists to every citizen every day which must be followed to the letter on pain of death and in which the citizens have zero say.
"Greetings, Comrade Acty. Today the Collective has decreed that you..." Do these words make your heart skip a beat in joyous anticipation, no matter how they continue?
Have you read "Brave New World"? "1984"? "With Folded Hands"? Do those depict societies you find attractive?
To me, this seems like a happy wonderful place that I would very much like to live in.
Exinanition is an attractive fantasy for some, but personal fantasies are not a foundation to build a society on.
What I can do is think: a lot of aspects of the current world (war, poverty, disease etc) make me really angry and seem like they also hurt other people other than me, and if I were to absolutely annihilate those things, the world would look like a better place to me and would also better satisfy others' preferences. So I'm going to do that.
You are clearly intelligent, but do you think? You have described the rich intellectual life at your school, but how much of that activity is of the sort that can solve a problem in the real world, rather than a facility at making complex patterns out of ideas? The visions that you have laid out here merely imagine problems solved. People will not do as you would want? Then they will be made to. How? "On pain of death." How can the executioners be trusted? They will be tested to ensure they use the power well.
How will they be tested? Who tests them? How does this system ever come into existence? I'm sure your imagination can come up with answers to all these questions, that you can slot into a larger and larger story. But it would be an exercise in creative fiction, an exercise in invisible dragonology.
And all springing from "My intuitions say that specialism increases output."
I'm going to pursue the elimination of suffering until the suffering stops.
Exterminate all life, then. That will stop the suffering.
I'm sure you're really smart, and will go far. I'm concerned about the direction, though. Right now, I'm looking at an Unfriendly Natural Intelligence.
Replies from: Acty, skeptical_lurker↑ comment by Acty · 2015-07-25T10:02:39.576Z · LW(p) · GW(p)
--
Replies from: Jiro↑ comment by Jiro · 2015-07-26T17:33:59.331Z · LW(p) · GW(p)
That's why I don't want to make such a society. I don't want to do it. It is a crazy idea that I dreamed up by imagining all the things that I want, scaled up to 11. It is merely a demonstration of why I feel very strongly that I should not rely on the things I want
Wait a minute. You don't want them, or you do want them but shouldn't rely on what you want?
And I'm not just nitpicking here. This is why people are having bad reactions. On one level, you don't want those things, and on another you do. Seriously mixed messages.
Also, if you are physically there with your foot on someone's toe, that triggers your emotional instincts that say that you shouldn't cause pain. If you are doing things which cause some person to get hurt in some faraway place where you can't see it, that doesn't. I'm sure that many of the people who decided to use terrorism as an excuse for NSA surveillance won't step on people's toes or hurt any cats. If anything, their desire not to hurt people makes it worse. "We have to do these things for everyone's own good, that way nobody gets hurt!"
Replies from: Acty↑ comment by Acty · 2015-07-26T18:55:12.146Z · LW(p) · GW(p)
--
Replies from: None, Jiro↑ comment by [deleted] · 2015-07-27T00:38:35.994Z · LW(p) · GW(p)
Currently my thought processes go something more like: "When I think about the things that make me happy, I come up with a list like meritocracy and unity and productivity and strong central authority. I don't come up with things like freedom. Taking those things to their logical conclusion, I should propose a society designed like so... wait... Oh my god that's terrifying, I've just come up with a society that the mere description of causes other people to want to run screaming, this is bad, RED ALERT, SOMETHING IS WRONG WITH MY BRAIN. I should distrust my moral intuitions. I should place increased trust in ideas like doing science to see what makes people happiest and then doing that, because clearly just listening to my moral intuitions is a terrible way to figure out what will make other people happy. In fact, before I do anything likely to significantly change anyone else's life, I should do some research or test it on a small scale in order to check whether or not it will make them happy, because clearly just listening to what I want/like is a terrible idea."
I'm not so sure you should distrust your intuitions here. I mean, let's be frank, the same people who will rave about how every left-wing idea from liberal feminism to state socialism is absolutely terrible, evil, and tyrannical will, themselves, manage to reconstruct most of the same moral intuitions if left alone on their own blogs. I mean, sure, they'll call it "neoreaction", but it's not actually that fundamentally different from Stalinism. On the more moderate end of the scale, you should take account of the fact that anti-state right-wing ideologies in Anglo countries right now are unusually opposed to state and hierarchy across the space of all human societies ever, including present-day ones.
POINT BEING, sometimes you should distrust your distrust of certain intuitions, and ask simply, "How far is this intuition from the mean human across history?" If it's close, actually, then you shouldn't treat it as, "Something [UNUSUAL] is wrong with my brain." The intuition is often still wrong, but it's wrong in the way most human intuitions are wrong rather than because you have some particular moral defect.
So if the "motivate yourself by thinking about a great world and working towards it" is a terrible option for me because my brain's imagine-great-worlds function is messed up, then clearly I need to look for an alternative motivation. And "motivate yourself by thinking about clearly evil things like death and disease and suffering and then trying to eliminate them" is a good alternative.
See, the funny thing is, I can understand this sentiment, because my imagine-great-worlds function is messed-up in exactly the opposite way. When I try to imagine great worlds, I don't imagine worlds full of disciplined workers marching boldly forth under the command of strong, wise, meritorious leadership for the Greater Good -- that's my "boring parts of Shinji and Warhammer 40k" memories.
Instead, my "sample great worlds" function outputs largely equal societies in which people relate to each-other as friends and comrades, the need to march boldly forth for anything when you don't really want to has been long-since abolished, and people spend their time coming up with new and original ways to have fun in the happy sunlight, while also re-terraforming the Earth, colonizing the rest of the Solar System, and figuring out ways to build interstellar travel (even for digitized uploads) that can genuinely survive the interstellar void to establish colonies further-out.
I consider this deeply messed-up because everyone always tells me that their lives would be meaningless if not for the drudgery (which is actually what the linked post is trying to refute).
I am deeply disturbed to find that a great portion of "the masses" or "the real people, outside the internet" seem to, on some level, actually feel that being oppressed and exploited makes their lives meaningful, and that freedom and happiness are value-destroying, and that this is what's at the root of all that reactionary rhetoric about "our values" and "our traditions"... but I can't actually bring myself to say that they ought to be destroyed for being wired that way.
I just kinda want some corner of the world to have your and my kinds of wiring, where Progress is supposed to achieve greater freedom, happiness, and entanglement over time, and we can come up with our own damn fates rather than getting terminally depressed because nobody forced one on us.
Likewise, I can imagine that a lot of these goddamn Americans are wired in such a way that "being made to do anything by anyone else, ever" seems terminally evil to them. Meh, give them a planetoid.
↑ comment by Jiro · 2015-07-26T23:38:38.428Z · LW(p) · GW(p)
On some level, you do need a motivation, so it would be foolish to say that anger is a bad reason to do things. I would certainly never tell you to do only things you are indifferent about.
On another level, though, doing things out of strong anger causes you to ignore evidence, think short term, ignore collateral damage, etc. just as much as doing things because they make you happy does. You think that describing the society that will make you feel happy makes people run screaming? Describing the society that would alleviate your anger will make people run screaming too--in fact it already has made people run screaming in this very thread.
Or at least, it has a bad track record in the real world. Look at the things that people have done because they are really angry about terrorism.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-07-27T11:41:07.492Z · LW(p) · GW(p)
Look at the things that people have done because they are really angry about terrorism.
And for one level less meta, look at the terrorism that people have done because they are so angry about something.
↑ comment by skeptical_lurker · 2015-07-25T13:09:26.729Z · LW(p) · GW(p)
Have you read "Brave New World"? "1984"? "With Folded Hands"? Do those depict societies you find attractive?
Of course, while most people would not want to live in BNW, most characters in BNW would not want to live in our society.
↑ comment by ErikM · 2015-07-21T20:39:15.225Z · LW(p) · GW(p)
My intuitions say that specialism increases output, so we should have an all-controlling central state with specialist optimal-career-distributors and specialist psychologist day-planners who hand out schedules and to-do lists to every citizen every day which must be followed to the letter on pain of death and in which the citizens have zero say.
To me, this seems like a happy wonderful place that I would very much like to live in. Unfortunately, everyone else seems to strongly disagree.
I think there's an implicit premise or two that you may have mentally included but failed to express, running along the lines of:
The all-controlling state is run by completely benevolent beings who are devoted to their duty and never make errors.
Sans such a premise, one lazy bureaucrat cribbing his cubicle neighbor's allocations, or a sloppy one switching the numbers on two careers, can cause a hell of a lot of pain by assigning an inappropriate set of tasks for people to do. Zero say and the death penalty for disobedience then makes the pain practically irremediable. A lot of the reason for weak and ineffective government is trying to mitigate and limit government's ability to do terribly terribly wicked things, because governments are often highly skilled at doing terribly terribly wicked things, and in unique positions to do so, and can do so by minor accident. You seem to have ignored the possibility of anything going wrong when following your intuition.
Moreover, there's a second possible implicit premise:
These angels hold exactly and only the values shared by all mankind, and correct knowledge about everything.
Imagine someone with different values or beliefs in charge of that all-controlling state with the death penalty. For instance, I have previously observed that Boko Haram has a sliver of a valid point in their criticism of Western education when noting that it appears to have been a major driver in causing Western fertility rates to drop below replacement and show no sign of recovery. Obviously you can't have a wonderful future full of happy people if humans have gone extinct, therefore the Boko Haram state bans Western education on pain of death. For those already poisoned by it, such as you, you will spend your next ten years remedially bearing and rearing children and you are henceforth forbidden access to any and all reading material beyond instructions on diaper packaging. Boko Haram is confident that this is the optimal career for you and that they're maximizing the integral of human happiness over time, despite how much you may scream in the short term at the idea.
With such premises spelled out, I predict people wouldn't object to your ideal world so much as they'd object to the grossly unrealistic prospect. But without such, you're proposing a totalitarian dictatorship and triggering a hell of a lot of warning signs and heuristics and pattern-matching to slavery, tyranny, the Soviet Union, and various other terrible bad things where one party holds absolute power to tell other people how to live their life.
"But it's a benevolent dictatorship", I imagine you saying. Pull the other one, it has bells on. The neoreactionaries at least have a proposed incentive structure to encourage the dictator to be benevolent in their proposal to bring back monarchy. (TL;DR taxes go into the king's purse giving the king a long planning horizon) What have you got? Remember, you are one in seven billion people, you will almost certainly not be in charge of this all-powerful state if it's ever implemented, and when you do your safety design you should imagine it being in the hands of randoms at the least, and of enemies if you want to display caution.
Replies from: Acty↑ comment by Acty · 2015-07-25T02:53:42.334Z · LW(p) · GW(p)
--
Replies from: hairyfigment, David_Bolin↑ comment by hairyfigment · 2015-07-25T03:45:58.563Z · LW(p) · GW(p)
There are reasons to suspect the tests would not work. "It would be nice to think that you can trust powerful people who are aware that power corrupts. But this turns out not to be the case." (Content Note: killing, mild racism.)
↑ comment by David_Bolin · 2015-07-25T07:47:22.599Z · LW(p) · GW(p)
If you are "procrastinate-y" you wouldn't be able to survive this state yourself. Following a set schedule every moment for the rest of your life is very, very difficult and it is unlikely that you would be able to do it, so you would soon be dead yourself in this state.
↑ comment by ChristianKl · 2015-07-21T15:01:48.245Z · LW(p) · GW(p)
An ideology would just bias my science and make me worse.
I don't know you well enough to say, but it's quite easy to pretend that one has no ideology. For clear thinking it's very useful to understand one's own ideological positions.
There's also a difference between doing science and scientism, which is about banner-wearing.
Replies from: Acty↑ comment by Acty · 2015-07-21T15:58:35.029Z · LW(p) · GW(p)
Oh, I definitely have some kind of inbuilt ideology - it's just that right now, I'm consciously trying to suppress/ignore it. It doesn't seem to converge with what most other humans want. I'd rather treat it as a bias, and try and compensate for it, in order to serve my higher level goals of satisfying people's preferences and increasing happiness and decreasing suffering and doing correct true science.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-21T17:54:07.704Z · LW(p) · GW(p)
Ignoring something and working around a bias are two different things.
↑ comment by Lumifer · 2015-07-21T21:02:02.416Z · LW(p) · GW(p)
we should have an all-controlling central state with specialist optimal-career-distributors and specialist psychologist day-planners who hand out schedules and to-do lists to every citizen every day which must be followed to the letter on pain of death and in which the citizens have zero say. Nobody would have property, you would just contribute towards the state of human happiness when the state told you to and then you would be assigned the goods you needed by the state. To me, this seems like a happy wonderful place that I would very much like to live in
Why do you call inhabitants of such a state "citizens"? They are slaves.
To me, this seems like a happy wonderful place that I would very much like to live in
Interesting. So you would like to be a slave.
Unfortunately, everyone else seems to strongly disagree.
...and do you understand why?
Replies from: Acty↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-07-21T08:46:09.544Z · LW(p) · GW(p)
Don't mind Lumifer. He's one of our resident Anti-Spirals.
And yet he's consistently one of the highest karma earners in the 30-day karma leaderboard. It seems to be mainly due to his heavy participation... his 80% upvote rate is not especially high. I find him incredibly frustrating to engage with (though I try not to let it show). I can't help but think that he is driving valuable people away; having difficult people dominate the conversation can't be a good thing. I've tried to talk to him about this.
Hypothesized failure mode for online forums: Online communities are disproportionately populated by disagreeable people who are driven online because they have trouble making real-life friends. They tend to "win" long discussions because they have more hours to invest in them. Bystanders generally don't care much about long discussions because it's an obscure and wordy debate they aren't invested in, so for most extended discussions, there's no referee to call out bad conversational behavior. The end result: the bulldog strategy of being the most determined person in the conversation ends up "winning" more often than not.
(To clarify, I'm not trying to speak out against the perspectives people like Lumifer and VoiceOfRa offer, which I am generally sympathetic to. I think their perspectives are valuable. I just wish they would make a stronger effort to engage in civil & charitable discussion, and I think having people who don't do this and participate heavily is likely to have pernicious effects on LW culture in the long term. In general, I agree with the view that Paul Graham has advanced re: Hacker News moderation: on a group rationality level, in an online forum context, civility & niceness end up being very important.)
↑ comment by Lumifer · 2015-07-21T02:30:24.898Z · LW(p) · GW(p)
Burning fury does, and if it makes me help people... whatever works, right?
There is a price to be paid. If you use fury and anger too much, you will become a furious and angry kind of person. Embrace the Dark Side and you will become one with it :-/
I'm just a kid who wants to grow up and study social science and try and help people.
Maybe :-) The reason you've met a certain... lack of enthusiasm about your anger for good causes is that you're not the first kid who wanted to help people and was furious about the injustice and the blindness of the world. And, let's just say, it does not always lead to good outcomes.
Replies from: Acty↑ comment by ChristianKl · 2015-07-22T22:33:49.355Z · LW(p) · GW(p)
The French Revolution wanted to design a better world to the point of introducing the 10-day week. Napoleon just wanted to conquer.
↑ comment by VoiceOfRa · 2015-07-12T03:13:09.433Z · LW(p) · GW(p)
sexism and racism and homophobia and transphobia, and preachers who help spread AIDS by trying to limit access to contraception, and all of those things make me REALLY REALLY ANGRY. If I think about them too hard I see red.
In other words, you're completely mindkilled about the topics in question and thus your opinions about them are likely to be poorly thought out. For example, when you think about it, most of what is called "racism/sexism/etc." is actually perfectly valid Bayesian inference (frequently leading to true conclusions that some people would prefer not to believe). As for AIDS, are you also angry at people opposing traditional morality since they also help spread AIDS?
Frankly, given your list, it looks like you merely stumbled upon the causes fashionable where you grew up and implicitly assumed that since everyone is so worked up about them they must be good causes. Consider that if you had grown up differently you would feel just as angry at anyone standing in the way of saving people's souls.
Replies from: Acty↑ comment by Acty · 2015-07-19T18:14:39.117Z · LW(p) · GW(p)
--
Replies from: None, Jiro, VoiceOfRa↑ comment by [deleted] · 2015-07-21T04:01:55.149Z · LW(p) · GW(p)
At age 6, I quote my younger self, I wanted "to follow Jesus' way". I have improved away from my upbringing and the fashionable things where I grew up. I came to lefty conclusions all on my ownsies, because they make sense.
Ah, so you're a socialist?
Replies from: Acty↑ comment by Acty · 2015-07-21T05:45:35.723Z · LW(p) · GW(p)
Eh, I'm not sure I'm an anything-ist. Socialist ideas make a lot of sense to me, but really I'm a read-a-few-more-books-and-go-to-university-and-then-decide-ist. If I have to stand behind any -ist, it's going to be "scientist". I want to do research to find out which policies most effectively make people happy, and then I want to implement those policies regardless of whether they fall in line with the ideologies that seem attractive to me.
But yeah, I do think that it is morally wrong to let people suffer and morally right to make people happy, and I think you can create a lot of utility by taking money from people who already have a lot (leaving them with enough to buy food and maybe preventing them from going on holiday / buying a nice car) and giving it to people who have nothing (meaning they have enough money for food and education so they can survive and try and change their situation). So I agree with taxing people and using the money to provide universal healthcare, housing, food, etc. Apparently that makes me a socialist.
Replies from: None, VoiceOfRa↑ comment by [deleted] · 2015-07-21T13:19:37.518Z · LW(p) · GW(p)
So I agree with taxing people and using the money to provide universal healthcare, housing, food, etc. Apparently that makes me a socialist.
The correct term is social-democrat, actually. Among the different systems, social democracy has very rarely received full-throated support, but seems to have done among the best at handling the complexity of the values and value-systems that humans want to be materially represented in our societies.
(And HAHAHA!, finally I can just come out and say that without feeling the need to explain reams and reams of background material on both value-complexity and left-wing history!)
Eh, I'm not sure I'm an anything-ist. Socialist ideas make a lot of sense to me, but really I'm a read-a-few-more-books-and-go-to-university-and-then-decide-ist. If I have to stand behind any -ist, it's going to be "scientist". I want to do research to find out which policies most effectively make people happy, and then I want to implement those policies regardless of whether they fall in line with the ideologies that seem attractive to me.
Oh, that's all well and good. I just tend to bring up socialism because I think that "left-wing politics" is more of a hypothesis space of political programs than a single such program (i.e. the USSR), but that "bad vibes" in the West from the USSR (and lots and lots of right-wing propaganda) have tended to succeed in getting people to write off that entire hypothesis space before examining the evidence.
I do think that an ideally rational government would be "more" left-wing than right-wing, as current alignments stand, but I too think it would in fact be mixed.
Replies from: Acty, Lumifer↑ comment by Acty · 2015-07-21T14:25:30.458Z · LW(p) · GW(p)
--
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2015-07-22T07:31:10.055Z · LW(p) · GW(p)
Every system that works is covert or overt meritocracy. Social democracy works, so ....
Replies from: Good_Burning_Plastic↑ comment by Good_Burning_Plastic · 2015-07-24T22:56:33.229Z · LW(p) · GW(p)
Misformatted link at the end of the sentence?
↑ comment by Lumifer · 2015-07-21T16:37:11.600Z · LW(p) · GW(p)
Among the different systems, social democracy has very rarely received full-throated support, but seems to have done among the best at handling the complexity of the values and value-systems that humans want to be materially represented in our societies.
...among the various socio-political systems the one I prefer is the best one because it is the best... X-)
Replies from: None↑ comment by [deleted] · 2015-07-21T23:24:29.316Z · LW(p) · GW(p)
Actually, in voting and activism, I'm a full-throated socialist. Social democracy is weaksauce next to a fully-developed socialism, but we don't have a fully-developed socialism, so you're often stuck with the weaksauce.
And as an object-level defense: social democracy, as far as I can tell, does the best at aggregating value information about diverse domains of life and keeping any one optimization criterion from running roughshod over everything else that people happen to care about.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-21T23:36:06.562Z · LW(p) · GW(p)
I'm a full-throated socialist
For which value of the word "socialism"?
And as an object-level defense
You just repeated your assertion, you didn't provide any arguments or evidence.
Replies from: Dahlen↑ comment by Dahlen · 2015-07-22T00:39:59.196Z · LW(p) · GW(p)
You know, you don't have to jump on him and demand that he defends his socialist stance merely because he expressed it and tried to discuss it with someone else. It's not like he's answerable to you for being a socialist. And this is not the first time I've seen you and others intervene in a discussion (that otherwise didn't involve or concern you) solely to call people out on leftist ideas. What the hell are you doing that for?
Replies from: None, Lumifer↑ comment by Lumifer · 2015-07-22T01:05:07.062Z · LW(p) · GW(p)
you don't have to jump on him and demand that he defends his socialist stance merely because he expressed it
Demand? I can't demand anything. This is an internet forum, all eli_sennesh needs to do is just ignore my comment. That seems easy enough.
solely for calling out people on leftist ideas
It's not like it is something to be ashamed of, is it? If he says he is a "full-throated socialist" I get curious about what that means. The last place that said it implemented a "fully-developed socialism" was the USSR, but I don't think that's what eli_sennesh means.
↑ comment by VoiceOfRa · 2015-07-21T06:18:31.923Z · LW(p) · GW(p)
But yeah, I do think that it is morally wrong to let people suffer and morally right to make people happy, and I think you can create a lot of utility by taking money from people who already have a lot (leaving them with enough to buy food and maybe preventing them from going on holiday / buying a nice car) and giving it to people who have nothing (meaning they have enough money for food and education so they can survive and try and change their situation).
That would increase utility in the very short term, agreed. Of course, it would destroy the motivation to work, thus leading to a massive drop in utility shortly thereafter.
Replies from: Acty↑ comment by Acty · 2015-07-21T06:34:25.740Z · LW(p) · GW(p)
Well, "providing universal healthcare and welfare will lead to a massive drop in motivation to work" is a scientific prediction. We can find out whether it is true by looking at countries where this already happens - taxes pay for good socialised healthcare and welfare programs - like the UK and the Nordics, and seeing if your prediction has come true.
The UK unemployment rate is 5.6%; the United States' is 5.3%. Not a particularly big difference, nothing indicating that the UK's universal free healthcare has created some kind of horrifying utility drop because there's no motivation to work. We can take another example if you like. Healthcare in Iceland is universal, and Iceland's unemployment rate is 4.3% (it also has the highest life expectancy in Europe).
This is not an ideological dispute. This is a dispute of scientific fact. Does taxing people and providing universal healthcare and welfare lead to a massive drop in utility by destroying the motivation to work (and meaning that people don't work)? This experiment has already been performed - the UK and Iceland have universal healthcare and provide welfare to unemployed citizens - and, um, the results are kind of conclusive. The world hasn't ended over here. Everyone is still motivated to work. Unemployment rates are pretty similar to those in the US where welfare etc isn't very good and there's not universal healthcare. Your prediction didn't come true, so if you're a rationalist, you have to update now.
Replies from: Journeyman, VoiceOfRa↑ comment by Journeyman · 2015-07-21T07:15:46.574Z · LW(p) · GW(p)
Scandinavia and the UK are relatively ethnically homogenous, high-trust, and productive populations. Socialized policies are going to work relatively better in these populations. Northwest European populations are not an appropriate reference class to generalize about the rest of the world, and they are often different even from other parts of Europe.
Socialized policies will have poorer results in more heterogenous populations. For example, imagine that a country has multiple tribes that don't like each other; they aren't going to like supporting each other's members through welfare. As another example, imagine that multiple populations in a country have very different economic productivity. The people who are higher in productivity aren't going to enjoy their taxes being siphoned off to support other groups who aren't pulling their weight economically. These situations are a recipe for ethnic conflict.
Icelanders may be happy with their socialized policies now, but imagine if you created a new nation with a combination of Icelanders and Greeks called Icegreekland. The Icelanders would probably be a lot more productive than the Greeks and unhappy about needing to support them through welfare. Icelanders might be more motivated to work and pay taxes if it's creating a social safety net for their own community, but less excited about working to pay taxes to support Greeks. And who can blame them?
There is plenty of valid debate about the likely consequences of socialized policies for populations other than homogenous NW European populations. Whoever told you these issues were a matter of scientific fact was misleading you. This is an excellent example of how the siren's call of politically attractive answers leads people to cut corners during their analysis so it goes in the desired direction, whether they are aware they are doing it or not.
Generalizing what works for one group as appropriate for another is a really common failure mode through history which hurts real people. See the whole "democracy in Iraq" thing as another example.
↑ comment by VoiceOfRa · 2015-07-21T06:39:30.570Z · LW(p) · GW(p)
Well, "providing universal healthcare and welfare will lead to a massive drop in motivation to work" is a scientific prediction.
I wasn't talking about providing people with universal healthcare. (That merely leads to a somewhat dysfunctional healthcare system). I was talking about taking so much from the "haves" that you "[prevent] them from going on holiday / buying a nice car".
Word of advice, try actually reading what I wrote before replying next time. Yes, I realize this is hard to do while one is angry; however, that's an argument for not using anger as your primary motivation.
Replies from: Good_Burning_Plastic, None↑ comment by Good_Burning_Plastic · 2015-07-24T22:20:56.694Z · LW(p) · GW(p)
(That merely leads to a somewhat dysfunctional healthcare system).
And yet somehow western European healthcare systems manage to result in similar or better outcomes than the US one at less than half the cost.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-07-25T19:06:29.985Z · LW(p) · GW(p)
Of course, I wouldn't say that the US system is free-market, because medicine is heavily regulated. I read somewhere that only one company has a licence to produce methamphetamine for ADHD, giving them a state-enforced monopoly.
Healthcare seems to be one of the most difficult areas to run under a free market.
↑ comment by [deleted] · 2015-07-21T12:08:04.230Z · LW(p) · GW(p)
I would approach this from a different angle. It is fairly well known that the measurable Gini level of inequality is not primarily caused by the people who are upper-middle or reasonably wealthy but by the 1% of the 1% (so 0.01%). So why are taxes even progressive for the 99.99%? They achieve just about nothing in reducing Gini, they piss off the upper-middle who may be unable to buy a nice car, and if that whole burden (of tax rate progressivity) were shifted over to the 0.01% they'd still be buying whole fleets of cars. So it just makes no sense.
However, I also think it is because the 0.01% and their wealth are extremely mobile. The sad truth is that modern taxation is based on a flypaper principle: tax those whom you can because they stay put, and that is the upper-middle.
Replies from: Nornagest↑ comment by Nornagest · 2015-07-22T00:32:25.637Z · LW(p) · GW(p)
So why are taxes even progressive for the 99.99%? They achieve just about nothing in reducing Gini, they piss off the upper-middle who may be unable to buy a nice car...
The purpose of progressive taxation is not to reduce the Gini coefficient; it's to efficiently extract funding and to sound good to fairness-minded voters. With regard to the former, there's a lot more people around the 90th percentile than the 99.99th, more of their money comes in easily-taxable forms, and they're generally more tractable than those far above or below. They may be unable to buy a nicer car after taxes, and it may piss them off, but they're not going to be rioting in the streets over it, and they can't afford lobbyists or many of the more interesting tax dodges.
With regard to the latter, your average voter has never heard of Gini nor met anyone truly wealthy, but you can expect them to be acutely aware of their managers and their slightly richer neighbors. Screwing Bill Gates might make good pre-election press, but screwing Bill Lumbergh who parks his Porsche in the handicapped spots every day is viscerally satisfying and stays that way.
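(An illustrative aside, not from the original thread: the claim that measured inequality is dominated by a tiny top fraction can be sanity-checked with a toy Gini calculation. The sketch below uses made-up income figures purely for illustration; the point is only how strongly a single extreme income moves the standard Gini formula.)

    def gini(incomes):
        # Gini coefficient via the rank formula (x sorted ascending, ranks 1..n):
        # G = 2 * sum(rank * x) / (n * sum(x)) - (n + 1) / n
        xs = sorted(incomes)
        n = len(xs)
        total = sum(xs)
        if total == 0:
            return 0.0
        weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
        return 2.0 * weighted / (n * total) - (n + 1) / n

    equal_population = [40_000] * 10_000            # everyone earns the same (hypothetical figures)
    one_outlier = [40_000] * 9_999 + [400_000_000]  # same population plus one 0.01% outlier (hypothetical)

    print(gini(equal_population))  # 0.0: perfect equality
    print(gini(one_outlier))       # roughly 0.5: the single outlier holds about half of all income

Under these assumed numbers, adding one person in ten thousand moves the coefficient from 0 to about 0.5, which is the shape of the claim being debated above, whatever the real-world figures turn out to be.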
↑ comment by Jiro · 2015-07-20T16:11:10.475Z · LW(p) · GW(p)
I don't know why you automatically leap to assuming that I am really angry about, say, people reading studies comparing male and female IQs when what I'm actually angry about is, say, people beating LGBTQA+ individuals to death in dark alleys (which I am presuming you would not defend).
Because the former is what a lot of other people using your rhetoric mean. And assuming that you mean what a lot of other people using your rhetoric mean is a reasonable assumption.
Also, even interpreting what you said as "I am angry about people beating LGBTQA+ individuals", it sounds like you are angry about it as long as it happens at all, regardless of its prevalence. Terrorism really happens too, but disproportionate anger against terrorism that ignores its prevalence has led to (or has been an excuse for) some pretty awful things.
Replies from: Acty↑ comment by Acty · 2015-07-21T03:35:08.816Z · LW(p) · GW(p)
--
Replies from: Lumifer, Jiro, VoiceOfRa↑ comment by Lumifer · 2015-07-21T03:41:00.555Z · LW(p) · GW(p)
How much of my rhetoric have you actually had the chance to observe?
Well, right here is a nice example:
that reveals a set of values which are kinda disturbing to me. It signals that you care about whether you can read IQ-by-race-and-gender studies more than you care about genocide and acid attacks and lynchings
Would you care to be explicit about the connection between IQ-by-race studies and genocide..?
Replies from: Acty↑ comment by Acty · 2015-07-21T05:16:02.085Z · LW(p) · GW(p)
There is no connection. I'm not trying to imply a connection. The only connection is that they are both things possibly implied by the word "racism".
I'm trying to say that when I say "I oppose racism", intending to signal "I oppose people beating up minorities", and people misunderstand badly enough that they think I mean "I oppose IQ-by-race studies", it disturbs me. If people know that "I oppose racism" could mean "I oppose genocide", but choose to interpret it as "I oppose IQ-by-race studies", that worries me. Those things are completely different and if you think that I'm more likely to oppose IQ-by-race studies than I am to oppose genocide, or if you think IQ-by-race studies are more important and worthy of being upset about than genocide, something has gone very wrong here.
A sentence like "I oppose racism" could mean a lot of different things. It could mean "I think genocide is wrong", "I think lynchings are wrong", "I think people choosing white people for jobs over black people with equivalent qualifications is wrong", or "I think IQ by race studies should be banned". Automatically leaping to the last one and getting very angry about it is... kind of weird, because it's the one I'm least likely to mean, and the only one we actually disagree about. You seriously want to reply to "I oppose racism" with "but IQ by race studies are valid Bayesian inference!" and not "yes, I agree that lynching people is very wrong"? Why? Are IQ by race studies more important to your values than eliminating genocide and lynchings? Do you genuinely think that I am more likely to oppose IQ-by-race studies than I am to oppose lynchings? The answer to neither of those questions should be yes.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-21T06:29:49.039Z · LW(p) · GW(p)
I'm trying to say that when I say "I oppose racism", intending to signal "I oppose people beating up minorities", and people misunderstand badly enough that they think I mean "I oppose IQ-by-race studies", it disturbs me.
That's because most people who say "I oppose racism" mean the latter, and no one except you means the former. That's largely because most people oppose beating people up for no good reason and thus they don't feel the need to constantly go about saying so.
Replies from: Good_Burning_Plastic↑ comment by Good_Burning_Plastic · 2015-07-21T07:09:28.377Z · LW(p) · GW(p)
no one except you means the former
I don't think so.
↑ comment by Jiro · 2015-07-21T03:59:15.969Z · LW(p) · GW(p)
Racism and sexism and transphobia and homophobia have a lot of effects. They run the gamut, from racism causing literal genocides and the murders of millions of people, to a vaguely insulting slur being used behind someone's back
The same is true for terrorism, but if someone came here saying "I'm really angry at terrorism and we have to do something", you'd be justified in thinking that doing what they want might not turn out well.
Can we apply the principle of charity, and establish that we agree on certain things, before we leap to yell at one another?
I'm sure we can agree that terrorism is bad, too. In fact, I'm sure we can agree that Islamic terrorism specifically is bad. So being really angry at it is likely to produce good results, right?
Replies from: Acty↑ comment by Acty · 2015-07-21T05:28:54.432Z · LW(p) · GW(p)
I am very angry about terrorism. I think terrorism is a very bad thing and we should eliminate it from the world if we can.
Being very angry about terrorism =/= thinking that a good way to solve the problem is to randomly go kill the entire population of the Middle East in the name of freedom (and oil). I hate terrorism and would prevent it if I could. In fact, I hate people killing each other so much, I think we should think rationally about the best way to eliminate it utterly (whilst causing fewer deaths than it causes) and then do that.
Replies from: Jiro, VoiceOfRa, VoiceOfRa↑ comment by Jiro · 2015-07-21T06:36:16.744Z · LW(p) · GW(p)
If you see someone else very angry about terrorism, though, wouldn't you think there's a good chance that they support (or can be easily led into supporting) anti-terrorism policies with bad consequences? Even if you personally can be angry at terrorism without wanting to do anything questionable, surely you recognize that is commonly not true for other people?
It's the same for racism.
Replies from: Acty↑ comment by Acty · 2015-07-21T06:55:16.168Z · LW(p) · GW(p)
I think that there's a good chance in general that most people can be led into supporting policies with bad consequences. I don't think higher levels of idiocy are present in people who are annoyed about racism and terrorism compared with those who aren't. The kind of people who say "on average people with black skin are slightly less smart, therefore let's bring back slavery and apartheid" are just as stupid and evil, if not stupider and eviler, than the people who support burning down the whole Middle East in order to get rid of terrorism.
Replies from: David_Bolin, Jiro↑ comment by David_Bolin · 2015-07-21T09:41:37.449Z · LW(p) · GW(p)
Caricatures such as describing people who disagree with you as saying "let's bring back slavery" and supporting "burning down the whole Middle East" are not productive in political discussions.
Replies from: Acty↑ comment by Acty · 2015-07-21T09:55:13.287Z · LW(p) · GW(p)
I'm not trying to describe the people who disagree with me as wanting to bring back slavery or supporting burning down the whole Middle East; that isn't my point and I apologise if I was unclear.
As I understood it, the argument levelled against me was that: people who say they're really angry about terrorism are often idiots who hold idiotic beliefs, like, "let's send loads of tanks to the Middle East and kill all the people who might be in the same social group as the terrorists and that will solve everything!" and in the same way, people who say they're really angry about racism are the kind of people who hold idiotic beliefs like "let's ban all science that has anything to do with race and gender!" and therefore it was reasonable of them to assume, when I stated that I was opposed to racism, that I was the latter kind of idiot.
To which my response is that many people are idiots, both people who are angry about terrorism and people who aren't, people who are angry about racism and people who aren't. There are high levels of idiocy in both groups. Being angry about terrorism and racism still seems perfectly appropriate and fine as an emotional arational response, since terrorism and racism are both really bad things. I think the proper response to someone saying "I hate terrorism" is "I agree, terrorism is a really bad thing", not "But drone strikes against 18 year olds in the middle east kill grandmothers!" (even if that is a true thing) and similarly, the proper response to someone saying "I hate racism" is "I agree, genocide and lynchings are really bad", not "But studies about race and gender are perfectly valid Bayesian inference!" (even if that is a true thing).
↑ comment by Jiro · 2015-07-21T07:34:26.148Z · LW(p) · GW(p)
The kind of people who say "on average people with black skin are slightly less smart, therefore let's bring back slavery and apartheid" are just as stupid and evil, if not stupider and eviler, than the people who support burning down the whole Middle East in order to get rid of terrorism.
That compares racists to anti-terrorists, not anti-racists to anti-terrorists.
↑ comment by VoiceOfRa · 2015-07-21T05:49:10.481Z · LW(p) · GW(p)
Being very angry about terrorism =/= thinking that a good way to solve the problem is to randomly go kill the entire population of the Middle East in the name of freedom (and oil).
You do realize no one thinks that. In particular that wasn't the position Jiro was arguing against.
↑ comment by VoiceOfRa · 2015-07-21T06:31:13.377Z · LW(p) · GW(p)
I am very angry about terrorism. I think terrorism is a very bad thing and we should eliminate it from the world if we can.
Then why wasn't it included along with racism/sexism/etc. in your list of things you're angry about in the ancestor?
Replies from: Acty↑ comment by Acty · 2015-07-21T07:17:17.245Z · LW(p) · GW(p)
I don't know, maybe because I was randomly listing some things that I'm angry about to explain why I'm motivated to try and improve the world, not making a thorough and comprehensive list of everything I think is wrong?
Could also fit under "war", which I listed, and "death", which I listed.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-21T07:51:30.419Z · LW(p) · GW(p)
I don't know, maybe because I was randomly listing some things that I'm angry about to explain why
So what can I conclude from the things you found salient enough to include and the things you didn't? Especially since your list correlates a lot better with what it is currently fashionable to be angry about than with any reasonable measure of how much disutility those things produce.
↑ comment by VoiceOfRa · 2015-07-21T05:23:19.667Z · LW(p) · GW(p)
Racism and sexism and transphobia and homophobia have a lot of effects. They run the gamut, from racism causing literal genocides and the murders of millions of people,
False beliefs in equality are also responsible for millions of people being dead, and in fact have a much higher body count than racism.
Replies from: Acty↑ comment by Acty · 2015-07-21T05:42:41.027Z · LW(p) · GW(p)
--
Replies from: None, VoiceOfRa↑ comment by [deleted] · 2015-07-21T12:32:41.880Z · LW(p) · GW(p)
An excellent way to stop people from being killed is to make them strong or get them protected by someone who is strong. Strong in a broad sense here, from courage to coolness under pressure etc.
Here is a problem. Being a strong protector correlates with holding that long list of anti-social-justice attitudes or bigotries (transphobia and so on), because the list reduces to either disliking weakness or distrusting difference / having strong ingroup loyalty, and these traits go together (a tribal warrior would have all of them).
Here is a solution. Basically, a moderate, reciprocal bigotocracy. Accept a higher-status, somewhat elevated (i.e. clearly unequal) social role for the strong-protector type, i.e. that of traditional men, in return for them actively protecting all the other groups from coming to serious harm. The other groups will have to accept having lower social status, and it will be hard on their pride, but they will be safer. This can be made official and perhaps more palatable by conscripting straight males (everybody claiming genderqueer status getting an exemption) and expecting some kind of community-protection role after the service, in return for elevated social status and respect. Note: this was the basic model of most European countries up to the most recent times, with status-patriarchy and male privilege explicitly deriving from the sacrifice of conscription.
This is not easy to swallow. However, there do not seem to be many other options. You cannot have strong protectors who are 100% PC because then they will have no fighting spirit. Without strong protectors, all you can do is hope for a utopia and hope the whole Earth adopts it, or else any basic tribe with gusto will take you over.
But I think a compromise model of not-100%-complete equality with a protector role provided in return should be able to work, as this has always been the traditional civilized model. In recent years it was abandoned for being oppressive, and perhaps it was, but perhaps there is a way to find a compromise inside it.
Replies from: ChristianKl, Acty↑ comment by ChristianKl · 2015-07-21T15:59:30.044Z · LW(p) · GW(p)
You cannot have strong protectors who are 100% PC because then they will have no fighting spirit.
Policemen don't need fighting spirit to be able to go after violent criminals. Being PC is no problem for them.
Replies from: Vaniver↑ comment by Vaniver · 2015-07-21T16:52:11.248Z · LW(p) · GW(p)
Being PC is no problem for them.
Eh... Rotherham?
Replies from: ChristianKl, TheAncientGeek↑ comment by ChristianKl · 2015-07-21T18:14:17.206Z · LW(p) · GW(p)
The last time I read an article on Rotherham, even the Telegraph said that the officers in question were highly chauvinistic and therefore didn't really follow the usual ideal of being PC.
At the same time, reading articles about Rotherham still triggers my "this story doesn't make sense; the facts on the ground are likely to be different from the mainstream media reports I'm reading" instincts. Have you read the actual report about it in depth?
Replies from: Journeyman, VoiceOfRa↑ comment by Journeyman · 2015-07-21T21:38:34.779Z · LW(p) · GW(p)
(trigger warning for a bunch of things, including rape and torture)
The Rotherham scandal is very well-documented on Wikipedia. There have been multiple independent reports, and I recommend reading this summary of one of the reports by the Guardian. This event is a good case study because it is easily verifiable; it's not just right-wing sources and tabloids here.
What we know:
- Around 1,400 girls were sexually abused in Rotherham, many of them lower-class white girls, but also Pakistani girls
- Most of the perpetrators were Muslim Pakistani men, though it seems like other Middle-Eastern and Roma men were also involved
- The political and multiculturalist environment slowed down the reporting of this tragedy until eventually it got out
To substantiate that last claim, you can check out one of the independent reports from Rotherham's council website:
By far the majority of perpetrators were described as 'Asian' by victims, yet throughout the entire period, councillors did not engage directly with the Pakistani-heritage community to discuss how best they could jointly address the issue. Some councillors seemed to think it was a one-off problem, which they hoped would go away. Several staff described their nervousness about identifying the ethnic origins of perpetrators for fear of being thought racist; others remembered clear direction from their managers not to do so. ...
The issue of race, regardless of ethnic group, should be tackled as an absolute priority if it is known to be a significant factor in the criminal activity of organised abuse in any local community. There was little evidence of such action being taken in Rotherham in the earlier years. Councillors can play an effective role in this, especially those representing the communities in question, but only if they act as facilitators of communication rather than barriers to it. One senior officer suggested that some influential Pakistani-heritage councillors in Rotherham had acted as barriers...
In her 2006 report, she stated that 'it is believed by a number of workers that one of the difficulties that prevent this issue [CSE] being dealt with effectively is the ethnicity of the main perpetrators'.
She also reported in 2006 that young people in Rotherham believed at that time that the Police dared not act against Asian youths for fear of allegations of racism. This perception was echoed at the present time by some young people we met during the Inquiry, but was not supported by specific examples.
Several people interviewed expressed the general view that ethnic considerations had influenced the policy response of the Council and the Police, rather than in individual cases. One example was given by the Risky Business project Manager (1997- 2012) who reported that she was told not to refer to the ethnic origins of perpetrators when carrying out training. Other staff in children’s social care said that when writing reports on CSE cases, they were advised by their managers to be cautious about referring to the ethnicity of the perpetrators...
Issues of ethnicity related to child sexual exploitation have been discussed in other reports, including the Home Affairs Select Committee report, and the report of the Children’s Commissioner. Within the Council, we found no evidence of children’s social care staff being influenced by concerns about the ethnic origins of suspected perpetrators when dealing with individual child protection cases, including CSE. In the broader organisational context, however, there was a widespread perception that messages conveyed by some senior people in the Council and also the Police, were to 'downplay' the ethnic dimensions of CSE. Unsurprisingly, frontline staff appeared to be confused as to what they were supposed to say and do and what would be interpreted as 'racist'. From a political perspective, the approach of avoiding public discussion of the issues was ill judged.
And there you have it: concerns about racism hampered the investigation. Authorities encouraged a coverup of the ethnic dimensions of the problem. Of course, there were obviously other institutional failures here in addition to political correctness. This report is consistent with the mainstream media coverage. And this is the delicate, officially accepted report: I imagine that the true story is worse.
When a story is true, but it doesn't "make sense," that could be a sign that you are dealing with a corrupted map. I initially had the same reaction as you, that this can't be true. I think that's a very common reaction to have, the first time you encounter something that challenges the reigning political narratives. Yet upon further research, this event is not unusual or unprecedented. Following links on Wikipedia, we have the Rochdale sex gang, the Derby sex gang, the Oxford sex gang, the Bristol sex gang, and the Telford sex gang. These are all easily verifiable cases, and the perpetrators are usually people from Muslim immigrant backgrounds.
Sexual violence by Muslim immigrants is a serious social problem in the UK, and the multicultural political environment makes it hard to crack down on. Bad political ideas have real consequences which result in real people getting hurt at a large scale. These events represent a failure of the UK elites to protect rule of law. Since civilization is based on rule of law, this is a very serious problem.
Replies from: Acty, skeptical_lurker, ChristianKl↑ comment by Acty · 2015-07-22T13:43:46.443Z · LW(p) · GW(p)
--
Replies from: ErikM, Journeyman, VoiceOfRa, skeptical_lurker, None↑ comment by ErikM · 2015-07-22T15:29:20.891Z · LW(p) · GW(p)
No, I'm fairly confident the neoreactionaries, for whatever reason you brought them up, would happily join in the plan to strip out the objectionable bits of Pakistani culture and replace it with something better. Also, demanding more integration and acculturation from immigrants. What they probably wouldn't listen to is the apparent contradiction of saying we don't need to get rid of multiculturalism, but we do need to push a certain cultural message until it becomes universal.
Replies from: None↑ comment by [deleted] · 2015-07-23T15:13:06.594Z · LW(p) · GW(p)
What they probably wouldn't listen to is the apparent contradiction of saying we don't need to get rid of multiculturalism, but we do need to push a certain cultural message until it becomes universal.
You guys are arguing over the definition of "culture".
↑ comment by Journeyman · 2015-07-25T02:05:50.176Z · LW(p) · GW(p)
I'd like to start by backing up a bit and explaining why I brought up the example of Rotherham. You originally came here talking about your emphasis on preventing human suffering. Rotherham is a scary example of people being hurt, which was swept under the carpet. I think Rotherham is an important case study for progressives and feminists to address.
As you note, some immigrants come from cultures (usually Muslim cultures) with very sexist attitudes towards consent. Will they assimilate and change their attitudes? Well, first I want to register some skepticism for the notion that European Muslims are assimilating. Muslims are people with their own culture, not merely empty vessels to pour progressive attitudes into. Muslims in many parts of Europe are creating patrols to enforce Sharia Law. If you want something more quantitative, Muslim polls reveal that 11% of UK Muslims believe that the Charlie Hebdo magazine "deserved" to be attacked. This really doesn't look like assimilation.
But for now, let's pretend that they are assimilating. How long will this assimilation take?
In what morality is it remotely acceptable that thousands of European women will predictably be raped or tortured by Muslim immigrant gangs while we are waiting for them to get with the feminist program?
Feminists usually take a very hardline stance against rape. It's supremely strange seeing them suddenly go soft on rape when the perpetrators are non-whites. It's not enough to say "that's wrong" after the fact, or to point out biases of the police, when these rapes were entirely preventable from the beginning. It's also not sufficient to frame rape as purely a gender issue when there are clear racial and cultural dynamics going on. The perpetrators were mostly of particular races, and fears of being racist slowed down the investigation.
Feminists are against Christian patriarchy, but they sometimes make excuses for Muslim patriarchy, which is a strange double standard. When individual feminists become too critical of Islam, they can get denounced as "racist" by progressives, or even by other feminists. Ayaan Hirsi Ali was raised as a Muslim but increasingly criticized Islam's infringement of women's rights. She was denounced by the left and universities revoked her speaking engagements.
After seeing Rotherham and Sharia patrols harassing women, there are some tough questions we should be asking.
- Could better immigration policies select for immigrants who are on board with Western ideas about consent?
- Could Muslim immigrants to Europe be encouraged to assimilate faster towards Western ideas about women's rights?
- Are feminism and progressivism truly aligned in their goals? Is women's safety compatible with importing large groups of people who have very different ideas about women's rights?
- If you found out about Rotherham from me, not from feminist or progressive sources, what else about the world have they not told you?
↑ comment by Good_Burning_Plastic · 2015-07-25T07:50:50.890Z · LW(p) · GW(p)
If you want something more quantitative, Muslim polls reveal that 11% of UK Muslims believe that the Charlie Hebdo magazine "deserved" to be attacked. This really doesn't look like assimilation.
This doesn't say much unless we know the corresponding fraction among Muslims worldwide is not much larger than 11%.
Replies from: Journeyman↑ comment by Journeyman · 2015-07-26T19:40:54.386Z · LW(p) · GW(p)
I think your implication is that Muslims are assimilating if their attitudes are shifting towards Western values after immigration. But assimilation isn't just about a delta, it's also about the end state: assimilation isn't complete until Muslims adopt Western values.
Unfortunately, there is overlap between European and non-European Muslim attitudes towards suicide bombing based on polls. France's Muslim population is especially radical. Even if they are slowly assimilating, their starting point is far outside Western values.
Replies from: Good_Burning_Plastic↑ comment by Good_Burning_Plastic · 2015-07-26T19:47:47.031Z · LW(p) · GW(p)
Well, Acty's hypothesis was that they have started assimilating but still haven't finished doing so. But thanks for the data.
(Who on Earth thought that that bulleted list of sentences in that Wikipedia article is a decent way of presenting those data, anyway? I hope I'll have the time to make a bar chart, or at least a table. And how come my spell checker doesn't like either "bulleted" or "bulletted"?)
↑ comment by Acty · 2015-07-25T02:20:28.241Z · LW(p) · GW(p)
--
Replies from: Journeyman↑ comment by Journeyman · 2015-07-25T04:25:12.649Z · LW(p) · GW(p)
Redistributing the world's rapists from less developed countries into more developed countries with greater law and order to imprison them? Is that really what you're suggesting? I find this perspective truly stunning and I object to it both factually and morally.
Factually, it's unclear that this approach would indeed reduce rape in the end. While many Muslim women are raped in Muslim countries, there are unique reasons why some Muslim men might commit sexual violence and harassment. By some Muslim standards, Western women dress like "whores" and are considered to not have bodily sovereignty. To use the feminist term, they are considered "rapable." Additionally, if British police fail to adequately investigate rape by Muslim men, whether due to chauvinism or fear of being seen as racist, then the rapists won't actually go to jail in a timely fashion. The Rotherham authorities couldn't keep up with the volume of complaints. I have no idea whether the Rotherham sex gangs would have been able to operate so brazenly in a Muslim country, where they would risk violent reprisals from the fathers and brothers of their victims. So the notion of reducing rape by jailing immigrant rapists is really, really speculative, and I think it's really careless for you to be making moral arguments based on it.
Even if spreading around the world's rapists actually helps jail them and eventually reduce rape, it's still morally repugnant. I'm trying to figure out what your moral framework is, but the only thing I can come up with is naive utilitarianism. In fact, I think redistributing the world's rapists is so counter-intuitive that it highlights the problems with naive utilitarianism (or whatever your framework is). There are many lines of objection:
From a deontological perspective, or from a rule/act utilitarian perspective, inflicting a greater risk of rape upon your female neighbors would be a bad practice. It really doesn't seem very altruistic. What about lower-class British people who don't want their daughters to risk elevated levels of rape and don't have the money to take flight to all-white areas? What if they aren't on board with your plans?
Naive utilitarianism treats humans and human groups interchangeably, and lacks any concept of moral responsibility. Why should the British people be responsible for imprisoning rapists from other countries? They aren't. They are responsible for handling their own rapists, but why should they be responsible for other people's rapists? I really disagree that British people should view Muslim women raped in Muslim countries as equivalent to British women raped by British people. When British women are raped in Britain, that represents a failure of British socialization and rule of law, but British people don't have control over Muslim socialization or law enforcement and shouldn't have to pick up the pieces when those things fail.
Nations have a moral responsibility to their citizens to defend their citizens from crime and to enforce rule-of-law. If nations fail to protect their own people from crime, people may get pissed off and engage in vigilantism or voting in fascist parties. A utilitarian needs to factor in these backlash scenarios in calculating the utility of rapist redistribution. You could say "well, British people should just take it lying down instead of becoming vigilantes or fascists" but that steps into a different moral framework (like deontology or rule/act utilitarianism) where you would have to answer my previous objections.
Even from a utilitarian perspective, I am not convinced that rapist redistribution actually is good for human welfare. I don't think it's utility-promoting to cause members of Culture A to risk harm in order to fix Culture B's crime problems. If you take the world's biggest problems and redistribute them, then it just turns the whole world shitty instead of just certain parts of it. Importing crime overburdens the police force, resulting in a weakening of rule-of-law, which will only be followed by general civilizational decline.
If you really want to use British law-and-order against rapists from other countries, the other solution would be to export British rule of law to those countries instead of importing immigrants from them. Britain used to try this approach, but nowadays it's considered unpopular.
If you are going to say that it's The White Man's Burden to fix other nations' problems, then at least go whole hog.
If you are trying to stop rape from a utilitarian perspective, and you want to reeducate Muslims about consent, then you should become a colonialist: export Western rule to the Muslim world, encourage feminism, punish rape, and stop female genital mutilation. What if Muslims resist this utilitarian plan? Well, what if British people resist your utilitarian plan? If you think British people should lie down and accept Muslim immigrant crime cuz utility, then it's possible to respond that Muslim countries should lie down and accept British rule plus feminism cuz utility.
I am very stunned to see someone coming from a feminist perspective who is knowingly willing to advocate a policy that would increase the risk of rape of women in her society. I think this stance is based on very shaky factual and moral grounds, putting it at odds with any claims of being altruistic and trying to help reduce suffering. I have female relatives in England and I am very distressed by the idea of them risking elevated levels of sexual violence due to political and moral idea that I consider repugnant. If I'm interpreting you wrong then please tell me.
Replies from: Username↑ comment by Username · 2015-07-25T05:22:47.181Z · LW(p) · GW(p)
I think you're being a little hard on Acty. I agree her positions aren't super well thought out, but it feels like we should make a special effort to keep things friendly in the welcome thread.
Here's how I would have put similar points (having only followed part of your discussion):
You're right that the cultural transmission between Muslims and English people will be two-way: feminists will attempt to impose their ideas on Muslims the same way Muslims will attempt to impose their ideas on feminists. But there are reasons to think that the influence will disproportionately flow the wrong way, from Muslims to feminists rather than the reverse. For example, it's verboten in the feminist community to criticize Muslims, but it's not verboten in the Muslim community to criticize feminists.
It'd be great if what Acty describes could happen and the police of Britain could cut down on the Muslim rape rate. But Rotherham is a perfect demonstration that this process may not go as well as intended.
↑ comment by Journeyman · 2015-07-25T06:15:45.137Z · LW(p) · GW(p)
My response is the friendly version, and I think it is actually relatively mild considering where I am coming from. I deleted one sentence, but pretty much the worst I did was call Acty's position "repugnant" and engage in some sarcasm. I took some pains to depersonalize my comment and address Acty's position as much as possible. Most of the harshness in my comment stems from my vehement disagreement with her position, which I did back up with arguments. I invited Acty to correct my understanding of her position.
I think Acty is a fundamentally good person who is misled by poor moral frameworks. I do not know any way to communicate the depth of my moral disagreement without showing some level of my authentic emotional reaction (though very restrained). Being super-nice about it would fail to represent my degree of moral disagreement, and essentially slash her tires and everyone else's. I realize that it might be tough for her to face so much criticism in a welcome thread, especially considering that some of the critics are harsher than me. But there is also potential upside: on LW, she might find higher-quality debate over her ideas than she has found before.
Your hypothetical response is a good start, but it fails to supply the moral criticism of her stance that I consider necessary. Maybe something in between yours and mine would have been ideal. Being welcoming is a good thing, but if "welcoming" means giving a pass to a really perverse moral philosophy, then perhaps it's going too far... I guess it depends on your goals.
Replies from: Acty↑ comment by Acty · 2015-07-25T11:32:02.897Z · LW(p) · GW(p)
--
Replies from: ErikM, Journeyman↑ comment by ErikM · 2015-07-25T20:50:37.118Z · LW(p) · GW(p)
I think locking out anyone who might be a criminal, when you have the power to potentially stop them being a criminal and their home country doesn't, is morally negligent. (I'm your standard no-frills utilitarian; the worth of an action is decided purely by whether you satisfied people's preferences and made them happy. Forget "state's duty to the citizens", the only talk of 'duty' I really entertain is each of our duty to our fellow humans. "The White Man's Burden" is a really stupid idea because it's every human's responsibility to help out their fellow humans regardless of skin colour.) I think it doesn't matter whether you decreased or increased crime on either side of a border, since borders are neither happiness nor preferences and mean nothing to your standard no-frills utilitarian type. I just care about whether you decrease crime in total, globally.
Let me try to briefly convince you of why there should be a state's duty to its citizens from a utilitarian perspective, along with a correspondingly greater concern about internal crime than external crime:
1) A state resembles a form of corporate organization with its citizens as shareholders. It has special obligations by contract to those shareholders who got a stake on the assumption that they would have special rights in the corporation. Suddenly creating new stock and giving it to non-shareholders, thereby creating new shareholders, would increase the utility of new shareholders and decrease the utility of old shareholders to roughly the same extent because there is the same amount of company being redistributed, but would have the additional negative effect of decreasing rule of law, and rule of law is a very, very good thing because it lets people engage in long-term planning and live stable lives. (There is no such problem if the shareholders come together and decide to create and distribute new stock by agreement - and to translate back the metaphor, this means that immigration should be controlled by existing citizens, rather than borders being declared to "mean nothing" in general.)
2) A state is often an overlay on a nation. To cash those terms out: A governing entity with major features usually including a legal code and a geographically defined and sharply edged region of influence is often an overlay on a cluster of people grouped by social, cultural, biological, and other shared features. ("Nation" derives from those who shared a natus.) Different clusters of people have different clusters of utility functions, and should therefore live under differing legal codes, which should also be administrated by members of those clusters whom one can reasonably expect to have a particularly good understanding of how their fellow cluster-members will be happiest.
3) Particularly where not overlaid on nations, separate states function as testbeds for experiments in policy; the closest thing one has to large-scale controlled experiments in sociology. Redistributing populations across states would be akin to redistributing test subjects across trial arms. The utilitarian thing to do is therefore to instead copy the policies of the most successful nations to the least successful nations, then branch again on previously unexplored policy areas, with each state maintaining its own branch.
Replies from: Acty↑ comment by Acty · 2015-07-27T23:14:22.698Z · LW(p) · GW(p)
--
Replies from: Journeyman, VoiceOfRa↑ comment by Journeyman · 2015-07-28T03:07:42.539Z · LW(p) · GW(p)
Saving the refugee kid is emotionally appealing and might work out OK in small numbers. You correctly note that there might be a threshold past which unselective immigration starts creating negative utility. I think it's easy to make a case that Britain and France have already hit this point by examining what is going on at the object level.
European countries with large Muslim populations are moving towards anarchy:
Rule of law is declining due to mass rape scandals like Rotherham, the Charlie Hebdo massacre, and riots. Here's a video of a large riot which resulted in a Jewish grocery store being burned down. If you watch that video or skip around in it, you will see what looks like a science fiction movie. Muslim riots are a common feature in Europe, and so are sex gangs (established in previous comments).
Sharia Patrols are becoming increasingly common in Europe.
Muslim immigrants form insular enclaves that are dangerous for non-Muslims, or even police (aka "no-go zones" or "Sensitive Urban Zones").
And these are only a few examples. How much more violence does there have to be before something is done?
Muslims and Europeans are not interchangeable. Muslims have a distinct culture and identity, and it’s unlikely that socialization can change this on an acceptable time-scale.
The attitudes of Muslim populations, on average, are really scary. Muslims in Europe, especially France, have very radical attitudes that are supportive of terrorism. According to Pew Research, 28% of Muslims worldwide and 19% of US Muslims disagree that suicide bombing is never justified. The vast majority of Muslims believe that homosexuality is wrong, and that same survey shows that large percentages of Muslims believe that honor killings are morally permissible. (Note: Muslims from non-Middle Eastern, non-Muslim-ruled countries are less radical and better candidates for immigration.)
Muslim populations have extremely low support for charitable and humanitarian organizations relative to the rest of the world (first table, source is World Values Survey). Only around 3% of Muslims participate in charitable/humanitarian organizations, compared to nearly 20% of Anglos. I think this is mainly due to differences in tribalism rather than differences in wealth, but that’s another subject. Your Muslim refugee kid is not likely to be giving back very much to society.
Even if you are correct that Muslim immigrants are only say, 10% more likely to be involved in crime, that’s still a big problem if they are all hanging out with each other in poor areas and forming gangs that riot or harass women and gay people.
There are always going to be tribal conflicts between Muslims and other Muslims or their neighbors, and there are always going to be refugees. But if the West admits them in large numbers, they will bring their tribal and religious attitudes with them, resulting in violent tribal conflicts with native Europeans and Jews. This situation isn’t remotely ethical or utilitarian. It’s only happening because leftist parties are incentivized to import voters who will be dependent on them; the thin moral justification is secondary.
Focusing on the plight of Muslim refugees obscures the violent direction of Muslim immigration to Europe. You may not be seeing this conflict yourself, and your filter bubble might not be talking about it, but lower-class Europeans certainly experience it, and Jews are writing articles with titles like “Is it time for the Jews to leave Europe?”. European elites need to fix these unselective immigration policies, create a preference for educated, non-radical Muslim immigrants, and encourage them to assimilate.
↑ comment by VoiceOfRa · 2015-07-28T01:03:58.830Z · LW(p) · GW(p)
If you are a kid facing persecution and a high possibility of being murdered in your home country, coming to the UK and receiving an education here and going on to a career here is a massive utility gain, and if you go on to a successful and altruistic career it's an even bigger utility gain. The disutility of the kid coming here - maybe the teacher in the local state school has to split their attention between 31 pupils instead of the original 30 - is only a very small disutility.
Um, that style of logic doesn't work. You need to balance the (large but restricted to an individual) utility to the kid against the (small to each individual and spread out across many individuals) disutility to society. This is the kind of computation that's impossible to do intuitively (and probably impossible to do directly at all, since we have no way to directly measure utility). It is, however, easy to see what the implications of a large-scale population transfer are and to see that they are negative. You assert that there exists a threshold below which immigration is positive utility. However, you have no way to calculate its value or show we are below it (or even show that it's not zero), without resorting to what looks like wishful thinking.
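To make the structure of that comparison concrete, here is a minimal sketch with purely illustrative numbers; none of these figures come from the thread or from any real estimate, and the point is only that the sign of the result turns on a small per-person cost that nobody can actually measure:

```python
# Toy model of a concentrated benefit vs. a diffuse cost.
# All numbers below are invented for illustration only.
benefit_to_kid = 100.0       # large utility gain concentrated on one person (hypothetical units)
cost_per_resident = 0.002    # small assumed disutility per affected resident
residents_affected = 60_000  # assumed number of people bearing that small cost

total_diffuse_cost = cost_per_resident * residents_affected  # 120.0 with these numbers
net_utility = benefit_to_kid - total_diffuse_cost            # -20.0 with these numbers

print(net_utility)
```

With cost_per_resident set to 0.001 instead, the net flips to +40, which is the point about intuition being unreliable here: the answer is driven entirely by a small, hard-to-measure parameter multiplied by a large population.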
Replies from: Good_Burning_Plastic↑ comment by Good_Burning_Plastic · 2015-07-30T21:21:51.899Z · LW(p) · GW(p)
You need to balance the (large but restricted to an individual) utility to the kid against the (small to each individual and spread out across many individuals) disutility to society.
That's a reason for Pigovian taxes, not outright bans. There are plenty of other things which have diffuse negative externalities, e.g. anything which causes air pollution, and we don't just ban them all.
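For context, the textbook Pigovian prescription (a standard economics result, stated here only as a sketch and not something from this thread) is to set the per-unit tax equal to the marginal external cost, so an activity goes ahead only when its private benefit covers the full social cost:

```latex
t^{*} = \mathrm{MEC}
\qquad\Longrightarrow\qquad
\text{activity proceeds only if } B_{\text{private}} \ge C_{\text{private}} + \mathrm{MEC}
```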
Replies from: VoiceOfRa↑ comment by Journeyman · 2015-07-26T02:51:34.526Z · LW(p) · GW(p)
I think you have the right idea by studying more before making up your mind about open borders and immigration. It’s really hard to evaluate moral solutions without knowing the facts of the matter, and unfortunately there is a lot of political spin on all sides.
In a situation of uncertainty, any utilitarian policy that requires great sacrifices is very risky: if the anticipated benefits don’t materialize, then the result turns into a horrible mess. The advantage of deontological ethics and rule/act utilitarianism is that they provide tighter rules for how to act under uncertainty, which decreases the chance of falling into some attractive, world-saving utilitarian scheme that backfires and hurts people: some sacrifices are just considered unacceptable.
Speaking of utilitarian schemes, British colonialism in the Muslim world would cause a lot of suffering, but so do current immigration policies that bring in Muslims. Why is one type of suffering acceptable, but another isn’t? Utilitarianism can have some really perverse consequences.
What if a British-ruled Pakistan of 2050 were dramatically lower in crime, lower in violence towards women in both countries, and more peaceful, such that the violence of imposing that situation is offset? What if the status quo of immigration, or an open borders scenario, would lead to a bloodier future that is more oppressive to women in both countries? What if assimilating and reforming immigrant culture isn’t feasible on an acceptable timescale, especially given that new immigrants are constantly streaming in and reinforcing their culture?
My point about British vs. Muslim rape survivors is about responsibility, not sympathy or worthiness as a human being. As a practical matter, people who live nearer each other and have shared cultural / community ties are better positioned to stop local crime and discourage criminals. Expecting them to use their local legal system to arrest imported criminals will stretch their resources to the point of failure, like we saw in Rotherham.
It’s both impractical and unfair to expect people to clean up other people’s messes. Moral responsibility and duty are general moral principles that are easy to translate into rule-utilitarianism. A moral requirement to bail out other people’s crime problems would mean there is no incentive for groups to fix their own crime problems.
Borders, rule of law, and nations are obviously important for utility, happiness, and preferences, or we wouldn’t have them (see ErikM’s comment also). Historically, any nation that didn’t defend its borders would have been invaded, destroyed, or had its population replaced from the inside.
Imagine we invited hundreds of people from Reddit, Jezebel, and Stormfront to LW in order to educate them. The result would not be pretty, and it wouldn’t make any of these communities happier. LW enforcing borders makes it possible for this place to exist and be productive. Same thing with national borders. If you cannot have a fence around your garden, then you cannot maintain a garden, and there is no incentive to make one. Which means that people cannot benefit from gardens.
Open borders are a terrible idea because they mix together people with different cultures and crime rates, causing conflicts that wouldn’t have happened otherwise. Certain elements of civilization, like women being able to walk around wearing what they want, can only occur in low-crime societies.
Historically, despite some missteps, the West has been responsible for an immense amount of medicine, science, and foreign aid due to a particular civilization based on rule of law, low crime, high trust, and yes, borders. If the West had lacked those things, it would not have been able to contribute to humanity in the past. And if those things are destroyed by unselective immigration, then the West will turn into a place like Brazil, or worse, South Africa: a world of ethnic distrust, gated communities, fascist parties and women scared to travel alone in public. That doesn’t sound like a very happy place to me.
If you are hoping that the outcome of unselective immigration and open borders would be beneficial, then you would need some pretty strong evidence, because the consequences of being wrong are really scary. You would need to be looking at current events, historical precedents, population projections, crime trends, and a lot of other stuff. The early indicators are not looking good, like Rotherham-style gangs all over England, plus Sharia Patrols, and these events should result in updating of priors.
↑ comment by skeptical_lurker · 2015-07-23T12:51:54.888Z · LW(p) · GW(p)
But of course, neoreactionaries hate feminists, so I suppose they'll all stop listening as soon as I use the word.
Do you think that anyone who is against multiculturalism is a neoreactionary?
But the problem isn't Pakistani people, the problem is that the culture they were brought up in is sexist. We don't need to get rid of multiculturalism; there's a lot of evidence that second-generation and third-generation immigrants' views shift, as the generations go by, towards egalitarianism.
I.e. the immigrants adopt the culture of the host country. Are you sure you don't mean 'We don't need to get rid of multiracialism'?
Replies from: None, hairyfigment↑ comment by [deleted] · 2015-07-23T15:11:22.793Z · LW(p) · GW(p)
Do you think that anyone who is against multiculturalism is a neoreactionary?
Do you really think that proponents or opponents of "multiculturalism" are arguing over a well-defined program of action?
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2015-07-23T15:53:42.442Z · LW(p) · GW(p)
To some extent. Increase or decrease the rate of immigration, require criteria to be met for immigration or not, enforce speaking the native language or not, ban or allow faith schools, ban or allow child circumcision & FGM and so forth. Obviously, not all people on each side of the debate agree on which policies to pursue, but that's true of all politics.
↑ comment by hairyfigment · 2015-07-23T19:09:44.332Z · LW(p) · GW(p)
The author of the grandparent mentioned Moldbug not long ago; the sockpuppeteer who seems to be downvoting Acty is likely neoreactionary. Update, and then consider apologizing.
Replies from: Jiro, skeptical_lurker↑ comment by skeptical_lurker · 2015-07-23T20:05:46.310Z · LW(p) · GW(p)
So there is at least one person in this thread who has read Moldbug, and might be using sockpuppets (which is wrong, of course). Yet Acty's comment says "they'll all stop listening". Plural.
Plus Acty assumes that people who have a different ideology from hers hate her and will just stop reading, which isn't very charitable when we should all be trying to avoid confirmation bias.
Replies from: Acty, hairyfigment↑ comment by Acty · 2015-07-23T21:07:48.072Z · LW(p) · GW(p)
--
Replies from: skeptical_lurker, VoiceOfRa↑ comment by skeptical_lurker · 2015-07-25T13:03:09.383Z · LW(p) · GW(p)
Yes, I think that subconsciously I have an assumption that all conservatives are either idiots or extreme neoreactionary types, based mostly on my personal experience of a) knowing lots of idiot conservatives IRL whose opinions amount to "an immigrant looked at me the wrong way once!!!11!!1!" and b) arguing really loudly with extreme neoreactionaries online. I'm not assuming you hate me, I've just somehow managed to subconsciously accumulate a really low prior for someone I'm arguing with being a smart normal conservative. I will update and endeavour to correct that.
I'm not sure I would class myself as a conservative, but I can understand your assumption that conservatives are idiots, in that there was a time when I would have said that anyone who is against gay rights is a fascist theocrat. Now I realise, on a more intuitive level, that just because many arguments for a position are idiotic doesn't mean that there isn't a somewhat intelligent argument out there.
Oddly enough, IRL I mostly meet fairly intelligent people with opinions that amount to "anyone who disagrees with my left-wing politics is evil! That person supports a right wing party, I'd like to burn their house down!"
My theory is that Facebook and Twitter have ruined discourse because people can't fit more complex opinions into 140 characters.
I've been refreshing my page ten times a minute to check my karma hasn't gone down any further and that is a really terrible use of my time.
You're new here, I guess you'll get used to the karma system in time? In the meantime, have an upvote :)
↑ comment by VoiceOfRa · 2015-07-24T01:36:52.716Z · LW(p) · GW(p)
knowing lots of idiot conservatives IRL whose opinions amount to "an immigrant looked at me the wrong way once!!!11!!1!"
I seriously doubt this. Rather, I suspect you're, either intentionally or unconsciously, replacing opinions that disagree with yours with ones that are easier for you to dismiss.
Replies from: Username, Good_Burning_Plastic, skeptical_lurker↑ comment by Username · 2015-07-25T22:23:35.896Z · LW(p) · GW(p)
I think you and Acty each live in your own filter bubbles, constructed mostly through subconscious intent. (Beware of believing your enemies are innately evil and intentionally causing themselves to be biased.) Everyone is subconsciously inclined to read authors they agree with; it's more pleasurable and less painful. In your filter bubble, you read thoughtful conservative thinkers, along with cherry-picked bits of poorly-reasoned liberal extremist thinking, that those conservatives tear apart. And the reverse is true for Acty.
I suspect the internet is increasing the ease of forming these sort of bubbles, which seems like a huge problem.
FWIW, I am disappointed to see political discussion drift from object-level political disagreements to person-level disagreements about who is more biased. I virtually never see a good outcome from such a discussion. I suppose it's an occupational hazard participating in discussions on a website about bias.
↑ comment by Good_Burning_Plastic · 2015-07-24T16:06:04.744Z · LW(p) · GW(p)
You could have said something to the effect of "not all conservatives have such dumb opinions, they aren't representative of all conservatism, and there also are liberals with even dumber opinions, and anyway it's not a good idea to judge memeplexes from their worst members" -- but no, you chose to go for James A. Donald-level asshattery -- "if you say you know conservatives with dumb opinions, you're probably lying or confabulating". (And somehow even got seven upvotes for that.) What does make you think it's so unlikely that Acty actually knows conservatives with dumb opinions? Are you familiar with all groups of conservatives worldwide?
Replies from: advael, VoiceOfRa↑ comment by advael · 2015-07-24T18:58:11.836Z · LW(p) · GW(p)
Because those vectors of argument are insufficiently patronizing, I'm guessing.
But in all seriousness, the "judging memeplexes from their worst members" issue is pretty interesting, because politicized ideologies and really any ideology that someone has a name for and integrates into their identity ("I am a conservative" or "I am a feminist" or "I am an objectivist" or whatever) are really fuzzily defined.
To use the example we're talking about: Is conservatism about traditional values and bolstering the nuclear family? Is conservatism about defunding the government and encouraging private industry to flourish? Is conservatism about biblical literalism and establishing god's law on earth? Is conservatism about privacy and individual liberties? Is conservatism about nationalism and purity and wariness of immigrants? I've encountered conservatives who care about all of these things. I've encountered conservatives who only care about some of them. I've encountered at least one conservative who has defined conservatism to me in terms of each of those things.
So when I go to my internal dictionary of terms-to-describe-ideologies, which conservatism do I pull? I know plenty of techie-libertarian-cluster people who call themselves conservatives who are atheists. I know plenty of religious people who call themselves conservatives who think that cryptography is a scary terrorist thing and should be outlawed. I know self-identified conservatives who think that the recent revelations about NSA surveillance are proof that the government is overreaching, and self-identified conservatives who think that if you have nothing to hide from the NSA then you have nothing to fear, so what's the big deal?
I do not identify as a conservative. I can steelman lots of kinds of conservatism extremely well. Honestly I have some beliefs that some of my conservative-identifying friends would consider core conservative tenets. I still don't know what the fuck a conservative is, because the term gets used by a ton of people who believe very strongly in its value but mean different things when they say it.
So I have no doubt that not only has Acty encountered conservatives who are stupid, but that their particular flavor of stupid are core tenets of what they consider conservatism. The problem is that this colors her beliefs about other kinds of conservatives, some of whom might only be in the same cluster in person-ideology-identity space because they use the same word. This is not an Acty-specific problem by any means. I know arguably no one who completely succeeds at not doing this, the labels are just that bad. Who gets to use the label? If I meet someone and they volunteer the information that they identify as a conservative, what conclusions should I draw about their ideological positions?
I think the problem has to stem from sticking the ideology-label onto one's identity, because then when an individual has opinions, it's really hard for them to separate their opinions from their ideology-identity-label, especially when they're arguing with a standard enemy of that ideology-label, and thus can easily view themselves as standing in for the ideology itself. The conclusion I draw is that as soon as an ideology is an identity-label, it quickly becomes pretty close to useless as a bit of information by itself, and that the speed at which this happens is somewhat correlated to the popularity of the label.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-24T22:03:32.348Z · LW(p) · GW(p)
Because those vectors of argument are insufficiently patronizing, I'm guessing.
Right, it's only OK to be patronizing to people who aren't present to defend themselves.
Replies from: advael↑ comment by advael · 2015-07-24T22:13:50.494Z · LW(p) · GW(p)
I'd argue that that little one-off comment was less patronizing and more... sarcastic and mean.
Yeah, not all that productive either way. My bad. I apologize.
But I think the larger point stands about how these ideological labels are super leaky and way too schizophrenically defined by way too many people to really even be able to meaningfully say something like "That's not a representative sample of conservatives!", let alone "You probably haven't met people like that, you're just confabulating your memory of them because you hate conservatism"
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-24T22:21:54.554Z · LW(p) · GW(p)
"That's not a representative sample of conservatives!", let alone "You probably haven't met people like that, you're just confabulating your memory of them because you hate conservatism"
One of those statements refers to a concrete event (or series of events), while the other depends on the exact definition of conservative.
↑ comment by VoiceOfRa · 2015-07-24T22:02:17.576Z · LW(p) · GW(p)
What does make you think it's so unlikely that Acty actually knows conservatives with dumb opinions?
Well, the fact that Acty has a tendency to not read/listen to what her opponents say and to replace it with something easy to dismiss, as she has previously demonstrated in this very thread.
What does make you think it's so unlikely that Acty actually knows conservatives with dumb opinions?
What makes you think it's so unlikely that Acty is giving an inaccurate report of their arguments?
Replies from: Good_Burning_Plastic↑ comment by Good_Burning_Plastic · 2015-07-24T22:24:28.628Z · LW(p) · GW(p)
What makes you think it's so unlikely that Acty is giving an inaccurate report of their arguments?
I don't. I think both P(Acty actually knows conservatives with dumb opinions) and P(Acty is giving an inaccurate report of their arguments) are sizeable.
↑ comment by skeptical_lurker · 2015-07-25T13:13:37.358Z · LW(p) · GW(p)
I'm going to be cynical here, and say that most conservative opinions are idiotic, and most liberal opinions are idiotic. It's an instance of the '90% of everything is shit' principle.
↑ comment by hairyfigment · 2015-07-24T02:46:25.855Z · LW(p) · GW(p)
We have little reason to think Journeyman is the same as Eugine Nier, who is almost certainly calling himself VoiceOfRa. Here is the puppeteer's previous account, ending 7 April. Here is VR's first comment on the site, jumping right into a discussion of decision theory. Here he is the same week talking about the purpose of LW/MIRI, linking an old thread in which he was active as Eugine Nier, repeating his old political views, and posting quotes as Nier was wont to do. Here is the dishonest piece of shit defector claiming I call anyone who disagrees with me a sockpuppet, rather than just him and his sockpuppets.
Replies from: Lumifer↑ comment by Lumifer · 2015-07-24T14:19:56.162Z · LW(p) · GW(p)
Can we skip the middle-school drama?
I downvoted your last couple of posts on the "vote down what you like to see less of" principle. I would like to see less whining and ideological witchhunts.
The downvotes will continue until the morale improves.
Replies from: hairyfigment↑ comment by hairyfigment · 2015-07-24T21:17:34.174Z · LW(p) · GW(p)
A fine example of how trying to stop "politics" can serve as a political move in favor of the status quo. You're not treating the cause.
Replies from: Lumifer↑ comment by [deleted] · 2015-07-23T15:12:37.489Z · LW(p) · GW(p)
But the problem isn't Pakistani people, the problem is that the culture they were brought up in is sexist. We don't need to get rid of multiculturalism; there's a lot of evidence that second-generation and third-generation immigrants' views shift, as the generations go by, towards egalitarianism.
And how would we define "Pakistani culture" in such a way that it doesn't necessarily include patriarchy? Cultural evolution in response to moral imperative is a thing.
↑ comment by skeptical_lurker · 2015-07-21T22:52:56.694Z · LW(p) · GW(p)
You say "immigrants" but in every case you mention it's specifically Muslims. I've not heard of Hindu or Buddhist or atheist immigrants causing the same problems.
Replies from: Journeyman↑ comment by Journeyman · 2015-07-21T23:48:27.366Z · LW(p) · GW(p)
That's correct; I will update my comment to be more explicit. Muslims have very different attitudes towards women and consent than Westerners.
↑ comment by ChristianKl · 2015-07-21T22:43:50.032Z · LW(p) · GW(p)
some influential Pakistani-heritage councillors in Rotherham had acted as barriers
That's a sentence that poses more questions than it answers. What kind of influence do those councillors have? How many councillors of Pakistani heritage does Rotherham have? How many councillors of other heritage does it have?
If a powerful politician tries to prevent friends from being prosecuted, that's not what the standard concern about policemen being too PC is about. It's straight misuse of power.
Sexual violence by immigrants is a serious social problem in the UK, and the multicultural political environment makes it hard to crack down on.
Sexual violence by British MPs seems also to be a problem: http://www.rt.com/uk/170672-uk-politicians-pedophile-ring/
To what extent is this simply a problem of British politicians having too much power to cover up crimes and impede police work?
Following links on Wikipedia, we have the Rochdale sex gang, the Derby sex gang, the Oxford sex gang, the Bristol sex gang, and the Telford sex gang. These are all easily verifiable cases, and the perpetrators are usually people from immigrant backgrounds.
That the perpetrators are people from immigrant backgrounds isn't what's surprising about the Rotherham story, and neither is the fact that politicians act in ways to prevent the reporting of a tragedy. Politicians trying to keep tragedies away from the public is a common occurrence.
The thing that's surprising is the allegation that the police were inactive because the perpetrators were Muslim, which happens to be something you didn't include in your "what we know" list.
That would have to be true for the claim that PC policemen don't do their jobs properly to hold.
↑ comment by Journeyman · 2015-07-21T23:21:03.351Z · LW(p) · GW(p)
If indeed the coverup of the ethnic dimension was directed by British politicians, we might ask, why were they trying to hide this? In a child sex abuse scandal involving actual politicians, it's clear why they would cover it up. But why were these particular crimes so politically inconvenient? It's clear why Pakistani council members wanted to hide it, but why did the other council members let them?
We are not privy to the exact nature of the institutional dysfunction at Rotherham. But it's clear that the problem was occurring at multiple levels. One of my quotes does mention that staff were nervous about being labelled racist, and that managers told them to avoid mentioning the ethnic dynamics.
Here's another quote, which shows that reports were downplayed before politicians were even involved:
Within social care, the scale and seriousness of the problem was underplayed by senior managers. At an operational level, the Police gave no priority to CSE, regarding many child victims with contempt and failing to act on their abuse as a crime. Further stark evidence came in 2002, 2003 and 2006 with three reports known to the Police and the Council, which could not have been clearer in their description of the situation in Rotherham. The first of these reports was effectively suppressed because some senior officers disbelieved the data it contained. This had led to suggestions of cover-up. The other two reports set out the links between child sexual exploitation and drugs, guns and criminality in the Borough. These reports were ignored and no action was taken to deal with the issues that were identified in them.
So there are multiple kinds of institutional dysfunction here. It's not just politicians, and it's not just police being PC. But from the quotes in my previous post, it's obvious that political correctness was a factor. Police, social workers, and politicians, all the way up the chain, knew that being seen as racist could be damaging to their careers.
In the UK, there is a lot of social and political pressure to support multiculturalism and avoid any perception of racism. Immigration is important for economic agendas, but also for the left's political agenda of importing more voters for itself. It is not a stretch to believe that this political environment would make it difficult to address crimes involving immigrant populations.
Replies from: hairyfigment, ChristianKl↑ comment by hairyfigment · 2015-07-22T07:03:40.937Z · LW(p) · GW(p)
You correctly note that there were factors beyond "PC", but fail to address the horrific corruption. At least two councilors and a police officer face charges of sex with abuse victims.
The police officer has been also accused of passing information on to abusers in the town. A colleague of the officer has reportedly been accused of failing to take appropriate action after receiving information about the officer's conduct. Both have been reported to the Independent Police Complaints Commission.
Another police officer, seen here being white, supposedly had an extensive child pornography collection. No word on whether this was related or whether the department just attracted pedophiles for some bizarre reason.
While I didn't predict this beforehand (nor, I think, did you) it seems both more credible, and more likely to protect the rape-gang, than does the idea of people seeing strong evidence of the crimes and somehow deciding that arresting immigrants was more likely to hurt their careers than ignoring a story which was bound to come out eventually. The "political correctness" you speak of apparently refers to people not wanting to believe their fellow police officers and council members were implausibly evil criminals.
Replies from: Journeyman, ChristianKl↑ comment by Journeyman · 2015-07-22T07:45:21.451Z · LW(p) · GW(p)
Thanks for providing the additional details, which I hadn't encountered. I don't think this corruption is mutually exclusive with the theory of political correctness. The Rotherham scandal went back to 1997, involving 1,400+ victims. There are now 300 suspects (including some council members that you pointed out), and 30 council members knew. We do not know the ethnicity of the council members who are suspects.
With such a long history and large number of victims, it doesn't seem very plausible that a top-down coverup to protect council member perpetrators is sufficient to explain this story. These people would need to be supervillains if they had been the ringleaders since 1997 and the failure of investigation had been solely about protecting them.
It is already established by my quotes from the report that political correctness about race was a factor in the coverup and failure of the investigation. Certainly the corruption and participation of council members and police is a disturbing addition to this story. With such a vast tragedy, it's quite likely that the coverup was due to multiple motivations and lots of things went wrong.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-22T11:15:06.750Z · LW(p) · GW(p)
With such a long history and large number of victims, it doesn't seem very plausible that a top-down coverup to protect council member perpetrators is sufficient to explain this story.
It seems like we have a perfect control case with the pedophiles in Westminster, which didn't involve multiculturalism. They also engaged in abuse for a long time and managed to suppress it.
↑ comment by ChristianKl · 2015-07-22T11:18:31.980Z · LW(p) · GW(p)
While I didn't predict this beforehand (nor, I think, did you) it seems both more credible
I might add that I did speak about chauvinistic police officers as a problem and also that corruption is likely a cause over at Omnilibrium.
↑ comment by ChristianKl · 2015-07-22T10:18:16.045Z · LW(p) · GW(p)
and that managers told them to avoid mentioning the ethnic dynamics.
There's a huge difference between prosecuting someone but not writing his race or ethnicity into an official report, and avoiding prosecuting him at all.
It's clear why Pakistani council members wanted to hide it, but why did the other council members let them?
From what you quoted from the report, those Pakistani council members were influential people. Just like the politicians who covered up the child abuse in Westminster were also influential people.
In general, politicians never want scandals and tragedies on their watch to become public.
At an operational level, the Police gave no priority to CSE, regarding many child victims with contempt and failing to act on their abuse as a crime.
Regarding child victims with contempt does suggest a dysfunctional police force, but it's not about multiculturalism.
↑ comment by VoiceOfRa · 2015-07-21T23:37:15.993Z · LW(p) · GW(p)
"This story doesn't make sense, the facts on the ground are likely to be different than the mainstream media reports I'm reading" instincts.
Have you tried updating your model to reflect reality?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-22T10:14:49.008Z · LW(p) · GW(p)
In general, the heuristic of not trusting mainstream media reports to accurately reflect reality is well founded, based on what I know about how the media works.
I have given enough interviews to have an idea of how what a journalist writes differs from what was actually said in the interview.
I frequently read media reports on scientific studies that don't match the studies themselves.
In the past I knew the background of quite a few political stories in Berlin and how the reporting differed from the facts on the ground.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-23T03:05:05.769Z · LW(p) · GW(p)
In general, the heuristic of not trusting mainstream media reports to accurately reflect reality is well founded, based on what I know about how the media works.
Without a direction to the bias, that's a universal counterargument. I'm perfectly aware of some of the biases in reporting; my heuristics say that the media is likely underreporting the extent of the problem.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-23T09:01:39.598Z · LW(p) · GW(p)
Without a direction to the bias, that's a universal counterargument.
It's a universal counterargument for newspaper stories that don't appear to make sense when you don't know the facts on the ground: you shouldn't believe those stories.
I'm perfectly aware of some of the biases in reporting; my heuristics say that the media is likely underreporting the extent of the problem.
I haven't said anything about biases in reporting. I have spoken about journalists getting stories wrong. That quite often doesn't have anything to do with bias. Journalists in Berlin from time to time get wrong the fact that it's the parliament, and only the parliament, that passes laws in Berlin. That doesn't have anything to do with left or right bias.
Thinking in terms of bias isn't useful. My basic sense was that the story likely involves some form of corruption that didn't make it into the news articles I read. Garbage in, garbage out. You can't correct bad reporting by correcting for bias.
A police officer doesn't simply avoid prosecuting a Muslim for rape because he's afraid of being called a racist. That simply doesn't make sense. On the other hand, corruption can prevent crimes from being prosecuted.
The UK is not a country where a newspaper can freely report on a story like this. But not because of multiculturalism; you can't sue a newspaper in the UK for that. The UK's insane defamation laws result in articles speaking about "influential Pakistani councillors" instead of naming the individuals in question. A US newspaper would never have done this; it would actually have named the politicians who seem to have obstructed a rape investigation if this had happened in any US city.
Of course, you actually need to practice critical reading to get that. If you just take the story at face value and then try to correct for a systematic bias, you miss the juicy bits.
Given that newspapers are effectively censored from speaking about the real story of corruption, they make up a bullshit story about how it's multiculturalism that makes police officers afraid to go after Muslims. That's not to say that multiculturalism didn't do anything in that case. It reduced the reporting of the fact that the perpetrators were Muslim, but it very likely didn't prevent them from being prosecuted.
Replies from: Jiro, VoiceOfRa↑ comment by Jiro · 2015-07-23T15:01:23.399Z · LW(p) · GW(p)
A police officer doesn't simply avoid prosecuting a Muslim for rape because he's afraid of being called a racist. That simply doesn't make sense.
People make decisions at the margin, and it's entirely possible that the additional negative effect of being accused of racism pushes him over the edge in decisionmaking.
↑ comment by VoiceOfRa · 2015-07-24T01:44:44.411Z · LW(p) · GW(p)
A police officer doesn't simply avoid prosecuting a Muslim for rape because he's afraid of being called a racist.
First of all, police officers don't prosecute anyone; prosecutors do. As for fear of being called racist, well, some police officers complained when they noticed something was happening, and were promptly sent to cultural sensitivity training.
↑ comment by TheAncientGeek · 2015-07-21T17:17:00.699Z · LW(p) · GW(p)
Er.....Rotherham?
Replies from: Vaniver↑ comment by Acty · 2015-07-21T13:33:43.869Z · LW(p) · GW(p)
You know what else is a good way to stop people being killed? Create a liberal democracy where people are equal. So far in history, that has kinda correlated... really strongly... with fewer people dying. There is both less war and less crime. Forget strength, give them equality and elections. (I don't actually think democracy is the optimal solution, I think I advocate more of an economics-exam-based meritocratic oligarchy, but it is a really good one to put in place while we figure out what the optimal one is. And I need to read lots more books before I actually try and design an optimal society, if I'm ever qualified to try something like that.)
Being "strong" in a meaningful way, in the modern world, means being intelligent. Smart people can use better rhetoric, invent cooler weapons, and solve your problems more easily. Being well-educated and intelligent and academic actually strongly correlates with not being racist or sexist or transphobic or homophobic. Oh, and also liberal democracies seem to have much less prejudice in them.
Find me decent evidence that patriarchal societies are safer for everyone involved than liberal democracies where everyone is equal, and you'll have a valid point. But it kind of looks to me like, as a woman, I'm much safer in the modern Western democracies that prohibit sexism than I am in the patriarchal societies where women have no rights and keep getting acid thrown in their faces for rejecting advances. You say that in recent years it was abandoned due to being oppressive but we should try and go back and compromise with it, but... why would we want to go back to that when literally everything has been improving ever since we abandoned those social models? To entertain your delusions of being a Strong Tribal Hero Protector Guy? Sorry, no.
I also don't see how we can't have strong protectors who are 100% PC. I'm not straight, male, neurotypical, traditional or even an adult. I try and protect and help those around me and on many occasions I succeed. I am the one in my friendship group who takes the lead down dark alleyways, carrying all the bags, reassuring my friends that it's safe because nobody's going to mug us while I'm there. Why exactly am I a weak and unworthy protector? Because I'm a girl? You're going to have to do an awful lot better than that. Put me in a physical fight with most boys of my age, and I would annihilate them. Every male who has picked a fight with me thinking that he'll be able to beat me because he's male has walked away rather humiliated. On exams and IQ tests I score far higher than your average male. Judging by how much I actually end up doing versus what I observe the boys around me doing, I have higher levels of inbuilt-desire-to-help-and-protect-others than the average male. (I suspect that the latter two facts at least are true of most women whom you might find on this specific website.) Why exactly does the average male, whom I can both outfight and outthink, get to protect me and not the other way around?
Replies from: None, VoiceOfRa↑ comment by [deleted] · 2015-07-21T13:53:04.334Z · LW(p) · GW(p)
I think I will not discuss this with you for about 5-10 years, because you sound a lot like me when I was around 21, and I know how naive and inexperienced and entirely unrealistic I was. Ultimately you are missing the experiences that would make you far more pessimistic. For example, nobody talked about making Western liberal democracies like third-world hellholes; it was about making them like their former selves, when crime levels were lower, violence was lower, people were politer, people were politer with women, and so on. In fact, turning Western liberal democracies into third-world hellholes is actually happening, but through a different, asylum-seeking / refugee pathway: a perfectly idiotic counter-selection where instead of exercising brain drain, we drain the most damaged people and expect it to turn out well. But that is just a small part of why you probably need to get more pessimistic experience before we can discuss this meaningfully. I have no interest in engaging with angry rants; they are not able to teach me anything, and they just sound like two people really sweating and trying to win something when there is no actual prize to win. Being drunk on the idea of social progress and the improvability of human nature is just like other addictions: you really need to hit rock bottom before you see what the issue is, and I think anything I would try to explain here would be pointless without such a wake-up happening. So I wish you luck; maybe we can revisit this in 5-10 years, when you have perhaps been influenced by more experience.
Replies from: Acty, ChristianKl↑ comment by Acty · 2015-07-21T14:40:23.871Z · LW(p) · GW(p)
Telling your opponent that they are incapable of arguing with you until they are older is a fully general counterargument, and one of the more aggravating and toxic ones.
Even if it wasn't a fully general counterargument, it would be fallacious because it's ad hominem. There are plenty of people 5-10 years older than me who share my ideas, and you could as easily be arguing with one of them as you are arguing with me now; the fact that by chance you are arguing against me doesn't affect the validity/truth of the ideas we're talking about, and it's very irrational to suggest that it should. Attack my arguments, not me.
As for everything being better in "their former selves", do I seriously have to go find graphs? I have the distinct feeling that you won't update even if I show you them, so I'm tempted not to bother. If you've genuinely never looked at actual graphs of crime levels and violence over time and promise to update just a little, I can go dig those up for you. (For now, you're pattern matching to the kind of person who could benefit from reading http://slatestarcodex.com/2013/10/20/the-anti-reactionary-faq/ . I don't like SSC that much, but when the man's right, he's right.)
(As for "people were politer with women", my idea of polite is pretty politically correct, and I can guarantee you that political correctness doesn't increase if we look backwards in time...)
Replies from: None↑ comment by [deleted] · 2015-07-21T15:07:00.080Z · LW(p) · GW(p)
I am not your opponent; that is where it begins. Opponent means there is something to win and people compete over that prize. There is nothing to win here except learning, and this discussion quickly turned out not to be conducive to it - you got all defensive and emotional instead of trying to understand and use my models and see what you can do with them. Opponentism belongs to precisely the kind of tribalism you are trying to overcome. Interesting, isn't it? Besides, you keep being boringly solipsistic. Your strength instead of statistical strength differences, your idea of politeness instead of the social function of politeness... it seems the primary subject you have useful information about is, well, you. Not interested. The first precondition of being interesting is to understand that nobody gives a damn about you. I.e., to get out of the gravity well of the ego, to adopt viewpoints that don't depend strongly on personal desires. I am not even saying I would expect everyone to be able to do it; I am perfectly aware of how long it took for me, how much XP, reading, and suffering it took, so I don't even blame you for not having made it. It's just that it is seriously difficult to generate information interesting to others from that source. But if you think you can, then do it: say something genuinely interesting, try to offer any sort of model or information from this utopian-progressivist school that is genuinely different and not the same stuff the mainstream media, BuzzFeed or Tumblr are pouring out day and night. The only conditions of interestingness are 1) it is not about you and 2) it has not been "done to death" a million times by the media or blogs.
Replies from: ChristianKl, hairyfigment, Acty↑ comment by ChristianKl · 2015-07-21T16:27:15.717Z · LW(p) · GW(p)
But if you think you can, then do it: say something genuinely interesting, try to offer any sort of model or information from this utopian-progressivist school that is genuinely different and not the same stuff the mainstream media, BuzzFeed or Tumblr are pouring out day and night.
If you read the list of her activities (speaking 6 languages at the age of 17 and being in the process of learning a 7th, doing Judo to the point of being more fit than guys her age, having learned Java programming, doing filmmaking, and being a DM who trains other DMs), she's not the kind of person who reads through BuzzFeed and Tumblr day and night, simply copying what other people are thinking.
Yes, being 17 means that she lacks experience but she's very capable of learning. You might not have been open to learning at 21 but you weren't speaking 6 languages either.
↑ comment by hairyfigment · 2015-07-22T08:28:17.741Z · LW(p) · GW(p)
You're the one making all sorts of claims about the need for, and traits of, "strong protectors", without any statistics. You're the one simultaneously claiming you'd be fine with giving Acty higher status than you, and using social tactics blatantly aimed at reducing her relative status - sometimes in the same sentence.
Replies from: None↑ comment by [deleted] · 2015-07-22T10:29:11.687Z · LW(p) · GW(p)
This is broader stuff than something reducible to a few stats. But on a basic common-sense level, if the starting point is fewer people getting killed, the most basic solution is bodyguards and so on. So that is at least sensible as a starter, instead of the Plan B of rewiring everybody's brain to not be hateful.
As for status, come on, that is something that happens between real people, while DeVliegendeHollander, Acty and hairyfigment are mere accounts. For all people know it could even be the same person behind all three accounts. Playing status with accounts one could throw away at any second and register three new ones in their place would be really, really stupid, so let's try not to accuse each other of something that simplistic. At least if you want to assume evil, assume a less banal kind. In fact I am thinking anyway that I should recycle DVH because it is getting too much karma and this account is developing something too much like a personality. I recycle on Reddit about every three weeks; maybe a three-month or six-month cycle would be good here. (The goal, of course, is to have ideas voiced by accounts that are not associated with former ideas voiced by those accounts, and thus to have their reception be less biased. Besides, it avoids accumulating this completely ridiculous karma thing.)
Replies from: Acty, TheAncientGeek, ChristianKl↑ comment by Acty · 2015-07-22T18:19:11.363Z · LW(p) · GW(p)
Yup, playing status with accounts would be kinda stupid. (That's why you should stop doing it.)
You know what would be especially stupid? If we lived in a world that accorded me higher status than you because of my general level of aggression, and I could end this entire argument with "I'm high status, you're low status, I'm right, you're wrong, shut up now".
Now, wouldn't that be a really stupid world...?
So tell me again why giving status and prestige to aggressive people is a great idea?
↑ comment by TheAncientGeek · 2015-07-22T11:42:17.282Z · LW(p) · GW(p)
You keep assuming there is a fixed background level of aggression, but that is just what left-wing thinkers believe can be changed.
Replies from: None↑ comment by [deleted] · 2015-07-22T12:02:25.904Z · LW(p) · GW(p)
I won't even argue that; it is a fact that it can be changed. Bicoastal America and NW Europe have managed to make a fairly large young college-educated middle class that is surprisingly docile. The issue is simply the consequences of the change and its permanence.
If you talked to any random Roman or Ancient Greek author about it, he would basically say: you guys are actively trying to get decadent and expect it will work out well? To give you the simplest potential consequence: doesn't it lead to reduced courage and motivation as well? This is precisely what we see in the above-mentioned group: a decrease in aggressivity correlates with an increase in social anxiety, timidity, and shyness, i.e. low courage, and with the kind of attitudes where playing video games can be a primary hobby, nay, even an identity.
From personal experience: as my aggression levels fluctuated, so did motivation, courage, happiness, self-respect and similar things. Not in the sense of fluctuating between aggressive and docile behavior, of course, but in the sense of needing to exercise a lot of self-restraint to always stay civil vs. not needing to.
You can raise the same questions about its permanence. The worst outcome is a lower-aggression group just being taken over by a higher-aggression one. Another potential impermanence comes from the fluctuation of generations. My father was a rebel (beatnik), so I had only rebellion to rebel against, and my own counter-revolutionary rebellion was approved by my grandfather :)
Finally, a visual type of explanation; maybe it comes across better. You can understand human aggressiveness as riding a high-energy engine towards a bad, unethical direction: having a lot of drive to do bad things. We can do two things: steer it away into a good direction, or just brake and turn off the engine. Everything we do in this direction seems more like braking than steering away. For example, if we were steering, we would encourage people to put a lot of drive into creative hobbies instead of hurting each other. Therefore, we would shame the living fsck out of people who don't build something. Yet we don't do this: we praise people who build, but we neglect to shame the lazy gamers. Putting it differently, we "brake" kids when they do bad stuff, but we don't kick their butts in order to do good stuff, so they end up doing mostly nothing. Every time a child or a youth would do something useful with a competitive motivation like "I'll show those lazy fscks", we immediately apply the brake. This leads to demotivation.
So in short, negative motivation can be suppressed. The issue is, it has consequences, it is probably not permanent, and it is really hard to replace with a positive one. Of course I am not talking about people like us, but more about the average.
Replies from: advael, Nornagest↑ comment by advael · 2015-07-22T19:17:25.269Z · LW(p) · GW(p)
Um, I fail to see how people are making and doing less stuff than in previous generations. We've become obsessed with information technology, so a lot of that stuff tends to be things like "a new web application so that everyone can do X better", but it fuels both the economy and academia, so who cares? With things like maker culture, the sheer overwhelming number of kids in their teens and 20s and 30s starting SaaS companies or whatever, and media becoming more distributed than it's ever been in history, we have an absurd amount of productivity going on in this era, so I'm confused where you think we're "braking".
As for video games in particular (Which seems to be your go-to example for things characteristic of the modern era that are useless), games are just a computer-enabled medium for two kinds of things: Contests of will and media. The gamers of today are analogous in many ways to the novel-consumers or TV-consumers or mythology-consumers of yesterday and also today (Because rumors of the death of old kinds of media are often greatly exaggerated), except for the gamers that are more analogous to the sports-players or gladiators or chess-players of yesterday and also today. Also, the basically-overnight-gigantic indie game development industry is pretty analogous to other giant booms in some form of artistic expression. Video games aren't a new human tendency, they're a superstimulus that hijacks several (Storytelling, Artistic expression, Contests of will) and lowers entry barriers to them. Also, the advent of powerful parallel processors (GPUs), a huge part of the boom in AI research recently, has been driven primarily by the gaming industry. I think that's a win regardless.
Basically, I just don't buy any of your claims whatsoever. The "common sense" ideas about how society improving on measures of collaboration, nonviolence, and egalitarianism will make people lazy and complacent and stupid have pretty much never borne out on a large scale, so I'm more inclined to attribute their frequent repetition by smart people to some common human cognitive bias than some deep truth. As someone whose ancestors evolved in the same environment yours did, I too like stories of uber-competent tribal hero guys, but I don't think that makes for a better society, given the overwhelming evidence that a more pluralistic, egalitarian, and nonviolent society tends to correlate with more life satisfaction for more people, as well as the acceleration of technology.
↑ comment by Nornagest · 2015-07-24T00:14:48.343Z · LW(p) · GW(p)
we praise people who build, but we neglect to shame the lazy gamers
I can't help wondering where you got this idea. The mainstream absolutely shames lazy gamers; they're one of the few groups that it's socially acceptable to shame without reservation, even more so than other subcultures seen as socially unproductive (e.g. stoner, hippie, dropout) because their escape of choice still carries a childish stigma. That's countered somewhat by an expectation of somewhat higher social class, but the "mom's basement" stereotype is alive and well.
Even other lazy gamers often shame lazy gamers, although that's balanced (for some value of "balance") by a lot of back-patting; nerd culture of all stripes has a strong self-love/self-hate thing going on.
↑ comment by ChristianKl · 2015-07-22T12:07:57.048Z · LW(p) · GW(p)
But on a basic common sense level, if the starting point is fewer people getting killed, the most basic solution is bodyguards and so on.
That assumes that the bodyguards never use violence and beat somebody to death. Simply increasing the number of people who can beat other people up doesn't automatically reduce violence. South Africa has a lot of bodyguards and it's still a lot more violent than states in Europe.
Giving the government the monopoly on violence is a standard Enlightenment idea that works well if your state hires enough policemen and the citizens believe in the rule of law.
Playing status with accounts one could throw away at any second and register three new ones in their place would be really, really stupid, so let's not accuse each other of something that simplistic.
Status interactions are deeply ingrained in the way humans interact with each other. It's not something that gets shut down just because a discussion is online.
↑ comment by Acty · 2015-07-21T16:26:23.838Z · LW(p) · GW(p)
Opponent is a word. Here, it refers to the person advocating the opposite view to mine. If you would like, I can use a different word, but it will change very little. Arguing over semantics is not a productive way to cause each other to update. Though to be honest, I ceased having much hope that you were in this discussion for the learning and updates when you started using ad hominem and fully general counterarguments. (Saying that your opponent is defensive and emotional and "opponentist" is also a fully general counterargument and also ad hominem. "Not even blaming me" for not agreeing with you is another example with an extra dash of emotive condescension. You have a real talent.)
Quite often, people have useful information about themselves because they know themselves quite well. I'm a useful data point when I'm thinking about stuff that affects me, because I know more about myself than I know about other examples. But I could also point out other examples of women in my community who are protectors. For instance, I know a single mother who is not only a national-level athlete but had to rush each of her children to hospital for separate issues four times in the last week. Twice it was because their lives were threatened. She stays strong and protects them fiercely, keeps up with her life and her training, and is frankly astonishingly brave. She is far, far more of a "traditional strong figure" than any man I have ever met. Of course, this is still anecdata. I haven't got big quantitative data because I can't think of a test for protectorness that we could do on a large scale; can you suggest one?
Your idea, as I understood it, was that men can carry out protective roles and therefore they should have high social status and prestige. I think this is a pretty good example of what I've heard called the Worst Argument in the World. I believe that protective and self-sacrificing individuals should be accorded high prestige. I agree that protectiveness can loosely correlate with being male. But protective women exist in high numbers, and non-protective men exist in high numbers, and many women exist who are significantly better at protectiveness than the average male. According protective women low prestige because they are women, and according useless men high prestige because they are men, is an entirely lost purpose. It is irrational sexism, pure and simple. You're doing the same thing as people who say "Gandhi was a criminal, therefore Gandhi should be dismissed and given low social status." You're saying that it would be good if people said, "Individual X is a male, therefore he should be accorded high prestige and conscripted. Individual Y is a female, therefore she should be given low prestige and not conscripted" even if X doesn't fit the protective-and-strong criteria and Y does fit the protective-and-strong criteria. Forcing protective strong women to stop doing that and accept low prestige, and forcing non-protective weaker men to try and fill protective roles, just hurts everyone.
You still haven't answered my question. You want to make a society where men get conscripted (an astonishingly rare event in a modern liberal democracy, by the way...) and protect those around them, and in return get high prestige. I know, and I presume you also know, numerous men who would be unsuitable for conscription and don't protect those around them. Some women would be perfectly suitable for conscription, and protect those around them. Why do those women not deserve the prestige that you want to give all the men?
Can you also tell me why you think "the same stuff the mainstream media, BuzzFeed or Tumblr pouring on day and night" is necessarily uninteresting/wrong? Shouldn't a large number of people agreeing with an ethical position usually correlate with that ethical position being correct? I mean, it's not a perfect correlation, there are exceptions, but in general people agree that murder and rape and mugging are undesirable, and agree that happiness and friendship and knowledge are desirable. Calling a position popular or fashionable should not be an insult and I am intrigued by how you could have come up with the idea that something that is "done to death" must be bad. Has "murder is wrong" been "done to death"?
If this conversation keeps going downhill, I'm just going to disengage. It is rather low utility.
Replies from: None↑ comment by [deleted] · 2015-07-22T08:01:41.243Z · LW(p) · GW(p)
This is getting more interesting now. To sum up the history of things: you had this discussion with VoiceOfRa and you stressed that you primarily want to save people from getting killed. I butted in and proposed that you don't have to redesign the whole world to do that; it is possible in a traditional setup as well. It turned out we are optimizing for different things. I am trying to preserve older-time stuff while also changing it to the extent needed to address real, actual complaints of various people and work out compromises (calling it moderatism or moderate conservatism would be OK), while you are more interested in tearing things down and building them up. OK. But I think there are more interesting things lurking under the surface here.
An offer: let's retreat a few meta levels up, and get back to this object level later on.
IRL I discuss pretty much everything with people inside my age range (30-60), which means we rely not only on our intelligence and book knowledge (you obviously have immense amounts of both) but on our life experience as well. That is a difficult thing to convey because it is something that is not even learned in words (so there are no good books that sum it up, to my knowledge) but through non-verbal pattern recognition. Yet it is pretty much this thing, this life experience, that makes the difference between your meta-level assumptions and mine. What can one do? I will try the impossible and try to translate it into words. Don't expect it to go very well, but maybe a glimpse will be transmitted. Also, my tone will be uncomfortably personal and subjective, because by definition this is something happening inside people's heads.
When I was 17 I assumed, expected and demanded the world to be logical and ethical. I vibrated between assuming it and angrily demanding it when I found that was not the case. University did not help - it was a very logical and sheltered environment.
When I started working (25) I had to realize how truly illogical the world is, and not because it was waiting for Mr. Smart Guy to reorganize it, but because most people are plain simply idiots. I worked at implementing business software, like order processing, accounting and MRP. Still do. I had to face problems like this: when order-processing employees did not know the price of an item, they just invoiced it at zero price. Gave it away for free. The managers were no better; instead of doing something sensible (such as incentive pay, or entering a price list for everything into the software and not letting the employees change it), they demanded a technical solution from us, like not allowing a zero price and just giving an error message. From then on, the folks used a price of 1 currency unit when they did not know the correct price. The irony was really breathtaking - if there is one thing that is supposed to be efficient in this sorry world, it is corporations chasing profits: and they were incompetent at that very basic level of not giving away stuff for free. (They were selling fertilizers to farmers, in the kind of place where paved roads are rare.)
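(A minimal sketch, with hypothetical item codes, prices and function names, of why that kind of check cannot fix the underlying problem: it only blocks the one value management complained about, whereas a maintained price list removes manual price entry altogether.)

```python
# Hypothetical illustration only; the real system was a commercial ERP package.

PRICE_LIST = {"FERT-001": 125.0, "FERT-002": 89.5}  # made-up items and prices

def validate_invoice_line(item_code, unit_price):
    """The fix management asked for: refuse the blatantly wrong case."""
    if unit_price == 0:
        raise ValueError("%s: zero price is not allowed" % item_code)
    # Nothing here stops an employee from typing 1 instead of 0,
    # which is exactly the workaround that appeared next.
    return unit_price

def priced_invoice_line(item_code):
    """The fix that addresses the root cause: never trust manual price entry."""
    return PRICE_LIST[item_code]  # fails loudly if no price is maintained
```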
My first reaction was rage and complaining about humanity's idiocy. I was sort of similar to Reddit /r/atheism. Full of snark.
I would say the life-experience part was not so much learning the basic rule, namely that most folks are idiots, but really swallowing and digesting it, learning to resign myself to it, accept it, and see what can be done. That was what took long. I had to accept a Heideggerian "we are thrown into this shit and must cope, anyhow". This is my sorry species. These zombies are my peers. If I want to help people, I need to help them on their level.
First of all, I had to accept a change in my ethics: making most people happy is an impossible goal. The best I can do is support ideas that prevent the dumbass majority from shooting themselves in the foot in the worst ways. Take a wild guess: will the majority of those types of ideas more often be classified as "liberal" or "conservative"?
Second, I had to realize that I was a selfish ass when I was young and "liberal". I wanted things to be logical, hence I wanted things to suit people who are logical. I wanted things to suit people like me. I wanted a world optimized for me and my folk, for intellectuals. That would be a horrible world for the vast majority of people who can't logic. They need really foolproof and dumbed-down systems that cut with the grain of their illogical instincts, not against it.
The corollary of all this was that I should support rules and systems that I hate and would never obey. I had to become a hypocrite, e.g. to support keeping drugs illegal (because I saw that fools, i.e. normal people, would only get more foolish from them) while being perfectly accepting and supportive of my intellectual friends who got great philosophical insights from them. The non-hypocritical solution would have been, of course, different rules for different people. Yes, that is actually one thing liberal democracy is not so good at, so hypocrisy was the only way to deal with it.
It was during this rather horrified awakening process that I came across a really unusual book, Theodore Dalrymple's Life At The Bottom. It is a book that would probably be classified as "conservative", but it was refreshingly non-ideological; it was about the experiences collected over decades of working as a psychiatrist treating the underclass of Birmingham, UK. I was actually living there at the time, attracted by the manufacturing prowess of the region (good for me career-wise), and the book was actually able to explain the high levels of WTFery I experienced every day. Such as: I'm smoking a cig outside (I have my own idiotic side as well), a mother walks by with a small kid in a pram and a boy of about nine, and the boy walks up to me and asks for a cigarette. I look at the mom, completely astonished. Blank face. WTF do I do now? At any rate, Dalrymple said the basic issue is that intellectuals made rules that worked very well for themselves, such as atheism, sexual liberty and similar things. He was of course an atheist too. But according to him this wreaked misery amongst the low-IQ underclass, they really needed their old churches and traditions and Gods Of The Copybook Headings to restrain their impulsivity and bad choices. Rules that work well for intellectuals ("liberal" rules) don't work well for everybody else, at all. Maybe if I wanted to offer a book that is life experience translated to words, even though that is not really possible, it is that book. I can only add that it is not made up; I really saw these folks. I had to change my political and social views completely. I could no longer demand the kind of stuff that makes sense for ethical and intelligent people. I had to learn to demand stuff I would personally dislike.
I either have to demand really dumbed-down things, or design things up from the assumption of stupidity. Which means either intellectuals sacrificing their own interests and accepting a world made of stupid, or publicly supporting rules while privately wiggling out of them (the Victorian era), or different rules for different people (aristocracy). Cont. below
Replies from: Acty, None, TheAncientGeek, TheAncientGeek↑ comment by Acty · 2015-07-22T17:43:57.505Z · LW(p) · GW(p)
Um, how exactly do you want to preserve older things while I want to tear everything down and build it back up again? I don't want to tear things down. I want the trends that are happening - everything gets fairer and more liberal over time - to continue. To accelerate them if I can. (To design a whole new State if and only if it seems like it will make most people much happier, and even then I kinda accept that I'd need to talk to a whole lot of other people and do a whole lot of small scale experiments first.) None of those trends are making society end in fire. They're just nice things, like prejudiced views becoming less common, and violence happening less. I'm trying to optimise for making people happy; if you're optimising for something else, then I'm afraid I'm just going to have to inform you that your ethics are dumb. Sorry.
The problem with "life experience" as an argument is that people use "life experience" as a fully general counterargument. If you're older than me, any time I say something you don't like, you can just yell "LIFE EXPERIENCE!" and nothing I can do - no book you can suggest, nothing I can go observe - will allow me to win the argument. I cannot become older than you. This would be fine as an argument if we observed that older people were consistently right and younger people were consistently wrong, but as you'll know if you have a grandparent who tries to use computers, this just ain't so.
Just because you have been made jaded and cynical by your experience of the world, that doesn't mean that jaded and cynical positions are the correct ones. For one thing, most other people of your own age who also experienced the world ended up still disagreeing with you. There's a very good chance that I get to the age you are now, and still disagree with you. And if you observe most other people your own age with similar experiences (unless you're old enough that you're part of the raised-very-conservative generation) most of them will disagree with you. What is magic and special about your own specific experience that makes yours better than all those people who are the same age as you? Why should I listen to your conclusions, backed up by your "life experience", when I could also go and listen to a lefty who's the same age as you and their lefty conclusions backed up by their "life experience"?
Life experience can often make people a lot less logical. Traumatised people often hold illogical views - like, some victims of abuse are terrified of all men. That doesn't mean that, because they have life experience that I don't, I should go, 'oh, well, I actually think all men aren't evil, but I guess their life experience outweighs what I think'. It means that they've had an experience that damaged them, and we should have sympathy and try and help them, and we should even consider constructing shelters so they don't have to interact with the people they're terrified of, but we certainly shouldn't start deferring to them and adopting their beliefs just because we lack their experiences. The only things we should defer to should be logical arguments and evidence.
"Life experience", as a magical quality which you accumulate more of the more negative things happen to you, is pretty worthless. Rationalists do not believe in magical qualities.
If there is a version of the "life experience" argument that can be steelmanned, it's "I've lived a long time and have observed many things which caused me to make updates towards my beliefs, and you haven't had a chance to observe that." But that still makes no sense because you should be able to point out the things you observed to me, or show your observations on graphs, so I can observe them too. If you observed someone being a total idiot, and that makes you jaded about the possibility of a system that requires a lot of intelligence to function, then you should be able to make up for the fact that I haven't observed that idiocy by pulling out IQ charts or studies or other evidence that most people are idiots. If you can't produce evidence to convince someone else, consider that your experience may be anecdata that doesn't generalise well.
Perhaps your argument is "I've lived a long time and have observed very many very small pieces of evidence, and all of those small pieces of evidence caused me to make lots of very small updates, such that I cannot give you a single piece of evidence which you can consider which will make you update to my beliefs, but I think mine are right anyway." However, even if you can't give me a single piece of evidence that I can observe and update on, you ought to be able to produce graphs or something. Graphs are good at showing lots-of-small-bits-of-evidence-over-time stuff. If you want me to change my beliefs, you still have to produce evidence, or at least a logical argument. Appeals to "life experience" are nothing more than appeals to elder prestige.
Now, to address your actual argument. It seems to be (correct me if I misunderstand): liberal views are correct and work, but there are many stupid people in the world and liberal views don't work well for stupid people. Stupid people need clear rules that tell them in simple terms what to do and what not to do.
But if I were going to make up a set of really clear simple rules to tell a stupid person what to do and what not to do, they would be something like:
1. Be nice to people and try not to hurt people.
2. Don't try and prevent clever people from doing things you don't understand. Listen to the clever people.
3. Try and be productive and contribute what you can to society.
I cannot see any evidence that adding more rules beyond those three, like, "Defer to males because male-ness is loosely correlated with prestige-wanting and protectiveness" (even though male-ness isn't correlated with prestige-deserving, that seems about equal between genders) would do any good whatsoever. Since there are just as many stupid males as stupid females - male average IQ is actually slightly lower than female average IQ - a rule like that wouldn't do any good, would certainly not prevent people letting young kids have cigarettes or people beating one another or anything like that, and would in fact just lead to a lot of stupid people going "oh, it must be okay to hurt and disparage females and not let them have education then" and going around hurting lots of women.
And the view of mine, that liberal views don't hurt people and sexist/racist views do hurt people... certainly seems to fit the evidence. Liberal views are increasing over time, and crime and violence are decreasing over time. I can't find any studies, but I predict that if we do a study, we'll find that holding the belief "women and men are equal" is strongly correlated with being non-violent and not hurting women, and almost all of the violence and abuse cases committed against women will be by people who don't think they are equal. Similarly, doing stupid things like giving cigarettes to children and giving away products for free will be correlated with holding conservative views - though admittedly that could just be 'cause being uneducated is correlated with holding conservative views. And because men are on average more violent and more likely to hurt people. But then, that's a very good argument against putting them in charge, isn't it...?
Replies from: Lumifer, Jiro↑ comment by Lumifer · 2015-07-22T17:52:30.699Z · LW(p) · GW(p)
Acty, a question. What are your information sources? This is a very general question -- I'm not asking for citations, I'm asking on the basis of which streams of information you form your worldview.
For example, for most people these streams would be (a) personal experiences; (b) what other people (family, friends) tell them; (c) what they absorb from the surrounding culture, mostly from mainstream media; and (d) what they were taught, e.g. in school.
You use phrases like "I cannot see any evidence" -- where can you not see evidence? Who or what, do you think, reliably tells you what is happening in the world and how the world works?
Replies from: Acty↑ comment by Acty · 2015-07-23T01:20:02.470Z · LW(p) · GW(p)
--
Replies from: Lumifer↑ comment by Lumifer · 2015-07-23T02:32:27.730Z · LW(p) · GW(p)
Thanks for the extended answer. If I may make a couple of small suggestions -- first, in figuring out where your views come from, look at your social circle, both in meatspace and on the 'net. Bring to mind people you like and respect, people you hope to be liked and respected by. What are their views, what kind of positions are acceptable in their social circle and what kind are not? What is cool and what is uncool?
And second, you are well aware that your views change. I will make a prediction: they will continue to change. Remember that and don't get terribly attached to your current opinions or expect them to last forever. A flexible, open mind is a great advantage, try not to get it ossified before time :-)
Replies from: Acty↑ comment by Acty · 2015-07-25T02:44:24.757Z · LW(p) · GW(p)
--
Replies from: Lumifer↑ comment by Lumifer · 2015-07-27T15:41:10.556Z · LW(p) · GW(p)
but if I expected to change my mind about something in the future, surely I'd just change it now?
No, because you don't know (now) in which direction you will change your mind (in the future).
As a general observation, you expect to learn a lot of things in the future. Hopefully, you will update your views on the basis of things you have learned -- thus the change. But until you actually learn them, you can't update.
↑ comment by Jiro · 2015-07-22T18:48:18.116Z · LW(p) · GW(p)
Liberal views are increasing over time
Here's at least one that isn't:
https://www.washingtonpost.com/news/volokh-conspiracy/wp/2014/02/17/growth-chart-of-right-to-carry/
This is also complicated by the fact that views that go out of favor tend to be characterized as not liberal regardless of whether they actually were liberal. Eugenics is one of the better known examples.
I can't find any studies, but I predict that if we do a study, we'll find that holding the belief "women and men are equal" is strongly correlated with being non-violent and not hurting women, and almost all of the violence and abuse cases committed against women will be by people who don't think they are equal.
Saying "I don't have a study, but I predict that if I do a study I will see X" is no better than just asserting X.
Also, "women and men are equal" is vague. Do you want equality of opportunity or equality of outcome?
Even "violence against women" is vague. ISIS likes to kidnap women, but they also like to kill all the men at the time they are kidnapping the women. So ISIS causes the absolute amount of violence against women to increase, but the relative amount (compared to violence against men) to decrease.
↑ comment by [deleted] · 2015-07-22T08:01:51.947Z · LW(p) · GW(p)
Continued from above
And now a bit closer to the object level of our previous discussion. Look at this mess about sex, gender, orientation, roles and whatnot. If it was about designing rules for an island where only smart people live, it would be fairly obvious: completely do away with terms like man, woman, feminine, masculine, straight, gay. Measure T levels and simply establish a hormonal gender based on that, a sliding scale, divided into quintiles or however you want. Do away with straight or gay - people can date however they wish, but expect that most people will choose a partner from the other end of the scale than themselves, because people naturally converge to dom-sub, top-bottom setups. Give high-T people more status because they really seem to crave it [1], while low-T people are comfortable enough with being deferential to them; in return, make high-T people willing to risk their lives protecting low-T people, who should have things safe and comfortable. Base everything on this - risky venture-capital-type stuff for the high end, predictable welfare/union-job stuff for the low end.
This would make sense, right? I must stress that this hormonal scale would be independent of biological sex, social gender or orientation - those are abolished in this scenario. Why, I find it likely you would end up higher on the T-based gender scale than me although I am a guy, since you seem to give off more aggressive vibes than I do (I don't mean it in a bad way, your engines are just more fired up, which is good if you use it for good things). And of course this scenario was made for an island populated only by smart people.
And then back to the real world. To have any chance of it working for this sorry, stupid species of humankind, it has to be dumbed down mercilessly. The average human's cognitive level is about Orwell's "two legs bad, four legs good". Of course by dumbing down we lose accuracy and efficiency. Of course by dumbing down quite a few people won't fit. Still it has to be done. And now you tell me: if you want to make it really, really primitive, if you want to dumb it down to the point where a blatantly obvious biological switch determines whether people are treated as belonging to the high end or the low end, what switch would that be? Which switch would be the most obvious?
And then of course you end up with a system you don't like, and most intellectuals don't like - it is a really poor fit for intellectuals. At this point the choices are: sacrifice humankind for intellectuals? Sacrifice intellectuals for humankind? Make the rules that fit humankind official while letting intellectuals bend the rules in private? Establish an aristocracy and make different rules for intellectuals? Archipelago?
Huh. This would be the meta arc of these things. If you still want, we can return to stuff like whether it is the WATW, or whether a large number of people agreeing with something usually makes it ethically correct, and so on, because I would then interpret those objections in this light, but I hope I managed to resolve them with this.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-22T12:31:38.688Z · LW(p) · GW(p)
Give high-T people more status because they really seem to crave it
Low-testosterone people also want status.
Replies from: None↑ comment by [deleted] · 2015-07-22T12:46:32.016Z · LW(p) · GW(p)
Then why do the linked researchers use it as a marker of that?
Maybe it is about status of a different kind. Maybe this reduces to "status" not being a well-defined word, and we always get into rather unresolvable debates because of that.
Let's try this: there are two kinds of status. One is more like being a commanding officer, or a schoolteacher towards kids; it is respect and deference and maybe a bit of fear (it does not translate 1:1 to dominance, although it is close). The other kind is being really popular and liked. Remember school: the difference between the teacher's status and the really likeable kid's status is key here. The CEO vs. the Louis CK type of well-liked comedian. The politician vs. the best ballet dancer.
To the linked article: being good at math does not make one liked. It makes one respected. Closer to the first type.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-22T13:35:55.127Z · LW(p) · GW(p)
We don't live in a world where low-testosterone people want high-testosterone people to have more power.
Yes, I want policemen to have a certain kind of status, but policemen don't need to be high-testosterone. Empathic policemen who are actually good at reading other people have advantages in a lot of police tasks.
Schoolteachers need respect but they don't need to be high-testosterone either. A teacher does a better job if he understands the student.
Replies from: Acty, None↑ comment by [deleted] · 2015-07-22T15:36:41.744Z · LW(p) · GW(p)
You should really think that over. For the police example, we have two conflicting requirements, first is to be so scary (or more like respect-commanding) that he rarely needs to get physical, which is always messy. The opposite requirement is the smiling, nice, service-oriented, positive-community-vibes stuff, like the policemen who visited me and told me I forgot to lock my car. (Reading people is more investigator-level stuff, not street level.) The question is, which one is more important? If you can have one optimal and one less optimal, which one is better? Similarly we want teachers to be kind and understanding with students with genuine difficulties, but more scary to the kind of thuggish guys half my class was made of.
The point is really how pessimistic or cautious your basic view is. Do you see the police as a thin blue line separating barbarism from civilization? Do you see every generation of children as "barbarians needing to be civilized" (Hannah Arendt)? Or do you have a more optimistic view, seeing the vast majority of people behaving well and the vast majority of kids genuinely trying to do good work?
Furthermore - and this is even more important - would you rather err on the pessimistic side and thus slow down progress, or on the optimistic side and risk systemic shocks?
I think you know my answers :) I have no problem with a slow and super-safe progress. I don't exactly want to flee from the present or the past.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-22T16:06:59.956Z · LW(p) · GW(p)
For the police example, we have two conflicting requirements, first is to be so scary
No, policemen don't need to be scary to do their jobs. A police force that can be counted on to do a proper investigation and punish criminals can deter crime even when the individual policemen aren't scary.
A policeman being scary can also reduce the willingness of citizens to report crimes because they are afraid of the police. Whole minority communities don't like to interact with the police and thus try to solve conflicts on their own with violence because they don't trust the police.
Similarly we want teachers to be kind and understanding with students with genuine difficulties
Being empathic is not simply about being "kind and understanding"; it gives the teacher more information about the mental state of a student, making it more likely that he will notice when a student gets confused and react to it. A smart student also profits when the teacher gets that the student already understands what the teacher wants to tell him.
The point is really how pessimistic or cautious your basic view is. Do you see the police as a thin blue line separating barbarism from civilization? Do you see every generation of children as "barbarians needing to be civilized" (Hannah Arendt)? Or do you have a more optimistic view, seeing the vast majority of people behaving well and the vast majority of kids genuinely trying to do good work?
Neither. I care more about the expected empirical result of a policy than about whether people are inherently good or inherently evil. I think you think too much in categories like that and care too little about the empirical reality that we do have less crime than we had in the past.
It's like discussing global warming not on the basis of scientific data about temperature changes but on whether we are for the environment or for free enterprise. The nonscientific framing leads to bad thinking.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-23T03:24:07.147Z · LW(p) · GW(p)
A policeman being scary can also reduce the willingness of citizens to report crimes because they are afraid of the police. Whole minority communities don't like to interact with the police and thus try to solve conflicts on their own with violence because they don't trust the police.
It's not clear how much of that is distrust of the police and how much is them being more afraid of the kingpins of their ethnic mafias than they are of the police.
↑ comment by TheAncientGeek · 2015-07-22T11:57:12.488Z · LW(p) · GW(p)
But according to him this wreaked misery amongst the low-IQ underclass, they really needed their old churches and traditions and Gods Of The Copybook Headings to restrain their impulsivity and bad choices
Have things steadily been marching in the direction of greater liberalism, though? There are a bunch of things you can no longer get away with, such as spousal abuse.
Replies from: None↑ comment by [deleted] · 2015-07-22T12:09:28.017Z · LW(p) · GW(p)
That was precisely what TD wrote: they got away with it all the time. Just not, of course, amongst educated or middle-class people. An underclass woman gets admitted to the hospital with a broken arm, the docs find it unlikely it was an accident, and arrange a meeting with the local psychiatrist, i.e. TD. Yeah, it was the boyfriend in a fit of jealousy. Police, testimony? Nope. Why not? She's making up all kinds of excuses for the guy. OK, maybe she is scared; explain about the battered women's shelter where nothing bad can come to her. Still no, and she does not look scared or dependent or anything. Finally the reason becomes clear: the "bad boy" is sexually exciting. And according to TD this happened all the time. A 16-year-old boy taken into psychiatric treatment, suicidal. Mom's latest boyfriend was beating the boy's head into the concrete. Why won't mom call the police? She said "he is a good shag" and the boy should rather try not to be at home too much to avoid him. And so on, endless such stories. Get that book...
I was actually getting a weird impression from that book. The primary reason laws and institutions are increasingly aware of spousal abuse and trying to deal with it is not that it gets noticed more today, but that it actually happens more than in the past, when communal pressure could have worked against it. TD seems to imply that people's behavior got worse faster than the laws or police procedures evolved, so it is a net negative.
Replies from: TheAncientGeek, TheAncientGeek↑ comment by TheAncientGeek · 2015-07-22T15:35:17.173Z · LW(p) · GW(p)
I was actually getting a weird impression from that book. The primary reason laws and institutions are increasingly aware of spousal abuse and trying to deal with it is not that it gets noticed more today, but that it actually happens more than in the past, when communal pressure could have worked against it.
Is that backed by figures?
Replies from: None↑ comment by [deleted] · 2015-07-22T15:37:49.538Z · LW(p) · GW(p)
Nope, TD is an essayist.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2015-07-22T15:51:58.204Z · LW(p) · GW(p)
Casual assumptions that things were better in the past, or that the author's favoured theories must have worked, need to be taken with some scepticism.
↑ comment by TheAncientGeek · 2015-07-22T13:33:30.035Z · LW(p) · GW(p)
His point is not that anyone who wants to batter can; it is that the nonzero amount left is due to collusion.
↑ comment by TheAncientGeek · 2015-07-22T10:44:00.290Z · LW(p) · GW(p)
I was actually living there
I used to as well. This is getting spooky.
Replies from: None↑ comment by [deleted] · 2015-07-22T10:57:28.254Z · LW(p) · GW(p)
Second biggest city of the world's fourth biggest economy at that time? Not really a huge coincidence IMHO. Sort of in the world top 20-30 list of places to work on one's career. Not really in the top 20 to enjoy life, though. (OK, it wasn't too bad: we had a park behind the house where bands played frequently, it turned out Caribbean takeaways are really delicious, and going to Godskitchen was a teenage dream come true 15 years later.)
↑ comment by ChristianKl · 2015-07-21T14:39:14.681Z · LW(p) · GW(p)
For example nobody talked about making Western liberal democracies like third-world hellholes, it was about making them like their former selves when crime levels were lower, violence was lower, people were politer, people were politer with women and so on.
Crime levels are lower in the West than they were in the past. It's only media mentions of crime that have risen, which results in a majority of the population believing that crime rates haven't fallen. Violence is down.
We haven't gotten increased politeness toward women as measured by factors like the number of men who open doors for women. On the other hand we have a lot more equality than we had in the past. Feminism was never about demanding politeness.
Replies from: None↑ comment by [deleted] · 2015-07-21T15:24:51.187Z · LW(p) · GW(p)
Levels, not rates. Rates are largely about the police trying to look good in numbers. It is seriously difficult to quantify these things properly. One distinct impression I have is that violent behaviors escaped the lower classes and middle-class people stopped being so sheltered from them. Perhaps if I could find a database relating the education level of the victims of violent crimes I could quantify that better. The 1900s-to-1920s idea of a romantic and dangerous "underworld" went away (example), yet it affects middle-class people far more; from their angle of life experience, life got more dangerous.
It is true that feminism is not about politeness; however, politeness and preventing violence specifically (as that was seemingly the core issue raised) are closely related. A normal fella is not going to say "please, good sir/madam" and then suddenly head-butt him/her. Formality is a way to avoid the kind of offense that gets retaliated physically, or a way to see if the other is peaceful and reliable, because if the other does not talk in a non-aggressive way then he is more likely to behave more aggressively physically, and thus avoidance is advised. This is why it matters: not that relevant to feminism, but relevant to safety, to people being physically hurt and so on.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-21T16:14:20.642Z · LW(p) · GW(p)
Levels, not rates. Rates are largely about the police trying to look good in numbers. It is seriously difficult to quantify these things properly.
There are victimization surveys that verify lower crime levels independently of the amount of crime the police deals with.
We can discuss whether it's lead or clever new crime-fighting techniques that are the cause of lower violence, but the expert consensus on the subject is that crime is down, just as the expert consensus on global warming is that temperatures are up.
yet it affects middle-class people far more; from their angle of life experience, life got more dangerous.
How do you know?
Formality is a way to avoid the kind of offense that gets retaliated physically
But it's not the only way. I'd rather have a culture where people hug each other, are nice to each other and also openly speak about their concerns, even when that makes other people uncomfortable.
It might be that you personally prefer formality but other people don't. Don't project your desire for formality onto other people. It's a right wing value and right wing values lost ;) Right wing values losing is no reason why an idealistic youth should be pessimistic.
↑ comment by VoiceOfRa · 2015-07-21T23:17:33.461Z · LW(p) · GW(p)
You know what else is a good way to stop people being killed? Create a liberal democracy where people are equal. So far in history, that has kinda correlated... really strongly... with less people dying. There is both less war and less crime.
Um, if you want a society with less crime try Singapore or places like Shanghai. Hell, even Japan has much lower crime rates despite being more patriarchal than western liberal democracies.
Being "strong" in a meaningful way, in the modern world, means being intelligent. Smart people can use better rhetoric, invent cooler weapons, and solve your problems more easily.
Yes, and that will help you so much when someone tries to punch/rob/rape you.
↑ comment by VoiceOfRa · 2015-07-21T06:11:34.233Z · LW(p) · GW(p)
Believing in equality of opportunity =/= believing in equality of outcome =/= believing in communism =/= being willing to kill people to make communism happen.
Actually falsely believing in equality of ability => being willing to kill to make equality happen. The chain of reasoning goes as follows:
1) As we know, all people/groups are of equal ability, but group X is more successful than other groups, thus they must be cheating in some way; we must pass laws to stop the cheating/level the playing field.
2) We passed laws to level the playing field but group X is still winning, they must be cheating in extremely subtle ways, we must pass more laws to stop/punish this.
3) Group X is still ahead, we must presume members of group X are guilty until proven innocent, etc.
If you are seriously suggesting that believing that it is wrong for people to hurt one another, so if you're hurting someone on grounds of their race, you should stop somehow leads to wanting to have a repeat of Cambodia and kill all the educated people
No that's not what I'm saying. In the grandparent you said:
If I say that I am opposed to racism, and someone immediately leaps to defend their right to read whatever scientific studies they like - completely ignoring all of the other things that racism refers to, like you know, genocide, which I think we can agree is a pretty bad thing - then that reveals a set of values which are kinda disturbing to me. It signals that you care about whether you can read IQ-by-race-and-gender studies more than you care about genocide and acid attacks and lynchings, and would rather yell at me about the possibility that I might oppose you reading IQ studies rather than agree with me that people murdering one another is a bad thing.
My point is that not being able to read IQ-by-race-and-gender studies is likely to lead to a repeat of Mao/Pol Pot. Thus being extremely concerned about being able to read them is a perfectly rational reaction.
I want to learn social science, do research to figure out what will make people happiest, and then do that.
Unfortunately, as we've just established you have very false ideas about how to go about doing that. Furthermore, since these same false ideas are currently extremely popular in academia, going there to study is unlikely to fix this.
↑ comment by VoiceOfRa · 2015-07-19T20:30:09.301Z · LW(p) · GW(p)
Talking about how angry I am about them IRL gets me labelled weird, and with my family, told to shut up or I'll be kicked out of the room/car/conversation/etc.
Where you live is more than just your immediate family.
You also assume that I oppose 'perfectly valid Bayesian inference', as if that's the only thing that can be meant by opposing racism and sexism.
Well technically one could define "sexism" and "racism" however one wants; however, in practice that's not how most people who oppose them use the words.
but a lot of people have trouble on updating on the fact that the individual they're faced with doesn't fit the trend.
That's because usually the individual does fit the trend. In fact these days people tend to under update for fear of being called "racist" and/or "sexist".
I don't know why you automatically leap to assuming that I am really angry about, say, people reading studies comparing male and female IQs when what I'm actually angry about is,
So are you also angry about what happened to Watson?
say, people beating LBGTQA+ individuals to death in dark alleys (which I am presuming you would not defend).
Are you also angry about people beating people without those psychological issues in dark alleys? The latter is much more common. Are you angry about, say, what happened in Rotherham and the ideology that led to it being covered up? What about all the black-on-black violence in inner cities that no one seems to care about and that cops don't want to stop for fear of being called "racist" when they disproportionately arrest black defendants?
Some people use a slight statistical trend indicating a small difference in X to say that all members of a minority must be completely lacking in X and therefore it's okay to hate them.
Do you know what the word "hate" means? I've seen it thrown around to apply to lots of situations where there is no actual hate involved. Furthermore, in the rare cases where I've seen actual hate, well, like you yourself said later, "emotion is arational" and hate is sometimes appropriate.
I'm a utilitarian.
Yet earlier you said "I'm against beatings and murder in general, really." Do you see the contradiction here? Do you oppose some beatings and killings [your example wasn't murder since it was legal] even if they increase utility?
Replies from: Acty↑ comment by Acty · 2015-07-20T23:04:42.916Z · LW(p) · GW(p)
--
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-21T01:06:22.239Z · LW(p) · GW(p)
I am angry about everyone who has ever been beaten up in a dark alley. I think people should not be beaten up in dark alleys. I am angry about racism and sexism and homophobia and transphobia because they seem significant causes of people being beaten up in dark alleys
I agree they "seem" that way if you only superficially read the news. If you pay closer attention one notices that (at least in the US) fear of being precised as "racist" is a much larger cause of people being beaten up in dark alleys (and occasionally in broad daylight). It is the reason why cops don't want to police high crime (black) neighborhoods, why programs that successfully reduce crime (like stop and frisk) are terminated.
Hatred of human beings is almost never appropriate. Hatred of things is fine.
I would argue the exact opposite. Hatred and anger evolved as methods that let us pre-commit to revenge/punishment by getting around the "once the offense has happened it's no longer in one's interest to carry out the punishment" problem. They do this by sabotaging one's reasoning process to keep one from noticing that carrying out the punishment is not in one's interest. Applied against things, i.e., anything that can't be motivated by fear of punishment, all one gets is the partially sabotaged reasoning process without any countervailing benefits.
In fact, I don't think it's possible to be angry at a 'thing' like a disease. In order to do so one must either anthropomorphize the disease or actually get angry at some people (like say those people who refuse to give enough money to research for curing it).
comment by CurtisSerVaas · 2015-03-23T02:30:43.217Z · LW(p) · GW(p)
I'm a long-time user of LW. My old account has ~1000 karma. I'm making this account because I would like it to be tied to my real identity.
Here is my blog/personal-workflowy-wiki. I'd like to have 20 karma, so that I can make cross-posts from here to the LW Discussion.
I'm working on a rationality power tool. Specifically, it's an open-source workflowy with revision control and general graph structure. I want to use it as a wiki to map out various topics of interest on LW. If anybody is interested in working on (or using) rationality power tools, please PM me, as I've spent a lot of time thinking about them, and can also introduce you to some other people who are interested in this area.
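(A minimal sketch, assuming hypothetical class and field names, of what a graph-structured outliner node with revision control might look like; the actual design of the tool described above may differ entirely.)

```python
# Hypothetical sketch of a workflowy-like node that is part of a general
# graph (multiple parents allowed) and keeps its own revision history.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Revision:
    text: str
    timestamp: datetime

@dataclass
class Node:
    text: str
    parents: List["Node"] = field(default_factory=list)   # general graph, not just a tree
    children: List["Node"] = field(default_factory=list)
    history: List[Revision] = field(default_factory=list)

    def edit(self, new_text: str) -> None:
        # Record the old text before overwriting, so edits can be reviewed or reverted.
        self.history.append(Revision(self.text, datetime.now()))
        self.text = new_text

    def link_child(self, child: "Node") -> None:
        self.children.append(child)
        child.parents.append(self)

# Usage: one claim node linked under two different topic nodes.
claim = Node("Calibration training improves forecasts")
topic_a, topic_b = Node("Rationality techniques"), Node("Forecasting")
topic_a.link_child(claim)
topic_b.link_child(claim)
claim.edit("Calibration training measurably improves forecasts")
```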
EDIT: First cross-post: Personal Notes On Productivity (A categorization of various resources)
EDIT: I've edited the LW-wiki to make a list of LWers interested in making debate tools.
comment by Forux · 2015-07-03T11:31:48.530Z · LW(p) · GW(p)
Hello all =)
I have been reading LW for more than one year. I organized book club meetups about HPMOR in Kyiv, Ukraine in the past (https://vk.com/hpmor_meeting and https://vk.com/efficient_reading5).
Now I am starting the organization process for the first general LW meetup in Kyiv; our Google group: http://groups.google.com/d/forum/LessWrong-Kyiv
At the first meetup we will discuss Daniel Kahneman's "Thinking, Fast and Slow", in addition to discussing what we will do in the future =)
Please, if you can, give any useful suggestions about what the first meetup should cover and how it should be run (I have read the LW PDF about how to organize meetups).
Replies from: None, John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-07-05T15:18:28.398Z · LW(p) · GW(p)
Awesome! Note that you can advertise your meetup further using the LW meetup system.
Replies from: Forux↑ comment by Forux · 2015-07-08T11:03:53.800Z · LW(p) · GW(p)
Thanks,
http://lesswrong.com/meetups/1fd
Done =)
comment by rikisola · 2015-07-17T12:00:33.752Z · LW(p) · GW(p)
Hi all, I'm new. I've been browsing the forum for two weeks and only now have I come across this welcome thread, so nice to meet you! I'm quite interested in the control problem, mainly because it seems like a very critical thing to get right. My background is a PhD in structural engineering and developing my own HFT algorithms (which for the past few years have been my source of both revenue and spare time). So I'm completely new to all of the topics on the forum, but I'm loving the challenge. At the moment I don't have any karma points so I can't publish, which is probably a good thing given my ignorance, so may I post some doubts and questions here in the hope of being pointed in the right direction? Thanks in advance!
Replies from: John_Maxwell_IV, Lumifer↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-07-17T12:33:30.643Z · LW(p) · GW(p)
Hello and welcome! Don't be shy about posting; if you're a PhD making money with HFT, I think you are plenty qualified, and external perspectives can be very valuable. Posting in an open thread doesn't require any karma and will get you a much bigger audience than this welcome thread. (For maximum visibility you can post right after a thread's creation.)
Replies from: rikisola↑ comment by rikisola · 2015-07-17T13:14:35.511Z · LW(p) · GW(p)
Hi John, thanks for the encouragement. One thing that strikes me of this community is how most people make an effort to consider each other's point of view, it's a real indicator of a high level of reasonableness and intellectual honesty. I hope I can practice this too. Thanks for pointing me to the open threads, they are perfect for what I had in mind.
↑ comment by Lumifer · 2015-07-17T14:43:52.613Z · LW(p) · GW(p)
Do your algorithms require co-location and are sensitive to latency?
Replies from: rikisola↑ comment by rikisola · 2015-07-17T14:58:01.445Z · LW(p) · GW(p)
Hi Lumifer. Yes, to some extent. At the moment I don't have co-location so I minimized latency as much as possible in other ways and have to stick to the slower, less efficient markets. I'd like to eventually test them on larger markets but I know that without co-location (and maybe a good deal of extra smarts) I stand no chance.
comment by VivienneMarks · 2015-06-16T21:08:48.876Z · LW(p) · GW(p)
Finally bit the bullet and made an account-- hi people! I've been "LW adjacent" for a while now (meatspace friends with some prominent LWers, hang around Rationalist Tumblr/ Ozy's blog on the sidelines, seems like everyone I know has read HPMOR but me), and figured I ought to take the plunge.
Call me Vivs. I'm in my early twenties, currently doing odd jobs (temping, restaurant work, etc.) in preparation to start a Masters' this fall. I'm a historian, and would loooooove to talk history with any of you! (fans of Anne Boleyn/Thomas Cromwell/Victorian social peculiarities to the front of the line, please) I've always been that girl who pays waaaaay too much attention to if the magic system is internally consistent in a fantasy novel and gets overly irritated if my questions are brushed off with "But magic isn't real," so I have a feeling I'll like the way this site thinks, even if I'm way out of the median 'round these parts in a lot of ways.
Replies from: None, Nornagest, None↑ comment by [deleted] · 2015-06-18T09:11:54.594Z · LW(p) · GW(p)
Hi!
Victorian social peculiarities
I just want to say I found Stefan Zweig's The World of Yesterday really insightful about that. I used to think that kind of prudishness came from religion. According to Zweig, it was actually almost the opposite: it came from Enlightenment values, in the sense of trying really, really hard to always act rationally (not 100% in our sense, but in the sense of deliberately, thoughtfully, dispassionately), which treated sexual instincts as a far too dangerous, uncontrollable, passionate, "irrational" force. Which suggests that Freud was the last Victorian, so to speak.
Replies from: VivienneMarks↑ comment by VivienneMarks · 2015-06-20T15:44:52.394Z · LW(p) · GW(p)
Hi back!
Actually, interestingly, some Victorian prudishness was encouraged by Victorian feminists, weirdly enough. Old-timey sexism said that women were too lustful and oozed temptation, hence they should be excluded from the cool-headed realms of men (Arthurian legend is FULL of this shit, especially if Sir Galahad is involved). Victorian feminists actually encouraged the view of women as quasi-asexual, to show that no, having women in your university was not akin to inviting a gang of succubi to turn the school into an orgy pit (this was also useful because, back then, there were open questions about women's morality). A lot of modern sexism actually has its roots not in anything ancient, but in a weird backlash of Victoriana.
Replies from: Lumifer, Epictetus, VoiceOfRa↑ comment by Lumifer · 2015-06-20T15:53:27.232Z · LW(p) · GW(p)
having women in your university was not akin to inviting a gang of succubi to turn the school into an orgy pit
LOL. To quote Nobel laureate Tim Hunt as of a couple of weeks ago:
Let me tell you about my trouble with girls … three things happen when they are in the lab … You fall in love with them, they fall in love with you and when you criticise them, they cry.
Replies from: None, Sarunas, None, VivienneMarks
↑ comment by [deleted] · 2015-06-20T16:10:44.762Z · LW(p) · GW(p)
I found that particular piece of stupidity particularly amusing since my field is upwards of 55 percent female (at my level - the old guard of people who have been in it since the 60s or 70s is more male) and I have worked in labs where I was the only man.
↑ comment by Sarunas · 2015-06-29T17:21:11.971Z · LW(p) · GW(p)
This quote seems to have been intended as a joke and was taken out of context. A very flawed accuser: Investigation into the academic who hounded a Nobel Prize winning scientist out of his job reveals troubling questions about her testimony
↑ comment by [deleted] · 2015-06-24T14:57:13.852Z · LW(p) · GW(p)
One therefore wonders at man/man, woman/man and woman/woman troubles, which statistically should account for the majority of academic, er, troubles.
Replies from: Jiro↑ comment by Jiro · 2015-06-25T14:19:27.780Z · LW(p) · GW(p)
He's asserting that most troubles between men and women fall into a particular category. It might be that man/man troubles rarely fall into that category and, because most of that category is missing, are less numerous overall.
Replies from: None↑ comment by [deleted] · 2015-06-25T17:10:24.455Z · LW(p) · GW(p)
Well... Having once been infatuated with my supervisor and more than once reduced by him to tears even when my infatuation wore off, I can say this:
It's not people falling in love with people that really reduces group output. Being in love I worked like I would never do again.
It's people growing disappointed with people/goals, or having an actual life (my colleague quit her PhD when her husband lost his job, + they had a kid), or - God forbid! - competing for money. Now that's what I would call trouble.
Replies from: MaryCh, Creutzer↑ comment by Creutzer · 2015-07-02T09:10:43.183Z · LW(p) · GW(p)
Very good point! It's a ubiquitous stereotype, but it's not a priori clear to me that workplace romance leads to a net decrease in productivity, and I haven't seen real evidence for it. Google Scholar yielded nothing; it either ignores the search word "productivity" or just returns papers that repeat the cliché.
↑ comment by VivienneMarks · 2015-06-20T18:40:04.331Z · LW(p) · GW(p)
Uggghhhh.... that guy. I may not be a scientist, but I saw red when I read that.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-06-21T23:56:08.406Z · LW(p) · GW(p)
but I saw red when I read that.
Any particularly good reason for that, or just an irrational reaction?
Replies from: Gram_Stone↑ comment by Gram_Stone · 2015-06-22T01:38:09.580Z · LW(p) · GW(p)
One explicit argument in favor of excluding women from the workplace is thus: "There is currently a preponderance of men in the workplace. If there are men and women in the workplace, then there will be romance. If there is romance, then there will be reduced productivity. If there are more women in the workplace, then there will be less productivity. Therefore, there should not be more women in the workplace."
The obvious counterpoint is that this argument implicitly describes a world in which all women that would be excluded from the workplace could be replaced by men. In the case of global research effort, as opposed to particular research projects, it is highly doubtful that the possible marginal decrease in productivity of one additional pair of coworking lovers is of greater magnitude than the marginal increase in productivity of one additional researcher, regardless of their gender. That is, the purported benefit of preventing a bit of romance is not worth the cost of excluding half of the global intellectual elite from the research community.
More to the point are the connotations that are smuggled in when the explicit argument is not rehearsed as I have rehearsed it above, and more vague things are said like "You fall in love with them, they fall in love with you and when you criticise them, they cry." Here we go beyond the pragmatic consideration of productivity; we imply that workplace romance is not a thing that occurs in the presence of men and women, but that women are the sole causal origin of workplace romance. We imply that women are seductive people, by their very nature distracting; that women are people that are incapable of accepting criticism, a necessary skill in the task of research; that women have a Seductive, Whiny Essence that is antithetical to research. By this model, we might expect that a research institution composed entirely of women would be extensionally equivalent to a lesbian orgy-fight.
I would be royally pissed if someone attributed to me a Seductive, Whiny Essence; that would be a patently inaccurate statement to no virtuous end, and more, it would do my world harm. For me, it is natural to infer that people who are not seeking truth and who are acting against my interests are threats, and it is rational to experience powerful emotions when one perceives legitimate threats.
The pedestrian response to a claim like, "Women can't take criticism," would be to emphatically reply, "Women can take criticism!" I make this distinction particularly because I have seen women acknowledge that they have had problems adjusting to the climate of professions predominantly occupied by men. Perhaps the issue is more complex than the presence or absence of such a hypothetical Criticism-Taking Ability. More interestingly, we could ask questions like, "Can we make generalizations about how particular populations of people give and take criticism, and if so, how can we, and how do they?" It is a misstep to acknowledge individual and average differences and jump to barring women wholesale from research. Optimizing communication between male and female researchers is a better solution than excluding half of the intellectual elite.
Replies from: Jiro, VoiceOfRa↑ comment by Jiro · 2015-06-22T15:20:03.826Z · LW(p) · GW(p)
The obvious counterpoint is that this argument implicitly describes a world in which all women that would be excluded from the workplace could be replaced by men.
That isn't necessarily true, though. Imagine that an average woman has a net negative effect and an exceptional woman has a net positive effect. Then you should hold women to higher standards without excluding them completely.
I would be royally pissed if someone attributed to me a Seductive, Whiny Essence; that would be a patently inaccurate statement to no virtuous end, and more, it would do my world harm.
They're not attributing it to you specifically; they're attributing it as a statistical property to a class that includes you.
Replies from: Vaniver↑ comment by Vaniver · 2015-06-22T16:41:09.494Z · LW(p) · GW(p)
Then you should hold women to higher standards without excluding them completely.
So, historically this is somewhat similar to what happened--if a woman managed to get some sort of advocate / ally, she had limited access to the scientific / mathematical world as a scientist or mathematician. (Getting access to that world by hosting a salon was the typical path women would take, but that would put them much more in a support role.) But it had clear inefficiencies--Emmy Noether was unable to get a professorship in Germany, for example, and then even in America taught at a women's college, despite being a clearly first-rate mathematician.
One partial solution is to have some procedure by which Noether is declared an honorary man, and then gets full access. It is interesting to try to figure out how much of the distance this would cross between how things actually were and the ideal case, for varying strength of filtering. One can also imagine this smoothly progressing from a minor, reversible change to the ideal case, but because of that one can also see the slippery slope arguments that would have defeated it (since probably someone thought of this at the time).
(Whenever this subject comes up and we talk about people who managed to get around restrictions, it's probably also important to talk about people who couldn't get around those restrictions, not because of their ability but because of the economics or power dynamics involved. Noether had trouble getting paid for doing math, but it's cheap to do math; Jerrie Cobb could find jobs flying planes, but it's not cheap to go to space.)
↑ comment by VoiceOfRa · 2015-06-22T02:47:28.251Z · LW(p) · GW(p)
More to the point are the connotations that are smuggled in when the explicit argument is not rehearsed as I have rehearsed it above, and more vague things are said like "You fall in love with them, they fall in love with you and when you criticise them, they cry."
And you seem to be conflating two separate issues, romantic attraction, and ability to handle criticism.
The pedestrian response to a claim like, "Women can't take criticism," would be to emphatically reply, "Women can take criticism!"
The rational response would be to look into the matter to see which is in fact the case.
I make this distinction particularly because I have seen women acknowledge that they have had problems adjusting to the climate of professions predominantly occupied by men.
Well, your link is more like: a woman, after a lot of effort, improves her criticism-taking ability. And even then she would rather the culture change so that she doesn't have to exercise it. The question of whether criticism is necessary for the institution to achieve whatever its actual goals are is not addressed.
"Can we make generalizations about how particular populations of people give and take criticism, and if so, how can we, and how do they?"
Yes, if only there were a mathematical theory of how to update on that kind of evidence.
It is a misstep to acknowledge individual and average differences and jump to barring women wholesale from research.
How about not going out of our way to actively promote more women in research, e.g., with all the "women in science" programs?
Optimizing communication between male and female researchers is a better solution than excluding half of the intellectual elite.
Except women appear to make up less than half of the intellectual elite, largely because their IQs have a smaller standard deviation.
Replies from: Gram_Stone↑ comment by Gram_Stone · 2015-06-22T03:14:42.155Z · LW(p) · GW(p)
And you seem to be conflating two separate issues, romantic attraction, and ability to handle criticism.
Such is the apparent resolution of Tim Hunt's map of the territory.
The rational response would be to look into the matter to see which is in fact the case.
As I have already said, it is not a question of which is the case any more than the case of the unobserved falling tree is a question of sound or non-sound.
Well, your link is more like: a woman, after a lot of effort, improves her criticism-taking ability. And even then she would rather the culture change so that she doesn't have to exercise it. The question of whether criticism is necessary for the institution to achieve whatever its actual goals are is not addressed.
To represent my position or that comment's position as a claim that criticism is unnecessary in research or business is a straw man. You are not entertaining the possibility that women in general may adapt to the particular quality of criticism in predominantly male professions as that woman did, nor that a cultural change is possible, or furthermore, optimal.
Yes, if only there were a mathematical theory of how to update on that kind of evidence.
I use the word 'how' qualitatively, not quantitatively. I don't mean, "How well do women take criticism on a scale of positiveness?," I mean, "By what internal mechanisms and social norms do women tend to interpret and exchange criticism, and how is it different from the way men do?"
How about not going out of our way to actively promote more women in research, e.g., with all the "women in science" programs.
The problem with affirmative action is not encouraging minorities to participate in activities in which they have not historically participated; the problem with affirmative action is using a minority's membership test as the selection criterion as opposed to using the selection criteria that would maximize the intended effect of said activity. We can broadly encourage minorities to attempt to become researchers without letting people who are bad at research be researchers. Where affirmative action is suboptimal, 'negative action' is not the solution. It would be suboptimal for an NBA talent scout to exclude a seven-foot-tall white basketball player from consideration because he had precommitted to excluding white basketball players because being of African descent positively correlates more strongly with height than does being of European descent. It would be suboptimal for a senior researcher to exclude a woman with an IQ in the top 2% from consideration because he had precommitted to excluding women because having an IQ in the top 2% correlates more strongly with being male than with being female. Respecting the precommitment in those scenarios while keeping the ultimate size of your selected populations constant would not maximize the average height or IQ of your selected population, and the original purpose of the precommitments was to maximize the average height or IQ of your selected population; the precommitment in each case is a lost purpose. If you use race or gender to inductively infer someone's basketball-playing or research ability without using height or IQ when those data are available, then you are failing to update.
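As a rough numerical illustration of that last point (a sketch with invented group labels, effect sizes, and noise levels, none of which come from the thread), compare selecting candidates by a directly measured score with first excluding one group wholesale and then selecting by the same score:

```python
# Sketch with made-up numbers: even granting a (purely hypothetical) small average
# difference between groups, a blanket "no group B" rule selects a weaker pool than
# simply ranking everyone by the directly measured score.
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 100
group_b = rng.random(n) < 0.5                 # half the applicant pool is group B
ability = rng.normal(-0.2 * group_b, 1.0)     # hypothetical small mean difference
score = ability + rng.normal(0.0, 0.5, n)     # what the selector actually measures

top_by_score = np.argsort(score)[-k:]         # rank everyone by score
group_a_only = np.where(~group_b)[0]          # blanket exclusion of group B
top_after_exclusion = group_a_only[np.argsort(score[group_a_only])[-k:]]

print("mean ability, rank by score:     ", round(ability[top_by_score].mean(), 3))
print("mean ability, exclude B by rule: ", round(ability[top_after_exclusion].mean(), 3))
# Ranking by score comes out ahead here: the blanket rule throws away high-scoring
# group-B applicants even though the direct measurement is already in hand.
```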
Except women appear to make up less than half of the intellectual elite, largely because their IQs have a smaller standard deviation.
Deary et al. (2006) estimate that ~1/3 of the top 2% of the population in IQ is female. 46.2 million people is non-negligible, to say the least.
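For the arithmetic behind that last figure (assuming a world population of roughly 6.9 billion, about the 2006 value; the comment itself doesn't state the population used):

```python
world_population = 6.9e9                   # assumed, roughly the 2006 figure
top_two_percent = 0.02 * world_population  # ~138 million people
female_share = 1 / 3                       # the Deary et al. estimate cited above
print(female_share * top_two_percent)      # ~4.6e7, i.e. about 46 million women
```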
Replies from: ChristianKl, VoiceOfRa↑ comment by ChristianKl · 2015-06-22T12:14:00.861Z · LW(p) · GW(p)
Such is the apparent resolution of Tim Hunt's map of the territory.
Just because I can quote a 37-word paragraph of someone's speech doesn't mean that I accurately model the resolution of that person's map of the territory.
Assuming that you can infer the full position of a person from a short excerpt is wrong. Twitter culture, where people think that everything boils down to short excerpts, is deeply troubling.
Replies from: Gram_Stone↑ comment by Gram_Stone · 2015-06-22T20:57:46.020Z · LW(p) · GW(p)
Just because I can quote a 37-word paragraph of someone's speech doesn't mean that I accurately model the resolution of that person's map of the territory.
Assuming that you can infer the full position of a person from a short excerpt is wrong. Twitter culture, where people think that everything boils down to short excerpts, is deeply troubling.
I agree. It was really just a pithy way of saying that he has a naive conception of this particular issue.
Replies from: Vaniver↑ comment by Vaniver · 2015-06-22T21:49:12.283Z · LW(p) · GW(p)
naive conception
Note that ChristianKl's objection is that a short comment is at best a crude snapshot of someone's mind; it seems rash to make conclusions about Hunt's conceptions from just that comment.
Replies from: Gram_Stone↑ comment by Gram_Stone · 2015-06-22T21:59:30.338Z · LW(p) · GW(p)
I agree that that's true in general, but on the other hand, when someone gives a weak argument for theism, I don't regard it as rash to disregard their opinion on the matter. I can have little or no knowledge of the internal process while only observing the outcome and, because I have confidence about what sort of outcomes good processes produce, infer that whatever process he is using, it is probably not one that draws an accurate map in this particular region.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-06-22T22:30:35.425Z · LW(p) · GW(p)
I can have little or no knowledge of the internal process while only observing the outcome
To assess the outcome you would need to know the context in which the paragraph stands. You would need to listen to the speech. Having high confidence that a 37-word excerpt is enough to judge the quality of someone's thinking is a reasoning error.
I agree that that's true in general, but on the other hand, when someone gives a weak argument for theism
If I read through the writings and speeches of Richard Dawkins, I am highly confident that I can find a short paragraph where Dawkins makes a weak argument against theism. That's no proof that Dawkins has a naive conception of the debate about theism.
The words that follow "they cry" in the speech are "now seriously". He verbally tagged it as a joke. Making a bad joke isn't proof of a naive conception of an issue.
Replies from: Gram_Stone↑ comment by Gram_Stone · 2015-06-22T23:10:05.318Z · LW(p) · GW(p)
Okay, I agree.
↑ comment by VoiceOfRa · 2015-06-22T05:51:18.400Z · LW(p) · GW(p)
As I have already said, it [whether women can take criticism] is not a question of which is the case any more than the case of the unobserved falling tree is a question of sound or non-sound.
What do you mean by this? Certainly it may be necessary to make the question more precise, but the question certainly talks about things that correspond to reality.
To represent my position or that comment's position as a claim that criticism is unnecessary in research or business is a straw man. You are not entertaining the possibility that women in general may adapt to the particular quality of criticism in predominantly male professions as that woman did, nor that a cultural change is possible, or furthermore, optimal.
You may want to reread the comment:
Once my context was realigned... well, I can't say it was easy [emphasis mine], but at least I realised that it was "me, not you".
So what's her conclusion about the possibility of women in general adapting to "masculine culture", i.e., a culture of criticism?
LW has the near-unique trait of being a bunch of people who are actively trying to change... therefore it's entirely possible that we can avoid the at-first-blush-alienating-to-the-majority-of-women approach that is common in other masculine-only cultures.
There's nothing wrong with the masculine culture. But it isn't the only way we could be.
There should be room for all of us. :)
In other words, she's arguing that most women won't be able to adapt and that to be truly inclusive of women the culture would have to change.
It would be suboptimal for an NBA talent scout to exclude a seven-foot-tall white basketball player from consideration because he had precommitted to excluding white basketball players because being of African descent correlates more strongly with height than does being of European descent.
It would also be suboptimal for said scout to spend too much time in white neighborhoods looking for seven-foot-tall white basketball players, and a precommitment to doing so would also be a lost purpose.
Replies from: None, Gram_Stone↑ comment by [deleted] · 2015-06-23T16:33:20.073Z · LW(p) · GW(p)
An important point: I have personally observed particular members of the old male guard in cell biology reliably and repeatedly applying much harsher, unreasonable standards to their female colleagues and students than to their male colleagues. (EDIT: it sure isn't everyone or even a plurality, but it sure is a visible pattern)
This leads to women (who are over half my field at the PhD level) leaving their associations with these men and staying the heck away, as anyone would when being criticized and judged unfairly.
Thankfully I can say this is becoming much less common as the field turns over; there are more options and sane colleagues available now, so female scientists much less often have to either put up with unreasonable behavior or leave altogether.
↑ comment by Gram_Stone · 2015-06-22T06:43:44.390Z · LW(p) · GW(p)
What do you mean by this? Certainly it may be necessary to make the question more precise, but the question certainly talks about things that correspond to reality.
Let us taboo 'criticism'. Here's one definition: "the expression of disapproval of someone or something based on perceived faults or mistakes." Surely the members of predominantly or entirely female social groups have some way of expressing disapproval and updating on it, as in all social groups; the problem is that it's different from the way that men do, and that when women find themselves in predominantly male professions, they're unfamiliar with the culture and misinterpret social signals that express disapproval by interpreting them in terms of the cultures to which they are accustomed. To speak of an innate Criticism-Taking Ability as the sole causal factor is a lossy compression that prevents you from imagining ways that you might improve the outcomes of situations in which criticism is exchanged. The question is then "How can we improve the outcomes of situations in which criticism is exchanged?"
In other words, she's arguing that most women won't be able to adept and that to be truly inclusive of women the culture would have to change.
She's arguing that the LW culture would probably be more amenable to altering itself for the sake of including women than most other cultures, not that women in the LW culture would be more amenable to altering themselves. It's true that women in the LW culture would probably be more amenable to altering themselves, but she wasn't arguing, and we can't say, that her example is strong evidence of how well an arbitrary woman will adapt to an arbitrary culture, or how well an arbitrary culture with a paucity of women will adapt to more women. At any rate, I only brought this comment up in my first comment in order to provide an example of the dissonance between male and female social norms for exchanging criticism, so this isn't really important.
It would also be suboptimal for said scout to spend too much time in white neighborhoods looking for seven-foot-tall white basketball players, and a precommitment to doing so would also be a lost purpose.
You're breaking the analogy. Random neighborhoods of people have not already been selected for height. (This is why talent scouts don't actually scout door-to-door. Beware when you find yourself arguing that a policy has some benefit compared to the null action, rather than the best benefit of any action.) If researchers are coming to you with résumés containing data relevant to their research ability, or if you're searching graduate programs for potential researchers, then IQ and research ability have already been selected for. You're using race and gender as proxies for height and IQ; you throw away proxies when you have the real deal. This is about excluding women a priori. It's simple: if you have a perfectly good female researcher in front of you, and you don't hire her because you have a sign that says "No girls allowed," then your sign is stupid.
Replies from: Jiro↑ comment by Jiro · 2015-06-22T15:11:32.680Z · LW(p) · GW(p)
Even assuming that IQ is completely correlated with ability to do the job, your pool of applicants that is "already selected for IQ" is not selected with 100% accuracy. In other words, being in the pool is a proxy for IQ. You're better off using two proxies than one (unless the effect of one proxy is nil when conditioned on the other proxy)--if you want to maximize the applicant quality, you should pick people from the pool, but prefer men when comparing two applicants who both are in the pool.
The basketball example here is bad because you actually can measure someone's height. If you can measure it you don't need proxies for it. You're not going to measure "ability to do the job" without using proxies and even if IQ is completely correlated with it, you're not going to measure IQ without using proxies either.
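A rough way to check the two-proxies intuition numerically (all parameters below are invented for illustration, not taken from the thread): a coarse group label can add a sliver of predictive power on top of a noisy score, and that sliver shrinks toward zero as the score becomes more informative.

```python
# Sketch with made-up numbers: a latent "ability" drives a noisy screening score,
# and a 0/1 group label shifts its mean slightly. Compare predicting ability from
# the score alone vs. from the score plus the label.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
group = rng.integers(0, 2, n)               # coarse 0/1 proxy (e.g. pool membership)
ability = rng.normal(0.3 * group, 1.0)      # hypothetical small mean shift by group
score = ability + rng.normal(0.0, 1.0, n)   # noisy, more direct measurement

def r_squared(predictors, target):
    X = np.column_stack([np.ones(n)] + predictors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    residuals = target - X @ beta
    return 1.0 - residuals.var() / target.var()

print("score only:    R^2 =", round(r_squared([score], ability), 4))
print("score + group: R^2 =", round(r_squared([score, group], ability), 4))
# The second R^2 comes out slightly higher: the label still carries a little
# information after conditioning on the score. Shrink the score noise and the
# gain vanishes, which is the "nil when conditioned on the other proxy" case.
```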
Replies from: Good_Burning_Plastic, Jiro↑ comment by Good_Burning_Plastic · 2015-06-23T08:02:31.772Z · LW(p) · GW(p)
Height is a proxy for basketball ability and the résumé is a proxy for research ability. It is possible that the latter is a worse proxy than the former, but it seems unlikely that it is much worse.
(ETA: Especially given that the latter enables you to go to arXiv and look at the applicant's actual research output so far, whereas knowing how tall someone is doesn't enable you to watch all the basketball matches they have played in.)
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-01T02:11:01.934Z · LW(p) · GW(p)
The problem is that height is a number, whereas it's hard to translate a résumé into a number without resorting to various gameable metrics.
Replies from: Good_Burning_Plastic↑ comment by Good_Burning_Plastic · 2015-07-03T13:10:02.340Z · LW(p) · GW(p)
So what?
Replies from: VoiceOfRa↑ comment by Jiro · 2015-06-22T19:06:25.939Z · LW(p) · GW(p)
Is there some reason why this was modded down aside from political incorrectness?
Replies from: Lumifer↑ comment by Lumifer · 2015-06-22T19:33:03.052Z · LW(p) · GW(p)
I didn't downvote, but your second paragraph has a problem in that in the basketball example height is only a proxy for the ability to play basketball.
The first paragraph is a bit iffy, too, because proxies have different effectiveness or usefulness. By the time you're estimating someone's ability to do the job on the basis of a resume, the male/female proxy becomes basically insignificant.
In any case, I think that the better language here is that of priors. It's perfectly fine to have different priors for male job applicants than for female job applicants, but once evidence starts coming in, the priors become less and less important.
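A quick sketch of how fast priors get swamped (the pass probabilities and the evidence sequence below are invented purely for illustration):

```python
# Two hirers start from different priors that an applicant is competent, then both
# see the same work samples. A competent applicant passes each sample with
# probability 0.8, an incompetent one with probability 0.3 (made-up numbers).

def update(prior, passed, p_if_competent=0.8, p_if_incompetent=0.3):
    """One step of Bayes' rule on a single pass/fail observation."""
    like = p_if_competent if passed else 1 - p_if_competent
    alt_like = p_if_incompetent if passed else 1 - p_if_incompetent
    return like * prior / (like * prior + alt_like * (1 - prior))

evidence = [True, True, False, True, True, True]   # mostly good work samples
for prior in (0.3, 0.7):                           # two quite different priors
    p = prior
    for passed in evidence:
        p = update(p, passed)
    print(f"prior {prior:.1f} -> posterior {p:.2f}")
# Both posteriors land close together (~0.94 vs ~0.99): the shared evidence soon
# matters far more than where each hirer started.
```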
Replies from: Jiro↑ comment by Jiro · 2015-06-22T19:56:29.095Z · LW(p) · GW(p)
in the basketball example height is only a proxy for the ability to play basketball.
IQ is a proxy for the ability to do well in a job, too. I was ignoring that, so in the analogy I would have to ignore that for height and ability to play basketball.
proxies have different effectiveness or usefulness
Arguing "the other proxy is so much better that we don't need the original one" is not the same as "the other proxy is better, and that's all we need to know", and is even farther from "we can measure it so we don't need a proxy at all".
↑ comment by Epictetus · 2015-06-20T18:52:08.397Z · LW(p) · GW(p)
Actually, interestingly, some Victorian prudishness was encouraged by Victorian feminists, weirdly enough.
Feminists of that era were practically moral guardians. In the USA, they allied closely with the temperance movements and won the double victory of securing women's right to vote and prohibiting alcohol.
Old-timey sexism said that women were too lustful and oozed temptation, hence they should be excluded from the cool-headed realms of men
I can't track the reference right now, but I recall reading a transcript of a Parliamentary debate where they decided not to extend anti-homosexuality legislation to women on the grounds that women couldn't help themselves.
↑ comment by VoiceOfRa · 2015-06-22T02:27:07.842Z · LW(p) · GW(p)
Actually, interestingly, some Victorian prudishness was encouraged by Victorian feminists, weirdly enough. Old-timey sexism said that women were too lustful and oozed temptation, hence they should be excluded from the cool-headed realms of men (Arthurian legend is FULL of this shit, especially if Sir Galahad is involved). Victorian feminists actually encouraged the view of women as quasi-asexual, to show that no, having women in your university was not akin to inviting a gang of succubi to turn the school into an orgy pit (this was also useful because, back then, there were open questions about women's morality).
Note that neither you nor the Victorian feminists appear at all interested in the truth-status of the claims involved, merely their implications for the social status of women.
Replies from: Good_Burning_Plastic↑ comment by Good_Burning_Plastic · 2015-06-23T11:13:04.657Z · LW(p) · GW(p)
Yes, when a statement is clearly intended as a boo light its factual accuracy is not the most relevant thing.
↑ comment by Nornagest · 2015-06-16T21:10:43.601Z · LW(p) · GW(p)
Welcome to LW! I suspect you'll find a lot of company here, at least as regards thinking in unwarranted detail about fictional magic systems.
Replies from: Lumifer, VivienneMarks↑ comment by VivienneMarks · 2015-06-16T23:30:38.353Z · LW(p) · GW(p)
Thanks! I actually had a VERY long side discussion in an undergrad history course about whether stabbing a person possessed by a dybbuk creates a second dybbuk...
↑ comment by [deleted] · 2015-06-18T09:35:32.037Z · LW(p) · GW(p)
whether the magic system in a fantasy novel is internally consistent
Do you find D&D's cast-and-forget system consistent? It was borrowed from Jack Vance's Dying Earth novels, but those novels felt really weird to me.
Replies from: VivienneMarks↑ comment by VivienneMarks · 2015-06-20T15:40:25.666Z · LW(p) · GW(p)
No! I actually find D&D's system super-frustrating, but then I hate having luck-based elements in magic systems. :P
comment by [deleted] · 2015-03-20T21:03:17.555Z · LW(p) · GW(p)
Wow, I'm so glad I stumbled onto slatestarcodex, and from there, here!!! You guys are all like smarter, cooler versions of me! It's great to have a label for the way my brain is naturally wired and know there other people in the world besides Peter Singer who think similarly. I'm really excited, so my "intro" might get a little long...
Part 1-Look at me, I'm just like you!
I'm Ellen, a 22-year-old Spanish major and world-traveling nanny from Wisconsin, so maybe not your typical LWer, but actually quite typical in other, more important ways. :)
I grew up in a Christian home/bubble, was super religious (Wisconsin Evangelical Lutheran Synod), truly respected/admired the Christians in my life, but even while believing, never liked what I believed. I actually just shared my story plus some interesting studies on correlations between personality, intelligence, and religiosity, if anyone is interested: http://magicalbananatree.blogspot.com/2015/02/christian-friends-do-you-ever-feel.html The post is based almost entirely on what I've come to learn is called "consequentialism" which I'm happy to see is pretty popular over here. I subscribe to this line of thinking so much that I used to pray for a calamity to strengthen my faith. I chose a small Lutheran school despite having great credentials to get into an Ivy, because with an eye on eternity, I wanted to avoid any environment that would foster doubt. My friends suggested I become a missionary, but to me, it made far more sense to become a high profile lawyer and donate 90% of my salary to fund a dozen other missionaries. (A Christian version of effective altruism?) No one ever understood!
Some people might deconvert because they can't believe in miracles, or they can't get over the problem of evil. These are bad reasons, I think, and based on the presupposition that God doesn't exist. Personally, the hardest thing for me was believing that God was all-powerful. Like, if God were portrayed as good, but weak, struggling against an evil god and just doing the best he could to make a just universe and make his existence known, I probably would never have left the faith. It took me long enough as it is!
Part 2-A noob atheist's plea for help
Anyway, now I've "cleared my mind" of all that and am starting fresh, but my friends have a lot of questions for me that I'm not able to answer yet, and I have a lot of my own, too. I'm starting by reading about science (not once had I even been exposed to evolution!) but have a lot of other concerns on the back burner, and maybe you guys can point me in the right direction:
Who was the historical Jesus? As a history source, why is the Bible unreliable?
How can I have morality?? Do I just have to rely on intuition? If the whole world relied on reason alone to make decisions, couldn't we rationalize a LOT of things that we intuit as wrong?
Does atheism necessarily lead to nihilism? (I think so, in the grand scheme of things? But the world/our species means something to us, and that's enough, right?)
What about all the really smart people I know and respect, like my sister and Grandma, who have had their share of doubts but ultimately credit their faith to having experienced extraordinary, miraculous answers to prayer? Like obviously, their experiences don't convince ME to believe, but I hate to dismiss them as delusional and call it a wild coincidence...
Are rationalists just as guilty of circular reasoning as Christians are? (Why do I trust human reason? My human reason tells me it's great. Why do Christians trust God? The Bible tells them he's great.)
Part 3-Embarrassingly enthusiastic fan mail
Yay curiosity! Yay strategic thinking! Yay honesty! Yay open-mindedness! Yay opportunity cost analyses! Yay common sense! Yay tolerance of ambiguity! Yay utilitarianism! Yay acknowledging inconsistency in following utilitarianism! Yay intelligence! Yay every single slatestarcodex post! Yay self-improvement! Yay others-improvement! Yay effective altruism!
Ahhh this is all so cool! You guys are so cool. I can't wait to read the sequences and more posts around this site! Maybe someday I'll even meet a real life rationalist or two, it seems like the Bay Area has a lot. :)
Replies from: Alicorn, adamzerner, JohnBuridan, hairyfigment, orthonormal, Gunnar_Zarncke, Viliam_Bur, Gram_Stone↑ comment by Alicorn · 2015-03-20T21:11:52.941Z · LW(p) · GW(p)
Maybe someday I'll even meet a real life rationalist or two, it seems like the Bay Area has a lot. :)
There's now a portal into the meatspace Bay rationalist community if this is something you're interested in.
Replies from: None↑ comment by Adam Zerner (adamzerner) · 2015-03-21T01:21:27.617Z · LW(p) · GW(p)
My friends suggested I become a missionary, but to me, it made far more sense to become a high profile lawyer and donate 90% of my salary to fund a dozen other missionaries. (A Christian version of effective altruism?) No one ever understood!
That is awesome!
I'm starting by reading about science
If you haven't heard of HPMOR, check it out here. Anyway, there's this great sequence where Harry teaches the ways of science to Draco Malfoy... it's great! And I think very worthwhile for a beginner to read.
How can I have morality?? Do I just have to rely on intuition? If the whole world relied on reason alone to make decisions, couldn't we rationalize a LOT of things that we intuit as wrong? + other things you mention
Eliezer talks about a lot of this in the Metaethics Sequence, particularly in the post Where Recursive Justification Hits Bottom.
If you haven't already heard of it, check out the idea of terminal values. Something tells me that you understand it (at least on some level) though. Anyway, Eliezer seems to say something about Occam's Razor justifying our intuitive feelings about what's moral. Personally, I don't really get it. I don't see how a terminal value could ever be rational. My understanding is that rationality is about achieving terminal values, not choosing them. However, I notice confusion and don't have strong opinions.
Ahhh this is all so cool! You guys are so cool. I can't wait to read the sequences and more posts around this site!
Welcome :) LessWrong has had a huge positive impact on my life. I hope and suspect that the same will be true for you!
Replies from: None, hairyfigment↑ comment by [deleted] · 2015-03-24T04:10:24.125Z · LW(p) · GW(p)
Thanks for the welcome!!
I just read Where Recursive Justification Hits Bottom, and it was perfect and super relevant, thanks. "What else could I possibly use? Indeed, no matter what I did with this dilemma, it would be me doing it. Even if I trusted something else... it would be my own decision to trust it." This is basically what I've been telling people who ask me how I can trust my own reason, but it's great to have more good points to bring up. All the posts I've read so far have been so clear and well-written, I can't help but smile and nod as I go.
I'm going to start with the e-book, and once I finish that, I'll probably look into HPMOR! I've seen it mentioned a lot around here, so I figure it must be great, but um, should I read the original Harry Potter first? Growing up, I was never allowed to.
I clicked the terminal values link, and then another link, and then another, and then another... then I googled what Occam's razor is... my questions about morality are still far from settled, but all this gives me a lot to think about, so thank you :)
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2015-03-30T03:31:45.816Z · LW(p) · GW(p)
Sorry for the late reply. Glad to be of assistance!
I'm going to start with the e-book, and once I finish that, I'll probably look into HPMOR!
That seems reasonable. A thought of mine on the sequences: they can be a bit dense and difficult to understand at times. I think some version of the 20/80 rule applies, and I'd approach the reading with this in mind. In other words, there's a lot of material, much of it requires real thought, and a proper reading would probably take many months; true understanding would probably take years. However, there are still a lot of really important core principles that you could pick up in a couple of weeks.
should I read the original Harry Potter first?
https://news.ycombinator.com/item?id=9203769
Personally, I think that knowing the gist of the story is sufficient.
I saw some of your other comments and see that you still have a lot of questions and are a bit hesitant to post here before doing more reading. I think that people will be very receptive to any sort of comments and questions as long as you're open minded and curious. And if you ever just don't want to say something publicly, feel free to message me privately.
Replies from: None↑ comment by [deleted] · 2015-04-03T21:49:54.780Z · LW(p) · GW(p)
Thanks! I'm 30% through now. I've really been enjoying them so far, going back to reread certain chapters and recommending others like crazy based on conversations about similar but far less articulate thoughts I've had in the past. Even without knowing much about the content of HPMOR, I'm looking forward to it already just for its having been written by the same author.
Thanks for your offer, I will probably take you up on it some day! Although you're right that people here seem pretty receptive to honest questions. I asked a question in another thread a few days ago, about ambition vs. hedonism, an issue I've always wondered about...no replies so far, but I did get some "karma" so that felt nice, haha :)
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2015-04-04T00:46:10.123Z · LW(p) · GW(p)
30% through the Rationality book?!! WOW!
I responded to your comment about ambition vs. hedonism.
↑ comment by hairyfigment · 2015-03-21T17:05:41.669Z · LW(p) · GW(p)
Occam's Razor justifying our intuitive feelings about what's moral
Wait, what? Do you mean Simplified Humanism? I hope that's more of a description than a full argument. One could perhaps turn it into an argument by showing that our root values come from evolution - causally, not in the sense of moral reasons - and making a case that you would not expect them to have exceptions in those exact places.
Eliezer also makes a brief attempt to explain his opponents' motives. This may be true, but I don't think we should dwell on it.
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2015-03-21T17:43:59.671Z · LW(p) · GW(p)
Honestly I don't really know.
↑ comment by JohnBuridan · 2015-03-21T20:08:28.948Z · LW(p) · GW(p)
Hi els!
I just wanted to welcome you and perhaps start a discussion. I have lurked around the Less Wrong boards for years (three, I think, recently made a new account because I forgot my username) and there is a lot of helpful and exciting discussion going on here and so long as you communicate clearly even dissenting opinions are valued.
You came from the jean-skirt Lutherans. I too came from a bubble, and I know it can be tough to find people around whom you feel comfortable talking about big questions like religion, metaphysics, truth, and logic. But I believe once you start looking, you will find people who are curious about the world and want to increase their quality of life and mind too!
I don't think atheism leads to nihilism. An atheist doesn't have to be a strict materialist! For example, logic probably exists as part of the universe's fabric whether or not humans are thinking or even exist. Yet logic is not made of brain matter or any material. It is mind-independent. So are all the qualities that help people achieve their goals, such as courage, perseverance, honest self-reflection, charity, or whatever else. These are part of the human universe, even though they aren't essentially made of stuff. Well that's my perspective. And I, like the other guys and gals here, am always up to discuss these topics further and try to deepen our understanding and practice of rationality.
Hope you enjoy hanging around LW!
Cheers!
Replies from: None↑ comment by [deleted] · 2015-03-24T06:28:21.601Z · LW(p) · GW(p)
Thanks for the welcome! :) You're right, so many great conversations taking place here! I feel like I'm going to be doing a LOT more reading before I really post anywhere else, but I look forward to lurking too.
I guess when I think about nihilism, I don't necessarily think about strict materialism. That's an interesting point about logic being mind-independent though. I guess I just think about the simple definition of nihilism as meaninglessness. All my life, the "meaning" of life had come from Jesus, which in my mind, meant a relationship with God and eternity in heaven. Now, there's no afterlife. Is there still meaning? Do I even care what happens after I die? I think I do, but why? I could just go out and do more good than bad and enjoy my meaningless days under the sun; is it really worth the mental energy to think about all this stuff, and if so, why? I'm realizing one thing people love about Christianity is how easy it is, once you can get past the whole childlike faith thing.
↑ comment by hairyfigment · 2015-03-23T19:54:07.359Z · LW(p) · GW(p)
the problem of evil. These are bad reasons, I think, and based on the presupposition that God doesn't exist. Personally, the hardest thing for me was believing that God was all-powerful. Like, if God were portrayed as good, but weak, struggling against an evil god and just doing the best he could to make a just universe and make his existence known, I probably would never have left the faith.
This puzzled me, since it sounds a lot like the problem of evil. I take it you were describing the argument you lay out at the link?
For completeness - since I'm about to bash Christianity - I should note that Paul does not write like he has even an imagined revelation on the subject of Hell. He writes as if people in the Roman Empire often talked about everyone going to Hades when they died, and therefore he could count on people receiving as "good news" the claim that belief in Jesus would definitely send you to Heaven. (Later, the Gospels implied that your actions could send you to Heaven or Hell regardless of what you believed. Early Christians might have split the difference by reserving baptism for those they saw as living a 'Christian' life.) Clearly one can be a Christian in Paul's sense without believing in Hell.
Who was the historical Jesus?
We don't know. I have some qualms about Richard Carrier's argument (eg in On the historicity of Jesus: Why we might have reason for doubt). But plugging different numbers into his calculations, I come out with no more than a 54% chance Jesus even existed. We can't answer every factual question; some information is almost certainly lost to us forever.
As a history source, why is the Bible unreliable?
This one seems fundamental enough that if people insist on the truth of miracles - and reports that you can move mountains if you have faith the size of a mustard seed - I don't know what to tell them. But besides directing people to mainstream scholarship (which by the way places the date of Mark after the destruction of the Temple), I can note that Mark inter-cuts the story of the fig tree with Jesus expelling the money-lenders from the Temple. The tree seems like a straightforward metaphor. Then we have later Gospels openly changing the narrative for their own purposes. Mark says Jesus could give no sign to those who did not believe, and they would not have believed (says Jesus in a parable) even if some guy named Lazarus had returned from the dead. John says Jesus performed signs all the time, and as you would expect this led many people to believe in him, especially when he brought Lazarus back from the dead. Though the resurrected disciple who Jesus loved disappears from the narrative after the period John depicts, and even Acts shows no awareness of this important witness.
How can I have morality?? Do I just have to rely on intuition? If the whole world relied on reason alone to make decisions, couldn't we rationalize a LOT of things that we intuit as wrong?
If you want to have morality, you can just do it. By this I mean that any function assigning utility to outcomes in a physically meaningful way appears consistent. But yes, I've come to agree that simple utility functions like maximizing pleasure in the Universe technically fail to capture what I would call moral. For more practical advice, see a lot of this site and perhaps the CFAR link at the top of the page.
Does atheism necessarily lead to nihilism?
This depends. I would normally use the term "nihilism" to mean a uniform utility function, which does not distinguish between actions. This is equivalent to assigning every outcome zero utility. As the previous link shows, plenty of non-uniform utility functions can exist whether Yahweh does or not.
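For what it's worth, the equivalence is a one-line expected-utility calculation (standard notation, not anything stated in the comment): if U(o) = c for every outcome o, then for any action a,

$$E[U \mid a] = \sum_{o} P(o \mid a)\,U(o) = c \sum_{o} P(o \mid a) = c,$$

so every action gets the same expected utility and none is preferred; and because expected-utility preferences are unchanged by adding a constant to U, the constant function U ≡ c encodes exactly the same (empty) set of preferences as U ≡ 0.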
If you mean the lack of a moral authority you can trust absolutely, or that will force you to behave morally, then I would basically say yes. There is no authority anywhere.
What about all the really smart people I know and respect, like my sister and Grandma, who have had their share of doubts but ultimately credit their faith to having experienced extraordinary, miraculous answers to prayer?
Do they seem smarter and more worthy of respect than Gandhi? Perhaps he's not the best example, but putting him next to the many people from non-Christian religions who have made similar claims to religious experience may get the point across. (Aleister Crowley made a detailed study of mystical experience and how to produce it, but you may find him abrasive at best.)
Are rationalists just as guilty of circular reasoning as Christians are?
That also depends on what you mean.
Replies from: None↑ comment by [deleted] · 2015-03-24T07:27:04.374Z · LW(p) · GW(p)
Oh, oops, I can see why that would be puzzling. But yeah, you figured it out. Do you really think my link was an argument though? A lot of people have accused me of trying to deconvert my friends, but I really don't think I was making an argument so much as sharing my own personal thoughts and journey of what led me away from the faith.
You correctly point out that not all Christians believe in hell, but I didn't want to just tweak my belief until I liked it. If I was going to reject what I grew up with, I figured I might as well start with a totally clean slate.
I'm really glad you and other atheists on here have bothered looking into Historical Jesus. Atheists have a stereotype of being ignorant about this, which actually, for those who weren't raised Christians, I kind of understand, since now that I consider myself atheist, it's not like I'm suddenly going to become an expert on all the other religions just so I can thoughtfully reject them. But now that my friends have failed to convince me atheism is hopeless, they're insisting it's hallucinogenic, that atheists are out of touch with reality, and it's nice (though unsurprising) to see that isn't the case.
Okay, I know that I personally can have morality, no problem! But are you trying to say it's not just intuition? Or if I use that Von Neumann–Morgenstern utility theorem you linked (I'm a little confused; maybe you can simplify for me), whose preferences would I be valuing? Only my own? Everyone's equally? If I value everyone's equally and say each human is born with equal intrinsic value, that's back to intuition again, right? Anyway, yeah, I'll look around and maybe check out CFAR too if you think that would be useful.
Oh! I like that definition of nihilism, thanks. Personally, I think I could actually tolerate accepting nihilism defined as meaninglessness (whatever that means), but since most people I know wouldn't, your definition will come in handy.
Also, good point about Gandhi. I had actually planned on researching whether people from other religions claimed to have answered prayers like Christians do, but bringing up the other alleged "religious experiences" of people of other faiths seems like a good start for when my sister and I talk about this. Now I'm curious about Crowley too. I almost never really get offended, so even if he is abrasive, I'm sure I can focus on the facts and pick out a few things to share, even if I wouldn't share him directly.
Thanks for your reply! Hopefully you can follow this easily enough; next time I'll add in quotes like you did...
Replies from: hairyfigment↑ comment by hairyfigment · 2015-03-25T19:58:54.803Z · LW(p) · GW(p)
The theorem shows that if one adopts a simple utility function - or let's say if an Artificial Intelligence has as its goal maximizing the computing power in existence, even if that means killing us and using us for parts - this yields a consistent set of preferences. It doesn't seem like we could argue the AI into adopting a different goal unless that (implausibly) served the original goal better than just working at it directly. We could picture the AI as a physical process that first calculates the expected value of various actions in terms of computing power (this would have to be approximate, but we've found approximations very useful in practical contexts) and then automatically takes the action with the highest calculated expected value.
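A minimal sketch of that picture in code (my own toy example: the action names, outcomes, probabilities, and utilities are all invented, and a real system would be vastly more complicated):

```python
# Toy expected-utility maximizer: given a model P(outcome | action) and a utility
# function over outcomes, compute each action's expected utility and take the max.

def expected_utility(action, outcome_probs, utility):
    """Sum of P(outcome | action) * U(outcome) for one action."""
    return sum(p * utility[outcome] for outcome, p in outcome_probs[action].items())

def choose_action(outcome_probs, utility):
    """Return the action with the highest expected utility."""
    return max(outcome_probs, key=lambda a: expected_utility(a, outcome_probs, utility))

# An agent that only values acquiring computing power (made-up numbers):
utility = {"more_compute": 1.0, "same_compute": 0.0}
outcome_probs = {
    "build_datacenter": {"more_compute": 0.9, "same_compute": 0.1},
    "do_nothing":       {"more_compute": 0.0, "same_compute": 1.0},
}
print(choose_action(outcome_probs, utility))  # -> build_datacenter
```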
Now in a sense, this shows your problem has no solution. We have no apparent way to argue morality into an agent that doesn't already have it, on some level. In fact this appears mathematically impossible. (Also, the Universe does not love you and will kill you if the math of physics happens to work out that way.)
But if you already have moral preferences, there shouldn't be any way to argue you out of them by showing the non-existence of Vishnu. Any desires that correspond to a utility function would yield consistent preferences. If you follow them then nobody can raise any logical objection. God would have to do the same, if he existed. He would just have more strength and knowledge with which to impose his will (to the point of creating a logical contradiction - but we can charitably assume theologians meant something else.) When it comes to consistent moral foundations, the theorem gives no special place to his imaginary desires relative to yours.
I mentioned above that a simple utility function does not seem to capture my moral preferences, though it could be a good rule of thumb. There's probably no simple way to find out what you value if you don't already know. CFAR does not address the abstract problem; possibly they could help you figure out what you actually value, if you want practical guidance.
Now I'm curious about Crowley too. I almost never really get offended, so even if he is abrasive, I'm sure I can focus on the facts and pick out a few things to share, even if I wouldn't share him directly.
Note that he doesn't believe in making anything easy for the reader. The second half of this essay might perhaps have what you want, starting with section XI. Crowley wrote it under a pseudonym and at least once refers to himself in the third person; be warned.
Replies from: None↑ comment by [deleted] · 2015-03-26T18:06:37.627Z · LW(p) · GW(p)
Thanks a lot for explaining the utility theorem. So just to be sure: if my own personal values and preferences (I'll check CFAR for help on this, eventually) are the basis of morality, is morality necessarily subjective?
I'll get to Crowley eventually too, thanks for the link. I've just started the Rationality e-book and I feel like it will give me a lot of the background knowledge to understand other articles and stuff people talk about here.
Replies from: Viliam_Bur, TheAncientGeek↑ comment by Viliam_Bur · 2015-03-27T10:08:34.270Z · LW(p) · GW(p)
is morality necessarily subjective?
If "subjective" means "a completely different alien species would likely care about different things than humans", then yes. You also can't expect that a rock would have the same morality as you.
If "subjective" means "a different human would care about completely different things than me" then probably not much. It should be possible to define a morality of an "average human" that most humans would consider correct. The reason it appears otherwise is that for tribal reasons we are prone to assume that our enemies are psychologically nonhuman, and our reasoning is often based on factual errors, and we are actually not good enough at consistently following our own values. (Thus the definition of CEV as "if we knew more, thought faster, were more the people we wished we were, had grown up farther together"; it refers to the assumption of having correct beliefs, being more consistent, and not being divided by factional conflicts.)
Of course, both of these answers are disputed by many people.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2015-03-27T11:48:02.874Z · LW(p) · GW(p)
There is a set of reasonably objective facts about what values people have and how your actions would impact them. That leads to reasonably objective answers about what you should and shouldn't do in a specific situation. However, they are only locally objective; what value-based ethics removes is globally objective answers, in the sense that you should always do X or refrain from Y irrespective of the context.
It's a bit like the difference between small g and big G in physics.
Replies from: Lumifer↑ comment by Lumifer · 2015-03-27T14:45:55.676Z · LW(p) · GW(p)
There is a set of reasonably objective facts about what values people have and how your actions would impact them. That leads to reasonably objective answers about what you should and shouldn't do in a specific situation.
Nope. It leads to reasonably objective descriptive answers about what the consequences of your actions will be. It does not lead to normative answers about what you should or should not do.
Replies from: None, TheAncientGeek↑ comment by [deleted] · 2015-03-27T17:45:47.261Z · LW(p) · GW(p)
Okay, I guess I'm still confused. So far I've loved everything I've read on this site and have been able to understand; I've appreciated/agreed with the first 110 pages of the Rationality ebook, felt a little skeptical for liking it so completely, and then reassured myself with the Aumann's agreement theorem it mentions. So I feel like if this utility theorem which bases morality on preferences is commonly accepted around here, I'll probably like it once I fully understand it. So bear with me as I ask more questions...
Whose preferences am I valuing? Only my own? Everyone's equally? Those of an "average human"? What about future humans?
Yeah, by subjective, I meant that different humans would care about different things. I'm not really worried about basic morality, like not beating people up and stuff, but...
I have a feeling the hardest part of morality will now be determining where to strike a balance between individual human freedom and concern for the future of humanity.
Like, to what extent is it permissible to harm the environment? If something, like eating sugar for example, makes people dumber, should it be limited? Is population control like China's a good thing?
Can you really say that most humans agree on where this line between individual freedom and concern for the future of humanity should be drawn? It seems unlikely...
Replies from: Lumifer, dxu↑ comment by Lumifer · 2015-03-27T19:20:56.602Z · LW(p) · GW(p)
I'm the wrong person to ask about "this utility theorem which bases morality on preferences" since I don't really subscribe to this point of view.
I use the word "morality" as a synonym for "system of values" and I think that these values are multiple, somewhat hierarchical, and NOT coherent. Moral decisions are generally taken on the basis of a weighted balance between several conflicting values.
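As a purely illustrative aside, here is one crude way to cash out "a weighted balance between several conflicting values" in code. Every value, weight, and score below is invented; and note that a fixed weighted sum is itself a simple utility function, so treat this as a cartoon of the balancing step rather than a model of a hierarchical, not-fully-coherent value system.

```python
# Toy "weighted balance of conflicting values" for comparing two options.
# Every value, weight, and score below is invented purely for illustration.
VALUE_WEIGHTS = {"honesty": 0.5, "kindness": 0.3, "self_interest": 0.2}

def moral_score(option_scores):
    """Weighted sum of how well an option satisfies each value (scores in [-1, 1])."""
    return sum(w * option_scores.get(v, 0.0) for v, w in VALUE_WEIGHTS.items())

options = {
    "tell_hard_truth": {"honesty": 1.0, "kindness": -0.6, "self_interest": 0.1},
    "white_lie":       {"honesty": -1.0, "kindness": 0.8, "self_interest": 0.3},
}

for name, scores in options.items():
    print(f"{name}: {moral_score(scores):+.2f}")
# tell_hard_truth: +0.34
# white_lie:       -0.20
```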
↑ comment by dxu · 2015-03-27T18:19:23.980Z · LW(p) · GW(p)
By definition, you can only care about your own preferences. That being said, it's certainly possible for you to have a preference for other people's preferences to be satisfied, in which case you would be (indirectly) caring about the preferences of others.
The question of whether humans all value the same thing is a controversial one. Most Friendly AI theorists believe, however, that the answer is "yes", at least if you extrapolate their preferences far enough. For more details, take a look at Coherent Extrapolated Volition.
↑ comment by [deleted] · 2015-03-27T20:53:50.306Z · LW(p) · GW(p)
Okay, that makes sense, but does this mean you can't say someone else did something wrong, unless he was acting inconsistently with his personal preferences?
Ah, okay, I've been reading most hyperlinks here, but that one looks pretty long, so I will come back to it after I finish Rationality (or maybe my question will even be answered later on in the book...)
↑ comment by hairyfigment · 2015-03-27T18:45:36.099Z · LW(p) · GW(p)
That is definitely not the idea behind CEV, though it may reflect the idea that a sizable majority will mostly share the same values under extrapolation.
↑ comment by seer · 2015-03-30T00:33:41.748Z · LW(p) · GW(p)
Most Friendly AI theorists believe, however, that the answer is "yes", at least if you extrapolate their preferences far enough.
Do they have any arguments for this besides wishful thinking?
Replies from: hairyfigment↑ comment by hairyfigment · 2015-03-30T00:44:36.914Z · LW(p) · GW(p)
I told him "they" assume no such thing - his own link is full of talk about how to deal with disagreements.
Replies from: seer↑ comment by seer · 2015-03-30T00:59:34.042Z · LW(p) · GW(p)
Yes, I've read most of the arguments; they strike me as highly speculative and hand-wavy.
Replies from: hairyfigment↑ comment by hairyfigment · 2015-03-30T01:10:40.322Z · LW(p) · GW(p)
This is an impressive failure to respond to what I said, which again was that you asked for an explanation of false data. "Most Friendly AI theorists" do not appear to think that extrapolation will bring all human values into agreement, so I don't know what "arguments" you refer to or even what you think they seek to establish. Certainly the link above has Eliezer assuming the opposite (at least for the purpose of safety-conscious engineering).
ETA: This is the link to the full sub-thread. Note my response to dxu.
↑ comment by TheAncientGeek · 2015-03-27T20:06:37.299Z · LW(p) · GW(p)
Is that a fact? It's true that the theories often discussed here, utilitarianism and so on, don't solve the motivation problem, but that doesn't mean no theory does.
↑ comment by TheAncientGeek · 2015-03-27T11:41:01.787Z · LW(p) · GW(p)
Not necessarily subjective, in the sense that "what should I do in situation X" necessarily lacks an objective answer.
Even if you treat all value as morally relevant, and you certainly don't have to, there is a set of reasonably objective facts about what values people have and how your actions would impact them. That leads to reasonably objective answers about what you should and shouldn't do in a specific situation. However, they are only locally objective...
↑ comment by orthonormal · 2015-03-20T21:55:58.024Z · LW(p) · GW(p)
There's also a Less Wrong meetup group in Madison, if you still live in Wisconsin! (They also play lots of board games.)
Replies from: None↑ comment by Gunnar_Zarncke · 2015-04-21T22:12:15.427Z · LW(p) · GW(p)
You are awesome! I wish I could radiate only half as much enthusiasm and happiness. Even though I feel it, I just can't express it as much. I plan to learn from you in this regard!
You are welcome. I will also try to answer your questions. Some of them I pondered myself and arrived at some answers. But then, I had more time. I have a comparable background and I have a deep interest in children, so you may also find my resources for parents of interest.
But now to your questions:
My friends suggested I become a missionary, but to me, it made far more sense to become a high profile lawyer and donate 90% of my salary to fund a dozen other missionaries. (A Christian version of effective altruism?) No one ever understood!
Awesome. But it can be explained by the presence of memes in real-life Christian culture that mark such actions as misguided. See Reason as memetic immune disorder.
Who was the historical Jesus? As a history source, why is the Bible unreliable?
The Jesus Seminar may have answers of the kind you desire. If a historical Jesus can be found by taking the Bible as historical evidence instead of sacred text, then look there. The Jesus Seminar has been heavily criticised (in part legitimately so), but it may provide the counter-balance to your already known facts. See also http://en.wikipedia.org/wiki/Jesus_Seminar
How can I have morality?? Do I just have to rely on intuition? If the whole world relied on reason alone to make decisions, couldn't we rationalize a LOT of things that we intuit as wrong?
Well. What do you mean by "how"? By which social process does morality exist? Or due to which psychological process? The spiritual process is apparently out of business because it is ungrounded. There was a Main post with nice graphs about it that I can't find.
You might also want to replace the question with "why do I think that I have morality?"
Does atheism necessarily lead to nihilism? (I think so, in the grand scheme of things? But the world/our species means something to us, and that's enough, right?)
No. Atheism does remove one set of symbol-behavior-chains in your mind, yes. But a complex mind will most likely lock into another better grounded set of symbol-behavior-chains that is not nihilistic but - depending on your emotional setup - somehow connected to terminal values and acting on that. "symbol-behavior-chains" is my ad-hoc term. Ask if it is unclear.
What about all the really smart people I know and respect, like my sister and Grandma, who have had their share of doubts but ultimately credit their faith to having experienced extraordinary, miraculous answers to prayer? Like obviously, their experiences don't convince ME to believe, but I hate to dismiss them as delusional and call it a wild coincidence...
I feel for you. I have the same challenge. See my first link above. I respect them. I know how complex this migration is. I was free to explore. How can I not reciprocate? I don't want to manipulate. I just want the best for them. And then extensions of the simulation argument might actually lead you back to theism (at least a bit).
Good luck and cheers!
Replies from: None↑ comment by [deleted] · 2015-04-28T04:49:16.680Z · LW(p) · GW(p)
Thanks for your reply :) You seem to radiate plenty of enthusiasm to me!
I'll check out your links and save the Jesus seminar stuff for later; I'm going to finish the rationality ebook and then researching historical Jesus will be my next project, but it looks like a good resource!
As for your questions...when I wrote this original post, by "how" I was still hoping that some sort of objective morality might exist... one not related to the human subject (a hope I now see as kind of silly but maybe natural so soon after my deconversion). I was hoping for some solid rules to follow that would always lead to good outcomes and never cause any emotional disturbance, but I've come to accept that things are just a bit more complicated than that in the real world...
↑ comment by Viliam_Bur · 2015-03-25T15:06:55.925Z · LW(p) · GW(p)
My two cents:
Who was the historical Jesus?
Who cares? Okay, you obviously do, but why? If the religion is false and reports of miracles are lies, is there really an important difference between a) "Yes, once there was a person called Jesus, but almost everything that the Bible attributes to him is completely made up" and b) "No, everything about Jesus is completely made up"?
In other words, if I tell you that my uncle Joe is the true god and performs a thousand miracles every Thursday, why would you care about whether a) I have a perfectly ordinary, non-divine, non-magical uncle called Joe, and I only lied about his divinity and miracles, or b) actually I lied even about having an uncle called Joe? What difference would it make and why?
As a history source, why is the Bible unreliable?
Because it was written by people who had an agenda to "prove" that they are the good ones and the divinely chosen ones? Maybe even because it contains magic?
I don't fully trust even historical books written recently. It can be funny to read history textbooks written by two countries which had conflicts recently; how each of them describes the events somewhat differently. And today's historical books are much more trustworthy than the old ones, because today people are literate, they are allowed to read and compare the competing books, they are allowed to criticize without getting killed immediately.
Sorry for the offensive comparison, but trusting the Bible's historical accuracy would be as if, in a parallel universe, Hitler had won the war, written his own history book about what "really happened", and made it a mandatory textbook for everyone... and then, a few thousand years later, people trusted his every written word to be honest and accurate.
the world/our species means something to us, and that's enough, right?
Exactly. You already know what you care about. Atheism simply means there is no higher boss who could tell you "actually, you should like this and hate that, because I said so", and you would have to shut up and obey.
On the other hand; people can be wrong about their preferences, especially when their decisions are based on wrong assumptions. But "being wrong" is different from "disagreeing with the boss".
I can't wait to read the sequences
I would recommend the PDF version. It is better organized; you can read it from beginning to end, instead of jumping through the hyperlinks. And it does not include the comments, which will allow you to focus on the text and finish it faster (the comments below the original articles are 10x as much text as the articles themselves; they are often interesting, but that is an extremely large amount of text to read).
Replies from: None↑ comment by [deleted] · 2015-03-26T17:44:02.511Z · LW(p) · GW(p)
Thanks for replying!
Why do I care about Historical Jesus? I actually wouldn't, I guess, except that I absolutely need to have a really well thought out answer to this question in order to maintain the respect of friends and family, some of whom credit Historical Jesus as one of the top reasons for their faith.
It can be funny to read history textbooks written by two countries which had conflicts recently; how each of them describes the events somewhat differently.
Good point about the authors being biased, thanks, no offense taken! I still don't like when people say miracles/magic definitively prove the Bible wrong though, since if a God higher than our understanding were to exist, of course he could do magic when he felt like it. Still, based on our understanding of the world, there is no good reason/evidence at all to believe in such a God.
I got the Rationality ebook, and it is great! Sooo well-written, well-organized, and well thought out! I just started today and am already on the section "Belief in Belief." I love it so much so far that it's a page-turner for me as much as my favorite suspense/fantasy novels. Definitely worth sharing and going back to read and re-read :)
Replies from: Viliam_Bur, Lumifer↑ comment by Viliam_Bur · 2015-03-27T09:52:25.768Z · LW(p) · GW(p)
I absolutely need to have a really well thought out answer to this question in order to maintain the respect of friends and family, some of whom credit Historical Jesus as one of the top reasons for their faith.
Yep. On the social level I get it, but on another level, it's a trap.
The trap works approximately like this: "I will allow you not to believe in my bullshit, but only if you give me a blank check to bother you with as many questions as I want about my bullshit, and you have to explore all of these questions seriously, give me a satisfactory answer, and of course I am allowed to respond by giving you even more questions".
If you agree to this, you have de facto agreed that the other side is allowed to waste unlimited amounts of your time and attention, as a de facto punishment for not believing their bullshit. -- Today you are asked to form a well-researched opinion about the Historical Jesus, which of course would take a few weeks or months of really serious historical research; and tomorrow it will be something new, e.g. a well-researched opinion about the history of the Church, or about the history of the Crusades, or about the history of the Inquisition, or whatever. Alternatively, they may point at some parts of your answer about the Historical Jesus and say: okay, this part is rather weak, you have to bring me a well-researched opinion about this part. For example, you were quoting Josephus and Tacitus, so now give me full research on both of them, how credible they are, what other claims they made, etc.
Unless the other side gives up (which they have no reason to; this game costs them almost nothing), there are only two ways this can end. First, you might give up, and start pretending to be religious again. Second, after playing a few rounds of this game, you refuse to play yet another round... in which case the other side will declare their victory, because it "proves" your atheism is completely irrational.
Well, you might play a round or two of this game just to show some good will... but it is a game constructed so that you cannot win. The real goal is to manipulate you into punishing yourself and feeling guilty. -- Note: The other side may not realize they are actually doing this. They may believe they are playing a fair game.
Replies from: None↑ comment by [deleted] · 2015-03-27T18:01:33.652Z · LW(p) · GW(p)
Good point, thanks!! I can't get too caught up in this; there are things I'd rather be learning about, so I need a limit. I'd like to think I can win, though this is probably just the self-anchoring fallacy (I'm learning!)
Just because I would have been swayed by an absence of positive evidence doesn't mean everyone will be, even people who seem decently smart and open-minded with a high view of reason, like my old track coach and religion teacher. I just made a deal though, that I would read any book of his choice about the Historical Jesus (something I probably would have done anyway!) if he reads Rationality: AI to Zombies :)
↑ comment by Lumifer · 2015-03-26T17:49:46.310Z · LW(p) · GW(p)
Historical Jesus
Be careful about distinguishing two very different propositions:
(1) There was a preacher named Jesus of Nazareth who lived in a certain time in a certain place.
(2) Jesus of Nazareth rose from the dead and was the Son of God.
Specifically, evidence in favor of (1) usually has nothing to do (2).
Replies from: Good_Burning_Plastic↑ comment by Good_Burning_Plastic · 2015-03-27T13:06:04.167Z · LW(p) · GW(p)
That doesn't sound quite right to me, at least if you mean "nothing" literally, given that not-(1) logically implies not-(2).
I think the much smaller posterior probability of (2) than (1) has more to do with the much smaller prior than with the evidence.
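A quick sketch of that point with invented numbers: if the same evidence (the same likelihood ratio) is applied to both claims, the gap in the posteriors comes almost entirely from where the priors started. The priors and likelihood ratio below are made up purely for illustration.

```python
# Posterior odds = prior odds * likelihood ratio (Bayes' rule in odds form).
# The numbers here are invented purely to illustrate the prior-vs-evidence point.
def posterior_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

likelihood_ratio = 10.0  # suppose the historical evidence favors each claim 10:1
for label, prior in [("(1) ordinary preacher existed", 0.1),
                     ("(2) rose from the dead", 1e-9)]:
    odds = posterior_odds(prior, likelihood_ratio)
    print(f"{label}: prior odds {prior:g} -> posterior odds {odds:g}")
# (1): 0.1  -> 1      (roughly even odds after the evidence)
# (2): 1e-9 -> 1e-08  (still vanishingly unlikely after the same evidence)
```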
Replies from: Lumifer↑ comment by Lumifer · 2015-03-27T14:40:52.354Z · LW(p) · GW(p)
A fair point, though "normal" people have a strong tendency to jump from "not-(1) logically implies not-(2)" to "therefore (1) implies (2)".
Replies from: None, dxu↑ comment by [deleted] · 2015-03-27T18:18:05.284Z · LW(p) · GW(p)
No worries, I knew what you meant. I am pretty good at logic though, so no need to worry about illogical jumps here. I may not have very much background knowledge about terminology or history or science or anything (yet), and I may not be a very articulate writer (yet), but the one thing I can usually do very well is think clearly. I am even feeling a bit smug after finding the mammography Bayesian reasoning problem that apparently only 15% of doctors get correct to be easy and obvious. :)
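For readers who haven't seen it, here is a minimal sketch of that calculation in code, using the figures commonly quoted for the mammography problem (roughly 1% prevalence, 80% sensitivity, 9.6% false-positive rate); treat the exact numbers as illustrative rather than current medical data.

```python
# Bayes' rule for the classic mammography screening problem.
# The figures are the commonly quoted textbook ones, used here only for illustration.
p_cancer = 0.01              # prior: ~1% of women screened have breast cancer
p_pos_given_cancer = 0.80    # sensitivity: 80% of women with cancer test positive
p_pos_given_healthy = 0.096  # false-positive rate: 9.6% of healthy women test positive

# Total probability of a positive mammogram.
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)

# Posterior probability of cancer given a positive test.
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"P(cancer | positive) = {p_cancer_given_pos:.3f}")  # about 0.078, not 0.8
```

The counterintuitive part is that the low base rate dominates: most positive tests come from the much larger healthy group.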
↑ comment by dxu · 2015-03-27T18:10:50.549Z · LW(p) · GW(p)
Ah, yes, the ever-popular fallacy of the inverse.
↑ comment by Gram_Stone · 2015-03-21T08:48:23.710Z · LW(p) · GW(p)
How can I have morality?? Do I just have to rely on intuition? If the whole world relied on reason alone to make decisions, couldn't we rationalize a LOT of things that we intuit as wrong?
Does atheism necessarily lead to nihilism? (I think so, in the grand scheme of things? But the world/our species means something to us, and that's enough, right?)
If these are the questions weighing heavily on your mind, then you would probably enjoy Gary Drescher's Good and Real. I suggest reading the first Amazon review to get a good idea of the topics it covers. It is very similar to some of the content in the Sequences. (By the way, if you purchase the book through that link, 5% goes to Slate Star Codex.)
Also, the Sequences have recently been released as an ebook entitled Rationality: From AI to Zombies. (You can download the book for free in MOBI, EPUB, and PDF format if you follow the 'Buy Now' link at the bottom of that page and enter a price of $0.00. If you do this, it won't request any payment information. If you pay more than that, the money will go to the Machine Intelligence Research Institute.) I have found that Rationality is much, much easier to read than the Sequences.
Are rationalists just as guilty of circular reasoning as Christians are? (Why do I trust human reason? My human reason tells me it's great. Why do Christians trust God? The Bible tells them he's great.)
You may not yet have the background knowledge necessary to understand it, and if that's the case then you can always return to it later, but I think that the most relevant post on this topic is Where Recursive Justification Hits Bottom. It's chapter 264 in Rationality. (That's a daunting number but the chapters are very short. Rationality is Bible-length but you can hack away at it one chapter at a time, or more at a time, if you please.) To be frank, you're asking the Big Questions and you might have to read a bit before you can answer them.
What about all the really smart people I know and respect, like my sister and Grandma, who have had their share of doubts but ultimately credit their faith to having experienced extraordinary, miraculous answers to prayer? Like obviously, their experiences don't convince ME to believe, but I hate to dismiss them as delusional and call it a wild coincidence...
When I read that, I'm reminded of something that Luke Muehlhauser, a prominent LessWrong user and former devout Christian, once wrote:
I went to church and Bible study every week. I prayed often and earnestly. For 12 years I attended a Christian school that taught Bible classes and creationism. I played in worship bands. As a teenager I made trips to China and England to tell the godless heathens there about Jesus. I witnessed miraculous healings unexplained by medical science. And I felt the presence of God. Sometimes I would tingle and sweat with the Holy Spirit. Other times I felt led by God to give money to a certain cause, or to pay someone a specific compliment, or to walk to the cross at the front of my church and bow before it during a worship service.
As you said yourself, "Yay tolerance of ambiguity!" Although their beliefs are false, their experiences can certainly be real. Even if there exists no God, that doesn't mean that the Presence-of-God Quale isn't represented by the patterns of neural impulses of some human brains. It's easy, nay, the default action, to view others with false beliefs in a negative light, but if rationalism were always intuitively obvious, then the world would be a very different place. I try not to make myself feel bad by overestimating my ability to convince others of the value of rationalism. That doesn't mean that I keep my mouth shut all of the time, but I do take it a day at a time, and it seems to work; sometimes I talk about something and it doesn't seem to go anywhere, and then a friend will bring it up days or weeks later and say something like, "You know, I was thinking about that, and I realized it made a lot of sense." And then I privately jump up and down. Sometimes it doesn't work, but for me, there's definitely a middle ground between falling in line and abandoning All I Have Ever Known. I also often see Paul Graham's essay What You Can't Say linked here when new atheists ask about how to maintain ties with religious family members.
EDIT: Oh, and welcome to LessWrong!
Replies from: None↑ comment by [deleted] · 2015-03-24T05:32:35.365Z · LW(p) · GW(p)
Thanks for the welcome!! Good and Real does seem like a good read. I'm going to read Rationality first, which I'm guessing will help me work through some of my questions, but I'll definitely keep that one in mind for later.
Where Recursive Justification Hits Bottom was really relevant, thanks for the link. I'm still digesting Occam's Razor; I think that was the only concept completely new to me.
Thanks for the link to Luke's story. It seems like we went through the same difficult process of desperately wanting to believe, but ultimately just not being able to. I find it super encouraging that his doubts stemmed from researching the Historical Jesus, since that's one thing that my old high school track coach/religion teacher insists I have to look into. He claims no atheist has ever been able to answer any of his questions. The atheists I know all credit a conflict with science as the reason they left Christianity, and I credit...I don't even know, my personal thoughts, I guess... but it's great to know that researching history will also lead there. I'll have to go through the same resources he used so I can better explain myself to Christian friends.
"Although their beliefs are false, their experiences can certainly be real. Even if there exists no God, that doesn't mean that the Presence-of-God Quale isn't represented by the patterns of neural impulses of some human brains." Thanks for that!! It does make me feel better.
Hahaha, wow, I haven't even considered trying to convince others of the value of rationalism yet. Especially after my deconversion, I've been totally on the defensive, almost apologizing for my rationality. ("It's not my fault; it's the personality I was born with. If you guys really believe, you should feel lucky not just for having been born into Christian homes, but also, more importantly, for having been born with the right personalities for faith." and "You think my prayers for a stronger faith weren't answered because my faith wasn't strong enough, but I was doing everything possible to strengthen my faith to no avail." and "Believing isn't a choice, no matter how much I wanted it, I couldn't believe. So if any brand of Christianity is true, Calvinism is your best bet, and I wasn't among the elect.")
So far this strategy is doing remarkably, remarkably well in maintaining ties with friends and family. People understand where I'm coming from, and they feel just awful, sorry for me since they think I'm going to hell, but for the most part, not finding me at fault. Pity is slightly annoying when I'm so happy, but hopefully their pity will eventually lead them to find God unfair, which will lead them to dislike their beliefs, which will lead them to question why they bother believing something they don't like...and then, they won't find much reason at all aside from upbringing/community. Those were actually pretty much the steps of my deconversion process, only I didn't need a personal connection with a particular unbeliever to get there. Anyway, if nothing else, the defensive strategy works wonders for relations. I helped a friend share her doubts with her family in this way, and she said it worked for her too.
Replies from: Gram_Stone↑ comment by Gram_Stone · 2015-03-27T17:42:09.172Z · LW(p) · GW(p)
I just thought to point out that there's going to be a Rationality reading group; basically, it's a planned series of posts about each Part in the book, where you have the opportunity to talk about it and ask questions. You clearly are very curious (it's the only way you could survive so many hyperlinks), so it seems like just the thing for you.
I credit...I don't even know, my personal thoughts, I guess...
Just to give you words for this, and from what I read in the blog post that you linked to in your first comment (which I found very amusing), I think you're trying to verbalize that Christianity was inconsistent. You don't have to prefer consistency, but most people claim to prefer it, and apparently you do prefer it. (I know I do.) You didn't like it as a system because it was a system that said that God was perfectly benevolent and ridiculously selfish (though the second statement was only implicit) at the same time. You can always look at other subjects like science and history and come to the conclusion that religion conflicts with those things when it shouldn't; but you can also just look at religion and see how it conflicts with itself. I think that's what you did.
I saw some of your other comments about meaning, and meaninglessness in the absence of God, and nihilism. Notice that when you ask "Does life have meaning in the absence of God?", everyone says that it depends on what you mean, offers some possible interpretations, and shares their viewpoints and conclusions on what it means. The simplest way to give you a clue as to some of the problems with the question is something that you wrote yourself:
Oh! I like that definition of nihilism, thanks. Personally, I think I could actually tolerate accepting nihilism defined as meaninglessness (whatever that means), but since most people I know wouldn't, your definition will come in handy.
Vagueness is part of the problem, but there are other parts as well. Even though I've never been religious and therefore don't know what it's like to lose faith, worrying about "meaninglessness" is something that I dealt with. I promise that atheists aren't all secretly dead inside. (I actually used to wonder about that.) Rationality Parts N and P deal with questions like that.
I also want to say that I agree with Viliam_Bur's comments on you doing research to defend your new beliefs: It's a lot cheaper time- and resource-wise to act like a skeptic than it is to do research, and you never have to tolerate that awful feeling that you might be wrong. Even when you return with evidence contrary to their beliefs, their standards of evidence are too high for it to matter. I think it's telling that your coach sat around waiting for unusually knowledgeable, atheistic passersby to tell him about the Historical Jesus instead of doing any research on his own.
Replies from: None↑ comment by [deleted] · 2015-03-27T19:50:21.277Z · LW(p) · GW(p)
Cool, thanks so much for mentioning the Rationality reading group!! I'm probably going to finish each section long before it's discussed, but I'll definitely go back to re-read and chat. I'll bookmark it for sure! So exciting! I will try to bribe my sister and maybe a few other people to participate as well (self-anchoring again, maybe, but I'll call it optimism, haha).
Ooh, I like consistency, and Christianity is inconsistent. Christianity conflicts with itself. A God can't be both perfectly benevolent and ridiculously selfish. That's why I rejected it. Yeah, that sounds nice, thanks for the words. :)
Good point about vagueness. I like this slatestarcodex post: The Categories Were Made for Man, Not Man for the Categories. Looking forward to parts N and P now too!
And yeah, good point about the standards of evidence being too high. Still, right now my only info about the Historical Jesus is based off a few articles I've read on the internet, and I just feel like after 22 years learning one thing, I can't just reject it and jump ahead to other things without being able to formulate basic, well-reasoned atheist answers to common Christian questions. I guess it's not just about maintaining my friends' respect, it's also about my own self-respect. I can't go around showing the improbability of every religion, but I want to be able to do so about the one I grew up in (maybe this is a cousin of the sunk-cost fallacy?). Luckily, all of the groundwork here has already been done by other atheists; it should just be a matter of familiarizing myself with basic facts/common arguments.
comment by Epictetus · 2014-12-22T12:19:09.291Z · LW(p) · GW(p)
Hello. My name is Tom. I'm 27 and currently working on a PhD in mathematics. I came to this site by following a chain of links that started with TVTropes, of all things.
I have been a fan of rational thinking as long as I can remember. I'd always had the habit of asking questions and trying to see things from every point of view. I devoured all sorts of books growing up and shifted my viewpoints often enough that I became willing to accept the notion that everything I currently believe is wrong. That's what pushed me to constantly question my own beliefs. I have read enough of this site to satisfy myself that it would be worthwhile to make an account and perhaps participate in the community that built it.
Replies from: Daniel_Burfoot↑ comment by Daniel_Burfoot · 2014-12-22T17:20:11.481Z · LW(p) · GW(p)
Welcome to LW. At one point in my life I would read a randomly selected passage from the Enchiridion before going to sleep every night.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-12-22T17:29:49.101Z · LW(p) · GW(p)
Which of them all?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-12-22T22:02:34.372Z · LW(p) · GW(p)
Presumably that of Epictetus, the ancient Stoic.
comment by Richard Korzekwa (Grothor) · 2014-12-16T23:06:21.264Z · LW(p) · GW(p)
Hi everyone!
My name is Rick, and I'm 29. I've been lurking on LW for a few years, casually at first, but now much more consistently. I did finally post a stupid question last week, and I've been going to the Austin Meetup for about a month, so I feel it's time to introduce myself.
I'm a physics PhD student in Austin. I'm an experimentalist, and I work on practical-ish stuff with high-intensity lasers, so I'm not much good answering questions about string theory, cosmology, or the foundations of quantum mechanics. I will say that I think the measurement problem (as physicists usually refer to the question which "many worlds" is intended to answer) is interesting, but it's not clear to me why it gets so much attention.
I come from a town where (it seems like) everybody's dad has a PhD, and many people's moms have them as well. Getting a PhD in physics or engineering just seemed like the thing to do. I remember thinking as a teenager that if you didn't go to grad school, you were probably an uneducated yokel. More importantly, I learned very early that a person can have a PhD and still make terrible decisions or have terrible beliefs. I also formed weird beliefs like "chemistry is for girls" and "engineers ride mountain bikes; physicists ride road bikes". I think I still associate educational attainment too strongly with status.
I've been involved in the atheist and secular humanism communities for close to ten years now. I gradually transitioned from viewing these communities as a source of intellectual stimulation to sources of interesting and relatable people. I'm still involved in the secular humanism club that I started a few years back at UT.
I was vaguely aware of Less Wrong for a while before my roommate showed me HPMOR. After reading through all of that (which had been released at the time), I got more into the site and quickly read all the core sequences. I found all of it to be much more intellectually satisfying than all of the atheist apologetics I'd read in college, and I realized how much better it was for actually accomplishing something other than winning an argument. Realizing how toxic most political arguments are and understanding why I could win an argument and still feel icky about it were pretty huge revelations for me. In the last six months, I've been able to use things that I learned here and made some seriously positive changes in my life. It's been pretty great.
I'm also interested in backpacking, rock climbing, and competitive cycling. A bike race is a competition in which knowing what your opponent knows about you can be a decisive advantage. It's very much a Newcomb-like problem. Maybe I'll start a thread about that sometime.
comment by Marlon · 2015-03-12T17:19:55.893Z · LW(p) · GW(p)
Hello. New to the active part of the site, I've been lurking for a while, reading many discussions (and not always agreeing, which might be the reason I'm going active). I've come to the site thanks to HPMOR and the quest towards less bias.
I'm a student in molecular dynamics in France (soon starting a PhD), a skeptic (I guess), and highly critical of many papers (especially in my field). Popper is probably the closest to how I'd define the philosophy of what I'm doing, though with a few contradictions.
I'm in the country of wine, cheese and homeopathy, don't forget it :)
comment by babblefish · 2015-01-18T23:22:35.614Z · LW(p) · GW(p)
Hey... I'm Babblefish. Having posted elsewhere, I've been directed to this helpful Welcome thread.
How I got here? friends->HPMOR->Lesswrong blogs-> Project suggestion-> Forum.
Much as I'd love to claim I'm here to meet all you lovely folks, the truth is, I'm mainly here for one reason: I was recently re-reading the original blogs (e-reader form and all that), and noticed a comment by Eliezer, something to the effect of "Someone should really write 'The simple mathematics of everything'". I would like to write that thing.
I'm currently starting my PhD in mathematics (apparently common here), with several relevant side interests (physics, computing, evolutionary biology, storytelling), and the intention of teaching/lecturing one day.
Now... If someone's already got this project sorted out (it has been a few years), great... however I notice that the wiki originally started for it is looking a little sad, (diffusion of responsibility perhaps), and various websearches have turned up nothing solid.
So... if the project HAS NOT been sorted out yet, then I'd be interested in taking a crack at it: It'll be good writing/teaching practice for me, give me an excuse to read up on the subjects I HAVEN'T got yet, and hopefully end up being a useful resource for other people by the time I'm finished (and hopefully even when I'm under way).
I am here because I figure this is probably a pretty good place to get additional information. In particular: 1) Has "the simple mathematics of everything" already been taken care of? If so, where? 2) Does anyone know what wiki/blog formats/providers might be useful (and free, maybe?) and ABLE TO SUPPORT EQUATIONS? 3) Any other comments/advice/whatever?
Cheers, Babblefish.
comment by Jacob Falkovich (Jacobian) · 2014-12-22T21:11:46.382Z · LW(p) · GW(p)
Greetings, y'all. I'm very excited to take the plunge into the LW community proper. I spent the last six months plowing through the sequences and testing the limits of my friends' patience when I tried to engage them in it. Besides looking for people to talk to, I am beginning to feel a profound restlessness at not doing anything with all the new ideas in my head. At 27, I'm not a "level 1 adult" yet. I don't really have something to protect or a purpose I'm dedicated to. I hope that being active in the community will at least get me in the habit of being active.
My name is Jacob, I was born in the Soviet Union and grew up in Israel. My parents are scientists, my dad is probably top 10 worldwide in his field. I grew up playing soccer and sitting at dinner with students and scientists from around the world, I hope I actually did realize even as a teenager how awesome it was. I did my Bar Mitzva at a reform synagogue but God was never really part of our family conversation, I don’t think that I’ve said a prayer and actually meant it since I was 12 or 13. There are just enough Russian-speaking math geeks in Israel to form a robust subculture and I was at the top of it: winning national competitions in math and getting drunk the next day on cheap vodka. I had a very strange four-year service in the IDF. I sweated blood for a degree in math and physics that got me a minimum-wage job in the Israeli desert, and then effortlessly breezed my way through a top 20 MBA in the US that suddenly made me a middle class New Yorker. I work an easy job that leaves me with plenty of energy at the end of the day to play sports, perform stand up, date, and improve my skills as a rationalist by considering my intellectual biases.
I stumbled on LW after reading an article about Roko’s #$&%!@ of all things, and the last few months were what I saw someone here describe as “epiphany porn”. Even before that, I read a lot on similar themes and took it all very seriously: “Fooled by Randomness” made me quit my job as a day-trader for a hedge fund and “Thinking Fast and Slow” changed my life in several ways, including the choice of car I bought. I’m very happy to start noticing changes in my brain after LW too. For example, I spent a lot of my time in the US arguing with anti-zionists. I just recently realized that the hypocrisy and stupidity I usually find arrayed against me has pushed me into a pro-Israel affective death spiral of my own, that I’m now trying to climb out of. In general, I argue less about politics now and don’t ever plan to vote anymore. I just went to my first OB-New York meetup and hung out at the solstice concert, I hope to become more and more engaged with LWers offline going forward.
The main results of my business school days are several entrepreneurial fantasies about "Moneyballing" things. One recent idea is to set up a personal philanthropy investment fund - people put in X% of their salary that can be used only for emergency or charity. This eliminates the psychological pain of giving money, increases giving, makes personal altruism much more focused and effective, and saves on taxes. I also came up with a better matching algorithm for dating websites. Dating in general is at the very top of my interests. While a rigorous model of Bayesian dating seems as unattainable as quantum relativity, I do find that my open-minded approach has gotten me into relationships that I didn't even believe were an option a few years ago (that's a discussion I'd love to get to somewhere else on this site).
And finally: where I hope to end up. Perhaps even a year ago I imagined I could be perfectly satisfied living a content middle-class life with a decent job, good relationships and fun hobbies. I realized that the world doesn’t care too much that I was always the smartest person in the room as a teenager, and that I’d do well to dedicate myself to humility. Unfortunately, LW changed that. I see now that things are changing and going to change unpredictably, and that smart people occasionally do make a very non-humble impact. I’m not in a rush to plunge myself into some grand project (like FAI) just for the sake of it, but I do feel that my life is getting too comfortable for comfort. When the waves come, I want to have built a rad surfboard.
Replies from: Gondolinian↑ comment by Gondolinian · 2014-12-23T01:06:57.524Z · LW(p) · GW(p)
I grew up playing soccer and sitting at dinner with students and scientists from around the world...
...winning national competitions in math and getting drunk the next day on cheap vodka...
I sweated blood for a degree in math and physics that got me a minimum-wage job in the Israeli desert, and then effortlessly breezed my way through a top 20 MBA in the US that suddenly made me a middle class New Yorker.
...quit my job as a day-trader for a hedge fund...
Wow, just... wow. *salutes*
Welcome, Jacob!
comment by Gypsum · 2015-06-08T21:43:25.240Z · LW(p) · GW(p)
Hello, all!
I’ve lurked this site on and off for at least five years, probably longer. I believe I first ran into it while exploring effective altruism. Articles that had a definite impact on my thinking included those on anchoring, priming, akrasia, and Newcomb's problem. Alicorn's Luminosity series is also up there, and I keep perpetual bookmarks to "The Least Convenient Possible World" and "Avoiding Your Belief's Real Weak Points."
I earned a B.A. in history, worked for a couple years in a financial planning office, then ended up on the rather weird track of becoming a professional piano accompanist. It turned out to be a far more financially and logistically feasible career move than the other grand idea I attempted at the time (convincing GiveWell I'd be an awesome hire). So piano is what I'm doing now. (GiveWell is admittedly still my longshot/backburner plan B, but I'm focusing all professional development on the music end of things right now).
Some things I've got more than a passing interest in, which I think fit the LW ethos:
Taubman approach: an approach to keyboard technique (and prevention of repetitive-motion injury) that has earned the recognition and interdisciplinary interest of the scientific and medical communities. My personal experience is, "This shit works: it saved my wrists and music career," and the data indicates my experience isn't just anecdote or placebo effect.
Evaluating the effectiveness of charitable-giving interventions. I went to a highly conservative/libertarian college, where, if I wanted to donate to or support any poverty-alleviation program, I'd better be ready with a 95-point defense of my choice. Or else. It's been a continuing interest of mine ever since, appealing equally well to both my cynicism and idealism.
Finding secular alternatives to the community-building structures, motivational structures, and self-examination/self-change disciplines of religion.
Classical stoicism. Thus far I've found its framework and mindhacks to be a balanced, practical fit for my personality and temperament. I especially appreciate how it hasn't yet sent me into any extreme, detrimental pitfalls as I've tried to apply it. I'd be interested in meeting other people who are trying to methodically apply it to their lives, but I get the feeling we're probably a pretty quiet and weird bunch.
I likely won't comment here much, but I wanted to at least finally make an account, introduce myself, and let you all know I've found the site valuable over the years. I've been making a more concerted effort recently to seek out and connect with individuals who value things I value, and I figured it was high time to drop by the Less Wrong community, as a part of that.
comment by michael_b · 2015-01-29T12:37:29.573Z · LW(p) · GW(p)
I discovered lesswrong.com because someone left a printout of an article on the elliptical machine in my gym. I started reading it and have become hooked.
I'm a formally uneducated computer expert. The lack of formal education makes me a bit insecure, so I obsess over improving my thinking through literature on cognitive dissonance and biases, such as books from the library and also sites like this.
Nowadays I get paid to be a middle-manager at technology companies. Most of my career has been in Linux system administration as well as functional programming.
I'm a bit of a health nut. I adopted a whole-food plant-based diet (the "China Study" diet) because it seems most well supported in the literature, although a broad consensus on the topic has not emerged. I base this decision in part on my trust of experts with titles after their names, since I'm too out of my element to read and interpret most of the literature on my own. At the same time I have a personal anecdote that this works well, so those two are enough to convince me for now.
There are times when I find reading about rational thinking rather sobering. It's clear that we were born with an irrational, "defective" brain and that we would be lucky if we could make even a small dent in improving our decision making. Improvements seem very hard to come by; I worry that all I'm really doing is learning to distrust my beliefs.
So that's a nutshell full. How's everyone else? :)
Replies from: pjeby, Lumifer↑ comment by pjeby · 2015-01-29T16:42:26.999Z · LW(p) · GW(p)
I discovered lesswrong.com because someone left a printout of an article on the elliptical machine in my gym. I started reading it and have become hooked.
What article was that?
Replies from: michael_b↑ comment by Lumifer · 2015-01-29T16:03:32.681Z · LW(p) · GW(p)
I adopted a whole-food plant-based diet (the "China Study" diet) because it seems most well supported in the literature
Are you aware of Denise Minger's dissection of the China Study?
Replies from: michael_b↑ comment by michael_b · 2015-01-29T16:58:42.177Z · LW(p) · GW(p)
Yes. I spent a lot of time reviewing critiques of The China Study (TCS), including Minger's. At the end of it I came to the following conclusions.
- Nutrition science is extraordinarily nonlinear
- I'm definitely not qualified to deconstruct claims made about nutrition
- TCS critics don't seem very qualified either, especially when compared to the qualifications of the people advancing TCS
- There's no larger group of qualified people advancing a radically different approach
So, those are my reasons. I admit they're not very satisfying. I'm spoiled by fields where, once you grok the formal proof, you can be highly confident that the claim is correct.
No such luck with something as squishy as nutrition, it would seem.
Replies from: IlyaShpitser, Lumifer↑ comment by IlyaShpitser · 2015-01-29T17:29:32.089Z · LW(p) · GW(p)
General advice: learn causal inference. Getting strong causal claims empirically is not so simple...
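To illustrate the kind of trap being pointed at, here is a toy simulation (entirely invented, not based on any real nutrition data) in which an unobserved trait drives both diet and health, so a naive regression badly overstates the causal effect; adjusting for the confounder recovers it.

```python
# Toy simulation (invented data, not about any real nutrition study) showing how
# an unmeasured confounder can make a naive association overstate a causal effect.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

health_conscious = rng.normal(size=n)                # unobserved confounder
diet_score = health_conscious + rng.normal(size=n)   # confounder drives "diet"
# True causal effect of diet on the outcome is +0.2; the confounder adds much more.
outcome = 0.2 * diet_score + 2.0 * health_conscious + rng.normal(size=n)

# Naive slope of outcome on diet (ignores the confounder entirely).
naive_slope = np.polyfit(diet_score, outcome, 1)[0]

# Adjusted slope: residualize both variables on the confounder, then regress the
# residuals (Frisch-Waugh-Lovell; equivalent to a correctly specified regression).
# The simulated data is zero-mean by construction, so intercepts are omitted.
resid_diet = diet_score - np.polyfit(health_conscious, diet_score, 1)[0] * health_conscious
resid_outcome = outcome - np.polyfit(health_conscious, outcome, 1)[0] * health_conscious
adjusted_slope = np.polyfit(resid_diet, resid_outcome, 1)[0]

print(f"naive slope    ~ {naive_slope:.2f}")    # about 1.2: badly overstated
print(f"adjusted slope ~ {adjusted_slope:.2f}") # about 0.2: close to the true effect
```

Real observational nutrition data is far messier: the confounders are many, unmeasured, and unknown in advance, which is exactly why raw correlations rarely settle these arguments.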
↑ comment by Lumifer · 2015-01-29T17:09:47.067Z · LW(p) · GW(p)
I disagree with your approach (basically, trust authority), but that's just me.
Replies from: dxu, None↑ comment by dxu · 2015-01-29T18:36:05.140Z · LW(p) · GW(p)
When you know next to nothing about the topic at hand and the only choice is to trust authority or to rely on your own, almost certainly flawed judgment, I'd go with authority.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T18:58:15.067Z · LW(p) · GW(p)
When you know next to nothing about the topic at hand and the only choice is to trust authority or to rely on your own, almost certainly flawed judgment, I'd go with authority.
When the topic is an important one, like health and nutrition, I'll go learn about the topic.
Replies from: michael_b, dxu↑ comment by michael_b · 2015-01-29T22:51:34.876Z · LW(p) · GW(p)
I'm skeptical this is a great strategy for topics in general.
Nutrition, for example, doesn't appear to be the kind of topic where you can just learn its axioms and build up an optimal human diet from first principles. It's far too complicated.
Instead you need substantial education, training, experience and access, as well as a community that can help you support and refine your ideas. You need to gather evidence, you need to learn how to determine the quality of the evidence you've gathered, and you need to propose reasonable stories that fit the evidence.
Since I haven't made health and nutrition my career most of these things will be hard or even impossible for me to come by. As such, my confidence in the quality of any amateur conclusions I come to must necessarily be low.
So, the most reasonable thing for me to do is trust authorities when it comes to nutrition.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-30T16:32:22.072Z · LW(p) · GW(p)
I'm skeptical this is a great strategy for topics in general.
And rightly so :-) This is an approach that should be reserved for important topics.
Instead you need substantial education, training, experience and access, as well as a community
I think you're setting the bar too high. What you describe will allow one to produce new research and that's not the goal here. All you need to be able to do is to pass a judgement on conflicting claims -- that's much easier than gathering evidence and proposing stories.
In nutrition, for example, a lot of claims are contested and not by crackpots. Highly qualified people strongly disagree about basic issues, for example, the effects of dietary saturated fat. I am saying that you should read the arguments of both sides and form your opinion about them -- not that you should apply to the NIH for a grant to do a definitive study.
Of course that means reading the actual papers, not dumbed down advice for hoi polloi.
↑ comment by dxu · 2015-01-29T19:50:43.147Z · LW(p) · GW(p)
By "learn", I assume you mean read existing literature on the topic. In the case of health and nutrition (and most other medical topics), high-quality literature is rather sparse, both because of frequently bad statistical analyses and the fact that practically no one releases their raw data--only the results. (Seriously, what's up with that?)
Replies from: Nornagest, Lumifer↑ comment by Nornagest · 2015-01-29T20:11:14.755Z · LW(p) · GW(p)
Given that the experts in the field are precisely those learning from and producing that same literature, the fact that the literature is generally low-quality doesn't make me more inclined to trust them. (Though, as bad as academic nutrition science is, conventional wisdom and pop nutrition science seem to be worse.)
It does make it exceptionally hard to gain a good understanding of the field yourself, though. Unlike Lumifer, I'd say the correct move, unless you are yourself a nutritionist or a fitness nerd or otherwise inclined to spend a large portion of your life on this, is to reserve judgment.
Replies from: dxu, Lumifer↑ comment by dxu · 2015-01-30T16:35:37.078Z · LW(p) · GW(p)
Given that the experts in the field are precisely those learning from and producing that same literature, the fact that the literature is generally low-quality doesn't make me more inclined to trust them.
In terms of statistics and data, yes, the papers they produce are fairly low-quality. In terms of domain-specific knowledge, however, I'd trust an expert over pretty much anyone else. That being said, I do agree with you here:
It does make it exceptionally hard to gain a good understanding of the field yourself, though. Unlike Lumifer, I'd say the correct move, unless you are yourself a nutritionist or a fitness nerd or otherwise inclined to spend a large portion of your life on this, is to reserve judgment.
Although I prefer trusting expert authority to making my own judgments on unfamiliar topics, gaining a good-enough understanding to figure out which experts to trust is still hard, especially with so many conflicting conclusions out there. This being the case, the strategy you propose--reserve judgment--is precisely what I do.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T20:15:09.120Z · LW(p) · GW(p)
is to reserve judgment
You can't -- you've got to eat each day :-/
Replies from: Nornagest↑ comment by Nornagest · 2015-01-29T20:21:09.496Z · LW(p) · GW(p)
Ah, the old "choosing not to choose is itself a choice" move. Never was too convinced by that.
You can reserve judgment on the theory while taking some default stance on the practical issue. Depending on where you're standing this might mean the standard diet for your culture (probably suboptimal, but arguably less suboptimal than whatever random permutations you might apply to it), or "common sense" (which I'm skeptical of in some ways, but it probably picks some low-hanging fruit), or imitating people or populations with empirically good results (the "Mediterranean diet" is a persistently popular target), or adopting a cautious stance toward dietary innovations from the last forty years or so (about when the obesity epidemic started taking off).
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T20:27:23.038Z · LW(p) · GW(p)
Never was too convinced by that.
It looks obviously true to me.
while taking some default stance on the practical issue
Your stance is nevertheless a choice, and it necessarily implies a particular theory of nutrition (even if that theory is not academically recognized and might be as simple as "eating whatever everyone else eats can't be that bad").
Replies from: Nornagest↑ comment by Nornagest · 2015-01-29T20:30:04.217Z · LW(p) · GW(p)
It's an option -- a point in a configuration space -- but not a random option. The default is, almost tautologically, a stable equilibrium, while in a sufficiently complicated system almost all possible choices may move you away from that equilibrium in ways you don't want.
Nutrition is a very complicated system. Of course, its fitness landscape might be friendlier than I'm giving it credit for here, but I don't have any particular reason to assume that it is.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T20:36:51.814Z · LW(p) · GW(p)
It's a choice, but not a random choice.
Well, of course. Where does the idea of a random choice even come from?
The default is, almost tautologically, a stable equilibrium
If by "default" you mean "whatever most people around me eat", then no, not necessarily. Food changes. Examples would be the introduction of white rice (hence, beriberi) or mercury-polluted fish.
There is also the issue of the proper metric. If you want to optimize for health and longevity, there is no particular reason to consider the "default" to be close to optimal.
Nutrition is a very complicated system.
I certainly agree.
Replies from: Nornagest↑ comment by Nornagest · 2015-01-29T20:39:31.649Z · LW(p) · GW(p)
Where does the idea of a random choice even come from?
If you don't have much good information about what the fitness landscape looks like -- for example, if the literature is opaque and often contradictory -- then there's going to be a lot of randomness in the effects of any choices you make. It's not random in the sense of a blind jump into the depths of the fitness landscape -- the very concept of what counts as "food", for example, screens off quite a bit -- but even if the steps are short, you don't know if you're going to be climbing a hill or descending into a valley. And in complex optimization problems that have seen a lot of iteration, most choices are usually bad.
You can, of course, iterate on empirical differences, and most people do, but the cycle time's long, the results are noisy, and a lot of people aren't very good at that sort of reflection in the first place.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T20:46:52.275Z · LW(p) · GW(p)
But it's not that the choice is random -- it's that the consequences of choices are rather uncertain.
its fitness landscape might be friendlier
Well, first it's well-bounded: there is both an upper bound on how much (in health and longevity) you can gain by manipulating your diet, and a clear lower bound (poisons tend to be obvious). Second, there is hope in untangling -- eventually -- all the underlying biochemistry so that we don't have to treat the body as a mostly-black box.
Another thing is that there is a LOT of individual (or group) variation, something that most nutritional research tends to ignore, that is, to treat as unwanted noise.
A major problem is that it's legally/politically/morally hard to experiment on humans, even with full consent.
↑ comment by Lumifer · 2015-01-29T20:10:44.902Z · LW(p) · GW(p)
By "learn", I assume you mean read existing literature on the topic
Also around the topic, not to mention that learning necessarily involves a fair amount of one's own thinking.
high-quality literature is rather sparse
I agree, which makes relying on authority (and, usually, on mass-media reinterpretations of authority) particularly suspect.
what's up with that?
I think the usual explanation is privacy and medical ethics, but my cynical mind readily suggests that it's much harder to critique a study if you can't see the data...
↑ comment by [deleted] · 2015-01-29T17:38:28.506Z · LW(p) · GW(p)
Sounds to me that you're trusting authority that just happens to be of a different sort.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T17:50:41.123Z · LW(p) · GW(p)
No, I do not. I actually read the papers and see if they make sense. One of my long-standing complaints is that in medical research no one releases the data -- it would be very useful to reanalyze it in a bit less brain-dead fashion.
Replies from: None↑ comment by [deleted] · 2015-01-29T17:56:36.948Z · LW(p) · GW(p)
Then why'd you recommend Minger's criticism? Because as far as I can tell it doesn't make sense.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T17:59:17.673Z · LW(p) · GW(p)
Makes a lot of sense to me. What is it that doesn't make sense to you?
Replies from: None↑ comment by [deleted] · 2015-01-29T18:02:44.125Z · LW(p) · GW(p)
Let's start with the Sturm und Drang over Tuoli, I suppose. Why aren't they an obvious outlier?
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T18:14:19.707Z · LW(p) · GW(p)
Um, it is.
To quote Minger
Using the data set with the flawed inclusion of Tuoli, Campbell cites a strong association between animal protein and lipid intake as a reason to implicate animal foods with breast cancer. Yet using the revised data set, animal foods do not contribute significantly more fat to total lipid intake than do plant oils. As a result, any association between breast cancer and dietary fat could be linked to either animal or plant-sourced foods, and there is no justification for indicting only animal products.
Also, to continue quoting Minger,
...meat was not the dietary feature noted in my discussion of Tuoli: dairy was. Both the three-day diet survey and the frequency questionnaire reveal high intakes of dairy for Tuoli citizens, with the questionnaire indicating milk products are consumed an average of 330.3 days per year, and closer to 350 in one township.[98] In addition, despite Campbell’s comment that the Tuoli migrate seasonally and consume more vegetables and fruit for part of the year, the China Study frequency questionnaire indicates Tuoli’s vegetable intake is only twice per year and fruit intake is less than once per year on average.[99]
If Campbell believes both the three-day diet survey and frequency questionnaire were in error, I must question why Tuoli county was not excluded entirely from the data set—especially given its pronounced influence on virtually all associations involving meat, dairy, and animal protein, many of which Campbell cited as verification for his animal foods-disease hypothesis.
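As a side note, the statistical mechanism being argued about -- a single extreme observation manufacturing an association -- is easy to see with synthetic numbers. A toy sketch (invented data, not the actual China Study numbers):

```python
# Toy sketch with synthetic numbers (not the actual China Study data):
# one extreme observation can create a "strong" correlation that
# disappears once that observation is excluded.
import numpy as np

rng = np.random.default_rng(1)

# 64 hypothetical counties with no real relationship between the two variables.
animal_protein = rng.normal(10, 2, size=64)
disease_rate = rng.normal(50, 5, size=64)

# One hypothetical outlying county, extreme on both axes.
x_with_outlier = np.append(animal_protein, 60)
y_with_outlier = np.append(disease_rate, 200)

print("r with outlier:   ", round(np.corrcoef(x_with_outlier, y_with_outlier)[0, 1], 2))
print("r without outlier:", round(np.corrcoef(animal_protein, disease_rate)[0, 1], 2))
```

Whether Tuoli actually plays that role in the China Study correlations is the substantive question under dispute; the sketch only shows why including or excluding one county can change the numbers so much.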
↑ comment by [deleted] · 2015-01-29T18:31:21.825Z · LW(p) · GW(p)
Yet elsewhere:
Why aren’t [the people who live in Tuoli] sick and diseased?
We have plenty of evidence showing hormone-pumped dairy, grain-fed meat, pasteurized and homogenized milk, processed lunch meats, and other monstrosities are bad for the human body. No debate there. But we do have a woeful lack of research on the effects of “clean” animal products—meat from wild or pastured animals fed good diets, milk that hasn’t been heat-zapped, antibiotic-free cheeses and yogurts, and so forth.
[...]
Is it possible the diseases we ascribe to animal products aren’t caused by animal products themselves, but by the chemicals, hormones, and treatment processes we expose them to? If the Tuoli are any indication, this may be the case. Hopefully future research will shed more light on the matter.
Or, you know, they're an insular minority with peculiar nutritional requirements.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T18:54:10.800Z · LW(p) · GW(p)
You said that Minger's criticism of TCS "doesn't make sense". Did you actually have anything specific in mind?
I also don't see much of a problem with the passages you quoted.
Replies from: None↑ comment by [deleted] · 2015-01-29T19:22:36.782Z · LW(p) · GW(p)
They contradict each other. Why isn't Tuoli an outlier?
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T19:27:27.304Z · LW(p) · GW(p)
They contradict each other.
You're quoting from the page which says right on top:
Important disclaimer: In light of new information, this post needs to be taken with a really whoppin’ huge grain of salt. It turns out Tuoli was “feasting” on the day the survey crew came for China Study I, so they were likely eating more calories, more wheat, more dairy, and so forth than they typically do the rest of the year. We can’t be completely sure what their normal diet did look like at the time, but the questionnaire data (which is supposedly more reliable than the diet survey data) still suggests they were eating a lot of animal products and very little in the way of fruits or vegetables.
At any rate, I recommend not quoting this post or citing it as “evidence” for anything simply because of the uncertainty surrounding the Tuoli data in the China Study.
You seem to be more interested in creating gotchas than in finding out what's actually happening in reality.
Why isn't Tuoli an outlier?
I am sorry, did you miss that comment?
But if you want to pretend Tuoli doesn't exist, sure, you can pretend Tuoli doesn't exist. What next?
Replies from: None↑ comment by [deleted] · 2015-01-29T19:41:03.986Z · LW(p) · GW(p)
You're quoting from the page which says right on top:
I was kind of waiting for you to point that out. Notice it's a non-disclaimer anyway:
but the questionnaire data (which is supposedly more reliable than the diet survey data) still suggests they were eating a lot of animal products and very little in the way of fruits or vegetables.
In any case, I'm not using it as evidence for or against a particular diet. I'm using it as evidence of her research process. About a quarter of her criticism of TCS is based around Tuoli being an outlier, so it's interesting that she also thought that their diet didn't increase their rate of disease significantly, even before she found out the data was bad. It's a clear sign of motivated cognition.
You seem to be more interested in creating gotchas than in finding out what's actually happening in reality.
In general, you don't seem very good at ascribing motives to me. Recall you were the one that asked for an example of what I found confusing.
I am sorry, did you miss that comment?
No, I didn't.
But if you want to pretend Tuoli doesn't exist, sure, you can pretend Tuoli doesn't exist. What next?
That's not even remotely close to what I said, and doesn't really have anything to do with the point at hand.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T19:46:47.297Z · LW(p) · GW(p)
About a quarter of her criticism of TCS is based around Tuoli being an outlier
I don't believe this is true -- see this.
You still haven't made any specific objections against Minger's criticism of TCS.
You did mention motivated cognition, did you not?
Replies from: None↑ comment by [deleted] · 2015-01-29T19:53:02.084Z · LW(p) · GW(p)
I don't believe this is true -- see this.
27 instances. Sections 1.2, 1.4, 1.8, 2.2, 3.1, and of course 3.3. "A quarter" is about correct, but let's say "a fifth" if you'd like.
You still haven't made any specific objection against Minger's criticism of TCS.
She depends too much on the Tuoli data -- which she supposedly doesn't trust anyway -- to make her arguments.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-29T20:13:47.459Z · LW(p) · GW(p)
27 instances
I am going to call bullshit on that. You did a word search for "Tuoli" in a web page and that turned up 27 hits. That does not mean that there are 27 instances of using the Tuoli data to argue against TCS.
Section 1.2, for example, explicitly points out that taking the Tuoli data out leaves some of Campbell's claims with much less support in the correlation numbers.
I think you're being dishonest. This conversation is over.
Replies from: None
comment by adam_shimi · 2014-12-23T17:36:10.372Z · LW(p) · GW(p)
Hello LessWrongers! After discovering the blog and MIRI research papers through a friend (Gyrodiot) a few weeks ago, I finally decided to register here, because I keep seeing fascinating discussions I want to be part of, and I would also like to share my ideas about AI and rationalism.
Currently, I am a first-year student at a French engineering school in computer science and applied mathematics. Before that, I spent two years in "Classes Préparatoires", an intensive program in mathematics and physics that prepares students for the engineering school entrance exams. Even if it was quite harsh (basically 30 hours of classes + a 5-hour exam + more homework than could be finished every week), it gave me the push to become a post-rigorous mathematics student (post-rigorous in Terence Tao's sense: http://terrytao.wordpress.com/career-advice/there%E2%80%99s-more-to-mathematics-than-rigour-and-proofs/).
As for my current work, I am collaborating with one of my teachers on an online handwriting OCR system based on a model of oscillatory handwriting he developed. We also explore the cognitive implications of the model, mostly Piaget's idea of assimilation, which can be linked to modern discoveries about mirror neurons. I also self-study quantum computation, all the more now that there is a high probability I will do a summer research internship in quantum information theory.
Of the topics I have seen here on LW and on the MIRI website, corrigibility is the one that interests me the most.
That's all folks. ;)
Replies from: Gyrodiot
comment by arbo · 2014-12-20T06:51:32.861Z · LW(p) · GW(p)
Hello. I’m Mark. I’m a 24-year-old software engineer in Michigan.
I found LessWrong a little over a year ago via HPMOR. I’m working through the books listed on MIRI’s Research Guide. I finished Bostrom’s Superintelligence earlier this week, and I’m currently working through the Sequences and Naive Set Theory. I’m not quite sure what I want to do after I complete the Research Guide; but AI is challenging and interesting, so I’m excited to learn more.
P.S. I’m a SuperLurker™. I find it very difficult to post in public forums. I only visualize the futures where future!Me looks back at his old posts and cringes. If you suffer similarly, I hope you will follow my lead and introduce yourself. Throw caution to the wind! Or, you know, just send me a private message (a simple “hey” will suffice) and maybe we can help each other.
Replies from: Neo, Friendly-HI, John_Maxwell_IV↑ comment by Friendly-HI · 2014-12-26T11:53:47.919Z · LW(p) · GW(p)
Be honest, do you really actually fear cringing when you re-read your stuff months or years from now? Sounds to me like an invented reason to mask a much more plausible fear: looking foolish in front of others by saying foolish things. Well, in case you do make a fool of yourself, you always have the option of admitting "back then I was foolish in saying that, and I have changed my mind because of X". In this community, being able to do that usually comes with a slight status gain rather than severe status punishment and ridicule, so no need to worry about that.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-12-26T11:03:47.876Z · LW(p) · GW(p)
I only visualize the futures where future!Me looks back at his old posts and cringes.
You say you anticipate cringing... is that a correct anticipation? Do you currently find yourself frequently beating yourself up for things you've said or written? If so, maybe that's the bug you want to fix first. Reinterpretation can be a good strategy; maybe try to frame your past post differently. For example, despite whatever factors you might find make a post of yours cringeworthy, it seems likely that at least one person found it valuable, interesting, or at least amusing.
Anyway, welcome!
comment by JohnGreer · 2015-07-06T06:24:45.175Z · LW(p) · GW(p)
Hello!
I’ve lived in Berkeley for about six years. My girlfriend is going to medical school so we’re going to be moving to Boca Raton, Florida (most likely) or Columbus, Ohio in less than a month. I’m sad to be leaving the Bay Area but thrilled to be with my girlfriend when she starts such an exciting chapter of her life. I’m also very fortunate that I can handle nearly all my business online.
I co-founded a startup devoted to making a web game with an old buddy of mine. This same guy introduced me to LW.
Critical thinking and debate have been a focus of mine since I was quite young, so LW fit right into my interests. I'm very interested in instrumental/practical applications of rationality. I've been lurking for many years and finally decided to make an account to get over my fear of online embarrassment, given my unfamiliarity with a lot of the lexicon and protocol on LW.
Some passions of mine are movies, seeking out novel experiences (examples are shooting an AK-47, judging a singing competition, and visiting Pixar), and martial arts.
I’m also interested in effective altruism and AI research but still have a lot of learning to do, especially in the latter.
Replies from: Benquo↑ comment by Benquo · 2015-07-06T13:47:45.603Z · LW(p) · GW(p)
Welcome!
You may want to check out some of AnnaSalamon's old posts for some things to try as far as applied rationality goes, if you haven't already.
Have you been / are you interested in connecting with the Bay Area Rationalist or EA community while you're still here?
Replies from: JohnGreer↑ comment by JohnGreer · 2015-07-07T02:05:44.976Z · LW(p) · GW(p)
Thanks for the tip! I've read some of her posts but will look into the ones I haven't.
We're going to be moving in about two weeks and are fairly busy before then, so we're probably not going to be able to. I regret not going to a Berkeley Meetup while I had more time.
comment by inferential · 2015-01-01T19:00:19.529Z · LW(p) · GW(p)
The person behind this account is not at all new to the Less Wrong community. He has read all of the sequences multiple times, as well as much of the output of many non-Eliezer figures associated with or influenced by LW, and has been around for more than half the time the site has existed. Suffice it to say he knows his stuff. He used to comment and then stopped for reasons which remain unclear.
The obvious question is, why the new account, especially since I'm not trying to hide who I was? I decline to answer.
Less Wrong is important to me. Reading the sequences caused in me a serious upgrade. LW inspired a lot of meetup groups, one of which I attend every week. It's not the group I wish I was attending, but it's better than the alternative: none. Things fall apart. Roko exploded. Vladimir_M vanished, Yvain seceded; many others of import including Eliezer have abandoned LW. They all have their reasons, some common and others not. There are forces, it seems, driving the best away, leaving behind a smattering of dunces.
I aim to turn the tide. Nate Soares didn't show up until 2013; Less Wrong is still at least theoretically a place that can attract good people. Less Wrong has been navel-gazing about its own demise for a long time, and the wails have gotten stronger while nothing else has. What is more, the widespread perception that "X is dead" is a self-fulfilling prophecy. But I think it can be done. I think I can lay down a gauntlet, for myself and others: the Less Wrong Rejuvenation Project. Why do I think it can be done? Wei Dai is still here. He is my benchmark. The day he goes off to greener pastures is the day I give up.
The name refers to inferential distance, something I want myself and my audiences to keep in mind.
comment by Philosophist · 2015-06-07T18:45:23.396Z · LW(p) · GW(p)
Hello LW World!
I have been reading the writings of Eliezer Yudkowsky for about 2 years now, ever since a friend of mine introduced me to HPMOR. It continues to blow my mind that there is an entire movement and genre dedicated to reason. It offers a depth of thought that I've always felt different from others for enjoying, and now I can happily say that there's a community for it.
I am currently an unemployed veteran and college dropout seeking to solve the financial problems which prevent me from currently completing my degree. I am halfway finished with an ultrasound tech school and I am also studying programming as a hobby. I'm proud of a lot of my work so far, from making the beginnings of an awesome game on Scratch to completing an advanced challenge on Hackerrank (technically it's incomplete, but it's only the timeout limit on large inputs that I have yet to find a solution for). I'm also learning web design skills on FreeCodeCamp where I have found very supportive mentors and hope to get a basic foot-in-the-door level of skills to gain employment.
What I REALLY wanted to do, but failed at due to financial hardship, is work in neuroscience research. I'm more interested in the cybernetic side of turning science fiction into real scientific discoveries, but AI research is not something I would turn away from, as I believe it and neuroscience have mutually beneficial applications. Fingers crossed, I can either accomplish my neuroscience goals sooner rather than later, or be lucky enough to survive to the point where aging is cured and widely distributed, giving me more than a lifetime to complete my goals.
The reason I'm posting today in particular is that I wanted to know whether Reason-, Cyberpunk-, and Transhumanist-themed poetry that I have created would have a place here. I'm thinking that I would like feedback from others who enjoy thinking critically about life. That said, the poetry I've made is an art form, and I would only expect feedback from rationalists to the extent that Reason is an art form. Perhaps any concern of that nature is really the result of a fallacious view of Reason that still clings to me from the "Hollywood Rationality" concept that Eliezer described.
Regardless, what I have created is intended to be thought provoking and entertaining for individuals who often think of the intricate concepts that are on LessWrong. Any feedback that would help me to make them more thought provoking and entertaining would be a great help to improve them. Any advice on if there is an acceptable space for such a thing as well as advice on where to begin is appreciated in advance.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-07-05T15:33:52.459Z · LW(p) · GW(p)
Welcome!
I don't think there's an official rule about poetry. Speaking as a person with over 9000 karma, my intuition is that it'd be well received if it has some novel ideas/perspective and is linked to from an open thread.
comment by Sable · 2015-04-24T19:34:06.243Z · LW(p) · GW(p)
Hello, my name is Daniel.
I've wanted to join the rationality community for a little while now, and I finally worked up the courage after a brief but informative discussion with Anna Salamon, CFAR's executive director (who was as kind as I was nervous).
I'm working on finishing up a B.S. in Electrical Engineering, and I plan on continuing to a doctorate in some branch of decision or control theory. I also study philosophy, fiction writing, and computer science.
Since becoming aware of rationality in general, and Eliezer Yudkowsky's way of making everything make sense, I've gotten pretty heavily into cognitive psychology and metacognition.
To be frank, I understand that I'm a rank amateur in the field of rationality in general, but I'm looking forward to trying to get better. So if you're downvoting me, or even upvoting me, explaining why in a comment or message would be extremely helpful, so I can take the time to reinforce my positive cognitive pathways, and prune my negative ones.
See you in the threads!
comment by lochchessmonster · 2015-03-23T17:44:43.926Z · LW(p) · GW(p)
Hello,
I am a month long lurker who finally decided to make an account.
I'm 24, and am living as a US expat in Beijing right now. I have a BA in Economics from a top 5 university, where the most important thing I learned was just how little that actually meant. I got pretty disillusioned with academia, and I've only been able to start enjoying intellectual pursuits again in the last year or so; hence, it is nice to find a non-university community where I might be able to discuss interesting ideas without all of the self-important swagger.
I would say that the other important way my econ background influenced me is in my rational decision making: I do not vote; I was involved in effective altruism (until I became an ethical nihilist); etc. I think I've experienced some significant emotional blunting from this, and have mixed feelings about it. Hopefully being in a community of similarly oriented people (and getting more information about typical outcomes) will help me work through whether this is something that I need to address or not.
I lean somewhat classical-liberal (or pro-market left of center, with significant room for government provisioning for market failure) at the moment, but lately I've fallen into a more libertarian heuristic, which I want to become more aware of and counteract, as I disagree with that political philosophy on several formal issues. Hopefully I can use the resources at LW to recalibrate on this issue in particular.
My interests are pretty broad:
- Public finance / policy
- Game theory / auction theory / voting theory (especially wrt collective decisionmaking / policy)
- Epistemology (especially regress / Munchausen Trilemma)
- Dynamics of social identity (especially the ethics of statistical discrimination)
- Aesthetics (especially w.r.t. visual art)
- Psychology and personal identity (especially antipsychiatry)
- Consciousness, continuity of experience, and personhood
- Literature (especially Latin American)
Additionally, I enjoy learning math, though I am not very talented at it (I was only a single Algebra/Galois Theory class away from a math degree). Recently, I've been going back through some old analysis / algebra / number theory books to give it another shot; I'm still bad at it, but it's nonetheless rewarding.
One of the things about LW that seems really awesome is the deep programming knowledge. I enjoyed the few programming classes I took, and look forward to learning more about its applications to modelling decision making.
Anyways, I look forward to engaging with you, and if anyone has anything they want to point me toward here, I'd love the tip.
Replies from: None
comment by [deleted] · 2015-03-02T23:28:44.657Z · LW(p) · GW(p)
Good [insert-time-of-day-here]! My name is Tighe, I'm 16 years old, and I found this site through one of my friends at school. I'm not the most intelligent person, but I am interested in becoming less wrong. I don't expect myself to compare very well to most people on this site, but hey, that's what the point of being an "aspiring" rationalist is, right? Some of my interests in life so far have been writing, programming, math, and science (though I'm not very good at the last two). I've been told that this site helps to improve one's thinking skills, ones that aren't offered in most high schools (or any high schools, really), and I think that could really help me improve in the aforementioned areas. Well, hello.
comment by AvivaLasVegas · 2015-01-28T20:06:57.163Z · LW(p) · GW(p)
I'm not new to the site, but new to actually posting. Long time reader, first time poster, etc. I am a somewhat-regular member of the Los Angeles Less Wrong meetup, and I'm excited to keep learning more about rationality in general and Bayesian probability in particular.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2015-01-28T20:10:16.074Z · LW(p) · GW(p)
Welcome from the depths of lurking! What made you decide to start posting?
(I'm curious partially because there seem to be a few people who lurk and go to meetups and I don't fully understand the psychology of that.)
Replies from: AvivaLasVegas↑ comment by AvivaLasVegas · 2015-01-28T20:51:48.548Z · LW(p) · GW(p)
Well, I'm actually helping to plan an event for this LA Meetup, and I can't post the Meetup Event Topic Thingy without having Karma, so that's basically what pushed me towards actually posting. Which is funny, because I've been a regular meetup attendee for almost a year at this point.
comment by avix215 · 2015-01-14T22:49:53.408Z · LW(p) · GW(p)
Hi everyone! I've been a lurker for a while now, this is my first real interaction. Found LessWrong through HPMOR (read the whole thing over a single weekend; read it again a month later).
I'm sixteen and have just graduated from a high school in India (I'm a US citizen, though). Currently applying to American universities, working through some online college courses and Gödel, Escher, Bach; teaching myself Python, writing a novel, and continuing to teach myself Japanese (5th language). Also partying shamelessly.
I'm very undecided about my future, but to generalize, I'm probably going to go into either the film industry or physics, while writing fiction on the side. I have no doubt LessWrong can help immensely in each of my pursuits, and I aim to finish reading all the sequences by the end of the year (currently halfway through How to Actually Change Your Mind).
I love this site. At times while reading the articles I have a feeling of obscure deja vu, almost outright indignation. Like someone has stolen MY personal insights, expanded them exhaustively, and posted them online. (Yes, I realize the actual research is decades old and not solely by EY.) I find my own thought patterns in these articles. Some just click instantly, and I understand every aspect. Others I have to reread a few times to really get. Anyone else know this feeling, or does everyone just understand it with ease?
Can't thank my lucky stars enough that a site like this actually exists: it's a veritable compendium for ascending to godhood.
comment by PhilipKolbo · 2014-12-18T16:28:28.583Z · LW(p) · GW(p)
Hello LessWrong community,
I came to this site after having read the Harper's Magazine article "Come With Us If You Want To Live" by LW member @swfrank (@vernvernvern and I have this in common!). I am 21 years old and am a percussionist living in Omaha, Nebraska.
The first rational thought I can recall occurred in Kearney, NE. I was about 8 years old, walking across a soccer pitch on my way home from school, singing a modern Christian worship song and looking into the sky. As I stared into space, I realized how meaningless my words were. I was alone and I sang to no one (time seemed to slow; it was a surreal experience). I began questioning the existence of a watchful god (a hard thing to do in my highly Christian family). After that I struggled to involve myself in worship. This was a cornerstone event for me, leading to a more rational way of life.
I am now a junior at University of Nebraska at Omaha working toward a percussion performance degree. My diet consists of about 60% Soylent. I look forward to the connections I will make on LessWrong.
I have compiled some individuals who have played a large role in my rationality and progress: Bjork (musician), Omar Rodriguez Lopez (of The Mars Volta), Stanley Kubrick, C.S. Lewis, Ralph Ellison, Friedrich Nietzsche, George Orwell, Ludwig Van Beethoven, György Ligeti (composer), David Lang (composer), Elon Musk, and Steven Schick (percussionist).
Philip Kolbo
Replies from: matt2000↑ comment by matt2000 · 2014-12-24T04:59:45.548Z · LW(p) · GW(p)
I can relate to having musicians in my list of intellectual inspirations. Greg Graffin of Bad Religion was certainly an influence in my developing aspirations to rationality.
Replies from: PhilipKolbo↑ comment by PhilipKolbo · 2014-12-25T14:35:43.117Z · LW(p) · GW(p)
Yeah, punk is an inspiration to me as well. You can see that with Omar.
comment by matt2000 · 2014-12-24T04:55:45.004Z · LW(p) · GW(p)
I'm Matt, 32, living in Los Angeles. I first read Less Wrong sometime in 2012, attended the CFAR Workshop in February 2014, and am finally now getting around to signing up for an account, because while I am not as wrong as I used to be, I'm still mostly wrong much of the time; I'm working on fixing that. Sometimes I make overly complicated jokes that misuse mathematical language, because I'm a programmer, not a mathematician. Sometimes I host rationalist rap battles, which in practice are a bit more like rationalist group hugs than the thing you saw in 8 Mile. I'm an atheist who will gladly debate educated theists. I like board games and short walks on the beach. I'm @matt2000 on twitter.
Replies from: adam_shimi↑ comment by adam_shimi · 2014-12-24T09:12:33.057Z · LW(p) · GW(p)
Welcome Matt. :) Can you explain a little more what you mean by rationalist rap battle? Seems fun.
comment by hargup · 2014-12-18T16:12:53.501Z · LW(p) · GW(p)
Hi, I'm Harsh Gupta, an undergraduate student studying Mathematics and Computing at IIT Kharagpur, India. I became interested in rationality when I came across the Wikipedia article on confirmation bias around 2 years ago. That was pretty intriguing, so I searched more and read Dan Ariely's book Predictably Irrational. Then I also read his other book The Upside of Irrationality, and now I'm reading HPMOR and Kahneman's Thinking, Fast and Slow. I also read The Art of Strategy around the same time as Ariely's book, and that was a life changer too. The basic background in game theory that I got from The Art of Strategy helped me learn to analyze complex real-life situations from a mathematical perspective. I came to know about Less Wrong from gwern.net, which was suggested by a friend who is learning functional programming. I want to get more involved with the community and I would like to contribute some articles in the future. BTW, is there any community to-do list?
comment by Tom_Allen · 2014-12-16T08:41:17.138Z · LW(p) · GW(p)
Hello all. My name's Tom and I'm a second-year undergraduate mathematics student in Adelaide, Australia. I rediscovered LessWrong a few months back after a conversation with friends about charitable donations where I referenced a post here about effective altruism. I had previously read only a few of the Sequences posts, having been directed here by Eliezer's fanfiction, but since signing up I've made my way through about 80% of the major sequences.
If anyone has any questions about my background or interests, please feel free to ask.
comment by David_Kristoffersson · 2015-07-16T22:14:54.816Z · LW(p) · GW(p)
Hello.
I'm currently attempting to read through the MIRI research guide in order to contribute to one of the open problems. Starting from Basics. I'm emulating many of Nate's techniques. I'll post reviews of material in the research guide at lesswrong as I work through it.
I'm mostly posting here now just to note this. I can be terse at times.
See you there.
comment by John_Mitchell · 2015-06-12T12:17:19.936Z · LW(p) · GW(p)
Hello people.
I am brand new to this site and really to the topic of rationality in general. A friend recommended HPMOR to me a few months ago and I loved it. I then read Cialdini's 'Influence' on recommendation from these forums, and I am now reading Rationality: from AI to Zombies.
My background is in science, having studied oceanography at university, graduating about ten years ago. I am currently thinking about training as a science teacher. I look forward to becoming better acquainted with this topic, and being involved in the discussions.
Replies from: Vaniver, Gram_Stone, None, None↑ comment by Gram_Stone · 2015-06-12T15:26:38.817Z · LW(p) · GW(p)
Welcome!
↑ comment by [deleted] · 2015-06-12T13:52:56.119Z · LW(p) · GW(p)
Lectio brevior
Learn more contemporary philosophy of science. Perhaps start with the Vienna Circle, graduate to Quine and go from there, delving into Hume and such when you see allusions to them.
Or, learn statistical machine learning.
You'll save time, have a reputable skill set and have a relatively normal vocabulary by the end of it.
edit 1: 2 downvotes and no comments? Have I hurt someone's feelings?
comment by Nanashi · 2015-02-10T15:19:50.828Z · LW(p) · GW(p)
Hi all,
I've been following EY and LW for about four years now. I'm fairly new to posting though. I started out as a "republican" in elementary school, then turned into a "libertarian" in high school because I didn't care for many conservative positions. Then an "objectivist" in college, because I didn't care for the fact that libertarianism only extended to politics and not ethics. Then I became frustrated with the Objectivist community and their inability to adapt to the real world, so I became an "all the people I've met who self-identify as one of these labels have turned out to be really obnoxious, so I really don't want to convolute discussions by using a label"-ist. It wasn't until recently that I discovered Rationalism, and so far it has been the most accurate label and also the most complete system.
My end-game is to end death (and if entropically possible, reverse it). Which is a pretty big practical problem. As such, I don't have a ton of interest in many of the ethical questions, because more often than not my answer is: "If we can end or reverse death, it doesn't matter." Short-term, my goal is to become rich enough to retire fairly early and have a significant amount of money that can be used to fund various worthy causes and allow me to continue this path full-time. I'm probably 75% of the way there. When I'm not trying to build wealth, most of my free time is spent tinkering with various AI algorithms, exploring number theory, or building prototypes of various gadgets (my latest one is a hard drive that stores data using energy rather than matter. Never mind the fact that it can only store about 16 bytes).
comment by David_Bolin · 2015-07-17T09:20:47.460Z · LW(p) · GW(p)
I have been a Less Wrong user with an anonymous account since the Overcoming Bias days. I decided to create this new account using my real name.
comment by [deleted] · 2015-05-20T17:12:17.517Z · LW(p) · GW(p)
Hey everyone!
I'm a long-time lurker of this site, but I haven't posted anything before. I've read all the sequences twice over the past few years, along with almost all non-sequence posts. The list of all posts was really not in an obvious location, but I eventually managed to find it!
So I'm new to the idea of actually communicating with people over the internet; I've never actually been a member of any forum before. Though I have a Reddit account, I've only made about ten posts in the year that I've been there. It's really weird; I often find myself thinking I have a response to something I read, then thinking "too bad I can't communicate with them!", completely forgetting that no, wait, I have an account expressly for that purpose.
I've decided that this pseudo-voyeurism of online communities has gone on long enough and decided to join. I don't know if I'll have anything to contribute, as I'm pretty critical of the value of my own ideas, to the point that I once tried to start a blog but decided that everything I could ever want to say has already been said, and I deleted the blog after one post. Maybe I need to impose a comment quota on myself?
In any case, I'm a physics grad student who mostly works in biophysics. I'm also interested in pure mathematics, philosophy, and computer science / artificial intelligence, though I procrastinate too much and don't really know more than the average CS minor. I plan on changing that at some point (he said, ironically).
comment by wildboarcharlie · 2015-04-05T21:56:55.048Z · LW(p) · GW(p)
We'd love to know who you are:
- 19 y.o. at Berkeley
- Lived in Shanghai, London, CT, and CA
What you're doing:
- Dealing with classes
- Working jobs in design and CS on the side
- Thinking
What you value:
- Design that follows Dieter Rams' 10 Principles
- Talking with thoughtful people
- Big Hairy Audacious Ideas
- Good whisky
How you came to identify as an aspiring rationalist:
- I get bored very easily so if Netflix and Hulu aren't available I occupy myself with thought experiments
- I like spotting logical holes in my beliefs and values (I argue against myself sometimes)
How you found us:
- An HN post
Very new here. Hopefully I can learn a lot from all of you.
Replies from: gjm
comment by [deleted] · 2015-02-10T14:39:14.132Z · LW(p) · GW(p)
Hey everyone! I'm a longtime lurker but I've never gotten around to making an account before now. I think my introduction to this site was actually someone linking to the Baby-Eating Aliens story a few years ago, which I guess isn't a common way to find this site. I've since read all of the sequences twice, and most of the other posts. Recent (unfounded, I hope) discussion about the site dying has made me finally get an account.
I'm a physics PhD student working in biophysics and computer simulations, and I also read philosophy and psychology in my free time. Hopefully I'll have some interesting things to contribute; maybe a few posts or mini-sequences about just how useful learning a programming language is to your ability to think and plan, or about the gulf between the scientific methods employed by physics versus biology. Or maybe some clarifications on the interpretations of quantum mechanics. Hopefully there's something I can say that hasn't been said already and much better by someone else, even if it's just links to interesting articles I find as I scour the net.
In any case, Less Wrong has been insanely useful to me over the past few years. Reading it is how I was introduced to Anki, sleep hygiene, methods of avoiding procrastination, and all sorts of useful information I have successfully employed in my daily life.
comment by theWRITER · 2015-02-01T20:27:14.400Z · LW(p) · GW(p)
New to this site... Have studied very little about logic and philosophy starting with some big famous papers that talk about how we know nothing for certain (thanks, Descartes), going through whether All Ravens are Black, studying the Perfect Island argument, learning about Famine, Affluence, and Morality, and ending somewhere along the lines of whether justified true belief is knowledge. That is to say, I'm not that educated on logic or rationality, but entertaining ideas is a great hobby of mine.
I came to Less Wrong because I found it through Harry Potter MOR (I haven't read HPMOR, or HP for that matter, but I find both interesting nonetheless, and I just got really excited when I found that a site like this existed).
My beliefs: I am a theist, and I do not affiliate with a religion or political party. Of course, that is to say, the mark of an educated mind is to be able to entertain ideas without fully accepting them. :) I also like to assume that the majority of the population is evil and has ulterior motives, but that's just me...
I'm a high school student who's just looking for something to write about and something to learn about. Just a new perspective altogether.
Nice to be here.
↑ comment by ilzolende · 2015-02-02T02:33:55.539Z · LW(p) · GW(p)
There seem to be a lot of other high school students on this site lately. If you like this stuff, you may also like the International Baccalaureate class Theory of Knowledge, which you can often take as an elective even if you're not an IB student.
Kind of curious about your theism, don't feel required to answer: A lot of nonreligious people who believe in a god are deists or pantheists. Are you either of those? If not, would you be willing to give more detail about your beliefs?
Also, I'm kind of starting to wonder if some people don't really like classifying themselves into groups. Is the reason you don't affiliate with a political party because you want one that better matches your positions on policy, or because you wouldn't associate with one even if you agreed with them on all policy proposals?
Most people define "evil" as "wants evil things", not "has evil revealed preferences". If you're looking at social behavior, we all have ulterior motives (I want to talk about things regardless of how annoyed a listener is, I want a strong support structure so that if something goes wrong I can get help, I want people to entertain me), but the actions those motives lead to are pretty low on the scale of bad stuff, somewhere close to EY's dust speck.
Replies from: Anders_H, Lumifer, theWRITER↑ comment by Anders_H · 2015-02-02T18:27:02.421Z · LW(p) · GW(p)
There seems to be a lot of other high school students on this site lately. If you like this stuff, you may also like the International Baccalaureate class Theory of Knowledge, which you can often take as an elective even if you're not an IB student.
As a 2001 IB Diploma Graduate, I have to disagree very strongly with this advice (unless the curriculum for the Theory of Knowledge course has changed substantially over the last 15 years).
I remember taking this course and being immensely frustrated by how almost every discussion was obviously just disagreement about semantics. This completely killed my interest in epistemology and philosophy; it was only when I read the "Human's Guide to Words" sequence several years later that I realized there were people who were thinking seriously about these issues without getting into pointless discussions about whether items are rubes or bleggs.
Courses in mainstream philosophy that get stuck on confusion about the meaning of words have the effect of turning rigorous thinkers away from thinking about philosophical questions. As for myself, if it hadn't been for reading Overcoming Bias years later, the IB course on Theory of Knowledge could have permanently killed my interest in epistemology.
Replies from: ilzolende↑ comment by ilzolende · 2015-02-06T06:58:52.807Z · LW(p) · GW(p)
It's been better than that so far (first few weeks). We haven't argued much over meanings of things yet.
The one disappointment is that I get really defensive every time we discuss whether doing whatever empathy tells you to do is moral, because that's half of the argument that says autistics are evil mass murderers (not actually the position of anyone in the class), and I get mildly annoyed when people mischaracterize utilitarianism or have clearly never heard of it before. (The situation in which all the available options are rule-violating and you choose the utility-maximizing one is different from the situation in which all the high-utility options are rule-violating, and you violate the rules and then choose a low-utility action.)
↑ comment by Lumifer · 2015-02-02T18:24:43.344Z · LW(p) · GW(p)
I'm kind of starting to wonder if some people don't really like classifying themselves into groups
I don't like classifying myself into groups. You try to crawl into a pigeonhole and you get scrapes and bruises, and sometimes things get torn off...
↑ comment by theWRITER · 2015-02-02T04:58:59.698Z · LW(p) · GW(p)
See, as far as my beliefs go, I have a strong religious background... Catholic elementary and middle school (I go to a non-sectarian, public high school now), Hindu dad, Protestant (Lutheran) mom... I mean, I generally end up changing my mind every year or so, but right now I believe that God exists as the Universe working within itself... and that as each of us lives, we each experience God... I don't know, I can't seem to get my head wrapped around the idea of a nonexistent god because of my strong religious background. Not very "rational", I guess, but that's just me personally, and there's really no should or shouldn't as far as faith goes, so I've just been rolling with it. So I've sort of just been changing my perspective based on what I learn and hear about the world.
I don't know if that really affiliates with deism or pantheism, really, but if what I explained above affiliates with one of them, would you (or anyone) explain how?
And as far as political parties go, there was this time when I tried to identify myself as Republican (though I really would be more of a Conservative Democrat) because I was tired of saying "No affiliation." It also kind of seemed like a fun little experiment, because then I would be going against pretty much everyone else (most of the people I know tend to be Democrats). I couldn't really hold out that long because, I don't know, being affiliated with the Republicans (or the Democrats, for that matter) makes people regard you as some political freak and not merely a person who just agrees with one party more than the other. Another thing: when I found myself affiliating with the Republicans, I found that I began to care more about which party supports which position, and I feel like that's something that just shouldn't matter.
In the end, I'm also somewhat ignorant and not very confident about my positions just yet either.
And as far as ulterior motives go, saying that I don't trust people could be seen as my ulterior motive for not having to be generous and charitable (it's a pretty lame excuse not to empathize with charities sometimes).
Replies from: ilzolende↑ comment by ilzolende · 2015-02-02T05:44:20.198Z · LW(p) · GW(p)
"Pantheism is the belief that the universe (or nature as the totality of everything) is identical with divinity, or that everything composes an all-encompassing, immanent God. Pantheists thus do not believe in a distinct personal or anthropomorphic god" (Wikipedia).
That matches to my interpretation of your stated beliefs.
I believe that God exists as the Universe working within itself... and that as each of us live, we each experience God.
Most of the atheism stuff on this site has more to do with a god that is a discrete being with supernatural capabilities than the thing you describe. However, if the main reason that you're not an atheist is that you have trouble picturing a godless universe, and you change beliefs based on what you learn and hear about the world (good work, by the way), chances are good that you'll end up being an atheist if you spend enough time on this site. ;)
If you actually want to clarify your beliefs, it could help to imagine some different worlds and see whether they count as having God in them or not, in order to consider what constitutes the absence of God. If there's no scenario that counts as God not existing, then I'm not sure what your belief that "God exists" is supposed to represent, and what information about the world someone could derive from that belief, given that it was true.
Thanks so much for the data about party affiliation!
Also, if you count subconscious desires to act in one's own interest as "ulterior motives", you may like what Robin Hanson on Overcoming Bias has to say about signaling.
↑ comment by LawrenceC (LawChan) · 2015-02-01T23:05:25.924Z · LW(p) · GW(p)
Welcome! I just want to comment on the "everyone is evil" idea - "Never attribute to malice that which is adequately explained by stupidity." Or broken incentive systems. Or something in that vein. :p
comment by [deleted] · 2015-06-30T02:28:54.974Z · LW(p) · GW(p)
A Challenger Has Arrived! Hello, yes, I'd like to announce that I am successfully existing for the first time in forever. I've been a lurker for quite some time, and have finished Eliezer's book. As I've stepped up my studies and plan to continue doing so, I've decided that scouting for a party to join would be wise.
Right now I'm finalizing my grasp of Rationality: From A.I. to Zombies, and organizing some notes I have on my personal struggle with willpower depletion. I would really appreciate it if anyone knows of any site-external sources I could devour in service of these goals.
From this basic grasp of rationality technique I will be departing to MIRI's research guide, so if you're currently on a quest to join the best, I certainly could use some companions in case I stumble.
Thanks, PhoenixComplex7
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-07-05T15:21:27.765Z · LW(p) · GW(p)
Welcome! Re: willpower stuff, I found this guy's writing very helpful several years ago. You can get his free book by putting your email in at the bottom of this page. (More specifics on the willpower issues you are facing might allow me to give more targeted advice.)
comment by zanglebert · 2015-06-27T15:17:12.216Z · LW(p) · GW(p)
Introduction comment, as requested.
I've been coming back to this site over and over again, for one or two years now I would say, for any number of topics, and today it dawned on me that there's something great about this site, the community / comments, and material, and that - maybe - I would like to become a part of it.
One email confirmation later, and the goal is achieved in its entirety.
Right, guys?
sigh
EDIT: One minor technical question... the comment system seems to be more or less a straight port from reddit, correct? But, unlike reddit, comment score starts at 0, it seems. Or did my other comment immediately receive a negative vote, seconds after going live?
Replies from: None
comment by 11kilobytes · 2015-06-03T08:27:54.169Z · LW(p) · GW(p)
Hello everyone.
My name is Kabelo Moiloa, and I graduated from the Anglo-American School of Moscow three weeks ago. My deep interests are math, computer science, and physics; in fact, I might consider doing a series of posts here on Homotopy Type Theory, since I've been going through the HoTT Book. I first came to this website likely four years ago, so I don't remember well how it was. As I recall, I came here soon after I deconverted from Catholicism, and I have found the discussions and content here fascinating ever since. For example, although I had already rejected theistic morality before reading the articles here, Fake Explanations allowed me to explain why: the idea that morality is "intrinsic to the nature of God" is no more explanatory than "my confusion about this metal plate is explained by the phrase heat conduction." Additionally, the emphasis here on beating akrasia and on achievement led me to pursue commitment devices, productivity systems, etc., which have improved my ability to achieve my goals, although unfortunately I only pursued these late in my senior year of high school. I was also exposed to Cognito Mentoring, which was quite useful.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2015-06-03T08:32:04.594Z · LW(p) · GW(p)
I was also exposed to Cognito Mentoring, which was quite useful.
I remember you, glad to hear it :-).
comment by J_Smart · 2015-05-20T02:21:15.860Z · LW(p) · GW(p)
Hi all,
I'm a recently graduated aerospace engineer. First came upon LW via HPMOR a couple years ago, been through the Sequences once since then, currently going through Rationality: A to Z mostly as a refresher.
Gravitated toward aerospace as a sort of proto existential-risk-mitigation effort, but speaking with Nick Beckstead via 80,000 Hours and comparing the potential of various fields to mitigate x-risk within the next ~100 years led me to discount space development relative to other fields, so I'm currently more open to other avenues.
Very interested in learning more computer science, and applied mathematics more generally, but part of what makes me strongly prefer LW over other communities interested in the same is the strong focus on effective, economical implementation of ideas.
comment by theowl · 2015-05-09T03:27:28.077Z · LW(p) · GW(p)
Hi All, I live at the LW Boston house, the Citadel. My undergrad and grad degrees were in Biology, and I am switching into programming. I am interested in psychology and cognitive biases. I value self-improvement and continuous learning. I recently started blogging at https://evolvingwithtechnology.wordpress.com.
comment by ladyastralis · 2015-03-06T08:57:38.972Z · LW(p) · GW(p)
Hello everyone!
I just registered and I don't quite know how this works, but the HPMOR Wrap Party Organizers Handbook said to post here, and so here I am.
Venue: Griffith Observatory front lawn
2800 E Observatory Rd, Los Angeles, CA 90027
Date/Time: March 14, 2015: 6:00pm
Cost: Free access to the complex, planetarium shows are $7
Facebook event page: https://www.facebook.com/events/1585754024996915/
Contact email: ladyastralis at gmail youknowtherest
Please bring: A (picnic) blanket, some snacks/food, some way to read HPMOR that has its own light source (I called the observatory -- they turn off the lights pretty early), and a thermos of hot cocoa. Don't forget a coat!
Notes: The final planetarium show is at 8:45pm. A fitting tribute.
The complex closes at 10:00pm.
I will be wearing my Ravenclaw scarf.
Looking forward to finally meeting other HPMOR fans!
Amanda
Replies from: Gondolinian↑ comment by Gondolinian · 2015-03-06T12:04:00.990Z · LW(p) · GW(p)
Welcome, Amanda!
You might want to post your event info as a comment here so it gets attention from the wrap party coordinator. Or you could send it to the coordinator as a private message.
Replies from: ladyastralis↑ comment by ladyastralis · 2015-03-09T04:54:49.655Z · LW(p) · GW(p)
Thanks Gondolinian! I took your advice. Also, Oliver is definitely aware of this party.
comment by [deleted] · 2015-02-06T12:25:07.214Z · LW(p) · GW(p)
Dear All (or whatever is the appropriate way to address the community here),
Reading Slate Star Codex kindled my interest in this community. I do not (yet) consider myself a Rationalist, largely because I don't put a disproportionately high value on the truth value of statements as opposed to their other uses, but I might be something of a fellow traveller, because I think we have one thing in common: curiosity and the desire to investigate and analyze everything.
About me: not actually Dutch (although European, never been to the USA), my nickname is a bit of an in-joke I cannot explain without compromising my privacy. ESL, but hopefully fluent enough.
Things I would like to discuss and please guide me to the right places for this:
1) Why do you place such a high value on the truth value of statements as opposed to their other uses? For example, when you are grieving for a loved one, wouldn't you rather hear some comforting, soothing half-truths?
2) Same, with a focus on religion. Why do you care so much about whether they are true, as opposed to caring about whether they are socially useful or harmful, for a huge variety of purposes and optimization goals?
2/B) Shouldn't a species with a generally Low Sanity Waterline rather construct something along the lines of a least harmful / most useful Designer Religion (parallel: designer drugs), as opposed to trying to overcome it entirely? What would be the ideal features, goals, and deliverables of a proper Designer Religion?
3) How can we approach the problem of ego-centrism / narcissism rationally? It is NOT the same problem as selfishness or egoism; it is rather the problem of a disproportionate focus on, and attention to, the self, which can be entirely coupled with unselfish altruism, for example giving to charity but focusing not on the recipient but on your own virtue. This is a problem, and I think a growing one. I think in politics narcissism or ego-centrism has traditionally been a problem of the Left, and the most intelligent conservative and religious writers (Chesterton, Burke, Oakeshott, Lewis etc.) can be seen as anti-narcissists, but they were not systematic, not principled enough - and ignored narcissism on their own side, of course. This deserves a rational analysis, but I don't even know where to begin! Is there something like a narcissism test, for example?
4) Value judgements and personal choices. Is Future You always right? You face the choice between going to the gym to lose weight or staying in comfortably and reading. Your short-term goals conflict with your long-term ones. Your time preference conflicts with your other preferences. Current You would feel better staying in; Future You prefers not to be overweight. Generally it is said that wise people, people with self-control and whatnot, respected people, choose the preferences of Future You. But if you keep pleasing Future You, you will very literally never be happy. And if you keep pleasing Current You, you end up an unhealthy, addicted trainwreck. What is the rational strategy?
5) Testosterone and masculinity. I used to be the typical intellectual "gamma rabbit" man who dislikes it; see Carl Sagan on testosterone poisoning. I used to be influenced by Redpillers toward the opposite, then I realized they are, how to put it, not the kind of people I want to take my advice from. Vox Day does a "great job" of inadvertently convincing people like me to not want to have ANYTHING to do with people like them. Now I stand confused in the middle. Right now I try to play both sides of the game: be a good husband and dad at home and a fierce fighter in the boxing gym (the keyword is "try", as in, fiercely trying not to collapse from exhaustion during sandbag work). I don't know if anyone has tried to analyze this rationally, what is best, etc.
6) Discuss Jack Donovan. Dude be crazy. Also intelligent and writing well-researched stuff. Also, he is evil. What's not to like?
7) Thomas Aquinas. Theist or not, he was a genius. Even if you see theology as a form of fantasy fiction, he was leaps and bounds the best, most structured, most logical fantasy writer. You want superhuman machine intelligence? It will probably have to pass through the phases of very high human intelligence. One phase of your AI will be "AIquinas".
8) Pet topic: how to un-fuckup Eastern Europe? I intend to live there, so I am quite motivated. Example: how do you convince people that thinking in categories of players and suckers is not such a good idea, or that cooperation is a good one? Is there such a thing as escaping the corruption spiral?
Replies from: Richard_Kennaway, IlyaShpitser, ChristianKl, Richard_Kennaway↑ comment by Richard_Kennaway · 2015-02-06T14:56:33.034Z · LW(p) · GW(p)
And while I'm thinking about Aquinas, I remember I once wrote this pastiche of the method:
Whether the composition of the Summa Theologiae was an act of bizarre monomania divorced from reality?
Objection 1: The Angelic Doctor was learned in all of the theology and scripture that preceded him, and drew it into a single coherent work that has not been superseded. Therefore, this was a valuable and mighty deed, and not an act of bizarre monomania divorced from reality.
Objection 2: The Church has blessed his work and canonised its author. Therefore, etc.
On the contrary, It is written that the author himself, after seven years' labour, cast his work aside, saying that it was of straw, and did not pick up his pen again before he died soon after.
I answer that, It was an act of bizarre monomania divorced from reality. For it is written that there is only One Holy Book, the manuscript of nature, the only scripture which can enlighten the reader. And the Summa makes no reference to anything but the writings and philosophical speculations of the past. Therefore, it fails to read of that Book which alone can enlighten the reader.
Furthermore, the form in which the Summa is written, listing for each point of doctrine objections, contrary objection, verdict, and refutation of the opposing objections, lends itself to argument in favour of any view whatever; in contrast to the method of logic and experiment, which does not lend itself to argument in favour of any view whatever, but only (save for our fallible natures), in favour of that which is true and can be tested. Therefore the Summa proves no point of doctrine, but rather provides only a form of catechism to be recited in favour of the official doctrine.
Reply to Objection 1. The writings of the past are valuable as a source of truth, only in so far as they ultimately rest on observation of nature. Neither theology nor scripture rest upon observation of nature.
Reply to Objection 2. Those who themselves value a work, do not by that act prove the value of that work.
↑ comment by IlyaShpitser · 2015-02-06T12:52:36.753Z · LW(p) · GW(p)
how to un-fuckup Eastern Europe?
This is a great question, I think about this a lot too. My intuitions are: a bit of reaction, e.g. getting in touch with the glorious past. This might work w/ e.g. Poland/Lithuania, may work even on Russia, if Russia remembers how the Novgorod republic worked. But Russia is a hard nut to crack.
But yes once there is a society-wide defection norm, it is hard to get out of.
Replies from: Lumifer, hg00↑ comment by Lumifer · 2015-02-06T15:30:09.838Z · LW(p) · GW(p)
a bit of reaction, e.g. getting in touch with the glorious past
Isn't that what Putin is doing? I am not sure this is a great idea. The past glories tend to be associated with nationalistic wars.
Another issue is what would unfucking entail -- turning East Europeans into Scandinavians? National cultural characteristics tend to be pretty persistent :-/ Otherwise, the canonical answer seems to be a long period of civil society, rule of law, etc. I am not holding my breath.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-02-06T18:45:22.715Z · LW(p) · GW(p)
Russia is a super interesting special case. An interesting alternative history to ponder, re: Russia, is what would have happened had Novgorod predominated and not Moskva. Novgorod was sort of "the Lowlands of the East" in terms of the way they did things. Moskva was quite culturally nasty, and they got ahead by being basically the tax collectors for the Mongols.
↑ comment by hg00 · 2015-02-23T08:43:05.790Z · LW(p) · GW(p)
But yes once there is a society-wide defection norm, it is hard to get out of.
One solution to this is to develop, through force if necessary, a small group of people where cooperation is enforced, then expand that group. For example, anarchy advances to despotism when a single powerful despot dominates and prevents anyone but him from using force. City-states advance to empire when a single city (e.g. Rome) conquers them and forces cooperation within its borders (Pax Romana). The analogy might be for a rich, powerful Russian with a clean reputation to make lots of friends who also have a clean reputation and go found a city somewhere in unincorporated Russian land with an able, honest police force and strongly enforced cooperation norms. Of course, in this age you win with industry, so maybe you'd also want lots of smart people starting software companies.
(Or why start it on Russian land, even? Russia is one of the coldest places on Earth, right? Is just moving everyone who doesn't like corruption out of Russia a viable solution?)
Are there anonymous online forums where Russians can discuss corruption?
Replies from: Lumifer, IlyaShpitser↑ comment by Lumifer · 2015-02-23T18:28:09.499Z · LW(p) · GW(p)
One solution to this is to develop, through force if necessary, a small group of people where cooperation is enforced, then expand that group.
Are you referring to the collectivization of agriculture in Russia? X-D
a rich, powerful Russian with a clean reputation
Ain't no such animal.
Are there anonymous online forums where Russians can discuss corruption?
Anonymity is on the speaker's end, not on the forum's end. But you might be interested in Alexei Navalny, who is politically active on an anti-corruption platform.
↑ comment by IlyaShpitser · 2015-02-23T10:35:00.920Z · LW(p) · GW(p)
Is just moving everyone who doesn't like corruption out of Russia a viable solution?
It is, and is in fact what happened once the Iron Curtain fell. (This is an oversimplification, obviously).
↑ comment by ChristianKl · 2015-02-06T16:03:40.226Z · LW(p) · GW(p)
For example, when you are grieving for a loved one, wouldn't you rather hear some comforting, soothing half-truths?
I can handle my feelings on my own, and I don't need someone to lie to me to comfort me. Fully accepting reality allows me to process emotions much better.
2/B) Shouldn't a species with a generally Low Sanity Waterline rather construct something along the lines of a least harmful / most useful Designer Religion (parallel: designer drugs), as opposed to trying to overcome it entirely? What would be the ideal features, goals, and deliverables of a proper Designer Religion?
Being conscious about ideology is useful, but I don't think it's very useful to think in terms of religion. For Muslims, rules about how inheritance works are part of their religion. For Christians, that's not true.
Effective Altruism does fulfill some of the social functions of religion. It doesn't need a God to do so, or forbid its followers to believe in Gods. If you want spiritual experiences, there are various practices that don't require any decision to believe in Gods and that might even be better at providing spiritual experiences than Christian religion.
This deserves a rational analysis but I don't even know where to begin! Is there something like a narcissism test for example?
Of course. Academic psychology attempts to measure a variety of traits.
↑ comment by Richard_Kennaway · 2015-02-06T14:51:03.892Z · LW(p) · GW(p)
On the subject of AIquinas, there's a story: The Quest for St. Aquin.
comment by kaler · 2015-02-05T09:28:20.574Z · LW(p) · GW(p)
Replies from: gjm, IlyaShpitser, CCC↑ comment by gjm · 2015-02-05T12:59:48.132Z · LW(p) · GW(p)
I am concerned that I am not smart enough
No one is smart enough.
But if you mean, specifically, smart enough to
aspire for rationality
then I think the question is kinda backwards. "Am I too stupid to try to improve my thinking?" -- it's like "am I too sick to try to improve my health?" or "am I too weak to try to improve my strength?" or "am I too poor to try to get more money?".
Now, no doubt all those things are possible. If you really can't reason at all, maybe you'd be wasting your time trying to reason better. And there are such things as hospices, and maybe some people are so far in debt that nothing they do will get them out of poverty.
But those are unusual situations, and someone who is headed for a good result in a challenging subject at a good university is absolutely not in that sort of situation, and if the stuff on Less Wrong is too hard for you to understand the fault is probably in the material, not in you.
I find it difficult to multiply two 2-digit numbers in my head
A fine example of the kind of "easy" task human brains (even good ones) are shockingly bad at. I just attempted a randomly-chosen 2-digit multiplication in my head. I got the wrong answer. Am I just not very intelligent? Well, I represented the UK at two International Mathematical Olympiads, have a PhD in mathematics from the University of Cambridge, and have been gainfully employed as a mathematician in academia and industry for most of my career. So far as I can tell from online testing, my IQ is distinctly higher than the (already quite impressive) Less Wrong average. It is OK not to be very good at mental arithmetic.
(Having said which: If there were something important riding on it, I'd be more careful and I'm pretty sure I could do it reliably. I did a few more to check this and it looks like it's true. So I may well in fact be better at multiplying 2-digit numbers than you are. But the point is: this is not something you should expect to be easy, even if it seems like it should be. And the other point is: Even if you are, in some possibly-useful sense, less intelligent than you would like to be, that is not reason not to aspire to rationality. And the other other point is: It's clear that your intelligence is, at the very least, perfectly respectable.)
↑ comment by IlyaShpitser · 2015-02-05T10:19:51.547Z · LW(p) · GW(p)
For folks who post here, morale and akrasia are usually much bigger problems than brain hardware.
↑ comment by CCC · 2015-02-05T10:10:41.125Z · LW(p) · GW(p)
Should I aspire for rationality or am I too stupid?
...
...consistently do relatively well in Physics, Chemistry, Engineering and Programming modules.
You are not too stupid.
I'm in a double degree in Chemical Engineering and Business and on track to receive First Class Honours in both.
You are really, really, seriously, not too stupid.
Yet, I find it difficult to multiply two 2-digit numbers in my head.
That's something that you might want to work on, but it's not a general intelligence failure. There are some tricks that can be learned (or discovered) and employed to multiply by specific numbers more quickly; alternatively, practice will help to speed up your mental multiplication.
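One such trick, as a rough illustration: split one factor into tens and units and add the partial products. A quick Python check of a made-up example (47 × 62 is arbitrary):

```python
# Mental-arithmetic trick: split one factor into tens and units,
# then add the partial products.
print(47 * 60 + 47 * 2)   # 2820 + 94 = 2914
print(47 * 62)            # 2914, the same answer
```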
Replies from: kaler↑ comment by kaler · 2015-02-05T10:41:20.285Z · LW(p) · GW(p)
Replies from: Epictetus, ChristianKl, CCC↑ comment by Epictetus · 2015-02-05T13:29:47.867Z · LW(p) · GW(p)
I just really don't get why I don't do well in math, which I assume would be the best measure of one's fluid intelligence.
Scholastic math is a different beast. I can say that a lot of professors have issues with the "standard" math curriculum. I have taught university calculus myself and I don't think that the curriculum and textbook I had to work with had much to do with "fluid intelligence".
It seems that my mind lights up with too many questions when I learn math, many of which are difficult to answer. (My professor does not have much time to meet students for consultations and I don't think I want to waste his time). It seems that I need to undergo suspension of disbelief just to do math, which doesn't seem right given that a lot of it has been rigorously proven by loads of people much smarter than me.
Sounds like one source for your troubles. It's a lot harder to succeed at school math and go through the motions if you have unanswered questions about why the method works (and aren't willing to blindly follow formulas). By all means bring your questions up to the professor. If he's teaching, there's probably some university policy that he be available to students for a certain amount of hours outside of class (i.e. it's part of his job). You lose nothing by trying. Even an e-mail wouldn't be a bad idea in the last resort. In my experience, professors tend to complain about students who never seek help until they show up the day before the final at their wits' end (or, worse still, after the final to ask why they failed). By that point it's too late.
Things such as why dividing by zero doesn't work confuses me
We like our multiplication rules to work nicely and division by zero causes problems. There's no consistent way to define something like 0/0 (you could say that since 1 x 0 = 0, 0/0 should be 1, but this argument works for any number). With something like 1/0, you could say "infinity", but does that then mean 0 x infinity = 1? What's 2/0 then?
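A minimal Python sketch of the same point, for anyone who prefers code (the candidate values are arbitrary):

```python
# Any candidate value c passes the "c * 0 == 0" test, so the test that would
# justify 0/0 == 1 cannot single out one answer.
for c in [1, 2, -7, 3.14]:
    print(c, "* 0 =", c * 0)   # always 0

# Python, like most languages, therefore refuses to divide by zero:
try:
    print(1 / 0)
except ZeroDivisionError as err:
    print("1 / 0 raises:", err)
```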
↑ comment by ChristianKl · 2015-02-05T13:35:57.170Z · LW(p) · GW(p)
A very easy way to improve your writing would be to separate your text into paragraphs. It doesn't take any intelligence but just awareness of norms.
It seems that my mind lights up with too many questions when I learn math, many of which are difficult to answer. (My professor does not have much time to meet students for consultations and I don't think I want to waste his time).
Math.stackexchange exists for that purpose.
Not everybody is good at math. That's okay. Scott Alexander, who's an influential person in this community, writes on his blog:
In Math, I just barely by the skin of my teeth scraped together a pass in Calculus with a C-. [...] “Scott Alexander, who by making a herculean effort managed to pass Calculus I, even though they kept throwing random things after the little curly S sign and pretending it made sense.” [...] I don’t want to have to accept the blame for being a lazy person who just didn’t try hard enough in Math.
Things such as why dividing by zero doesn't work confuses me and I often wonder at things such as the Fundamental Theorem of Calculus.
Math is about abstract thinking. That means "common sense" often doesn't work. One has to let go of naive assumptions and accept answers that don't seem obvious.
In many cases the ability to trust that established mathematical findings are correct, even if you can't follow the proof that establishes them, is a useful ability. It makes life easier.
In addition to what CCC wrote, http://math.stackexchange.com/questions/26445/division-by-0 is a good explanation of the case.
Replies from: kaler, CCC↑ comment by kaler · 2015-02-05T13:44:34.994Z · LW(p) · GW(p)
Replies from: ChristianKl↑ comment by ChristianKl · 2015-02-05T15:17:32.077Z · LW(p) · GW(p)
I hope you don't mind that I have now separated my comment into paragraphs. It's such an obvious problem in hindsight.
Accepting feedback and directly applying it is great :)
↑ comment by CCC · 2015-02-05T13:58:12.512Z · LW(p) · GW(p)
In many cases the ability to trust that established mathematical findings are correct, even if you can't follow the proof that establishes them, is a useful ability. It makes life easier.
While yes, that can make life easier, it also means that if the reason you can't follow the proof is that you're misunderstanding the finding in question, then you're not applying any error checking, and anything you do that depends on that misunderstanding is potentially going to be incorrect. So, if you're going into any field where mathematics is important, it can also make life significantly harder.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-02-05T16:12:21.633Z · LW(p) · GW(p)
It's hard to put into words what I mean. There is a certain ability to think in abstract concepts that you need in math. Wanting things to feel like you "understand" them can be the wrong mode of engaging with complex math.
That doesn't mean that understanding math isn't useful, but it's an abstract understanding, and trying to seek a feeling of common sense can hold people back.
Replies from: CCC, Epictetus↑ comment by CCC · 2015-02-05T20:48:36.211Z · LW(p) · GW(p)
I... think I learnt math in a very different way to you. If I didn't feel that I understood something, I went back until I felt that I did.
I do not understand the difference between an "abstract understanding" and a "feeling of common sense". Is a feeling of common sense not a subtype of an abstract understanding (in the same way that a "square" is a subtype of a "rectangle")?
↑ comment by Epictetus · 2015-02-05T17:04:17.414Z · LW(p) · GW(p)
On the contrary, failing to feel common sense is usually a sign that you don't really understand what's going on. Your understanding of an abstract concept is only as good as that of your best example. The abstract method in mathematics is just a way of taking features common to several examples and formulating a theory that can be applied in many cases. With that said, it is a useful skill in math to be able to play the game and proceed formally.
There's an anecdote about a famous math professor who had to teach a class. The first time, the students didn't understand. A year later, he taught it again. Learning from experience, he made it simpler. The students still didn't understand. When he taught it a third time, he made it simple enough that even he finally understood it.
I will concede that in practice it can be expedient to trust the experts with the complications and use ready-made formulas.
Replies from: Nornagest↑ comment by CCC · 2015-02-05T11:34:17.988Z · LW(p) · GW(p)
I noticed that many articles in the sequences confuse me at times because I can think of multiple interpretations of a particular paragraph but have no idea which was intended. Also, many actions/thoughts of Harry in HPMOR confuse me. I might have interpretations of the events but I don't think those interpretations are likely to be correct. Is this normal?
This seems normal to me. What is intended is very often not an easy question to answer.
I have edited the post though, I think that saying that I am on track to receive First Class Honours in both is too optimistic.
The mere fact that you have been accepted for and expect to pass a double degree tells me that you are really not too stupid. (I'm not actually sure what the difference between Second Upper and First Class Honours is - I assume that's because you're referring to the education system of a country with which I am not familiar).
I just really don't get why I don't do well in math, which I assume would be the best measure of one's fluid intelligence. Things such as why dividing by zero doesn't work confuses me and I often wonder at things such as the Fundamental Theorem of Calculus. It seems that my mind lights up with too many questions when I learn math, many of which are difficult to answer.
Theory: You had a poor teacher in primary-school level maths, and failed to learn something integral to the subject way back there. Something really basic and fundamental. Despite this severe handicap, you have managed to get to the point where you're going to pass a double degree (which implies good things about your intelligence).
Is this normal too?
I... don't actually know. Throughout my entire school career, I was the guy for whom maths came easily. I don't know what's normal there.
Actually, it may be possible to narrow down what you're missing in mathematics. (If we do find it, it won't solve all your math problems immediately, but it'll be a good first step)
Let's start here:
Things such as why dividing by zero doesn't work confuses me
Define "division".
Replies from: kaler↑ comment by kaler · 2015-02-05T12:13:30.254Z · LW(p) · GW(p)
Replies from: arundelo, CCC↑ comment by arundelo · 2015-02-05T21:21:13.761Z · LW(p) · GW(p)
I recommend chapter 22 ("Algebra") of volume 1 of The Feynman Lectures on Physics. Here's a PDF.
My summary (intended as an incentive to read the Feynman, not a replacement for reading it):
We start with addition of discrete objects ("I have two apples; you have three apples. How many apples do we have between us?"). No fractions, no negative numbers, no problem.
We get other operations by repetition -- multiplication is repeated addition, exponentiation is repeated multiplication.
We get yet more operations by reversal -- subtraction is reversed addition, division is reversed multiplication, roots and logarithms are reversed exponentiation. These operations also let us define new kinds of numbers (fractions, negative numbers, reals, complex numbers) that are not necessarily useful for counting apples or sheep or pebbles but are useful in other contexts.
Rules for how to work with these new kinds of numbers are motivated by keeping things as consistent as possible with already-existing rules.
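A minimal Python sketch of the "operations by repetition" idea, restricted to non-negative integers (the function names are just illustrative):

```python
def multiply(x: int, y: int) -> int:
    """x * y as y repeated additions of x (y a non-negative integer)."""
    total = 0
    for _ in range(y):
        total += x
    return total

def power(x: int, y: int) -> int:
    """x ** y as y repeated multiplications by x (y a non-negative integer)."""
    result = 1
    for _ in range(y):
        result = multiply(result, x)
    return result

assert multiply(3, 4) == 12
assert power(2, 5) == 32
```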
↑ comment by CCC · 2015-02-05T12:41:09.687Z · LW(p) · GW(p)
Well, about 3-5 percent of the best students in a cohort can expect to get First Class Honours. It basically means the 97th percentile, or the 95th percentile, depending on the quality of the students. The 75th to 95th percentile can expect to get Second Class Honours.
Which implies that I can, tentatively, estimate you to be in the top 10% of people who are accepted for a degree. That's really good.
I must admit that this question stunned me. I don't actually know.
...I think we've found the start of the problem. Your foundations have a few holes.
Dividing X by Y, at its core, means: I have X objects and I want to place them in Y exactly equal piles; how many objects do I place per pile? (At least, that's the definition I'd use.) In this way, the usefulness of the operation is immediately apparent; if I have six apples and I want to divide them among three people, I can give each person two apples.
I can use the same definition if I have five apples and three people; then I give each person one and two-thirds apples.
This also works for negative numbers; if I have negative-six apples (i.e. a debt of six apples) I can divide that into three piles by placing negative-two apples in each pile.
Division by zero then becomes a matter of taking (say) six apples, and trying to put them into zero piles. (I hope that makes the problem with division by zero clear).
And yes, there is a fancy algorithm that I can put X and Y in and get the quotient out... but that algorithm is not a particularly good basic definition of division. (Interestingly, I note that your definition jumps straight to setting out separate cases and then trying to apply a different algorithm to each individual case. This would make it very hard to work with in practice; I've worked with division algorithms on computers, and they're far simpler, conceptually, than what you had there. If that's what you've been working with, then I am really not surprised that you've been having trouble with maths).
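A rough Python sketch of that "piles" picture (the function name is just for illustration):

```python
def per_pile(objects: float, piles: float) -> float:
    """Share `objects` into `piles` equal piles; return how many each pile gets."""
    if piles == 0:
        raise ValueError("there is no way to share objects into zero piles")
    return objects / piles

print(per_pile(6, 3))    # 2.0       -- six apples among three people
print(per_pile(5, 3))    # 1.666...  -- one and two-thirds apples each
print(per_pile(-6, 3))   # -2.0      -- a debt of six apples, split three ways
# per_pile(6, 0) raises ValueError, which is the "zero piles" problem above.
```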
Now let's see how far this goes...
Define "multiplication", "addition", and "subtraction".
Replies from: kaler↑ comment by kaler · 2015-02-05T13:11:42.940Z · LW(p) · GW(p)
Replies from: CCC↑ comment by CCC · 2015-02-05T13:50:29.307Z · LW(p) · GW(p)
When I read your answer, I was thinking, (seriously no offense because I know you are really smart) I don't know for sure that this definition works for complex numbers.
It does; complex numbers are just another type of number. We'll get to them shortly.
And then I was thinking that mathematics relies on definitions and deductive reasoning, and intuition cannot give the certainty of deductive reasoning; thus it might be a fallacy to think that something simple and intuitive is an accurate model of mathematical reality... then I remembered that it was taught in kindergartens even...
To be fair, sometimes the intuitive answer is wrong; one does have to take care. But sometimes, as in these cases, the intuitive model does work.
Define "multiplication"
X*Y : I have Y sets of X objects, how many objects do I have?
Exactly.
"addition"
X+Y : I have X objects. I am given Y objects. How many objects do I have?
Perfect.
It's easy to visualize imaginary numbers as another type of object: I have x of them, and I am given y more. So I have x + y imaginary objects and X + Y real objects.
You could do it that way, and it leads to the correct answers, but I think it's fundamentally problematic to see complex numbers as intrinsically different to real numbers. (For one thing, real numbers are a subset of complex numbers in any case).
"subtraction"
X-Y : I have X objects. Y objects are taken away from me.
Right.
Then it makes me wonder what other exceptions to manipulation there are
There's only one that I can think of off the top of my head; if x^z=y^z, this does not mean that x=y (i.e. we can't just take the z'th root on both sides of the equation). This can be clearly demonstrated with x=2, y=-2 and z=2. Two squared is four, which is equal to (negative two) squared, but two is not equal to negative two.
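A two-line check of that caveat, as a sketch:

```python
x, y, z = 2, -2, 2
print(x ** z == y ** z)  # True: both squares are 4
print(x == y)            # False: equal z-th powers do not force x == y
```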
Now, as to complex numbers. Let me start by asking you to define a "complex number".
Replies from: kaler↑ comment by kaler · 2015-02-05T14:12:37.450Z · LW(p) · GW(p)
Replies from: CCC↑ comment by CCC · 2015-02-05T21:35:00.916Z · LW(p) · GW(p)
Okay, those are all - well, I think I can kind of see some relation to complex numbers in there, but it's very vague.
So, let me describe how I understand complex numbers. To do that, we'll have to go right back to the very basics of mathematics; numbers.
Imagine, for a moment, an infinite piece of paper. (Or you can get a piece of paper and draw this, if you like; you won't need to draw the whole, infinite thing, just enough to get the idea)
Take a point, nice and central. Mark it "zero".
Select a second point (traditionally, this point is chosen to the right of zero, but the location doesn't matter). Mark it "one".
Now, let us call the distance between zero and one a "jump". You start from zero, you move a jump in a particular direction, you get to "one". You move another jump in the same direction, you get to "two". Another jump, "three". Another jump, "four". And so on, to infinity. These are the positive integers.
Now, consider an operation; addition. If I apply addition to any pair of positive integers, I get another positive integer. Any of these numbers that I add gives me a number I already have; I can add no new numbers with addition.
However, I can also invert the addition operation, to get subtraction. If I want to find X+Y, I hop X jumps from the zero point, then Y more jumps. But if I want to find X-Y, I must jump X jumps to the right, then Y jumps to the left; and this gives me the negative integers. Add them to the mental numberline.
At this point, multiplication gives us no new numbers. Division, however, does.
You will now notice, there are still gaps between the numbers. To fill these gaps, we turn to division; X/Y gives us a plethora of new numbers (1/2, 2/3, 3/4, 4/5, so on and so forth), hundreds and millions and billions of little dots between each point on the numberline. These are the rational numbers.
Is the numberline full yet? Hardly; it turns out that the rational numbers are so small a proportion of the numberline that it's still more empty space than marked point. I could say that there's billions of irrational numbers for every rational number, but that severely underestimates the number of irrational numbers that there are.
But let's add all the irrational numbers as well. (If you're actually drawing this, just take a ruler and draw a line across the page, such that all your integers fall on the line).
This line, then, is the famous numberline. I'm sure you've seen it before, on classroom walls and similar. It contains all the real numbers and, now that we've added the irrational numbers, it is full; there is no space on the line where another number can be added.
Now, let's consider squaring. The square of one is one. The square of any positive number greater than one is an even greater positive number (for example, two squared is four). The square of any positive number between zero and one is a positive number closer to zero (0.5 squared is 0.25).
The square of zero is zero.
The square of any negative number is equal to the square of the corresponding positive number; thus the square of negative two is four.
Therefore, four has two square roots; 2 and -2. Similarly, one's square roots are one and minus one.
So, a question then emerges; where are the square roots of minus one?
They cannot be on the numberline. There is no space for new numbers on the line, and the square of every number on the line is a positive number (or zero).
Let us call the square roots of minus one i and -i (somewhat arbitrary notation that was used once and stuck). Where do we put them on the line?
Since the line is full, we cannot put them on the line. If you place the line such that the zero is in front of you, the positive numbers head off to the right, and the negative numbers go to the left, then i is found one jump directly up from zero. Similarly, -i is one jump directly down from zero.
So, they are numbers, but they are not on the number line.
And now that we have placed i and -i, we can apply the same operation as we used earlier.
Addition: adding 1 is a jump to the right. Similarly, adding i is a jump upwards. There is a 2i two jumps above zero; a 3i three jumps above zero, and so on.
In fact, by following the same steps as were used to construct the original, real number line, we can create an imaginary number line at right angles to it; so that we can point to, say, 2.5i, or even pi i.
Then, if we want to find the point where (say) 3+4i is, we first jump three jumps to the right, then we jump four jumps up; adding the numbers 3 and 4i. 3+4i is thus a clearly defined point on the numberplane (since it's no longer one-dimensional, "numberline" is not exactly accurate anymore).
Adding and subtracting complex numbers on this plane is perfectly straightforward (though actually describing what i apples look like is beyond me). Multiplication follows the rules for multiplying additive expressions; that is, (a+b)*(c+d) = ac+ad+bc+bd. So, therefore:
(3+4i)*(2+5i) = (3*2)+(3*5i) + (4i*2) + (4i*5i) = 6 + 15i + 8i + 20*i*i
But since i is defined such that i*i=-1, that means:
(3+4i)*(2+5i) = 6 + 15i + 8i + 20(-1) = 6 + 23i - 20 = -14 + 23i
Voila, multiplication.
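For anyone who wants to check the arithmetic, a quick sketch using Python's built-in complex numbers (Python writes j where mathematicians write i):

```python
a = 3 + 4j
b = 2 + 5j
print(a * b)                              # (-14+23j), i.e. 23i - 14

# The same expansion term by term; j*j == -1 takes care of the last product.
print((3 * 2) + (3 * 5j) + (4j * 2) + (4j * 5j))   # also (-14+23j)
```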
comment by wikispective · 2015-01-27T03:36:30.105Z · LW(p) · GW(p)
Hello everyone – I’m a new member of LessWrong. I consider myself to be a rationalist and humanist. I’m interested in applying rational analysis to help the general public understand complex problems. To help achieve this goal, I’ve been working on a wiki-style website to explain the key nuances of various controversial issues.
The concept is designed to provide meaning and clarity to a wide variety of complex issues, rather than simply enumerating the facts as Wikipedia already does decently well.
I’m wondering if: 1) Anyone in the LessWrong community has thought about something like this; and 2) If there is any interest in learning more about this project
Best, WS
comment by PrimeMover · 2015-01-18T20:59:21.527Z · LW(p) · GW(p)
Newcomer, mathematician by species; freethinker, secularist and rationalist by nature. Abrasive and irreverent: if I haven't annoyed at least five pompous people in any given day, it's a day utterly wasted.
comment by RaistDragon · 2014-12-31T04:13:23.909Z · LW(p) · GW(p)
I'm Sam, 22. Lurked here for two years after first stumbling upon the Sequences. Since then, I've been trying to curb inaccurate or dishonest thought patterns or behaviors I've noticed about myself, and am trying to live my life more optimally. I'm making an account to try to hold myself more accountable.
comment by k_ebel · 2015-06-24T22:58:08.986Z · LW(p) · GW(p)
I joined a while ago but don't think I ever posted here. I'd lurked for quite some time here, and at various blogs a degree or so of separation away, since before that. I've mostly link-hopped my way around the sequences and various pieces of fiction, followed folks on Facebook, and recently realized we had a local LW meetup. I'm happy to answer any questions about me, but never really know what kind of information would be relevant to put in an introductory post, so I thought I'd make a proposal instead:
I've seen (for a while) a lot of activity regarding AI / Singularities / Existential Risk within these groups of people. For my own part, I have pretty much no background knowledge when it comes to that. So I was looking to really dig into the book Superintelligence as a way to get a rudimentary understanding of it all.
That said, I find that I definitely get a lot more out of learning when I have people to discuss it with. So, with a bit of encouragement, since this is the "get-to-know-you" thread, I figured I'd put a call out here to see if there was anyone who might be interested in reading (or re-reading) the book along with me and being Skype buddies for the process.
My current plan is to go through it a chapter at a time and discuss / do further research / etc before moving on. Message me if that sounds like something you might be interested in doing!
~Kim
comment by humesbacon · 2015-06-15T13:56:36.016Z · LW(p) · GW(p)
Hi, I'm new here. I found this site while looking for information about A.I. I read a few articles and couldn't help but smile to myself and think "wasn't this what the Internet was supposed to be?" I had no idea this site existed, and I'm honestly glad to have found stacks of future reading; you know that feeling. I never really post on sites and would usually have lurked myself silly, but I've been prompted into action by a question. I posted this to reddit in the shower thoughts section because it seemed appropriate, but I'd like to ask you (more).
I was reading about the Orthogonality thesis, and about Oracle A.I.s as warnings and attempted precautions against potentially hostile outcomes. I've recently finished Robots and Empire and couldn't help but think that something like the Zeroth Law could further complicate trying to restrain A.I.s with benign laws like "do no harm" or seemingly innocent tasks like acquiring paper clips. To me it seemed that trying to stop A.I.s from harming us whilst also completing another task would always end up with us in the way. So I thought perhaps we should try to give the A.I. a goal that would not benefit from violence in any way. Try to make it Buddha-like. To become all-knowing and one with all things? Would a statement like that even mean anything to a computer? The one criticism I received was "what would be the point of that?" I don't know. But I'm curious.
What do you think?
Replies from: hairyfigment↑ comment by hairyfigment · 2015-06-15T18:37:23.216Z · LW(p) · GW(p)
a goal that would not benefit from violence in any way... To become all-knowing
I have bad news for you. People have described ideas for an AI that only seeks knowledge (though I can't find the best link to explain it now). I think this design would calmly kill us all to see what would happen, if we'd somehow prevented it from dropping an anvil on its own head.
To "become one with all things" does not seem sufficiently well-specified to stop either from happening. In general, if we can reasonably interpret the goal as something that's already true, then the AI will do nothing to achieve it (nothing being the most efficient action).
Replies from: humesbacon↑ comment by humesbacon · 2015-06-16T01:58:56.581Z · LW(p) · GW(p)
I by no means thought I had stumbled upon something. I was just curious to see what other people thought. I thought "to be one with all things" was a very ambiguous statement; I think what I was trying to get at was that if the A.I. caused harm in some way, it would by definition be inhibited from completing its primary goal. And Buddha was the only example I could think of. Perhaps Plato's or Nietzsche's versions of the übermensch might fit better? Thank you for replying. I look forward to being a part of this community.
comment by cunning_moralist · 2015-05-22T15:43:52.539Z · LW(p) · GW(p)
I love teaching, especially interacting with my students and their thinking, and I love philosophy, especially ethics. Understandably, I'm a philosophy teacher. I also enjoy politics, history, biology and the great outdoors.
comment by physicistcellist · 2015-03-10T00:00:11.137Z · LW(p) · GW(p)
Hello All!
I'm not exactly new: I discovered this around the time HPMOR started (wow, 3 years ago). I've always liked thinking about how thought works; Hofstadter's GEB was a big influence. I've started the sequences several times, but never seem to finish. So I'm actually registering to see if that helps motivate me to read them all.
comment by pzwczzx · 2015-02-27T09:32:01.784Z · LW(p) · GW(p)
Hello everyone,
I'm Xavier, a 20 year old student from France. I've known about this site for a while. A week ago, I finally decided to start digging into the sequences for some useful insights. I'm interested in various topics such as philosophy, futurology, history and science. However, I'm almost certain my understanding of the world is seriously lacking compared to the average poster here. For example, I have no STEM background at all aside from the most basic knowledge, which is likely to become a problem in the future.
I've been obsessed with the idea of living a rational life for years. I've failed spectacularly to achieve this lofty goal, instead falling prey to what some of the sequences have described as akrasia. I've also been "dunning-krugered" many times due to a tendency to overestimate my abilities. I hope that by reading more Less Wrong and following the discussions here I will be able to eventually correct some of these issues and become a bit less amateurish in the process! Who knows?
Looking forward to meeting you guys.
Replies from: dxu
comment by misterbailey · 2015-01-12T14:15:57.956Z · LW(p) · GW(p)
Hi. I'm a long time lurker (a few years now), and I finally joined so that I could participate in the community and the discussions. This was borne partly out of a sense that I'm at a place in my life where I could really benefit from this community (and it could benefit from me), and partly out of a specific interest in some of the things that have been posted recently: the MIRI technical research agenda.
In particular, once I've had more time to digest it, I want to post comments and questions about Reasoning Under Logical Uncertainty.
More about me: I'm currently working as a postdoctoral fellow in mathematics. My professional work is in physics-y differential geometry, so it is only connected to the LW material indirectly, via things like quantum mechanics. I practice Buddhist meditation, without definitively endorsing any of the doctrines. I'm surprised meditation hasn't gotten more airtime in the rationalist community.
My IRL exposure to the LWverse is limited (hi Critch!), but I gather there's a meetup group in Utrecht, where I'm living now.
Anyway, I look forward to good discussions. Hello everyone!
comment by Nikario · 2014-12-24T12:46:22.152Z · LW(p) · GW(p)
Hello. I am new to this site as well. My background includes physics, mathematics, and philosophy at graduate level, which I am studying now.
I do not identify myself as a "rationalist", but that does not mean that I may not be a rationalist or that I am not trying to follow some of the advice that is given here to be a rationalist. I discovered LW after reading the story "Three Worlds Collide", which I discovered thanks to tvtropes.org. Lately I have been thinking and writing a lot about my own goals, and when I took a look around LW I was surprised to discover that many of the conclusions that I have arrived at independently appear in the sequences and other posts here. Thus I find myself agreeing with many of the things said here, but without having ever considered myself a "rationalist" explicitly. Still now, I'm not sure if "rationalism" is the right label to identify the kind of aspirations that I have and that I have found in this site. But it may be.
Anyway, to me that is unimportant. I think I am likely to find people here with a kind of interests that are very difficult to find in people you meet in person. I hope that I will be able to discuss here some topics that I cannot talk about anywhere else. Thus I have decided to sign up :)
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-12-26T10:36:14.815Z · LW(p) · GW(p)
Welcome!
Maybe we can invent a new label for people like you and me who aren't sure if they identify as "rationalists" but nonetheless find themselves agreeing with lots of what's written on Less Wrong anyway :P Quasirationalist or semirationalist, perhaps?
Replies from: Nikario↑ comment by Nikario · 2014-12-26T12:40:44.940Z · LW(p) · GW(p)
Thanks!
Actually, even though I said it is unimportant, I would like to explore this particular question further at some point. I would like to know: 1) How does my thought differ, if it does, from the major current of thought on LW? 2) Does this difference, if there is any, amount to the fact that I am not as rational as the average LWer? Or is it due to factors that are neutral from the point of view of rationality (if there are such things)?
I'll write about it when I find the time.
comment by TheFaceDancerFox · 2015-04-21T13:01:44.508Z · LW(p) · GW(p)
Well met!
My name is Fox, and I am an actor and magician...well...In actuality, I guess those are both the same thing. I know how you all love concision, so I'll try again... ahem
Well met!
My name is Fox, and I am a liar. Empathetic to a fault, highly spiritual, and emotionally driven--still an emo boy at heart--I live as far from consciously as it gets. My main passions are girls, music, and service to others. Core values are: love, kindness, beauty, passion, immersion, and evolution.
For the past year I have studied and practiced magick. It is very real to me and has been the lens through which I view the universe as well as my primary method for navigating life. I have enjoyed many experiences and even some progress, living as such. ...for a time.
Lately, I have just been working and preparing to reenter school. I find this "being an adult" business baffling and struggle with finances. Throw intangibles into the mix and it's untenable. Which brings me to why I am here: I need to implement rational thinking as my default state of mind, to help me with goal-oriented decision making and with getting a grasp on the most elusive concept in the world to me: self-discipline.
I suggest you may be human. Your awareness may be powerful enough to control your instincts. Your instincts will be to remove your hand from the box. If you do so you will die.
-Reverend Mother Gaius Helen Mohiam during the testing of Paul Atreides in Dune.
So LW...will you be my Gom Jabbar??
Replies from: None, Lumifer
comment by nitrat665 · 2015-03-26T14:21:27.303Z · LW(p) · GW(p)
Hello, everyone!
I am a long-time lurker and reader of LessWrong, and I have finally worked myself up to making an account and writing some comments. I am looking forward to participating in the discussions more, and hopefully writing some posts and contributing to the thought-bank here. So far, LessWrong has been a great resource for me, helping me get a sturdier basis for my ideological framework and exposing me to some good new ideas to think about.
For a little bit about myself: I am 29 years old, Russian, with a bachelor's degree in Chemistry and Math and a master's in Nuclear Chemistry from an American university. Currently I live in Russia, working as an instructor in IT / software development for a business analytics software company. The job is pretty much another step of school, only going into a “job experience” slot on the resume instead of the “education” one: we study a topic for a month, then we go and teach it to our developers. My first year was our company's software applications, then development and coding; now I am on the databases part. Eventually, I am hoping to return to a sciencier sort of work, though.
Religion-wise, I am an atheist, having formerly gone through all kinds of interesting religious searches (maybe I will make a separate comment on the rationalist origin thread about that). Politics-wise, I find it hard to classify myself under any traditional views (call me an effective anarchist, maybe?). Or maybe I am hoping for a better set of political ideas to emerge someday in the future.
My interests are the following:
Reading everything I can get my hands on, preferably science and science-pop literature, fiction and science fiction.
Science and self-education. When I found Less Wrong, it sparked yet again my interest in the more arcane parts of IT, and I am currently working through the basics part of the MIRI research guide posted here, while also keeping up with my job-related applied IT studies. In the past, I found myself sometimes venturing into the field of evolutionary theory (I am still hoping to find some time someday to study evolutionary algorithms and maybe program some fun simulation with evolving pseudo-life), the basics of quantum mechanics (well, that was in my school program), biology, and sometimes philosophy, religion and applied ethics.
As for less science- and reading-related interests: I enjoy camping, rafting, and the general summery outdoors stuff. In my city, summer is short, so we try to squeeze as much goodness as possible out of it.
Anyways, I am looking forward to having some fun discussions here. Nice to meet you, guys!
comment by [deleted] · 2015-01-30T03:27:34.731Z · LW(p) · GW(p)
I do economics, working on an interesting problem that might involve computer logic and recursion, but I am not a computer logic and recursion man. Thought to write a series of articles on economics aimed at building up to my current confusion, then thought to post them somewhere, would be convenient if audience with some knowledge of computer logic and recursion...
...oh.
~12 articles in, should be fun....
comment by Good_Burning_Plastic · 2014-12-25T12:39:46.517Z · LW(p) · GW(p)
Hello, everybody, and happy belated solstice.
I used to post here from a different account until some time ago, then I decided it was not anonymous enough (also, the username was quite silly) so I deleted it. Here I am again, but this time I'll be more careful about privacy.
BTW, the only reason for the underscores in my username is that the software won't let me use spaces, so don't bother with them. Also, in case you need to refer to me with a gendered pronoun, I'm a "he".
comment by Dajoker · 2015-04-14T18:28:08.922Z · LW(p) · GW(p)
Hello everyone!
I'm new to the site. I'm a grad student with a science background hoping to learn more about rationality and science. I've read posts on LW for quite some time (~ 3 years). I'm an atheist and a skeptic with some knowledge about theoretical physics.
See you around! ~dajoker
comment by dottedmag · 2015-04-09T08:12:09.211Z · LW(p) · GW(p)
Hello.
My name is Mikhail, and I have been lurking on LW for several months, mostly reading the sequences. I discovered this site after reading HPMoR, as no doubt many have.
I'm a practitioner of GTD, and I am looking for:
- supplementing the low-level practices of getting things done that I already understand with meta-algorithms for decision making and planning
- improving tactics / learning tricks for handling low-level tasks which don't come naturally to me (such as learning languages), and hence cannot be efficiently handled by a regular planning / execution process
A bit of personal info: I am a software engineer (15 years of experience, more if one counts tinkering with software during my school years). I live in Norway and am originally from Siberia.
comment by Algon · 2015-04-07T13:49:38.214Z · LW(p) · GW(p)
Hi. I've been lurking here for a couple of months, reading up on some of the sequences and so forth, I made an account because I wanted to post a few things on the discussion board. Mainly to do with why I'm pretty convinced that immortality is already a thing, and how that has badly damaged my belief in a utilitarian system of ethics. Finally, I wanted to ask about something to do with FAI; essentially, why wouldn't X work. I'm curious to see how FAI will reveal itself to be more fiendish than I already thought.
comment by dlarge · 2015-03-26T20:28:28.337Z · LW(p) · GW(p)
Hello, everyone! I've been lurking for about a year and I've finally overcome the anxiety I encounter whenever I contemplate posting. More accurately, I'm experiencing enough influences at this very moment to feel pulled strongly to comment.
I've just tumbled to the fact that I may have an instinctive compulsion against the sort of signalling that's often discussed here and by Robin Hanson. In the last several hours alone I've gone far out of my way to avoid signalling membership in an ingroup or adherence to a specific cohort. Is this sort of compulsion common amongst LWers? (I'm aware that declaring myself an anti-signaller runs the risk of an accusation of signalling itself but whadayagonnado.)
I'm also very interested in how pragmatism, pragmaticism, and Charles Sanders Peirce form (if at all) the philosophical underpinnings of the sort of rationality that LW centers on. It seems like Peirce doesn't get nearly as much attention here as he should, but maybe there are good reasons for that.
Replies from: Viliam_Bur, None↑ comment by Viliam_Bur · 2015-03-27T10:11:46.574Z · LW(p) · GW(p)
Is this sort of compulsion common amongst LWers?
Speaking for myself, (a) I am not good at playing social games, therefore I hate environments where things like signalling are the only important thing, and (b) joining any faction feels to me like indirectly supporting all their mistakes, which I would rather avoid.
comment by TommiH · 2015-03-18T15:32:52.762Z · LW(p) · GW(p)
Hello!
My name is Tommi, and I'm a 34-year-old Finn living in Berlin at the moment. I work as a freelance developer, focusing on the Unity development environment, making educational games, regular games, virtual art galleries, etc. for an hourly fee (so that's the skill set I bring into the community). I found Less Wrong some years ago via HPMOR (I forget how I found HPMOR). I've read it occasionally, but over the last year or so I've been slowly gravitating towards it, and decided now to make the effort to give this community a try.
I've always valued reason and science over hearsay and guessing, but so far it's manifested mostly in terms of what I like to read and who I vote for. I also participated in the Green party of Finland for some years, in order to advance scientific decision making and a long-term, global approach to things (the Greens in Finland have a fairly strong scientific leaning despite hanging onto some dogmas). However, as an introvert my effect was, as far as I could tell, minimal. Now that I've learned that lesson, and am also in a good position financially and in terms of available time, I'm looking at my life goals again, and would like to see if this community could help me reach them.
As I understand them now, my goals are as follows:
1) Live a comfortable life materially. I'm not willing to sacrifice all of life's comforts to serve a higher goal. However, my material desires are lowish compared to my ability to earn (I'm a freelance programmer and apparently a pretty good one).
2) Have a fulfilling social life. One reason I've been looking at Less Wrong is the articles on improving social skills. However, I'm not certain if improving them is worth the effort - perhaps it would be better to settle for the kind of social life I can get with my current skills, and focus on other things. (Romance seems to be particularly hard to achieve - I think it's particularly hard because I'm gay and I haven't found many social circles that are simultaneously gay and nerdy enough to feel comfortable to me.)
3) Have a high net positive impact on the world. Unless I suddenly lose my income, I intend to pay 10% of my income this year to charity. I'll probably go for a GiveWell approved charity, although I have some reservations on the utilitarian leanings of it. I believe in more complex ethics than a simple sum total of utility. For example, I believe that debt exists: If someone loses utility because of me (either they helped me or I did them harm), I'm obligated to compensate them (if they want that) instead of helping some other person. So I tend to think I should become carbon-neutral before contributing to other charities, unless those charities help the same people damaged most by carbon emissions (something that may well be true). I also believe that the utility of people who do harm to others is worth less than the utility of those who don't. The application of this second rule isn't as clear, though.
4) Artistic aspirations. I wish to advance the field of interactive storytelling. Basically, I'd like to make a game/games that offer the player/players meaningful choices. Meaningful in storytelling, moral, and strategic sense. Such games already exist, of course, but I want to make the choices more open-ended than in an RPG like Mass Effect, and more real and personal than in a strategy game. Ideally, I'd like to make the player feel like they're interacting with and affecting the lives of real people in an imaginary setting. My ambitions are similar to Chris Crawford's (http://www.erasmatazz.com/library/interactive-storytelling/what-is-interactive-storyte.html) but my approach is not as puristic as his. My other role models are the people behind the game King of Dragon Pass.
Initially, I was thinking about this in terms of the usual heroic stories that are being made into games over and over (just doing it better, of course). However, now I'm thinking about combining this ambition with another ambition, which was turning one of my old roleplaying campaigns into a novel/series. I wrote a few chapters a couple of years ago, and it was very well received at the creative writing workshop I showed it at. Some of the honor goes to the failed MMO Seed which my roleplaying campaign was fanfiction of. Seed, and by extension the campaign, had strong rationalistic leanings - it's a science fiction story about a group of colonists on another planet sorting out various problems via science and technology, and having political games about which way to steer the colony. The characters tend to be very analytic and look at things with a long perspective.
My campaign was pre-HPMOR, though, so it wasn't that super-deep about rationalism. But now I think it might be interesting to combine the writing goal with the interactive story goal, and strive to deepen the thinking involved as much as I can. Ideally, the game would reward the player/players for thinking rationally, while also making them care about the characters and the unfolding story - without turning it into a series of rationality puzzles with only one right answer.
So, I'd like to see if digging deeper into the Less Wrong community would help me with these goals.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2015-03-25T15:25:19.843Z · LW(p) · GW(p)
Unity development environment, making educational games, regular games
Just what I want to do!!!
However, I'm not certain if improving [the social skills] is worth the effort
I believe social skills make a huge difference in one's life. I also believe that most people underestimate this because they are not aware of the benefits that being popular could bring them.
Sometimes changing your environment brings better results. But these two options are not mutually exclusive. You can have a great preferred environment and be able to navigate successfully the rest of the world -- because you have to interact with the rest of the world to achieve many things you want. Even to explore it to find the good parts of the environment.
comment by Tryagainslowly · 2015-03-16T12:36:56.866Z · LW(p) · GW(p)
Hello, I'm new to LessWrong. I was hoping someone could help me with a technical problem I'm having. I posted this same problem on the open thread under the discussion page, but I thought I'd be more likely to get a response here. It's to do with the LessWrong wiki. I made an account called Tryagainslowly on it; it wouldn't let me use my LessWrong account, instead making me register for the wiki independently. I wanted to post in the discussion for the wiki page entitled "Rationality". The discussion page didn't have anything posted in it. I wrote out my post, and attempted to post it, but it wouldn't let me, telling me new pages cannot be created by new editors. What do I need to do in order to submit my post? I'm happy to show what I was intending to post here if anyone wants me to.
Replies from: Viliam_Bur, Tryagainslowly↑ comment by Viliam_Bur · 2015-03-25T15:15:56.964Z · LW(p) · GW(p)
I thought I'd be more likely to get a response here
Did you realize that "here" refers to a three-month-old article (on a website that has new articles every day)?
In the future, you are more likely to get a quick response in the most recent Open Thread. (There are articles called "Open Thread" in the "Discussion" section of the website.)
↑ comment by Tryagainslowly · 2015-03-18T11:33:36.344Z · LW(p) · GW(p)
It works now! It just required waiting a bit.
comment by high_IQ_monkey · 2015-03-02T07:48:42.334Z · LW(p) · GW(p)
Hello to all. Although I am quite new to this site, I have been exploring it ever since I first found it. I am an undergraduate mathematics and physics student with the goal of getting a PhD in mathematics with a specialization in game theory and/or decision theory. Throughout my schooling I have constantly been bored with the lackluster mathematics that has been shoved in my face, so consequently I have constantly been doing extra studying and research on my own. During one of my information binges I came across what is known as 'timeless decision theory' on this website, and after reading the article I was hooked on the plethora of talking points I found here. Though I have done much research on my own on topics such as behavior analysis, game theory of popular board games, and group theory, I do not plan on trying to contribute right away; though I hope I will end up posting some great arguments, I feel I need to learn the jargon and protocol before I can sufficiently contribute. As for the more personal side of things, my hobbies include a very healthy dose of board games and math (yeah, I count it as a hobby). I have what I think is a good sense of humor, and my philosophy is that offence is taken, not given, meaning there is no such thing as an intrinsically offensive statement. If anyone has a desire to chat about any of the previously mentioned topics I would be happy to indulge (especially board games, which, if you couldn't already tell, are a favorite of mine). Thank you, and have a nice day.
comment by Ulti8 · 2015-01-11T11:27:27.196Z · LW(p) · GW(p)
Hello, I am Connor (18) from Victoria, Australia. I have been at LW a few times before, but usually only for a brief look after being drawn in by a link. As of today, I have decided to actually stay and properly look into it all (The Sequences, discussions, etc.) and learn.
I am a student learning economics and business management. I got interested in rationalism mostly for two fundamental reasons. Firstly, through my upbringing and, by extension, my personality: my father taught me to be highly sceptical of assumptions and claims made by any person or organization without first thinking on them myself (ironically, he himself is relatively irrational, as his beliefs tend toward the overly cynical and paranoid). My questioning of baseless or fallacious assumptions (including from the person who taught me to be this way) and my desire to adapt my mindset to factor in evidence led me to find rationalism worth inspecting to improve my thinking process.
Secondly, the other reason I am here is my interest in learning how the world and its many systems work, particularly societal systems (which I am learning about) and natural/scientific ones (which I am sadly limited in). While I myself am not a scientist and have little knowledge of the (hard) sciences, I place high value on those fields and would like to talk to (or at least quietly learn from) people who actually ARE knowledgeable in those areas.
I am a fan of technology (particularly cybernetics, robotics, space technology and energy technology), literature, history, military strategy and art. I also occasionally dabble in philosophy. On a less serious note, I love video games, watch anime and occasionally read fanfiction (HPMoR did not bring me here, but I have read it). Finally, I am a futurist and transhumanist eagerly awaiting the singularity, and an ardent advocate of renewable energy (my father and I plan on starting up an energy company built around algae biofuel once we have enough investors).
And that's me, off to continue reading the Sequences.
comment by SwitchContext · 2014-12-26T18:49:24.160Z · LW(p) · GW(p)
Hi there everyone, happy mid-winter festive period.
I'm V (not from the film), 33, and living in the wilds of the UK, for now. I became very sick when I was 16 and essentially slept through my late teens and 20s so I'm playing catch up with a vengeance. I found the site through a friend and I've been a (silent, shadowy) member for a while but hadn't been able to carve out the time to get through the sequences, until now.
I'm a final year Applied Maths and Computer Science student but I'm also really interested in cognitive science, rationality, philosophy and their applications. I detest being wrong and not understanding things I consider to be important. Rationality is the best tool I've found for helping me get out of my own way and for protecting myself from myself and others. Having lost so much time and having had a generally strange life, I care a lot about getting the most out of the time that I do have, having opinions that reflect reality as closely as possible and making the best quality decisions I can.
At the moment, degree work, trying to move house, and preparing for post-graduation life are swallowing my life, but I do have a couple of side projects on the go: a couple of app ideas which may or may not be useful enough to make; gaining basic programming proficiency (for some value of all three words) and a portfolio of work; a blog about my later-stage recovery and the process of becoming "well"; and a few other bits and pieces.
I have embarrassingly poor grammar and spelling which I'm trying to improve so I'm happy to be corrected if I start spewing word salad. I'm aware I've just invited replies consisting entirely of corrections to this comment and that's o.k.
comment by hoofwall · 2015-04-11T12:35:10.084Z · LW(p) · GW(p)
Hi! I am socially retarded... There are many things the standard human was born with the capacity to grasp that I never can. The word "autism" seems to be thrown around a lot lately, mostly as a meaningless word used to convey that one thinks another person is simply not normal. When I first noticed, two years ago, how heavily users on the internet threw the word around, I identified as such for a bit to make conversation more expedient. I am able to comprehend metaphors and similes and such for some reason, but things such as having the capacity to roleplay, or being able to perceive what I should do in any given scenario to maximize the happiness of the human before me, are incomprehensible to me. I like to think I am a purely logical thinker and was born to be such, but I'd rather not start talking about that right now...
My education is pretty poor: eighth grade. I have read next to no books, and the internet is what taught me to speak English as I do today. My English was very basic before that, even though it is my only language. For two years I looked up in the dictionary every word I encountered that I couldn't define, until I decided that refining my expression in the English language for the human's sake was a waste of time and stopped caring.
I feel like I can't express more about myself without delving right into my philosophy, which I used to contest with every mind I came across indiscriminately, only to have them still disagree with me 99% of the time despite my cornering them in argument, and I don't really want to, because I've had such bad experiences with convincing others to think like me. The downvote system on this website is kind of intimidating as well... my first post on this website got downvoted once almost immediately, and I'm not sure I can tell by whom. I hate systems that enable passive-aggression like that. Even conversing in real life is awful, because others can use petty tricks to try to emotionally manipulate you instead of actually just explaining why you should think like them via argument. It's just masturbation for them, and they have no interest in convincing you to think like them. I suppose that is one thing I feel I can safely say about my philosophy... I don't see my opinions as just opinions; I see them as an objective rationalization of this universe, the likes of which one cannot disagree with without simply being wrong. I want to rationalize everything too, you know. I used to be indoctrinated to the point where I thought simply asking questions was evil. All I'd ever wanted to do was give everyone my rationalization of the universe, to objectively minimize their pain and maximize their pleasure for the sake of forcing the world to tend to its most rational end as I perceive it, but whatever... I'm still being impertinent with whatever I'm writing here, since I don't think just up and writing out my opinions would be a good idea.
I have very few interests. I really only care about defining right and wrong, and giving my philosophy to others, which I haven't done for a very long time. One day I hope to start expressing my opinions on what is right and wrong in a formal manner just to have done so in my lifetime. I apologize for the entirely vague post... I still haven't really any idea how this site works but if I ever debate users here or something I won't hesitate to express my opinions in their entirety.
Replies from: Weedlayer, polymathwannabe↑ comment by Weedlayer · 2015-04-11T15:28:01.054Z · LW(p) · GW(p)
Edit: I misunderstood what you said by "rationalize", sorry.
As Polymath said, rationalization means "trying to justify an irrational position after the fact", basically making excuses.
Anyway, I wouldn't worry about the downvotes, based on this post the people downvoting you probably weren't being passive aggressive, but rather misinterpreted what you posted. It can take a little while to learn the local beliefs and jargon.
↑ comment by polymathwannabe · 2015-04-11T13:56:51.587Z · LW(p) · GW(p)
Hi, Hoofwall. Welcome to LessWrong.
I have considered the label "autistic" to describe myself at some points in the past, but now I'm not sure. I may be at another point in the spectrum, or I may be just imagining things. But I can definitely empathize with anyone who struggles to make themselves understood to humans.
I'm confused about one point: your usage of the verb "to rationalize" suggests that you intend a meaning slightly different from the standard meaning it has in logical jargon. We usually say that someone is "rationalizing" when they make an irrational decision and then, afterwards, make up an excuse to keep feeling good about it. I suspect that's not what you meant when you used that word; it feels like it would have been clearer to use the verb "to reason."
Of course, this is only my speculation. Please correct me if I'm wrong. (Within the rationalist culture prevalent in this forum, correcting other people when they're wrong is socially accepted as something you can do, but also, accepting corrections when you're wrong is something you're expected to do.)
Replies from: hoofwall↑ comment by hoofwall · 2015-04-11T14:15:34.658Z · LW(p) · GW(p)
Hi. I did indeed mean what you express as "to reason" when I said "rationalize"... I am entirely unfamiliar with the distinctions made on this site so thanks for pointing out to me how others might misinterpret what I say. Also, thanks for the welcome.
Sorry for using you like this, but do you know whether swearing is against the rules here, and if so, would you please tell me? The closest thing to a rule list I found was just about etiquette. I'm wondering whether uttering something like the "n word", for instance, would get me banned... I want to know how much freedom of expression I have here. I don't intend to spam or anything; I just want to know what I am allowed to do.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2015-04-13T14:58:04.017Z · LW(p) · GW(p)
This is the official etiquette policy. However, it is true that it offers very few guidelines about what language is allowed. From what I've perceived here, people are very formal and behave like adults, i.e. I haven't seen anyone throwing angry insults, wishing someone were dead, or using hateful terms toward a demographic group. In short, this is not the YouTube comments section.
comment by Vilx- · 2015-03-08T14:05:45.023Z · LW(p) · GW(p)
Hello folks! I'm new to your site here and still trying to get my bearings. :) The navigation is pretty nonstandard, hence somewhat confusing to me. I found this website from a link my friend posted on a Facebook discussion we had. Since then I've got one question that keeps bugging me, so I decided to ask it here. As I understand, this thread (is this the equivalent of a forum thread?) is a good place to do it. :)
The question is this: I've got a theory which seems (to me) so simple and obvious and able to explain all human behavior that I'm surprised it hasn't already been accepted as the gold standard. In fact, when browsing Wikipedia it seems there are dozens of different competing theories about human motivation, and some of the more popular ones (like the one that Daniel Pink is promoting) are really skirting around the truth (according to my theory). So, obviously I'm full of doubts about how correct I am. There must be something I'm missing here.
Furthermore, the idea isn't exactly mine - it's just a slightly modified (or maybe not even modified, depending on how you look at it) totally classical idea dating back to Freud himself. I tried to find counterexamples on this site but couldn't find any that I couldn't explain with my theory.
So, the theory is this: humans will always choose to do the action which they think will bring them most pleasure/least pain. As I said - totally classical. The "modification", however, is the "they think" part. We cannot see into the future, so we cannot choose with absolute certainty the actions that will bring us the maximum enjoyment. Instead we try to predict the likely outcomes of our choices - and quite often we get it totally wrong. Many times every day, in fact.
The reasons for getting it wrong are many. We don't have complete information (or our memory didn't recall it in time; or recalled it incorrectly); we value consequences that arrive sooner as more important than those that arrive later; we can only correlate a limited number of items (memory limitation); etc.
Also we don't only take external things into account but also try to predict our own emotions, because those are quite real pleasure/pain sources too. For example, when I decide to organize my desk, I do it because I anticipate the sense of accomplishment and order (everything in its place and a place for everything) when I've completed the task.
But at the end of the day when all is said and done, the decision mechanism will just sum up all the predicted positive outcomes (and their magnitudes) and all the negative ones, and choose the option with the greatest value.
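For concreteness, a minimal sketch of that decision rule might look like the following (Python; the option names and outcome magnitudes are entirely hypothetical, made up purely for illustration - this is only the stated theory restated as code, not an endorsement of it):

    # A minimal, hypothetical sketch of the decision rule described above:
    # sum the predicted positive and negative outcomes for each option and
    # pick the option with the greatest total.

    def predicted_value(outcomes):
        """Sum the predicted magnitudes (positive = pleasure, negative = pain)."""
        return sum(magnitude for _, magnitude in outcomes)

    def choose(options):
        """Return the option whose predicted outcomes sum to the greatest value."""
        return max(options, key=lambda name: predicted_value(options[name]))

    options = {
        "organize desk": [("sense of accomplishment", +5), ("effort", -2)],
        "browse the web": [("short-term fun", +3), ("guilt later", -1)],
    }

    print(choose(options))  # "organize desk": 5 - 2 = 3 beats 3 - 1 = 2

The hard part, of course, is estimating those magnitudes in advance - which is exactly where, as described above, the predictions can go wrong.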
And this way I've so far been able to explain any example I've come across. Now, if this was the truth, I'm sure there wouldn't be such an eternal debate over it and there wouldn't be so many other competing theories. So where is my mistake? Can anyone come up with a counterexample that I won't be able to explain with my theory?
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2015-03-08T15:30:53.439Z · LW(p) · GW(p)
Welcome!
Short introduction to navigation: Clicking the "Discussion" link at the top of the page will show you (most of) the new articles. If you write comments there, you are most likely to receive replies.
If there is something called "Open Thread", that pretty much means: feel free to ask or say anything (as long as it is at least somewhat relevant to this website, but even that is not always necessary). Also, posting in the most recent open thread will give you more visitors and thus more replies than posting in a three months old article. As of today, the most recent open thread is here, but tomorrow a new one will be started, and it may be strategic to wait.
humans will always choose to do the action which they think will bring them most pleasure/least pain. ... and quite often we get it totally wrong.
Well, if you put it this way, it is almost impossible to find a counterexample, because for literally any situation where "a person X did Y", you can say "that's because X somehow believed Y would bring them most pleasure / least pain", and even if I say "but in this specific situation that doesn't make any sense", you can say "well, this is one of those situations where X was totally wrong".
A better approach than "can you find a situation that my theory cannot explain?" is "can you find a situation that my theory cannot predict?" The difference between explanation and prediction is that explanation is what you do after the fact, when you already know which outcome you need to explain, while predictions are made before the fact. For example, if the Democrats win the next American elections, I can explain to you why. However, if the Republicans win, I can also explain why. But if you ask me to predict who will win, then I am in trouble, because there my verbal skills cannot save me.
Analogously, if we have a situation "Joe spends his afternoon reading Reddit", it is easy to explain: Joe believed that reading Reddit would bring him the most pleasure. But if we have a situation "Joe decided not to read Reddit, and instead learned a new programming language", it is also easy to explain: Joe believed that learning would bring him the most pleasure in the long term. The problem is if Joe is starting his computer right now, and your theory has to predict whether he will read Reddit (as he usually does, but not always), or whether he will learn a new programming language (which he has procrastinated on for a long time, but today he feels slightly more motivated than usual). What will Joe do? This is the difficult question. But once he does something, it will be extremely easy to explain in hindsight why he chose this option instead of the other.
More info here: Making beliefs pay rent. But the general idea is: if your theory can explain anything, but predict nothing, what exactly is the point of having such a theory?
Replies from: Vilx-, Vilx-↑ comment by Vilx- · 2015-03-08T22:19:11.253Z · LW(p) · GW(p)
Hmm... I've given it some thought (more to come later, for sure), but there's already one thing I've found this theory useful for. There have been times when I've caught myself doing/desiring things that I should not do/desire. I then asked myself the question - so why do I do/desire this thing? What pleasure/pain motivates me here? Answers to these questions were not immediately available, but after some time doing introspection, I've come up with them. After that it was a simple matter of changing these motivators to rid myself of the unwanted behavior.
So... yes, I think it can be used for predicting stuff (like, "if I change X, then behavior Y will also change"). Now, the information needed for these predictions is hard to come by (but not impossible!). Essentially you need to know/guess what a person is thinking/feeling. But once you have that, you can predict what they will do and how to influence them.
What's your opinion on this?
Replies from: Viliam_Bur, TommiH↑ comment by Viliam_Bur · 2015-03-25T15:33:06.036Z · LW(p) · GW(p)
After that it was a simple matter of changing these motivators to rid myself of the unwanted behavior.
What you describe as "simple" here is extremely difficult for me. (There are many possible explanations for why this is so, and I am not sure which of them is the correct one.) Generally, what you described seems like part of the correct explanation... but there are other parts, such as biology, environment, etc.
For example, if my goal is to exercise regularly, I should a) think about my goals, imagine the consequences, think about the costs, and resolve the internal conflicts... but also b) do some strategic activities, such as finding where the nearest gym is, or maybe buying some exercise equipment for home, and c) check my health to make sure there is no biological problem, such as anemia, making me chronically tired.
↑ comment by TommiH · 2015-03-18T23:47:13.203Z · LW(p) · GW(p)
An alternative explanation I can think of is the placebo effect. It's possible that your behaviour Y changed after changing X, because you believed behaviour Y would change. Especially as you wanted to change those behaviours in the first place.
Also, even if this was not due to the placebo effect, it's only evidence about how your mind works. Other people's minds might work differently. (And I suspect it's also quite weak as evidence goes, though I can't seem to articulate why I think so. At the very least, I think you'd need a very big sample size of behaviour changes, without forgetting to also consider the failed attempts at changing your behaviour.)
Replies from: None
comment by Philosophist · 2015-06-13T14:32:02.928Z · LW(p) · GW(p)
Thank you for this article. I'm still finding it difficult to navigate the site in terms of comments and posts. Would it be possible to edit the "site mechanics" portion of this article to include some more explanation of what open threads are and how to use them?
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-07-05T15:29:57.409Z · LW(p) · GW(p)
Open threads are for things that aren't important enough for either a toplevel post or a discussion post. You use them just like you used the welcome thread: leave a comment and let people respond :)
comment by Ozyrus · 2015-05-20T16:43:36.054Z · LW(p) · GW(p)
Hello, everyone!
LW came to my attention not so long ago, and I've been committed to reading it since that moment about a month ago. I am a 20-year-old linguist from Moscow, finishing my bachelor's. Due to my age, I've been pondering the usual questions of life for the past few years, searching for my path, my philosophy - essentially, the best way for me to live.
I studied a lot of religions and philosophies, and they all seemed really flat, essentially because of the reasons stated in some articles here. I came close to something resembling a nice way to live after I read "Atlas Shrugged", but something about it bothered me, and after a thorough analysis of that philosophy I decided to take some good things from it and move on, as I have done many times before.
I found this gem of a site through Reddit and Roko's basilisk (is it okay if I say it here? I heard discussion was banned). I am deeply into the whole idea of rationality and nearly all the ideas presented on this site, but something really bothers me here, too.
The thing is that it is implied that altruism and rationality go hand in hand; maybe I missed some important articles that could explain to me why?
Let's imagine a hypothetical scenario: there is a guy, Steve, who really does not feel anything when he helps other people, nor when he does other "good" things generally; he does them only because his philosophy or religion tells him to. Say this guy is introduced to the ideas of rationality and is thus no longer bound by his philosophy/religion. And what if Steve also does not feel bad about other people's suffering (or even takes pleasure in it)?
What I wanted to say is that rationality is a gun that can point both ways, and it is a good thing that LessWrong "sells" this gun with a safety mechanism (if it really is such a "safety mechanism"; once again, maybe I missed something really critical that explains why altruism and "being good" is the most rational strategy).
In other words, Steve does not really care about humanity; he cares about his own well-being and will utilize all the knowledge he has gained just to meet his ends (people are different, aren't they? and ends are different, too).
Or take another case: an average rationalist, Jack, estimates that his own net gain will be significantly bigger if he hurts or kills someone (considering his emotions and feelings about humanity's overall net gain, and all other possible factors). Does that mean he should carry on? Or is that taboo here? Or maybe it is a problem of this site's demographics and nobody has even considered this scenario (which I really doubt).
I feel that I am diving too deep into metaphors, but I am not yet a good writer. I hope you understood my point and can make me less wrong. :)
edit: fixed formatting
Replies from: Lumifer, Gram_Stone↑ comment by Lumifer · 2015-05-20T17:11:54.791Z · LW(p) · GW(p)
The thing is that it is implied that altruism and rationality go hand in hand
That is not so. There is a certain overlap between the population of rationalists and the population of altruists; people from this intersection are unusually well represented on LW. But there is no "ought" here -- it's perfectly possible to be a non-altruist rationalist or to be a non-rational altruist.
↑ comment by Gram_Stone · 2015-05-20T19:28:17.989Z · LW(p) · GW(p)
Welcome, Ozyrus.
This is moral philosophy you're getting into, so I don't think that there's a community-wide consensus. LessWrong is big, and I've read more of the stuff about psychology and philosophy of language than anything else, rather than the stuff on moral philosophy, but I'll take a swing at this.
Let's imagine a hypothetical scenario: there is a guy, Steve, who really does not feel anything when he helps other people, nor when he does other "good" things generally; he does them only because his philosophy or religion tells him to. Say this guy is introduced to the ideas of rationality and is thus no longer bound by his philosophy/religion. And what if Steve also does not feel bad about other people's suffering (or even takes pleasure in it)?
What I wanted to say is that rationality is a gun that can point both ways, and it is a good thing that LessWrong "sells" this gun with a safety mechanism (if it really is such a "safety mechanism"; once again, maybe I missed something really critical that explains why altruism and "being good" is the most rational strategy).
In other words, Steve does not really care about humanity; he cares about his own well-being and will utilize all the knowledge he has gained just to meet his ends (people are different, aren't they? and ends are different, too).
It seems that your implicit question is, "If rationality makes people more effective at doing things that I don't value, then should the ideas of rationality be spread?" That depends on how many people there are with values that are inconsistent with yours, and it also depends on how much it makes people do things that you do value. And I would contend that a world full of more rational people would still be a better world than this one even if it means that there are a few sadists who are more effective for it. There are murderers who kill people with guns, and this is bad; but there are many, many more soldiers who protect their nations with guns, and the existence of those nations allows much higher standards of living than would otherwise be possible, and this is good. There are more good people than evil people in the world. But it's also true that sometimes people can for the first time follow their beliefs to their logical conclusions and, as a result, do things that very few people value.
Or take another case: an average rationalist, Jack, estimates that his own net gain will be significantly bigger if he hurts or kills someone (considering his emotions and feelings about humanity's overall net gain, and all other possible factors). Does that mean he should carry on? Or is that taboo here? Or maybe it is a problem of this site's demographics and nobody has even considered this scenario (which I really doubt).
Jack doesn't have to do anything. If 'rationality' doesn't get you what you want, then you're not being rational. Forget about Jack; put yourself in Jack's situation. If you had already made your choice, and you killed all of those people, would you regret it? I don't mean "Would you feel bad that all of those people had died, but you would still think that you did the right thing?" I mean, if you could go back and do it again, would you do it differently? If you wouldn't change it, then you did the right thing. If you would change it, then you did the wrong thing. Rationality isn't a goal in itself, rationality is the way to get what you want, and if being 'rational' doesn't get you what you want, then you're not being rational.
Replies from: Ozyrus↑ comment by Ozyrus · 2015-05-20T22:03:42.703Z · LW(p) · GW(p)
It seems that your implicit question is, "If rationality makes people more effective at doing things that I don't value, then should the ideas of rationality be spread?" That depends on how many people there are with values that are inconsistent with yours, and it also depends on how much it makes people do things that you do value. And I would contend that a world full of more rational people would still be a better world than this one even if it means that there are a few sadists who are more effective for it. There are murderers who kill people with guns, and this is bad; but there are many, many more soldiers who protect their nations with guns, and the existence of those nations allows much higher standards of living than would otherwise be possible, and this is good. There are more good people than evil people in the world. But it's also true that sometimes people can for the first time follow their beliefs to their logical conclusions and, as a result, do things that very few people value.
Excellent answer! Yes, you deduced the implicit question correctly. I also agree that this is a rather abstract area of moral philosophy, though I did not see that at first. Although I don't think that your argument for the world being a better place with everyone being rational holds up, especially this point:
There are more good people than evil people in the world.
Even if there are, there is no proof that after becoming "rational" they will not become "bad" (quotation marks because "bad" is not sufficiently defined, but that will do). I can imagine some interesting prospects for experiments in this field, by the way. I also think that the result will vary if the subject is placed in a society of only rationalists vs. the usual society - with "bad" actions carried out more often in the second case, as there is much less room for cooperation.
But of course that is a pointless discussion, as the situation is not really based on reality in any way and we can't really tell what would happen. :)
comment by [deleted] · 2015-03-16T12:25:24.798Z · LW(p) · GW(p)
Hello world.
I am new to the community, but I had read through most of the major sequences before I registered. I found this site by reading Eliezer's Harry Potter fanfic, HPMOR. It was really good, by the way. I am happy to learn about biases, how to overcome them, and how to optimize certain things.
I am fairly intelligent and I am a VERY philosophical person.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2015-03-25T15:11:53.078Z · LW(p) · GW(p)
My greatest weakness in this community is probably that I against my will partly believe that most americans are stupid due to lack of general education and political propaganda.
Have you considered the effect of selection? From the USA, you usually hear about the stupid stuff, because that's what makes interesting news. (Also, exaggerating all the bad things from the "decadent West" has a long tradition in Russian propaganda.) In your own country, you spend most of your time with your circle of educated people.
Replies from: None↑ comment by [deleted] · 2015-03-25T17:57:07.254Z · LW(p) · GW(p)
Yes, I have considered it. We have no Russian propaganda in Finland. Overall, Finnish people don't like Russia very much. I don't spend much time with educated people right now either. But I agree that selection may somehow have something to do with it.
comment by Ndtrip · 2015-03-12T10:35:05.976Z · LW(p) · GW(p)
Hello everyone, first post. My education level is Associate's. My special skills include mathematics and reading comprehension.
I come to this website because, as I look at the rationalist techniques, I can't help but think to myself, "This is a skill that would be beneficial to learn." I have done some preliminary reading of some of the posts here and find that while a lot of it is rather chewy (that is, it takes extra time to process mentally), it is genuinely enjoyable to peruse and be made to think.
I have a question. Considering that I am religious, and I fully intend to stay that way despite any evidence that might suggest otherwise, how much will any rationalist skills that I build up be hampered? I don't want to summon a religious discussion, so if it seems that I might be doing so, please just think of it as Fixed Belief X. I understand that the ability to update beliefs is central to rationality, but one such belief doesn't seem crushing.
I ask because I want to make sure that I am actually obtaining value out of my time. I don't want to find some arbitrary time down the road that my efforts have borne no fruit, and it was impossible from the beginning.
Replies from: None, None↑ comment by [deleted] · 2015-03-12T14:00:10.007Z · LW(p) · GW(p)
Well, the meta-level of what you said is "updating beliefs when evidence is against them is not always beneficial." I think there are articles here that challenge this kind of meta-claim, although I cannot point to them; I am fairly new here. But I still see the issue, namely: how exactly do you decide, by what algorithm, which beliefs of yours you want to update when evidence is against them and which not? So it seems you will have two competing motives - to execute the truth-seeking algorithm and the belief-defending one - and they may weaken each other. Yet I think with some compartmentalization it can work, though it may be difficult.
To put it in different words: you can simply put a taboo on full-on truth-seeking with respect to religion and let the truth-seeking algorithm run elsewhere. But you have a reason, an algorithm, for that taboo, maybe not fully conscious, and it may conflict with your truth-seeking algorithm in other fields in more subtle ways: perhaps not handing out a clear, obvious taboo, but biasing results. Or to be blunt: non-rationality has reasons and methods too, and thus leaks out from its compartments and contaminates.
Just my 2 cents, I am also a beginning learner here.
↑ comment by [deleted] · 2015-03-12T12:38:17.838Z · LW(p) · GW(p)
Depends on how much effect your religion has on you. I doubt you'll be any less rational if you go to church every day although you may end up loathing it one day.
If anybody has a link to the post where Eliezer told a story about how he was told to "pray and (literally) stfu", you'll have a good example of how religion can screw up reasoning. You can still reason effectively within religion, regardless of how true it is, but you're probably going to encounter something that makes you say "this doesn't make sense", and you will one day encounter someone who WILL do something entirely paradoxical while wearing their chosen religious headwear.
Replies from: None↑ comment by [deleted] · 2015-03-12T14:09:54.669Z · LW(p) · GW(p)
To be fair, this kind of example is a bit extreme. I used to read edwardfeser.blogspot.com, and he fails at being an empiricist, but does not fail at logical reasoning. His only - albeit catastrophic - failure is "X follows from the premises we accepted to be true, hence reality works like X". Map versus terrain... However, even Feser could not make a useful rationalist because of this failure at empiricism - an unwillingness to step over the map-terrain gap, the language-reality gap.
Really, the primary problem of Feser-type smart theists is not that they cannot reason; it is that they believe too much in language. Theism almost follows from that failure mode: since language is a mind-product, believing that the arguments expressed in words which tend to convince human minds also happen to be true out there in reality almost assumes there is a human-like mind behind the universe. Proper atheism starts with accepting that the universe does not give half a shit about our logic, reasoning and intellectuality: we can find ideas perfectly convincing, we can admit they seem true, and out there they still aren't. But that is really hard, as it means really throwing out much of our intellectual history and tradition. It is an incredibly huge gap for a culture shaped by e.g. Plato to say - and we MUST say this - "Your ideas convinced me perfectly. They are still not true."
Replies from: None↑ comment by [deleted] · 2015-03-12T15:20:58.678Z · LW(p) · GW(p)
No, it's a great example of EVERYTHING (not just religion) going to shit because it basically says "don't think, do".
It's not any less harmful even if we remove religion from it. It can apply to... practically everything. I think it's sound personal philosophy to know what the fuck you're actually doing. Hell, it's probably the first step in making a plan, and a step in every part of the process.
comment by [deleted] · 2015-02-09T23:31:42.824Z · LW(p) · GW(p)
Hello all, I'm new to this site. I've stumbled across this website a few times, and have been interested in its implications on philosophy. I am here in a position of scepticism about the claims and projects this site wishes to advance. I suspect most of my posts in the recent future will be critiques of other things found on this website. I hope I make some friends, and not too many enemies.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-02-09T23:43:15.772Z · LW(p) · GW(p)
I am here in a position of scepticism about the claims and projects this site wishes to advance.
What do you understand those to be?
Replies from: None↑ comment by [deleted] · 2015-02-10T13:14:41.124Z · LW(p) · GW(p)
I do not fully know yet.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-02-10T14:00:39.868Z · LW(p) · GW(p)
What do you mean when you say you are skeptical of ideas that you don't know?
Replies from: None↑ comment by [deleted] · 2015-02-15T16:39:24.166Z · LW(p) · GW(p)
You do not need to fully understand something to approach it skeptically.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-02-15T16:55:53.591Z · LW(p) · GW(p)
Yes, but then it says more about your general approach to things you don't understand than it says about the subject.
You also didn't answer the question. What do you actually mean when you say that you are skeptical?
Replies from: None↑ comment by [deleted] · 2015-02-15T18:44:07.579Z · LW(p) · GW(p)
No; it tells you about my approach to LessWrong based on what I know of LessWrong. I hope you are familiar with the word skeptic. If not, I recommend you read a dictionary entry on it, and perhaps look up its usage in literature. If you mean "what precisely do I mean when I say I am approaching LessWrong skeptically", I mean that I will be reading carefully through articles on LessWrong, looking for potential flaws and failings, and generally maintaining a high degree of doubt over anything said or implied.
I have to add that this welcoming thread isn't very welcoming.
Replies from: ChristianKl, dxu↑ comment by ChristianKl · 2015-02-15T19:27:39.197Z · LW(p) · GW(p)
I hope you are familiar with the word skeptic.
I'm familiar enough to know that different people use it to mean different things. Asking people to explain in detail what they mean is called "tabooing" on LW. It helps with rational thinking.
Of course you are skeptical about the value of explaining what you mean. That's alright. It takes mental effort to value clear thinking, and most people are not used to engaging in that effort.
This might seem unwelcoming because I don't allow you to easily get away with a vague statement and confront you on an intellectual level. But that's not the point; I welcome you by engaging with you.
Replies from: None↑ comment by [deleted] · 2015-02-15T20:15:11.554Z · LW(p) · GW(p)
Yeah, you would not make a good host if you welcomed your guests by interrogating them. 'Of course you are skeptical about the value of explaining what you mean' - what on earth does this mean? 'It takes mental effort to value clear thinking, and most people are not used to engaging in that effort' - great concealed insult. Not quite obvious enough to make you look bad, but with enough "I'm superior to you"-ness to put me down. 'This might seem unwelcoming because I don't allow you to easily get away with a vague statement and confront you on an intellectual level' - nope, it's unwelcoming because you are excessively pedantic, and because you aren't very nice (e.g. the concealed insult).
As a note, I do not have the time nor patience to look through everything linked to me. Also, how do you quote on this website?
Replies from: dxu, ChristianKl↑ comment by dxu · 2015-02-15T21:39:38.538Z · LW(p) · GW(p)
nope, it's unwelcoming because you are excessively pedantic
What you call "pedantry", some people call "clear communication".
As a note, I do not have the time nor patience to look through everything linked to me.
I don't want to sound condescending, but to understand discussions, you may have to. This is not an absolute rule, but it is a good rule of thumb that when someone links you somewhere, it's a good idea to at least click on that link.
Also, how do you quote on this website?
Quotes are written by prefacing whatever you want to quote with a "greater-than" character: ">". For instance, "> Hello." would appear as
Hello.
EDIT: Also, note that this notation only works if you begin your quote on a new line. Using a ">" symbol in the middle of a paragraph, for instance, won't do anything.
↑ comment by ChristianKl · 2015-02-15T20:35:29.524Z · LW(p) · GW(p)
Yeah, you would not make a good host if you welcomed your guests by interrogating them.
Being a good host means creating an environment in which the right people feel welcome. On LW the right people happen to be people who like to explain how they reason.
Yeah, you would not make a good host if you welcomed your guests by interrogating them. 'Of course you are skeptical about the value of explaining what you mean' - what on earth does this mean?
You started by saying you are skeptical about this website's way of handling things.
I answered with this website's standard way of handling things: asking you to taboo a term you used, without specifically using the word "taboo", because it's internal jargon.
As you said at the beginning, you are indeed skeptical of this website's ideas. Tabooing happens to be one of them. It's a new concept for you, and for you being skeptical is not about philosophical skepticism but about having a high bar for adopting new concepts.
Replies from: dxu↑ comment by dxu · 2015-02-15T21:37:04.699Z · LW(p) · GW(p)
Being a good host means creating an environment in which the right people feel welcome.
This statement is slightly stronger than I would word it. In particular, since Perrr333 has expressed that he/she does not feel welcome, combining that fact with this statement would imply one of the following conclusions:
- LessWrong is not being a good host.
- Perrr333 is not one of the "right people" for LessWrong.
I don't believe 1 is true, and I don't think you can determine the truth of 2 after so little time. As a result, I don't quite agree with the quoted statement above. Is that statement really what you meant to say?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-02-15T22:00:10.560Z · LW(p) · GW(p)
This statement is slightly stronger than I would word it.
My statements are polarized. Polarization has the advantage of making clear points.
LessWrong is not being a good host.
LW is a forum. It's not a host.
Perrr333 is not one of the "right people" for LessWrong.
As long as he's not willing to be asked why he believes what he believes ("being interrogated"), he's not in that category. Not being willing to go there leads to a lot of pointless debates for the sake of debating.
On the other hand it's something that he can easily change if he's willing.
Replies from: dxu↑ comment by dxu · 2015-02-16T02:08:11.924Z · LW(p) · GW(p)
My statements are polarized. Polarization has the advantage of making clear points.
Fair enough.
LW is a forum. It's not a host.
Still, wouldn't you say LW should at least strive to provide a fairly pleasant environment for its frequenters?
As long as he's not willing to be asked why he believes what he believes ("being interrogated"), he's not in that category. Not being willing to go there leads to a lot of pointless debates for the sake of debating.
On the other hand it's something that he can easily change if he's willing.
I don't really disagree with this, but I'm not sure his behavior in this thread alone can be used as a reliable indicator of whether he's willing to be "interrogated". Possibly he may be more receptive to questioning in other threads.
↑ comment by dxu · 2015-02-15T19:02:45.129Z · LW(p) · GW(p)
If you mean "what precisely do I mean when I say I am approaching LessWrong skeptically", I mean that I will be reading carefully through articles on LessWrong, looking for potential flaws and failings, and generally maintaining a high degree of doubt over anything said or implied.
This is generally referred to around here as "maintaining good epistemic hygiene", and it's considered a fairly normal practice. There's no particular need to give it a special name like "skepticism", especially when that word already has a philosophical meaning.
Moreover, if you come onto any website (not just LessWrong) and say something like "I am here in a position of scepticism about the claims and projects this site wishes to advance," naturally people will think you are referring to specific claims. If they then ask you which claims you are referring to, and you say "I don't know," it's only expected that people will react with confusion and (probably) will not warm up all that much to you. It's almost like a sort of bait-and-switch; you start off (seemingly) claiming one thing (either explicitly or implicitly) and then reveal that you were talking about something else all along. We have a name for that on this site as well: logical rudeness.
I have to add that this welcoming thread isn't very welcoming.
In general, saying (or implying, at least) in your first comment on a new site you are joining that you disagree with many of its claims is not likely to lead to welcoming responses. This is not because residents are trying to be unpleasant; rather, it is because they are simply following the flow of the conversation. Consider the following exchange:
A: I am new here, and I am skeptical of many of the claims this site has to offer.
B: Welcome!
B's response is something of a nonsequitur, and in fact does not address what most people would perceive to be the meat of A's comment: that A is skeptical of many of the claims this site has to offer. More realistic would be the following conversation:
A: I am new here, and I am skeptical of many of the claims this site has to offer.
B: Really? Which claims in particular did you have in mind?
And if you look closely at the first two comments in this thread, you'll see that this is exactly what happened. Nothing hostile going on. If A then goes on to reply "I don't know", well, then people might start to find A's position slightly strange. But there's no "unwelcoming" vibe going on here, I don't think.
(But since you are correct that no one actually welcomed you, let me be the first: Welcome to LessWrong!)
Replies from: None↑ comment by [deleted] · 2015-02-15T20:01:22.665Z · LW(p) · GW(p)
I am aware of the philosophical meaning. If you don't mind, I'd prefer to just use regular terminology rather than your site-specific terminology. I've been around the block of debating sites, and none of them have gotten so defensive when I've simply stated I'm approaching their claims skeptically. Stating you wish to approach something skeptically without stating exactly what you are approaching seems sensible to me.
Also, it seems rather silly to me that your response to me effectively saying "I feel unwelcome" is "Every reply has been legitimate!!!!". I didn't say anyone had been unreasonable, I just said I feel unwelcome. And your reply certainly hasn't changed that.
Replies from: dxu, ChristianKl↑ comment by dxu · 2015-02-15T21:32:09.680Z · LW(p) · GW(p)
I am aware of the philosophical meaning.
Then you should be aware that the way in which you used the term is not in line with its philosophical meaning.
Stating you wish to approach something skeptically without stating exactly what you are approaching seems sensible to me.
This was not, in fact, your original wording. From your original comment:
I am here in a position of scepticism about the claims and projects this site wishes to advance.
Specifically, you singled out "this site", i.e. LessWrong, as the one whose claims you were approaching skeptically, suggesting that there was something in particular about LessWrong which you found disagreeable. The connotations of your original comment and the ones you are offering now are radically different, even if they may be denotatively similar. The practice of picking up on (and sending) said connotations is a crucial element of any social interaction, so if people are apparently interpreting your words incorrectly, you should take that as evidence that you were unclear and seek to be more clear in the future, rather than waste time defending your original wording. A simple "Sorry, you misunderstood me; this is what I actually meant" would have sufficed.
I didn't say anyone had been unreasonable, I just said I feel unwelcome.
Again, your original wording:
I have to add that this welcoming thread isn't very welcoming.
This is not a statement about your own state of mind; rather it is a claim of what (presumably) you regard as an objective aspect of this thread (whether it is "welcoming" or not). Again, your time could better be spent simply providing a clarification rather than arguing that said clarification is what you said in the first place. No need to bring up "I didn't say this; I said that"; instead, just say "I meant to say that".
As a more general statement: LessWrong as a community places extremely great emphasis on clear communication. Often, we find that a good majority of disagreements can be avoided simply by having all participants state their position clearly in the beginning, rather than having said position remain unclear or nebulously defined, eventually devolving into arguments about the definition of a word, or some such. If you view this thread in light of this, you'll see that none of this is intended as an attack, as you (seem to) have been perceiving it as. We are simply trying to encourage clear communication, and clean up misunderstandings.
↑ comment by ChristianKl · 2015-02-15T20:24:38.038Z · LW(p) · GW(p)
I've been around the block of debating sites, and none of them have gotten so defensive when I've simply stated I'm approaching their claims skeptically.
This is no debating site. It is a site for rational discourse about how to reason. As such, we talk about the subject of how to reason - not to defend something, but because we care about how to reason, and about your particular way of reasoning.
I am aware of the philosophical meaning. If you don't mind, I'd prefer to just use regular terminology rather than your site-specific terminology.
You said that you don't understand what the website is about, and people are trying to explain it to you. If you don't want to understand the local terminology, you won't understand LW.
Also, it seems rather silly to me that your response to me effectively saying "I feel unwelcome"
You didn't; you said people acted unwelcoming. That's something different from saying you feel unwelcome, even by conventional standards of language.
comment by [deleted] · 2014-12-29T14:06:43.159Z · LW(p) · GW(p)
Been looking for this thread for a few moments. I don't see much to expand on about myself. I found out about LW when someone pointed me to the 1000-year-old vampire post, which I really liked.
And that's almost enough for now. I tried using the search but I didn't get the thing I wanted. All or fucking nothing I guess: What's the best way to ask a girl out?
"Best" means a lot of things that I'm naturally not aware of otherwise I wouldn't be asking this :) But true, I feel like there's a lot of things to account for in "best" that I might not be realistically able to do in different situations.
If you're asking why I'm asking this, it's just because although I can manage a conversation (I do have an almost severe aversion to inane conversations/topics, so sometimes I really have nothing to say, and when I do I always think "this is stupid, but... fucking conversation") at a level I consider okayish (I could work on this too, but that's an entirely different topic), I always feel like "now's not the time". Not sure why. Maybe I'm not getting the right signal or maybe I'm missing it, but I always have this feeling that even though I'd like to do it, I'd probably mess up. Instinctively (or in some cached way) I think I should lead the conversation there, but... well, this is dragging on. So guys (I guess girls too), what's the best way to ask a girl out?
Replies from: chaosmage, Alicorn↑ comment by chaosmage · 2014-12-29T15:48:58.353Z · LW(p) · GW(p)
I'm guessing here, but it sounds like you have a very common problem, which people usually call "fear of rejection" but I think should be called "no plan for rejection". We instinctively avoid situations we don't feel able to handle, and in anyone able to think ahead, this includes situations that might lead into situations we don't feel able to handle. And that can feel like now's not the time.
A popular method for fixing this is The Rejection Game. Ask for something and get rejected, once per day, for a month. Your requests should be somewhat ridiculous, so you'll get rejected even though you're super polite and respectful. (Ask salespeople for discounts, for example.) After rejection, don't give up immediately, but negotiate a bit - this gives you something to do and should get you rejected more firmly.
Also, it might help to pretend they're boys.
Bonus prize: If you handle rejection really well, you get additional attempts later. Magic!
Replies from: None↑ comment by [deleted] · 2014-12-29T16:22:05.268Z · LW(p) · GW(p)
What makes you draw a line from what I've said to a fear of rejection? I have a philosophy of always trying to stretch my limits, but I know the difference between reckless foolishness and planning ahead. The main plan is to do it. The smaller details are basically the steps. I was rejected today for a position I'd really have liked to have. I'll try to negotiate next time (not on the dates, I guess, because that really feels like I'm kissing her ass); I do need something, though. It's also great in case the person rejected me for some devious reason. (I'm looking for another job now. No reason to dwell on a no.)
But here's another question in addition to the line-drawing one: Assuming I get this rejection thing done and I'm not fearful of rejections, how does that one-up my chances? How much am I going to get other than the bonus prize?
It also seems this rejection thing is heading towards quantity and not quality. Also, it sounds like the thing being rejected doesn't carry much weight. You'd definitely feel worse if you'd been rejected for something that's important to you. Naturally, that's no reason to dwell too much on it, but sometimes I honestly wonder: if I did a few things better, would I have a better outcome?
EDIT: Also I'm going to try this rejection thing for the laughs of it. Let's see how funny it can get.
Replies from: chaosmage↑ comment by chaosmage · 2014-12-29T18:10:49.991Z · LW(p) · GW(p)
The quality of your query isn't entirely unimportant - you can lose a chance with poor quality - but the person asked will usually have lots of other reasons that play into their decision, and most of them you'll never know. In the absence of this information, what you have is an opinion on the quality of your request, so naturally that's what you focus on to optimize; but that doesn't mean this is the decisive variable in the average case.
Assuming I get this rejection thing done and I'm not fearful of rejections, how does that one-up my chances?
It makes it easier to actually try. As long as you still feel "now's not the time", worrying about the quality of what you'd say if you actually did is not an efficient use of your attention.
You're right, the rejection game is about quantity not quality, and that's because people have found quantity makes more of a difference.
Replies from: None
↑ comment by Alicorn · 2014-12-29T14:27:04.204Z · LW(p) · GW(p)
Do you have a girl in mind or do you mean generally speaking?
Replies from: None
↑ comment by [deleted] · 2014-12-29T14:33:29.233Z · LW(p) · GW(p)
A specific one in mind? I actually have a few girls I'd like to ask out.
But I'd suppose a general solution would probably be better than a specifically optimized one.
I'd like to be greedy and ask for both, as I assume the answer will be different depending on how I answer your question. So: "yes" and "no". :)
Replies from: hairyfigment
↑ comment by hairyfigment · 2015-01-30T21:20:19.004Z · LW(p) · GW(p)
Aside from other problems - like your apparent inability to write grammatically - a general solution is impossible by Turing's Halting Problem Theorem. It's exactly the same proof.
Replies from: None
↑ comment by [deleted] · 2015-01-31T18:12:41.509Z · LW(p) · GW(p)
I'd say your English is the one lacking here, since many people understand what I say, both in text and in speech.
But to allow for a middle ground, I'd say my English is Internet English built on Video Game English. I learned it on my own by matching the text in video games to the images, or by analyzing the text when there wasn't anything graphical to compare it to. I'm still not sure how I learned my English, but that seems like the best explanation I can think of, and it's my reasonable explanation for why I'm supposedly unable to write grammatically.
comment by RobertTaylor · 2015-06-27T19:49:45.676Z · LW(p) · GW(p)
Hey guys, I'll ask something I've been thinking about here since I don't have the karma to make a thread yet:
Does slowing population growth decrease existential risk?
With fewer people around, it's less likely that any of them will use dangerous technology, so there would be more time to get the technology under control and develop counter-measures.
If that's the case, then the next question is whether there are any effective ways of reducing population growth, such as access to birth control.
Fewer people might slow down the progress of science, but not by that much, I'm guessing: many people are working on the same problems without cooperating much, and there would be more natural resources per person, so more money could be spent on education and research per person.
comment by curtd59 · 2015-05-18T09:49:10.995Z · LW(p) · GW(p)
Hi. Curt Doolittle. I follow LW via Feedly, but today someone asked me to comment on an LW article. I write analytic philosophy in epistemology (specifically truth), ethics, law, politics, and science. I'm reasonably well known and easy to find on the web.
Here is my response to the recent post on Signaling by Outliers (Hipster analogy). You can use it as a test of worthiness.
All, thank you for asking me to respond. I'll convert the post's argument (the author's criticism and somewhat humorous demonstration of signaling) from moral justification to scientific language, and I think it will be clearer:
1) Radicals, by definition, do not fit into the center of the distribution - the statement is tautological, not insightful.
2) We all signal, and signaling is necessary for evolutionary reproductive selection.
3) The presumption of not fitting into some locus around the median of the distribution is a democratic one - that we are equal, rather than (as I argue) constituting a division of cognitive labor: perception, evaluation, knowledge, and advocacy. (Humans divide cognition more than other creatures do, because we specialize in cognition.)
4) Our theories do tend to justify our social positions (signaling), but then, we would not have the information necessary to theorize about any other set of interests, now would we?
5) The origin of theories is irrelevant (justification is false), and therefore a theory produced by any subset of a polity can be judged only by criticism - it's irrelevant who comes up with a theory.
The vast difference between pseudoscience and science in ethics, law, politics, and economics is captured in those few words.
Now, to state the positive version: the solution to the fallacy of the Enlightenment hypothesis of equality of ability, interest, and value is captured in these additional points:
6) Economic velocity (wealth) is determined by the degree of suppression of parasitism (free riding/imposed costs). This suppression eliminates transaction costs.
7) Central power originates to centralize parasitism and increase material costs by suppressing local parasitism and transaction costs. Once centralized, these can be incrementally eliminated, if and only if an institutional means of following rules can be used to replace personal judgement.
8) The only means of producing institutional rules to replace personal judgement (provision of 'decidability') is in the independent, common, evolutionary law resting upon a prohibition on parasitism/free-riding/imposed costs (negatives), codified as property rights (positives): productive, warrantied, fully informed, voluntary transfer (exchange), free of negative externalities.
9) Language evolved to justify (morality), negotiate (deceive), and rally and shame (gossip), and only tangentially and late to describe (truth). Truth as we understand it is an invention, and an unnatural one - which is why it is unique to the west, and why it has taken philosophers so long to understand it. However, westerners evolved a military epistemology because they relied upon self-financing warriors participating voluntarily, as well as on the jury and truth-telling. (The marginal differences in intellectual ability apparently did not matter - they were all smart enough - and such testimony was in itself 'training'.)
10) We cannot expect or demand truth from people unless they know how to produce it - i.e., education in what I would consider the religion of the west: "the true, the moral, and the beautiful". So I consider this education 'sacred', not just utilitarian.
11) We cannot demand truth and law from people unless it is not against their interests - i.e., the only universal political system is Nationalism, because groups can act truthfully internally and truthfully externally, and can use trade negotiations to neutralize competitive differences. And with nationalism, individuals cannot escape paying the cost of transforming their own societies and themselves, or lay the burden of doing so upon other societies.
12) Commons are a profound competitive advantage. Territorial, institutional, normative, genetic, physical, and economic (industrial) commons are a profound advantage to any group. The west is the most successful producer of commons, so commons are even more important to the west, and we must provide a means of producing them. The difference between the market for private goods and services (where competition in production is a good incentive) and corporate (public) goods (where we must prevent privatization of gains and socialization of losses) requires that we provide monopoly protection of those goods from consumption, but it does not require that we provide monopoly contribution to them. Commons require only that the people willing to pay for them do so; otherwise there is no demonstrated preference for that commons. Insurance is a commons, and I will leave that for another time. Returns on investment (dividends) are the product of commons; I will leave that for another time as well. The central point is that we can produce a market for common goods using government, just as we do in the market for private goods. But law and commons are two different things, and there is no reason whatsoever, knowing how to construct the common law, to think that government is capable of producing law. It cannot. Law is; it cannot be created, only identified.
(This is also probably the most profound 1000 words on politics that you will be able to find at this moment in time.)
#propertarianism
Curt Doolittle, The Propertarian Institute
Replies from: Lumifer