Comments
It's Mary Daly, Catholic theologian and radical feminist: http://www.enlightennext.org/magazine/j16/daly.asp?pf=1
I don't believe that that quite applies to my situation. I'm not predicting whether I'll choose right now to break up with my girlfriend (99.999% certainty I won't); I'm predicting whether at some point in the next year one of the future Ozymandiases, subtly different from me, will find zirself in a state in which zie wants to break up with zir girlfriend. I have already made up my mind to not break up; I'm predicting how likely I am to change my mind.
I hope that rejecting cynicism in the self-examination of my membership in my own church of rational physics engineering leads me to reject cynicism when trying to understand other people's churches. There ARE reasons people believe things and they are by no means all stupid reasons.
We're definitely in agreement there. And even the ones that are stupid may be psychologically reassuring or otherwise "make sense" even if they are completely irrational. While signalling arguments are important, I think it's unrealistic to consider them to the exclusion of other arguments.
I was thinking roughly Matrix 2 level backlash: a significant group of "ruined FOREVER" fans, but the movie does not become a byword for terribleness now and forever like Episode 1. Possibly this could be measured by the number of negative YMMV tropes on its TVTropes page?
Fan backlash is remarkably difficult to operationalize.
Sorry. I apparently suck at the Internet. :)
noseriouslywhatabouttehmenz.wordpress.com
No death or rape threats. I have yet to come up with a theory about why (beyond "crazy random happenstance" and "I'm so nice no one wants to rape and murder me"); suggestions appreciated.
Thanks! LW actually helped me crystallize that a lot of the stuff social-justice-types talk about is not some special case of human evil, but the natural consequence of various cognitive biases (that, in this case, serves to disadvantage certain types of people).
Dammit, could someone clean the fanboy off the ceiling? The goop is getting in my hair. :)
It is true, I forgot to account for the effects of a GOP presidency on OWS. However, I still think there's a high chance of an OWS fadeaway, for a few reasons. First, the liberal hippies (generally the backbone of social justice movements) have started to nitpick OWS in earnest: this could be a sign either that OWS is getting more successful (and the crab-in-a-bucket mentality is taking over) or that it's losing their support, but given that the mainstream media seems to have decided OWS is yesterday's news, I think it might be the latter. Second, as the economy splutters into recovery, OWS will get less support. Third, if OWS continues to get more popular, the government will likely make some token effort to address its concerns that will take away some of the movement's momentum.
Nevertheless, you did mention an important factor I overlooked, so I'll downgrade it to a roughly 60% probability.
To a certain degree, different brands of feminism could function as different parties (certainly in academic feminism they do). A Christina-Hoff-Sommers-esque conservative feminist is unlikely to agree much with a Dworkinite radical feminist. For instance, "rape is a subset of violence with no particularly gendered component" and "rape is the natural outgrowth of a culture in which women's subordination to men is eroticized" are two substantially different positions (both of which I disagree with).*
Admittedly, the average person is not particularly clear on the distinct branches of feminism; hell, there is still a widespread belief that radical feminist means "a feminist who's really extreme" as opposed to a distinct framework of theories and political beliefs. And even among the different groups of feminists there are usually some common premises (gender being at least partially a social construct, men being privileged over women, etc.).
That said, I too would like more variation in the gender politics space; some groups (most notably, men) are distinctly underserved by the current gender discourse, and more competition in the marketplace of ideas can only be a good thing. :)
*I am somewhat cheating here by picking an issue on which there is a lot of disagreement among different branches of feminism, as opposed to (say) the gender gap, in which the primary disagreement is between feminists who do and do not suck at math.
The difference in my reaction when reading this post before and after I found my something to protect is rather remarkable. Before, it was well-written and interesting, but fundamentally distinct from my experience-- rather like listening to people talk about theoretical physics. Now, when I read it, my feeling of determination is literally physical. It's quite odd.
Has anyone else had a similar experience?
I'm already polyamorous, so there is in fact a certainty of a polyamorous relationship situation at some point in 2012. :)
My girlfriend knows and is highly amused at my pessimism.
My logic is that I have never actually had a relationship that went much beyond the six-month mark, and while there are all kinds of factors that mean that this one is different and will stand the test of time, all of my other relationships also had all kinds of factors that meant this one is different and will stand the test of time.
The prediction is only 60%, however, since I might have actually gotten better at relationships since the last go-round. And because my girlfriend is really fucking awesome. :)
Romney will be the Republican presidential nominee: 80%.
Obama will win reelection: 90% against a non-Romney presidential nominee, 50% against Romney.
The Occupy Wall Street protests will fade away over the next year so much that I no longer hear much about them, even in my little liberal hippie news bubble: 75%.
There will be massive fanboy backlash against The Hobbit: 80%. Despite this, The Hobbit will be a pretty good movie (above 75% on Rotten Tomatoes): 70%.
John Carter will be a pretty good movie (above 75% on Rotten Tomatoes): 85%. Whether or not it is a good movie, I will love it: 95%.
I will get my first death or rape threat this year: 80%. My reaction to the death or rape threat will be elation that I've finally made it in feminist blogging: 95%. Even if it isn't, I will totally say it is in order to seem cooler: 99%.
My comod and I will complete the NSWATM spinoff book this year: 75%. It will be published as an ebook: 80%. It will not make the transition to a dead-tree book this year: 90%. It will make the transition to a dead-tree book eventually: 60%.
I will break up with my girlfriend at some point over the next year: 60%.
I will acquire a new partner at some point over the next year: 90%.
Thank you for the link to the Chalmers article: it was quite interesting and I think I now have a much firmer grasp on why exactly there would be an intelligence explosion.
The second is that consciousness is not necessarily even related to the issue of AGI; the AGI certainly doesn't need any code that tries to mimic human thought. As far as I can tell, all it really needs (and even this might be putting more constraints on it than are necessary) is code that allows it to adapt to general environments (transferability) that have nice computable approximations it can build using the data it gets through its sensory modalities (these can be anything from something familiar, like a pair of cameras, to something less so, like a Geiger counter or some kind of direct feed from thousands of sources at once).
Also, a utility function that encodes certain input patterns with certain utilities, and some [black box] statistical hierarchical feature extraction [/black box] so it can sort out the useful/important features in its environment that it can exploit. Researchers in machine learning and reinforcement learning are working on all of this sort of stuff; it's fairly mainstream.
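The reinforcement-learning setup described here (an agent with a utility function, adapting from sensory feedback alone) can be illustrated with a toy example. This is a minimal sketch of an epsilon-greedy multi-armed bandit, a standard mainstream RL exercise; the specific payoff numbers and function names are mine, not anything from the discussion above:

```python
import random

def run_bandit(true_payoffs, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent: learns each arm's utility purely from
    reward feedback, with no built-in model of the environment."""
    rng = random.Random(seed)
    n = len(true_payoffs)
    estimates = [0.0] * n   # learned utility estimates per arm
    counts = [0] * n
    for _ in range(steps):
        if rng.random() < epsilon:          # occasionally explore
            arm = rng.randrange(n)
        else:                               # otherwise exploit current best estimate
            arm = max(range(n), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < true_payoffs[arm] else 0.0
        counts[arm] += 1
        # incremental running mean of the observed rewards for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = run_bandit([0.2, 0.5, 0.8])
print(est)  # the agent's estimate for the 0.8 arm ends up highest
```

Obviously an AGI would need vastly more machinery than this, but the core loop (sense, act, receive utility, update) is the same shape.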
I am not entirely sure I understood what was meant by those two paragraphs. Is a rough approximation of what you're saying "an AI doesn't need to be conscious, an AI needs code that will allow it to adapt to new environments and understand data coming in from its sensory modules, along with a utility function that will tell it what to do"?
Before I ask these questions, I'd like to say that my computer knowledge is limited to "if it's not working, turn it off and turn it on again" and the math I intuitively grasp is at roughly a middle-school level, except for statistics, which I'm pretty talented at. So, uh... don't assume I know anything, okay? :)
How do we know that an artificial intelligence is even possible? I understand that, in theory, assuming that consciousness is completely naturalistic (which seems reasonable), it should be possible to make a computer do the things neurons do to be conscious and thus be conscious. But neurons work differently than computers do: how do we know that it won't take an unfeasibly high amount of computer-form computing power to do what brain-form computing power does?
I've seen some mentions of an AI "bootstrapping" itself up to super-intelligence. What does that mean, exactly? Something about altering its own source code, right? How does it know what bits to change to make itself more intelligent? (I get the feeling this is a tremendously stupid question, along the lines of "if people evolved from apes then why are there still apes?")
Finally, why is SIAI the best place for artificial intelligence research? What exactly is it doing differently from other places trying to develop AI? Certainly the emphasis on Friendliness is important, but is that the only unique thing they're doing?
Very few people know what career they want when they're seventeen. Of those people, a significant proportion end up either doing a different job or being displeased with their choice.
This is what I did; it may or may not work for you. Go to a college with a wide variety of class choices and highlight everything in the course book that looks interesting and that you have the prereqs for. Narrow it down to four or five classes by eliminating courses that occur in the same time block as another course you're more interested in, courses with dull or unintelligent teachers, or courses that come from disciplines you've already taken a lot of classes in. (Note: if you have general course requirements, take those courses.) That should give you some data to eliminate majors you're absolutely not interested in; for the rest, assuming you have not gotten an all-consuming obsession with one particular field, look at the BLS statistics to see which one has the best overall job outcomes (income, hours worked, unemployment risk, etc.) and major in that one.
General warnings: unlike most people here, I am not a STEM major; my experience applies strictly to the social sciences and the humanities. I also have not attempted to get a job in this economy, so take my advice with a grain of salt.
I think many people will assume that "literature thread" also means "book thread," since "literature" is often used to mean "book, with connotations of being worthwhile/classic/making you a better person/whatever."
Perhaps "media" would work? Although that almost presents the opposite problem...
I'd suggest that high-cost ideas are generally high-benefit, or at least high-apparent-benefit (see: love-bombing in cults), in order to incentivize people to believe them.
I definitely think it's important to recognize that almost all group beliefs are both signalling and something that people actually believe and that has effects on their life. The PhD's role as a signal of membership in the Physicist Conspiracy doesn't conflict with the PhD's role of learning interesting things about physics; in fact, they're complementary. (However, it's certainly possible to imagine someone who can signal "being a physicist" without having learned interesting things about physics (fake PhD) or vice versa (extremely skilled autodidact), which is why I think they're probably two separate but related functions.)
Interesting article!
I presume that "I realized this goal was irrational and switched to a different goal that would better achieve my values" would also be a victory for instrumental rationality...
Ah, thank you. I misunderstood. :) I've had a few problems with people being confused about why my blog uses so much feminist dogma if it's a men's rights blog, so I'm hyper-sensitive about being mistaken for a non-feminist.
Thank you very much, Miley! I tend to view feminism and men's rights as being inherently complementary: in general, if we make women more free of oppressive gender roles, we will tend to make men more free of oppressive gender roles, and vice versa. However, in the great football game of feminists and men's rights advocates, I'm pretty much on Team Feminism, which is why I get so upset when it's clearly doing things wrong.
Also, my pronoun is zie, please. :)
To a certain degree one could test instrumental rationality indirectly. Perhaps have them set a goal they haven't made much progress on (dieting? writing a novel? reducing existential risk?) and see if instrumental rationality training leads to more progress on the goal. Or give people happiness tests before and a year after completing the training (i.e. when enough time has passed that the hedonic treadmill has had time to work). Admittedly, these indirect methods are incredibly prone to confounding variables, but if averaged over a large enough sample size the trend should be clear.
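The claim that averaging over a large sample washes out confounders can be made concrete with a toy simulation. This is a sketch under assumptions I'm supplying myself (individual goal progress dominated by Gaussian noise, a small constant training effect); none of the numbers come from the comment:

```python
import random
import statistics

def simulate_trial(n_per_group=2000, true_effect=0.3, noise_sd=2.0, seed=1):
    """Toy model: each person's goal progress is mostly individual noise
    (life circumstances, motivation, etc.), with a small added effect
    for the group that received rationality training."""
    rng = random.Random(seed)
    control = [rng.gauss(0.0, noise_sd) for _ in range(n_per_group)]
    trained = [rng.gauss(true_effect, noise_sd) for _ in range(n_per_group)]
    return statistics.mean(trained) - statistics.mean(control)

print(simulate_trial())  # recovers a value near the true effect despite the noise
```

With 2000 people per group, the standard error of the difference is about 0.06, so even a modest effect buried under heavy noise shows up clearly; with a handful of students it would be invisible.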
I think the most important thing about a rationality training service is operationalizing what is meant by rationality.
What exact services would the rationality training service provide? Would students have beliefs that match reality better? Be less prone to cognitive biases? Tend to make decisions that promote greater utility (for themselves or others)? How would you test this? Martial arts dojos tend to (putting it crudely) make their students better at hitting things than they were before; that's a lot easier to objectively measure than making students better at thinking than they were before.
I personally would not pay for a rationality training service unless it provided clear, non-anecdotal evidence that the average person received some benefit. I'd be particularly concerned about whether the service actually taught people to think more clearly, or simply inculcated them with the views of the people running the service.
I think the distinction is not between logical and illogical ideas, but between high-cost and low-cost ideas.
Illogical ideas are generally high-cost, for the reasons outlined in the OP, unless you live in a society in which everyone accepts the high-cost idea (for instance, Creationism in the American South). Cryonics is a high-cost idea: it may be right, but it is also deeply weird and unlikely to find acceptance among non-transhumanists. PhD physicists have high-cost ideas because of the time and effort required to understand them. Even jargon might count as a high-cost idea because of the price you pay in ease of communication, especially jargon that those outside the group tend to understand differently than those inside the group (for instance, feminists tend to use patriarchy to mean "the system of institutionalized societal sexism", while most non-feminists interpret it as meaning "all men oppressing all women").
Of course, all this is purely speculative. And the causation might go the other way: instead of adopting a high-cost idea signalling one's membership in the group, it might be that high-cost ideas tend to create groups, because low-cost ideas tend to be adopted by large numbers of people.
I'd like a separate Less Wrong readthrough because I don't have a Reddit account and don't want to acquire one for the sole purpose of the readthrough (because then I'll comment on Reddit, and I have quite enough time-wasting things to do on the Internet already :) ).
Where are you? I'm in Fort Lauderdale and the Tampa area. If we're near each other maybe we could arrange one of those meetup thingies...
I'm another classic brilliant-at-age-ten kid. The biggest problem I experienced related to being considered smart rather young was that a lot of my sense of self-worth got tied up in being the smartest kid in the room. This is suboptimal-- not only does it lead to the not asking stupid questions issue, but it also means that as soon as I was in a situation in which I wasn't smart about something, I felt like I had no worth as a human being whatsoever. (Possible confounding variable: I had depression.)
The closest thing to a solution I've found is to try to derive my self-worth from multiple sources. I am worth something as a human being not simply because of intellectual achievements, but also because I have friends who like me, I give to charity, I refused to give up. I don't know how well this will work for other people, though.
The other big problem I encountered is that I tended to automatically give up if I wasn't immediately good at something; this is why, among other reasons, I have a roughly ninth-grade understanding of math, even though I've taken calculus. (I've read studies that suggest that that's common among children praised for traits instead of actions; I'm away from JSTOR and my psych textbooks at the moment, but if someone would like a citation then I can dig it up in a week or so.) My solution was to grade myself on process instead of achievement: I defined success not as "learning two new songs" but as "practicing guitar for half an hour every week." My other solution was to work to overcome the ugh fields around activities I'm generally not good at, and to redefine those fields within my brain as "cool stuff I haven't learned yet", not "stuff I can't do."
Hi everyone! I'm Ozy.
I'm twenty years old, queer, poly, crazy, white, Floridian, an atheist, a utilitarian, and a giant geek. I'm double-majoring in sociology and psychology; my other interests range from classical languages (although I am far from fluent) to guitar (although I suck at it) to Neil Gaiman (I... can't think of a self-deprecating thing to say about my interest in Neil Gaiman). I use zie/zir pronouns, because I identify outside the gender binary; I realize they're clumsy, but English's lack of a good gender-neutral pronoun is not my fault. :)
One of my big interests is the intersection between rationality and social justice. I do think that a lot of the -isms (racism, sexism, ableism, etc.) are rooted in cognitive biases, and that we're not going to be able to eliminate them unless we understand what quirks in the human mind cause them. I blog about masculism (it is like feminism! Except for dudes!) at No Seriously What About Teh Menz; right now it's kind of full of people talking about Nice-Guy-ism, but normally we have a much more diverse front page. I believe that several of the people here read us (hi Nancy! hi Doug! hi Hugh, I like you, when you say I'm wrong you use citations!).
I've lurked here for more than a year; I got here from Harry Potter and the Methods of Rationality, just like everyone else. I've made my way through a lot of the Sequences, but need to set aside some time to read through all of them. I don't know much about philosophy, math, science, or computers, so I imagine I will be lurking here a lot. :)