I think the basic problem here is an undissolved question: what is 'intelligence'? Humans, being human, tend to imagine a superintelligence as a highly augmented human intelligence, so the natural assumption is that regardless of the 'level' of intelligence, skills will cluster roughly the way they do in human minds, i.e. having the ability to take over the world implies a high posterior probability of having the ability to understand human goals.
The problem with this assumption is that mind-design space is large (<--understatement), and the prior probability of a superintelligence randomly ending up with ability clusters analogous to human ability clusters is infinitesimal. Granted, the probability of this happening given a superintelligence designed by humans is significantly higher, but still not very high. (I don't actually have enough technical knowledge to estimate this precisely, but just by eyeballing it I'd put it under 5%.)
In fact, autistic people are an example of non-human-standard ability clusters, and even that's only a tiny deviation on the scale of mind-design space.
As for an elevator pitch for this concept, try something like: "just because evolution happened to design our brains to be really good at modeling human goal systems doesn't mean all intelligences are good at it, regardless of how good they might be at destroying the planet".
Looking for advice on something it seems LW can help with.
I'm currently part of a program that trains highly intelligent people to be more effective, particularly with regards to scientific research and effecting change within large systems of people. I'm sorry to be vague, but I can't actually say more than that.
As part of our program, we organize seminars for ourselves on various interesting topics. The upcoming one is on self-improvement, and aims to explore the following questions: Who am I? What are my goals? How do I get there?
Naturally, I'm of the opinion that rationalist thought has a lot to offer on all of those questions. (I also have ulterior motives here, because I think it would be really cool to get some of these people on board with rationalism in general.) I'm having a hard time narrowing down this idea to a lesson plan I can submit to the organizers, so I thought I'd ask for suggestions.
The possible formats I have open for an activity are a lecture, a workshop/discussion in small groups, and some sort of guided introspection/reading activity (for example just giving people a sheet with questions to ponder on it, or a text to reflect on).
I've also come up with several possible topics: How to Actually Change Your Mind (ideas on how to go about condensing it are welcome), practical mind-hacking techniques and/or techniques for self-transparency, or just information on heuristics and biases because I think that's useful in general.
You can also assume the intended audience already know each other pretty well, and are capable of rather more analysis and actual math than average.
Ideas for topics or activities are welcome, particularly ones that include a strong affective experience, since those are generally better at getting people to think about this sort of thing for the first time.
It also depends on the jeans. Some jeans are, for some reason, more likely to smell after being worn just once. I have no idea why, but several people I know have corroborated this independently.
Map and territory - why is rationality important in the first place?
Alright, that works too. We're allowed to think differently. Now I'm curious, could you define your way of thinking more precisely? I'm not quite sure I grok it.
So, essentially, there isn't actually any way of getting around the hard work. (I think I already knew that and just decided to go on not acting on it for a while longer.) Oh well, the hard work part is also fun.
This appears to be a useful skill that I haven't practiced enough, especially for non-proof-related thinking. I'll get right on that.
*reads the first essay and bookmarks the page with the rest*
Thanks for that, it made for enjoyable and thought-provoking reading.
I don't really have good definitions at this point, but in my head the distinction between verbal and nonverbal thinking is a matter of order. When I'm thinking nonverbally, my brain addresses the concepts I'm thinking about and the way they relate to each other, then puts them to words. When I'm thinking verbally, my brain comes up with the relevant word first, then pulls up the concept. It's not binary; I tend to put it on a spectrum, but one that has a definite tipping point. Kinda like a number line: it's ordered and continuous, but at some point you cross zero and switch from positive to negative. Does that even make sense?
Right, that makes much more sense now, thanks.
One of my current problems is that I don't understand my brain well enough for nonverbal thinking not to turn into a black box. I think this might be a matter of inexperience, as I only recently managed intuitive, nonverbal understanding of math concepts, so I'm not always entirely sure what my brain is doing. (Anecdotally, my intuitive understanding of a problem produces good results more often than not, but any time my evidence is anecdotal there's this voice in my head that yells "don't update on that, it's not statistically relevant!")
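(A quick odds-form Bayes sketch of why that voice is only half right; the numbers here are invented purely for illustration. Suppose I start at even prior odds that my intuition is reliable, and each successful anecdote is weak evidence with a likelihood ratio of about 2:

\[
\underbrace{1:1}_{\text{prior odds}} \times \left(\frac{P(\text{success}\mid\text{reliable})}{P(\text{success}\mid\text{unreliable})}\right)^{3} = 1 \times 2^{3} = 8:1 \;\Rightarrow\; P(\text{reliable}\mid 3\text{ successes}) = \tfrac{8}{9} \approx 0.89.
\]

Three anecdotes shouldn't settle anything, but they do nudge the posterior; "not statistically significant" isn't the same as "zero evidence".)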
Does experience in nonverbal reasoning on math actually lend itself to better understanding of said reasoning, or is that just a cached thought of mine?
I'd say that my thinking about mathematics is just as verbal as any other thinking.
Just to clarify, because this will help me categorize information: do you not do the nonverbal kind of thinking at all, or is it all just mixed together?
Could you please explain what you mean by "correct" and "accurate" in this case? I have a general idea, but I'm not quite sure I get it.
I only got to a nonverbal level of understanding of advanced math fairly recently, and the first time I experienced it I think it might have permanently changed my life. But if you dream about math...well, that means I still have a long way to go and deeper levels of understanding to discover. Yay!
Follow-up question (just because I'm curious): how do you approach math problems differently when working on them from the angle of engineering, as opposed to pure math?
I have a question for anyone who spends a fair amount of their time thinking about math: how exactly do you do it, and why?
To be specific, I've tried thinking about math in two rather distinct ways. One is verbal and involves stating terms, definitions, and the logical steps of inference I'm making, in my head or out loud, as I frequently talk to myself during this process. This type of thinking is slow, but it tends to work better for actually writing proofs and when I don't yet have an intuitive understanding of the concepts involved.
The other is nonverbal and based on understanding terms, definitions, theorems, and the ways they connect to each other on an intuitive level (note: this takes a while to achieve, and I haven't always managed it) and letting my mind think it out, making logical steps of inference in my head, somewhat less consciously. This type of thinking is much faster, though it has a tendency to get derailed or stuck and produces good results less reliably.
Which of those, if any, sounds closer to the way you think about math? (Note: most of the people I've talked to about this don't polarize it quite so much and tend to do a bit of both, i.e. thinking through a proof consciously but solving potential problems that come up while writing it more intuitively. Do you also divide different types of thinking into separate processes, or use them together?)
The reason I'm asking is that I'm trying to transition to spending more of my time thinking about math outside a classroom setting, and I need to figure out how to go about it. The fast kind of thinking would be much more convenient, but it appears to have downsides that I haven't been able to study properly due to insufficient data.
You're right, my apologies.
My value judgment about disincentives still stands, though. Religious communities have a framework for applying social and other disincentives (and incentives) in order to achieve their desired result. That framework could be useful if adapted to the purpose of promoting rationality.
Based on admittedly anecdotal evidence I'm inclined to believe this correlation exists, but I think we're interpreting its existence differently. In my view, by becoming more "religious" and providing more disincentives for deviating from norms, we can increase our cohesiveness and effectiveness, but this should only be done up to a point: as far as I can tell, the point where we as a community can no longer tolerate the disincentives. This view is based on my value judgment that disincentives for deviating from norms I find acceptable or admirable are not inherently unacceptable; rather, what's unacceptable is having too many disincentives, or disincentives that are too extreme.
I agree that this is the case in some religious communities, and that this is not necessarily the direction a rationalist community should go. (On the other hand, I have a hard time agreeing with the proposition that social pressure in favor of rationality is a bad thing, but I have yet to reach a definite conclusion on the subject.) However, I happen to be familiar with several religious communities where direct and violent pressure to conform is not the case, and it is those communities I wish to emulate.
I made no mention of control. Simply being present in all aspects of life is not the same as having control over all aspects of life. For example, if you live in a western society it's extremely probable that marketing and advertising are present in many aspects of your life, but I don't think either of us would say that the simple fact of their presence gives the marketers control over those aspects of your life.
Done, though sadly without the digit ratio due to lack of equipment. I'm a newbie and I just thought that was really cool.
Not necessarily. It's totalitarianism if said institutions do the ensuring through force, and without the consent of the disciples. However, by choosing to belong to a religious community, people choose to have institutions and members of the community remind them of the religious values.
You're right, that was uncalled for and I retract that statement.
I think this sort of thing works differently in my country (Israel) than it does in other places. Because religious and secular societies are more segregated, it's fairly common for people to affiliate themselves with a particular group due to the community's norms, customs or values rather than religious belief.
As a newbie around here: thank you, this is quite helpful.
When explaining/arguing for rationality with the non-rational types, I have to resort to non-rational arguments. This makes me feel vaguely dirty, but it's also the only way I know of to argue with people who don't necessarily value evidence in their decision making. Unsurprisingly, many of the rationalists I know are unenthused by these discussions and frequently avoid them because they're unpleasant. It follows that the first step is to stop avoiding arguments/discussions with people of alternate value systems, which is really just a good idea anyway.
Universities are not a good example of the institutions he was talking about. Durability isn't the only important factor. One of the main strengths of religious institutions is their sheer pervasiveness; by inserting itself into every facet of life, religion ensures that its disciples can't stray too far from the path without being reminded of it. Universities, sadly, are not capable of this level of involvement in the lives of communities or individuals.
In this case, rationality should seek to emulate religion by creating institutions and thus a lifestyle that makes its ideas pervasive. For example, if you could attend weekly lectures at your local "rationality church" or have those better at the art of rationality available to guide you the way priests guide Christians, becoming and staying a rationalist would be much easier and thus more accessible to the populace. This already sort of happens through the internet and meetups, but what religion has is a proven formula that builds communities around ideas, and we can definitely learn from it.
Until very recently I believed that I was completely anti-religious and took the opposing view to religion whenever the choice presented itself. I participated in a discussion on the topic and found myself making arguments I didn't actually agree with. This was mostly due to several habits I've been practicing to make me better at analyzing my own beliefs, most notably running background checks on any arguments I make to see where exactly in my brain they originate and constantly looking for loopholes in my arguments.
Because of this experience I've come to understand that most of my beliefs about religion were based more on color politics than on any rational thought process. Since breaking out of religious thinking myself a few years ago, I'd simply been aligning my beliefs with the more anti-religious side of the atheist movement.
For example, where I once automatically looked down on the choice to live in religious society, regardless of personal religious belief, I've come to realize that I actually think of this decision as more of a lifestyle choice than a religious one, and thus undeserving of my baseless criticism.