What will rationality look like in the future?
post by DataPacRat · 2012-02-03T01:28:37.168Z · LW · GW · Legacy · 21 comments
One of the standard methods of science-fiction world-building is to take a current trend and extrapolate it into the future, and see what comes out. One trend I've observed is that over the last century or so, people have kept coming up with clever new ways to find answers to important questions - that is, developing new methods of rationality.
So, given what we currently know about the overall shape of such methods, from Gödel's incompleteness theorems to Kolmogorov complexity to the various ways to get around Prisoner's Dilemmas... then, at least in a general science-fictional world-building sense, what might we be able to guess or say about what rationalists will be like in, oh, 50-100 years?
21 comments
Comments sorted by top scores.
comment by see · 2012-02-03T02:59:35.613Z · LW(p) · GW(p)
If a good rationalist could predict with reasonably high probability what methods good rationalists would use in 50-100 years, wouldn't said rationalist immediately update to use those methods now, invalidating his own prediction?
Replies from: JoshuaZ, None
↑ comment by JoshuaZ · 2012-02-03T03:46:52.211Z · LW(p) · GW(p)
Well, one could pick specific issues that one thinks we'll understand better. For example, we might have a better understanding of certain cognitive biases, or better tactics for dealing with them. This is similar to how someone in 1955 could have made predictions about space travel even if they couldn't design a fully functioning spacecraft.
↑ comment by [deleted] · 2012-02-03T14:21:11.445Z · LW(p) · GW(p)
Not if those options are currently too computationally difficult to run. For instance, I'm currently considering the prediction "In the future, good rationalists will use today's rational methods of thinking, but they will use them faster and with more automation and computer assistance."
To give an example, imagine that a person currently posting on Less Wrong were much older and still posting about rationality, and that person had a little helper script that would interject into an argument you were about to make with "Is this part here an appeal to emotion?"
You could retranslate that into the concept "Thoroughly recheck all of your arguments to make sure you aren't making basic mistakes." and suggest that right now. It's good advice. I try to do it, but I don't do it enough, and I still miss things. I think AnnaSalamon pointed out that one thing she noticed from her work writing the rationality curriculum is that she was doing that more often. So it's certainly an improvable skill.
But right now (or even if that planned rationality curriculum works brilliantly), a rationalist would still have to reread posts or review thoughts and find those mistakes manually. It seems like this could be automated in the future, for at least some types of basic mistakes. I would not at all be surprised if some mistakes were harder to find than others. So in addition to spell check and grammar check, in the future we might have fallacy check and/or bias check, with the same types of caveats and flaws that those automated checkers have had during their development.
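To make the spell-check analogy concrete: a first-generation "fallacy check" might be nothing more than shallow pattern matching, with all the false positives that implies. A minimal sketch, where the marker phrases and fallacy categories are invented guesses at what a real checker would need:

```python
import re

# A deliberately naive "fallacy check" in the spirit of early spell
# checkers: flag phrases that sometimes mark a fallacy, accepting that
# many flags will be false positives a human must review.
FALLACY_MARKERS = {
    "appeal to popularity": [r"\beveryone knows\b", r"\bnobody believes\b"],
    "appeal to authority": [r"\bexperts agree\b", r"\bscientists say\b"],
    "false dichotomy": [r"\bthe only alternative\b", r"\bit's either .+ or\b"],
}

def fallacy_check(text):
    """Return (category, matched phrase) pairs for suspect passages."""
    hits = []
    for category, patterns in FALLACY_MARKERS.items():
        for pattern in patterns:
            for match in re.finditer(pattern, text, re.IGNORECASE):
                hits.append((category, match.group(0)))
    return hits

draft = "Everyone knows this policy failed; the only alternative is mine."
for category, phrase in fallacy_check(draft):
    print(f"Possible {category}: {phrase!r}")
```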
Now that I've actually laid out the prediction, I do find it compelling. But that might just be because I can't see any obvious flaws in the passes I made to recheck it, and there is a limited amount of time I have to review it before either the idea seems stale, or I want to move on, or I feel like I have checked it enough without seeing anything, so I'm fairly confident its accuracy would be too difficult to improve.
Edit: Corrected spelling. (After mentioning spell checkers and their caveats and flaws in my post, one of which I have just been reminded of is that they don't fix usernames.)
Replies from: roystgnr
↑ comment by roystgnr · 2012-02-04T19:37:11.959Z · LW(p) · GW(p)
Having a few very good rationalists applying "fallacy check" and "bias check" to all their own essays would be wonderful... but just imagine the implications of having many mediocre rationalists regularly applying "fallacy check" and "bias check" to their politicians' essays and speeches.
I'd love to see what kind of feedback that provides to the politicians' speechwriters. "Well, sir, we could say that, and it could give us a nice brief popularity boost, but would that be worth the blowback we get once everybody's talking about how we sent their fallacy-meters off the charts?"
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2012-02-05T21:33:37.202Z · LW(p) · GW(p)
but just imagine the implications of having many mediocre rationalists regularly applying "fallacy check" and "bias check" to their politicians' essays and speeches.
Their ability to do this without getting mind-killed is very much open to question.
comment by scientism · 2012-02-03T19:25:05.807Z · LW(p) · GW(p)
I have this dream where you have a supercomputer and you feed it all the world's academic papers and so forth, and, using a set of heuristics, it highlights all the parts of the documents that have markers for various confusions, biases, and errors; then it ranks the documents according to some sort of rationality index and traces all the connections through citations, etc., to produce a complete map of rationality in the sciences. You can immediately see where the clearest thinking is being done, drill down to discover the most rational researchers, and even see highlighted sentences that display biases, confusions, errors, etc. All without a hint of intelligence.
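None of that first pass would need anything clever. A toy sketch of the idea, where the marker phrases, scoring rule, and damping weight are all invented for illustration:

```python
# Toy "rationality index": score each paper by counting heuristic
# markers of confusion/bias, then trace citations to see where the
# clearest thinking clusters. No intelligence required.

CONFUSION_MARKERS = ["obviously", "it is well known", "clearly", "must surely"]

def marker_score(text):
    """Crude per-document score: fewer suspect markers => higher score."""
    hits = sum(text.lower().count(marker) for marker in CONFUSION_MARKERS)
    return 1.0 / (1.0 + hits)

def rationality_index(papers, citations, damping=0.5):
    """Blend each paper's own score with the scores of papers it cites.

    papers:    {paper_id: full_text}
    citations: {paper_id: [cited_paper_ids]}
    """
    own = {pid: marker_score(text) for pid, text in papers.items()}
    index = {}
    for pid, score in own.items():
        cited = [own[c] for c in citations.get(pid, []) if c in own]
        neighborhood = sum(cited) / len(cited) if cited else score
        index[pid] = damping * score + (1 - damping) * neighborhood
    return index

papers = {
    "A": "Clearly this is obviously true, as it is well known.",
    "B": "We measured X under conditions Y and report uncertainty Z.",
}
citations = {"A": ["B"], "B": []}
print(sorted(rationality_index(papers, citations).items(),
             key=lambda kv: -kv[1]))  # clearest thinking ranks first
```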
Replies from: Alex_Altair
↑ comment by Alex_Altair · 2012-02-04T00:01:24.518Z · LW(p) · GW(p)
I wish I had dreams that awesome and complicated.
comment by TimS · 2012-02-03T03:14:09.873Z · LW(p) · GW(p)
There are lots of open social science-ish problems (e.g., optimal employee management, clinical psychology, effective political organizing, child raising). I expect that 50-100 years from now experts will have a much better grasp of the best responses to these problems, roughly in parallel to how experts have a better grasp of heart surgery than they did 50 years ago. Likewise, I expect public understanding of the solutions will be at the level of today's public understanding of heart surgery - the average reader of the New York Times knows the basics of what it is, why you'd do it, and has a very basic idea of problems that could arise (e.g., knows organ rejection is possible).
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2012-02-03T03:57:52.322Z · LW(p) · GW(p)
I'm not sure; attempts to solve social science-ish problems tend to get derailed by status signalling in ways that heart surgery does not.
Replies from: TimS
↑ comment by TimS · 2012-02-03T04:10:06.049Z · LW(p) · GW(p)
Every time I go to renew my vehicle registration or renew my driver's license, the facility is better streamlined. That's the result of social science research. I just think we'll keep getting better at it, so more and more will be accepted at the level of traditional medicine. That's not to say that mindkilling won't continue to be a huge risk in those fields.
Replies from: roystgnr
↑ comment by roystgnr · 2012-02-04T19:27:38.132Z · LW(p) · GW(p)
Every time I go to renew my vehicle registration or renew my driver's license, the facility is better streamlined. That's the result of social science research.
Are you sure? Bureaucratic record keeping is almost the most inherently computerizable, networkable, software-automatable task I can imagine, and as it happens we have been making some incredible strides in computers, networking, and software for the past few decades...
Replies from: TimS
↑ comment by TimS · 2012-02-04T20:08:53.490Z · LW(p) · GW(p)
The advances in queuing people efficiently are not a product of advancements in software or hardware. In other words, I think of the insights that led to the creation of Disney's FastPass as social science advancements.
comment by Wrongnesslessness · 2012-02-03T05:48:35.110Z · LW(p) · GW(p)
The powers of instrumental rationality in the context of rapid technological progress and the inability/unwillingness of irrational people to listen to rational arguments strongly suggest the following scenario:
After realizing that turning a significant portion of the general population into rationalists would take much more time and resources than simply taking over the world, rationalists will create a global corporation with the goal of saving humankind from the clutches of zero- and negative-sum status games.
Shortly afterwards, the Rational Megacorp will indeed take over the world and the people will get good government for the first time in the history of the human race (and will live happily ever after).
Replies from: Normal_Anomaly, hamnox
↑ comment by Normal_Anomaly · 2012-02-04T15:14:23.318Z · LW(p) · GW(p)
I find it very unlikely that this will happen, mostly due to a lack of sufficiently effective rationalists with an interest in taking over the world directly and the moral fiber to provide good government once they do so. But I think it would be awesome.
↑ comment by hamnox · 2012-02-06T01:11:56.923Z · LW(p) · GW(p)
This sounds remarkably like my dream. But I figured that we'd take over some of the world, institute mandatory rationality training in that part, use our Nation of Rationalists to take over the rest of the world, and then go out and start colonizing space.
comment by djcb · 2012-02-04T15:16:09.728Z · LW(p) · GW(p)
Interesting question... I'm sure that with our BrainPals™ (as seen in John Scalzi's Old Man's War series) we'll be able to better quantify alternatives, as well as take more data into consideration. So, if someone on the street asks you for something, you "intuitively" sense that there's a 12% chance he wants to mug you, based on certain parameters. Of course, that's just an improved application of a known method.
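To make that concrete: under the hood, such an "intuitive" estimate might just be a handful of observed cues combined by naive log-odds updating. A toy sketch, where the base rate, cue names, and likelihood ratios are entirely made up:

```python
# A toy sketch of a BrainPal-style threat estimate: combine a prior
# with independent cue likelihood ratios via naive odds updating.
# Every number and cue name below is invented for illustration.

PRIOR_ODDS = 0.01 / 0.99  # assumed 1% base rate of mugging attempts

# Hypothetical likelihood ratios: P(cue | mugger) / P(cue | not mugger)
LIKELIHOOD_RATIOS = {
    "deserted_street": 4.0,
    "late_night": 3.0,
    "asks_for_wallet_directly": 1.2,
}

def mugging_probability(observed_cues):
    """Multiply in one likelihood ratio per cue, assuming independence."""
    odds = PRIOR_ODDS
    for cue in observed_cues:
        odds *= LIKELIHOOD_RATIOS.get(cue, 1.0)
    return odds / (1.0 + odds)

p = mugging_probability(["deserted_street", "late_night"])
print(f"{p:.0%} chance this is a mugging")  # ~11% with these made-up numbers
```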
Taking a step back, it's also interesting to see what will happen to rationalism in the general population -- are we becoming more rational over time? Or is it just something for a small group? I think that today the methods of rationality are at least available to more people (some of the smartest people in previous ages could have made good use of that!), but that doesn't mean humanity as a whole gets more rational.
comment by faul_sname · 2012-02-03T01:52:01.735Z · LW(p) · GW(p)
Assuming no singularity/other game-changer?
Replies from: DataPacRat
↑ comment by DataPacRat · 2012-02-03T01:55:24.462Z · LW(p) · GW(p)
Assuming no detectable singularity, anyway. Or, if you think one's inevitable, then feel free to consider just the time leading up to it.
comment by loveandthecoexistenc · 2012-02-04T01:50:47.396Z · LW(p) · GW(p)
If some specific projects (...Rationality Curriculum) succeed, rationality will be much more widespread and, as a result, much less well-defined.
And 50 years is a bit too much for any notable concentrations of probability.