What do you mean by "made the world a worse place"? Worse than it was before democracy and liberalism started spreading, i.e. pre-1700s? Or worse today than it would have been today if democracy and liberalism hadn't spread? The first question seems easy (we're more peaceful and prosperous than the past), the second a nearly impossible counterfactual, depending heavily on what government systems and philosophies we'd have instead.
Stories are a huge way we make sense of the world. Adding a narrative sequence to the post did help me keep track of the ideas and how they fit together.
Is histocracy compatible with a secret ballot? (And for that matter, is futarchy?)
And as a separate question, would it be a good idea to keep voters' individual reliability scores secret, too? If a voter is known to have an accurate record and her opinion is public before a vote, couldn't she get overweighted, because she'll sway others' votes as well as getting more weight in the vote sum?
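(Just to make the weighting concern concrete, here's a hypothetical sketch. I'm assuming a histocracy-style tally simply sums ballots weighted by each voter's reliability score; the actual proposal may work differently, and all the voters and numbers below are made up.)

```python
# Hypothetical sketch: assume a histocracy-style tally is just a sum of
# ballots weighted by each voter's reliability score. The real proposal may
# differ; the voters and numbers here are invented for illustration.

def weighted_tally(ballots):
    """ballots: list of (vote, reliability), where vote is +1 (yes) or -1 (no)."""
    return sum(vote * reliability for vote, reliability in ballots)

ballots = [
    (+1, 0.9),  # one voter with a strong public track record
    (-1, 0.4),
    (-1, 0.4),
]

print("Passes" if weighted_tally(ballots) > 0 else "Fails")
# Prints "Passes": the reliable voter already outweighs the other two in the
# vote sum, before counting any effect her public opinion has on how they vote.
```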
It's broken for me too, in exactly the way you describe. One of the variants on the error page invites me to buy a reddit t-shirt.
I participated in the IB diploma program in 1997, in Texas. My experience was better than KPier's in several ways. I think having a skilled and experienced teacher makes all the difference. Mine wasn't a LessWrong style rationalist, but she had experience with teaching philosophy, so we got past initial naive intuitions on most of the class topics relatively soon, and I witnessed basic changes in attitude toward the nature of language and knowledge in both me and several of my classmates.
In retrospect, I think the best thing that could have been added would have been a discussion up front about how not to be confused about words. Some combo of the material in Disputing Definitions and Conceptual Analysis and Moral Theory. After that, something to undermine reliance on introspection and intuition more generally, perhaps in the context of presenting basic cognitive biases.
When I came last week (I hadn't checked here in a while) and didn't see anybody there, I thought the regular meeting was defunct. I'm glad to see it's still going. See y'all this evening!
I will attend most of the weekly Irvine meetups, at least through the end of July.
Zip code correction:
501 West 15th Street, Austin, TX 79701
Should be
501 West 15th Street, Austin, TX 78701
Isn't this already implemented, as the Anti-Kibitzer in the preferences section?
Drop the little skyline/boat grayscale image (mini-landscape.gif) that appears at the bottom of each top-level post. Original mention. Seems to have no purpose, and doesn't really fit the design theme.
Another by the same guy, more general in scope, and (in my opinion) more inspiring toward rationality: Why Didn't Anybody Tell Me?
Thanks for the (potentially) very useful post. Upvoted with pleasure, as the best thing in a while fitting the criterion: "I'd like to see more posts like this on LessWrong."
I live in California now, but my wife and I are moving to Spain in August (we haven't decided which city yet). We'll live there for a year.
I'll be coming. Thanks for putting this together again, Jennifer.
I suggest two topics:
- The recent inspiring post by Cosmos on a successful rationalist community
- some kind of structured conversation or exercise
I won't attend this one (too far from Irvine), but thanks for setting up that mailing list. Now I don't have to worry about missing a meetup via not checking the site for a while.
A very thought-provoking and well-written article. Thanks!
Your biggest conceptual jump seems to be reasoning about the subjective experience of hyperintelligences by analogy to human experiences. That is, an experience of some thought/communication speed ratio for a hyperintelligence would be "like" a human experience of that same ratio. But hyperintelligences aren't just faster. I think they'd probably be very, very different qualitatively. Who knows if the costs/benefits of time-consuming communication will be perceived in similar or even recognizable ways?
I won't be coming to this one; it's too far for me. I'll see y'all when we gather again in Orange County.
Great! I'll be there with p ≈ 0.9.
Possibly, but probably not. I have a young baby at home, so I can't be away for very long. The issue for me isn't transportation availability, but transportation time. I live about a mile from the IHOP.
If there's no better basis for choosing than the greatest convenience for the most people, my vote is for the IHOP near UCI, which is super-convenient for me. If it's there, I could come; if not, I'd very likely have to miss it.
I haven't talked about how to practice it yet. I was planning on doing that in another post that uses the conceptual framework of this one. Do you think it should all be a single post?
Yes, or else posted very soon. In any case, if the content ends up separate, please link each post to the other.
Great idea for a post. I've really enjoyed reading the comments and discussion they generated.
Does it make sense to speak of probabilities only when you have numerous enough trials?
No, probability theory also has non-frequency applications.
Can we speak of probabilities for singular, non-repeating events?
Yes. This is the core of a Bayesian approach to decision making. The usual interpretation is that the probabilities reflect your state of knowledge about events rather than frequencies of actual event outcomes. Try starting with the LW wiki article on Bayesian probability and the blog posts linked therefrom.
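If a concrete example helps, here is a minimal sketch of that interpretation: a probability assigned to a one-off event, updated by Bayes' rule when new evidence comes in. The event, prior, and likelihoods are made up for illustration, not taken from anywhere in particular.

```python
# A minimal sketch of treating a probability as a state of knowledge about a
# single, non-repeating event, updated with Bayes' rule. The event, prior,
# and likelihoods below are invented for illustration.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) for a one-off event."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Hypothesis: "this particular candidate wins this particular election" --
# something that will only ever happen once, so there's no frequency to count.
prior = 0.30                  # initial degree of belief
p_lead_if_wins = 0.80         # chance of seeing a polling lead if she wins
p_lead_if_loses = 0.20        # chance of seeing the same lead if she loses

posterior = bayes_update(prior, p_lead_if_wins, p_lead_if_loses)
print(f"Belief after seeing the polling lead: {posterior:.2f}")  # about 0.63
```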
It sort of seems like a critique of the terminology if dark arts tend not to be deployed by the dark side.
I agree. I think the dark side terminology is based on the "dark side of the force" from Star Wars, which has connotations of a personal fall into temptation, and the dark arts refers to magic of evil intent or effect, perhaps from Harry Potter, where it is used by evil but not self-deceiving villains. This could explain the inconsistency.
I think you have really helped to clarify the go side of this analogy, and I'm grateful for your description of sabaki play and what makes it different from trick moves. I think the connections you draw to rationality and debate are pretty good.
I'm not sure about this, but I think there's another sense in which the term "dark arts" is used on LessWrong: using one's knowledge of common cognitive biases and other rationality mistakes to get people to do or believe something. That is, fooling others, not fooling yourself. For the go analogy, I think this is most closely related to trick (non-obviously suboptimal) moves. Or perhaps the technically unsound but necessary aggressive moves used by white in handicap games to which black often responds with too much humility.
Small advantages escalate
Actually, one thing I enjoy about go is that small advantages don't escalate, at least not nearly as much as they do in chess. In go, if you make a mistake early that puts you behind by, say, 30-40 points, the place where you made that mistake usually interacts with the rest of the board little enough that you're not hugely disadvantaged elsewhere, and if you play better in the time and space that is left, you can catch up. But as you say about chess, I'm not sure if this is a very generalizable idea, at least when it comes to rationality.
To become good at poker it's crucial to be able to distinguish between bad luck and play mistakes. You have to keep your cool when your opponent makes bad moves and wins anyway....In life, we are very often faced with situations where we have to analyze to what extent something is the result of our own actions and to what extent it is the result of factors outside our control.
I think this sounds like a valuable lesson to learn, and as you say, the kind of thing you couldn't get from a deterministic game. And as with go, I suspect that some lessons from poker sink in better when you experience them in play than when you just read them. I would be interested to read more about it, if you (or any other poker players out there) have the time and interest to write a post on rationality in poker or other games with a chance component. I have a feeling that there are lessons related to probability and quantifying your beliefs that could be drawn, or perhaps stories from games that can be used as illustrations of probabilistic or Bayesian reasoning.
There's an interesting essay by William Pinckard that contrasts the philosophical perspectives of the gameplay of three ancient games (backgammon, chess, and go), which says in summary: backgammon is man-vs-fate, chess is man-vs-man, and go is man-vs-self.
What's your playing strength in Go? The article reads a bit like it's either targeted too much at people without an understanding of go, or written by someone with a playing strength >5 kyu?
I'm about 12k on KGS. I definitely aimed the article at people who knew nothing about go, but I think it's also interesting that you could tell that I'm not a very strong player myself. I would be interested to know if you have found generalizable lessons which only came after you achieved a deeper understanding of the game.
But if the two players disagree, the solution is simply to resume play. That's true for some Go rule sets, but it isn't true for Japanese-style rules.
According to the Wikipedia article on the rule sets' treatment of the end of the game, all the sets actually say that you should play things out, capturing dead stones. I guess I've only ever played with the more convenient practice of mutual agreement about dead stones. It happens this way in every club and internet server I've ever played at, even when using Japanese rules. So in this sense, the actual experience of playing go does reinforce the idea that new evidence is the arbiter of conflicting beliefs.
I think you're right that most goal-directed activity, especially formalized pursuits like abstract board games, encourages rational thinking. Nevertheless, I have gotten the feeling that go is particularly good in this regard, at least in my experience. I played chess for a long time, and have tried many other types of formal table and online games, and of them all, go seems to have the strongest tendency to show me how bad habits of thinking work against me.
I would love to see more articles like this one explicitly illustrating how other activities can be approached as a means of rationality practice.
(Perhaps you have had experience gambling in the paper clip casino to increase your hoard, which has given you valuable practice in understanding probability?)
OK, I see, thanks for explaining it. I'd never really heard of this difference before. I'd have to say that the more subtle moves that risk only a little and feel out your opponent sound more akin to the "dark arts" in rationality.
Again, thank you. I've made another fix. As you can see, life and death problems are not my strength!
The "dark side" has an analogy in go. It is tempting to play moves that you know don't work because you think your opponent won't be able to figure out the correct response. It is usually not obvious to beginners that doing this is really holding them back.
Good point! I thought about including this connection between trick moves and the dark arts, actually. They don't seem quite parallel to me, but there are definitely similarities. If anybody is interested, you can read more about trick moves here.
it contains approximately 10,000 times the maximum safe dosage of "in principle."
Great quote.
One aspect of go which is present on LW but not true about rationality in general (and so not part of the article) is a culture of welcoming and mentoring. Good players are honored by teaching beginners, and the handicap system facilitates interesting teaching games. You should not worry about bothering (go-playing) humans with any lack of sophistication. Not all players have this attitude, of course, but surprisingly many do. The place on the internet I've found that best reflects this welcoming culture of go is the Kiseido Go Server.
Also, I should note that I've been advised by strong players on a few occasions not to play against computer opponents much, especially those set to easier difficulty levels, because it can build bad habits.
Thanks for the feedback. You're right: for players with more than beginning skill, I agree that Fig 3 is alive (and Peter de Blanc is right that Fig 2 is not "unconditionally alive") in the original versions of the figures. I've revised Figures 2 and 3 accordingly. (So the rest of you shouldn't worry if this comment thread seems confusing! If you're interested, the original versions are here and here.)
In choosing examples, I was aiming for arrangements that visually conveyed the three states of close surrounding, surrounding with internal structure, and something intermediate. The goal is to be able to talk about "life" and "death" as alternative states the game might be in, like alternative hypotheses of reality, to serve the go/rationality analogy, without having to explain the rules. I hope the revised versions still do this, while making their labels more correct.
This is a nice observation, and I think it's true about both go and rationality. Wish I'd thought of it for the post!
Yes, this is true, but it's also true that some kinds of imitation can take you far even if you don't understand them. Personally, I try to play with good shape, and I have seen it pay off, but I don't understand most of the ways that this helps me. A good parallel in rationality might be learning self-doubt. This can help, even if one doesn't know the myriad ways people have of fooling themselves which it is intended to thwart.
A couple thoughts on places to look for ideas, places where people have probably been thinking about similar challenges:
- Interstellar Travel. There's a lot of speculation about feasibility here, and I think people generally assume the need for some sort of long-term, low-power cryogenic preservation. They do assume access to interstellar vacuum, though.
- DNA "arks" and similar biodiversity libraries. I haven't heard of anything in this space looking at zero- or low-maintenance preservation, but maybe there's a paranoid fringe?
I see what you mean. It's a matter of what threat you have in mind. I'm thinking mainly of the hostility of a pretty much intact society to cryonics, and of how to take your idea of protecting preserved people by using the notion of "respect for the dead" further, also incorporating the idea of honoring the dead by maintaining shrines/graves, etc.
You're totally right that if there's a global depression or civilizational collapse, then the threat of thawing comes more from inability to maintain rather than unwillingness or opposition.
Maybe it would help to split the post, or maybe organize this discussion, to investigate these ideas separately? It seems that engineering speculation about zero-maintenance cryonics is interesting and useful, and that using the "grave" analogy to make cryonics more acceptable and safe from interference is also interesting, but different issues and constraints arise for each of them.
... and a ΔT of 220 °C ...
With liquid nitrogen at -196 °C and the average temp in the places you suggest well below freezing (a few minutes of googling suggests it wouldn't be hard to find an average annual temp of -20 °C), I think you could use a more optimistic ΔT of about 176 °C.
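(For the arithmetic I'm using there: ΔT = T_ambient − T_nitrogen = (−20 °C) − (−196 °C) = 176 °C.)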
Why limit yourself to no maintenance at all in your feasibility speculations? Tending graves is common across cultures. As long as you're spinning a tank of liquid nitrogen as a "grave", why not spin a nitrogen topoff as equivalent to keeping the grass trimmed or bringing fresh flowers?
Does anybody know what is depicted in the little image named "mini-landscape.gif" at the bottom of each top level post, or why it appears there?
that's much better than the nothing that will befall those who would otherwise have been totally lost
I'm curious to know why you make this judgment. I imagine future people choosing between making a new person and making an as-similar-as-a-relative copy of a preserved person. In both cases, one additional person gets to exist. In both cases, that person is not somebody who has ever existed before. In neither case does a future person get to revive a loved one, because the result will only be somebody similar to that loved one. Reviving the preserved person is better for the preserved person, I guess, but making a new person is better for the new person. Once you've lost continuity of identity, you've lost any reason why basing new people on recordings is better than making new people the old-fashioned way.
Put another way, the nothing that will befall the totally lost feels exactly as bad to me as the nothing that will befall the future unborn whom they displace.
I know that ethical reasoning about potentially-existing people is hard, and I'm not too clear on this, so I'd like to know why you feel the way you do.
Thanks for the well-written article. I enjoyed the analogy between statistical tools and intuition. I'm used to questioning the former, but more often than not I still trust my intuition, though now that you point it out, I'm not sure why.
I would very much like to see a canon develop for knowledge that LWers generally agree upon
LW is working on it, and you can help!
This is an interesting way of thinking about citizenship and immigration, one which I think is useful. I don't think I've ever thought about the way other countries' immigration rules regard me. Thanks for the new thought.
This is a very well-written post which I enjoyed reading quite a bit. The writing is clear, the (well-cited!) application of ideas developed on LW to the problem is great to support further building on them, and your analysis of the conventional wisdom regarding disease and blameworthiness as a consequence of a deontological libertarian ethics rang true for me and helped me understand my own thinking on the issue better.
Thanks for the care you put into this post.
This is some of the best writing on online societies I've ever read. Thanks for the link and excerpt. I think this is worthy of a top-level post (if we want top-level posts to ever go meta), because I'm worried for LessWrong.
However, despite these legal changes, it's not correct to say that the ring doesn't cost the proposer anything.
You've changed my mind: there is a real cost to the ring. I considered the ring a thing equal in value to its price but didn't think it through enough to realize that after it's bought it only retains much value (as sentimental value to the couple) if the proposal succeeds. Thanks for the links; I had no idea diamonds were so over-priced.