The Long Now

post by Nic_Smith · 2010-12-12T01:40:07.470Z · LW · GW · Legacy · 19 comments


It has surprised me that there's been very little discussion of The Long Now here on Less Wrong, as there are many similarities between the groups, even though their approaches and philosophies are quite different. At a minimum, I believe a general awareness might be beneficial. I'll use the initials LW and LN below. My perspective on LN is simply that of someone who has kept an eye on their website from time to time and read a few of their articles, so I should admit up front that my knowledge is a bit shallow (which is, in fact, one reason I bring the topic up for discussion).

Similarities

Most critically, long-term thinking is a cornerstone of both LW and LN thought: explicitly, as the stated goal of LN, and implicitly here on LW whenever we talk about existential risk or technology that is decades or more away. It's not clear whether there's any overlap between the commenters at LW and the membership of LN, but there are definitely a large number of people "between" the two groups -- statements by Peter Thiel and Ray Kurzweil have been recent topics on the LN blog, and Danny Hillis, who co-founded LN, has been involved in AI and philosophy of mind. LN has Long Bets, which I would loosely describe as being to PredictionBook what InTrade is to Foresight Exchange. LN also apparently had a presence at some of SIAI's past Singularity Summits.

Differences

Signaling: LN embraces signaling like there's no tomorrow (ha!) -- their flagship project, after all, is a monumental clock designed to last ten thousand years, the goal of which is to "lend itself to good storytelling and myth" about long-term thought. Their membership cards are stainless steel. Some of the projects LN is pursuing seem to have been chosen mostly because they sound awesome, and even those that weren't are done with some flair, IMHO. In contrast, the view among LW posts seems to be that signaling is in many cases a necessary evil, in other cases just an evolutionary leftover, and that reducing it is a potential source of efficiency gains. There may be something to be learned here -- we already know FAI would be an easier sell if we described it as a project to create robots that are Presidents of the United States by day, crime-fighters by night, and cat-people by late-night.

Structure: While LW is a project of SIAI, they're not the same, so by extension the comparison between LN and LW is just a bit apples-to-kumquats. It would be a lot easier to compare LW to an LN discussion board, if one existed.

The Future: Here on LW, we want our nuclear-powered flying cars, dammit! Bad future scenarios discussed on LW tend to be irrevocably and undeniably bad -- the world is turned into Tang or paperclips and no life exists anymore, for example. LN seems more concerned with recovery from, rather than prevention of, "collapse of civilization" scenarios; many of the projects both undertaken and linked to by LN focus on preserving knowledge through such a scenario. Given the overlap between the LW community and cryonics, SENS, etc., the median LW poster's mental relationship with the future also seems more personal and less abstract.

Politics: The predominant thinking on LW seems to be a (very slightly left-leaning) technolibertarianism, although since the site is open to anyone who wanders in from the Internet, there's a lot of variation (if either SIAI or FHI has an especially strong political stance per se, I haven't noticed it). There's also a general skepticism here regarding the soundness of most political thought and of many political processes. LN seems further left on average and more comfortable with politics in general (although calling it a political organization would be a stretch). In keeping with this, LW seems to place more emphasis on individual decision-making and self-improvement than LN does.

Thoughts?

19 comments


comment by [deleted] · 2010-12-12T02:14:06.135Z · LW(p) · GW(p)

Never heard of it before. My first impression: I wish there were more science, but I liked this quote.

If you ask my eight-year-old about the Future, he pretty much thinks the world is going to end, and that’s it. Most likely global warming, he says—floods, storms, desertification—but the possibility of viral pandemic, meteor impact, or some kind of nuclear exchange is not alien to his view of the days to come. Maybe not tomorrow, or a year from now. The kid is more than capable of generating a full head of optimistic steam about next week, next vacation, his tenth birthday. It’s only the world a hundred years on that leaves his hopes a blank. My son seems to take the end of everything, of all human endeavor and creation, for granted. He sees himself as living on the last page, if not in the last paragraph, of a long, strange and bewildering book. If you had told me, when I was eight, that a little kid of the future would feel that way—and that what’s more, he would see a certain justice in our eventual extinction, would think the world was better off without human beings in it—that would have been even worse than hearing that in 2006 there are no hydroponic megafarms, no human colonies on Mars, no personal jetpacks for everyone. That would truly have broken my heart. When I told my son about the Clock of the Long Now, he listened very carefully, and we looked at the pictures on the Long Now Foundation’s website. “Will there really be people then, Dad?” he said. “Yes,” I told him without hesitation, “there will.”

I find it hard to "believe" in a technological future too. I have that "last paragraph of a long book" feeling. But they're right; it's probably not healthy.

Replies from: sketerpot, Desrtopa
comment by sketerpot · 2010-12-12T02:30:00.144Z · LW(p) · GW(p)

I used to have that the-world-is-ending feeling, too. I picked it up by osmosis. Environmentalists were talking like we were going to run out of natural resources any day now (and often predicting disaster just a few years ahead). A lot of people casually mentioned that they expected the world to end any day now from nuclear war, although that might have been exacerbated by the fact that I read books which were written before the Soviet Union broke up. But nuclear tensions have been steadily ebbing, and environmental doomsday predictions have consistently failed to come true, and now I'm feeling optimistic enough to think about actually dealing with a technological future.

By the way, this reminds me of one of the fake job application cover letters from Joey Comeau's book Overqualified, which probably qualifies as a rationality quote in its own right, if only for the brilliant last paragraph. It hails from an alternate universe where Greenpeace isn't stupid and counterproductive.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-12-12T15:41:04.309Z · LW(p) · GW(p)

I don't have links handy, but I've seen quite a bit of discussion to the effect that science fiction, especially American science fiction, has become very pessimistic.

comment by Desrtopa · 2010-12-13T17:32:42.386Z · LW(p) · GW(p)

I don't expect humanity to end in the near future, but I can definitely relate. I've internalized the idea of a technological singularity, not in the sense that everything will become incredibly awesome due to rapid technological advancement, but in the sense that attempting to make predictions beyond a certain horizon in the not-so-distant future is absolutely futile. I don't think "mind uploading" or "interstellar colonization"; I just get... blank.

comment by shokwave · 2010-12-12T08:45:26.123Z · LW(p) · GW(p)

One major difference: LessWrong has a common (if not prevailing) view that the billion-year-scale future of most of the universe will be decided in the decade-scale future. In a sense, the last page of one book and the first page of a much greater one.

The Long Now seems less like "today's thinking will define the future" and more like "today's thinking needs to be extensible into the future."

comment by lucidfox · 2010-12-13T02:49:06.043Z · LW(p) · GW(p)

Here on LW, we want our nuclear-powered flying cars, dammit!

I don't.

Flying cars are a silly SF idea. Even if we had the technological ability to make them, remember that the social success of a technology often depends on politics and the human factor rather than on the technology itself. We have enough problems regulating traffic on 2D roads -- imagine what kinds of accidents flying cars would be capable of.

Replies from: jaimeastorga2000, wedrifid, JoshuaZ
comment by jaimeastorga2000 · 2010-12-14T01:22:14.271Z · LW(p) · GW(p)

I've always said that we already have flying cars - they are called "helicopters" and are currently too expensive for most people to afford.

comment by wedrifid · 2010-12-13T04:17:21.523Z · LW(p) · GW(p)

We have enough problems regulating traffic on 2D roads -- imagine what kinds of accidents flying cars would be capable of.

It could well be safer, assuming appropriate navigation technology. Spreads out the congestion.

Replies from: lucidfox
comment by lucidfox · 2010-12-13T14:32:40.695Z · LW(p) · GW(p)

I'd first like to see the problem of safe navigation on real roads solved before we move into 3D space.

Not to mention, even if you somehow made a 100% reliable guidance mechanism, it would require drivers to completely trust the automatic controls. Are they going to do that? And the consequences of deliberate abuse would be far worse than with ordinary cars.

Replies from: Dreaded_Anomaly, mindspillage, gwern
comment by Dreaded_Anomaly · 2010-12-31T07:47:42.010Z · LW(p) · GW(p)

Google's Eric Schmidt thinks that "it's a bug that cars were invented before computers." It's an interesting viewpoint, given Google's largely successful experiments with automated driving.

comment by mindspillage · 2010-12-17T09:57:38.823Z · LW(p) · GW(p)

Maybe. The relevant part of the article: 1,000 miles were driven without human intervention; 140,000 with occasional human intervention. I'd love to know more detail about what prompted people to intervene and when they did it, but I'm surprised at even that amount of trust in a technology at its current level.

comment by gwern · 2010-12-16T23:31:21.542Z · LW(p) · GW(p)

Not to mention, even if you somehow made a 100% reliable guidance mechanism, it would require drivers to completely trust the automatic controls. Are they going to do that?

Allow me to rephrase this.

Not to mention, even if you somehow made a 100% reliable plane autopilot that could even land the plane safely, it would require the pilot and co-pilot to trust said autopilot. Are they really going to do that?

Replies from: lucidfox
comment by lucidfox · 2010-12-20T11:02:05.044Z · LW(p) · GW(p)

For one, airplane pilots are generally far more qualified than car drivers.

For two, civilian airplane pilots don't usually have to deal with other planes entering their space and forcing them to execute demanding maneuvers in real time, thanks to strict airspace regulations.

comment by JoshuaZ · 2010-12-13T04:28:29.575Z · LW(p) · GW(p)

And to add another note: the flying car is not just a silly SF idea, it is a silly SF idea that is very American. Americans have an incredible car culture, and cities with less car use have lower obesity rates. (See for example here.) The idea of the flying car takes the already unhealthy and inefficient American car obsession and makes it even worse.

Replies from: Nic_Smith
comment by Nic_Smith · 2010-12-13T06:43:12.627Z · LW(p) · GW(p)

Fine, flying trains, super-zeppelins, whatever "works."

comment by gwern · 2010-12-17T00:35:17.670Z · LW(p) · GW(p)

I personally take the approach that one should think Less Wrong and act Long Now, if you follow me. I diligently do my daily spaced-repetition review and n-backing. I carefully design my website and writings to last decades, actively think about how to write material that improves with time, and work on writings that will not be finished for years (if ever).

It's a bit schizophrenic, since both are fairly total worldviews with some conflicting recommendations about where to invest my time -- it's a case of very low discount rates versus very high discount rates, I suppose.

Perhaps we should view the Long Now as insurance? I think that's how they describe some of their proposals like a Long Library and seedbanks - as insurance in case the future turns out to be surprisingly unsurprising.

comment by timtyler · 2010-12-12T13:43:17.640Z · LW(p) · GW(p)

The Long Now Foundation seems to be a disappointing organisation to me. The many smart people involved should surely be able to find something better to do with themselves than that.

http://fora.tv/partner/Long_Now_Foundation

...does have some worthwhile material though.

comment by Vladimir_Nesov · 2010-12-12T02:19:16.776Z · LW(p) · GW(p)

LN embraces signaling like there's no tomorrow (ha!) -- their flagship project, after all, is a monumental clock designed to last ten thousand years, the goal of which is to "lend itself to good storytelling and myth" about long-term thought.

Is this gesture at least made in good Machiavellian faith?

Replies from: sketerpot
comment by sketerpot · 2010-12-12T02:54:16.394Z · LW(p) · GW(p)

They want people to think more long-term, and they're very up-front about this goal, as well as about how their signaling is supposed to encourage that sort of thinking. There's nothing deceptive here, or even particularly subtle. Is that what you mean?