New Year's Predictions Thread

post by MichaelVassar · 2009-12-30T21:39:09.895Z · LW · GW · Legacy · 446 comments

I would like to propose this as a thread for people to write in their predictions for the next year and the next decade, with probabilities attached when practical. I'll probably make some in the comments.

446 comments

Comments sorted by top scores.

comment by Vladimir_Golovin · 2010-01-01T14:59:10.537Z · LW(p) · GW(p)

I'm 90% confident that the cinematic uncanny valley will be crossed in the next decade. The number applies to movies only; it doesn't apply to humanoid robots (1%) or video game characters (5%).

Edit: After posting this, I thought that my 90% estimate was underconfident, but then I remembered that we started the decade with Jar-Jar Binks and Gollum, and it took us almost ten years to reach the level of Emily and Jake Sully.

Replies from: James_K, orthonormal, MatthewB, xamdam, Bindbreaker, dfranke, gwern
comment by James_K · 2010-01-02T07:28:45.928Z · LW(p) · GW(p)

Is there a reason Avatar doesn't count as crossing the threshold already?

Replies from: stevage, Vladimir_Golovin
comment by stevage · 2010-01-02T09:46:12.397Z · LW(p) · GW(p)

Because the giant blue Na'vi people are not human.

Replies from: timtyler
comment by timtyler · 2010-01-02T11:14:07.416Z · LW(p) · GW(p)

You mean you didn't notice the shots with the simulated humans in Avatar? ;-)

comment by Vladimir_Golovin · 2010-01-03T09:13:02.749Z · LW(p) · GW(p)

Avatar and Digital Emily are the reasons why I'm so confident. The digital actors in Avatar are very impressive, and as a (former) CG nerd I do think that Avatar has crossed the valley -- or at least found the way across it -- I just don't think that this is proof enough for the general audience and critics.

Replies from: MatthewB
comment by MatthewB · 2010-01-03T09:33:11.202Z · LW(p) · GW(p)

I think before the critics will be satisfied, one would have to make an entirely CGI film that wasn't Sci Fi, or fantastic in its setting or characters.

Something like a Western that had Clint Eastwood & Lee Van Cleef from their Sergio Leone glory days alongside modern-day Western stars like Christian Bale, or... that Australian guy who was in 3:10 to Yuma. If we were to see CGI movies such as I mentioned, made with the Avatar tech (or Digital Emily), then I am sure the critics and public would sit up and take notice (and immediately launch into how it was really not CGI at all, but a conspiracy to hide immortality technology from the greater public).

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2010-01-03T09:48:05.677Z · LW(p) · GW(p)

I think before the critics will be satisfied, one would have to make an entirely CGI film that wasn't Sci Fi, or fantastic in its setting or characters.

Exactly. I was thinking about something like an Elvis Presley biopic, but your example will do just fine (except that I don't think that vanilla westerns are commercially viable today).

Replies from: MatthewB
comment by MatthewB · 2010-01-03T12:32:05.220Z · LW(p) · GW(p)

Vanilla Westerns?!? There is Nothing Vanilla about a Sergio Leone Western! And Clint Eastwood's Unforgiven was an awesome western, as were Silverado and 3:10 to Yuma (and there are even more that have made a fair killing at the box office).

Westerns are not usually thought of as blockbusters, but they do draw a big enough crowd to be profitable.

If one were to draw together Lee Van Cleef, Clint Eastwood, and Eli Wallach from their Sergio Leone days with some of the big names in action flicks today to make a period western that starred all of these people... I think you'd have a near blockbuster...

However, the point is really that using this technology one would be able to draw upon stage or film actors of any period or genre (where we had a decent image and voice recording) and to mix actors of the past with those of today.

I just happen to have a passion for a decent horse opera. Pity that Firefly was such crap... a decent horse opera is really no different from a decent space opera. Something like Trigun or Cowboy Bebop.

comment by orthonormal · 2019-12-31T19:31:56.428Z · LW(p) · GW(p)

Not sure whether it's been fully crossed, but it's close.

By 2015 we had a CGI-on-top-of-body-double Paul Walker and audiences weren't sure when the clips of him were real ones. Rogue One had full-CGI Tarkin and Leia, though those were uncanny for some viewers (and successful for others). Can't think of another fully CGI human example.

(No, non-human humanoids still don't count, as impressive as Thanos was.)

comment by MatthewB · 2010-01-03T09:29:37.145Z · LW(p) · GW(p)

You don't think that the Valley will be crossed for video games in the next ten years?

Considering how rapidly digital technologies make it from the big screen to the small, I'm guessing that we will see the uncanny valley crossed (for video games) within 2 years of its closure in films (the vast majority of digital films having crossed it).

Part of the reason is that the software packages that do things like Digital Emily (mentioned below) are so easy to buy now. They no longer cost hundreds of thousands of dollars, as they did in the early days of CGI; even huge packages like Autodesk's, which used to sell for $25,000, can now be had for only $5,000, and comparable packages go for similar prices. That is peanuts compared to the cost of the people who run that software.

Replies from: Christian_Szegedy, Vladimir_Golovin
comment by Christian_Szegedy · 2010-01-06T20:03:48.426Z · LW(p) · GW(p)

I agree with you. The uncanny valley refers to rendering human actors only. It is not necessary to render a whole movie from scratch. It is much more work, but only work.

IMO, The Curious Case of Benjamin Button was the first movie that managed to cross the valley.

comment by Vladimir_Golovin · 2010-01-03T10:00:12.512Z · LW(p) · GW(p)

My reply is here. BTW, major CG packages like Autodesk Maya and 3DS Max have been at the level of $5,000 and below for over a decade.

Replies from: MatthewB
comment by MatthewB · 2010-01-03T12:35:25.032Z · LW(p) · GW(p)

I've been out of circulation for a while. The last time I priced Autodesk software was in the early 90s, and it was still tens of thousands of dollars. I'm just now getting caught up on basic AutoCAD, and I hope to begin learning 3DS Max and Maya in the next year or so. I am astounded at how cheap these packages are now (and how wrong one of my best friends is/was about how quickly these types of software would be available; in 1989, he said it would be 30 to 40 years before we saw the types of graphics displays & software that were, as I have since discovered, pretty much common by 1995)... Thanks for the heads-up though.

comment by xamdam · 2010-07-02T17:39:10.051Z · LW(p) · GW(p)

Interesting; it seems that image synthesis is currently further ahead than voice/speech synthesis.

comment by Bindbreaker · 2010-01-02T10:09:38.189Z · LW(p) · GW(p)

In a way, the uncanny valley has already been crossed-- video game characters in some games are sufficiently humanlike that I hesitate to kill them.

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2010-01-03T10:04:04.069Z · LW(p) · GW(p)

I once watched a video of an Iraqi sniper at work, and it was disturbingly similar to what I see in realistic military video games (I don't play them myself, but I've seen a couple.)

comment by dfranke · 2010-01-01T20:43:21.661Z · LW(p) · GW(p)

Why such a big gulf between your confidence for cinema and your confidence for video games?

Replies from: Vladimir_Golovin, Chronos
comment by Vladimir_Golovin · 2010-01-01T21:01:15.131Z · LW(p) · GW(p)

Movies are 'pre-computed', so you can use a real human actor as a data source for animations, and you have enough editing time to spot and iron out any glitches. In a video game, facial animations are generated on the fly, so all you can rely on is a model that perfectly captures human facial behavior. I don't think that can be realistically imitated by blending between pre-recorded animations, the way it's done today with mo-cap -- e.g. you can't pre-record eye movement for a game character.

As for the robots, they are also real-time, AND they would need muscle / eye / face movement implemented physically (as a machine, not just software), hence the lower confidence level.
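
To make the distinction concrete, here is a minimal sketch (in Python, with invented pose names and parameters; it describes no actual game pipeline): blending can only interpolate between data captured in advance, while behaviour like eye darts has to be generated procedurally at runtime.

```python
import random

def blend_poses(pose_a, pose_b, t):
    """Linearly blend two pre-recorded facial poses (dicts of blendshape weights).

    This is roughly what runtime mo-cap blending can do: interpolate
    between data that was captured in advance.
    """
    return {k: (1.0 - t) * pose_a[k] + t * pose_b[k] for k in pose_a}

def procedural_saccade(gaze_target, max_offset_deg=2.0):
    """Generate a small random eye dart around the current gaze target.

    Behaviour like this can't come from a pre-recorded clip, because it
    depends on where the character happens to be looking at runtime.
    """
    return (gaze_target[0] + random.uniform(-max_offset_deg, max_offset_deg),
            gaze_target[1] + random.uniform(-max_offset_deg, max_offset_deg))

# Hypothetical captured poses: blendshape weights in [0, 1].
neutral = {"brow_raise": 0.0, "smile": 0.1, "jaw_open": 0.0}
smiling = {"brow_raise": 0.2, "smile": 0.9, "jaw_open": 0.1}

print(blend_poses(neutral, smiling, 0.5))  # halfway between two captured poses
print(procedural_saccade((0.0, 0.0)))      # gaze offset generated on the fly
```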

comment by Chronos · 2010-01-01T21:33:32.777Z · LW(p) · GW(p)

The obvious answer would be "offline rendering".

Even if the non-interactivity of pre-rendered video weren't an issue, games as a category can't afford to pre-render more than the occasional cutscene here or there: a typical modern game is much longer than a typical modern movie -- typically by at least one order of magnitude, i.e. 15 to 20 hours of gameplay, and the storyline often branches as well. In terms of dollars grossed per hours rendered, games simply can't afford to keep up. Thus, the rise of real-time hardware 3D rendering in both PC gaming and console gaming.
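
A rough back-of-the-envelope version of that argument; every number below is an assumption chosen only to illustrate the ratio, not a figure from the comment or from industry data.

```python
movie_runtime_hours = 2.0    # typical feature film (assumption)
game_content_hours = 17.5    # midpoint of the 15-20 hours cited above
movie_budget_usd = 200e6     # hypothetical big-budget film (assumption)
game_budget_usd = 60e6       # hypothetical big-budget game (assumption)

print(game_content_hours / movie_runtime_hours)  # ~8.75x more hours of content to fill
print(movie_budget_usd / movie_runtime_hours)    # ~$100M available per hour of film
print(game_budget_usd / game_content_hours)      # ~$3.4M available per hour of game
```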

Replies from: mattnewport
comment by mattnewport · 2010-01-06T07:24:40.398Z · LW(p) · GW(p)

Rendering is not the problem. I would say that the uncanny valley has already been passed for static images rendered in real time by current 3D hardware (this NVIDIA demo from 2007 gets pretty close). The challenge for video games to cross the uncanny valley is now mostly in the realm of animation. Video game cutscenes rendered in real time will probably cross the uncanny valley with precanned animations in the next console generation but doing so for procedural animations is very much an unsolved problem.

(I'm a graphics programmer in the video games industry so I'm fairly familiar with the current state of the art).

Replies from: Chronos
comment by Chronos · 2010-01-11T04:53:30.653Z · LW(p) · GW(p)

I wasn't even considering the possibility of static images in video games, because static images aren't generally considered to count in modern video games. The world doesn't want another Myst game, and I can only imagine one other instance in a game where photorealistic, non-uncanny static images constitute the bulk of the gameplay: some sort of a dialog tree / disguised puzzle game where one or more still characters' faces changed in reaction to your dialog choices (i.e. something along the lines of a Japanese-style dating sim).

Replies from: mattnewport
comment by mattnewport · 2010-01-11T08:34:59.877Z · LW(p) · GW(p)

By 'static images rendered in real time' I meant static images (characters not animated) rendered in real time (all 3D rendering occurring at 30+ fps). Myst consisted of pre-rendered images which is quite different.

It is possible to render 3D images of humans in real time on current consumer level 3D hardware that has moved beyond the uncanny valley when viewed as a static screenshot (from a real time rendered sequence) or as a Matrix style static scene / dynamic camera bullet time effect. The uncanny valley has not yet been bridged for procedurally animated humans. The problem is no longer in the rendering but in the procedural animation of human motion.

comment by gwern · 2010-08-21T09:21:01.229Z · LW(p) · GW(p)

How would you verify a crossing of the uncanny valley? A movie critic invoking it by name and saying a movie doesn't trigger it?

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2010-08-21T11:19:16.968Z · LW(p) · GW(p)

An ideal indicator would be a regular movie or trailer screening where the audience failed to detect a synthetic actor who (who?) played a lead role, or at least had significant screen time during the screening.

Replies from: timtyler
comment by timtyler · 2010-08-21T11:34:08.141Z · LW(p) · GW(p)

There isn't much financial incentive to CGI a human - if they are just acting like a regular human. That's what actors are for.

Replies from: gwern
comment by gwern · 2010-08-21T23:04:52.523Z · LW(p) · GW(p)

I suppose Avatar is a case in point - it's worth CGIfying human actors because otherwise they would be totally out of place in the SF environment which is completely CGI.

Replies from: timtyler
comment by timtyler · 2010-08-22T07:21:32.692Z · LW(p) · GW(p)

"There are a number of shots of CGI humans," James Cameron says. "The shots of [Stephen Lang] in an AMP suit, for instance — those are completely CG. But there's a threshold of proximity to the camera that we didn't feel comfortable going beyond. We didn't get too close."

comment by MichaelVassar · 2009-12-30T22:56:58.051Z · LW(p) · GW(p)

A killer application for augmented reality is likely to be the integration of communication channels. Today's cellular phones annoy people with constant accountability and stress, not to mention spotty coverage, but a HUD overlaid on life could display text messages as they are sent and invite fluid shifts to voice conversation. When video is engaged and shared, people could also see what a potential conversation partner is doing before requesting their attention, giving distributed social life some of the fluidity and contextual awareness of natural social life. These sorts of benefits will motivate the teenagers of 2020 to broadcast much of their lives and to interpret the absence of a friend's data stream as a low-intensity request not to call. Archival will at first be a secondary and relatively minor benefit of the technology, but will ultimately widen the divide between public and private life, a disaster for privacy advocates but a boon for academic science (by normalizing the publication of all data). Paranormal beliefs will also tend to decline, as the failure to record paranormal events and the fallibility of memory both become more glaring.

Replies from: gwern, Pablo_Stafforini, whpearson, Unknowns, kip1981
comment by gwern · 2010-08-17T08:42:36.466Z · LW(p) · GW(p)

Could you operationalize some of the many predictions and theories embedded in this comment? How would one judge all this? (AR apps like Foursquare are already fairly popular but don't much resemble traditional theories of what AR would look like.)

comment by Pablo (Pablo_Stafforini) · 2010-01-01T16:19:53.836Z · LW(p) · GW(p)

Robin Hanson makes a similar prediction in 'Enhancing Our Truth Orientation' (pp. 362-363):

Humans have long worked to document their lives, inventing gadgets to aid in writing and recording, concepts and conventions to make what we say meaningful and comparable, and social institutions to let us coordinate in monitoring and verifying our documentation. It is harder to lie, and so to self-deceive, about documented events. [...] Many lament, and some celebrate (Brin, 1998), a coming "surveillance society." Most web pages and email are already archived, and it is now feasible and cheap for individuals to make audio recordings of their entire lives. It will soon be feasible to make full video recordings as well. Add to this recordings by security cameras in stores and businesses, and most physical actions in public spaces may soon be a matter of public record. Private spaces will similarly be a matter of at least private record.

comment by whpearson · 2009-12-31T20:38:10.237Z · LW(p) · GW(p)

On an AR theme: I think a high-level language for AR will be created within ten years that will try to make the following accessible:

  • Pulling info off the Internet
  • Machine vision
  • Precise overlay rendering

People will want to mash up different AR services in one "view" so you don't have to switch between them. There needs to be a lingua franca and HTML doesn't seem suited. I'd think it likely that it will be some XML variant.

Replies from: cabalamat, sketerpot
comment by cabalamat · 2010-01-01T18:55:24.611Z · LW(p) · GW(p)

Aren't these more likely to be done by libraries than languages?

I'd think it likely that it will be some XML variant.

I hope not. Something like JSON is far less verbose.
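
To make the verbosity comparison concrete, here is one hypothetical AR annotation (the field names and structure are invented for illustration, not taken from any real AR service) serialized both ways from Python:

```python
import json
import xml.etree.ElementTree as ET

# A hypothetical AR annotation: a label anchored to a real-world position.
annotation = {
    "id": "cafe-42",
    "lat": 51.5074,
    "lon": -0.1278,
    "label": "Good coffee here",
    "source": "example-ar-service",
}

as_json = json.dumps(annotation)

root = ET.Element("annotation", id=annotation["id"])
for key in ("lat", "lon", "label", "source"):
    ET.SubElement(root, key).text = str(annotation[key])
as_xml = ET.tostring(root, encoding="unicode")

print(len(as_json), as_json)  # the JSON form
print(len(as_xml), as_xml)    # the XML form, noticeably longer
```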

comment by sketerpot · 2010-01-01T21:42:02.022Z · LW(p) · GW(p)

If AR gets any sort of popularity, even just among early adopters, I guarantee you that there will be several competing tools for doing what you describe, with more coming out every month.

Replies from: whpearson
comment by whpearson · 2010-01-01T21:56:43.300Z · LW(p) · GW(p)

It already has a sort of popularity. There are already startups working in the field.

If you want to keep abreast of the field keep an eye on Bruce Sterling's Blog.

comment by Unknowns · 2010-01-01T08:05:56.545Z · LW(p) · GW(p)

There are already plenty of supposedly "paranormal" events recorded on Youtube, as well as elsewhere. With the increase of recording devices, many more such things will be recorded, and paranormal beliefs will increase.

comment by kip1981 · 2010-01-01T09:16:12.000Z · LW(p) · GW(p)

I think these are great predictions.

comment by orthonormal · 2010-01-02T06:00:56.995Z · LW(p) · GW(p)

One word: subcultures.

I think we'll see an expansion to most of the First World of the trend we see in cities like San Francisco, where the Internet has allowed people to organize niche cultures (steampunk, furries, pyromaniacs, etc.) like never before. I think that, by and large, people would prefer to seek out a smaller culture based on a common idiosyncratic interest if it were an option, not least because rising in status there is often easier than getting noticed in the local mainstream culture. I think that the main reason the mainstream culture is presently so large, therefore, is because it's hard for a juggling enthusiast in Des Moines to find like-minded people.

I expect that over the next 10 years, more and more niche cultures will arise and begin to sprout their own characteristics, with the measurable effect that cultural products will have to be targeted more narrowly. I expect that the most popular books, music, etc. of the late 2010s will sell fewer copies in the US than the most popular books, music, etc. of the Aughts, but that total consumption of media will go up substantially as a thousand niche bands, niche fiction markets, etc. become the norm. I expect that high schoolers in 2020 will spend less social time with their classmates and more time with the groups they met through the Internet.

And I expect that the next generation of hipsters will find a way to be irritatingly disdainful of a thousand cultures at once.

Replies from: gwern, Zack_M_Davis, orthonormal, sketerpot
comment by gwern · 2010-08-23T14:48:36.801Z · LW(p) · GW(p)

What do you make of criticism that sales currently show the exact opposite trend?

Replies from: orthonormal, orthonormal
comment by orthonormal · 2019-12-31T19:22:24.689Z · LW(p) · GW(p)

Well, your criticism was correct.

(Though some other trends have obviously reversed -- streaming music ate album and single sales, which were increasing rapidly in the iTunes era of the 2000s.)

comment by orthonormal · 2010-08-24T05:57:03.174Z · LW(p) · GW(p)

Thanks for the link! I didn't know there was already a version of this theory out there, and I didn't know the actual figures.

So what do I make of this data (assuming the veracity of the Wikipedia summary, since I'm not dedicated enough to read the papers)? Well, I'm surprised by it.

Replies from: gwern
comment by gwern · 2010-08-24T10:29:17.613Z · LW(p) · GW(p)

I'm not especially surprised. Aside from possible confounding factors like the rise of Free & free stuff (strongest in subcultures), which obviously wouldn't get counted in commercial metrics, technological and economic development means that mass media can spread even further than Internet-borne stuff can. Cue anecdotes about Mickey Mouse posters in African huts, etc.

The subcultures seem to me to appeal mostly to the restricted 1st World wealthier demographics that powered the mass media you are thinking of; one might caricature it as 'white' stuff. It makes sense that a subculture like anime/manga or FLOSS, which primarily is cannibalizing the 'white' market, can shrink ever more in percentage terms as the old 'white' stuff like Disney expands overseas into South America, Africa, Southeast Asia and so on.

If you had formulated your thesis in absolute numbers ('there will be more FLOSS enthusiasts in 2020 than 2010'), then I think you would be absolutely right. You might be able to get away with restricted areas too ('there will be more otaku in Japan in 2020 than 2010, despite a ~static population'). But nothing more.

comment by Zack_M_Davis · 2010-01-02T06:10:54.573Z · LW(p) · GW(p)

the Internet has allowed people to organize niche cultures (steampunk, furries, pyromaniacs, etc.)

You forgot us!

comment by orthonormal · 2019-12-31T19:21:02.147Z · LW(p) · GW(p)

Following up: I was wrong about my most testable prediction. The biggest media hits in the USA are getting proportionally larger, not smaller, though this may be mediated by streaming/ebooks taking away from the traditional outlets.

(If you find more complete sources for any of these, let me know. I restricted to the US because the international market is growing so rapidly it would skew any trends.)

Music: This is obviously confounded by the switch from buying physical albums to streaming music, but in any case, it looks as if I was wrong: the top albums have sold comparable numbers of copies (after averaging out by 5-year increments) since 2005, while the total number of album sales has plummeted. (Maybe people are only buying albums for the most popular artists and massively diversifying their streaming music, but in any case I would have antipredicted the top artist album sales staying constant.)

Books: Total revenue for trade books has stayed remarkably consistent at about $15 billion per year for the past five years; I didn't find first-half-of-decade results as easily. Top books by print copies might be misleading, but they're easy to find retrospectively using Publishers Weekly lists like this one. And they've been increasing since 2014, though 2013 had the massive outlier of the Fifty Shades series (sigh). Another loss for the theory.

Movies: Domestic box office has been growing slowly, and the biggest domestic hits have been growing rapidly. Essentially, Disney is eating the movie theater market with their big franchises.

And more broadly/vaguely, the US social media landscape looks less like a land of ten thousand subcultures and more like a land of fewer than ten megacultures, each fairly defined by their politics and united on their morality and aesthetics.

comment by sketerpot · 2010-01-02T06:42:30.674Z · LW(p) · GW(p)

So it's possible that, if we had a really huge, dense, wired city with excellent transportation, we would find a significant subculture of steampunk furries, or vampire gothic lolita hip-hop dance squads? Actually, this sounds a lot like Tokyo.

And I expect that the next generation of hipsters will find a way to be irritatingly disdainful of a thousand cultures at once.

It's easy, really. Practice this phrase: "Man, what weirdos." You just have to selectively overlook the weirdness of your own subculture while recognizing and stigmatizing it in others. It's an elegant approach.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-01T07:30:47.350Z · LW(p) · GW(p)

For the next decade: Videoconferencing.

Replies from: orthonormal, Baughn
comment by orthonormal · 2019-12-31T20:10:50.602Z · LW(p) · GW(p)

Thank maths for videoconferencing, enabling working from home (at least occasionally) for every major tech company.

comment by Baughn · 2010-01-01T12:08:57.506Z · LW(p) · GW(p)

Videoconferencing what, exactly?

I've been using it for years. I'm not sure how to correctly expand your sentence, and it shouldn't be subject to interpretation.

Replies from: Unknowns, whpearson
comment by Unknowns · 2010-01-01T13:22:13.762Z · LW(p) · GW(p)

Eliezer seems to be predicting that videoconferencing will become common in the next decade. Yes, some use it now, but it is still not common. I predict that it will not become common until someone uses a utility to modify your appearance so that when you look at the eyes of the person on the screen, your image on the remote end will look like it is looking at the eyes of the person on the other end. This might well be developed in much less than 10 years, however.

comment by whpearson · 2010-01-01T12:27:28.169Z · LW(p) · GW(p)

I suspect Eliezer is making broad predictions about what is important in the next 10 years. As if someone said smartphone for the next decade in 2000. Not giving too much detail makes it more likely to be true...

Replies from: Jack
comment by Jack · 2010-01-01T13:13:34.436Z · LW(p) · GW(p)

coughmakingbeliefspayrentcough

comment by mattnewport · 2009-12-31T13:21:29.956Z · LW(p) · GW(p)

Next Year

  • Holiday retail sales will be below consensus forecasts leading to some market turmoil in the early part of the year as the 'recovery' starts to look shaky (70%).
  • A developed country will suffer a currency crisis - most likely either the UK, US or one of the weaker Eurozone economies (60%).
  • A new round of bank failures and financial turmoil as the wave of Option ARM mortgage resets starts to hit and commercial real estate collapses including at least one major bank failure (a 'too big to fail' bank) (75%).
  • A major terrorist attack in the US (50%) most likely with a connection to Pakistan. The response will be disproportionate to the magnitude of the attack (99%).
  • Apple will launch a tablet and will aim to do for print media what it has done for music (80%).
  • Democrats will lose seats in Congress and the Senate in the elections but Republicans will not gain control of either house (70%).
  • One or more developed countries will see significant civil unrest due to ongoing problems with the economy (50%).

Next Decade

  • US will undergo a severe currency crisis (more likely) or sovereign default (less likely) (75%).
  • Developed countries' welfare states will begin to collapse (state retirement and unemployment benefits and health care will be severely curtailed or eliminated in more than one developed country) (75%).
  • UK will undergo a severe currency crisis or sovereign default (90%).
  • One or more countries will drop out of the Euro or the entire system will collapse (75%).
  • A US state will secede (30%).
Replies from: ciphergoth, RolfAndreassen, orthonormal, xamdam, knb, cabalamat, orthonormal, NancyLebovitz, mattnewport, mattnewport, mattnewport, mattnewport, MichaelVassar, Kutta
comment by Paul Crowley (ciphergoth) · 2009-12-31T13:24:05.513Z · LW(p) · GW(p)
  • A major terrorist attack in the US (50%) most likely with a connection to Pakistan.

I would be very happy to accept a bet with you on those odds if there's a way to sort it out. I'd define major as any attack with more than ten deaths.

Replies from: MrHen, mattnewport, gwern, gwern
comment by MrHen · 2009-12-31T15:39:50.321Z · LW(p) · GW(p)

I voted all the betting comments up because I think this is awesome. Does this kind of thing happen often here?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-01T11:46:14.868Z · LW(p) · GW(p)

I occasionally offer people bets, but I think this has been the first time for me that the subject of contention is the right shape for betting to be a real possibility.

comment by mattnewport · 2009-12-31T13:29:38.004Z · LW(p) · GW(p)

Do you have a PayPal account? I'd be willing to wager $50 USD to be paid within 2 weeks of Jan 1st 2011 if you're interested. I can provide my email address. That would rely on mutual trust but I don't know of any websites that can act as trusted intermediaries. Do you know of anything like that?

Replies from: ciphergoth, Bo102010
comment by Paul Crowley (ciphergoth) · 2009-12-31T13:44:17.474Z · LW(p) · GW(p)

For $50, trust-based is OK with me.

How about this wording? "10 or more people will be killed on US soil during 2010 as the result of a deliberate attack by a party with a political goal, not overtly the act of any state". And if we hit an edge case where we disagree on whether this has been met, we'll do a poll here on LW and accept the results of the poll. Sound good?

Replies from: mattnewport, timtyler
comment by mattnewport · 2009-12-31T15:20:06.870Z · LW(p) · GW(p)

I'd like to change the wording slightly to "on US soil, or on a flight to or from the US" if that's alright with you (even though I think an attack on an aircraft is less likely than an attack not involving aircraft). A poll here sounds like a fair way to resolve any dispute. I expect to still be reading/posting here fairly regularly in a year but I'm also happy to provide my email address if you want.

Replies from: Kevin, ciphergoth, ciphergoth
comment by Kevin · 2010-01-01T00:25:24.860Z · LW(p) · GW(p)

Do you think this was a terrorist attack? http://en.wikipedia.org/wiki/Fort_Hood_shooting

Replies from: GreenRoot, ciphergoth
comment by GreenRoot · 2010-02-26T18:36:56.745Z · LW(p) · GW(p)

The term "terrorism" is usually taken to mean an attack on civilians, though as a legal matter, this is far from settled. This definition would exclude the Fort Hood shooting, where the targets were soldiers. In any case, the bet is over non-state, politically motivated killing, which is broader and would include Fort Hood, I think.

Replies from: SilasBarta, mattnewport
comment by SilasBarta · 2010-03-05T20:43:19.285Z · LW(p) · GW(p)

FWIW: The targets at Fort Hood were soldiers, but predictably-disarmed soldiers. In the area Hasan attacked, the soldiers he shot at aren't allowed to carry weapons or even have them within easy reach. So it's more analogous to shooting up a bar frequented by soldiers that takes your weapons at the door.

Plus, his attack was intended to spread terror, not to achieve a military objective (any weakness he inflicted on the army capability itself was probably a secondary goal).

comment by mattnewport · 2010-02-26T18:54:53.762Z · LW(p) · GW(p)

I was going to ask whether people would classify the recent attack on the IRS building in Texas as terrorism. It wouldn't qualify for the bet either way because there was only 1 casualty but I'm curious if people think it would count as terrorism?

Replies from: SilasBarta
comment by SilasBarta · 2010-02-26T21:45:02.599Z · LW(p) · GW(p)

Bob Murphy's post, excerpting Glen Greenwald, summarizes my position very well. In short:

1) What Stack did meets the reasonable definition of terrorism: "deliberate use of violence against noncombatants to achieve political or social goals by inducing terror [in the opposing population]".

2) Most of what the government is classifying as terrorism, isn't. Fighting an invading army, no matter how unjust your cause may be, is not terrorism. Whatever injustice you may be committing does not additionally count as terrorism. Yet the label is being applied to insurgents.

3) It's in the government's interest, in taking over the terrorism label, that Stack not be called a terrorist, because he seems too (otherwise) normal. People want to think of terrorists as being "different"; a middle-aged, high-earning programmer ain't the image they have in mind, and if they did have that in mind, they'd be more resistant to making concessions in the name of fighting terrorism.

comment by Paul Crowley (ciphergoth) · 2010-01-01T20:45:23.516Z · LW(p) · GW(p)

Excellent question! If such an attack happens this year, I'd say it wasn't a terrorist attack, but if mattnewport felt that it was I'd pay out without making a poll.

Replies from: mattnewport
comment by mattnewport · 2010-01-06T07:38:28.470Z · LW(p) · GW(p)

I'd lean towards saying it was a terrorist attack but I'm sufficiently uncertain about how to classify it that I'd be happy to let a community poll settle the question.

comment by Paul Crowley (ciphergoth) · 2010-01-03T17:54:17.867Z · LW(p) · GW(p)

Could you email me so I have your address too? paul at ciphergoth.org. Thanks!

Replies from: mattnewport
comment by mattnewport · 2010-01-06T07:09:37.407Z · LW(p) · GW(p)

Had limited Internet access over the New Year, I've sent you an email.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-01-02T11:42:41.394Z · LW(p) · GW(p)

I think I won this one - have emailed the address you sent me. Thanks!

EDIT: paid in full - many thanks!

comment by Paul Crowley (ciphergoth) · 2010-01-01T11:44:12.717Z · LW(p) · GW(p)

Fine with me. My email is paul at ciphergoth.org. How exciting!

comment by timtyler · 2010-01-02T11:19:01.941Z · LW(p) · GW(p)

Re: "10 or more people will be killed on US soil during 2010 as the result of a deliberate attack by a party with a political goal, not overtly the act of any state".

How come "Pakistan" got dropped? A contributing reason for the claim being unlikely was that it was extremely specific.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-02T11:37:05.362Z · LW(p) · GW(p)

From the wording, it seemed that the 50% was for any attack, not just one with Pakistan involved. I think I'm on to a pretty good bet even without it. It's not as unlikely as a US state seceding, but I didn't want to wait ten years :-)

Replies from: MatthewB, timtyler
comment by MatthewB · 2010-01-02T12:12:48.521Z · LW(p) · GW(p)

A US state seceding is something that many of my friends sit around contemplating. We have speculated about whether it will be a state like Mississippi or South Carolina (Red), or a state like California or Oregon (Blue).

It's pretty easy to understand why the Red States might wish to secede from the heathen atheistic socialist Nazi USA... but the motivations for a Blue State are a bit more complex.

For instance, in California, I have noticed a lot of people complaining about how much money this state pays into Social Security while only getting back about 10% of that money. If we were able to get back all of it, instead of supporting states like South Carolina or Mississippi, we would be able to go a long way toward solving many of our own social ills. Not to mention that many in CA chafe under having to belong to the same union as states such as those I have mentioned, and thus have trouble even pursuing social solutions that might pay off big (stem cell research, legalization & regulation of narcotics, work and skills training for inmates, and socialization skills for the same, infrastructure work to which the USA is slow to commit, and so on).

All of these are also issues that Red States like to brag about being able to focus on if they were to secede. The only problem with most Red States is that, just as in the Civil War, they have little to no economy of their own; Texas (and maybe Florida) is really the exception. Also, should a Red State secede, most of the best and brightest would flee the state (academics usually don't like working under ideological bonds, for instance).

It will be interesting to see what would happen should a state try to secede. I think it could be the best thing that could happen to our country if things continue to become divisive.

Replies from: anonymous666
comment by anonymous666 · 2010-01-03T06:34:31.473Z · LW(p) · GW(p)

That's why California's budget is $20 billion plus in the red. And has been for years. Fine fiscal management.

That's why your governor has gone begging Washington for a bailout. Off of our backs, not yours.

You folks should secede. You'd save the rest of us from yourselves.

-- Born and bred in California, escaped the insanity as soon as I could.

Replies from: MatthewB
comment by MatthewB · 2010-01-03T06:49:48.522Z · LW(p) · GW(p)

As I understand it, our economy is in such dire straights because most of the money in CA's taxes leaves the state instead of staying in it.

I could be wrong about that. I am mostly dealing with facts I have obtained from Gov't web sites, so the data could be skewed.

Your statement only deals with the management and not the fiscal reality of the cash flow in CA. It is true that we have a financial shortfall, but that could be the case for anyone, even someone making billions of dollars a year, if all of that money were being taken by another party. No management in the world would be able to help in that situation.

Replies from: knb, orthonormal
comment by knb · 2010-01-08T04:30:22.177Z · LW(p) · GW(p)

Texas is another big tax-donor state, yet it mostly runs budget surpluses. The difference is that California doesn't bother to balance its out-of-control spending with its revenues.

Replies from: MatthewB
comment by MatthewB · 2010-01-08T11:48:48.527Z · LW(p) · GW(p)

Texas, though, doesn't contribute more to the US budget than it gets out, and... I hate to say this... both GW Bush and his predecessor in the Governor's office did pretty good jobs managing the state.

During Rick Perry's time in office, Texas had some tremendous problems (I am from Texas, and technically it is still a state of residence for some of my bills). Texas and California are, however, the only two states (NY possibly an exception, but only barely) that could really stand as independent countries in this day and age (both did so in the past under very different conditions).

Upon thinking about it a bit, CA does have a more out-of-control spending problem. I still think that the problem could be remedied by a more equitable share of California's federal tax money (not just Social Security) being returned to the state. Regardless of whether that happened, fiscal responsibility is needed. It doesn't do any good to increase an income if the expenses rise disproportionately.

Replies from: knb
comment by knb · 2010-01-08T20:10:12.687Z · LW(p) · GW(p)

Actually, Texas does contribute more than they get back. Texas gets 94% of federal tax contributions back. California gets 79% back.

http://www.taxfoundation.org/blog/show/1397.html

Based on that map we can also see that more agriculturally focused states do well from federal tax dollars. I assume this is mostly farm subsidies.

comment by orthonormal · 2010-01-03T08:25:44.225Z · LW(p) · GW(p)

You are correct about federal taxing vs. spending with respect to states.

California's uniquely awful budget crisis is mainly due to the state's constitutional amendment that requires a supermajority to raise state taxes (and the fact that it's never in the Republican minority's political interest to agree to a tax hike), along with the lawmakers' shortsighted tendency to cut taxes when the economy was in great shape.

(N.B: it's spelled "dire straits".)

Replies from: MatthewB
comment by MatthewB · 2010-01-03T09:01:48.502Z · LW(p) · GW(p)

I knew that I wasn't imagining that bit about the Fed Taxing v spending.

I was also aware of the supermajority thing. Although I wonder exactly how much of a Republican Schwarzenegger really is (I hope I spelled his name right; I can't be bothered to find out). He has many beliefs about the rule of law and government that I find to be very at odds with the Republicans, and all I can really find that binds them together is his extreme misogyny and love of guns (alright, I could look further and find more, I am sure, but my point is that he is really a populist candidate/politician who just happened to land in the Republicans' back yard).

CA's budget crises can also be traced to several Texas energy companies (does Enron mean anything to anyone?) that gouged the state with all kinds of manipulative practices during the late 90s/early 00s.

Also, never mind that California is responsible for around 12-14% of the USA's total economy, or that we have a GDP all on our own of around 2 trillion dollars (the largest in the USA, and I believe we are right behind England or France in total GDP)... Yeah, never mind all that (to the naysayers of California).

comment by timtyler · 2010-01-02T11:51:44.048Z · LW(p) · GW(p)

Oh, I see - sorry!

I looked into who was going to win such a bet.

http://en.wikipedia.org/wiki/List_of_assassinations_and_acts_of_terrorism_against_Americans

...looks like a reasonable resource on the topic.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-02T13:22:35.855Z · LW(p) · GW(p)

I'm not sure that the acts of a single person with no associations with anyone else are really the sort of thing I had in mind, but it's too late to refine the bet now, so we'll see whether people think such a thing counts if we need to.

Replies from: timtyler
comment by timtyler · 2010-01-02T13:28:49.520Z · LW(p) · GW(p)

"10 or more people [...] as the result of a deliberate attack" seems to suggest that 10 assassinations in 2010 would probably not qualify - unless it was proved that they were all linked. My summary of the link is that there have been few terrorist attacks against Americans on American soil recently.

Replies from: ciphergoth
comment by Bo102010 · 2009-12-31T14:58:30.695Z · LW(p) · GW(p)

What makes you think 2010 is the year? I mean, this has even been floating around lately. And at 99%^h^h^h50% confidence!

Replies from: mattnewport
comment by mattnewport · 2009-12-31T15:16:31.333Z · LW(p) · GW(p)

That was 99% confidence that the response will be disproportionate to the magnitude of the attack, if an attack takes place, not 99% confidence that there will be an attack. My odds of an attack were 50%. I think an attack is fairly unlikely to be on an aircraft - security is relatively tight on aircraft compared to other possible targets.

Replies from: Bo102010, PhilGoetz
comment by Bo102010 · 2009-12-31T17:18:59.963Z · LW(p) · GW(p)

I'll agree that if anything happens, or even if something doesn't (is thwarted), the response will be silly and disproportionate. However, I still think you're way too high with 50%.

comment by PhilGoetz · 2009-12-31T22:47:55.142Z · LW(p) · GW(p)

You must specify disproportionately high, or disproportionately low.

Replies from: mattnewport, CannibalSmith
comment by mattnewport · 2010-01-06T07:12:20.024Z · LW(p) · GW(p)

I thought disproportionately high went without saying (but then I would with a confidence level that high, wouldn't I?)

comment by CannibalSmith · 2010-01-01T16:31:43.797Z · LW(p) · GW(p)

A declaration of war, curtailment of liberties, or other expenditure of resources more than ten times the loss of resources (including life, which is not priceless) it tries to prevent.
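
One way to read this as an explicit inequality (my formalization, not CannibalSmith's exact wording): call the response disproportionate when

```latex
C_{\text{response}} \;>\; 10 \times C_{\text{loss prevented}}
```

where the left side sums the monetized cost of war, curtailed liberties and other expenditures, and the right side is the monetized loss (including lives, valued finitely) that the response is meant to prevent.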

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-01T16:35:45.763Z · LW(p) · GW(p)

Is there a standard method for assigning a numerical value to liberties?

Replies from: randallsquared
comment by randallsquared · 2010-01-01T22:16:44.370Z · LW(p) · GW(p)

The money those people would pay to avoid the loss of liberty, had they the option.

Replies from: James_K
comment by James_K · 2010-01-02T07:38:13.045Z · LW(p) · GW(p)

That's a valid measure, but it would require a fairly complicated study to actually get a value for it.

Replies from: Technologos
comment by Technologos · 2010-01-02T07:43:34.963Z · LW(p) · GW(p)

And it's complicated by loss aversion.

comment by gwern · 2010-08-03T10:31:35.206Z · LW(p) · GW(p)

I've added this prediction to PredictionBook: http://predictionbook.com/predictions/1565 based on the description at http://wiki.lesswrong.com/wiki/Bets_registry
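
For anyone tracking predictions like these on PredictionBook or elsewhere, a minimal sketch of how a batch of probabilistic predictions could be scored once they resolve (Brier score: mean squared error, lower is better). The outcomes below are placeholders for illustration, not resolutions of the bets in this thread.

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes (0 or 1); lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Placeholder data: (stated probability, did it happen?).
forecasts = [
    (0.50, 0),  # e.g. "a major terrorist attack in the US"
    (0.80, 1),  # e.g. "Apple will launch a tablet"
    (0.30, 0),  # e.g. "a US state will secede"
]

print(round(brier_score(forecasts), 3))  # 0.127 for this placeholder set
```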

comment by gwern · 2010-08-02T07:51:47.766Z · LW(p) · GW(p)

So now that 2010 is more than half over with no attack that I know of, have your or mattnewport's opinions changed?

(I notice that domestic terrorism seems kind of spiky - quite a few in one year, and none the next: http://en.wikipedia.org/wiki/Category:Islamist_terrorism_in_the_United_States omits entire years but has several in one year, like 2007 or 2009.)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-02T15:15:30.977Z · LW(p) · GW(p)

I am more confident of winning, as you'd expect. But I'm finding it counterintuitive to adjust my subjective probability of losing the bet in proportion to the portion of the year that has elapsed, which means either my initial probability was too low or my current one is too high.

Replies from: gwern
comment by gwern · 2010-08-03T04:04:09.681Z · LW(p) · GW(p)

Incidentally, if you have a specific probability for an event occurring in 1 out of 365 days, say, or not occurring at all, you could try to calculate exactly what probability to give it occurring in the rest of the year (considering that it's August): http://www.xamuel.com/hope-function/ / http://www.gwern.net/docs/1994-falk

(Actually calculating the new probability is left as an exercise for the reader.)
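
A minimal sketch of that calculation, under the simplifying assumption that if the event happens at all its timing is uniform over the year (the assumption the hope-function links examine):

```python
def remaining_probability(p_year, fraction_elapsed):
    """P(event occurs in the rest of the year | it hasn't occurred yet).

    Assumes a prior probability p_year that the event happens at all,
    with its timing uniform over the year if it does.
    """
    p_not_yet = 1.0 - p_year * fraction_elapsed
    p_later = p_year * (1.0 - fraction_elapsed)
    return p_later / p_not_yet

# mattnewport's 50% prediction, evaluated at the start of August (7/12 of the year gone):
print(round(remaining_probability(0.50, 7 / 12), 3))  # ~0.294, a bit above the naive 0.5 * 5/12 ~= 0.208
```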

comment by RolfAndreassen · 2009-12-31T20:43:28.571Z · LW(p) · GW(p)
  • A US state will secede (30%).

I will take a bet on this, if you like. Also, did you perhaps mean "attempt to secede", or are you predicting actual success? I'll take the bet either way.

Replies from: John_Maxwell_IV, mattnewport
comment by John_Maxwell (John_Maxwell_IV) · 2010-01-05T05:50:51.772Z · LW(p) · GW(p)

You'll have to define what constitutes an attempt.

Replies from: LucasSloan
comment by LucasSloan · 2010-01-05T06:25:15.996Z · LW(p) · GW(p)

Perhaps a vote goes through the state legislature in favor of secession?

comment by mattnewport · 2010-01-06T07:35:21.935Z · LW(p) · GW(p)

On further reflection I think I need to revise my estimate down somewhat. My 30% estimate is conditional on general trends that I think are more likely than not to occur, but I did not correctly incorporate them into the estimate for secession. I think 10-15% is probably a better estimate taking that into account.

I think the political pressure for secession will stem from an extended period of economic weakness in the US and widespread fiscal crises in states like California and New York. If, as seems likely, federal aid is seen to go disproportionately to certain states that have the most troubled finances then the states that feel they are losing out will begin to see secession as an attractive option. My original estimate did not sufficiently account for the possibility that I am wrong about the economic troubles ahead however.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2010-01-06T23:18:10.656Z · LW(p) · GW(p)

I would still be willing to take a bet at these odds, given some reasonably clear-cut definition of "attempt to secede".

Replies from: mattnewport
comment by mattnewport · 2010-01-06T23:40:47.794Z · LW(p) · GW(p)

I think we could probably hammer out a mutually agreeable definition but the decade time frame for a pay out makes a bet on this impractical I feel. I'm reasonably comfortable making a bet to be settled next January but a bet to be settled in 2020 doesn't seem practical through an agreement on a forum.

comment by orthonormal · 2019-12-31T19:37:14.806Z · LW(p) · GW(p)

So close on Brexit, but just missed the deadline.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2020-05-05T05:33:40.212Z · LW(p) · GW(p)

Britain was in the EU, but it kept the pound sterling; it never adopted the Euro.

comment by xamdam · 2010-07-02T19:34:42.880Z · LW(p) · GW(p)

A developed country will suffer a currency crisis - most likely either the UK, US or one of the weaker Eurozone economies (60%).

http://en.wikipedia.org/wiki/2010_European_sovereign_debt_crisis

Not bad.

Replies from: mattnewport
comment by mattnewport · 2010-07-02T20:38:02.277Z · LW(p) · GW(p)

I guess now is a good time for a 6 month review of how the predictions in this thread are panning out.

Next Year

  • Holiday retail sales will be below consensus forecasts leading to some market turmoil in the early part of the year as the 'recovery' starts to look shaky (70%).

Retail sales were a bit worse than expected but despite a bit of a dip in the stock market in late Jan / early Feb it took longer than I expected for the recovery in the US to be seriously questioned. It's only in the last few weeks that talk of a double dip recession has become really widespread. The problems in Europe and more recently in China brought the global recovery into question a bit earlier but overall the jury is still out. I think I could argue that this prediction was correct as written but I was expecting more problems earlier in the year.

  • A developed country will suffer a currency crisis - most likely either the UK, US or one of the weaker Eurozone economies (60%).

I think the problems in Greece (and to a lesser extent Spain and Portugal) and the resulting turmoil in the Euro are sufficient to say this prediction was correct. The UK pound has also had a rough time but in both cases 'currency crisis' could still be argued. I expect further problems before the year is out.

  • A new round of bank failures and financial turmoil as the wave of Option ARM mortgage resets starts to hit and commercial real estate collapses including at least one major bank failure (a 'too big to fail' bank) (75%).

Hasn't happened yet. Option ARM resets will be picking up through the second half of the year so I still expect problems from that. A little less confident that it will mean a major bank failure - that is somewhat dependent on the political climate as well.

  • A major terrorist attack in the US (50%) most likely with a connection to Pakistan. The response will be disproportionate to the magnitude of the attack (99%).

The attempted bombing in Times Square appears to have had a Pakistan link. It can't really be called a 'major' attack however. I still think there is a fair chance of this happening before the year is out but odds are a little lower (my estimate of how incompetent most terrorists are has increased a little).

  • Apple will launch a tablet and will aim to do for print media what it has done for music (80%).

The iPad and iBooks launch bear this out I think.

  • Democrats will lose seats in Congress and the Senate in the elections but Republicans will not gain control of either house (70%).

Won't know until November. I think the prediction is still reasonable.

  • One or more developed countries will see significant civil unrest due to ongoing problems with the economy (50%).

The riots and strikes in Greece and strikes in Spain arguably confirm this. The prediction is a little vague however and I was expecting somewhat more serious civil unrest than we've seen so far. It remains to be seen what will happen as the rest of the year unfolds.

Next Decade

  • US will undergo a severe currency crisis (more likely) or sovereign default (less likely) (75%).

No change here.

  • Developed countries' welfare states will begin to collapse (state retirement and unemployment benefits and health care will be severely curtailed or eliminated in more than one developed country) (75%).

Some early signs of this with retirement age increases and other austerity measures in Greece and elsewhere in Europe. I still expect to see a lot more of this before the decade is out.

  • UK will undergo a severe currency crisis or sovereign default (90%).

Odds on this down slightly I think - there's some evidence that the new government is serious about addressing the problems. Less evidence that they will succeed.

  • One or more countries will drop out of the Euro or the entire system will collapse (75%).

I think the problems here have been more widely recognized than when I wrote the prediction. My odds haven't changed much though.

  • A US state will secede (30%).

No change here. And it's secession week.

Replies from: Douglas_Knight, mwengler
comment by Douglas_Knight · 2010-07-03T03:56:37.259Z · LW(p) · GW(p)

I still think there is a fair chance of [a major terrorist attack] happening before the year is out but odds are a little lower (my estimate of how incompetent most terrorists are has increased a little).

Shouldn't the odds go down by about half, just because half the year is used up?

Replies from: mattnewport
comment by mattnewport · 2010-07-03T04:10:06.268Z · LW(p) · GW(p)

The failed Times Square attack raised my probability for attempts at attacks this year but lowered my probability that any attempted attacks would be effective enough to classify as 'major'. On balance I think the odds of a major attack in the remaining 6 months are lower than 50% at this point but events since my original prediction weigh into my estimate now and so it's not a simple matter of adjusting the odds based on elapsed time.

comment by mwengler · 2011-01-03T21:17:54.018Z · LW(p) · GW(p)
  • A developed country will suffer a currency crisis - most likely either the UK, US or one of the weaker Eurozone economies (60%).

I think the problems in Greece (and to a lesser extent Spain and Portugal) and the resulting turmoil in the Euro are sufficient to say this prediction was correct. The UK pound has also had a rough time but in both cases 'currency crisis' could still be argued. I expect further problems before the year is out.

I think this prediction has failed utterly. In the Eurozone, there were debt crises in Greece and Ireland, but the currency itself, the Euro, did fine. A graph of the variation of the Euro against the US dollar shows no special variation in 2010 compared to its "typical" variations over the last decade. The pound maintained the value in 2010 that it had already fallen to in 2009, hardly even slightly adhering to a prediction about 2010.

Those were exciting predictions. Had you predicted a sovereign debt crisis in a developed country, you would have been right, and it would have been a much less exciting prediction than a currency crisis.

Replies from: mattnewport
comment by mattnewport · 2011-01-04T14:01:26.832Z · LW(p) · GW(p)

There's room for debate whether we saw a true currency crisis in the Euro but 'this prediction has failed utterly' is overstating it. We saw unusually dramatic short term moves in the Euro in May and there was widespread talk about the future of the Euro being uncertain. Questions about the long term viability of the Euro continue to be raised.

I'd argue that charting any of the major currencies against gold indicates an ongoing loss of confidence in all of them - from this perspective the dollar and the euro have both declined in absolute value over the year while trading places in terms of relative value in response to changing perceptions of which one faces the biggest problems.

'Currency crisis' was in retrospect a somewhat ambiguous prediction to make since there are no clear criteria for establishing what constitutes one. I'd argue that the euro underwent the beginnings of a currency crisis in May but that the unprecedented intervention by the ECB forestalled a full-blown currency crisis.

Replies from: mwengler
comment by mwengler · 2011-01-05T04:36:19.400Z · LW(p) · GW(p)

I looked at Gold vs Euro from your link over 10 years. It shows a steady decline since mid 2004, with no change in that trend to distinguish 2010 from 2009, 2008, 2007, 2006, or 2005. It seems to me that if no special effects in currency vs currency or in currency vs gold can be seen in 2010, the most rational label for that prediction would be "wrong." YMMV, but I don't see why it should. Would you accept "this prediction has failed" if I leave off the utterly?

comment by knb · 2010-01-02T00:22:44.269Z · LW(p) · GW(p)

US states aren't allowed to secede. Not even Texas. The US government would lose so much prestige from the loss of a state that it would never allow it. So it would require some kind of armed conflict that no one state could ever win.

Replies from: mattnewport
comment by mattnewport · 2010-01-06T07:43:23.682Z · LW(p) · GW(p)

Are you really certain that the federal government would send the military in to prevent a state seceding if secession was clearly the democratic will of the people of the state? I wouldn't rule out the possibility but I think it would be an unlikely outcome.

Replies from: knb
comment by knb · 2010-01-08T04:13:58.989Z · LW(p) · GW(p)

I'm pretty certain the federal government will not take the blow of a state leaving in the next decade, at least. They might be slightly more likely to let a quirky, small state like Vermont or New Hampshire leave, since clamping down on a tiny state would look bad, and the loss would be negligible. But then they would set a dangerous precedent for more important possible secessionist states like Texas (Texans are somewhat nationalistic, though also often super-american/patriotic), New Mexico (majority-minority state) or Alaska (active secessionist movement).

Replies from: mattnewport
comment by mattnewport · 2010-01-08T05:29:36.801Z · LW(p) · GW(p)

What exactly is the federal government going to do about it though? I think using the military to suppress a state that was attempting a peaceful secession would be very hard for the government to justify. It's a possibility but I think the probability is low that US troops would be deployed on US soil to prevent a state seceding. Plus I expect the federal government to have very major financial problems which will limit its ability to act.

Few people in 1982 would have predicted that the USSR would allow its constituent republics to secede peacefully within a decade.

Replies from: knb
comment by knb · 2010-01-08T06:21:57.597Z · LW(p) · GW(p)

It is legally settled that the states do not have the authority to secede; they tried during the Civil War. Many people thought at the time that states could leave the union. However, the precedent set by Lincoln's actions is now unchallenged by the legal establishment.

Anyway, the procedure would go like this:

1. State government announces secession.

Then either:

2a. Federal government challenges the legality of the secession in the courts.

3a. The Supreme Court declares the secession unconstitutional.

Or:

2b. Federal government charges the rebels with treason.

3b. Federal government arrests the secessionists. Using federal troops would likely not be necessary, since the National Guard is ultimately under the authority of the president if he calls it up for national service.

Finally, if there were an armed insurrection by residents, it would be put down as domestic terrorism. It would certainly be embarrassing, but not as dangerous as the precedent set by a state leaving the union without a shot fired.

Obviously if the Federal government financially collapses in the next decade, this wouldn't be a problem. But that is very unlikely, since the government has the power to inflate away its debts. With the dollar as global reserve currency, it doesn't really have to worry about an Argentina situation.

Replies from: None, mattnewport
comment by [deleted] · 2011-01-02T11:24:29.535Z · LW(p) · GW(p)

I think this is about right. The US dedication to self-determination is generally limited to small ethnic groups conveniently placed in the spheres of interest of rival great powers.

comment by mattnewport · 2010-01-08T06:39:03.183Z · LW(p) · GW(p)

With the dollar as global reserve currency, it doesn't really have to worry about an Argentina situation.

I think it is likely that the dollar will not still be the global reserve currency by the end of the decade.

Replies from: knb
comment by knb · 2020-02-28T04:59:08.438Z · LW(p) · GW(p)

Looks like it is still the global reserve currency.

comment by cabalamat · 2010-01-01T18:41:59.921Z · LW(p) · GW(p)

A US state will secede (30%).

I don't see that happening -- which one or ones do you think are most likely to leave?

Scotland may well leave the UK (10%), or the UK leave the EU (15%).

Replies from: mattnewport
comment by mattnewport · 2010-01-06T07:44:38.907Z · LW(p) · GW(p)

Texas is probably the most likely but I can imagine a number of other possibilities. MatthewB's post above outlines a plausible case for California for example.

Replies from: MatthewB
comment by MatthewB · 2010-01-06T08:03:44.350Z · LW(p) · GW(p)

Being from Texas (I was born in Texas, but moved to CA in my mid-20s), I agree with you.

I noticed, when I went to school in Europe in the mid-80s, that people there acted as if Texas were almost a different country from the rest of the USA. It was also easy for Europeans to recognize. When asked where they were from, Texans would usually answer "Texas", yet if a person from Louisiana, Alabama, Montana, Idaho, or some other more obscure state tried to explain where they were from in terms of their home state, it would usually devolve into "I am from the Southern USA" or "I am from the Northwest/Midwest USA".

Only New York and California seemed to enjoy this same recognition in Europe.

But Texans would consider themselves from Texas first and the USA second, whereas most US citizens from other states seemed to identify as USA citizens first and then by their state.

Texas has a really strong sense of independence from the USA, and it is pretty much the only state with an active independence movement (a movement to recognize the state as its own nation). California also has one, but it is not nearly as diverse nor as active as the one in TX.

However, despite Texans' strong identification with their state, I think there are other states that might lead the pack in an attempt to secede. Most of the former Confederate states still seem to hold very deep grudges against the federal gov't, and when I lived in GA for a few years back in 91/92, I was stunned at how many people I encountered who really believed that the Civil War was still not finished, and that The South Shall Rise Again!

Many Republicans seem to be fomenting this sort of thinking as well, with things like the Tea Baggers, or by trying to force the recognition of the USA as a Christian Nation.

Replies from: knb
comment by knb · 2010-01-08T04:17:03.071Z · LW(p) · GW(p)

Referring to a (presumably) disfavored political group by a crude sexual dysphemism earned you a vote down. This is not how discourse is done here; please make a note of it.

Replies from: AdeleneDawner
comment by orthonormal · 2011-01-02T16:27:41.531Z · LW(p) · GW(p)

Not badly calibrated for 2010 in retrospect, though I should have realized at the time that some of your conditional probabilities were crazy: there's virtually no chance that the Democrats would have held the House if there had been "a new round of bank failures and financial turmoil", unless that happened after the elections.

comment by NancyLebovitz · 2010-08-02T09:21:55.348Z · LW(p) · GW(p)

A developed country will suffer a currency crisis - most likely either the UK, US or one of the weaker Eurozone economies (60%).

I think you got that one.

comment by mattnewport · 2010-03-05T18:46:40.115Z · LW(p) · GW(p)

One or more developed countries will see significant civil unrest due to ongoing problems with the economy (50%).

This is the sort of thing I was thinking of and expect to see more of.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-03-06T05:47:22.976Z · LW(p) · GW(p)

Haven't riots been going on in Greece pretty regularly? (eg, 11/2009) Did you put at 50% the chance that the riots in Greece would stop? Maybe it was reasonable to put at 50% the chance that the riots would stay at 2009 levels and 50% the chance that they would go back to 12/2008 levels, but it's not clear that "significant" should mean that.

Replies from: mattnewport
comment by mattnewport · 2010-03-06T10:05:27.434Z · LW(p) · GW(p)

Yes, Greece had riots in 2009. I expected increased civil unrest in developed countries in 2010. My impression is that there is more civil unrest in Greece now than there was last year, but I don't know how to measure that objectively, which makes me think I was not specific enough with my prediction in this case.

Since nobody took the other side of the bet it doesn't matter too much. I'm more interested in how my investments pan out as they represent real bets on my predictions - it's not much use being right if you can't turn it into profit.

comment by mattnewport · 2010-01-30T20:00:42.227Z · LW(p) · GW(p)

Apple will launch a tablet and will aim to do for print media what it has done for music (80%).

I'm going to call this a hit but it was pretty much a gimme. My 80% estimate may have been too low.

comment by mattnewport · 2010-01-14T17:03:15.725Z · LW(p) · GW(p)

Next Year

  • Holiday retail sales will be below consensus forecasts leading to some market turmoil in the early part of the year as the 'recovery' starts to look shaky (70%).

U.S. Retail Sales Unexpectedly Fall After Bigger Gain

Jan. 14 (Bloomberg) -- Sales at U.S. retailers unexpectedly fell in December following a gain the prior month that was larger than previously estimated, signaling a consumer recovery will be uneven.

The 0.3 percent decrease came after a 1.8 percent jump the prior month, Commerce Department figures showed today in Washington. The government last month calculated the November gain at 1.3 percent.

...

Retail sales were projected to rise 0.5 percent after an originally reported 1.3 percent gain in November, according to the median estimate of 80 economists in a separate Bloomberg survey. Forecasts ranged from no change to a gain of 1.2 percent.

I'm inclined to call this a confirmation of the first part of my prediction but in retrospect I could have been more specific as to what would constitute confirmation. As to the resulting market turmoil that constitutes the second half of my prediction, I'd say that's unconfirmed as yet and is also rather unspecific. I'm actually now betting real money on market turmoil by buying VXX which is a bet on increased volatility so I still stand by the second half of the prediction.

I'm going to attempt to continue posting updates on the state of my 1 year predictions as relevant news develops. This prediction exercise is only useful if outcomes are tracked.

comment by mattnewport · 2010-01-06T07:59:55.339Z · LW(p) · GW(p)

One or more developed countries will see significant civil unrest due to ongoing problems with the economy (50%).

I'm not going to claim this [1] as a confirmation of that prediction, but I expect to see a lot more of these kinds of demonstrations, and on a larger scale. Flaming torches are just the start; the metaphorical pitchforks will come.

I'm curious what the response of the secret service would be to a group of demonstrators with flaming torches surrounding the White House.

[1] "Fire and ice: On Monday, hundreds of people gathered outside the residence of Icelandic President Olafur Ragnar Grimsson in Reykjavik, where they held torches and delivered a petition asking him not to sign the controversial debt legislation."

comment by MichaelVassar · 2009-12-31T19:15:24.500Z · LW(p) · GW(p)

Great example of what I'm talking about. I'd challenge you on most of those actually, if there were a convenient and well-structured betting forum, but none of them seem crazy to me.

Replies from: komponisto
comment by komponisto · 2009-12-31T19:53:34.331Z · LW(p) · GW(p)

none of them seem crazy to me

A US state will secede (30%)

None of the others do, but this one seems ludicrous to me.

Replies from: PhilGoetz, MichaelVassar
comment by PhilGoetz · 2009-12-31T22:57:46.332Z · LW(p) · GW(p)

30% probability might be around the point where we start to call things ludicrous. If you talk seriously about things that you think have a 10% chance of happening, you will be beyond the point where most people call it ludicrous, or even crazy; they simply will not understand or believe that that's what you mean.

Replies from: komponisto
comment by komponisto · 2010-01-01T00:17:56.498Z · LW(p) · GW(p)

This comment provides more confirmation for a view I've held for a long time, and which was particularly reinforced by some of the reactions to (the first version of) my Amanda Knox post.

People have trouble distinguishing appropriately among degrees of improbability. This generalizes both underconfidence and overconfidence, and is part of what I regard as a cluster of related errors, including underestimating the size of hypothesis space and failing to judge the strength of evidence properly. (These problems are the reason that judicial systems can't trust people to decide cases without all kinds of artificial-seeming procedures and rules about what kind of evidence is "allowed".)

The reality is that given all the numerous events and decisions we experience on a daily basis and throughout our lives, something with a 10% chance of happening or being true is something that we need to take quite seriously indeed. 10% is, easily, planning-level probability; it should attract a significant amount of our attention. By the same token, something which isn't worth seriously planning on shouldn't be getting more than single digits of probability-percentage, if that.
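
To make that concrete, here is a minimal sketch (the 50 predictions per year is purely an illustrative assumption):

```python
# How seriously should a 10% event be taken? Suppose you make 50 independent
# predictions in a year, each of which you assign a 10% chance.
p = 0.10
n = 50

expected_hits = n * p                    # expected number that come true
p_at_least_one = 1 - (1 - p) ** n        # chance that at least one comes true

print(f"Expected hits: {expected_hits:.1f}")        # 5.0
print(f"P(at least one): {p_at_least_one:.3f}")     # ~0.995
```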

There is a vast, huge spectrum of degrees of improbability below 1% (never mind 10% or 30%) that careful thinking can allow us to distinguish, even if our evolved intuitions don't. Consider for instance the following ten propositions:

(1) The Republicans will win control of both houses of Congress in the 2010 elections.

(2) It will snow in Los Angeles this winter.

(3) There will be a draft in the U.S. by 2020.

(4) I will be dead in a month.

(5) Amanda Knox (or Raffaele Sollecito) was involved in Meredith Kercher's death.

(6) A U.S. state will make a serious attempt to secede by 2020.

(7) The Copenhagen interpretation of quantum mechanics, as opposed to the many-worlds interpretation, is correct.

(8) A marble statue has waved or will wave at someone due to quantum tunneling.

(9) Jesus of Nazareth rose from the dead.

(10) Christianity is true.

I listed these in (approximately) order of improbability, from most probable to least probable. Now, all of them would be described in ordinary conversation as "extremely improbable". But there are enormous differences in the degrees of improbability among them, and moreover, we have the ability to distinguish these degrees, to a significant extent.

The 10%-30% range is for propositions like (1); the 1%-10% range for things like (2) (the last time it snowed in LA was in the 1960s). Around 1% is about right for (3). Propositions (4), (5), and (6) occupy something like the interval from 0.01% to 1% (I find it hard to discriminate in this range, and in particular to judge these three against each other). Propositions (8), (9), and (10), however, are in a completely different category of improbability: double-digit negative exponents, if you're being conservative. We could argue about (7), but it probably belongs somewhere in between (4)-(6) and (8)-(10); maybe around 10^(-10), if you account for post-QM theories somehow turning Copenhagen into something more mundane than it seems now.

So the point is, we, right here, have the tools to make estimates that are a lot more meaningful than "probably yes" or "probably no". I remember reading that we tend to be overconfident on hard things and underconfident on easy things; I think we can afford to be a little more bold on the no-brainers.

Replies from: wedrifid, army1987
comment by wedrifid · 2010-01-01T02:45:42.987Z · LW(p) · GW(p)

Propositions (8), (9), and (10), however, are in a completely different category of improbability: double-digit negative exponents, if you're being conservative.

It would of course be sacrilegious to place (8) below (9) and (10). Nevertheless, even in the case of apparently overwhelming evidence, if you disagree with mainstream beliefs at the 10^(-20) level you will be wrong rather a lot more often than once in 10^20 times.

Meanwhile, quantum tunnelling is a specific phenomenon which, if possible (very likely), gives fairly clear bounds on just how ridiculously improbable it is for a marble statue to wave. Even possible improbable worlds which make quantum tunnelling more likely still leave (8) less probable than (9) (but perhaps not (10)).

I personally place (10) at no less than 10^(-5) and would be comfortable accusing anyone going below 10^(-7) of being confused about probabilities (at least as related to human beliefs).

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-01-01T03:33:56.305Z · LW(p) · GW(p)

Nevertheless even in the case of apparently overwhelming evidence, if you disagree with a mainstream belief 10^(-20) times you will be wrong rather a lot more than once.

Like most majoritarian arguments, this throws away information: the relevant reference class is "mainstream beliefs you think are that improbable". (edit: no, I didn't read the whole sentence) In that class, it's not obvious to me that one would certainly be wrong more than once, if one could come up with 10^20 independent mainstream propositions that unlikely and seriously consider them all while never going completely insane. Going completely insane in the time required to consider one proposition seems far more likely than 10^-20, but also seems to cancel out of any decision, so it makes sense to implicitly condition everything on basic sanity.

(Related: Horrible LHC Inconsistency)

Meanwhile, quantum tunnelling is a specific phenomenon which, if possible (very likely) gives fairly clear bounds on just how ridiculously improbable it is for a marble statue to wave. Even possible improbable worlds which make quantum tunnelling more likely still leave (8) less probable than (9) (but perhaps not 10).

(9), and (10) for some definitions of "Christianity", being more likely than (8) seems conceivable due to interventionist simulators (something I really have no idea how to reason about), but not for any other object-level reason I can think of. Can you think of others?

I personally place (10) at no less than 10^(-5) and would be comfortable accusing anyone going below 10^(-7) of being confused about probabilities (at least as related to human beliefs).

I'd be inclined to accuse anyone going above... something below 10^-7... of being far too modest.

Replies from: wedrifid
comment by wedrifid · 2010-01-01T05:28:48.931Z · LW(p) · GW(p)

Like most majoritarian arguments, this throws away information: the relevant reference class is "mainstream beliefs you think are that improbable".

No, that is the reference class intended and described ("apparently overwhelming evidence").

In that class, it's not obvious to me that one would certainly be wrong more than once, if one could come up with 10^20 independent mainstream propositions that unlikely and seriously consider them all while never going completely insane.

Your prior is wrong (that is, it does not reflect the information that is freely available to you).

Going completely insane in the time required to consider one proposition seems far more likely than 10^-20, but also seems to cancel out of any decision, so it makes sense to implicitly condition everything on basic sanity.

Considering normal levels of sanity are sufficient. Failing to account for the known weaknesses in your reasoning is a failure of rationality.

I'd be inclined to accuse anyone going above... something below 10^-7... of being far too modest.

I am comfortable accusing you of being confused about probabilities as related to human beliefs.

comment by A1987dM (army1987) · 2013-11-07T22:23:49.749Z · LW(p) · GW(p)

Propositions (4), (5), and (6) occupy something like the interval from 0.01% to 1% (I find it hard to discriminate in this range, and in particular to judge these three against each other).

I'd guess you could estimate (4) to within an order of magnitude or better from an actuarial table.

comment by MichaelVassar · 2009-12-31T22:05:08.732Z · LW(p) · GW(p)

Well under 30% certainly, but I wouldn't give it under 4%. A decade is long and the US is young.

Replies from: komponisto
comment by komponisto · 2009-12-31T22:27:06.947Z · LW(p) · GW(p)

I think a draft is much more likely.

comment by Kutta · 2009-12-31T17:07:55.438Z · LW(p) · GW(p)

You display a pessimism much greater than I think is warranted. My predictions for some of your statements:

Next decade:

One or more countries will drop out of the Euro or the entire system will collapse (5%).

A US state will secede (3%).

Developed countries' welfare states will begin to collapse (state retirement and unemployment benefits and health care will be severely curtailed or eliminated in more than one developed country) (10%).

Next year:

One or more developed countries will see significant civil unrest due to ongoing problems with the economy (5-20%, depending on how we define significant unrest).

Holiday retail sales will be below consensus forecasts leading to some market turmoil in the early part of the year as the 'recovery' starts to look shaky (30%).

A developed country will suffer a currency crisis - most likely either the UK, US or one of the weaker Eurozone economies (15%).

A new round of bank failures and financial turmoil as the wave of Option ARM mortgage resets starts to hit and commercial real estate collapses including at least one major bank failure (a 'too big to fail' bank) (15%).

A major terrorist attack in the US (20%) most likely with a connection to Pakistan. The response will be disproportionate to the magnitude of the attack (99%).

Replies from: MichaelVassar
comment by MichaelVassar · 2009-12-31T22:08:30.776Z · LW(p) · GW(p)

These seem overly optimistic to me. Maybe increase the numbers by 50% to 100% other than 99?

comment by MichaelVassar · 2009-12-30T22:37:37.917Z · LW(p) · GW(p)

My second prediction is that the largest area of impact from technological change over the next decade will come from increasing communications bandwidth. Supercomputers a hundred times more powerful than those that exist today don't look revolutionary, while ubiquitous ultra-cheap wireless broadband makes storage and processing power less important. Improvements in small-scale energy storage, tech transfer from e-paper, and lower-power computer chips will probably help make portable personal computers more energy efficient, but for always-on augmented reality (and its sister tech, robotics) in areas with ubiquitous broadband, computing off-site is the way to go.

Replies from: sketerpot, orthonormal, cabalamat, timtyler, gwern
comment by sketerpot · 2009-12-31T02:48:37.777Z · LW(p) · GW(p)

Latency worries me, though. Bandwidth has been improving a lot faster than latency for a while now. For always-on augmented reality, I think that we're going to need some seriously more power-efficient computing so we can do latency-limited tasks locally. (Also, communication takes energy too -- often more than computation.)

Good news on that, by the way: modern embedded computer architecture and manufacturing techniques are going in the right direction for this. 3D integration will allow shorter wires, making all digital logic much more power efficient. Network-on-chip architectures will make it easier to incorporate special-purpose hardware for image recognition and such. And if you stick the memory right on top of your processor, that goes a long way to speeding it up and cutting down on energy used per operation. If you want to get even more radical, you could try something like bit-serial asynchronous processors (PDF) or something even stranger.

comment by orthonormal · 2009-12-30T23:38:12.262Z · LW(p) · GW(p)

Agree on the trend, but I'd put significant odds on some (as yet unexpected) trend being "the largest area of impact" in retrospect.

comment by cabalamat · 2010-01-01T18:58:38.882Z · LW(p) · GW(p)

My second prediction is that the largest area of impact from technological change over the next decade will come from increasing communications bandwidth.

And distributed to more people. >60% of people will have at least 1 Mb/s internet access by 2020 (75%).

comment by timtyler · 2009-12-31T09:53:33.456Z · LW(p) · GW(p)

Do you have any ideas about how the scale of the impact from various different technological changes should be measured in this context? As far as I know, there is no standard metric for this. So, I am not clear about what you mean.

comment by knb · 2009-12-31T02:25:27.786Z · LW(p) · GW(p)

Better than even odds that in 2020:

  1. GDP per capita at purchasing power parity for Singapore will be more than US$80,000 in 2008 dollars.

  2. GDP per capita for China (PRC) will be more than twice the 2009 level.

  3. Tourism to suborbital space will cost less than $50,000.

Replies from: orthonormal
comment by orthonormal · 2019-12-31T19:43:29.486Z · LW(p) · GW(p)

Outcomes:

1. No.

2. Yes.

3. Hell no.

Replies from: knb
comment by knb · 2020-02-28T04:56:00.382Z · LW(p) · GW(p)

Unless I'm missing something, looks like #1 is actually correct... 2019 GDP PPP per capita for Singapore was $103,181 according to the IMF, which adjusts to $84,775.10 according to the first inflation calculator on Google.

comment by James_Miller · 2010-01-01T17:34:47.479Z · LW(p) · GW(p)

Within ten years either genetic manipulation or embryo selection will have been used on at least 10,000 babies in China to increase the babies’ expected intelligence- 75%.

Within ten years either genetic manipulation or embryo selection will have been used on at least 50% of Chinese babies to increase the babies’ expected intelligence- 15%.

Within ten years the SAT testing service will require students to take a blood test to prove they are not on cognitive enhancing drugs. – 40%

All of the major candidates for the 2016 presidential election will have had samples of their DNA taken and analyzed (perhaps without the candidates’ permission.) The results of the analysis for each candidate will be widely disseminated and will influence many peoples' voting decisions - 70%

While president, Obama will announce support for a VAT tax - 70%.

While president, Obama will announce support for means testing Social Security - 70%

Within ten years the U.S. repudiates its debt either officially or with an inflation rate of over 100% for one year - 20%.

Within five years the Israeli economy will have been devastated because many believe there is a high probability that an atomic bomb will someday be used against Israel – 30%

Within ten years there will be another $200 billion+ Wall Street Bailout - 80%

Replies from: James_Miller, gwern, ciphergoth, orthonormal, Pablo_Stafforini, James_K, ciphergoth
comment by James_Miller · 2020-01-03T23:12:58.524Z · LW(p) · GW(p)

I was very, very wrong.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2020-05-05T05:21:28.457Z · LW(p) · GW(p)

How many opportunities do you think we get to hear someone make clearly falsifiable ten-year predictions, and have them turn out to be false, and then have that person have the honour necessary to say "I was very, very wrong?" Not a lot! So any reflections you have to add on this would I think be super valuable. Thanks!

comment by gwern · 2010-08-21T09:47:53.452Z · LW(p) · GW(p)
  1. http://predictionbook.com/predictions/1689
  2. http://predictionbook.com/predictions/1690
    I think you are on crack for this one. 15%?! You seriously think there's a 15% chance that embryo selection and/or genetic manipulation for IQ will be developed, commercialized, and turned into an infrastructure capable of modifying roughly 9 million pregnancies a year? Where the hell are all the technicians and doctors going to come from, for one thing? There's a long lead time for that sort of thing.
  3. http://predictionbook.com/predictions/1691
    Ditto - America doesn't have that many phlebotomists, and would go batshit over a Collegeboard requirement like that. There would have to be an enormous national outcry over nootropics and tremendous take-up of drugs like modafinil, and there's zero sign of either. Even a urine or spit test would encounter tremendous opposition, and the Collegeboard has no incentive for such testing. (Cost, blame for false positives, and possibly dragging down scores which would earn it even more criticism. To name just the very most obvious negatives.)
  4. http://predictionbook.com/predictions/1696
    I think you forgot the part of your prediction where all the candidates went insane and agreed to such an incredibly status-lowering procedure, gave up all privacy, and completely forgot about how past candidates got away with not releasing all sorts of germane records.
  5. http://predictionbook.com/predictions/1576 (Not sure if your wording is exactly the same as Cowen's VAT prediction, but I figure it'll do.)
  6. http://predictionbook.com/predictions/1692
    I recently read a book on old-age public policy; amidst the endless details and financial minutiae, I was deeply impressed by how many ways there were to effectively means-test, even inadvertently, without obviously being means-testing or having that name. Judging could be very difficult.
  7. http://predictionbook.com/predictions/1693
    With a probability that high, shouldn't you be desperately diversifying your personal finances overseas? Either fork of your prediction means major pain for US debt, equity, or cash holders.
  8. http://predictionbook.com/predictions/1694
    The odds of an Iranian bomb aren't that terribly high, much less such an outcome happening.
  9. http://predictionbook.com/predictions/1695
    Definitions here are an issue. Some forecasts are for 2-500 billion dollars in defaults on student loans, which likely would provoke another bailout. Would that count? Does a 0% Fed rate and >0% Treasury rate constitute an ongoing bailout? etc.

All in all, this is a set of predictions that makes me think that I really should go on Intrade. I did manage to double my money at the IEM; at the time I assumed it was because I got lucky on picking McCain and Obama for the nominations, but if this is the best a random LWer can do, even aware of biases, basic data, and the basics of probability...

comment by Paul Crowley (ciphergoth) · 2010-01-01T20:55:28.534Z · LW(p) · GW(p)

All of the major candidates for the 2016 presidential election will have had samples of their DNA taken and analyzed (perhaps without the candidates’ permission.) The results of the analysis for each candidate will be widely disseminated and will influence many peoples' voting decisions - 70%

Within five years the Israeli economy will have been devastated because many believe there is a high probability that an atomic bomb will someday be used against Israel – 30%

Within ten years there will be another $200 billion+ Wall Street Bailout - 80%

I'd take the other side on any of these if we can find a way to make it precise.

comment by orthonormal · 2019-12-31T19:33:56.934Z · LW(p) · GW(p)

I hope you paid out on your bets.

comment by Pablo (Pablo_Stafforini) · 2019-12-31T19:22:06.325Z · LW(p) · GW(p)

As far as I can tell, every single one of your predictions has now been falsified.

comment by James_K · 2010-01-02T07:33:37.565Z · LW(p) · GW(p)

"While president, Obama will announce support for means testing Social Security - 70%"

I'd be willing to take those odds, with some refinements.

Replies from: James_Miller
comment by James_Miller · 2010-01-02T16:54:26.000Z · LW(p) · GW(p)

How about this - I win if before he leaves office I can point to a speech Obama gave in which he advocates means testing Social Security. Otherwise you win. The speech has to be given after today, so you don't fear this is some kind of trick.

If I win I get $100 from you. If you win I give you $233. But with these odds I'm indifferent to making the bet. So for me to be willing to bet I want you to agree that if Obama makes such a speech you have to pay me right away.
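
A quick check, as a minimal sketch, that these stakes line up with the 70% figure:

```python
# At the stated 70% probability, the expected value of the bet to the
# person offering these stakes should be close to zero (i.e. indifference).
p_win = 0.70       # probability stated for the prediction
win_amount = 100   # received if Obama gives such a speech
lose_amount = 233  # paid otherwise

expected_value = p_win * win_amount - (1 - p_win) * lose_amount
print(f"Expected value at 70%: ${expected_value:.2f}")  # about $0.10
```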

Replies from: James_K
comment by James_K · 2010-01-03T01:03:20.398Z · LW(p) · GW(p)

That works for me, with one little change. The end of his term needs to be counted as the end of a presidential election he doesn't win, rather than the inauguration of his successor. This is because the reason I don't think it's very likely is that the political effects on him would be dire, so if he does it as a lame-duck president he has nothing to lose. I'm still willing to take the risk on his second term, since even a second-term president is subject to some political forces.

And as a clarification, I take "means testing" to mean increasing or decreasing social security payouts based on a person's assets or income. It also has to apply to US citizens to count.

And since I'm not an American, I'd just like to confirm that the bet is in US dollars. That works for me, and I assume it works for you too.

Replies from: James_Miller
comment by James_Miller · 2010-01-03T17:45:49.274Z · LW(p) · GW(p)

OK, I accept - and yes the bet should be in U.S. dollars.

Please contact me at EconomicProf@Yahoo.com so we can exchange addresses.

comment by Paul Crowley (ciphergoth) · 2020-05-05T05:19:01.391Z · LW(p) · GW(p)

Hey, looks like you're still active on the site, would be interested to hear your reflections on these predictions ten years on - thanks!

comment by whpearson · 2009-12-30T23:28:20.712Z · LW(p) · GW(p)

We will end the decade with some mobile energy storage system with an energy density close to or better than fat metabolism.

ETA: I mean in the context of electronics.

Replies from: Pfft, gwern, timtyler
comment by Pfft · 2009-12-31T21:35:18.474Z · LW(p) · GW(p)

From looking at the diagram, aren't we starting the decade with such a system (gasoline)?

Replies from: whpearson
comment by whpearson · 2009-12-31T22:13:42.049Z · LW(p) · GW(p)

You are the second person to mistake my intent. I meant in the field of mobile electronics. Take a look at where lithium ion is on this chart.

comment by timtyler · 2009-12-31T09:46:54.121Z · LW(p) · GW(p)

The graph you link to says magnesium and diesel already have greater energy density than fat.

So, I think you have to specify how portable, how common or cheap, and maybe whether you are talking about rechargeable or not - or the prediction is probably going to be vague, and subject to the criticism that it has already happened.

Replies from: whpearson
comment by whpearson · 2009-12-31T09:57:18.729Z · LW(p) · GW(p)

I meant commonly used for powering portable electronics. I don't assign a high probability to this. It is the upper bound of what I think worth discussing.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-12-31T23:12:05.598Z · LW(p) · GW(p)

Right. TNT does not count as a mobile energy storage system.

I think you're wrong; but it's a really interesting prediction.

The reason I think you're wrong is that the rate of improvement of technologies in a field is more-or-less fixed within a field, because it depends on the economics, not on the science. Moore's Law exists not because there's some magic about semiconductors, but because the market is sized and structured such that you need to sell people a new system every 2 years, and you need to double performance to get people to buy a new system.

This means you can look at the past exponential curve for battery density, and project it into the future with some confidence. I don't know what the exponent per year is; but my gut feeling before checking any data or doing any calculations is that it isn't high enough.

Replies from: cabalamat, whpearson
comment by cabalamat · 2010-01-01T19:20:10.324Z · LW(p) · GW(p)

Moore's Law exists not because there's some magic about semiconductors, but because the market is sized and structured such that you need to sell people a new system every 2 years, and you need to double performance to get people to buy a new system.

I disagree.

I am typing this on a machine I bought 6 years ago. Its CPU speed is still competitive with current hardware. This lack of speedup is not because processor manufacturers haven't been trying to make processors faster; they have. The reason for the lack of speedup is that it is hard to do. The problem is more to do with the nature of physical reality than the structure and economics of the computer industry.

Consider cars. They do not halve in price every two years. Why not? Because they are designed to move people around, and people are roughly the same size they have always been. But computers move bits around, and bits can be made very small (both in terms of the size of circuitry and the power dissipated); this is the fundamental reason why the computer/communications industry has been able to halve prices / double capabilities every year or two for the last half century.

comment by whpearson · 2009-12-31T23:40:04.101Z · LW(p) · GW(p)

I don't think there is an exponential curve as such for battery tech. Li-ion came in about 2006? And nothing much has improved since then. The trouble with batteries is that you can't just shrink components and get an improvement, as you do with semiconductors. Your components are already on the atomic scale. So more fundamental breakthroughs are needed.

The prediction is based mainly on our increasing control of biology and the ability to work on the small scale. If nothing else, we'll invent a way to metabolise fat or carbohydrates to electricity and have small home bioreactors that produce carbs and make nice little cartridges for people to plug into their electronics. Maybe not in 10 years, but some substantial movement is definitely possible in this direction.

Replies from: timtyler, MatthewB
comment by timtyler · 2010-01-02T11:26:03.393Z · LW(p) · GW(p)

A graph of battery energy density between 1985 and 2008:

http://www.kk.org/thetechnium/Battery%20Energy%20Density.jpg

Extrapolate away!
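
As a rough sketch of what such an extrapolation implies (the density and trend figures below are approximate assumptions, not values read off the chart):

```python
# What annual improvement in battery energy density would be needed to match
# fat metabolism within a decade? All figures are approximate assumptions.
fat_mj_per_kg = 37.0      # metabolisable energy of fat, roughly 37 MJ/kg
li_ion_mj_per_kg = 0.7    # ~200 Wh/kg commercial Li-ion, roughly 0.7 MJ/kg
years = 10
assumed_trend = 0.06      # assume ~6%/yr historical improvement

required_rate = (fat_mj_per_kg / li_ion_mj_per_kg) ** (1 / years) - 1
projected = li_ion_mj_per_kg * (1 + assumed_trend) ** years

print(f"Required improvement: {required_rate:.0%} per year")   # ~49%/yr
print(f"Projected at ~6%/yr:  {projected:.2f} MJ/kg by 2020")  # ~1.25 MJ/kg
```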

comment by MatthewB · 2010-01-01T02:30:55.796Z · LW(p) · GW(p)

What about some of the advances in micro-generators and Fuel-Cells that I have read about?

For instance, I have seen one of those tiny turbine engines running to power an equally tiny generator, and it looked to provide a hellofa lotta power for its size. I know the military is putting them into some applications in the field, so it will probably not be too terribly long before we see them on things like Laptops/tablets or cell phones.

Replies from: whpearson
comment by whpearson · 2010-01-01T11:13:34.428Z · LW(p) · GW(p)

I haven't seen anything recent on these. Any keywords to google? The key thing for a consumer electronics application is ease of getting the fuel. People don't want to have to head out to the shops to get it every few days, which is why rechargeable batteries are the current winner.

Replies from: MatthewB
comment by MatthewB · 2010-01-02T11:53:30.339Z · LW(p) · GW(p)

Try "MIT Micro Turbine Generator". That will get you to the base technology. I tried to find the DARPA Page, but it seems to have been buried. The MIT Technology has also got a lot smaller from the 2006 initial turbines, which were roughly the size of a quarter. They know measure less than 1cm on a side. The Generator that creates the electricity from these things is roughly the same size as the turbine. It basically looks like a DVD Motor (really flat and broad).

I saw them as a field power source for a laser designator and weapon (a modified laser designator that could be used as a sniper weapon), and as a source for communications gear. They used the same propellant that a butane lighter uses (the stuff in an aerosol can), and they were said to run for much longer than one day of full use on one charge.

The problems with them:

Heat and noise. They make a high-pitched whine that can be muffled, yet is still easy to pick up on a mic that has the appropriate filtering software. The heat can also be shielded, but it creates a problem for the user. A last rumor I heard is that when these things fail, they can cause the propellant to burn off. I have only heard one person talking about that, though.

MIT is not the only one to come up with small turbines to use as power sources. Some of the really small jet-turbine engines (1/2" in diameter, and 2" to 3" long) have been found to be excellent power sources as well when coupled to a generator.

Two semesters ago, I looked into making my own micro-turbine as a project for an engineering lab (I couldn't find anyone willing to donate the mill time on a CAD/CAM mill to make the turbine blades, and I couldn't afford the ready-made ones). This is what led to my discovery of most of these (and then friends helped with actually seeing one).

comment by blogospheroid · 2010-01-02T11:45:25.475Z · LW(p) · GW(p)

At least one Asian movie will exceed $400 mn in worldwide box office gross before the end of the decade.

It will most probably not be a wuxia movie. My guess of its genre is urban action or speculative fiction.

Replies from: orthonormal, rahul, gwern
comment by orthonormal · 2019-12-31T20:07:37.493Z · LW(p) · GW(p)

Wolf Warrior 2 did $874 million in China alone; China's rapidly growing domestic market won this prediction singlehandedly.

comment by rahul · 2011-01-03T07:36:10.720Z · LW(p) · GW(p)

I agree. I especially see a lot of convergence in present day mainstream Bollywood cinema with conventional blockbuster Hollywood fare in terms of both plots and production values. So expect a Moulin Rouge-like crossover musical in English with a major Hollywood box-office draw, an Indian model female lead, rags-to-riches storyline, Inception-like action sequences and CGI by studios in Hyderabad and Bangalore.

comment by gwern · 2010-08-24T11:02:35.167Z · LW(p) · GW(p)

http://predictionbook.com/predictions/1708

Seems like a solid prediction. 2020 allows a lot of growth in China & India, and Bollywood-style movies already play well in the West - look at Slumdog Millionaire which nearly grossed $400M, despite being a British film on Indian matters.

comment by CannibalSmith · 2009-12-31T12:47:39.571Z · LW(p) · GW(p)

I estimate 90% odds that Emotiv's EPOC will fail like the Segway did.

I have one of these puppies. It's the most fickle device I've laid my hands on. It's useless for anything except gaining nerd status points. Hey, do you guys want me to post a detailed review? :)

Replies from: PhilGoetz
comment by PhilGoetz · 2009-12-31T22:50:01.077Z · LW(p) · GW(p)

I'd like to see a review, but it isn't a LW thing. It would be nice to have a forum / news structure, so that we could have a section for "Off-topic posts". Heck, it would be nice to sort the posts by topic.

Replies from: JGWeissman
comment by JGWeissman · 2009-12-31T23:06:25.098Z · LW(p) · GW(p)

it would be nice to sort the posts by topic.

Isn't that what tags are for?

comment by whpearson · 2009-12-30T22:55:35.707Z · LW(p) · GW(p)

For the next decade:

I'd bet at about 2:3 odds that energy consumption will grow on par with or less than population growth.

Any rise in the average standard of living will come from making manufacturing/logistics more efficient, or from a redistribution from the very rich to the less well off. There is still scope for increased efficiency through reducing the transport of people and through more automation.

Replies from: orthonormal, gwern
comment by orthonormal · 2009-12-30T23:36:55.670Z · LW(p) · GW(p)

I'd take the other side at those odds. Per capita energy expenditures in China are set to skyrocket as rural areas industrialize, and I expect the same of many Second World nations. I don't think increases in efficiency will dwarf that effect quite yet.

Replies from: whpearson
comment by whpearson · 2009-12-31T00:05:05.743Z · LW(p) · GW(p)

I'm basically betting that a short-term lack of oil (as evidenced by reduced production in 2008 and the high current price) will put a brake on that expansion. Or that the industrialization of China will only happen if First World countries reduce their energy consumption to allow it, as they did in 2008.

Data from the BP energy review.

Replies from: orthonormal
comment by orthonormal · 2009-12-31T00:25:06.972Z · LW(p) · GW(p)

Interesting consideration; but on the other hand, China isn't afraid to build nuclear power plants or burn coal.

Replies from: whpearson
comment by whpearson · 2009-12-31T00:51:13.065Z · LW(p) · GW(p)

An interesting article on China and energy. Nuclear has a lead time (optimistically) of 3 years, so their prediction of 60-90 GWe won't be too far off. It actually looks like they are planning more wind than nuclear. I'm really curious where they expect the 500-odd GWe they don't mention to come from. All coal? That'll be pretty dirty.

I was probably a little overconfident in my initial bet. I do expect the ratio of energy consumption growth to population growth to trend downwards though.

Replies from: sketerpot, orthonormal
comment by sketerpot · 2009-12-31T02:33:58.787Z · LW(p) · GW(p)

It actually looks like they are planning more wind than nuclear.

Wrong. (Well, a little bit right, but wrong in all the ways that matter.) According to the article you linked, they're planning to build about 60-90 GW of nuclear capacity (let's say 80 GW to simplify the arithmetic) and 100 GW of wind. But what we really care about is how much energy they get from those sources per year, and to find that, we have to multiply the peak power generation capacities by the capacity factor for each source.

Nuclear power has a capacity factor of at least 93% for the newer plant designs that China is building (or even for older plants after operators get experience), so we'll say that their average production is (80 GW) * 0.93 = 74.4 GW average.

Wind power has a capacity factor of around 21% right now. Since we're talking about 2020, i.e. The Future!!, let's assume they get it up to a whopping 30%. Their energy production from wind would come out to (100 GW) * 0.3 = 30 GW average, or less than half of their projected nuclear production.

The average power figures are much more meaningful than the capacity numbers, but the wind salesmen quote whatever numbers make them sound most impressive, and the news media report it. It's as ubiquitous as it is misleading.
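
The same arithmetic as a minimal sketch, using the capacities and capacity factors assumed above:

```python
# Average generation = nameplate capacity x capacity factor,
# using the capacities and capacity factors assumed above.
def average_power_gw(nameplate_gw: float, capacity_factor: float) -> float:
    """Average continuous output implied by peak capacity and capacity factor."""
    return nameplate_gw * capacity_factor

nuclear_avg = average_power_gw(80.0, 0.93)   # ~74.4 GW average
wind_avg = average_power_gw(100.0, 0.30)     # ~30.0 GW average (optimistic CF)

print(f"Nuclear: {nuclear_avg:.1f} GW average, wind: {wind_avg:.1f} GW average")
```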

Replies from: whpearson
comment by whpearson · 2009-12-31T10:17:55.337Z · LW(p) · GW(p)

Mea culpa. I forgot how misleading some of the energy numbers could be.

comment by orthonormal · 2009-12-31T01:57:15.074Z · LW(p) · GW(p)

The article estimates that China's electricity capacity will double from 2008 to 2020; it doesn't seem to list an estimate for electricity production, but I'd think it would trend in much the same way, significantly faster than China's (rapidly falling) population increase. Reading this article makes me even more eager than before to take the "over" at these odds.

Replies from: whpearson
comment by whpearson · 2009-12-31T12:27:56.387Z · LW(p) · GW(p)

I'm rethinking my wager. Here is some information that I found, which I should have looked at before.

Average energy consumption growth over the 15 years to 2008 was 2.13%. This is very choppy data; it varies between 0.09% and 4.5% (the latter in 2004, then trending downwards). This period included a doubling of energy consumption by China in 7 years (2001-2008).

Average population growth is trending downwards and is at 1.1%.
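
A quick check of what those figures imply, as a minimal sketch using only the numbers quoted above:

```python
import math

# Implied rates from the figures quoted above.
china_annual = 2 ** (1 / 7) - 1                        # doubling in 7 years
world_doubling_years = math.log(2) / math.log(1.0213)  # 2.13%/yr doubling time

print(f"China's implied annual growth: {china_annual:.1%}")         # ~10.4%
print(f"Doubling time at 2.13%/yr: {world_doubling_years:.0f} yr")  # ~33 years
```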

I was probably putting too much weight on my own country's not-very-well-thought-out energy policy.

What odds would you give on the energy consumption growth rate being lower over the next 10 years than over the previous 10 (2.4%)?

Replies from: orthonormal
comment by orthonormal · 2010-01-02T01:28:59.013Z · LW(p) · GW(p)

Because of the Second World's larger growth rate (and the fact that they occupy a larger part of the total now), I think the odds of energy growth being lower than 2.4% are somewhat worse than even. I'm quite metauncertain; I don't think I'd actually bet unless someone were giving me 3:2 odds to bet the 'over', or 4:1 odds to bet the 'under'.

comment by Unknowns · 2010-01-01T08:17:04.619Z · LW(p) · GW(p)

I predict a 10% chance that I win my bet with Eliezer in the next decade (the one about a transhuman intelligence being created not by Eliezer, not being deliberately created for Friendliness, and not destroying the world).

Replies from: Baughn, orthonormal, dfranke
comment by Baughn · 2010-01-01T12:07:02.996Z · LW(p) · GW(p)

I'll go ahead and claim a 98% chance that, if a transhuman, non-Friendly intelligence is created, it makes things worse. And an 80% chance that this is in a nonrecoverable way.

I kinda hope you're right, but I just don't see how.

Replies from: Unknowns
comment by Unknowns · 2010-01-01T13:26:30.277Z · LW(p) · GW(p)

This prediction is technically consistent with my prediction (although this doesn't mean that I don't disagree with it anyway.)

Replies from: Baughn
comment by Baughn · 2010-01-02T17:30:53.742Z · LW(p) · GW(p)

In other words, one of us did not specify the prediction correctly.

I don't think it's me. I deliberately didn't say it'd destroy the world. Would it be correct to modify yours to say "..and not making the world a worse place"?

Replies from: Unknowns, Technologos
comment by Unknowns · 2010-01-02T19:18:16.090Z · LW(p) · GW(p)

No. If you look at the original bet with Eliezer, he was betting that under those conditions the AI would literally destroy the world. In other words, if both of us are still around and I'm capable of claiming the money, I win the bet, even if the world is worse off.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-02T20:38:23.304Z · LW(p) · GW(p)

Yup. If he lives to collect, he collects.

comment by Technologos · 2010-01-02T17:35:15.641Z · LW(p) · GW(p)

one of us did not specify the prediction correctly

Assuming that there is, in fact, a correct way to specify the predictions. It's possible that you weren't actually disagreeing and that you both assign substantial probability to (world is made worse off but not destroyed | non-FAI is created) while still having a low probability for (non-FAI is created in the next decade).

comment by orthonormal · 2011-01-02T16:32:00.689Z · LW(p) · GW(p)

Considering that the bet includes "not destroying the world", the only fair way to do this type of bet (for money) is for you to give the other party $X now, and for them to give you $Y later if you turn out to be correct.

Replies from: Unknowns
comment by Unknowns · 2011-01-04T08:04:01.523Z · LW(p) · GW(p)

That's exactly what happened; I gave Eliezer $10, and he will pay me $1000 when I win the bet.

comment by dfranke · 2010-01-01T18:34:40.100Z · LW(p) · GW(p)

I'll put down money on the other side of this prediction provided that we can agree on an objective definition of "transhuman intelligence".

Replies from: Unknowns
comment by Unknowns · 2010-01-01T19:35:39.829Z · LW(p) · GW(p)

My bet with Eliezer can be found at http://lesswrong.com/lw/wm/disjunctions_antipredictions_etc/.

I said there at the time, "As for what constitutes the AI, since we don't have any measure of superhuman intelligence, it seems to me sufficient that it be clearly more intelligent than any human being." Everyone's agreement that it is clearly more intelligent would be the "objective" standard.

In any case, I am risk averse, so I don't really want to bet on the next decade, which according to my prediction would give me a 90% chance of losing the bet. The bet with Eliezer was indefinite, since I already paid; I am simply counting on it happening within our lifetimes.

Replies from: dfranke, LucasSloan
comment by dfranke · 2010-01-01T20:26:05.366Z · LW(p) · GW(p)

I like your side of the original bet because I think the probability that the first superintelligent AI will be only slightly smarter than humans, non-goal-driven, and non-self-improving, and therefore non-Singularity-inducing, is better than 1%. The reason I'm willing to bet against you on the above version is that I think 10% is way overconfident for a 10-year timeframe.

comment by LucasSloan · 2010-01-01T23:12:45.062Z · LW(p) · GW(p)

Would a sped-up upload count as super-intelligent in your opinion?

comment by orthonormal · 2009-12-31T00:12:42.885Z · LW(p) · GW(p)

In an analysis that does not account for any health-care reform bill, the Department of Health and Human Services projected that health care expenditures would double from the 2009 level of $2.2 trillion (16.2% of 2009 GDP) to $4.4 trillion in 2018 (20.3% of projected 2018 GDP). This provides us a baseline from which to predict the cost-control effectiveness of health care reform.

I'm somewhat bullish on the potential of the pilot programs and the excise tax to lower med costs for a given level of health outcomes, although I'm not supremely confident in that. I also think there is a long tail of events or technologies that could unexpectedly increase med expenses (that would do so with or without health-care reform). Furthermore, the current bill will expand coverage for a substantial number of people, as a result of which total expenditures will definitely rise. All things together, here are my (very rough) intuitions:

  • I'd give 1:1 odds that health-care expenditure is less than or equal to $5 trillion in 2018.
  • I'd give 5:1 odds that it's less than or equal to $4 trillion in 2018.
  • I'd give 5:1 odds that it's greater than $6 trillion in 2018.
  • Conditioned on current health-care reform failing (i.e. no pilot programs or excise tax), I'd only give 2:1 odds that health-care expenditure is less than or equal to $4.4 trillion in 2018. (Long tails and overly rosy estimates.)

EDIT: I had this up for a few minutes with different numbers, before I remembered that the individual mandate and subsidies would raise med expenses significantly.
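
A minimal sketch of the probabilities these odds imply; treating the 5:1 lines as odds against is an assumption here, but it is the only reading consistent with the 1:1 line above:

```python
# Convert "a:b" odds to a probability: P = a / (a + b).
# Which side each line refers to is an assumption; the readings below are the
# ones that keep the list self-consistent (P(<= $4T) and P(> $6T) cannot both
# exceed the 50% assigned to <= $5T).
def prob(for_: float, against: float) -> float:
    return for_ / (for_ + against)

print(f"<= $5T in 2018:           {prob(1, 1):.0%}")  # 1:1           -> 50%
print(f"<= $4T in 2018:           {prob(1, 5):.0%}")  # 5:1 against   -> ~17%
print(f" > $6T in 2018:           {prob(1, 5):.0%}")  # 5:1 against   -> ~17%
print(f"<= $4.4T if reform fails: {prob(2, 1):.0%}")  # 2:1 in favour -> ~67%
```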

Replies from: orthonormal, gwern
comment by orthonormal · 2019-12-31T19:49:28.866Z · LW(p) · GW(p)

$3.6 trillion in 2018 (17.7% of GDP), so we don't even need to argue about adjusting for inflation to judge these. Thanks, Obamacare!

comment by gwern · 2010-08-18T10:03:39.996Z · LW(p) · GW(p)

Are those figures inflation-adjusted?

In order:

(As health reform passed, I omit any consideration of #4.)

comment by LucasSloan · 2010-01-05T05:23:42.515Z · LW(p) · GW(p)

I get into UC Berkeley - 70%

Replies from: gwern
comment by gwern · 2010-08-26T10:18:44.493Z · LW(p) · GW(p)

http://predictionbook.com/predictions/1719 but what date should the prediction terminate on?

Replies from: LucasSloan
comment by LucasSloan · 2010-08-26T14:06:11.802Z · LW(p) · GW(p)

About 3 months ago.

Replies from: gwern
comment by gwern · 2010-08-27T04:14:37.014Z · LW(p) · GW(p)

o.0

OK, did you get in or no?

Replies from: LucasSloan
comment by LucasSloan · 2010-08-27T06:50:59.510Z · LW(p) · GW(p)

Yes.

Replies from: gwern
comment by gwern · 2010-08-27T07:04:28.431Z · LW(p) · GW(p)

Congratulations to the both of us, then.

comment by Morendil · 2010-01-02T18:00:06.109Z · LW(p) · GW(p)

I expect that Brain-Computer Interfaces will make their way into consumer devices within the next decade, with disruptive consequences, once people become able to offload some auxiliary cognitive functions onto these devices.

Call it 75% - I would be more than mildly surprised if it hadn't happened by 2020.

For what I have in mind, what counts as BCI is the ability to interact with a smartphone-like device in an inconspicuous manner, without using your hands.

My reasoning is similar to Michael Vassar's AR prediction, and based on the iPhone's success. That success doesn't seem owed to any particular technological innovation; rather, Apple made things usable that were previously feasible only in the technical sense. A mobile device for searching the Web, finding out your GPS position and compass orientation, and communicating with others was technically feasible years ago. Making these features only slightly less awkward than before has revealed hidden demand for unsuspected usages, often combining old features in unexpected ways.

However, in many ways these interfaces are still primitive and awkward. "Sixth Sense" type interfaces are interesting, but still strike me as overly intrusive on others' personal space.

It would make sense to me to be able, say, to subvocalize a command such as "Show me the way to metro station X", then have my smartphone gently "tug" me in the right direction as I turn left and right, using a combination of compass and vibrations. This is only one scenario that strikes me as already easy to implement, requiring only some slightly greater integration of functionality.
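
A minimal sketch of what that compass-and-vibration loop might look like (every function name here is a hypothetical placeholder, not any real device API):

```python
# All names here (get_heading, bearing_to, vibrate_left/right) are hypothetical
# placeholders for whatever sensor and actuator API such a device would expose.
def tug_towards(target, get_heading, bearing_to, vibrate_left, vibrate_right,
                tolerance_deg: float = 15.0) -> None:
    """Nudge the user left or right until they roughly face the target."""
    heading = get_heading()                 # compass heading, in degrees
    bearing = bearing_to(target)            # bearing to the destination, in degrees
    error = (bearing - heading + 180) % 360 - 180   # signed error in [-180, 180)
    if abs(error) <= tolerance_deg:
        return                              # roughly on course: no feedback
    strength = min(1.0, abs(error) / 180)   # stronger tug for bigger corrections
    if error > 0:
        vibrate_right(strength)             # turn right
    else:
        vibrate_left(strength)              # turn left
```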

I expect such things to be disruptive, because the more transparent the integration between our native cognitive abilities, and those provided by versatile external devices connected to the global network, the more we will effectively turn into "augmented humans".

When we merely have to think of a computation to have it performed externally and receive the result (visually or otherwise), we will be effectively smarter than we are now with calculators (and already essentially able, some would say, to achieve the same results).

I am not predicting with 75% probability that such augmentation will be pervasive by 2020, only that by then some newfangled gadget will have started to reveal hidden consumer demand for this kind of augmentation.

ETA: I don't mind this comment being downvoted, even as shorthand for "I disagree", but I'd be genuinely curious to know what flaws you're seeing in my thinking, or what facts you're aware of that make my degree of confidence seem way off.

Replies from: Morendil, Morendil, gwern, Morendil, spasinsky
comment by Morendil · 2023-04-09T11:43:10.736Z · LW(p) · GW(p)

Ruling this prediction as wrong. (Only three years late, but who's counting.)

comment by Morendil · 2017-09-16T14:58:34.214Z · LW(p) · GW(p)

By now this looks rather unlikely in the original time-frame, even though there are still encouraging hints from time to time.

comment by gwern · 2010-08-24T11:07:10.926Z · LW(p) · GW(p)

I'm not thrilled about your vagueness about what technologies count as a BCI. Little electrodes? The gaming device that came out last year or so got a lot of hype, but the gamers I've talked to who have actually used it were all deeply unimpressed. Voice recognition? Already here in niches, but not really popular.

If you can't think of what interfaces specifically*, then maybe you should phrase your prediction as a negative: 'by 2020, >50% of the smart cellphone market will use a non-gestural non-keyboard based interface' etc.

* and you really should be able to - just 9 years means that any possible tech has to have already been demonstrated in the lab and have a feasible route to commercialization; R&D isn't that fast a process, and neither is being good & cheap enough to take over the global market to the point of 'pervasive'

Replies from: Morendil, Morendil
comment by Morendil · 2010-08-28T18:37:23.112Z · LW(p) · GW(p)

Yep, electrodes, as in the gaming devices. A headset is the form factor I have in mind, so not necessarily electrodes if this is to be believed. I don't want to commit to burdensome implementation details, but voice isn't what I mean - it doesn't count as "unobtrusive" to my way of thinking.

I envision something where I can just form the thought "nearest MacDonalds" (ETA: or somehow bring up a menu and select that, even from a restricted set) without it being conspicuous to an outside observer, and get some form of feedback from the device leading me in the right direction. A visual overlay would work, but so would a physical tug.

comment by Morendil · 2013-06-05T20:19:45.059Z · LW(p) · GW(p)

Three and a half years in, this.

Replies from: shminux
comment by shminux · 2013-06-05T20:40:54.490Z · LW(p) · GW(p)

Any updates to your original prediction?

Replies from: Morendil, Morendil
comment by Morendil · 2013-11-30T10:21:26.504Z · LW(p) · GW(p)

Now this.

comment by Morendil · 2013-06-06T06:10:35.905Z · LW(p) · GW(p)

I think I've come round to Gwern's point of view - this is a bit too vague. The news item I posted makes me feel like we're still on track for it to happen, though I could be a few years off the mark. I might knock it down to 65% or so to account for uncertainty in timing.

comment by spasinsky · 2010-01-03T00:17:26.389Z · LW(p) · GW(p)

Given the feasibility that currently exists for the gadgets you envision... and Apple's uncanny ability to bring those ideas to market... I say 2015 is a 75% target for the iThought side-processor device. :)

comment by Kevin · 2010-01-01T06:17:37.959Z · LW(p) · GW(p)

By 2020, an Earth-like habitable extrasolar planet is detected. I would take a wager on this one but doubt anyone would give me even odds.

Will anyone give me even odds if the bet is by 2015?

Replies from: Nick_Tarleton, Kevin, gwern
comment by Nick_Tarleton · 2010-01-01T06:29:11.706Z · LW(p) · GW(p)

I think I'd give better-than-even odds for either date, and would be shocked if no one else would. How are you defining "Earth-like" and "habitable"?

Replies from: Unknowns, wedrifid
comment by Unknowns · 2010-01-01T06:40:11.416Z · LW(p) · GW(p)

I think he just meant with liquid water, some type of atmosphere, and approximately Earth-sized. Given this, my guess is that they find one within the next three years. If he meant "habitable" to human beings without protection, i.e. an oxygen atmosphere etc., then it is extremely unlikely (less than a 2% chance) that they will find such a thing by 2020.

Replies from: gwern, Kevin
comment by gwern · 2010-08-19T08:49:45.017Z · LW(p) · GW(p)

Is it possible to have liquid water without life? I remember reading that an oxygen atmosphere was quite impossible without life, but am not sure about liquid water.

Replies from: Unknowns
comment by Unknowns · 2010-08-19T10:28:57.516Z · LW(p) · GW(p)

There could be an oxygen atmosphere without life for a short period of a planet's history (I'm not sure how long.) It wouldn't be possible for it to remain permanently.

According to our evidence, Mars had liquid water for a very long period, but no one considers this to be proof that there was life there.

Replies from: gwern
comment by gwern · 2010-08-20T09:03:34.216Z · LW(p) · GW(p)

According to our evidence, Mars had liquid water for a very long period,

I went to check this - maybe liquid water is a short-term enough thing that its mere presence is still weak evidence for an active biosphere, but apparently one timeline puts liquid water as present in large quantities for >600 million years. Bleh.

comment by Kevin · 2010-01-01T07:18:00.467Z · LW(p) · GW(p)

Yes.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-01T09:07:15.561Z · LW(p) · GW(p)

I'm not sure we have the technology to make that call even if such a planet does, in fact, lie within range of our telescopes.

Replies from: Kevin
comment by Kevin · 2010-01-01T10:36:22.075Z · LW(p) · GW(p)

We don't. My prediction then is only almost certainly true if we define habitable as a planet in a sun's habitable zone. However, I still think finding a habitable planet, per Unknowns's definition, is likely to happen by 2020.

http://www.npr.org/templates/story/story.php?storyId=101493448

If Kepler does indeed find hundreds of planets in habitable zones, that should get the popular imagination going enough for the successor to Kepler to be very well funded. Kepler Mark II in the air by 2017?

comment by wedrifid · 2010-01-01T06:44:40.147Z · LW(p) · GW(p)

I think I'd give better-than-even odds for either date, and would be shocked if no one else would.

At even odds I would take a loan to make the bet.

comment by gwern · 2010-08-19T08:48:56.046Z · LW(p) · GW(p)

http://predictionbook.com/predictions/1676

Replies from: Kevin, Kevin
comment by Kevin · 2010-08-24T06:52:28.327Z · LW(p) · GW(p)

http://news.ycombinator.com/item?id=1628822

Replies from: gwern
comment by gwern · 2010-08-24T09:14:10.504Z · LW(p) · GW(p)

That's a good link (maybe half-forgotten rumors of this were why I guessed so high), but I hope you're not expecting me to close the prediction as correct based on just online rumors. :)

Replies from: Kevin
comment by Kevin · 2010-08-24T09:22:13.052Z · LW(p) · GW(p)

:) Definitely not closed yet, but I figured I would put the link up just as a running update of the prediction.

comment by Richard_Kennaway · 2009-12-31T00:24:24.405Z · LW(p) · GW(p)

I am 99% confident that AGI comparable to or better than a human, friendly or otherwise, will not be developed in the next ten years.

I am 75% confident that within ten years, the Bayesian paradigm of AGI will be just yet another more or less useful spinoff of the otherwise failed attempt to build AGI.

Replies from: timtyler, PhilGoetz, gwern
comment by timtyler · 2009-12-31T09:35:25.214Z · LW(p) · GW(p)

Shane Legg gives a 10% probability of that here:

http://www.churchofvirus.org/bbs/attachments/agi-prediction.png

My estimate here is a bit bigger - maybe around 15%:

http://alife.co.uk/essays/how_long_before_superintelligence/graphics/pdf_no_xp.png

You seem to be about ten times more confident than us. Is that down to greater knowledge - or overconfidence?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-12-31T13:30:49.620Z · LW(p) · GW(p)

You seem to be about ten times more confident than us. Is that down to greater knowledge - or overconfidence?

You seem to be about ten times less confident than me. Is that down to greater knowledge - or underconfidence?

Replies from: timtyler
comment by timtyler · 2009-12-31T15:07:20.742Z · LW(p) · GW(p)

I'm not very confident - primarily because we are talking ten years out - and the future fairly rapidly turns into a fog of possibilities which makes it difficult to predict.

Which brings us back to why you seem so confident. What facts or observations do you find provide the most compelling evidence that intelligent machines are at least ten years off? Indeed, how do you know that the NSA doesn't have such a machine chained up in its basement right now?

Replies from: Richard_Kennaway, MatthewB
comment by Richard_Kennaway · 2009-12-31T16:44:48.645Z · LW(p) · GW(p)

What facts or observations do you find provide the most compelling evidence that intelligent machines are at least ten years off?

It hasn't worked in sixty years of trying, and I see nothing in the current revival to suggest they have any ideas that are likely to do any better. To be specific, I mean people such as Marcus Hutter, Shane Legg, Steve Omohundro, Ben Goertzel, and so on -- those are the names that come to me off the top of my head. And by their current ideas for AGI I mean Bayesian reasoning, algorithmic information theory, AIXI, Novamente, etc.

I don't think any of these people are stupid or crazy (which is why I don't mention Mentifex in the same breath as them), and I wouldn't try to persuade any of them out of what they are doing unless I had something demonstrably better, but I just don't believe that collection of ideas can be made to work. The fundamental thing that is lacking in AGI research, and always has been, is knowledge of how brains work. The basic ideas that people have tried can be classified as (1) crude imitation of the lowest-level anatomy (neural nets), (2) brute-forced mathematics (automated reasoning, logical or probabilistic), or (3) attempts to code up what it feels like to be a mind (the whole cognitive AI tradition).

Indeed, how do you know that the NSA doesn't have such a machine chained up in its basement right now?

My estimates are unaffected by hypothetical possibilities for which there is no evidence, and are protected against that lack of evidence.

Besides, the current state of the world is not suggestive of the presence of AIs in it.

ETA: But this is becoming a digression from the purpose of the thread.

Replies from: timtyler, Jack
comment by timtyler · 2009-12-31T19:02:18.151Z · LW(p) · GW(p)

Thanks for sharing. As previously mentioned, we share a generally negative impression of the chances of success in the next ten years.

However, it appears that I give more weight to the possibility that there are researchers within companies, within government organisations, or within other countries who are doing better than you suggest - or that there will be at some time over the next ten years. For example, Voss's estimate (from a year ago) was "8 years" - see: http://www.vimeo.com/3461663

We also appear to differ on our estimates of how important knowledge of how brains work will be. I think there is a good chance that it will not be very important.

Ignorance about NSA projects might not affect our estimates, but perhaps it should affect our confidence in them. An NSA intelligent agent might well remain hidden - on national security grounds. After all, if China's agent found out for sure that America had an agent too, who knows what might happen?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-12-31T23:07:40.267Z · LW(p) · GW(p)

I would guess that the NSA is more interested in quantum computing than in AI.

Replies from: timtyler
comment by timtyler · 2010-01-01T10:41:49.238Z · LW(p) · GW(p)

They are the National Security Agency. Which of those areas presents the biggest potential threat to national security? With a machine intelligence, you could build all the quantum computers you would ever need.

comment by Jack · 2010-01-01T09:28:00.936Z · LW(p) · GW(p)

The fundamental thing that is lacking in AGI research, and always has been, is knowledge of how brains work.

This is my sense as well. I also think there is a substantial limit on what we're likely to learn about the brain, since obvious ethical constraints prevent us from studying brain function at large scale, with neuron-level resolution, in real time. Does anyone know of any technologies on the horizon that could change this in the next ten years?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-01T10:15:59.985Z · LW(p) · GW(p)

http://lesswrong.com/lw/vx/failure_by_analogy/

Replies from: taw, Jack
comment by taw · 2010-01-02T23:30:51.258Z · LW(p) · GW(p)

From the quote in that post:

"One of [the Middle Ages'] characteristics was that 'reasoning by analogy' was rampant; another characteristic was almost total intellectual stagnation, and we now see why the two go together.

There's no reason to spread such myths about medieval history.

The main characteristics of the Early Middle Ages were low population densities, very low urbanization rates, very low literacy rates, and almost zero lay literacy rates. Being in a reference class of times and places with such characteristics, it would be a miracle if any significant progress happened during the Early Middle Ages.

The High and Late Middle Ages, on the other hand, had plenty of technological and intellectual progress.

I'm much more puzzled about why the dense, urbanized, and highly literate Roman Empire was so stagnant.

Replies from: Jawaka
comment by Jawaka · 2010-01-05T14:03:32.587Z · LW(p) · GW(p)

China also springs to mind. I have listened to a documentary about the Chinese empire and distinctly remember how advanced yet stagnant it seemed. At the time my explanation was authoritarianism.

comment by Jack · 2010-01-01T12:19:04.017Z · LW(p) · GW(p)

All that is fine.

But 1) I'm not sure anyone has a good grasp of what the properties we're trying to duplicate are. I'm sure some people think they do, and it is possible someone has stumbled onto the answer, but I'm not sure there is enough evidence to justify any claims of this sort. How exactly would someone figure out what general intelligence is without ever seeing it in action? From the interior experience of being intelligent? From socialization with other intelligences? From an analogy to computers?

2) Let's say we do have, or can come up with, a clear conception of what the AGI project is trying to accomplish without better neuroscience. It still isn't obvious to me that the way to create intelligence will be easy to derive without more neuroscience. Sure, just from a conception of what flight is, it is possible to come up with solutions to the problem of heavier-than-air flight. But for the most part humans are not this smart. Despite the ridiculous attempts at flight with flapping wings, I suspect having birds to study --weigh, measure and see in action-- sped up the process significantly. The same goes for creating intelligence.

(Prediction: .9 probability you have considered both these objections and rejected them for good reason. And .6 you've published something that rebuts at least one of the above. :-)

comment by MatthewB · 2010-01-01T02:42:16.931Z · LW(p) · GW(p)

The NSA does have some scary machines chained in their "Basement," yet I doubt any of them approach AGI. All of them (that I am aware of - so, that would be 2) are geared toward some pretty straightforward real-time data mining, and I am told that the other important gizmos do pretty much the same thing (except with crypto).

I doubt that they have anything at the NSA (or other spooky agencies) that significantly outstrips the big names in enterprise computing. After all, the Government does go to the same names to buy its supercomputers that everyone else does. It's just the code that would differ.

Replies from: timtyler
comment by timtyler · 2010-01-01T10:48:19.516Z · LW(p) · GW(p)

So: you have a hotline to the NSA, and they tell you about all their secret technology?!? This is one of the most secretive organisations ever! If you genuinely think you know what they are doing, that is probably because they have you totally hoodwinked.

Replies from: MatthewB
comment by MatthewB · 2010-01-02T12:02:19.185Z · LW(p) · GW(p)

Hardly a hotline... A long, long time ago, when I was very young, I wound up working with the NSA for about six months. I was supposed to have finished school and gone to work for them full time... But, I flaked when I discovered that I could get laid pretty easily (women seemed much more important than an education at the time).

I still keep in touch, and I have found that an awful lot of their work is not hard to find out about. They may have me hoodwinked, as my job was hoodwinking others. However, I don't usually spend my time with any of my former co-workers talking about stuff that they shouldn't be talking about. Most of it is about stuff that is out in the open, yet that most people don't care about, or don't know about (usually because it's dead boring to most people).

And, I am not aware that I have stumbled onto any secret technology. Just two machines that I found to be freakishly smart. One of them did stuff that Google can probably now do (image recognition), and I am pretty sure that the other used something very similar to Mathematica. I was really impressed by them, but then I also did not know that things like Mathematica existed at the time. At the time I saw them, I was told by my handler that they were "Nothing compared to the monsters in the garage."

Edit: Anyone may feel free to think that I am a nut-job if they wish. At this point, I have little to no proof of anything at all about my life due to the loss of everything I ever owned when my wife ran off. So, you may take my comments with a grain of salt until I am better known.

comment by PhilGoetz · 2009-12-31T23:05:31.727Z · LW(p) · GW(p)

Can you be more specific about what you mean by the Bayesian paradigm of AGI? Is it necessarily a subset of good-old-fashioned symbolic AI? In that case, it's been dead for years. But if not, I can't easily imagine how you're going to enforce Bayes' theorem; or what you're going to enforce it on.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-01-02T14:00:58.730Z · LW(p) · GW(p)

Here's an example of what I had in mind by "the Bayesian paradigm" -- see especially pp.12-13. Bayesian reasoning may be the one correct form of reasoning about probabilities, just as the first-order predicate calculus is the one correct form of reasoning about the true and the false, but that does not make of it a method to automatically solve problems.

I also had in mind something broader than Bayesian reasoning alone, although that's a major part: its coupling with a goal system based on utility functions and their maximisation (the major thrust of the paper I linked).
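For concreteness, the two ingredients being referred to, in their standard textbook forms (a generic rendering, not anything specific to the linked paper): Bayes' rule for updating on data D, and expected-utility maximisation for choosing an action a over outcomes o.

```latex
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)},
\qquad
a^{*} = \arg\max_{a} \sum_{o} P(o \mid a)\, U(o)
```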

comment by gwern · 2010-08-18T10:05:32.066Z · LW(p) · GW(p)

I am 99% confident that AGI comparable to or better than a human, friendly or otherwise, will not be developed in the next ten years.

http://predictionbook.com/predictions/1670

I am 75% confident that within ten years, the Bayesian paradigm of AGI will be just yet another more or less useful spinoff of the otherwise failed attempt to build AGI.

I don't know how one would judge this and so haven't made a prediction for this one.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-08-18T13:25:21.615Z · LW(p) · GW(p)

Thanks for putting that up. I hadn't been aware of PredictionBook, so I've just made an account and posted a more precise prediction there myself.

Replies from: gwern
comment by gwern · 2010-08-19T04:07:18.843Z · LW(p) · GW(p)

Hopefully my comments and importation of predictions will lead to more PB awareness on LW.

comment by PhilGoetz · 2010-01-09T03:38:03.629Z · LW(p) · GW(p)

In 2010 or 2011, carry-on luggage on US airlines will be reduced to a single handbag that inspectors can search thoroughly.

Replies from: gwern
comment by gwern · 2010-08-26T10:19:38.347Z · LW(p) · GW(p)

http://predictionbook.com/predictions/1720

Replies from: PhilGoetz
comment by PhilGoetz · 2010-08-26T22:46:22.821Z · LW(p) · GW(p)

2010 is almost over, so the odds of my being right are now considerably less.

Replies from: gwern
comment by gwern · 2010-08-27T04:12:52.606Z · LW(p) · GW(p)

Well, we're only 8/24 of the way to the end of 2011, so you could still be right. Ganbaru!

comment by magfrump · 2010-01-03T08:01:31.346Z · LW(p) · GW(p)

I would say better-than-even chances that sites like Intrade gain prestige in the next decade

and betting on predictions will become common (90% that in 2020 there is a student at 75% or so of high schools who will take bets on future predictions on any subject, 40% that >5% of the US middle class will have made a bet about a future prediction)

naive guesses based largely on http://www.fivethirtyeight.com/2009/11/case-for-climate-futures-markets-ctd.html

I predict further that I will continue to post on LW at least once a month next year (90%) and in 2020 (50%)

Replies from: orthonormal, gwern
comment by orthonormal · 2010-01-03T08:31:20.240Z · LW(p) · GW(p)

I predict further that I will continue to post on LW at least once a month next year (90%) and in 2020 (50%)

Is there any comparable website that you were posting on in 2000 and continue to post on today? I agree that LW is awesome, but web communities have a short shelf life (and a tendency to be superseded as web technology improves).

Replies from: magfrump
comment by magfrump · 2010-01-03T18:59:37.769Z · LW(p) · GW(p)

Probably a good reason to adjust the estimate down. On the other hand, I was 11 in 2000, so I wouldn't have been on this kind of site anyway; and conditional on the prediction that news-betting becomes more prestigious, rationality almost certainly will too.

Point taken, the real point being that I have no sense of how long a decade is, so I'll adjust that down to 20%.

I have been part of a different web community for five years and am still in touch with it, although only barely at the level of once a month. So my odds for awesomeness overcoming shelf lives may be higher than for most.

comment by gwern · 2010-08-25T06:16:25.254Z · LW(p) · GW(p)
  1. http://predictionbook.com/predictions/1710

    Kind of vague, but I suppose it's not too hard to do a search and note that the NYT only mentioned Intrade a few times in the 2000s and more in the 2010s.

  2. http://predictionbook.com/predictions/1709

    I have no idea how one would measure this one. I'm sure that at any high school you could find a student willing to wager with you on any damn topic you please.

  3. Not including a prediction for the middle class. Already true if you count sports, as many prediction markets such as Betfair do.
  4. http://predictionbook.com/predictions/1711
  5. http://predictionbook.com/predictions/1712

    Agree with orthonormal that this is seriously over-optimistic. The only site I even use today that I did in 2000 would be Slashdot, and I haven't commented there in a dog's age.

Replies from: magfrump
comment by magfrump · 2010-08-25T10:16:13.433Z · LW(p) · GW(p)

I probably meant for claim 3 to exclude sports.

Replies from: gwern
comment by gwern · 2010-08-25T12:34:03.040Z · LW(p) · GW(p)

Well, then you're using a variant definition of prediction market, and before I can feel confident judging any prediction of yours, I need to know what your idiosyncratic interpretation of the phrase is.

Replies from: magfrump
comment by magfrump · 2010-08-25T19:59:25.995Z · LW(p) · GW(p)

I agree that I wasn't making the most coherent claim, and since it's been a long time I can't guarantee fidelity to what I originally intended.

But my best guess, trying to phrase this as concretely as possible, is that I meant to predict that either

a) sports betting agencies would expand into non-sports venues and see significant business there

or b) newer betting agencies not created to serve sports would achieve similar success

I would be "disappointed" if "non-sports" meant something like player movement between teams and "excited" if it meant something like unemployment rates and vote shares in elections.

comment by pdf23ds · 2010-01-03T03:37:23.756Z · LW(p) · GW(p)

I have nowhere admitted that I have evidence of anyone else's mortality THAT I COULD PRESENT TO THEM. That is, I have no evidence for the mortality of people now alive, only for those already dead.

Hmm. You seem to be taking the position of a radical skeptic here. Would you agree? That position is almost always associated with sophistry, and neatly explains everyone's reaction to you, I believe. AFAIK, there's really no answer to radical skepticism (that's acceptable to the skeptics).

ETA: I wish he had had a chance to respond to this. Seems like it more directly addressed the troll's issues than other comments. Oh well, whatever.

comment by dfranke · 2010-01-01T19:05:48.043Z · LW(p) · GW(p)
  • By the end of 2013: Either the Iranian regime is overthrown by popular revolution, or there is an overt airstrike against Iran by either the US or Israel, or Israel is attacked by an Iranian nuclear weapon (70%).

  • Essentially seconding mattnewport: the price of gold reaches $3000USD, or inflation of the US dollar exceeds 12% in one year (65%).

  • The current lull in the increase of the speed at which CPUs perform sequential operations comes to an end, yielding a consumer CPU that performs sequential integer arithmetic operations 4x as quickly as a modern 3GHz Xeon (80%).

  • Android-descended smartphones outnumber iPhone-descended smartphones (60%).

  • The number of IMAX theaters in the US triples (40%).

Replies from: gwern, sketerpot
comment by sketerpot · 2010-01-02T08:29:59.093Z · LW(p) · GW(p)

The current lull in the increase of the speed at which CPUs perform sequential operations comes to an end, yielding a consumer CPU that performs sequential integer arithmetic operations 4x as quickly as a modern 3GHz Xeon (80%).

When you say sequential integer operations, do you mean integer operations that really are sequential? In other words, the instructions can't be performed in parallel because of data dependencies? If not, then this is already possible with a sufficiently wide superscalar processor or really big SIMD units.

But let's assume you really mean sequential integer operations. The only pipeline stage in this example that can't work on several instructions at once is the execute stage, so I'm assuming that's where the bottleneck is here. This means that the speed is limited by the clock frequency. So, here are two ways to achieve your prediction:

  1. Crank up the clock! Find a way to get it up to 12 GHz without burning up.

  2. Make the execute stage capable of running much faster than the rest of the processor does. This is natural for asynchronous processors; in normal operation the integer functional units will be sitting idle most of the time waiting for input, and the bulk of the time and complexity will be in fetching the instructions, decoding them, scheduling them, and in memory access and I/O. But in your contrived scenario, the integer math units could just go hog wild and the rest of the processor would keep them fed. This can be done with current semiconductor technology, I'm pretty sure.

So, either way, kind of an ambitious prediction. I like it.

Replies from: Valkyrie_Ice
comment by Valkyrie_Ice · 2010-01-02T11:20:15.793Z · LW(p) · GW(p)

Have you not heard that they discovered a way to use graphene as a one-to-one replacement for copper in chip production? That alone will allow speeds of 12-15 GHz.

I would put near-100% certainty on faster multicore chips, running at many times current speeds, being available by 2011-2012.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-02T12:16:05.796Z · LW(p) · GW(p)

Have you not heard that they discovered a way to use graphene as a one-to-one replacement for copper in chip production? That alone will allow speeds of 12-15 GHz.

This still seems very far from application; a quick search on your claim turned up only this paper, which isn't cited by anybody yet and has been publicized in a few popular articles.

Replies from: sketerpot
comment by sketerpot · 2010-01-02T21:33:06.035Z · LW(p) · GW(p)

Let's assume they put it into practice and start mass-producing processors with graphene interconnects with better-than-copper resistivity. We've got two things to worry about here: speed and power.

The speed of signal propagation along a wire depends on RC, the product of the resistance and the capacitance. Graphene lowers the resistance of a wire of a given size, but does nothing to lower the capacitance -- that depends on the insulator surrounding the wire and the shape of the wire and its proximity to other wires. The speed gains from graphene look moderate, but significant.

The power dissipated by sending signals through wires will be most of the power of future processors, if current trends continue. Power is a barrier to clocking chips fast. We can overclock processors a lot, but you've got to worry about them burning up. Decreasing resistivity improves the power situation somewhat, but the bulk of the interconnect's influence on power comes from its capacitance. Transistors have to charge and discharge the capacitance of the wires, and that takes power. So on power, graphene will help somewhat, but it's not the slam-dunk that Valkyrie Ice is expecting.
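To make the scaling argument explicit, here is a back-of-envelope sketch in Python; all of the numbers (resistance, capacitance, voltage, activity factor) are invented purely for illustration, and only the proportionalities matter:

```python
# Wire delay scales with R * C; dynamic (switching) power with alpha * C * V^2 * f.

def wire_delay(resistance, capacitance):
    """RC delay of a wire segment, in seconds: proportional to R * C."""
    return resistance * capacitance

def switching_power(capacitance, v_dd, frequency, activity=0.1):
    """Dynamic power of driving a wire: alpha * C * V^2 * f, in watts."""
    return activity * capacitance * v_dd ** 2 * frequency

C_WIRE = 0.2e-12                       # 0.2 pF of wire capacitance (unchanged by graphene)
R_COPPER, R_GRAPHENE = 200.0, 100.0    # ohms; assume graphene halves the resistance

speedup = wire_delay(R_COPPER, C_WIRE) / wire_delay(R_GRAPHENE, C_WIRE)
print(speedup)                         # 2.0 -- delay improves only by the resistance ratio

# Power is dominated by C, V and f, none of which graphene changes:
print(switching_power(C_WIRE, 1.0, 3e9))   # same result with either interconnect material
```

Halving the wire resistance halves the RC delay, while the switching-power term never sees R at all.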

tl;dr: Graphene interconnect sounds good, but not fantastic.

Replies from: Bo102010
comment by Bo102010 · 2010-01-02T22:29:08.853Z · LW(p) · GW(p)

Thank you - I was wanting to write something along similar lines in response to Valkyrie Ice's comment, but wouldn't have ended up with something this compact.

I'll add that clocking is just a piece of the puzzle when it comes to making computers that compute faster.

comment by Jack · 2010-01-01T10:30:48.968Z · LW(p) · GW(p)

The second estimate in each paragraph is conditional on the first (a worked example of the joint probability follows below).

By 2020 some kind of CO2 emissions regulation (cap and trade) will be in place in the US(.85). But total CO2 emissions in the US for 2019 will be no less than 95% of total CO2 emissions for 2008 (.9).

Obama wins reelection (.7). The result will be widely attributed to an improving economy (in the media and in polls and whether or not the economy actually improves) (.85)

By 2020 open elections are held for the Iranian presidency (no significant factions excluded from participation) (.5). The president (or some other position selected through open elections) is the highest position in the Iranian state (.5)
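To make the conditional structure concrete: treating the second number in the first paragraph as P(B|A), the joint probability that both halves come true is simply the product (a standard identity, not an extra claim by the author):

```latex
P(A \wedge B) = P(A)\,P(B \mid A) = 0.85 \times 0.90 \approx 0.77
```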

Replies from: philwelch, gwern
comment by philwelch · 2010-01-02T00:56:31.412Z · LW(p) · GW(p)

"The president (or some other position selected through open elections) is the highest position in the Iranian state (.5)"

Qualify this. Formally, the highest position in the British state is unelected. In terms of political power, the highest position in the British state is elected.

Replies from: Jack
comment by Jack · 2010-01-02T04:42:02.960Z · LW(p) · GW(p)

In terms of political power.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-31T00:39:27.865Z · LW(p) · GW(p)

For the next decade: collaborative filtering.

Replies from: orthonormal, JGWeissman, PhilGoetz
comment by orthonormal · 2011-01-02T16:16:52.833Z · LW(p) · GW(p)

Just one word: plastics.

comment by JGWeissman · 2009-12-31T01:28:08.139Z · LW(p) · GW(p)

Based on this article on collaborative filtering, we already have it. Every time I buy anything online, I am told what other products were bought by people who also bought what I bought. It is the central component of the StumbleUpon service.

So, what are you predicting?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-31T01:47:31.777Z · LW(p) · GW(p)

I'm predicting that in 2020 you'll look back at this blog comment and say, "Wow, he sure called that one."

Replies from: cabalamat, ciphergoth
comment by cabalamat · 2010-01-01T19:32:33.890Z · LW(p) · GW(p)

I think it's more likely people will say "too vague a prediction".

Replies from: TurnTrout
comment by TurnTrout · 2019-12-31T00:54:53.396Z · LW(p) · GW(p)

Too vague a prediction indeed, but also collaborative filtering seems to have become a cornerstone of modern online advertising / content recommendation services.

comment by Paul Crowley (ciphergoth) · 2020-05-05T05:48:52.181Z · LW(p) · GW(p)

I look back and say "I wish he had been right!"

comment by PhilGoetz · 2009-12-31T22:54:10.832Z · LW(p) · GW(p)

I don't know. It makes sense; but I thought the same thing in 1999.

There is a lot of interest in using CF to sell people more mass-market things, eg Netflix; less interest in helping people find obscure things from the long tail that they might have a special interest in; still less interest in using CF for social networking.

comment by MichaelVassar · 2009-12-30T22:00:12.333Z · LW(p) · GW(p)

My first prediction is that as is usually the case, political and random events will change the way people live far more over the next year than technology will. Given the current state of the financial system, I would place about even odds on politics having more impact than technology over the next decade, but with the caveat that over such a long time scale political and technological events will surely be interwoven.

Replies from: DanArmak, timtyler, knb
comment by DanArmak · 2009-12-30T22:46:10.402Z · LW(p) · GW(p)

There's no separation to be had between politics and technology.

The biggest influence on technology is regulation which outlaws, restricts, or places huge financial barriers to entry (as with medical research); another non-trivial influence is politically controlled financing of R&D.

And arguably, the biggest influence on politics that isn't itself political is technology (case in point: modern communications, computers, and the Internet spreading censored information, creating more popular awareness, and coordinating protests).

So I think political and technological events are inseparable over almost any timescale.

Replies from: MrHen
comment by MrHen · 2009-12-31T15:36:58.491Z · LW(p) · GW(p)

I agree that there is little to no separation, but I think a distinction can be made. Namely, there are two different words that mean different things. When predicting what is going to affect people you can probably find a way to split the techno-political mash usefully. This may be as simple as using one word over the other.

comment by timtyler · 2009-12-31T09:48:46.571Z · LW(p) · GW(p)

It seems pretty vague - do you have any ideas about how this should be measured?

comment by knb · 2009-12-31T02:04:00.865Z · LW(p) · GW(p)

That scarcely seems to be a testable prediction. Random, political, and technological events are tightly interwoven, first of all. Unless you plan to perform an experiment or do some kind of remarkably complex and research dense correlational analysis, how do you expect to determine whether you were right or wrong?

For instance, if a government sponsored project produces a type of cheap, practical fusion, is that tech change or a political change? Are terrorist attacks random or political?

In any case, I would guess that if you did a survey, people would more often say that technological change was more important.

comment by NancyLebovitz · 2010-01-04T12:54:11.086Z · LW(p) · GW(p)

Secession: If you mean a state trying to leave the US in the next decade, 5%. If you mean a state actually being allowed to leave, I put it at 0%.

Insurrection in the next decade: I'm defining an insurrection as at least 1000 people in the same or closely allied organizations with military weapons taking violent action against the US government: 30%. They'll lose. It's certainly possible that my opinion on this is based on reading too much left wing material which is very nervous about the right. On the other hand, 1000 isn't a lot of people.

All predictions are 10 years out unless otherwise noted.

The rest of the world: Another EU-style organization gets started: 30%. The advantages of having a large population are getting obvious, and it's astonishing to me to see countries begging for a chance to give up some national sovereignty.

Fabbing: Automated custom shoes: 50%. Possibly wishful thinking-- I have feet which aren't quite in the easy-to-fit range.

On one hand, there's a massive market. On the other, it's probable that I wildly underestimate how hard it is to put a shoe together. And shoes are a way of signaling status, so a custom machine-made shoe for the general market can't look much different from standard shoes.

Custom machine shoes will start out expensive, and be for the athletic market.

Another product: sous vide cookers (which poach food in vacuum-sealed bags at precisely controlled temperatures; this has many good effects and should be especially appealing to geeks) will be down to $200 within 5 years (80%) and be half as common as microwaves within 10 years (50%).

Obama will be re-elected unless he is assassinated (5%), there is a major terrorist attack on US soil (I'm not betting on that one, too random, and how he responds will have an unpredictable effect, too), or the economy doesn't improve (I think it will, but don't have a percentage).

Replies from: mwengler, John_Maxwell_IV, Kevin, gwern
comment by mwengler · 2011-01-03T21:50:14.121Z · LW(p) · GW(p)

Just as a check on 0% for a state being allowed to secede, consider this.

What probability would you put on there being sufficient devastation on the eastern seaboard of the US in the next decade from (for example) bio or nuclear attacks or terrorism? If that happened, what would be the probability that the US would be disbanded as a going concern? I realize you would likely assign very small numbers to these possibilities, but possibly >0%. If you assign >0% to this, then you assign >0% to a state being allowed to secede. (This recapitulates an objection voiced to me by Anna Salamon when I made a claim of extremely small probability for some risk or another.)

Replies from: Costanza, NancyLebovitz
comment by Costanza · 2011-01-03T23:54:48.712Z · LW(p) · GW(p)

There's a lot of ruin in a nation. The main axis nations of World War II -- Germany, Italy, and Japan -- provide some examples of nations that were really, really traumatized and damaged. Out of the three, only Germany split apart, and that only because of competing foreign occupiers. Even then it reunited as soon as it got the chance. I don't think there's enough hostility or just plain difference between most of the states west of the Mississippi to cause them to separate, especially under threat of external attack. If anything, I'd expect them to band together as tightly as possible.

comment by NancyLebovitz · 2011-01-03T23:34:05.521Z · LW(p) · GW(p)

I don't think the US would go away even if the eastern seaboard was nothing but glassy craters and deadly microbes.

That being said, it's conceivable that some technological or ideological change could weaken the central government to the point that states would be let go, though it's hard to imagine something that drastic shaping up in as little as 9 years. I'm also not sure what change could happen which would break the federal government while leaving state governments intact.

Ok, though-- in a decade, something very odd could happen. I don't think a lot of people were predicting the dissolution of the USSR before it happened.

Meanwhile, sous vide cookers don't seem to be a lot cheaper or more popular, but I didn't put as extreme a probability on that one.

comment by John_Maxwell (John_Maxwell_IV) · 2010-01-05T06:00:24.047Z · LW(p) · GW(p)

Secession: If you mean a state trying to leave the US in the next decade, 5%. If you mean a state actually being allowed to leave, I put it at 0%.

Surely you mean "my estimate rounds to 0%"?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-01-06T01:21:57.386Z · LW(p) · GW(p)

I meant 0%, but you probably have a point that I should present the chance as negligible rather than non-existent. Is there a limit, though? Does it make sense to say that there's a non-zero chance that a state will propose secession and be allowed to leave by tomorrow morning?

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-06T01:29:37.525Z · LW(p) · GW(p)

Does it make sense to say that there's a non-zero chance that a state will propose secession and be allowed to leave by tomorrow morning?

Yep. It even makes sense to say that there's a non-zero chance that a state seceded last month, and that we haven't heard about it yet. The word 'epsilon' is useful in such cases; it means 'nearly zero' or 'too close to zero to calculate'.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-01-09T05:16:10.586Z · LW(p) · GW(p)

The word 'epsilon' is useful in such cases; it means 'nearly zero' or 'too close to zero to calculate'.

"Negligible" is a much better word, in my opinion, since epsilon is (conventionally) an arbitrarily small number, not a sufficiently small number. You could use "infinitesimal", but nothing in reality is actually infinitesimally small (including probabilities), so again you'd be inaccurate. I always get frustrated when people misuse precise mathematical words that have lots of syllables in them. The syllables are there to discourage colloquial use! I don't mind if you try to show off your knowledge, but for heaven's sake don't screw up and use that precise brainy term wrong!

Replies from: Tyrrell_McAllister, AdeleneDawner, byrnema
comment by Tyrrell_McAllister · 2010-01-09T05:50:51.460Z · LW(p) · GW(p)

You're straddling a strange line here. You're demanding a certain amount of strictness that is itself short of perfect strictness.

There's no such thing as an "arbitrarily small number". There are numbers chosen when any positive number might have been chosen. In particular, a given epsilon need not be "negligible". Really, to conform to the strict mathematical usage, one shouldn't say "epsilon" without first saying "For every". Once you're not demanding that, you're not using the "precise mathematical words" in the precise mathematical way.
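For instance, the conventional setting in which an epsilon appears is a universally quantified statement, as in the textbook definition of continuity of f at a point a (quoted here only as an illustration of the usage, not as anything specific to this thread):

```latex
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :\;
|x - a| < \delta \;\Rightarrow\; |f(x) - f(a)| < \varepsilon
```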

I'm not saying that you're on some slippery slope where anything goes. But I wouldn't say that AdeleneDawner is either.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-01-12T06:05:53.419Z · LW(p) · GW(p)

You're demanding a certain amount of strictness that is itself short of perfect strictness.

Actually, I'm fine with people speaking vaguely, I just don't want to see terminology misused.

There's no such thing as an "arbitrarily small number".

"Through adding zeroes between the decimal point and the 7 in the string '.7', the number we are representing can be made arbitrarily small." Is this a misuse of the word "arbitrarily"?

In particular, a given epsilon need not be "negligible". Really, to conform to the strict mathematical usage, one shouldn't say "epsilon" without first saying "For every".

The important thing about an epsilon in a mathematical proof is, conventionally, that it can be made arbitrarily small. This is a human interpretation I am adding on to the proof itself. If the important thing about a variable in a proof were that the variable could become arbitrarily large, my guess is that epsilon would not be the variable used.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-01-12T15:22:36.760Z · LW(p) · GW(p)

"Through adding zeroes between the decimal point and the 7 in the string '.7', the number we are representing can be made arbitrarily small." Is this a misuse of the word "arbitrarily"?

Your usage is fine, so long as it's clear that "arbitrarily small" is a feature of the set from which you are choosing numbers, or of the process by which you are constructing numbers, and not of any particular number in that set. This is clear with the context that you give above. It wasn't as clear to me when you wrote that "epsilon is (conventionally) an arbitrarily small number".

comment by AdeleneDawner · 2010-01-09T05:38:22.566Z · LW(p) · GW(p)

'Kay.

I'm not the only one you should be ranting at, though - I picked it up here, not in a math class, and I suggested it because it's in common use.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-01-12T06:07:45.610Z · LW(p) · GW(p)

Yep, it is probably unrealistic to expect random folks to avoid picking up multisyllable terms in the way they pick up regular words.

comment by byrnema · 2010-01-09T05:44:06.165Z · LW(p) · GW(p)

Don't forget "modulo".

Suppose that Nancy meant 0% except for a few special cases that she didn't think should be relevant. Then she could say, '0% modulo some special cases'.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-09T09:48:56.497Z · LW(p) · GW(p)

I often use epsilon in the same informal way AdeleneDawner does, though I'm perfectly aware of the formal use. Still, I think the informal use of "modulo" is more defensible - it maps more closely to the mathematical meaning of "ignoring this particular class of ways of being different"

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-01-12T06:09:51.133Z · LW(p) · GW(p)

Could you explain this in greater detail? This way of using "modulo" bothers me significantly, and I think it's because I either don't know about one of the ways "modulo" is used in math, or I have an insufficiently deep understanding of the one way I do know that it's used.

Replies from: RobinZ
comment by RobinZ · 2010-01-12T13:14:59.576Z · LW(p) · GW(p)

In modular arithmetic, adding or subtracting the base does not change the value. Thus, 12 modulo 9 is the same as 3 modulo 9. So, for example, "my iPhone is working great modulo the Wifi connection" implies that if you can subtract the base ("the Wifi connection") you can transform a description of the current state of my iPhone into "working great".

(For your amusement: modulo in the Jargon File. Epsilon is there too.)

Edit: Actually, in this case, you would have to add the base, because my Wifi isn't working, but the statement remains the same.
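A one-line check of that arithmetic (a trivial illustration in Python, not anything from the Jargon File):

```python
# Adding or subtracting the base (9 here) leaves the residue unchanged.
assert 12 % 9 == 3 % 9 == (12 + 9) % 9 == 3
```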

comment by Kevin · 2011-01-03T23:22:32.141Z · LW(p) · GW(p)

You can get a hacker sous vide setup for under $200 today. http://news.ycombinator.net/item?id=2058982

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-01-03T23:37:00.054Z · LW(p) · GW(p)

I think you could when I made the prediction-- what I had in mind was a sous vide cooker that you didn't need to put together.

comment by gwern · 2010-08-26T10:16:04.104Z · LW(p) · GW(p)
  1. http://predictionbook.com/predictions/1713 & http://predictionbook.com/predictions/1714
  2. http://predictionbook.com/predictions/1715
  3. An EU-style organization - you'll have to be more specific than that. Every region has a bunch of multinational orgs like the UN. Africa has the African Union, Asia has ASEAN, SAARC, BIMSTEC, etc. Maybe you would prefer a prediction like 'at least 10 nations in Asia/Africa/South America will create a new common currency and switch to it'?
  4. http://predictionbook.com/predictions/1716 I agree that this one is wishful thinking on your part. :)
  5. http://predictionbook.com/predictions/1717 & http://predictionbook.com/predictions/1718 I agree that it's perfectly possible (surely right now) to sell a sous-vide cooker for $200; I question that there is demand enough, and really have no idea about the business environment. Cynicism tells me that there is no enormous revolution in American cuisine in the offing to the point where effectively half the middle-class has a sous-vide cooker, though. I mean come on.
  6. http://predictionbook.com/predictions/452 for his re-election
Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-26T13:37:09.037Z · LW(p) · GW(p)

Thanks, mostly.

I think it would have been more fair to make my predictions 1. "A state will not try to secede" and "A state will not succeed at seceding".

Other than that, it's interesting to see how uncertain I am that some of my predictions are the result of my own thinking rather than emotional effects from people I've been reading.

(3) What I had in mind for an EU-style organization was dropping restrictions on trade and travel. At this point, I'm not as optimistic, but that feels more like mood than new information. I don't know whether dropping restrictions on trade requires a common currency.

(4) Computer-fabbed custom-fitted shoes are a lot easier than AI. If you don't think that's at all likely within 10 years, does this affect any predictions you might have for AI? Your answer is about there not being a market for them-- I'd say that the market isn't perceived. Either way, I don't get the impression that that tech is ready to do it yet.

It might make more sense for a computer to measure the feet and make the pieces, but have human beings put the shoes together. :-/

I'm also assuming shoes would be mailed rather than being shoes on demand-- shoes on demand would be another jump in technology.

Thinking about it a little more, the footprint in stores could be pretty small-- just the measuring device. I'm not sure how much support from store staff it would be apt to need at the beginning.

This sort of development is also dependent on how much capital is available, and I'm not feeling optimistic about that.

(5) The conveyor belt for new aspects of food (perhaps unsurprisingly) seems to be more efficient for prepared food and ingredients than for cooking methods. I still haven't had sous vide food myself, but everything I've heard about it makes it sound wonderful. I think there will be a sudden shift with sous vide food becoming available in mid-range restaurants followed by a lot of people wanting to cook it.

ETA: The website didn't just format the numbers into pretty paragraphing, it "corrected" the numbers.

Replies from: gwern
comment by gwern · 2010-08-27T04:27:45.128Z · LW(p) · GW(p)

For 3, a monetary union isn't necessary; look at the US & Mexico & Canada, thanks to NAFTA. Certainly helps, though. I don't really see any areas which might do this sort of thing. Open borders and no trade barriers is a very Western 1st World sort of thing to do, and the obvious candidates like Japan don't really have an incentive to do so. (Japan has no land borders, so having passport checks doesn't really increase the cost of flying or boating to it.)

For 4: I think custom-fitting is already possible, and has been since the early laser scanners came out in the... '80s? But like the sous-vide, I'm not confident in their uptake. (It's kind of like jetpacks and flying cars and pneumatic postal systems. We have them; we just don't use them.)

ETA: The website didn't just format the numbers into pretty paragraphing, it "corrected" the numbers.

This is part of standard markdown; you can number each item '1.' if you want! If you want the literal number to show, you can escape it with a backslash, or you can do like I did and insert a paragraph after the bullet (newline, and then indent the paragraph by 4-5 spaces).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-27T12:36:02.999Z · LW(p) · GW(p)

Damn, on 3 I didn't say what I meant. The genuinely big deal is freedom to relocate and work.

Do you have a source for computerized custom-fitting of shoes? The big deal isn't just the fitting, though, it's reasonably-priced manufacture.

Afaik, jet-packs can be made, but carrying enough fuel for significant travel isn't feasible.

As for flying cars, it finally occurred to people that there were weather and pilot safety issues.

I don't see those sorts of considerations applying to sous vide or computerized custom shoes.

The futuristic prediction that seems not to be happening because people just don't want it is video that shows your face while you're talking on the phone.

Replies from: jimrandomh, Richard_Kennaway, gwern
comment by jimrandomh · 2010-08-30T13:25:41.625Z · LW(p) · GW(p)

Someone I know has a foot problem. Her orthopedist recommended having a scan done to produce inserts to adjust the shape of her regular shoes, and said if that didn't work, then entirely custom shoes could be made. So computerized custom-fit shoes do exist, but they're considered a medical item which makes them expensive.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-30T15:03:38.603Z · LW(p) · GW(p)

That sounds to me as though the inserts are customized, but the custom shoes would be made by humans.

comment by Richard_Kennaway · 2010-08-27T16:20:42.065Z · LW(p) · GW(p)

The futuristic prediction which seems to be not happening because people just don't want it is video which shows your face while you're talking on the phone.

That one's already happened. My new iPhone does video calls, and so does Skype on any computer with a webcam. That wasn't driven by demand, though; it was more that the technology all became ubiquitous for other purposes and it was easy to stitch it together to provide videophone functionality, even if it isn't actually used very much.

comment by gwern · 2010-08-27T13:45:54.463Z · LW(p) · GW(p)

Do you have a source for computerized custom-fitting of shoes? The big deal isn't just the fitting, though, it's reasonably-priced manufacture.

IIRC, I read it a long time ago in a mouldering paperback of Alvin Toffler's The Third Wave. (Or was it Future Shock?) But even without having read about clothes in particular, I have read about 3D models of statues etc. being generated through rotating the object while shining a laser on it; thus obviously one can generate a human model (I think CGI already does this), and fit clothes on that model. I would be deeply shocked if no one has ever used laser modeling to fit garments of some kind.

I don't see those sorts of considerations applying to sous vide or computerized custom shoes.

Considerations like expense and minimal benefit don't apply? Mm, well, as Marx said, nous verrons ("we shall see"). Figuring out whose perception of reality is clearer is one of the points of recording predictions.

Replies from: whpearson, NancyLebovitz
comment by whpearson · 2010-08-27T14:21:06.974Z · LW(p) · GW(p)

I would be deeply shocked if no one has ever used laser modeling to fit garments of some kind.

You don't have to be shocked. Here is one.

I think what user-specific clothing and shoes currently lack is sufficiently advanced robotics. If you do the obvious thing, cutting out bits of material and attaching them together, you have quite a few problems: you have to manipulate non-standard-sized bits of flexible material. A conventional production line deals with many identically sized and shaped bits of material, so you only need to change molds/tools depending on the size of the shoe.

The knitting machine above removes that consideration as it produces the finished garment in one piece.

I found this pdf on customized shoe production from 2001 (requires login) while trying to find some videos of shoe manufacturing to confirm my ideas. I don't have time to look into it, but it seems relevant to the discussion.

comment by NancyLebovitz · 2010-08-27T13:56:27.805Z · LW(p) · GW(p)

The hard part of computerized custom shoes might be designing the shoes rather than measuring the foot. Also note that the shoe has to fit while you're walking, though that seems like just adding difficulty rather than a whole new problem.

I should have been more precise about the difference I see between flying cars and sous vide cooking. Flying cars include infrastructure and group effects in a way that sous vide cookers do not.

comment by ejklake · 2010-01-03T14:30:21.696Z · LW(p) · GW(p)

No predictions about the state of the environment? Is every point of contention too close to call, then?

comment by cabalamat · 2010-01-01T18:51:24.933Z · LW(p) · GW(p)

China is the 2nd biggest economy in 2020 (99%). Note I'm counting the EU as lots of countries, not as one big economy. Counting the EU together, China will be the 3rd biggest.

Pirate Parties will have been in government for a time in at least one country by 2020 (90%).

Pirate Parties will win >=10 seats in the European parliament in 2014 (75%), and <=30 seats (75%).

The Conservatives will win a majority at the next UK general election (60%), there will be no overall majority (37%), or any other outcome (3%).

Replies from: ciphergoth, gwern
comment by Paul Crowley (ciphergoth) · 2010-01-01T20:54:17.550Z · LW(p) · GW(p)

The Conservatives will win a majority at the next UK general election (60%), there will be no overall majority (37%), or any other outcome (3%).

Do you have bets on Intrade or Betfair for those guesses? It's probably better for you to bet directly than for me to do arbitrage on you :-) They have around 68% Conservative victory, 26% no overall majority, and around 6% Labour victory.

Betfair

comment by gwern · 2010-08-21T09:56:09.192Z · LW(p) · GW(p)

China is the 2nd biggest economy in 2020 (99%). Note I'm counting the EU as lots of countries, not as one big economy. Counting the EU together, China will be the 3rd biggest.

Now, that's unfair. You've already won that one, and any look at the numbers would've told you this was like a 99.999% prediction or something.

Replies from: FAWS
comment by FAWS · 2010-08-21T10:11:44.510Z · LW(p) · GW(p)

No, there is a reasonable (IMO >1%) chance China could overtake the USA or EU in the next ten years.

Replies from: gwern
comment by gwern · 2010-08-21T23:07:15.929Z · LW(p) · GW(p)

I'm a little confused about what you're predicting. My understanding was that China is already the 2nd biggest economy, unless the EU is counted as a single economy. So is your 99% prediction actually 'China will not become the world's largest economy and will remain #2/#3'?

comment by scientism · 2010-01-01T22:49:22.850Z · LW(p) · GW(p)

Next 10 years:

  1. Nativism discredited (80%)

  2. Traditional economics discredited (80%)

  3. Cognitivism/computationalism discredited (70%)

  4. Generative linguistics discredited (60%)

To elaborate somewhat: By #1 I mean that in the fields of biology, psychology and neuroscience the idea that behaviours or ideas or patterns of thought can be "innate" will be marginalised and not accepted by mainstream researchers.

By #2 I mean that, not only will behavioural economics provide accounts of deviations from traditional economic models, but mainstream economists will accept that these models need to be discarded completely and replaced from the ground-up with psychologically-plausible models.

By #3 I mean the idea that the brain can be thought of as a computer and the "mind" as its algorithms will be marginalised. I give this lower odds than nativism being discredited only because the cognitivist tradition has managed to sustain itself through belligerence rather than evidence and is therefore likely to be more persistent and pernicious. Nativism, on the other hand, has persisted because of the difficulty of experimentally demonstrating that certain behaviours are learned rather than innate (as well as belligerence).

By #4 I mean that traditional linguistics, and especially generative grammar, will be marginalised. This one has long puzzled me since the generative grammarians based their ideas on intuition and explicitly deny a role for data or experiment (or the need to reconcile their beliefs with biology). The main problem has been the absence of a viable alternative research program. This is beginning to change.

Replies from: orthonormal, whpearson, Zack_M_Davis, gwern, DanielLC
comment by orthonormal · 2010-01-01T23:00:32.848Z · LW(p) · GW(p)

If we could agree on a suitable judging mechanism, I would bet up to $10,000 against you on #1 and on #3 at those odds (or even at substantially different odds). I also disagree on the latter claim in #2, but that's not as much of a slam dunk for me as the others.

comment by whpearson · 2010-01-01T23:00:03.510Z · LW(p) · GW(p)

Can you unpack what you mean by innate? I think babies would have a hard time surviving if sucking things wasn't a behaviour that was with them from their genes.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-02T00:18:31.322Z · LW(p) · GW(p)

And more generally, the distinction innate/learned is overly simplistic in a lot of contexts; rather, there are adaptations that determine the way organism develops depending on its environment. The standard reference I know of is

J. Tooby & L. Cosmides (1992). `The psychological foundations of culture'. In J. Barkow, L. Cosmides, & J. Tooby (eds.), The adapted mind: Evolutionary psychology and the generation of culture. Oxford University Press, New York.

comment by Zack_M_Davis · 2010-01-01T23:42:00.706Z · LW(p) · GW(p)

A few thoughts:

  • It would be valuable to do an outside view sanity check: historically, how frequently have research programs of similar prestige been discredited?

  • There are all the standard problems with authority---lots of folks insist that they're in the mainstream and that opposing views have been discredited. Clearly nativism &c. have been discredited in your mind; when do they get canonically discredited? Sometimes I almost think that everyone would be better off if everyone just directly talked about how the world really is rather than swiping at the integrity of each other's research programs, but I'm probably just being naive.

  • Re 3, my domain knowledge is somewhat weak, so everyone ignore me if my very words are confused, but I'm not sure what would count as a refutation of the mind being an algorithm. Surely (surely?) most would agree that the brain is not literally a computer as we ordinarily think of computers, but I understand algorithm in the broadest sense to refer to some systematic mechanism for accomplishing a task. Thought isn't ontologically fundamental; the brain systematically accomplishes something; why shouldn't we speak of abstracting away an algorithm from that? Maybe I've just made computationalism an empty tautology, but I don't ... think so.

  • I don't think the innate/learned dichotomy is fundamental; it's both, everyone knows that it's both, everyone knows that everyone knows that it's both. Like that old analogy, a rectangle's area is a product of length and width. What specific questions of fact are people confused about?

Replies from: scientism
comment by scientism · 2010-01-02T00:57:27.647Z · LW(p) · GW(p)

I think these research programs represent something without a clear historical precedent. Traditional economics and generative linguistics, for example, could be compared to pre-scientific disciplines that were overthrown by scientific disciplines. But both exhibit a high degree of formal and institutional sophistication. I don't think pre-Copernican astronomy had the same level of sophistication. Economics also has data (although so did geocentric astronomy) whereas the generative tradition in linguistics considers data misleading and prefers intuitive judgement. What neither has is a systematic experimental research program or a desire to integrate with the natural sciences.

Cognitivism is essentially Cartesian philosophy with a computer analogy and experiments. In practice it just becomes experimental psychology with some extra jargon. Nativism, too, comes from Cartesian philosophy (Chomsky was quite explicit about this). While cognitivism has experiments it has an interpretation that isn't founded in experiment (the type of computer the brain is supposed to be and the algorithms it could be said to run is not addressed) and an opposition to integration with the natural sciences (the so-called "autonomy of psychology" thesis).

These research programs are similar to pre-scientific research programs but have managed to persist in a world where you have to attempt to "look scientific" in order to secure research grants and they reflect this fact.

You point to many problems, and I wouldn't take any bets because of them. It would be too difficult to judge who had won. On the nature/nurture debate: Empiricism evolved into constructivism/interactionism (i.e., the developing organism interacting with the environment with genes driving development), which is the dominant view in biology, and it's not obvious what, precisely, modern Nativists believe. But it is obvious that they still exist since naive nativist talk persists almost everywhere else. It's similarly difficult to figure out what computationalists mean by their analogies and the degree to which they intend them to be analogies vs. literal propositions. This is probably why the natural sciences tend not to base research programs on analogies. What is clear is that they have a particular style of interpreting their results in terms of representations and sequential processing that is clearly at odds with biology and display no interest in addressing the issue.

Replies from: orthonormal
comment by orthonormal · 2010-01-02T01:20:29.796Z · LW(p) · GW(p)

Nativism, too, comes from Cartesian philosophy (Chomsky was quite explicit about this).

First, this is the genetic fallacy. Secondly, I don't take Chomsky's authority seriously.

The experimental evidence that, say, Steven Pinker presents in How the Mind Works for innate mental traits and for the computational perspective is sound, and has nothing to do with Cartesian dualism.

Replies from: scientism
comment by scientism · 2010-01-02T01:37:32.655Z · LW(p) · GW(p)

The point is that the views have their origins in philosophy rather than experiment. We're not dealing with a research program developed from a set of compelling experimental results but a research program that has inherited a set of assumptions from a non-empirical source. This is more obviously the case with computationalism, where advocates have shown almost no interest in establishing the foundational assumptions of their discipline experimentally, and some claim that to do so would be irrelevant. But it's also true for nativism where almost no thought is given to how nativist mechanisms would be realised biologically.

comment by gwern · 2010-08-23T14:39:30.754Z · LW(p) · GW(p)

I'm not entering any of these into PredictionBook because all 4 strike me as hopelessly argumentative and subjective. (Take #1 - what, you mean stigmatised even more than it already is as the province of racists/sexists/-ists?)

comment by DanielLC · 2010-01-02T01:08:46.180Z · LW(p) · GW(p)

Regarding 3, there's no way to find evidence against it (or for it, for that matter). You can't look at a given system and measure its sentience. The closest anyone has ever come to that is trying to test intelligence, but that assumes cognitivism/computationalism, among other things.

I agree with orthonormal, except that I don't have $10,000 to bet.

comment by AdeleneDawner · 2010-01-04T02:08:00.633Z · LW(p) · GW(p)

Whatever position I'm taking (away from this thread) is irrelevant for the purposes of this thread.

Incorrect. You seem to have a concept of what rationality is that's not very close to the reality; the reality doesn't involve ignoring data, but rather giving it appropriate weight relative to the situation at hand. The high probability that you're not actually interested in learning about or doing the things that we're doing here is definitely relevant to any thread you make an appearance in.

You may say that my prediction is trite or obvious, but that's about the limit for reasonable critical response.

I propose that commenting that you're using the words 'evidence' and 'legitimate' in nonrational ways is also a reasonable response.

There are no such things as independent trolls.

I doubt this. I suggest that an accurate definition of 'troll' would hinge on whether the suspected troll is trying to evoke emotions in eir target, and if so, which ones. This does make it hard to determine whether someone is actually a troll, or just socially clumsy, but we seem to have evidence of the former in your case: Your repeated comments about other posters' emotional and mental states suggest that you're thinking about those mental states much more than is normal, and an attempt to alter those states would be a likely reason for such interest.

My negative "karma", accrued on this thread, appears to me as a reflection of the emotional imbalance (and, therefore, irrationality) that appears to permeate this site.

If you're actually interested in learning why we believe that this isn't true, there are several relevant discussions that we could provide you with links to. I somehow doubt that you're actually interested in that, though.

comment by bloody_mike · 2010-01-02T14:02:29.535Z · LW(p) · GW(p)

next year:

  • Germany's (as of 2009) foreign minister Westerwelle will trip over a mistake in international diplomacy and the German CDU / FDP government will fall apart.
  • More pedestrians than ever get killed in motorcycle crashes in London, and motorcycles get banned.
  • The congestion zone will become bigger.

next ten years:

  • The Chinese economy collapses and plunges the world economy into its biggest crisis ever.
  • China colonises Africa even further, and builds more factories. At the end of the decade, production has moved from China to Africa, leaving billions of Chinese unemployed; Africa gets richer. This marks a turning point.
Replies from: taw
comment by taw · 2010-01-02T23:36:25.938Z · LW(p) · GW(p)

I would take you up on most of your predictions at even odds. At 50% odds, I'd gladly bet that:

  • No collapse of German government in 2010
  • No ban on motorcycles in London in 2010
  • No Congestion zone enlargement in 2010
  • Chinese economy in 2020 is more prosperous than in 2009
  • Chinese industrial production in 2020 larger than in 2009
  • Non-agricultural employment in China in 2020 larger than in 2009
Replies from: taw
comment by taw · 2011-01-02T12:24:45.429Z · LW(p) · GW(p)

I was right on all of these:

  • No collapse of German government in 2010
  • No ban on motorcycles in London in 2010
  • No Congestion zone enlargement in 2010

I'm ridiculously confident about predictions for 2020 as well.

comment by Thomas · 2009-12-31T19:25:20.838Z · LW(p) · GW(p)

I give 75% probability that an RPSOP will be launched before 2020. (And that I will be downvoted for this prediction!)

Replies from: EStokes, wedrifid, RolfAndreassen, Nick_Tarleton, Zack_M_Davis, Eliezer_Yudkowsky
comment by EStokes · 2010-01-01T00:50:43.822Z · LW(p) · GW(p)

What technology would an RPSOP require, and what exactly would an RPSOP do?

If you knew enough to predict that your comment would be downvoted, as it was, then you could've explained your reasons better in the first place.

Replies from: Thomas
comment by Thomas · 2010-01-01T10:30:04.378Z · LW(p) · GW(p)

At least we already have prenatal genetic screening of humans. This can qualify as a (slow) Self-Optimizing Process of the human kind: better and better children are born.

A software/hardware analog could also be rapid.

Replies from: EStokes
comment by EStokes · 2010-01-01T16:57:25.649Z · LW(p) · GW(p)

Downvoted for lack of a clear definition of an RPSOP.

Edit: A definition that I can understand. :P

Replies from: Nick_Tarleton, Nick_Tarleton
comment by Nick_Tarleton · 2010-01-01T17:28:10.449Z · LW(p) · GW(p)

He answered elsewhere.

(Downvote seconded; please use standard terms, define terms, and make responsive replies.)

comment by Nick_Tarleton · 2010-01-01T17:25:38.560Z · LW(p) · GW(p)

Presumably, it stands for Really Powerful Self-Optimizing Process, aka Really Powerful Optimization Process, aka superintelligence.

(Downvote seconded.)

comment by wedrifid · 2009-12-31T22:38:10.209Z · LW(p) · GW(p)

(And that I will be down voted for this prediction!).

There are very few predictions that you could append this to without earning a downvote.

comment by RolfAndreassen · 2009-12-31T20:47:36.137Z · LW(p) · GW(p)

What is an RPSOP? First page of Google doesn't seem to know, and searching on LessWrong finds only this comment.

Replies from: Thomas
comment by Nick_Tarleton · 2010-01-01T17:29:28.793Z · LW(p) · GW(p)

Care to bet?

comment by Zack_M_Davis · 2009-12-31T20:34:25.256Z · LW(p) · GW(p)

(Parent was at -1.) Upvoted for correctly predicting being downvoted.

Replies from: DanArmak
comment by DanArmak · 2009-12-31T21:02:50.487Z · LW(p) · GW(p)

I predict that I will be upvoted for correctly predicting this.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-31T21:09:46.426Z · LW(p) · GW(p)

User's karma is below 0 but doesn't show that way, for those of you wondering how someone could possibly be an LW reader since February and still not know better than to pull magic numbers out of his ass.

Replies from: PhilGoetz, Alicorn, orthonormal, Thomas
comment by PhilGoetz · 2009-12-31T21:19:00.030Z · LW(p) · GW(p)

You have no way of knowing that he's pulling them out of his ass. And even if he is, I think it's appropriate for this thread. There are dozens of equally unjustified predictions below that you didn't jump on. WTF?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-31T21:28:23.494Z · LW(p) · GW(p)

I looked through this thread and didn't see anything equally unjustified. The only prediction that came close was the one about a US state seceding, and that was by a user who was willing to make bets on their other predictions. This is just good-old-fashioned "Hey Rocky, watch me pull an amazing prediction out of my hat!"

Replies from: MichaelVassar, Larks, komponisto
comment by MichaelVassar · 2009-12-31T21:58:12.008Z · LW(p) · GW(p)

I agree, but I do think that given your status in the community maybe it behooves you to be nicer if at all possible.

Replies from: komponisto, Eliezer_Yudkowsky
comment by komponisto · 2009-12-31T22:15:32.744Z · LW(p) · GW(p)

Strongly agree.

(I think what happened here was that Eliezer was particularly annoyed at this prediction, since it's the sort of thing that gives his field a bad name.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-31T22:32:40.782Z · LW(p) · GW(p)

Down that road lies madness. I'm not Gandalf.

comment by Larks · 2009-12-31T23:07:15.493Z · LW(p) · GW(p)

While the estimate does seem crazy, no-one's actually offered Thomas a bet, and mattnewport didn't offer to bet; Thomas has, thus far, done what mattnewport did, except he didn't offer realistic bets.

If you're going to do a bit of policing of LW, you ought to either address your comment to him and offer him advice to improve, or else block him: if his comments are unerringly useless, you shouldn't be warning the flock about the wolf but removing him.

comment by komponisto · 2009-12-31T21:51:09.412Z · LW(p) · GW(p)

Upvoted for the reference, which I got.

comment by Alicorn · 2009-12-31T21:20:24.775Z · LW(p) · GW(p)

A user's karma is not a causal explanation for any ratio of Less Wrong readership to number-ass-pulling exhibited by that user.

comment by orthonormal · 2010-01-02T01:39:13.684Z · LW(p) · GW(p)

I shouldn't think there's anything wrong (within this thread) with just stating your probability estimates, so long as you're doing so in good faith. YMMV on whether the conditional clause holds in this case, obviously...

comment by Thomas · 2010-01-01T00:05:52.525Z · LW(p) · GW(p)

Every prediction of the future is a wild guess. It was just a wild guess back in 1931 that an A-bomb would explode in a decade or two.

Need 100 more examples?

It's just my estimation; if you have a better one, please feel free to express it!

comment by adefinitemaybe · 2010-01-02T03:28:05.839Z · LW(p) · GW(p)

I predict that no legitimate evidence for my individual mortality will be presented to me during 2010... or ever. (100%)

Replies from: Jack, orthonormal, Unknowns, orthonormal, Alicorn, Zack_M_Davis, Technologos, Richard_Kennaway, Technologos, byrnema, adefinitemaybe
comment by Jack · 2010-01-02T11:25:20.654Z · LW(p) · GW(p)

I'll give you some great odds then. I will give you $5. In exchange, you will write a will giving me (or my heirs, should I be dead) your entire estate. Cryonic preservation counts as death. Deal? This is like a free $5 for you. If your current net worth is greater than $100,000, you are over 50 years of age, or you have high earning potential, we can talk about bumping that up to $25! It's like I'm giving it away!

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T11:35:22.529Z · LW(p) · GW(p)

What's up Jack, can't you comprehend what you read? You have to present the evidence of my mortality to ME. It is irrelevant whether or not I ever die.

And you're the one who is trying to tell me that Unknowns' linked article means anything?

Replies from: Alicorn, Jack
comment by Alicorn · 2010-01-02T15:03:01.133Z · LW(p) · GW(p)

you have to present the evidence of my mortality to ME.

If you're hoping to bait someone into slaying you so you can satisfy your deathwish without getting stiffed on life insurance payouts to your loved ones, you're missing a few steps.

1) You haven't provided your address! Read up on trivial inconveniences.

2) Post someplace with a larger population of violent criminals, or at least gun owners. We're pretty harmless around here.

3) Choose a venue where people cannot vent their frustration with you via downvoting. That way, it may build up to an efficacious level!

If you're being a troll: save yourself some trouble and go away. Eliezer's been irritable lately and is apt to boot you.

If you're actually, sincerely trying to have a fun exchange about whether you might be immortal (quantum or otherwise):

1) The little numbers on your comments are, in fact, important and should interest you. They indicate how you're being received, on average. You may have corrupted public opinion of this account to the point where you can't salvage it, but it can still provide valuable information, and nothing's stopping you from trying again in a month or two when you've mulled things over. If this is the venue you've chosen to have your discussion on, surely you think we make suitable interlocutors - and hopefully, that will let you find informational value in numeric disapproval too.

2) When people quibble with you over what words mean, that requires your attention too! They're trying to find a way to communicate with you. They're not being mean. If they were being mean, they'd just interpret you in the stupidest way possible and then snicker at you, not try to share information about how words get used.

3) Pretty much nobody here will find themselves strangely compelled by protestations that a) your opponents have poor reading comprehension; b) it is unfair to expect you to do any background reading even when links are supplied; c) you are winning; d) we are engaged in groupthink; e) you are an unheard-of non-quantumly immortal creature (although I'd be pretty impressed by certain sorts of video evidence!). Saying it once might be too much to resist! I understand! But repeating yourself isn't going to do anything that saying it once didn't.

Replies from: Vladimir_Nesov, adefinitemaybe
comment by Vladimir_Nesov · 2010-01-02T16:23:02.409Z · LW(p) · GW(p)

This should go in some kind of "Loud Passerby's Guide to Less Wrong" wiki page so that we would be able to post a link next time... The only way to stop repetition is to distill the argument into a reusable form.

comment by adefinitemaybe · 2010-01-03T02:27:04.993Z · LW(p) · GW(p)

If you're being a troll: save yourself some trouble and go away. Eliezer's been irritable lately and is apt to boot you.

If I were a troll, would going away somehow be more beneficial to me than being booted? I mean, does this Eliezer actually physically kick suspected trolls? Or would the end result be the exact same? And if I were a troll, do you suppose saving myself some trouble would be my overriding concern?

If you're actually, sincerely trying to have a fun exchange about whether you might be immortal (quantum or otherwise):

1) The little numbers on your comments are, in fact, important and should interest you. They indicate how you're being received, on average.

Why should I care how I'm being received by anonymous faceless strangers, whose posts I may never even have read, and who may not even be taking part in the discussion? Are there Wrongie awards up for grabs? I believe the girl mentioned something about a... check? Please don't tell me that I should alter my thinking and the expression of my ideas to suit popular opinion! Is that what you do?

You may have corrupted public opinion of this account to the point where you can't salvage it...

So, public opinion is going to hold something I wrote on this thread against me forever, and use it against me on other threads, whether it agrees with me on those other threads or not? Have I stumbled into the Old Fishwives' forum by mistake?

If this is the venue you've chosen to have your discussion on, surely you think we make suitable interlocutors - and hopefully, that will let you find informational value in numeric disapproval too.

I began posting here about a minute after I arrived for the first time. I'm just beginning to learn about the mindsets of some of the participants here. I'm not overly impressed so far. I'm prepared though, to give it a chance. No point in going by first impressions. I'll need a few more listenings before I decide if I like it or not.

When people quibble with you over what words mean, that requires your attention too! They're trying to find a way to communicate with you.

I received a response that said (in its entirety) "Taboo legitimate". The word 'Taboo' was a link to an article about something or other that appeared irrelevant. Do you suppose that poster was trying to find a way to communicate? When I questioned him, he apologized for his brevity and expanded on his post, and I withdrew the word 'legitimate", as it was redundant anyway. Others chimed in (as you do here) telling me how I was the one at fault.

Pretty much nobody here will find themselves strangely compelled by protestations that a) your opponents have poor reading comprehension

Yet, Jack, for one, appears to have poor reading comprehension.

b) it is unfair to expect you to do any background reading even when links are supplied

I don't protest that it's unfair. I state that I'm not prepared to do it. If you don't like it, either stop providing such reading material in lieu of originally-phrased arguments, or don't engage me. Nobody is forcing you to respond to me.

c) you are winning

I didn't make this a competition, however, I am winning the debate. It's immaterial that people don't find themselves compelled by my stating such (in face of the many votes that state otherwise, and yet fly in the face of the rather obvious missing evidence of my mortality). Is humility big here?

d) we are engaged in groupthink

So, you're saying that the group, as a body, denies it is engaged in Groupthink? Is there any room for discussion on that?

e) you are an unheard-of non-quantumly immortal creature (although I'd be pretty impressed by certain sorts of video evidence!).

I didn't say that. I said I could be that. Part of your job is to present me with evidence that I'm not. America was unheard-of... until it was heard-of. And it was right there. The people just couldn't hear of it, at the beginning.

Right, enough fun. Let's stick to the topic at hand from now on.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-03T03:48:18.758Z · LW(p) · GW(p)

Looks like people are getting fatigued with downvoting you, so I'll be deleting all your comments from now on.

I can't say I'm happy that you got so many replies, either; but I suppose if the community were sufficiently annoyed with the repliers, they could downvote the replies.

Replies from: Nick_Tarleton, Alicorn, Bo102010, Unknowns
comment by Nick_Tarleton · 2010-01-03T03:53:22.467Z · LW(p) · GW(p)

Psychology note: I'd rather people had stopped replying too, but looking at any particular reply, I'm more likely to upvote it out of positive affect ("take that!") than remember to think about incentives, and if I did remember, I'd still have a hard time downvoting a true, (locally) on-topic comment. Probably others are doing the same. Time to be more self-aware.

Replies from: Jack
comment by Jack · 2010-01-03T04:21:22.510Z · LW(p) · GW(p)

I found the question "What is wrong with this person?" quite interesting and some of the responses were insightful in this regard. We don't get to encounter extreme irrationality very often and I think the experience of failing to communicate is a good one to have occasionally. Being reminded what bad epistemic hygiene looks like is a great reminder to keep washing up. I also think one or two of the replies include good material to put on a Less Wrong intro/about/faq page if we ever get around to doing it.

The problem is that once you start arguing with someone, giving up without resolution is like ending sex before orgasm. So it went on much longer than it should have.

comment by Alicorn · 2010-01-03T03:50:37.526Z · LW(p) · GW(p)

Do people get their downvotes back when a downvoted comment is deleted?

Replies from: rhollerith_dot_com, Eliezer_Yudkowsky
comment by RHollerith (rhollerith_dot_com) · 2010-01-03T06:07:19.684Z · LW(p) · GW(p)

Do people get their downvotes back when a downvoted comment is deleted?

As of about 9 months ago, no.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-03T03:52:12.283Z · LW(p) · GW(p)

Not that I know of. If one were going to implement a behavior like that, people would get downvotes back when a post went to -3 or under and became invisible by default. But coding resources are scarce.

comment by Bo102010 · 2010-01-03T05:29:39.028Z · LW(p) · GW(p)

While I applaud drumming out a troll, I don't mind that he wasn't ignored (at least for a while).

Meeting a crazy person/annoying person/young earth creationist/troll gives you the chance to practice Bayesian Judo. There have been times when I wish I'd dedicated more practice to countering ridiculous but verbose interlopers.

Edit: Looks like Jack said this earlier and better.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-03T13:21:58.640Z · LW(p) · GW(p)

But it's not Bayesian Judo: you can't argue with a rock, a person whose intent is to ignore what you say.

Replies from: Bo102010
comment by Bo102010 · 2010-01-03T16:17:38.258Z · LW(p) · GW(p)

Fair point. Perhaps it's arguing with the keeper of the box?

comment by Unknowns · 2010-01-03T04:33:43.680Z · LW(p) · GW(p)

I upvoted this (it was at -1) because despite the fact that I was one of the ones replying, I'm not interested in hearing any more of it either.

Replies from: byrnema
comment by byrnema · 2010-01-03T05:06:14.596Z · LW(p) · GW(p)

I will admit that I feel differently.

All of this commenter’s comments are contained in a single thread, of which adefinitemaybe is the parent. It will already be down-voted to the bottom of the Open Thread, so when these comments are no longer ‘Recent’, if someone is reading this far, it is because they find this thread interesting -- or even hilarious, like I do.

I find the thread hilarious not only because the dialogue is clever, but also because I'm enjoying the feeling that adefinitemaybe is the mad, crazy, irreverent jester in the court who provides us the opportunity to laugh at ourselves.

I mean, this is great stuff:

Really though, do you guys ever just say die, from the get go, and move on to the next, actually debatable, thing? It's like a threw ball or som.... {Woof!} As I was saying, it's like I threw a B-A-L-L or something.

It’s a problem if someone is belligerent all the time, and it’s really annoying if they post in more than one thread. (The last troll was really quite a troll because he would drop inane comments all over the place – down-voting those comments would do nothing to hide them because they were nested in otherwise good threads.)

Given that I happen to enjoy this character (without approving of the behavior) – thinking, say, of Han Solo or even Churchill -- I would like to suggest the following umbrella solution for all troublesome behavior; trolls and belligerent personalities alike:

A person can’t comment unless their karma is above -10, and this should be announced in some prominent place. That way people with belligerence and intelligence can game their comments so that they don’t go below the threshold. Keep in mind that this really would force them to be a positive influence, because the community could always down-vote all their comments to kick them out.

comment by Jack · 2010-01-02T12:51:38.321Z · LW(p) · GW(p)

Oh no. I've been quite convinced by this thread. It is clearly impossible to present you with anything you'll recognize as evidence of your mortality.

I'm serious about the bet though. Or does your belief that there is no evidence that you are mortal not change your belief that you are indeed mortal?

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-03T01:49:57.797Z · LW(p) · GW(p)

Oh no. I've been quite convinced by this thread. It is clearly impossible to present you with anything you'll recognize as evidence of your mortality.

Yes, it is clearly impossible. As I predicted. Although you could have at least tried. Does this mean I DO win? Or is this some bizarro debateland, where you still win?

I'm serious about the bet though. Or does your belief that there is no evidence that you are mortal not change the your belief that you are indeed mortal?

I'm not interested in your bet. I might die. There is an equal weight of evidence (i.e., zero) for my immortality. You want to bet $5 against my entire estate on a coin toss. No thanks. Perhaps that's why I didn't predict that I wouldn't die in 2010... or ever.

Replies from: Bo102010
comment by Bo102010 · 2010-01-03T02:17:29.572Z · LW(p) · GW(p)

Would it be accurate to summarize your views as follows:

(1) There is zero evidence for the proposition that you are mortal

(2) There is zero evidence for the proposition that you are immortal

Therefore: You will act as though you are immortal

?

comment by orthonormal · 2010-01-02T19:02:27.808Z · LW(p) · GW(p)

I at first assumed (when you had only the first two comments to your name) that you had been a lurker here for some time, had tried to make either a joke or a half-serious point, and might have been blindsided by the downvotes. In the intervening time, I have changed my opinion of you, and I think that Less Wrong is not suitable for you.

There are plenty of Internet sites composed of people whose purpose is simply to signal that they are the cleverest person there. Although I do not expect you to be aware of this fact, Less Wrong is not primarily such a place. This community is defined by, among other things, wanting to understand others' objections before dismissing them, even should that require five minutes of reading and pondering. If you look back over your exchange, several of us have replied in good faith to clarify your assertion and expand upon the areas of disagreement. You have not shown the least interest in returning the favor.

Your presence here is wasting everyone's time, including yours; you might be much happier, I think, engaging in this behavior on another forum where it is more the norm. Although you may not believe yourself to be a troll, you will almost certainly be regarded as one here.

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-03T01:10:04.705Z · LW(p) · GW(p)

I have changed my opinion of you, and I think that Less Wrong is not suitable for you.

Thank you. I don't care what you think about my suitability here, and I suspect your motives for so advising me to be based in unjustified ill-will. Please restrict your subsequent responses to me to the topic in hand.

If you look back over your exchange, several of us have replied in good faith to clarify your assertion and expand upon the areas of disagreement. You have not shown the least interest in returning the favor.

I made a prediction. If you wish to challenge that prediction, the onus is upon you to present me with evidence of my mortality. I will either accept that evidence or reject it. Readers may judge the rationality of each side, based upon our exchanges. Nothing else is relevant. Unless you require me to explain the prediction to you.

Your presence here is wasting everyone's time, including yours

I feel my time here has been far from wasted. If you feel that commenting on my prediction or responding to my posts represents a waste of your time, perhaps you should consider just doing something else. Responding to a different post, for example. If you are being forced to respond to my posts, by someone who may be inside your house watching you, please indicate by blinking one eye.

you might be much happier, I think, engaging in this behavior on another forum where it is more the norm.

What behavior? Making a prediction, then defending it? No, I think the New Year's Prediction thread on this site is as good a place as any. In fact, it's ideal.

Although you may not believe yourself to be a troll, you will almost certainly be regarded as one here.

I'm sorry, I don't speak weasel. Did you say "Although you are apparently not a troll, I'm going to do my best to convince others that you are, and, perhaps, eventually have you banned. Basically because I feel somewhat threatened by your manner of thinking."? If I'm a troll, who, until provoked (as here), sticks to arguing his side of a relevant debate in good faith, what does that make you and others, having made specific posts telling me how I'm not suitable and would be happier elsewhere?

Again, I'm not interested in what you think about me, or in receiving any advice you might have for me. If you don't want to engage me, don't. If nobody engages me, I'll leave. Obviously.

Apart from that, I'm ready and willing to respond in good faith to anyone who makes an originally-phrased argument here against my prediction. I'm not interested in being directed to read the ideas of third parties. I can find my own way to the library.

Replies from: Bo102010
comment by Bo102010 · 2010-01-03T01:39:13.405Z · LW(p) · GW(p)

I'm really curious why you didn't take Jack's $5 above.

comment by Unknowns · 2010-01-02T06:28:10.957Z · LW(p) · GW(p)

Clearly, if you observed that you were 1,000,000,000 years old, this would support the theory that you are immortal. But you observe that you are not that old, but much less. Therefore, by Bayes' theorem, it becomes more probable that you are not immortal. Since you assign odds of 100% to not being presented with such evidence, this means you should be willing to wager any amount, against nothing, that you would not receive such evidence. You may therefore now send me all your money.
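
To make the update concrete, here is a minimal sketch of the kind of calculation meant here (the prior and likelihoods are purely illustrative assumptions, not anyone's actual estimates):

```python
# A minimal Bayesian-update sketch; all numbers are illustrative assumptions.
def posterior_immortal(prior, p_obs_if_immortal, p_obs_if_mortal):
    """P(immortal | observation) by Bayes' theorem."""
    p_obs = p_obs_if_immortal * prior + p_obs_if_mortal * (1 - prior)
    return p_obs_if_immortal * prior / p_obs

# Observation: "I am only a few decades old, not a billion years old."
# Assume (for illustration only) that an immortal is rarely observed this young,
# while a mortal human always is.
print(posterior_immortal(prior=0.01,
                         p_obs_if_immortal=0.001,
                         p_obs_if_mortal=1.0))
# ~1e-05: the observation shifts probability towards mortality.
```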

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T07:26:13.616Z · LW(p) · GW(p)

If I am immortal I am ageless. I may look a certain way, but that is irrelevant. Even if I was born, in the normal human way, I'd still be ageless, and I'd still be immortal. That I took human form for a few years is irrelevant. Unless you can show that no immortals were ever born, in the normal human way, and appeared to age, and even appeared to die (immortals can't really die), your 'evidence' does not constitute evidence.

"100%" can't denote odds. And, being an immortal, I'm not a betting man.

Replies from: Unknowns, adefinitemaybe
comment by Unknowns · 2010-01-02T07:34:33.492Z · LW(p) · GW(p)

Even if it is possible that you always existed, you do not remember always existing. If you remembered always existing, this would increase the chance that you are immortal. Since you observe your lack of memory of always existing, your chances of being mortal increase.

As for this :"Unless you can show that no immortals were ever born..." etc., I do not need to show this unless I want to prove with 100% certainty that you are not immortal. However, I do not need to prove this: it is enough to give some evidence, however limited, that you are mortal, and I have done this.

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T08:31:32.154Z · LW(p) · GW(p)

No, you haven't done that.

I may have memory of always existing. However, that would be irrelevant. If I told you about it, and you accepted my evidence, and presented it back to me as evidence, it could only represent evidence of my immortality. I require evidence, from you, of my mortality.

I may have no memory of always existing. However, that would be irrelevant also, given that human beings, apparently, (and, for all anybody knows, immortals) have no knowledge of the rules governing immortality. Perhaps immortals don't have such memory. Plenty of human beings suffering amnesia have no memory of having lived in previous periods. That doesn't constitute evidence that they didn't live in those periods.

Replies from: Unknowns
comment by Unknowns · 2010-01-02T10:10:44.808Z · LW(p) · GW(p)

http://lesswrong.com/lw/ih/absence_of_evidence_is_evidence_of_absence/

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T10:19:59.024Z · LW(p) · GW(p)

Do you have any evidence of my mortality to present to me or don't you? Please, don't respond with any more links to other site pages in lieu of original, rational thought and coherent argument.

Replies from: Jack
comment by Jack · 2010-01-02T10:56:29.761Z · LW(p) · GW(p)

The article you were linked to explains exactly what you're getting wrong. I looked it up to give it to you before I saw that Unknowns already had. It is a waste of everyone's time to repeat the argument the article makes in our own words. It is extremely short. If you can't be bothered to read it then everyone is going to assume that you aren't arguing in good faith. They would be right to do so.

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T11:25:17.474Z · LW(p) · GW(p)

If we are all to just read the articles, what's the point of having a discussion forum? If Unknowns wants to use the content of that article to make his argument, then he should do so, and make the argument in his own words. It is sheer laziness (and verging on the plagiaristic) to just point to articles and say "My argument is in there somewhere. Please respond as soon as you identify it. Of course, I'll get all bent out of shape if you misinterpret what I meant to say, although I'm prepared to accept some associated praise for having ALSO thought that which the article's author has taken the time to write down and publish."

"If you can't be bothered to read it then everyone is going to assume that you aren't arguing in good faith."

I'm taking the time to construct original arguments here. I'm also taking the time to read all original responses that people offer, and respond to those. I'm receiving some responses to those arguments in the form of links to articles penned by third parties. And you have the audacity to threaten me that people will assume my motives are unwholesome if I refuse to accept that sort of lazy response as legitimate (in the non-geek sense of the word), and to inform me that they would be right to do so? Suppose, in lieu of this response to you, I just directed you to The Collected Works of Friedrich Nietzsche? Would you read them and get back to me with your rebuttal? If the article is as short as you say, shouldn't Unknowns have the common courtesy to paraphrase it here?

I made a prediction. So far it has come true. Nobody has yet presented me with any evidence of my mortality. If that pains any of you to the point of abusing the privilege of this site's forward-thinking voting system, please, take a moment to give yourself a good slap.

Replies from: nhamann
comment by nhamann · 2010-01-02T17:59:34.849Z · LW(p) · GW(p)

The point in you reading the articles is that the inferential distance between you and the rest of the members of this community is so large that communication becomes unwieldy. Like it or not, the members of Less Wrong (like the members of most communities which engage in specialized discourse) chunk specific, technical concepts into single words. When you do not understand the precise meaning of the words as they are being used, there is a disconnect between you and the members of the community.

The specific problem here is in the use of the word "evidence." By evidence, we mean (roughly) "any observation which updates the probability of a hypothesis being true." By probability, we mean Bayesian probability. I'm not going to go through the probability calculation, but other commenters are correct: given the evidence that you are not really, really old, you should revise the probability assigned to your hypothesis of immortality down significantly.

If you are not going to do the requisite reading that would enable you to participate in this discussion community, it would probably be best for both you and everyone here if you just left now. If you do feel like participating, I highly recommend going through the sequences.

comment by adefinitemaybe · 2010-01-02T07:47:12.747Z · LW(p) · GW(p)

Anyway, not to worry. We can still be sure of taxes.

comment by orthonormal · 2010-01-02T05:39:33.977Z · LW(p) · GW(p)

You use this word, "evidence". I do not think it means what you think it means.

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T06:27:17.441Z · LW(p) · GW(p)

Your link directs to one definition of the word. Whatever interpretation of "evidence" you reasonably use in context (which qualification excludes yours), you won't be able to present me with any for my individual mortality... ever. Note: the challenge is not to provide evidence for my theory, but of my mortality.

Replies from: Unknowns
comment by Unknowns · 2010-01-02T06:33:43.071Z · LW(p) · GW(p)

"I am mortal" and "I am immortal" are definitely two theories, and each of them can be supported or opposed by many types of evidence. Providing evidence against "you are immortal" is providing evidence for your mortality. I pointed to some such evidence in my other comment. If you do not accept this type of evidence, what is your definition of "legitimate evidence"?

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T07:37:07.413Z · LW(p) · GW(p)

I don't claim to be either mortal or immortal. I predict that no evidence of my mortality will be presented to me in 2010... or ever. My definition of legitimate evidence is evidence which would leave me without a reasonable doubt that I'm going to die... and stay dead.... forever. In the absence of any such evidence being presented to me, I have no recourse but to consider myself to be immortal. You may too. I'm particularly fond of coconuts and virgins.

Really though, do you guys ever just say die, from the get go, and move on to the next, actually debatable, thing? It's like I threw ball or som.... {Woof!} As I was saying, it's like I threw a B-A-L-L or something.

Replies from: Unknowns
comment by Unknowns · 2010-01-02T07:40:37.501Z · LW(p) · GW(p)

So if I showed you that there was a 90% chance of that happening, you would still say there is no legitimate evidence, just because a 10% chance would be enough for a reasonable doubt?

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T07:58:11.945Z · LW(p) · GW(p)

If you show me that there's a 90% chance of that happening, I'll make you an archbishop. I say there's no chance of that happening. So that, I'm afraid, leaves you awkwardly situated amongst the highly smiteable.

comment by Alicorn · 2010-01-02T03:43:53.889Z · LW(p) · GW(p)

You have a 0% credence in every state of affairs, including ones that you haven't thought of, which include a mechanism for you (the software) to note the death of your hardware? Or do you mean something else?

comment by Zack_M_Davis · 2010-01-02T03:40:08.584Z · LW(p) · GW(p)

Taboo legitimate.

Replies from: wedrifid, adefinitemaybe
comment by wedrifid · 2010-01-02T03:45:28.378Z · LW(p) · GW(p)

I think he means nobody will prove to him that he has been killed.

comment by adefinitemaybe · 2010-01-02T06:53:23.163Z · LW(p) · GW(p)

I'm new here. Is there a shorthand code in operation? Or are words rationed? Perhaps you'd like to expand your comment?

Replies from: radical_negative_one, Zack_M_Davis, anonym, Nick_Tarleton
comment by radical_negative_one · 2010-01-02T07:18:02.832Z · LW(p) · GW(p)

Zack M Davis's point is explained by the article he has linked to.

The tl;dr version is that your use of a certain word (in this case "legitimate") is not helping a productive conversation. Instead, explain exactly what you mean when you say "legitimate", because the word can mean different things, so it's not clear which meaning you're using.

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T09:24:53.664Z · LW(p) · GW(p)

I've already retracted the word "legitimate" as being redundant (although I don't see how you can have any fruitful discussion if you're going to ask for the meanings of everyday words like "legitimate". Don't you find such a practice bogs things down?)

Now, can someone please offer a friendly explanation as to why ALL of my comments have attracted negative votes, even though people are still engaging me, and still failing to provide any evidence as to my mortality?

Even my original post, which simply outlines my 2010 prediction, has garnered 4 negative votes. Wasn't I supposed to make a New Year's prediction on this New Year's Predictions thread? Why would a 2010 prediction get any negative votes, or any votes at all, until either it came true or 1/1/11?

Replies from: Bindbreaker
comment by Bindbreaker · 2010-01-02T10:13:55.991Z · LW(p) · GW(p)

You're being downvoted because people think you're either being irrational or trolling.

New Year's prediction: adefinitemaybe will be banned from Less Wrong. Sixty-five percent.

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T10:35:13.644Z · LW(p) · GW(p)

I made a prediction. People challenged me on that...weakly. I responded to them, rationally and in good faith. People are voting me down because I issued a challenge they can't meet; because I hit them with a conundrum that they can't solve. I've dented pride. And, because I'm new, the humiliation they irrationally feel is doubled. How can I be trolling when all I've done is respond, amicably and on-topic, to posters' comments? Isn't it true that all the hostility is coming from their side?

"New Year's prediction: adefinitemaybe will be banned from Less Wrong. Sixty-five percent."

Why, what rule have I broken? Is there a rule about riding roughshod over wannabe thinkers' intellectual shortcomings?

Mommy, the geeks won't let me sit at their table! And, because I lack the "karma" that only the geeks can issue, I can't do anything about it.

New Year Prediction: The Less Wrong community becomes so insular and inbred that its members discover that more and more of their thoughts begin to be spawned retarded. Have you people had a look at yourselves recently?

Replies from: Bindbreaker
comment by Bindbreaker · 2010-01-02T11:04:04.193Z · LW(p) · GW(p)

Rationale behind my prediction:

I don't dislike you (I've upvoted some of your comments, downvoted others, and left some alone entirely), but people who are being consistently downvoted have been told to leave in the past. You match that profile the best of anyone I've seen on this site-- better than someone who Eliezer recently asked to leave. Eliezer was himself downvoted when he announced that, so I'm not sure whether this rule is still in effect, which is why I estimated sixty-five percent instead of ninety.

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T11:50:22.000Z · LW(p) · GW(p)

I've been signed up here for about three hours - in total length of membership, not accumulated posting time. There has never been a time when any of my posts had a higher vote score than zero. It's possible that that is the result of a net negative vote on each, but I'm inclined to think that I haven't received one positive vote from anyone. Perhaps you were confusing me with someone else.

If I'm asked to leave by an authority, I will leave immediately without complaint. However, I can't envision having any kind of enjoyable posting future here anyway. I don't think people here are interested in exploring new territory, as much as belonging to a pretend thinkers club.

Replies from: Bindbreaker
comment by Bindbreaker · 2010-01-02T12:07:58.712Z · LW(p) · GW(p)

It is the result of a net negative or zero vote on each. Independent of any action by other members, I know I've upvoted three of your posts-- "I've already retracted the word "legitimate" as being redundant..." "I may have memory of always existing..." and "Anyway, not to worry. We can still be sure of taxes." I am not sure why you would doubt me on this.

Did you read any of the articles here or on Overcoming Bias before signing up?

comment by Zack_M_Davis · 2010-01-02T08:13:27.480Z · LW(p) · GW(p)

Right, sorry. You say (presumably jokingly) that there's no "legitimate" evidence for your mortality, but surely the fact that you're human and humans have been known to die eventually is probabilistic evidence that you are mortal. I was trying to hint at this by indicating that there were hidden assumptions in the word legitimate, but on reflection I might have been misusing/overloading the taboo terminology. Do downvote the grandparent.

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T08:50:26.999Z · LW(p) · GW(p)

Perhaps I should have dispensed with the word "legitimate". In retrospect it was redundant. If evidence is not legitimate, it is not evidence.

Again, I don't know if I am human, in the generally accepted sense. I don't know that I'm going to die. Even if I am a normal human being, however, I can't accept that what has purportedly happened to any other human being must happen to me. Especially, given the fact that I don't know what actually happened to the vast majority of people that were ever born. Far more people have "disappeared" out of my life (after having briefly entered it) than have apparently died (to my satisfaction, evidence-wise). So, for me, individually, the evidence would suggest that most people disappear (go on living elsewhere in the world - out of my ken), rather than die.

Where did anyone get the idea that the preponderance of evidence shows, to the satisfaction of any individual, that most human beings die? Isn't that just hearsay, based on very small evidence samples?

Is it possible that only human beings who maintain close relationships with other human beings die? Could it be that many loners are immortal? Is there any global agency that is matching deaths to births, and investigating all anomalies?

comment by anonym · 2010-01-02T07:21:20.011Z · LW(p) · GW(p)

Click the 'Taboo' link in the grandparent comment of this one.

comment by Nick_Tarleton · 2010-01-02T07:16:23.598Z · LW(p) · GW(p)

Did you click on the link?

comment by Technologos · 2010-01-02T03:53:50.171Z · LW(p) · GW(p)

Are you banking on subjective quantum immortality?

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T06:37:20.908Z · LW(p) · GW(p)

I'm not "banking on" anything. So far, I am immortal. As I am immortal, I am not subject to the limitations that mortal humans have been seen to have been subject to until now.

Replies from: radical_negative_one
comment by radical_negative_one · 2010-01-02T07:52:15.969Z · LW(p) · GW(p)

Are you a human being, adefinitemaybe? It seems that all humans in the past have died, and all humans currently alive appear to be following the same pattern that leads to eventual death. How are you different from other humans who are known to be mortal?

Your suggestion that you are immortal is basically the same as saying, "cars are known to break down under certain conditions, and my car is just like the others, but this specific car hasn't broken down yet so I'll assume that it will never break down."

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T08:13:11.751Z · LW(p) · GW(p)

Am I a human being (if it's not a personal question)? I don't know. What am I, omnipotent? Perhaps I took human form. Anyway, even if I know something you don't, the prediction is that I will never be presented with evidence of my mortality... obviously, by any of you 6.8 billion mayflies.

Again, whether I'm actually immortal or not is irrelevant.

comment by Richard_Kennaway · 2010-01-02T19:16:17.151Z · LW(p) · GW(p)

I shall assume that you are human (which I think is virtually certain) and speaking in good faith (which I shall assume for the sake of the conversation). You say "I don't know if I am human, in the generally accepted sense", but I do not believe you.

These being so, evidence that you will die and not live again, and that you did not exist before you were conceived, lies in such observations as these:

  1. The tendency of every human body to stop working within a century and then disintegrate. Not merely the observation that people die, which is as old as there have been people, but the extensive knowledge of how and why they die.

  2. The absence of any reliable evidence of survival of the mind in any form thereafter.

  3. The absence of any reliable evidence of existence of that mind before conception.

  4. The absence of any reliable evidence of a mind existing independently of the physical body; the existence of much reliable evidence to the effect that the mind is a physical process of the brain.

Further argument against the idea of any sort of intangible mental entity separate from material things, can be found here.

Of course, many have argued otherwise. Not merely books, but whole libraries could be collected arguing for the existence of souls independent of the body and their immortality. But even if the matter were seriously contendable, that would not alter the existence of the evidence I have given, merely put up other evidence against it.

So there is the evidence that you asked for. I am of course only summarising things here. But what else is possible? If someone who knows no mathematics at all starts babbling to me about the 4-colour theorem, what can I do but advise them to study mathematics for a few years?

That being said, however, you might indeed be immortal! There is one slender chance: advances in medicine. We live at the first time in history when we can begin to see the chance of remedying the fragility of the body. Just beginning, and as yet only a chance. Had you been born two centuries ago, you would certainly be dead today, drowned in the river of time. The challenge, should you choose to accept it, is to stay alive long enough to benefit from medical advances that will enable you to swim upstream, and perhaps even overtake the current. That is the only route to immortality there is, and the best reason there has ever been to take care of your body as well as you possibly can, to make it last until you can catch that boat.

Edited to add: two slender chances! The other is cryonics. Live as well as you can for as long as you can, and if radical life extension still isn't available, get your head frozen. And don't get Alzheimer's.

I'm taking the time to construct original arguments here.

You are not. You began with a bare demand for evidence of your mortality. (Why? Why that question, and why here?) When you didn't like the answers, you demanded more loudly, then threw a tantrum. You even gave yourself a Signal from Fred here:

Mommy, the geeks won't let me sit at their table!

You intended that ironically, but it exactly describes your situation: a child who has wandered into a conversation among adults and understands nothing. You do not even understand that there is something for you to understand. But the remedy is easy: read every post linked to here. It's about the size of a book. Post nothing until you have finished. If you understand what you read there -- not agree with, but understand -- then there will be enough of a common background to have a useful conversation.

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T23:58:42.202Z · LW(p) · GW(p)

I shall assume that you are human (which I think is virtually certain) and speaking in good faith (which I shall simply assume for the sake of the conversation).<

  1. Why wouldn't you just take it as read that I'm speaking in good faith? You've used a lot of words in attempting to paint me as a country bumpkin, not fit to tie your intellectual sandals. That you preface all that with a comment about having to overtly assume my good faith makes me think you're not that sure about the bumpkin thing.

  2. You can't just assume I'm human. If that were valid, we could all just assume whatever we wanted here, and claim we had won our arguments.

You say "I don't know if I am human, in the generally accepted sense", but I do not believe you.<

Apart from your beliefs being entirely irrelevant, how is it possible for you to form an opinion about what I claim not to know, that is not entirely founded on your emotion? Since the greatest philosophers have struggled over the ages with the question "Am I?", I don't see how "Am I human?" will likely ever have a cut and dried answer.

These being so, evidence that you will die and not live again, and that you did not exist before you were conceived, lies in such observations as these:

  1. The tendency of every human body to stop working within a century and then disintegrate. Not merely the observation that people die, which is as old as there have been people, but the extensive knowledge of how and why they die.<

How many human bodies have you personally witnessed stop working and begin disintegrating, within 100 years? I don't grant that that is a tendency at all. We only have information about those who have died. If we are to examine the chances of my immortality, we must look for those who haven't died. Do you have any relevant input with respect to people who haven't died? If not, does the fact that you, personally, don't, constitute evidence of anything? If not, does the same apply to all other individuals? If so, may we say that there is no reliable evidence that humans all die?

  2. The absence of any reliable evidence of survival of the mind in any form thereafter.<

Do you have any reliable evidence for the existence of the mind at ANY time? If so, can you present it to me? The fact that you purportedly think will not convince me. Isn't it true that the existence of mind can only ever be hearsay (and, no, I'm not singling you out here)?

  3. The absence of any reliable evidence of existence of that mind before conception.<

Oh, please. What an entirely stupid thing to write. It's fundamental that evidence can't be produced for the non-existence of a thing. What are you going to say, "It isn't there, none of us present can experience it, so it can't exist"?

  4. The absence of any reliable evidence of a mind existing independently of the physical body; the existence of much reliable evidence to the effect that the mind is a physical process of the brain.<

Again, you have no reliable evidence for the existence of mind (in the form it is widely thought to exist, i.e., one each, inside our bodies somewhere, etc.). For all you know, you're plugged into the Matrix.

Of course, many have argued otherwise. Not merely books, but whole libraries could be collected arguing for the existence of souls independent of the body and their immortality.<

Wow, is that what got your backs up? The idea that I might be trying to prove the existence of God? Is that what all the witch-hunting is about? My other prediction is the future formation of an Atheist Inquisition, as Atheism gradually takes the classic form of a religion.

But even if the matter were seriously contendable, that would not alter the existence of the evidence I have given, merely put up other evidence against it.<

I resent your framing the debate (which you, contradictorily, have felt the need to participate in at length) as not seriously contendable. You've given no such evidence. Everything you've said relies on the definite prior existence of things as yet not proven to exist, and on the definite non-existence of certain other things. Your arguments, then, are entirely untenable.

So there is the evidence that you asked for. I am of course only summarising things here. But what else is possible? If someone who knows no mathematics at all starts babbling to me about the 4-colour theorem, what can I do but advise them to study mathematics for a few years?<

"If someone who knows no mathematics at all starts babbling to me about the 4-colour theorem..." Do you ever even think a little bit before you type something? Or do you just copy and paste from your Bumper Book of Insults for Use by the Pompous?

That being said, however, you might indeed be immortal!<

That's not being argued. The challenge is for you to present me with evidence of my mortality. So far, you've listed a lot of irrelevancies about what you personally believe about "human beings". All your evidence stands or falls on a) human beings all having died in the past (which, of course we KNOW they haven't, or we wouldn't be having this conversation), b) all human beings being conventional, and c) my being a conventional human being too. For it to carry any weight at all (and I suspect it wouldn't ever), you'd first have to prove that there is only one kind of human being, that those human beings all die (and have all died, say, within 200 years of being born), and that I am a human being (and not just in one physical shape and form either, but entirely).

Had you been born two centuries ago, you would certainly be dead today, drowned in the river of time.<

Certainly? Does that word have a special meaning here I don't know about? Can you list any other certainties? If you can make the list long enough, we can dispense with any further musing here. For my part, I know of nothing that is certain. Of course, I'm only a humble child amongst knowledgeable adults here.

The challenge, should you choose to accept it, is to stay alive long enough to benefit from medical advances that will enable you to swim upstream, and perhaps even overtake the current. That is the only route to immortality there is...<

Are you the Omnipotent One they said we should seek out? You could be in the running for Atheist Pope one day.

"I'm taking the time to construct original arguments here."

You are not. You began with a bare demand for evidence of your mortality. (Why? Why that question, and why here?) When you didn't like the answers, you demanded more loudly, then threw a tantrum.<

I didn't. I began by making a prediction on a prediction thread. I didn't ask for responses. I was then challenged on that prediction and have since defended it. Can we say, in light of that, that your "Why that question, and why here?" question is silly (or is it something someone in a synod of atheists is bound to ask)? When I didn't "like" the answers, I showed how they were inadequate. My responses never got "louder". That you think they did probably has more to do with faulty wiring in your brain, and perhaps your own tendency to try to shout people down. Have your speakers checked. And I didn't throw a tantrum. I merely questioned the rationale of corrupting the voting system by using it to put down supposed "heresy". To censor. To maintain group integrity. To encourage Groupthink.

"Mommy, the geeks won't let me sit at their table!"

You intended that ironically, but it exactly describes your situation: a child who has wandered into a conversation among adults and understands nothing.<

I understand that you are a pompous windbag who can't form a coherent thought, due to his brain being fogged by a fantastic hubris. Was that "3. The absence of any reliable evidence of existence of that mind before conception." part of the adult conversation? What about "If someone who knows no mathematics at all starts babbling to me about the 4-colour theorem..."? I may be a child, but those statements seem stupid to me.

My statement re "the geeks table" had more to do with the sad reality that complete censorship and ideas control is more assured here than at any Medieval synod.

You do not even understand that there is something for you to understand.<

Oh well, that settles it. You win. You now move on to the quarter-final round against the guy who thinks he's Napoleon at Bedlam Hospital. It should be a cracker!

But the remedy is easy: read every post linked to here. It's about the size of a book. Post nothing until you have finished. If you understand what you read there -- not agree with, but understand -- then there will be enough of a common background to have a useful conversation.<

Groupthink gone wild. "Will you confess your heresy, and bow down and kiss the ring? If you do, all this torture will stop." I don't have to read and regurgitate other people's ideas. My brain makes its own ideas. Ideas, apparently, that encourage you to respond in long diatribes. Do you mean you'd write MORE if I first read the articles before posting again? Please feel free not to respond at all.

comment by Technologos · 2010-01-02T17:22:57.035Z · LW(p) · GW(p)

Based on your observation that your direct evidence for human mortality is limited, you conclude that you will never receive evidence, under some definition, that you are mortal. Do you have any evidence (under the same definition) that I am mortal, or indeed that anybody else is? If yes, could you please explain the difference in conclusions from identical evidence?

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-03T01:39:33.686Z · LW(p) · GW(p)

No, I have no evidence for your mortality. Although it's possible that I could someday have such evidence (based on the generally-accepted definition of mortality), I could never be in a position to present YOU with any.

My underlying interest in this theme lies in the direction of why we blindly accept our own mortality on such little individually-beheld evidence. Is it possible that, as with increasing average human lifespan, dying has more to do with belief about dying than with any physiological limitations? Accidents and murder, etc., apart, could we believe ourselves to live to 200 years old, if we could shake off the ingrained belief in the inevitability of death? Could avoidance of news reports involving death help? Are we really to believe that gains in average human lifespan are solely due to improvements in factors external to the body? If so, why is it that many remote, relatively under-developed areas produce so many centenarians? Does belief that it's easily achievable to live to, say, 90 years have nothing to do with individually achieving such longevity? And does "genetics" have more to do with monkey see, monkey do than we imagine?

Note: In 1900, global average lifespan was 31 years. By mid-century, it was 48 years. In 2005, it was 65.6 years. http://www.who.int/global_health_histories/seminars/presentation07.pdf

Replies from: Unknowns
comment by Unknowns · 2010-01-03T02:12:57.917Z · LW(p) · GW(p)

You do have some evidence that you are similar to other people. Consequently, if you had evidence for the mortality of someone else, this would be evidence for your own mortality. You have admitted that it is possible to have evidence for the mortality of someone else, and therefore it is possible for you to have evidence of your own mortality.
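
A minimal numeric sketch of that kind of update, in Python, with entirely made-up likelihoods (the 0.99 and 0.5 below are illustrative assumptions, not measured values):

    # Bayes' rule: each observed death of someone relevantly similar to you
    # shifts probability toward the hypothesis "beings like me are mortal".
    def posterior(prior, p_obs_if_mortal, p_obs_if_not):
        """Return P(mortal | observation) given P(mortal) and the two likelihoods."""
        num = p_obs_if_mortal * prior
        return num / (num + p_obs_if_not * (1 - prior))

    p_mortal = 0.5  # hypothetical starting prior
    for _ in range(10):  # observe ten deaths of people similar to you
        p_mortal = posterior(p_mortal, 0.99, 0.5)  # assumed likelihoods

    print(round(p_mortal, 4))  # climbs toward 1 with each observation

However weak or strong one makes those numbers, any observation that is more likely under "I am mortal" than under "I am not" counts as evidence in the relevant sense.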

comment by byrnema · 2010-01-03T03:03:47.910Z · LW(p) · GW(p)

May we define what 'you' is?

For example, if 'you' is a username here on LessWrong ('adefinitemaybe'), then you could be "mortal" because your account can be deleted.

Or if you identify with the atoms you are composed of, then the issue of your mortality is again different...

Later edit: OK, you're 'apparently human'. Please don't respond to this message, as I plan to delete it since it's apparently noise.

comment by adefinitemaybe · 2010-01-02T03:37:27.915Z · LW(p) · GW(p)

Which, to all intents and purposes, means that I'm immortal. Please form an orderly queue to worship, leave offerings, etc.

Replies from: wedrifid
comment by wedrifid · 2010-01-02T04:05:19.646Z · LW(p) · GW(p)

I would really rather bet against you. Let's select a suitable arbitrator and translate that probability into some (finite) odds.

Say, hypothetically, that every living relative of yours out to fourth cousin is captured and brought before you. They are then, every man, woman and child, beaten to death with a rubber chicken. The assailant then begins to beat you with the aforementioned toy, and you exhibit similar symptoms of physical decay to your previously bludgeoned kin. No unbiased arbitrator would judge that no legitimate evidence for your individual mortality has been presented to you. Short of non-Occamian priors, the evidence is clear.

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T07:07:23.559Z · LW(p) · GW(p)

(First, there is nothing to bet on. Your mission, s.y.c.t.a.i., is to provide me with evidence of my individual mortality. Whether I actually die at some point or not is irrelevant.)

So, if all your relatives were already dead, and your heart stopped beating for one reason or another, there would be little point in attempting to revive you? Could it be that all my dead relatives were mortal, while I am not? Even if I bleed when pricked by some defective design element of a Chinese-made rubber chicken? And how could I know for sure that these mere mortals were actually relatives of mine? I mean, tsk, they're a bit ephemeral, aren't they?

I require evidence of my mortality, not my propensity to bruise and bleed when hit. I predict that I'm never going to be presented with such evidence.

Meanwhile, I've noticed that the voting appears a little biased toward people who are losing the debate. What's that all about? Groupthink?

Replies from: wedrifid
comment by wedrifid · 2010-01-02T08:53:37.374Z · LW(p) · GW(p)

First, there is nothing to bet on. Your mission, s.y.c.t.a.i., is to provide me with evidence of my individual mortality. Whether I actually die at some point or not is irrelevant.

The bet would (quite obviously) be on whether you are provided evidence of your individual mortality.

So, if all your relatives were already dead, and your heart stopped beating for one reason or another, there would be little point in attempting to revive you?

No, you have it backwards. Chewbacca was born on Kashyyyk but lives on Endor.

Could it be that all my dead relatives were mortal, while I am not? Even if I bleed when pricked by some defective design element of a Chinese-made rubber chicken? And how could I know for sure that these mere mortals were actually relatives of mine. I mean, tsk, they're a bit ephemeral, aren't they?

Orthonormal was kind enough to provide you with an explanation of what evidence means.

I require evidence of my mortality, not my propensity to bruise and bleed when hit. I predict that I'm never going to be presented with such evidence.

You already have overwhelming evidence of your mortality. Providing more by beating you with a rubber chicken until you were bloody and bruised would just be icing.

Meanwhile, I've noticed that the voting appears a little biased toward people who are losing the debate. What's that all about? Groupthink?

Votes would have hovered around 0 if you had let it go when it turned out your joke didn't quite work. Meanwhile, I had best not reply further lest I be found to violate the "don't feed the trolls" injunction.

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T09:12:21.897Z · LW(p) · GW(p)

"Votes would have hovered around 0 if you had let it go when it turned out your joke didn't quite work. Meanwhile, I had best not reply further lest I be found to violate the "don't feed the trolls" injunction."

Wow, that's pretty harsh. Can you provide any evidence to back up that accusation as to my intent? How do I know you're not just a sore loser?

Meanwhile, I'm posting in good faith, and I think I'm holding my ground pretty well. The "there's no real evidence that humans always die" thing that occurred to me (see Zack's comments on this thread segment) strikes me as very discussable.

Replies from: wedrifid
comment by wedrifid · 2010-01-02T09:40:36.658Z · LW(p) · GW(p)

Meanwhile, I'm posting in good faith, and I think I'm holding my ground pretty well.

It sounds like you are posting in good faith. Just go easy on the "but I'm winning! lolz, groupthink!" stuff, that tends to be a tipping point.

I do recommend you look a bit closer at what people are telling you about 'evidence'; it's important. I have been involved with communities in which finding clever ways to say "That isn't evidence. Where is your evidence?" in response to any given piece of evidence is rewarded with status, and the stronger the evidence ignored, the more respect is granted. This, for the most part, isn't one of them. If you continue to speak nonsense and fail to comprehend those who are engaged with you, you'll just be voted down to oblivion.

Replies from: adefinitemaybe
comment by adefinitemaybe · 2010-01-02T10:09:55.906Z · LW(p) · GW(p)

No, you accused me of being a troll. Are you now stating you believe me to be a troll posting in good faith?

"If you continue to speak nonsense and fail to comprehend those who are engaged with you you'll just be voted down to oblivion."

What a perfectly rational argument. I'm speaking nonsense, therefore, you are right and I'm wrong, and I must change. Brilliant... if a trifle lazy. Okay, tell me what to say so that I get voted most popular boy. What's considered acceptable rational thought around here?

I must say that I loved your 'Just go easy on the "...groupthink!" stuff, that tends to be a tipping point.'

And that from a "rational thinker"!