What we're losing

post by PhilGoetz · 2011-05-15T03:34:40.718Z · LW · GW · Legacy · 79 comments

More and more, LessWrong's posts are meta-rationality posts, about how to be rational, how to avoid akrasia, in general, without any specific application.  This is probably the intended purpose of the site.  But they're starting to bore me.

What drew me to LessWrong is that it's a place where I can put rationality into practice, discussing specific questions of philosophy, value, and possible futures, with the goal of finding a good path through the Singularity.  Many of these topics have no other place where rational discussion of them is possible, online or off.  Such applied topics have almost all moved to Discussion now, and may be declining in frequency.

This isn't entirely new.  Applied discussions have always suffered bad karma on LW (statistically; please do not respond with anecdotal data).  I thought this was because people downvote a post if they find anything in it that they disagree with.  But perhaps a lot of people would rather talk about rationality than use it.

Does anyone else have this perception?  Or am I just becoming a LW old geezer?

At the same time, LW is taking off in terms of meetups and number of posts.  Is it finding its true self?  Does the discussion of rationality techniques have a larger market than debates over Sleeping Beauty (I'm even beginning to miss those!)?  Is the old concern with values, artificial intelligence, and the Singularity something for LW to grow out of?

(ADDED: Some rationality posts are good.  I am also a lukeprog fan.)

79 comments

Comments sorted by top scores.

comment by Scott Alexander (Yvain) · 2011-05-15T16:35:13.179Z · LW(p) · GW(p)

Agreed.

One person at the Paris meetup made the really interesting and AFAICT accurate observation that the more prominent a Less Wrong post was, the less likely it was to be high quality - i.e., comments are better than Discussion posts, which are better than Main posts (with several obvious and honorable exceptions).

I think maybe it has to do with the knowledge that anything displayed prominently is going to have a bunch of really really smart people swarming all over it, critiquing it, and making sure you get very embarrassed if any of it is wrong. People avoid posting things they're not sure about, and so the things that get main-ed tend to be restatements of things that create pleasant feelings in everyone reading them without rocking any conceivable boat. The sort of overly meta topics you're talking about lend themselves to those restatements - for example "We should all be more willing to try new things!" or "Let's try to be more alert for biases in our everyday life!"

Potential cures include greater willingness to upvote posts that are interesting but non-perfect, greater willingness to express small disagreements in "IAWYC but" form, and greater willingness to downvote posts that are applause lights or don't present non-obvious new material. I'm starting to do this, but hitting that downvote button when there's nothing objectively false or stupid about a post is hard.

Replies from: lukeprog, steven0461, Will_Newsome, atucker, jsalvatier, steven0461
comment by lukeprog · 2011-05-16T19:13:19.542Z · LW(p) · GW(p)

I agree that theoretical-sciency-mathy-insightful stuff is less common now than when Eliezer was writing posts regularly. I suspect this is largely because writing such posts is hard. Few people have that kind of knowledge, thinking ability, and writing skills, and the time to do the writing.

As someone who spends many hours writing posts only to have them nit-picked to death by almost everyone who bothers to comment, I appreciate your advice to "express small disagreements in 'IAWYC but' form."

As for your suggestion to downvote posts that "don't present non-obvious new material," I'm not sure what to think about that. My recent morality post probably contains only material that is obvious to someone as thoroughly familiar with LW material as yourself or Phil Goetz or Will Newsome or Vladimir Nesov or many others, but on the other hand a great many LWers are not quite that familiar, or else haven't taken the time to apply earlier lessons to a topic like morality (and were thus confused when Eliezer skipped past these basics and jumped right into 'Empathic Metaethics' in his own metaethics sequence).

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2011-05-18T00:21:48.554Z · LW(p) · GW(p)

I enjoyed your morality post, as I do most of your posts, and certainly wouldn't accuse it of not presenting non-obvious new material.

comment by steven0461 · 2011-05-16T20:10:34.607Z · LW(p) · GW(p)

I'm starting to do this, but hitting that downvote button when there's nothing objectively false or stupid about a post is hard.

I don't find it hard, but whenever I vote a comment below zero for not adding anything, it just gets fixed back to zero by someone who probably wouldn't have voted otherwise.

comment by Will_Newsome · 2011-05-16T00:57:14.767Z · LW(p) · GW(p)

I had implicitly resignedly assumed that the bland re-presentation of old material and applause light posts were part of a consciously directed memetic strategy. Apparently I'd underestimated the size of the disgruntled faction. From now on I will be less merciful with my downvoting button.

comment by atucker · 2011-05-17T20:01:03.278Z · LW(p) · GW(p)

As the author of one of the rehash posts, I agree that these sorts of topics are generally pretty boring and uninteresting to read. There's nothing surprising or new in them, and they seem pretty obvious when you read them.

But the point (of mine at least) wasn't really to expose any new material, so much as to try to push people into doing something useful. As far as I can tell, a large portion of the readers of LW don't implement various easy life-improvement methods, and it was really more intended as a push to encourage people to use them.

On the one hand, a lot of interesting stuff on LW is "applied rationality" and it's really fun to read, but I'm fairly skeptical as to how useful it is for most people. There's nothing wrong with it being interesting and fun, but there are other things to talk about.

comment by jsalvatier · 2011-05-15T20:03:43.528Z · LW(p) · GW(p)

Perhaps it would be easier and/or more constructive to comment 'I don't disagree with anything here, but I don't think this is valuable'?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2011-05-15T21:04:36.915Z · LW(p) · GW(p)

Perhaps, but I expect far fewer people would do so: it's less anonymous and more likely to cause confrontations/bad feelings.

Replies from: Nornagest, Rain
comment by Nornagest · 2011-05-16T09:11:27.798Z · LW(p) · GW(p)

Sounds like a great time to invoke some strategic applied sociopathy.

comment by Rain · 2011-05-15T21:11:02.444Z · LW(p) · GW(p)

Well-Kept Gardens Die By Pacifism seems particularly relevant here.

comment by steven0461 · 2011-05-16T01:54:04.059Z · LW(p) · GW(p)

One part of what's going on may be that the site allows anyone to register and vote, and so there's a feedback loop where people who are less like the core demographic and more like the rest of the internet come in and vote for posts that appeal more to average people from the internet, which in turn causes more average people from the internet to register and vote, and all this creates a pressure for the site to want to become every other site.

Another part of what's going on may be that the site has been focusing more and more on the idea that rationality gives you easy and obvious personal superpowers (as opposed to just helping you figure out what goal to strive toward and with what strategies), and while I'm not saying there's no truth to that, it doesn't strike me as being why most of us originally got interested in these issues, and a lot of the support for it feels like it was selected to support an easily-marketable bottom line.

comment by Eugine_Nier · 2011-05-15T04:11:39.131Z · LW(p) · GW(p)

I'm somewhat puzzled by your terminology since the topics you call "meta-rationality":

about how to be rational, how to avoid akrasia, and so on.

strike me as much more practical and applied than the ones you call "applied rationality":

philosophy, value, and possible futures

which strike me as much more meta.

Going by the list of topics you're complaining about, it appears that you are the one who "would rather talk about rationality than use it."

Replies from: Yvain, PhilGoetz, Bongo
comment by Scott Alexander (Yvain) · 2011-05-15T16:42:05.822Z · LW(p) · GW(p)

Phil's terminology is probably the way I would have worded the same thing.

Posts that talk about things like "how do we use the anthropic principle", "what is morality", "what decision theory makes sense", "what is a mysterious answer to a mysterious question", etc. all seem object-level...

...whereas there's another class of posts that always uses the word "rationality" - i.e. "how can we be more rational in our lives", "how can we promote rationality", "am I a good enough rationalist if...", "who is/isn't a rationalist", et cetera - and these seem properly termed meta-level because they involve being rational about rationality.

I have a feeling the latter class of posts would benefit if they tried to taboo "rationality".

Replies from: Bongo, David_Gerard, jsalvatier
comment by Bongo · 2011-05-16T04:09:18.646Z · LW(p) · GW(p)

I have a feeling the latter class of posts would benefit if they tried to taboo "rationality".

or: use rationality and don't mention it.

comment by David_Gerard · 2011-05-15T19:34:02.290Z · LW(p) · GW(p)

I have a feeling the latter class of posts would benefit if they tried to taboo "rationality".

Bingo.

Perhaps these would be good rewrite targets.

comment by jsalvatier · 2011-05-15T20:06:56.573Z · LW(p) · GW(p)

Much clearer than the original post.

comment by PhilGoetz · 2011-05-15T04:31:28.251Z · LW(p) · GW(p)

I see your point. I don't think of them as meta, because I see them as rungs on a ladder with a definite destination. I changed the wording a little.

Replies from: jsalvatier
comment by jsalvatier · 2011-05-15T07:57:40.535Z · LW(p) · GW(p)

Perhaps 'abstract' is a better word than 'meta' here.

Replies from: Gray
comment by Gray · 2011-05-16T21:38:13.139Z · LW(p) · GW(p)

The prefix 'meta' is incredibly overused...just saying.

Replies from: wedrifid
comment by wedrifid · 2011-05-17T09:27:35.743Z · LW(p) · GW(p)

Bravo

comment by Bongo · 2011-05-15T04:23:05.955Z · LW(p) · GW(p)

Yeah, I agree with PhilGoetz but downvoted because of bizarre terminology.

comment by wedrifid · 2011-05-15T19:01:06.094Z · LW(p) · GW(p)

Does the discussion of rationality techniques have a larger market than debates over Sleeping Beauty (I'm even beginning to miss those!)

Wow, I'd forgotten all about those. Those days were fun. We actually had to, well, think occasionally. Nothing remotely challenging has cropped up in a while!

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-05-16T07:05:50.963Z · LW(p) · GW(p)

Those days were fun. We actually had to, well, think occasionally. Nothing remotely challenging has cropped up in a while!

If you like thinking about challenging theoretical rationality problems, there are plenty of those left (logical uncertainty/bounded rationality, Pascal's wager/mugging/decision theory for running on error-prone hardware, moral/value uncertainty, the nature of anticipation/surprise/disappointment/good and bad news, complexity/Occam's razor).

I've actually considered writing a post titled "Drowning in Rationality Problems" to complain about how little we still know about the theory of rationality and how few LWers seem to be working actively on the subject, but I don't know if that's a good way to motivate people. So I guess what I'd like to know is (and not in a rhetorical sense), what's stopping you (and others) from thinking about these problems?

Replies from: cousin_it, lukeprog, wedrifid, Vladimir_Nesov, lukeprog, Will_Newsome, John_Maxwell_IV, XiXiDu
comment by cousin_it · 2011-05-16T08:18:50.408Z · LW(p) · GW(p)

I'd like you to write that post.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-05-16T08:39:40.933Z · LW(p) · GW(p)

Maybe I will, after I get a better idea why more people aren't already working on these problems. One reason not to write it is that the feeling of being drowned in big problems is not a particularly good one, and possibly de-motivating. Sometimes I wish I could go back to 1998, when I thought Bayesianism was a pretty much complete solution to the problem of epistemology, except for this little issue of what to expect when you're about to get copied twice in succession...

Replies from: cousin_it, XiXiDu
comment by cousin_it · 2011-05-16T08:55:04.352Z · LW(p) · GW(p)

By the way, have you seen how I've been using MathOverflow recently? It seems that if you can reduce some problem to a short math question in standard terms, the default next action (after giving it your own best shot) should be posting it on MO. So far I've posted two problems that interested me, and both got solved within an hour.

Replies from: XiXiDu
comment by XiXiDu · 2011-05-16T13:31:02.663Z · LW(p) · GW(p)

So far I've posted two problems that interested me, and both got solved within an hour.

It's all magic to me, but it looks like a very effective human resource. Have you considered pushing MathOverflow to its limits and seeing if those people might actually be able to make valuable contributions to open problems faced by Less Wrong or the SIAI?

I assume that the main obstacle to effectively exploiting such resources as MathOverflow is formalizing the problems that are faced by people working to refine rationality or create FAI. Once you know how to ask the right questions, one could spread them everywhere and see if there is someone who might be able to answer them, or if there is already a known solution.

Currently it appears to me that most of the important problems are not widely known, a lot of them being mainly discussed here on Less Wrong or on obscure mailing lists. By formalizing and spreading the gist of those problems one would be able to make people aware of Less Wrong and risks from AI and exploit various resources.

What I am thinking about is analogous to a huge roadside billboard with a short but succinct description of an important problem. Someone really smart or knowledgeable might drive-by and solve it. Not only would the solution be valuable but you would win a potential new human resource.

Replies from: cousin_it
comment by cousin_it · 2011-05-16T13:39:48.251Z · LW(p) · GW(p)

I'm all for exploiting resources to the limit! The bottleneck is formalizing the problems. It's very slow and difficult work for me, and the SIAI people aren't significantly faster at this task, as far as I can see.

comment by XiXiDu · 2011-05-16T13:11:19.277Z · LW(p) · GW(p)

One reason not to write it is that the feeling of being drowned in big problems is not a particularly good one, and possibly de-motivating.

No! What is particularly demotivating for me is that I don't know what heuristics I can trust and when I am better off trusting my intuitions (e.g. Pascal's Mugging).

If someone was going to survey the rationality landscape and outline what we know and where we run into problems, it would help a lot by making people aware of the big and important problems.

comment by lukeprog · 2011-05-16T20:28:05.591Z · LW(p) · GW(p)

I suspect that clearly defining open rationality problems would act as a focusing lens for action, not a demotivator. Please do publish your list of open rationality problems. Do for us what Hilbert did for mathematicians. But you don't have to talk about 'drowning.' :)

Replies from: utilitymonster
comment by utilitymonster · 2011-05-17T22:24:29.480Z · LW(p) · GW(p)

Second the need for a list of the most important problems.

comment by wedrifid · 2011-05-16T12:45:46.445Z · LW(p) · GW(p)

So I guess what I'd like to know is (and not in a rhetorical sense), what's stopping you (and others) from thinking about these problems?

Part of this issue for me, at least with respect to thinking about the problems in the context of lesswrong, is that the last efforts in that direction were stifled rather brutally - to the extent that we lost one of the most technically orientated and prolific users. This isn't to comment on the rightness or wrongness of that decision - just a description of an influence. Having big brother looming over the shoulder specifying what you may think makes it work instead of fun. And not work that I have any particular comparative advantage in! (I can actually remember thinking to myself "folks like Wei Dai are more qualified to tackle these sorts of things efficiently, given their prior intellectual investments".)

The creation of a decision theory email list with a large overlap of LW posters also served to dilute attention, possibly reducing self-reinforcing curiosity to some degree.

But for me personally, I have just had other things to be focussing my intellectual attention on. I felt (theoretical) rationality and decision theory get uncached from my brain as I loaded it up with biology and German. This may change in the near future. I'm heading over to Jasen's training camp next month and that is likely to kick the mental focus around a bit.

I second cousin_it's interest in your aforementioned post! It would actually be good to know which problems are not solved as opposed to which problems I just don't know the solution to. Or, for that matter, which problems I think I know the solution to but really don't.

Replies from: Wei_Dai, jimrandomh, Oscar_Cunningham
comment by Wei Dai (Wei_Dai) · 2011-05-16T20:05:37.892Z · LW(p) · GW(p)

Having big brother looming over the shoulder specifying what you may think makes it work instead of fun.

I do not really get this reaction. So what if Eliezer has a tendency to over-censor? I was once banned completely from a mailing list but it didn't make me terribly upset or lose interest in the subject matter of the list. The Roko thing seems even less of a big deal. (I thought Roko ended up agreeing that it was a mistake to make the post. He could always post it elsewhere if he doesn't agree. It's not as if Eliezer has control over the whole Internet.)

And not work that I have any particular comparative advantage in! (I can actually remember thinking to myself "folks like Wei Dai are more qualified to tackle these sort of things efficiently, given their prior intellectual investments".

I didn't think I had any particular advantage when I first started down this path either. I began with what I thought was just a fun little puzzle in an otherwise well-developed area, which nobody else was trying to solve because they didn't notice it as a problem yet. So, I'm a bit wary about presenting "a list of really hard and important problems" and scaring people away. (Of course I may be scaring people away just through this discussion, but probably only a minority of LWers are listening to us.)

I second cousin_it's interest in your aforementioned post! It would actually be good to know which problems are not solved as opposed to which problems I just don't know the solution to. Or, for that matter, which problems I think I know the solution to but really don't.

I guess another factor is that I have the expectation that if someone is really interested in this stuff (i.e., has a "burning need to know"), they would already have figured out which problems are not solved as opposed to which problems they just don't know the solution to, because they would have tried every available method to find existing solutions to these problems. It seems unlikely that they'd have enough motivation to make much progress if they didn't have at least that level of interest.

So I've been trying to figure out (without much success) how to instill this kind of interest in others, and again, I'm not sure presenting a list of important unsolved problems is the best way to do it.

Replies from: wedrifid, Vladimir_Nesov
comment by wedrifid · 2011-05-16T20:35:23.903Z · LW(p) · GW(p)

So I've been trying to figure out (without much success) how to instill this kind of interest in others, and again, I'm not sure presenting a list of important unsolved problems is the best way to do it.

I'm not sure either. It would perhaps be a useful reference but not a massive motivator in its own right.

What I know works best as a motivator for me is putting up sample problems - presenting the subject matter in 'sleeping hitchhiker terrorist inna box' form. When seeing a concrete (albeit extremely counterfactual) problem I get nerd-sniped. I am being entirely literal when I say that it takes a massive amount of willpower for me to stop myself from working on it - to the extent that there is less perceived effort in tackling the problem for 15 hours straight than there is in putting it aside. And that can be the start of a self-reinforcement cycle at times.

The above is in contrast to just seeing the unsolved problems listed. That format is approximately inspiration neutral.

By the way, is that decision theory list still active? I was subscribed but haven't seen anything appear of late.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-05-16T20:43:37.916Z · LW(p) · GW(p)

What I know works best as a motivator for me is putting up sample problems - presenting the subject matter in 'sleeping hitchhiker terrorist inna box' form.

That seems like a useful datum, thanks.

By the way, is that decision theory list still active? I was subscribed but haven't seen anything appear of late.

It's still active, but nobody has made a post for about a month.

Replies from: wedrifid
comment by wedrifid · 2011-05-17T09:28:51.097Z · LW(p) · GW(p)

It's still active, but nobody has made a post for about a month.

Ahh, there we go. Cousin_it just woke it up!

comment by Vladimir_Nesov · 2011-05-16T21:15:25.670Z · LW(p) · GW(p)

I guess another factor is that I have the expectation that if someone is really interested in this stuff (i.e., has a "burning need to know"), they would already have figured out which problems are not solved as opposed to which problems they just don't know the solution to, because they would have tried every available method to find existing solutions to these problems.

Discussing things that are already known can help in understanding them better. Also, the "burning need to know" occasionally needs to be ignited, or directed. I don't study decision theory because I like studying decision theory in particular, even though it's true that I always had a tendency to obsessively study something.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-05-17T09:53:16.082Z · LW(p) · GW(p)

But decision theory ought to be a natural attractor for anyone with intellectual interests (any intellectual question -> how am I supposed to answer questions like that? -> epistemology -> Bayesianism -> nature of probability -> decision theory). What's stopping people from getting to the end of this path? Or am I just a freak in my tendency to "go meta"?

Replies from: Vladimir_Nesov, cousin_it, Eliezer_Yudkowsky, Will_Newsome
comment by Vladimir_Nesov · 2011-05-17T11:36:41.720Z · LW(p) · GW(p)

What's stopping people from getting to the end of this path?

The wealth of interesting stuff located well before the end.

comment by cousin_it · 2011-05-17T11:01:17.201Z · LW(p) · GW(p)

Seconding Eliezer. Also, please do more of the kind of thinking you do :-)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-17T10:20:49.188Z · LW(p) · GW(p)

Yes, you're a freak and nobody but you and a few other freaks can ever get any useful thinking done and didn't we sort of cover this territory already?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-05-17T15:34:26.309Z · LW(p) · GW(p)

I'm confused. Should I stop thinking about how exactly I'm "freaky" and how to possibly reproduce that "freakiness" in others? Has the effort already reached diminishing returns, or was it doomed from the start? Or do you think I'm just looking for ego-stroking or something?

Replies from: Davorak
comment by Davorak · 2011-05-17T19:56:21.460Z · LW(p) · GW(p)

Going meta takes resources. Those resources could instead be applied directly to the problem in front of you. If not solving the problem right in front of you causes long-term, hard-to-recover-from problems, it makes sense to apply your resources directly to the problem at hand.

So:

(any intellectual question -> how am I supposed to answer questions like that? -> epistemology -> Bayesianism -> nature of probability -> decision theory)

Seems rational when enough excess resources are available. To make more people follow this path, you need to:

  • Increase the resources of those you are trying to teach.
  • Lower the resource cost of following the path.

Lesswrong.com and Lesswrong meetup groups teach life skills to increase the members' resources. At the same time they gather people who know skills on the path with those who want to learn, lowering the resource cost of following the path. Many other methods exist; I have just mentioned two. A road is being built; it has just not reached where you are yet.

Perhaps you are ahead of the road, marking the best routes or clearing the ground, but not everyone has the resources to get so far without a well-paved road.

comment by Will_Newsome · 2011-05-21T09:28:53.224Z · LW(p) · GW(p)

Or morality! (Any action -> but is that the right thing to do? -> combinatorial explosion of extremely confusing open questions about cognitive science and decision theory and metaphysics and cosmology and ontology of agency and arghhhhhh.) It's like the universe itself is a Confundus Charm and nobody notices.

How much of decision theory requires good philosophical intuition? If you could convince everyone at MathOverflow to familiarize themselves with it and work on it for a few months, would you expect them to make huge amounts of progress? If so, I admit I am surprised there aren't more mathy folk sniping at decision theory just for meta's sake.

comment by jimrandomh · 2011-05-16T21:11:16.122Z · LW(p) · GW(p)

The creation of a decision theory email list with a large overlap of LW posters also served to dilute attention, possibly reducing self-reinforcing curiosity to some degree.

I wasn't aware this list existed, but would be very interested in reading its archives. Do you have a link?

comment by Oscar_Cunningham · 2011-05-17T10:20:27.742Z · LW(p) · GW(p)

I second jimrandomh's interest in the mailing list. Can I be signed up for it? Are there archives?

Replies from: wedrifid
comment by wedrifid · 2011-05-17T10:47:05.187Z · LW(p) · GW(p)

decision-theory-workshop.googlegroups.com

I'm not sure who admins (and so can confirm new subscribers). It's a google group so the archive may well survive heat death.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-05-17T12:23:54.256Z · LW(p) · GW(p)

Thanks.

(BTW Google seems to be messing with the structure of the URLs for groups, the address that currently works is https://groups.google.com/group/decision-theory-workshop/ )

comment by Vladimir_Nesov · 2011-05-16T18:27:31.087Z · LW(p) · GW(p)

I've been less engaged with the old topics for the last several months while trying to figure out an updateful way of thinking about decision problems (understand the role of observations, as opposed to reducing them to non-observations as UDT does; and construct an ADT-like explicit toy model). This didn't produce communicable intermediate results (the best I could manage was this post, for which quite possibly nobody understood the motivation). Just a few days ago, I think I figured out the way of formalizing this stuff (which is awfully trivial, but might provide a bit of methodological guidance to future research).

In short, progress is difficult and slow where we don't have a sufficient number of tools which would suggest actionable open problems that we could assign to metaphorical grad students. This also sucks out all motivation for most people who could be working on these topics, since there is little expectation of success and little understanding of what such success would look like. Even I actually work while expecting to most likely not produce anything particularly useful in the long run (there's only a limited chance for limited success), but I'm a relatively strange creature. Academia additionally motivates people by rewarding the activity of building in known ways on existing knowledge: this doesn't produce a lot of benefit, but it yields visible, possibly high-quality, if mostly useless results that gradually build up to systematic improvements.

Replies from: cousin_it, FAWS
comment by cousin_it · 2011-05-17T11:04:16.921Z · LW(p) · GW(p)

Just a few days ago, I think I figured out the way of formalizing this stuff

Uhh, so why don't I know about it? Could you send an email to me or to the list?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-17T11:39:25.458Z · LW(p) · GW(p)

Because it's awfully trivial and it's not easy to locate all the pieces of motivation and application that would make anyone enthusiastic about this. Like the fact that action and utility are arbitrary mathematical structures in ADT and not just integer outputs of programs.

Replies from: cousin_it
comment by cousin_it · 2011-05-17T11:43:08.259Z · LW(p) · GW(p)

Hm, I don't see any trivial way of understanding observational knowledge except by treating it as part of the input-output map as UDT suggests. So if your idea is different, I'm still asking you to write it up.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-17T13:23:46.704Z · LW(p) · GW(p)

In one sentence: Agent sees the world from within a logical theory in which observations are nonlogical symbols. I'll of course try to write this up in time.

comment by FAWS · 2011-05-16T18:59:04.847Z · LW(p) · GW(p)

(the best I could manage was this post, for which quite possibly nobody understood the motivation)

I'm reasonably sure that's because the problem you see doesn't actually exist in your example, and you only think it does because you misapplied UDT. If you think this is important, why did you never get back to our discussion there as you promised? That might result either in a better understanding of why this is so difficult for other people to grasp (if I was misunderstanding you or making a non-obvious mistake), or in a dissolution of the apparent problem or examples where it actually comes up (if I was right).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-16T19:21:33.257Z · LW(p) · GW(p)

For some reason, I find it difficult to reason about these problems, and have never acquired a facility of easily seeing them all the way through, so it's hard work for me to follow these discussions. I expect I was not making an error in understanding the problem the way it was intended, and figuring out the details of your way of parsing the problem was not a priority.

why did you never get back to our discussion there as you promised?

It feels emotionally difficult to terminate a technical discussion (where all participants invested nontrivial effort), while postponing it for a short time can be necessary, in which case there is an impulse to signal to others the lack of intention to actually stop the discussion, to signal the temporary nature of the present pause (but then, motivation to continue evaporates or gets revoked on reflection). I'll try to keep in mind that making promises for continuing the discussion is a bad, no good way of communicating this (it happened recently again in a discussion with David Gerard about merits of wiki-managing policies; I edited out the promise in a few hours).

At this point, if you feel that you have a useful piece of knowledge which our discussion failed to communicate, I can only offer you a suggestion to write up your position as a (more self-contained) discussion post.

comment by lukeprog · 2011-05-16T20:00:44.628Z · LW(p) · GW(p)

Yes. There are tons of open, difficult rationality/philosophical problems. If they haven't 'cropped up in a while' on LW, it's because those who are thinking about them aren't taking the time to write about them. That's quite understandable because writing takes a lot of time.

However, I tend to think that there are enough very smart rationalists on LW that if we can cogently bring everybody up to the cutting edge and then explain what the open problems are, progress will be made.

That's really where I'm going with my metaethics sequence. I don't have the hard problems of metaethics solved; only the easy ones. I'm hoping that bringing everybody up to the cutting edge and explaining what the open problems are will launch discussions that lead to incremental progress.

comment by Will_Newsome · 2011-05-16T17:27:05.408Z · LW(p) · GW(p)

I have a vague intuition there's something interesting that could happen with self-modifying AIs with creator and successor states knowably running on error-prone hardware while having pseudo-universal hypothesis generators that will of course notice the possibility of values corruption. I guess I'm still rooting for the 'infinite reflection = contextually perfect morality' deus ex machina. Utility functions as they're normally idealized for imagining superintelligence behavior like in Basic AI Drives look an awful lot like self-protecting beliefs, which feels more and more decision theoretically wrong as time goes on. I trust the applicability of the symbols of expected utility theory less over time and trust common beliefs about the automatic implications of putting those symbols in a seed AI even less than that. Am I alone here?

The reason I am not attempting to tackle those problems is because I hang out with Steve Rayhawk and assume that if I was going to make any progress I'd have to be roughly as smart and knowledgeable as Steve Rayhawk, 'cuz if he hasn't solved something yet that means I'd have to be smarter than him to solve it. I subconsciously intuit that as impossible so I try to specialize in pulling on less mathy yarns instead, which is actually a lot more possible than I'd anticipated but took me a long time to get passable at.

Replies from: timtyler
comment by timtyler · 2011-05-16T19:22:19.843Z · LW(p) · GW(p)

I trust the applicability of the symbols of expected utility theory less over time and trust common beliefs about the automatic implications of putting those symbols in a seed AI even less than that. Am I alone here?

The current theory is all fine - until you want to calculate utility based on something other than expected sensory input data. Then the current theory doesn't work very well at all. The problem is that we don't yet know how to code: "not what you are seeing, how the world really is" in a machine-readable format.
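To illustrate the gap with a toy sketch (my own construction, with invented actions and costs, nothing from the thread): an agent whose utility is a function of its sensor readings will happily tamper with the sensor, while the utility function we actually want references a world-state variable the agent cannot read directly.

```python
# A toy contrast between utility over sensory data and utility over world
# state (illustrative only; the names and numbers are made up).
from dataclasses import dataclass

@dataclass
class Outcome:
    room_is_clean: bool     # how the world really is
    room_looks_clean: bool  # what the agent's sensors report

ACTIONS = {
    "clean_room":    Outcome(room_is_clean=True,  room_looks_clean=True),
    "tamper_camera": Outcome(room_is_clean=False, room_looks_clean=True),
    "do_nothing":    Outcome(room_is_clean=False, room_looks_clean=False),
}
COSTS = {"clean_room": 0.5, "tamper_camera": 0.1, "do_nothing": 0.0}

# Easy to code: utility as a function of sensory input.
def sensory_utility(o: Outcome) -> float:
    return 1.0 if o.room_looks_clean else 0.0

# What we actually want: utility as a function of the world state. The
# hard part, which this toy sweeps under the rug, is that a real agent
# never gets to evaluate room_is_clean directly - only its sensors.
def world_utility(o: Outcome) -> float:
    return 1.0 if o.room_is_clean else 0.0

for name, util in [("sensory", sensory_utility), ("world", world_utility)]:
    best = max(ACTIONS, key=lambda a: util(ACTIONS[a]) - COSTS[a])
    print(f"{name} utility prefers: {best}")
# sensory utility prefers: tamper_camera
# world utility prefers: clean_room
```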

comment by John_Maxwell (John_Maxwell_IV) · 2012-08-04T20:17:20.168Z · LW(p) · GW(p)

I've actually considered writing a post titled "Drowning in Rationality Problems" to complain about how little we still know about the theory of rationality and how few LWers seem to be working actively on the subject, but I don't know if that's a good way to motivate people.

I don't think it's necessary to frame the large number of problems you identify as a case of "drowning". An alternative framing might be one about unexplored territory, something like "rationality is fertile ground for intellectual types who wish to acquire status by solving problems".

As for why more people aren't working on them, it could come down to simple herd effects, or something like that.

comment by XiXiDu · 2011-05-16T13:05:52.512Z · LW(p) · GW(p)

Could someone point me to an explanation of what is meant by 'logical uncertainty'?

I've actually considered writing a post titled "Drowning in Rationality Problems" to complain about how little we still know about the theory of rationality...

This sounds incredibly interesting; I would love to read it!

Replies from: cousin_it
comment by cousin_it · 2011-05-16T13:45:16.667Z · LW(p) · GW(p)

Logical uncertainty is uncertainty about the unknown outputs of known computations. For example, if you have a program for computing the digits of pi but don't have enough time to run it, you have logical uncertainty about the billionth digit. You can express it with probabilities or maybe use some other representation. The mystery is how to formulate a decision process that makes provably "nice" decisions under logical uncertainty, and to precisely define the meaning of "nice".
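As a concrete illustration (a minimal sketch of my own, not part of the original comment), one can represent logical uncertainty as a probability distribution over the output of a fully specified computation, use it to price a bet, and then resolve it by actually running the computation:

```python
# Logical uncertainty about a known computation: the last hex digit of
# SHA-256("pi") is fully determined, but until we run the hash we can
# only assign a distribution over its possible values.
import hashlib
from fractions import Fraction

# A uniform prior over the 16 hex digits is one crude way to represent
# the uncertainty (nothing here says it is the *right* prior).
prior = {d: Fraction(1, 16) for d in "0123456789abcdef"}

# Expected payoff of a bet that pays 10 units if the digit is "7".
def payoff(digit: str) -> int:
    return 10 if digit == "7" else 0

expected = sum(p * payoff(d) for d, p in prior.items())
print("expected payoff under logical uncertainty:", expected)  # 5/8

# Running the computation resolves the uncertainty completely.
actual = hashlib.sha256(b"pi").hexdigest()[-1]
print("actual digit:", actual, "| actual payoff:", payoff(actual))
```

The open problem cousin_it points at is what replaces the uniform prior: how a resource-bounded reasoner should assign and update such probabilities so that its decisions are provably "nice".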

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2011-05-17T19:24:44.112Z · LW(p) · GW(p)

So basically the stuff you don't know because you don't have logical omniscience.

comment by lukeprog · 2011-05-15T15:36:10.236Z · LW(p) · GW(p)

I'm certainly trying to apply rationality to solve big important problems, but that is taking me a while. About half of my posts so far have been written (sneakily) for the purpose of later calling back to them when making progress in metaethics and CEV.

Replies from: David_Gerard
comment by David_Gerard · 2011-05-15T19:32:01.744Z · LW(p) · GW(p)

That's why they're good, then: they have a real problem behind them.

comment by gjm · 2011-05-15T11:52:34.232Z · LW(p) · GW(p)

I share Phil's perception that LW is devoting more time to what you might call "practical rationality in everyday life" and less to the theory of rationality, and his feeling that that's less interesting (albeit perhaps more useful).

I share everyone else's opinion that Phil's terminology is bizarre.

comment by lucidfox · 2011-05-15T05:19:59.894Z · LW(p) · GW(p)

My main concern about Less Wrong recently has been the proliferation of posts related to the Singularity and HP: MoR, which I frankly don't care about. For a site that encourages people to think outside the box, it's at times biased against unorthodox opinions, or at least, I get downvoted for arguing against the Singularity and immortality and for pointing out flaws in MoR. At these times the site seems cultish in a way that makes me feel uncomfortable.

I was drawn here both by Eliezer's meta-rationality posts and by discussions about quantum mechanics, philosophy of mathematics, game theory, and such. However, recently I've been growing increasingly skeptical about the dissociation between LW's stated goals and the actual behavior I observe here.

Replies from: saturn, PhilGoetz
comment by saturn · 2011-05-15T07:53:47.815Z · LW(p) · GW(p)

If you want to talk about quantum mechanics, philosophy of mathematics, game theory, and such, why not start threads about those topics instead of arguing against the Singularity and immortality and pointing out flaws in MoR—things you don't even care about?

comment by PhilGoetz · 2011-05-15T05:33:21.118Z · LW(p) · GW(p)

I'm confused - you perceive a dissociation, yet you seem to agree with the emphasis on discussions of rationality. If we want LW to go in opposite directions, and both come to the conclusion that LW is going in the wrong direction, is there a conservation-of-evidence problem here? What would it take for someone to believe LW is going in the right direction?

Replies from: lucidfox
comment by lucidfox · 2011-05-15T07:33:52.627Z · LW(p) · GW(p)

I think there's a false dichotomy here.

You want LW to feature more discussions of applied rationality, of practical uses of the mental skills sharpened here.

I want LW to feature more discussions about abstract matters, about the framework of rationality and the means to sharpen said skills.

The two aren't necessarily mutually exclusive. One doesn't have to arrive at the expense of the other. What I don't want LW to become is a Singularity cult or a personality cult, or really any kind of cult - a community where anyone not sharing the group's mainstream opinions is considered wrong by default. I'm not saying LW is or has become that - generally, I found that at least discussions of non-mainstream opinions are welcome as long as they're backed by valid arguments - but I do see signs that it can possibly turn that way.

comment by EchoingHorror · 2011-05-15T04:38:47.896Z · LW(p) · GW(p)

In my vision for the future of the rationalist community, most members are interested in the core of meta-rationality and anti-akrasia and each is interested in a set of peripheral topics (various ways of putting rationality into practice, problems like Sleeping Beauty, trading tutoring, practicing skills, helping the community in practical ways, study groups, social meetings with rationalists, etc.). Some fringe members will be involved in the peripherals and rationality applications but not theory, but they probably won't last long. LW is the core, and will be based around meta-rationality. Meetup groups will form around what local members are interested in, and start talking about those things online, maybe in their Google groups, maybe on their own websites, but probably somewhere cozier and not part of current LW, so the local communities can build their relationships. Meetup groups in different areas talking about the same things will merge their online discussions when they want to, possibly as part of LW.

But perhaps a lot of people would rather talk about rationality than use it.

It's the easy thing to do. Meetups might be useful to get people to observe more evidence and encounter new problems, encouraging the use of rationality.

Or we could skip straight to that by creating subforums for specific topics like Probability, Values, AI, and Singularity for LW users, inviting more posts on the topics you're missing.

comment by steven0461 · 2011-05-16T20:08:52.281Z · LW(p) · GW(p)

I wish there were more posts that tried to integrate math with verbal intuitions, relative to posts that are either all the way on the math side or all the way on the words side.

comment by timtyler · 2011-05-16T16:54:46.263Z · LW(p) · GW(p)

It seems rather like Eliezer Yudkowsky's blog without (much) Eliezer Yudkowsky.

Which is unfortunate - if understandable.

comment by Kutta · 2011-05-15T15:10:13.151Z · LW(p) · GW(p)

I think that less Singularity discussion is the result of the related topics having already been discussed many times over. There hasn't been a new idea in AI and decision theory in a while. I'm not implying, though, that we've finished these topics once and for all. There is certainly a huge amount of stuff to be discovered; it's just that we don't seem to happen upon much of it these days.

comment by wedrifid · 2011-05-15T14:05:27.437Z · LW(p) · GW(p)

Quality is a bigger concern than subject matter. But that is easily solved by just reading posts, mainly posts by Luke. :)

comment by David_Gerard · 2011-05-15T19:30:30.574Z · LW(p) · GW(p)

Is the old concern with values, artificial intelligence, and the Singularity something for LW to grow out of?

"A community blog devoted to refining the art of human rationality" suggests those aren't actually the focus, and that when LW grows up it won't be about AI and the Singularity.

I do agree that some more application would be good, but that tends to go in discussion if at all. Better there than nowhere.

comment by Goobahman · 2011-05-16T00:22:09.349Z · LW(p) · GW(p)

One of the big things about improving rationality is 'Getting Crap Done', and I think the problem is that for an online community wherein most of us are anonymous, there's not a lot on here to help us with that.

Now this site has helped me conceptualize and visualize in a way that I didn't realize was possible. It helped me to see things as they are, and how things could be. The problem is that whilst I'm flying ahead in terms of vision, I still sleep in and get to work late, I still play world of warcraft over going to the local toastmasters meetup, I still haven't opened up my online trading account.

It's like I know what to do, but in terms of generating the willpower and commitment and motivation to do it, this site just becomes another of many shiny distractions. The thing is, as an online community, I'm not sure how much you could remedy that.

The meetups, however, are possibly the best thing to come out of this site so far. They inspired me to start my own, which has been a great success even after a somewhat rocky start, and they provide a way for those who feel isolated to find a network of support.

Just my two humble cents.

comment by MartinB · 2011-05-15T12:24:26.778Z · LW(p) · GW(p)

Is not 'how to be rational, how to avoid akrasia' how one puts 'rationality into practice'? Without hard-working producers there is no Singularity.

+1 for suitable filtering, or a decent subclustering that keeps everyone happy

comment by [deleted] · 2011-05-16T19:56:36.151Z · LW(p) · GW(p)

I would bet that we'll see a resurgence of discussion on decision theory, anthropics, etc. in the next few months. If I'm as typical a user as I think I am, then there are a dozen or so people who were largely drawn to LessWrong by those topics, but who stayed silent as they worked on leveling up. lukeprog's recent posts will probably accelerate that process.

comment by wedrifid · 2011-05-15T13:18:58.758Z · LW(p) · GW(p)

More and more, LessWrong's posts are meta-rationality posts, about how to be rational, how to avoid akrasia, and so on. This is probably the intended purpose of the site. But they're starting to bore me.

Agree. The part that makes them boring is that the 'how to' stuff is, basically, rubbish. There are other communities dedicated to in-the-moment productivity guides, by people who know far more about the subject - albeit people who maybe don't two-box and are perhaps dedicating all their 'productivity' towards 'successful' but ultimately not very important goals.