Where do I most obviously still need to say "oops"?

post by lukeprog · 2011-11-22T01:48:16.486Z · LW · GW · Legacy · 62 comments

Eliezer once told me:

The most common error I see on Less Wrong is the failure to say "Oops."

If there's one rationality skill I like to think I'm pretty good at, it's this one: the skill of saying "Oops."

In fact, I say "Oops, fixed, thanks" so often on Less Wrong that I once suggested I should have a shortcut for it: "OFT."

And I don't just say "oops" for typos and mistakes in tone, but also for mistakes in my facts and arguments.

It's not that I say "oops" every time I'm challenged at length, either. I don't say "oops" until I actually think I was significantly wrong; otherwise, I stand my ground and ask for better counter-arguments.

But I'm sure I can improve.

Wanna help me debug my own mind?

Tell me: On which issues do you think I most obviously still need to say "Oops"?


comment by prase · 2011-11-22T12:31:12.055Z · LW(p) · GW(p)

Well, since you have asked for feedback, I may provide some, although probably not of the kind requested by this post.

Your repeated requests for feedback accompanied by links proving that you are able to correct your mistakes and like the corrections ... create an impression of some heavy signalling going on. Namely, it's one of the norms here - and probably also among the readers of your blog - to be able to accept constructive criticism, to avoid confirmation bias, to respect a lot of debate rules etc. Signalling adherence to those norms increases one's status. But then, if the signalling part is too apparent, it naturally leads to suspicion of hypocrisy and distrust on the meta-level.

Of course this all is obvious. I write it only because I am not sure whether you realise that the way you ask for feedback may appear to fall into this category, or actually belong to it. Substituting the usual pride in never being wrong with pride in not being the sort of person who takes pride in never admitting wrongness is a useful mind hack, but it's still a goal different from being right in the first place.

Not that I could find a single instance where your signalling goals prevented you from finding the truth efficiently. You are certainly not the usual open-mindedness signaller. If there is a danger, it is certainly subtle. It's just that you don't need to ask for feedback this way. Valuable criticism is usually spontaneous: when people detect a mistake, they say so (unless the local norms discourage such reactions, but that's certainly not the case on LW). On the other hand, when requested to do so, people start to hastily search for something to criticise, and either find an unimportant detail or even construct a non-existent problem packed in a cloak of rationalisations, or they fail to find anything and produce an equally useless you-are-so-awesome-that-I-can't-find-a-single-problem response. (Not that it isn't pleasant to hear the latter.)

In short, if you make a mistake, don't be afraid we'll keep it secret.

comment by shokwave · 2011-11-22T06:11:47.191Z · LW(p) · GW(p)

On which issues do you think I most obviously still need to say "Oops"?

I swear I'm not just going meta to look good, but...

This issue, the issue of getting feedback, debugging your mind, improving yourself, your posts, and your style. What you are doing is amazing already; the feedback we can give takes you from the 99th percentile to the top half of the 99th percentile. But you are very nearly at the point where we can't help you anymore, because we don't know anything that you don't already know and use. Between this and the Google survey you ought to get decent data, but beyond these ...

Well, I feel like the biggest difference between you and Eliezer (and I speak of how I perceive you on this site), apart from subject matter (you obviously can't rehash what he's written, and he hasn't the research time and knowledge to do neuroscience), is your levels of confidence. Eliezer is self-confident; you ask us for tips to improve. That affects your appearance on the site. So I think you should make your feedback requests stealthier (not scale them down). Maybe in person, in private message, or in comments instead of posts. Maybe focus on collecting metrics like upvotes or the number of dissenting vs. approving comments.

Is this something that fits with your observations?

Replies from: Cthulhoo, lukeprog
comment by Cthulhoo · 2011-11-22T08:40:10.461Z · LW(p) · GW(p)

Eliezer is self-confident; you ask us for tips to improve. That affects your appearance on the site.

I agree with this observation, but I'm not sure it's a bad thing. Eliezer is often a bit out of reach to be a proper model: I often have the feeling that I will need countless years to reach his level, if I ever manage to reach it. I can instead relate better to Luke's quest of going from Padawan to Jedi, and I don't mind seeing this kind of behavior from him, at least until he completes his training and becomes a full-time member of the Jedi Council.

comment by lukeprog · 2011-11-22T08:33:56.340Z · LW(p) · GW(p)

I feel like the biggest difference between you and Eliezer [is] your levels of confidence.

Yes. Also, our respect for fashion.

I've thought about making my requests for feedback stealthier, but... do I really want to appear more confident than I am? The appearance of overconfidence helps in dating and probably in executive directoring, but there are costs — such as outsiders dismissing me (and SIAI) as falling prey to timelines optimism.

Replies from: shokwave
comment by shokwave · 2011-11-23T05:17:22.533Z · LW(p) · GW(p)

do I really want to appear more confident than I am?

No. You should appear more confident than you currently appear, but you should appear that confident because you are that confident.

comment by [deleted] · 2011-11-22T11:12:41.135Z · LW(p) · GW(p)

I don't actually expect you to get the best feedback by pinging the Discussion board. In addition to reading your content, I've probably interacted with you in person more than most of LW, and I'm still not sure I am qualified to deliver the kind of feedback you'd actually find useful.

It doesn't hurt to ask, of course. But the best feedback I've ever gotten was from people who interact with me daily; people that see me at my worst, see the behavior patterns I'm blind to, and see the inconsistencies in my reactions to different situations and people.

LW will offer laser-focused feedback on your intellectual views, perhaps. But if you want to hear difficult truths that will really help you, it seems like you should turn to those closest to you.

One danger is that you reach out for feedback like this, receive only minor recommendations, and then subconsciously feel as though you've made the motions necessary to be justified in thinking that you're on the right track. I don't expect that this is what you're doing, but I've made simpler mistakes before.

comment by SilasBarta · 2011-11-23T19:19:16.641Z · LW(p) · GW(p)

"Oops, it looks like half the threads in the discussion section are started by me."

Replies from: None
comment by [deleted] · 2011-11-23T23:54:04.650Z · LW(p) · GW(p)

.

Replies from: Davorak
comment by Davorak · 2011-11-24T01:12:10.121Z · LW(p) · GW(p)

It is a bad thing if it discourages people you want posting from posting, which could happen if Luke came off as dominant and territorial. I do not think Luke appears dominant and territorial, so this has not registered as a problem to me.

comment by Mercurial · 2011-11-22T04:10:13.824Z · LW(p) · GW(p)

Luke, I don't feel I know you well enough to help you with your quest to locate any lingering wrongness in you. From what I've seen of your writing and what I've heard from people who have met you, you're doing a really amazing job of walking the rationalist talk. The fact that you even ask the community here this question is quite a testament to your taking this stuff seriously and actually using it. I think I should be asking you this question!

But your asking this makes me think of something. If you, or Eliezer, or someone else of that calibre of rational competence pointed out to me an area where I need to say "Oops" (or otherwise direct rational attention), I'd like to think that I'd take that seriously. I suspect I'd take it even more seriously if there were some avenue for me to ask such people for that help the way you've asked the whole Less Wrong community here.

So I wonder: Might it be a good move to set up something like that? We might not yet have a good metric in place for what constitutes someone's degree of rationality, but I'd imagine if two or three black-belt Bayesians all agree that someone is wrong about something, that should still count for something and is probably a reasonable direction to consider in the absence of a more objective metric. So if there were something set up where people could actively ask for that feedback from known people of skilled rationality (or people designated by those with known impressive levels of rationality), I wonder if that would be useful. What do you think? Or would that just be redundant with respect to the Rationality Dojos you mentioned are coming?

Replies from: lukeprog
comment by lukeprog · 2011-11-22T05:38:00.851Z · LW(p) · GW(p)

If this could be arranged in the future, we'd want to involve top-level non-SIAI rationalists like Julia Galef to avoid results dictated by the SIAI memeplex rather than by rationality skills. (By "top-level" I don't mean "popular" but "seriously skilled in rationality".)

Replies from: XiXiDu, Mercurial
comment by XiXiDu · 2011-11-22T09:42:15.247Z · LW(p) · GW(p)

The questions that I really care about, the important questions, are either not solved yet or it is everyone versus the "SIAI memeplex".

All of the following people would disagree with the Singularity Institute on some major issues: Douglas Hofstadter, Steven Pinker, Massimo Pigliucci, Greg Egan, Holden Karnofsky, Robin Hanson, John Baez, Katja Grace, Ben Goertzel...just to name a few. Even your top donor thinks that the Seasteading Institute deserves more money than the Singularity Institute. And in the end they would either be ignored, downvoted, called stupid or their disagreement a result of motivated cognition.

Replies from: wedrifid
comment by wedrifid · 2011-11-22T11:43:08.477Z · LW(p) · GW(p)

Replies from: XiXiDu
comment by XiXiDu · 2011-11-22T11:59:28.240Z · LW(p) · GW(p)

...my downvotes of Ben Goertzel and suggestions that he was stupid are not based on motivated cognition...

Never said that.

The impressiveness of his name (that thing which causes you to refer to him as an authority)...

WTF? I don't think that he has any authority. This essay pretty much shows that he is a dreamer. Two quotes from the essay:

“Of course, this faith placed in me and my team by strangers was flattering. But I felt it was largely justified. We really did have a better idea about how to make computers think. We really did know how to predict the markets using the news.”

or

“We AI folk were talking so enthusiastically, even the businesspeople in the company were starting to get excited. This AI engine that had been absorbing so much time and money, now it was about to bear fruit and burst forth upon the world!”

Pfft.

Replies from: wedrifid
comment by wedrifid · 2011-11-22T12:56:00.976Z · LW(p) · GW(p)

Replies from: XiXiDu, XiXiDu, lessdazed
comment by XiXiDu · 2011-11-22T13:20:44.648Z · LW(p) · GW(p)

By listing people who would be downvoted as though it is a significant fact it implies that they have status such that they deserve to be mentioned in that context.

The suggestion was to get feedback from experts on matters of rationality. I listed some people and listed some possible reactions. This doesn't imply that I believe that those people are actually worthy of being asked or that the reactions would be unjustified.

There are various problems: 1) there are not many people who are on the same level as LW; 2) a lot of those people who are not part of the "SIAI memeplex", as Luke called it, actually disagree on important issues; 3) there is a lack of critical examination of, and debate with, those who are critical of SI key positions. Especially #3 tends to be problematic, as disagreement is often met with the reactions mentioned above; those reactions may well be justified, but they are unhelpful.

The fact that someone is actually worthy of downvotes has rather significant implications on whether or not motivated cognition must be involved in the downvoting.

I never said that there is any motivated cognition involved in the downvoting of people on LW. I said that people on LW sometimes claim that their opponents' disagreement is caused by motivated cognition, i.e. that people who disagree with LW are often disagreeing due to motivated cognition.

To summarize: a place to receive feedback on issues of rationality from outsiders wouldn't work out in my opinion as long as the people on LW are not willing to dissolve the disagreement but rather tend to ignore or simply downvote their opponents because they are perceived to be stupid etc. (which might very well be justified).

Replies from: daenerys
comment by daenerys · 2011-11-23T05:40:53.855Z · LW(p) · GW(p)

Upvoting and agreeing.

This is more about LW in general than about Luke personally, but I do feel that this site has a strong "cult" feel to it. I think there are a lot of things that rational people might disagree with LW consensus on, and instead of welcoming non-LW ideas, people say that they aren't rational.

For example, I think that if 500 people all thought about AI and FAI rationally and independently, they wouldn't all just happen to come to the exact same conclusions considering that there are so many unknowns. But the group-think here is such that anyone with a differing opinion is kept out. I am actually thinking about making a post on this, but we'll see how busy I am.

Replies from: lessdazed, daenerys
comment by lessdazed · 2011-11-23T06:51:16.637Z · LW(p) · GW(p)

people say that they aren't [censored].

That would be an awful group habit. Narrowness, specificity, is a virtue. Show me something written on the subject by Ben Goertzel, and I'll show you specific logical fallacies.

Replies from: prase
comment by prase · 2011-11-23T10:41:47.704Z · LW(p) · GW(p)

I like your censorship.

comment by daenerys · 2011-11-23T06:13:38.204Z · LW(p) · GW(p)

And my point is only strengthened by the fact that anytime I use LW-speak and say things that agree with LW consensus they are quickly upvoted, but any time that I say things that are non-LW consensus, they are quickly downvoted (see above).

Replies from: ArisKatsaris, antigonus
comment by ArisKatsaris · 2011-11-23T14:14:15.476Z · LW(p) · GW(p)

There exist hundreds of LW readers, and downvotes don't mean that the majority disapproves; they mean that at least one of those hundreds of readers disapproved with a downvote. (Or, more precisely, that n+1 readers disapproved with a downvote where n readers approved with an upvote.)

As such, arguments of the style "I got downvoted for speaking against group consensus, therefore LessWrong is exhibiting groupthink" don't seem that convincing to me. Even if that was the only reason you got downvoted, even if it was a completely unjust downvote -- it just means that one reader was exhibiting groupthink behaviour and downvoted you unjustly for going against the norm.
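
(To make the karma arithmetic concrete, here is a minimal sketch in Python; the vote counts are invented for illustration, since the site only displays the net score.)

    # Minimal sketch of net-score arithmetic; all vote counts are invented.
    # The visible score is upvotes minus downvotes, so the same score is
    # consistent with very different numbers of actual votes.
    def net_score(upvotes: int, downvotes: int) -> int:
        return upvotes - downvotes

    for up, down in [(0, 1), (20, 21), (0, 0), (20, 20)]:
        print(f"{up:2d} up, {down:2d} down -> visible score {net_score(up, down):+d}")

    # Output:
    #  0 up,  1 down -> visible score -1
    # 20 up, 21 down -> visible score -1  (n+1 downvotes vs. n upvotes)
    #  0 up,  0 down -> visible score +0  (nobody voted)
    # 20 up, 20 down -> visible score +0  (heavily contested, same visible score)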

Replies from: daenerys
comment by daenerys · 2011-11-23T16:50:25.288Z · LW(p) · GW(p)

First, I want to agree that not every downvote is due to being against group consensus. Sometimes posts (including my own) just aren't well thought out.

I also agree that a downvote only represents one person and not the whole of the LW community. In fact I've thought it would be interesting to be able to see how many upvotes and downvotes each comment has. There's a massive difference between a comment with a score of 0 (or any other score) that got there because 20 people upvoted it and 20 people downvoted it, versus a score of 0 because nobody voted for it at all. The first kind of comment is more likely to be interesting and controversial.

However, I would argue that it doesn't take many people to affect a culture. If there were no moderation, how many trolls do you think it would take to significantly lower the LW quality?

Let's back up and take a macro view (something LWers seem to dislike, but which I think is often necessary). If posting a non-consensus view tends to result in a negative or 0 karma, and posting a consensus view tends to result in positive karma, then it is likely that people are going to post consensus views much more often than non-consensus views. Even if no one admits it consciously, I would bet the karma system has at least a subconscious effect on what people post.

Because people post consensus views more often, people tend to agree with the consensus more. For example if consensus view has 10 good posts/comments supporting it and nonconsensus view has 0-1 good posts/comments supporting it, then you have seen more and better arguments for the consensus view, and are more likely to believe that.

As a site leans more and more toward a single worldview, people with differing worldviews start actively staying away. If I was a rationalist who rationally thought that the Singularity and FAI weren't the most pressing concerns right now, the chances are low that I would want to join this site. Without having a wide range of views to moderate opinion, consensus becomes like a runaway train becoming progressively more one-sided.
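
(A toy illustration of that runaway loop, in Python; every number is invented, so this sketches the mechanism rather than measuring LW.)

    import random

    # Toy model of the karma feedback loop described above. Each poster has
    # some probability of voicing a non-consensus view. Dissenting posts get
    # downvoted and consensus posts get upvoted, and both outcomes nudge the
    # poster's future behavior toward the consensus.
    random.seed(0)
    p_dissent = [random.uniform(0.2, 0.8) for _ in range(200)]

    for round_num in range(1, 11):
        for i, p in enumerate(p_dissent):
            if random.random() < p:                # posts a dissenting view
                p_dissent[i] = max(0.0, p - 0.05)  # downvotes discourage repeats
            else:                                  # posts the consensus view
                p_dissent[i] = max(0.0, p - 0.02)  # upvotes reinforce the habit
        avg = sum(p_dissent) / len(p_dissent)
        print(f"round {round_num:2d}: average dissent probability = {avg:.2f}")

    # The average falls every round: even mild, one-sided vote pressure drives
    # the visible mix of opinions toward consensus, i.e. the "runaway train".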

From personal experience, I've known pretty much since I joined this site that it's not the sort of place I'd want to stay around. I'll stay around for a month or two, learn what I can from here, and go my merry way. The reason is not random downvotes, which I really don't care about, but rather the resulting culture, which I do in fact consider to be an echo chamber that has never learned to argue politely. (In the article I just linked to, I would say that LW is in "Draft 2" stage.)

Replies from: ArisKatsaris, Vladimir_Nesov, None, lessdazed
comment by ArisKatsaris · 2011-11-23T17:43:45.149Z · LW(p) · GW(p)

If posting a non-consensus view tends to result in a negative or 0 karma, and posting a consensus view tends to result in positive karma, then it is likely that people are going to post consensus views much more often than non-consensus views.

I likewise object to those who mindlessly downvote or upvote. On my part I think nonsensical upvotes are worse than nonsensical downvotes, though, as it causes karma inflation. There exist people who'll upvote people just for saying "Hi", and I'm guessing they upvote every other comment they see. This rewards quantity instead of quality -- which serves to reduce quality.

Without having a wide range of views to moderate opinion, consensus becomes like a runaway train becoming progressively more one-sided.

This is largely what the article on Evaporative cooling of group beliefs is about. But I don't know how to fix this perceived problem, except to encourage you to speak and post about those non-consensus positions.

comment by Vladimir_Nesov · 2011-11-24T00:53:29.503Z · LW(p) · GW(p)

If I was a rationalist who rationally thought that [...]

Bayes, sister!

Replies from: steven0461
comment by steven0461 · 2011-11-24T01:37:41.581Z · LW(p) · GW(p)

Aumann to that, brother.

comment by [deleted] · 2011-11-24T00:45:48.896Z · LW(p) · GW(p)

.

comment by lessdazed · 2011-11-23T17:05:51.868Z · LW(p) · GW(p)

a macro view (something LWers seem to dislike

Why do you say?

Replies from: daenerys
comment by daenerys · 2011-11-23T17:51:21.050Z · LW(p) · GW(p)

a macro view (something LWers seem to dislike

Why do you say?

The first time I noticed this was in the recent PUA flamewars where everyone tried talking about very specific instances of a single (as in "one" not as in "not dating") guy picking up a single female, and the net utility of these specific interactions, etc.

No one ever tried to say, "Hey, let's take a step back and look at what the culture as a whole does". My only participation in those flamewars (besides linking to some relevant scientific research I'd stumbled across) was to bring up the macro view. I'll admit my posting of it wasn't the best. I'm sure someone else could make the point better. But the thing is, no one did. I observed the flamewars from afar, waiting for someone to bring up what I considered to be a rather obvious point (I had 0 desire to participate), but the focus consistently remained on the specifics.

Of course, I'll admit that this is just one anecdote, and was taken from a flamewar that most of the moderate voices of reason had already left. However, it was useful in that it helped me see a general pattern, and since then I've always seemed to notice that LW discussions tend to focus on "drilling down" and never really on "panning out": i.e., let's get an example of an example and discuss that, rather than take a step back and look at an overall culture or system.

I was going to find some comments that support this, but it is so pervasive that instead I will say: click on Recent Comments. How many times do you see people discussing specific examples of a theory? How many times do you see people discussing a macro (culture, system, etc.) view? I will bet there are at least three times as many of the former as the latter (excepting this particular discussion).

That's ok. It's the easiest way to apply rationality, and I'll admit that. Really, "something LWers seem to dislike" was a throwaway comment. An aside. I wouldn't even bother supporting it except that (as with pretty much every "I feel X" comment) specific examples were asked for as to why.

Replies from: lessdazed
comment by lessdazed · 2011-11-23T19:30:46.029Z · LW(p) · GW(p)

No one ever tried to say, "Hey, let's take a step back and look at what the culture as a whole does".

  1. That would have involved conjunctive predictions about such a society that I'm not confidently able to make, or worse, assuming that everything else stayed constant when discussing counterfactual scenarios.

  2. As I said there, I think you overestimated the extent to which people's goals overlapped, and terms like "general egalitarianism" hid that (first, because 'general' allows everyone to imagine that the term describes things occurring with their preferred number of exceptions to accommodate differences, and second because 'egalitarianism' allows everyone to imagine that their preferred level of influence is applied to make outcomes equal and their preferred amount of unconstraint allows greatly unequal outcomes).

  3. "the focus consistently remained on the specifics." I saw a main purpose of your arguing that there are great areas of agreement at the beginning a meta comment like yours to be to establish a necessary, or at least very helpful, groundwork for the possibility of constructively engaging at the meta level. Assuming you agree, you might agree with my drilling down to specific scenarios as being the most constructive thing granting my opinion that nothing close to such a consensus exists.

    A societal-level discussion requires agreement about many related things in a dynamic system, and would be great provided people agreed on values. However, some people believe that, e.g., there is no difference between manipulation and persuasion (waves), or that consciously producing artificial ('faked') social signals constitutes rape when someone has a relationship with the signaler based on his signals (or even if they in fact weren't a necessary condition for the relationship? I'm not sure). I don't think it would be productive to go from there to speculation about types of society in which each individual has the opportunity to bend their (unfalsifiable) predictions about what would happen to conform to their political view.

    I understand you think it likely that the negative utility of PUA at a societal level might swamp even the local utility it produces and render that argument moot, but I really think people had thought of that and rejected it. We were really stuck on the lower level, and I don't think it at all likely that things would have been better at a higher one.

  4. As I recall, others did bring up the macro view, though disproving "No one ever tried to say," might require sifting through over a thousand comments. Something like "three times as many," as you said, might be accurate to reflect the fewer meta-level comments, but if true that would make the PUA example an anomaly.

argue politely

I think this can become a lost purpose to some extent, as the purposes of arguing politely are to ensure that content rather than tone is the focus, that people don't become angry and think unclearly, etc. Autistic spectrum people who primarily care about ideas don't need to care about tone as much, particularly when replying to others whose personalities they already know.

I am home for Thanksgiving; just the other day my family was making sandwiches. We were pulling ingredients out of the cabinet, setting the table, etc., and my mother was taking things out of the refrigerator. She began putting onions, meat, etc. on the counter, but what drew the eye was a large clear freezer bag containing several pounds of sliced tomatoes, and while taking things out of the fridge she asked me, "What would you like to put on your roast beef sandwich?" "Tomatoes," I replied. "They're on the counter," she said. "I know," I said. She looked at me. I looked at her. Oops, I thought to myself, I'm doing it again. ;-)

The point being that I'm happy with the way people sometimes rudely disagree with me here. I'll concede a great deal: the way I'm used to speaking with others may not be appropriate for those I know nothing about, and my perhaps too-blunt mode of speech, even when it is appropriate, may instill poor habits that I embody when it isn't appropriate. But one can't strongly conclude things about the content of discourse from its form.

The problem of being an echo chamber is only tangentially related to politeness.

Because people post consensus views more often, people tend to agree with the consensus more.

Have you read Every Cause Wants to be a Cult, Evaporative Cooling of Group Beliefs, Groupthink, etc.?

Replies from: daenerys
comment by daenerys · 2011-11-23T20:52:13.527Z · LW(p) · GW(p)

This thread itself is an example of how LW ends up drilling down to insignificant details: I say LW is cultish. Give us an example, you say (LW in general, not you specifically). I say downvoting leads to group think. Give us an example, you say. I say (as an aside, it NOT really being my point at all) that LWers focus on the details at the expense of the whole. Give us an example, you say. I say here is a specific example, and some generalities. NOW you will debate with me, but at this point we are SO far down the rabbit hole that it doesn't even matter anymore.

If you want to deconstruct my post on meta PUA, feel free to do so... However, please do it as a response to my actual post, and not here, where it was just used as an example and I personally admitted that it wasn't the most well-written post.

As I recall, others did bring up the macro view, though disproving "No one ever tried to say," might require sifting through over a thousand comments. Something like "three times as many," as you said, might be accurate

I did in fact read almost all of the 1000+ posts in that thread hoping someone would post a macro view. You say they were there, but I did not see it. I will say "a hundred times as many", and feel fairly confident that I did not just entirely miss at least 13 posts. Feel free to prove me wrong. I will happily claim the mistake if you do.

you might agree with my drilling down to specific scenarios as being the most constructive thing granting my opinion that nothing close to such a consensus exists.

When people talk about generalities, it is imperative that all involved are willing to follow logical lines of thought without demanding proof and examples at every turn. Generalities contain so many specifics that you have to agree with other people that you hold the same general idea, while being willing to disagree on specifics. That's the only way these discussions can work. I understand that's not how LW works, and I myself said

That's ok. It's the easiest way to apply rationality, and I'll admit that.

I would like to say that I completely understand why LW doesn't do generalities. I understood when I wrote my last post. I accept this. You are correct in this. But when I say LW doesn't talk about generalities, you explain WHY that is, but it doesn't change the fact that LW doesn't talk generalities, which is the original point you wanted me to prove.

Autistic spectrum people who primarily care about ideas don't need to care about tone as much,

Trust me, I know what people on the Autism spectrum are like. I have worked for years in child and disability care, and currently have two clients with Asperger's. I have to deal with this at work, so I know that the Queen Anne's Revenge Lego ship costs $120 and has 1095 pieces and is for ages 9-16, and that the Burj Khalifa is the tallest building in the world and it's in Dubai and has over 160 stories. But is this really the conversation model we want to have? (I know it's something I certainly don't want to have to put up with, when I'm not being paid good money to) Because if it is, the ability of this site to attract smart NTs is going to drastically decrease. If you want LW to be a non-NT haven, that's great! Just call it that, instead of calling it a blog on rationality.

There's a joke on "Glee" where a character claims to have Asperger's, and uses that as an excuse to be rude to everyone. Saying people are on the Autism spectrum is an explanation of impoliteness, and a good way to apologize for rudeness (i.e. "I'm sorry, I didn't mean for my words to be taken that way. I have a problem with being too blunt"). It is NOT, however, a free ticket to be rude, without regard to consequences.

Being "polite" isn't just to help convince the other person to listen to you. It's to not completely drive the NTs away altogether.

(I also realize that I myself am not being overly polite right now, and I apologize. But I don't have time to make all these points in a polite manner, and apparently you are more than happy with people being blunt with you, so I will take you at your word in the matter.)

Have you read.........

I know that recommending the sequences is the LW version of "Fuck you". I assume, being a long time LW vet, that you are aware of the same. For your information, I have read much of the Death Spirals and the Cult Attractor sequence, but had not read the wiki (your link was broken, btw), and just read the first link.

However, my first contact with these ideas was actually from a TED talk a good while back (long ago enough that I don't remember who it was and can't find it). I think the TED talk comments are a good example of how people can have interesting discussions, but remain civil.

I am getting tired of this thread, and am unlikely to continue commenting. I'm gonna go back to transcribing videos.

Replies from: TimS, lessdazed
comment by TimS · 2011-11-23T21:45:36.261Z · LW(p) · GW(p)

When people talk about generalities, it is imperative that all involved are willing to follow logical lines of thought without demanding proof and examples at every turn. Generalities contain so many specifics that you have to agree with other people that you hold the same general idea, while being willing to disagree on specifics. That's the only way these discussions can work.

I wish I could upvote again just for this point.

comment by lessdazed · 2011-11-23T22:22:20.456Z · LW(p) · GW(p)

Give us an example, you say.

Code is destiny.

I mainly interact through the recent comments screen. Consequently I don't always know what had preceded, and didn't here. If excessive drilling is a problem, I think we have found a major reason why, and it is fixable in many ways.

I barely cared about the cult aspect, and that's why I didn't say anything about it and just pointed. I realize the cult aspect was central to your thinking even as the level issue was the one I cared about.

I also realize that I myself am not being overly polite right now, and I apologize.

I honestly didn't notice.

I know that recommending the sequences is the LW version of "Fuck you".

I agree, which is why I have never once ever recommended that a person read the sequences...I think. I have frequently recommended posts, wiki entries, entire sequences minus 1/3 of their content, etc. That's because it actually provides useful information. In your case I didn't recommend you read them, but asked if you had, as I wasn't sure.

TED

Shermer? Benscoter?

comment by antigonus · 2011-11-23T06:26:37.365Z · LW(p) · GW(p)

For what it's worth, I've posted a fair number of things in my short time here that go against what I assume to be consensus, and I've mostly only been upvoted for them. (This includes posts that come close to making the cult comparison.)

comment by XiXiDu · 2011-11-22T13:27:54.651Z · LW(p) · GW(p)

Erm, before you waste more time on this issue: you are right that Ben Goertzel was a bad, or even completely misplaced, choice for the category of people I wanted to list.

comment by lessdazed · 2011-11-22T19:28:53.432Z · LW(p) · GW(p)

And in the end they would either be ignored, downvoted, called stupid or their disagreement a result of motivated cognition.

"Called" both referred to the named people and linked "their disagreement" to "a result of motivated cognition". It's a syllepsis.

And in the end they would either be ignored, downvoted, or called stupid, or their disagreement was called a result of motivated cognition.

comment by Mercurial · 2011-11-23T04:40:28.916Z · LW(p) · GW(p)

That would be awesome, for sure! But I'd also prefer not to see this get frozen in planning just because there's a theoretical possibility of making it better. I'd still consider SIAI-biased advice to be vastly better than no advice at all.

comment by Incorrect · 2011-11-22T02:46:30.658Z · LW(p) · GW(p)

We already have too much lesswrong-exclusive jargon. "OFT" is unnecessary.

Replies from: windmil, lukeprog
comment by windmil · 2011-11-22T03:02:00.157Z · LW(p) · GW(p)

Even though if it were accepted it might be OFT used.

comment by lukeprog · 2011-11-22T02:52:45.878Z · LW(p) · GW(p)

That was my decision, too. :)

comment by scientism · 2011-11-22T21:01:59.552Z · LW(p) · GW(p)

This isn't really an "oops" but I do think you should spend some time exploring alternative approaches to cognitive science. The usual SIAI position seems to be to act as if there's a single, homogeneous field called "cognitive science" and contrast it with dissenting non-empirical philosophy. At the very least, say, read Tim van Gelder's introductory chapter to "Mind as Motion" to get a sense of the dynamicist critique of computationalism (and, if inclined, look at some of the empirical research) and check out JJ Gibson's "The Senses Considered as Perceptual Systems" and "The Ecological Approach to Visual Perception" for ecological psychology. There's also a lot of neuroscience outside cognitive/computationalist neuroscience.

Replies from: lukeprog, lukeprog
comment by lukeprog · 2011-11-26T15:47:27.444Z · LW(p) · GW(p)

Van Gelder represents computationalism this way:

According to [the computational] approach, when I return a serve in tennis, what happens is roughly as follows. Light from the approaching ball strikes my retina and my brain's visual mechanisms quickly compute what is being seen (a ball) and its direction and rate of approach. This information is fed to a planning system which holds representations of my current goals (win the game, return the serve, etc.) and other background knowledge (court conditions, weaknesses of the other player, etc.). The planning system then infers what I must do: hit the ball deep into my opponent's backhand. This command is issued to the motor system. My arms and legs move as required.

In its most familiar and strikingly successful applications, the computational approach makes a series of further assumptions. Representations are static structures of discrete symbols. Cognitive operations are transformations from one static symbol structure to the next. These transformations are discrete, effectively instantaneous, and sequential. The mental computer is broken down into a number of modules responsible for different symbol-processing tasks. A module takes symbolic representations as inputs and computes symbolic representations as outputs.

This is indeed a popular formulation of the computational theory of mind originally defended by Putnam and Fodor, but I'm not sure I've seen it endorsed in so many of its incorrect details by a major Less Wrong author. For example, my post Neuroscience of Human Motivation disagrees with the above description on several points.

Replies from: scientism
comment by scientism · 2011-11-26T19:11:16.253Z · LW(p) · GW(p)

I'm not sure the implementation details are particularly relevant to his main argument though. The central concern is that computation is step-wise whereas dynamicism is continuous in time. So a computational approach, by definition, will break a task into a sequence of steps and these have an order but not an inherent time-scale. (It's hard to see how an approach would be computationalist at all if this were not the case.) This has consequences for typical LessWrong theses. For example, speeding up the substrate for a computation has an obvious result: each step will be executed faster. If we have a computation consisting of three steps, S1 -> S2 -> S3, and each one takes 10 ms and we speed it up by a factor of 10 we'll have a computation that executes in 3 ms instead of 30 ms. But if we have a dynamical equation describing the system this isn't the case. I can speak of the system moving between states - say, S(t) -> S(t+1) - but if we speed up the components involved by 10x (say, these are neural states, and we're speeding up the neurons) I don't get the same thing but faster, I get something else entirely. Perhaps the result would be greater sensitivity to shorter time-scales but given that the brain is likely temporally organised I'm inclined to think what I'd get would be a brain that doesn't work at all.
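
(A toy sketch of that contrast in Python; the leaky-integrator example and all numbers are invented for illustration, not taken from van Gelder.)

    import math

    # Discrete computation: speeding up the substrate 10x gives the same
    # sequence of steps, just in one tenth of the wall-clock time.
    steps, step_ms = ["S1", "S2", "S3"], 10.0
    for speedup in (1, 10):
        total = len(steps) * step_ms / speedup
        print(f"{speedup:2d}x substrate: same steps {steps}, total {total:.0f} ms")

    # Dynamical system: a leaky integrator dx/dt = (-x + sin(t)) / tau, driven
    # by an input with its own timescale. Making the component 10x faster
    # (tau / 10) does not yield the same trajectory sooner; it yields a
    # qualitatively different trajectory.
    def peak_response(tau, t_end=20.0, dt=0.001):
        x, t, peak = 0.0, 0.0, 0.0
        while t < t_end:
            x += dt * (-x + math.sin(t)) / tau  # forward-Euler integration
            peak = max(peak, abs(x))
            t += dt
        return peak

    for tau in (1.0, 0.1):
        print(f"tau = {tau}: peak response {peak_response(tau):.2f}")
    # tau = 1.0 attenuates the driving input (peak ~0.7); tau = 0.1 tracks it
    # almost perfectly (peak ~1.0): different behavior, not the same behavior
    # faster.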

Replies from: lukeprog
comment by lukeprog · 2011-11-26T21:49:09.939Z · LW(p) · GW(p)

See, this is why I make LW discussion posts asking where I need to say "oops." :)

I encountered dynamical systems when I read my first cogsci textbook, and was probably too influenced by its take on dynamical systems. Here's what Bermudez writes on pages 429-430:

We started out... with the idea that the dynamical systems approach might be a radical alternative to some of the basic assumptions of cognitive science — and in particular to the idea that cognition essentially involves computation and information processing. Some proponents of the dynamical systems approach have certainly made some very strong claims in this direction. Van Gelder, for example, has suggested that the dynamical systems model will in time completely supplant computational models, so that traditional cognitive science will end up looking as quaint (and as fundamentally misconceived) as the computational governor.

[But] as we have seen throughout this book, cognitive science is both interdisciplinary and multi-level... This applies to the dynamical systems hypothesis no less than to anything else. There is no more chance of gaining a complete picture of the mind through dynamical systems theory than there is of gaining a complete account through neurobiology, say, or AI....

...Dynamical systems models are perfectly compatible with information-processing models of cognition. Dynamical systems models operate at a higher level of abstraction. They allow cognitive scientists to abstract away from details of information-processing mechanisms in order to study how systems evolve over time. But even when we have a model of how a cognitive system evolves over time we will still need an account of what makes it possible for the system to evolve in those ways.

Bermudez illustrates his point by saying that dynamical systems theory can do a good job of modeling traffic jams, but this doesn't mean we no longer have to think about internal combustion engines, gasoline, etc.

What do you think?

Replies from: scientism
comment by scientism · 2011-11-26T23:13:43.349Z · LW(p) · GW(p)

I think it's essentially begging the question. Van Gelder is questioning whether there is computation going on at all, so to say that dynamical systems abstract away from the details of the information-processing mechanisms is obviously to assume that computation is going on. That might be a way somebody already committed to computationalism could look to incorporate dynamical systems theory, but it's not a response to Van Gelder. This is obvious from the traffic analogy. The dynamical account of traffic is obviously an abstraction from what actually happens (internal combustion engines, gasoline, etc.). But the analogy only holds with cognitive science if you assume what actually happens in cognitive systems to be computation. What Van Gelder is doing is criticising computationalism for not being able to properly account for things that are critical to cognition (such as evolution in time). It's not clear to me what it could mean to abstract away from computational models in order to study how systems evolve over time if those models do not themselves say anything about how they evolve over time. I think Van Gelder addresses this. It's difficult to get an algorithmic model to be time-sensitive.

That said, whether the dynamical approach alone is adequate to capture everything about cognition is another matter. There are alternative approaches that provide an adequate description of mechanisms but that are more sensitive to the issue of time. For example, see Anthony Chemero's Radical Embodied Cognitive Science, where he argues that we need ecological psychology to make sense of the mechanisms behind the dynamics. Typically dynamicists operate from an embodied/ecological perspective and don't simply claim that the equations are the whole explanation (they are concerned with, say, neurons, bodies, the environment, etc.). I think Bermudez is also confused about levels here. Presumably the mechanism level for cognition is the brain and its neurons, and perhaps the body and parts of the environment, and a computational account is an abstraction from those mechanisms just as much as a dynamical equation is. It's common in computationalism to confuse and conflate identifying the brain as a computer with merely claiming that a computational approach gives an adequate descriptive account of some process in the brain. So, for example, I could argue that an algorithm gives an adequate description of a given brain process because it is not time-sensitive and can therefore be described as a sequence of successive states without reference to its evolution in time. But that would not imply that the underlying mechanisms are computational, only that a computational description gives an adequate account.

Replies from: lukeprog
comment by lukeprog · 2011-11-27T00:23:19.233Z · LW(p) · GW(p)

But that would not imply that the underlying mechanisms are computational, only that a computational description gives an adequate account

Could you elaborate on what you mean by this? Our most successful computational models of various cognitive systems at different levels of organization do remarkably well at predicting brain phenomena, to the point where we can simulate increasingly large cortical structures.

I read most of Van Gelder's last article on dynamical cognitive systems (in BBS) before he switched to critical thinking and argument mapping research, and I'm still not seeing why computationalism and the dynamical systems approach are incompatible. For example, Van Gelder says that a distinguishing feature of dynamical systems is its quantitative approach to states - but of course computationalism is often quantitative about states, too. Trappenberg must be confused, too, since his textbook on computational neuroscience talks several times about dynamical systems and doesn't seem to be aware that they are somehow at odds with his program of computationalism.

Naively, it looks to me like the dynamical systems approach was largely a reaction to early versions of the physical symbol system hypothesis and neural networks, but if you understand computationalism in the modern sense (which often includes models of time, quantitative state information, etc.) while still describing the system in terms of information processing, then there doesn't seem to be much conflict between the two.
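
(A sketch of my own to illustrate why I see compatibility rather than conflict: a step-wise program can approximate continuous dynamics as closely as you like by shrinking its time step, so a dynamical description doesn't by itself rule out a computational one. The leaky-integrator equation below is an invented example, not from the literature.)

    import math

    # Forward-Euler integration of dx/dt = (-x + sin(t)) / tau is itself a
    # step-wise, symbol-manipulating computation, yet it converges on the
    # continuous trajectory as the step size dt shrinks.
    def euler_x(t_end, tau, dt):
        x, t = 0.0, 0.0
        while t < t_end:
            x += dt * (-x + math.sin(t)) / tau
            t += dt
        return x

    def exact_x(t, tau):
        # Closed-form solution of the same equation with x(0) = 0.
        return ((math.sin(t) - tau * math.cos(t)) / (1 + tau**2)
                + (tau / (1 + tau**2)) * math.exp(-t / tau))

    tau, t_end = 1.0, 5.0
    for dt in (0.1, 0.01, 0.001):
        err = abs(euler_x(t_end, tau, dt) - exact_x(t_end, tau))
        print(f"dt = {dt}: |error| = {err:.5f}")
    # The error shrinks roughly in proportion to dt: the discrete computation
    # approximates the continuous dynamics to any desired precision.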

Even Chemero agrees:

On our view, dynamical and [computational] explanation of the same complex system get at different but related features of said system described at different levels of abstraction and with different questions in mind. We see no a priori reason to claim that either kind of explanation is more fundamental than the other.

comment by lukeprog · 2011-11-26T10:36:13.764Z · LW(p) · GW(p)

Thanks for the specific recommendations.

comment by CharlesR · 2011-11-22T14:03:15.089Z · LW(p) · GW(p)

Put CSA out of its misery?

Replies from: David_Gerard, lukeprog
comment by David_Gerard · 2011-11-22T20:44:00.470Z · LW(p) · GW(p)

Turn it into somewhere to post links and synopses of your LessWrong posts.

The quality of commenting there is just really awful compared to the comments on your LessWrong posts, but it has a separate audience that LW doesn't. Even a muddle-headed audience that reads the thing may produce useful results. (They might come here.)

Replies from: lukeprog
comment by lukeprog · 2011-11-22T21:02:12.189Z · LW(p) · GW(p)

It's possible that I should just start deleting all comments that are shitty, but that would require lots of time.

Replies from: David_Gerard, None, CharlesR, lessdazed
comment by David_Gerard · 2011-11-23T08:52:18.766Z · LW(p) · GW(p)

Only stuff you'd delete as abusive rubbish. But the sincere stupidity is sincere. Best treat it like comments on a newspaper site ;-) You clearly have a readership there (aggrieved theists who think they're philosophers), and surely there's something you can do with that.

comment by [deleted] · 2011-11-22T22:33:15.388Z · LW(p) · GW(p)

.

comment by CharlesR · 2011-11-22T21:58:22.329Z · LW(p) · GW(p)

I don't think that is a good use of your time.

comment by lessdazed · 2011-11-22T22:47:47.364Z · LW(p) · GW(p)

Establishing a hierarchy of moderators would be a good idea. Those who become moderators will become invested in the site and its ideas.

Replies from: lukeprog
comment by lukeprog · 2011-11-22T22:54:08.405Z · LW(p) · GW(p)

I don't want to even invest that much effort in CSA. It's really just a place for me to post occasional articles that don't fit elsewhere, and to post weekly links.

comment by lukeprog · 2011-11-22T22:01:02.696Z · LW(p) · GW(p)

Are you seeing it as being in "misery" because you are expecting it to be what it once was instead of the much more modest, personal, and erratic thing I have declared that it now is?

Replies from: CharlesR
comment by CharlesR · 2011-11-23T02:21:18.984Z · LW(p) · GW(p)

I am wondering if you have good reasons for investing any time there at all. It's possible. That's why I phrased it as a question.

Replies from: lukeprog
comment by lukeprog · 2011-11-23T03:43:08.597Z · LW(p) · GW(p)

There are still a surprising number of readers.

Replies from: CharlesR
comment by CharlesR · 2011-11-23T04:10:23.706Z · LW(p) · GW(p)

Is that your true rejection?

Replies from: lukeprog
comment by lukeprog · 2011-11-23T04:53:31.384Z · LW(p) · GW(p)

I think so. If it wasn't so instrumentally useful due to its continuing readership, I'd just freeze the whole site and let it sit there as an archive and never have to delete spam comments again.

Replies from: CharlesR
comment by CharlesR · 2011-11-23T05:43:37.913Z · LW(p) · GW(p)

So you think you can reach a wider audience because many of those people will (for whatever reason) not follow you here.

I think another reason could be: Because one day I want to write a book.

comment by Luke_A_Somers · 2011-11-22T04:03:10.144Z · LW(p) · GW(p)

I'm reminded of my recent brainstorming session on that neutrino business.

I'm not sure why such errors wouldn't be addressed in their normal contexts right where they've been made.

comment by [deleted] · 2011-11-22T22:31:26.765Z · LW(p) · GW(p)

.