Hold Off On Proposing Solutions
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-17T03:16:04.000Z · LW · GW · Legacy
From Robyn Dawes’s Rational Choice in an Uncertain World.1 Bolding added.
Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem. Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested. Maier enacted an edict to enhance group problem solving: “Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.” It is easy to show that this edict works in contexts where there are objectively defined good solutions to problems.
Maier devised the following “role playing” experiment to demonstrate his point. Three employees of differing ability work on an assembly line. They rotate among three jobs that require different levels of ability, because the most able—who is also the most dominant—is strongly motivated to avoid boredom. In contrast, the least able worker, aware that he does not perform the more difficult jobs as well as the other two, has agreed to rotation because of the dominance of his able co-worker. An “efficiency expert” notes that if the most able employee were given the most difficult task and the least able the least difficult, productivity could be improved by 20%, and the expert recommends that the employees stop rotating. The three employees and . . . a fourth person designated to play the role of foreman are asked to discuss the expert’s recommendation. Some role-playing groups are given Maier’s edict not to discuss solutions until having discussed the problem thoroughly, while others are not. Those who are not given the edict immediately begin to argue about the importance of productivity versus worker autonomy and the avoidance of boredom. Groups presented with the edict have a much higher probability of arriving at the solution that the two more able workers rotate, while the least able one sticks to the least demanding job—a solution that yields a 19% increase in productivity.
I have often used this edict with groups I have led—particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately. While I have no objective criterion on which to judge the quality of the problem solving of the groups, Maier’s edict appears to foster better solutions to problems.
This is so true it’s not even funny. And it gets worse and worse the tougher the problem becomes. Take artificial intelligence, for example. A surprising number of people I meet seem to know exactly how to build an artificial general intelligence, without, say, knowing how to build an optical character recognizer or a collaborative filtering system (much easier problems). And as for building an AI with a positive impact on the world—a Friendly AI, loosely speaking—why, that problem is so incredibly difficult that an actual majority resolve the whole issue within fifteen seconds.2 Give me a break.
This problem is by no means unique to AI. Physicists encounter plenty of nonphysicists with their own theories of physics, economists get to hear lots of amazing new theories of economics. If you’re an evolutionary biologist, anyone you meet can instantly solve any open problem in your field, usually by postulating group selection. Et cetera.
Maier’s advice echoes the principle of the bottom line, that the effectiveness of our decisions is determined only by whatever evidence and processing we did in first arriving at our decisions—after you write the bottom line, it is too late to write more reasons above. If you make your decision very early on, it will, in fact, be based on very little thought, no matter how many amazing arguments you come up with afterward.
And consider furthermore that we change our minds less often than we think: 24 people assigned an average 66% probability to the future choice thought more probable, but only 1 in 24 actually chose the option thought less probable. Once you can guess what your answer will be, you have probably already decided. If you can guess your answer half a second after hearing the question, then you have half a second in which to be intelligent. It’s not a lot of time.
Traditional Rationality emphasizes falsification—the ability to relinquish an initial opinion when confronted by clear evidence against it. But once an idea gets into your head, it will probably require way too much evidence to get it out again. Worse, we don’t always have the luxury of overwhelming evidence.
I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer. To suspend, draw out, that tiny moment when we can’t yet guess what our answer will be; thus giving our intelligence a longer time in which to act.
Even half a minute would be an improvement over half a second.
1Robyn M. Dawes, Rational Choice in An Uncertain World, 1st ed., ed. Jerome Kagan (San Diego, CA: Harcourt Brace Jovanovich, 1988), 55–56.
2See Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk.”
52 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Gray_Area · 2007-10-17T04:40:23.000Z · LW(p) · GW(p)
What circles do you run in, Eliezer? I meet a fair number of people who work in AI (you can say I "work in AI" myself), and so far I can't think of a single person who was sure of a way to build general intelligence. Is this attitude you observe a common one among people who aren't actually doing AI research, but who think about AI?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-17T05:10:46.000Z · LW(p) · GW(p)
Oh, I'm not talking about the mainstream AI field. Most of them know better. I mean, say, a random middle or upper-class individual in Silicon Valley, or a random user on an IRC channel.
However, the rule about instantly solving Friendly AI may apply even within the AI field, since it's a more difficult problem.
comment by Constant2 · 2007-10-17T06:06:45.000Z · LW(p) · GW(p)
It's obvious how to build AI. You just add complexity. AIs need complexity. :-)
↑ comment by xenohunter · 2023-08-11T15:29:43.524Z · LW(p) · GW(p)
And some emergent properties for sure!
comment by Richard_Hollerith · 2007-10-17T08:41:08.000Z · LW(p) · GW(p)
And a randomness-adder :)
comment by Eddieosh · 2007-10-17T09:14:48.000Z · LW(p) · GW(p)
I've just finished a 3-day training course on TRIZ (http://en.wikipedia.org/wiki/TRIZ), a problem-solving technique. One of the recurring themes throughout the course was what to do about all the solutions that come out even before you've figured out what the true problem is you're trying to solve. The advice was to write the solutions down (rather than be diverted by them or try to bat them away), use them to help examine the problem a bit more, and then carry on until you have enough information to make useful judgements about all the solutions you've generated; this was very helpful advice. You need to have a sound way of formulating and exploring the problem space, as well as generating solutions, otherwise you'll become too distracted by all the great solutions your brain is generating.
comment by logicnazi · 2007-10-17T09:21:30.000Z · LW(p) · GW(p)
I just want to remark that it is far from obvious on a priori grounds that there is no elegant general AI algorithm that will solve all the other problems quite nicely. We've only learned this from the AI community's continued failure to find such an algorithm or anything like it, and from the continued small successes of more specific, less elegant approaches.
comment by Rick_Smith · 2007-10-17T10:07:46.000Z · LW(p) · GW(p)
AIs need Emergence too. Make sure to add some of that to the soup ;^)
comment by Alan_Crowe · 2007-10-17T11:55:07.000Z · LW(p) · GW(p)
X3J13, the ANSI committee that standardised Common Lisp, had many problems to solve. Kent Pitman credits Larry Masinter with imposing the discipline of separating problem descriptions from proposed solutions, and gives insights into what that meant in practice in a post to comp.lang.lisp.
The general interest lies in the fact that the X3J13 Issues were all written up and are available online.
http://www.lispworks.com/documentation/HyperSpec/Front/X3J13Iss.htm
or
http://www.lisp.org/HyperSpec/FrontMatter/X3J13-Issues.html
so if you wish to study how this works, there is a resource you can analyse.
I should confess that my interest has been in content not process. I have been reading these issues to learn Common Lisp. Are these pages really a useful resource for scholars wishing to study the separation of problem descriptions from proposed solutions? I don't know.
comment by Tiedemies2 · 2007-10-17T12:01:16.000Z · LW(p) · GW(p)
I think this argument is flawed with respect to the more technology-oriented questions. Most people do not seriously claim to solve AI problems. What most people who are slightly educated in the field (like myself; I did an undergrad minor in AI, just very simple stuff) will do is suggest an approach that they would try if they had to start working on it. Technical questions also usually yield to evidence very quickly whenever it matters, i.e., when someone would start burning money on an implementation. That is not to say some time and resources are not to be saved by using the maxim outlined here.
OTOH, the part about economists is valid, since most people have very strong ideas (usually wrong ones) about what will work, e.g., as a policy. But then again, most people have no way of wasting (other peoples') resources based on these faulty ideas.
No, wait...
comment by 4σ · 2007-10-17T13:42:33.000Z · LW(p) · GW(p)
The latest of a number of really good posts from you that directly address the concern of this blog. You seem to be really starting to "grok" the terrifying reality of just how biased we are by the very nature of our thought processes, and coming up with good and useful steps to reduce those biases. Nicely done.
comment by michael_vassar3 · 2007-10-17T14:33:23.000Z · LW(p) · GW(p)
This post makes me wonder how much time passed for Eliezer between concluding that a technological singularity was a probable part of the future and deciding that creating an AGI was the best response, and likewise how much time passed between concluding that AGI Friendliness would be a difficult problem and concluding that working on a theory of AGI Friendliness was the best response.
comment by Richard_Hollerith · 2007-10-17T18:55:12.000Z · LW(p) · GW(p)
Eliezer, I get the impression that your recent blog entries will make me a better rationalist or, if not that, a better inventor of software, organizational innovations, and social arrangements that will help people become better rationalists.
Good stuff, I say.
comment by anonymous_poster · 2007-10-17T19:09:57.000Z · LW(p) · GW(p)
A surprising number of people I meet seem to know exactly how to build an Artificial General Intelligence, without, say, knowing how to play the guitar or juggle (much easier problems).
↑ comment by danlowlite · 2010-10-29T13:58:30.753Z · LW(p) · GW(p)
Yes, but while those two topics may be interesting to me, other "easy" problems (home and car maintenance, farming) are not so much even though I recognize their importance. I'm not going to learn how to do everything basic before I am going to learn something complicated. Am I?
Is an AI?
And these problems aren't even easy, really. Like the person who knows how to make an AI, one imagines they "know" how to play guitar. There's a competence level and there is a deeper mastery/creation level. I know three chords; I am not .
Unless that was your point.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-17T21:03:02.000Z · LW(p) · GW(p)
Playing the guitar has human-aesthetic components so it's a subproblem of Friendly AI, not just AGI. Building an AI that juggles is a valid challenge. As for trying to do it yourself, that quite misses the point. A mathematician may not be able to do high-speed mental arithmetic, but ought to know how to build a calculator.
comment by Luis_Enrique2 · 2007-10-17T22:02:27.000Z · LW(p) · GW(p)
I remember reading something much like this in I Am Right, You Are Wrong by Edward de Bono, who as I recall wrote that we should try to hold on to the "I haven't made my mind up" state much longer than we do, and be prepared to say "I don't know" much more often than we do (I think he even proposed a new word we could use to answer questions with, meaning that we don't yet have a reason to think either way). This was about 15 years ago, so I've probably mis-remembered.
I was a philosophy undergrad at the time, and when I asked my tutors about de Bono, they told me he was a vacuous 'self-help' nitwit I should ignore.
comment by GreedyAlgorithm · 2007-10-17T22:45:03.000Z · LW(p) · GW(p)
"My Ap distribution is rather flat."
Hm, MADIRF? :)
comment by Doug_S. · 2007-10-18T00:14:35.000Z · LW(p) · GW(p)
Completely useless methods for building a general intelligence:
Method 1: Put some bacteria on a lifeless planet with liquid water. Wait until one evolves.
Method 2: Find a fertile human of each gender and induce them to mate. Wait nine months.
comment by Douglas_Knight2 · 2007-10-18T03:08:14.000Z · LW(p) · GW(p)
Luis Enrique, see above about "We Change Our Minds Less Often Than We Think"; my interpretation is that the people are trying to believe that they haven't made up their minds, but they are wrong. That is, they seem to be implementing the (first) advice you mention. Maybe one can come up with more practical advice, but these are very difficult problems to fix, even if you understand the errors. On the other hand, the main part of the post is about a successful intervention.
comment by Rolf_Nelson2 · 2007-10-18T05:31:08.000Z · LW(p) · GW(p)
Constant, regarding "analysis paralysis," keep in mind there are often two separate questions:
How much time should I spend thinking about X?
Given I'm allocating T time to think about X, how should I divide up T among different thought subtasks?
Analysis Paralysis would generally be a problem with (1).
The current blog post applies more to (2). In the Maier example, the participants presumably know they have a sizable chunk of time blocked out, and the experimental group presumably gets better results not by spending more time overall, but because they reserved a good chunk of T to spend learning the problem, without committing right away to a solution.
comment by Lewsome · 2010-01-17T00:44:36.261Z · LW(p) · GW(p)
The notion of delaying proposed 'solutions' as long as possible seems an excellent technique for group work, where stated propositions not only appear prematurely but become entangled with other, perhaps unproductive interpersonal dynamics, and where the energy of the deliberately 'unmade up' group mind can possibly assist the individual to internally change position. The thorny bit for me, however, is the individual trying to 'hold that non-thought' - a challenge more or less equivalent to deliberately stopping, or even slowing, the thought process, which is meditation after all - something we mere mortals haven't found all that easy so far. Indeed, some argue that many of us aren't even aware there is an 'internal dialogue', let alone knowing how to stop it. In other words, it's easy to say don't make up your mind, but not so easy to enact.
↑ comment by ericn · 2010-12-30T06:45:00.617Z · LW(p) · GW(p)
It's okay to think up solutions. You just have to write them down and refocus on the problem.
This is how a brainstorming session is supposed to work. The main goal of the facilitator is to keep the group criticism from spinning out of control. Usually, if someone proposes a solution, someone will shout out an objection to it. But we should still be thinking about the problem. Just write down the solution and shush the objection, then return to the problem.
comment by ericn · 2010-12-30T06:38:21.552Z · LW(p) · GW(p)
I agree. I really hate our notion that "you shouldn't bring up a problem unless you have a solution".
It is obvious to anyone who solves problems that we should analyze the problem before letting our minds move on to a solution.
↑ comment by FriendlyViking · 2011-03-17T17:33:34.673Z · LW(p) · GW(p)
The people advocating that might be confusing analysis with politics. It's annoying when someone criticises your political idea but offers no alternative; it feels (sometimes accurately) that they're disrupting the conversation but offering no input. So in a political debate, a ground rule might be "don't criticise my solution if you don't have a solution of your own".
Rationally, however, that doesn't excuse not assessing the solution. And it's also important to remember that one potential solution is "do nothing" or "carry on doing what we were doing already". So, in most cases, ANY new solution has an alternative solution to which it can be compared.
comment by wamblewire · 2011-04-29T08:38:05.801Z · LW(p) · GW(p)
That's like arguing food doesn't taste good because I can't prove it.
↑ comment by wamblewire · 2011-04-29T08:41:50.075Z · LW(p) · GW(p)
Please stop misusing the word edict.
↑ comment by AdeleneDawner · 2011-04-29T08:42:21.626Z · LW(p) · GW(p)
...what? How is the 'question' of whether food tastes good even related to this? It's nothing like a problem needing a solution.
comment by MarkusQ · 2011-08-11T05:51:36.636Z · LW(p) · GW(p)
Are you sure of that citation? I just looked for it in a copy of Dawes's "Rational Choice in an Uncertain World" and again with the full text search in Google books
http://books.google.com/books/about/Rational_choice_in_an_uncertain_world.html?id=rcU1BsfrM2kC
and did not find any mention of Maier's work. Also, though Maier does frequently use the "Changing Work Procedures" problem, I haven't turned up any publication by him that matches this description. (Note that this failure is quite possibly mine; I haven't done an exhaustive search).
-- MarkusQ
↑ comment by byrnema · 2011-08-26T11:30:33.027Z · LW(p) · GW(p)
I'm thinking perhaps it is this book by Norman R.F. Maier:
Problem Solving Discussions and Conferences, published by McGraw-Hill Education (December 1963).
Does anyone know of a more recent journal article on the topic of 'wait before proposing solutions'?
comment by mat33 · 2011-10-08T02:22:20.106Z · LW(p) · GW(p)
"why, that problem is so incredibly difficult that an actual majority resolve the whole issue within 15 seconds.", "We Change Our Minds Less Often Than We Think" and "Cached Thoughts"...
Right. We don't do a lot of "our" thinking ourselves. We aren't individually sentient, not really. We don't notice it, but the actual thinking is going on in our subcultures. The sad and funny thing is, we don't even try to understand the cognition of our subcultures, when we research cognition.
↑ comment by stcredzero · 2012-06-03T21:04:59.839Z · LW(p) · GW(p)
I think I'm sentient. If you're not sentient, I would surmise that you believe you're lucky enough to be in a competent subculture -- one self-aware enough to bring this realization to you.
Could one devise a series of experiments to show that individuals aren't sentient, but "subcultures" are?
↑ comment by papetoast · 2022-12-19T10:39:56.395Z · LW(p) · GW(p)
We aren't individually sentient, not really.
We do less thinking than we imagine, but we still think. However, I still agree (to a lesser extent) that (sub)cultures fixed many thoughts of many people.
The sad and funny thing is, we don't even try to understand the cognition of our subcultures, when we research cognition.
I find two possible meanings of "we" here, but the sentence is false in both senses:
- "We" = all of humanity: The "cognition of subcultures" sounds like half Anthropology and half Psychology, and I imagine it has been researched.
- "We" = individuals, rationalists: If your goal is to think by yourself, it is minimally useful to understand how culture "thinks" for you. Knowing how to not let culture think for you is enough.
comment by ameriver · 2012-05-20T03:05:38.126Z · LW(p) · GW(p)
This is one of the techniques I've always thought sounded really useful, but never had a clear enough picture of to implement for myself. Does anyone have an example (a transcript, or something of the like) of groups and/or individuals successfully discussing a problem for 5 or 10 minutes without proposing any solutions? I have trouble imagining what that would look like.
↑ comment by TheOtherDave · 2012-05-20T04:23:52.043Z · LW(p) · GW(p)
No transcript. But I do this professionally all the time. Clients frequently come to me with a design in mind for a solution, and it's often important to back them up and get them to tell me what the problem actually is.
Usually, I start with the question "How would you be able to tell that this problem had been solved?" and repeat it two or twenty times in different words until someone actually tries to answer it.
On one occasion I handed a client my pen and asked whether it was a solution to their problem. They looked at me funny and said it wasn't. I asked them how they knew that, and after a while one of them said "well, for one thing, it doesn't do X" and I said "great!", took the pen back, and wrote "has to do X". Then I handed them the pen back and said "OK, suppose I add the ability to do X somehow to this pen. Is it a solution to your problem now?" and after a couple of iterations they got it and started actually telling me what their problem was.
The thing that used to astonish me is how often the proposed solution utterly fails to even address the problem articulated by the same person who proposed the solution. I've come to expect it.
↑ comment by Jonathan_Graehl · 2012-06-15T07:11:54.758Z · LW(p) · GW(p)
I start with the question "How would you be able to tell that this problem had been solved?" and repeat it two or twenty times in different words until someone actually tries to answer it.
I handed a client my pen and asked whether it was a solution to their problem
Bleakly funny. Thanks for that. I usually retreat (probably with an angry or pained look on my face) when I notice I'm not really being heard. But sometimes it's better to play and explore.
↑ comment by TheOtherDave · 2012-06-15T15:18:29.628Z · LW(p) · GW(p)
(nods) It's kind of critical in a systems engineering role.
Only vaguely relatedly, one of my favorite lines ever came from my first professional mentor, about a design he was proposing: "It does what you expect, but you have to expect the right things."
↑ comment by Epiphany · 2012-09-21T04:40:15.477Z · LW(p) · GW(p)
Usually, I start with the question "How would you be able to tell that this problem had been solved?" and repeat it two or twenty times in different words until someone actually tries to answer it.
What a true and hilarious depiction of life. I have the exact same problem doing web development. Because the people giving me projects are not IT people they tend to come up with totally dysfunctional solutions. Yet they almost always start by telling me how they want the problem solved. I have to dig to find out what the problem is first but I just ask them "What result do you want?" or "What purpose do you want this to serve?" and say "I can't make it serve the purpose without knowing what the purpose is." That works for me, without me having to ask them 20 times. Then again maybe you're doing projects in radically different contexts all the time, or with completely different people who vary in their ability to see the point in answering that question. I work with a limited number of people and contexts, all of which I understand pretty well, so my problem clarification process is pretty simple.
↑ comment by TheOtherDave · 2012-09-21T06:55:38.393Z · LW(p) · GW(p)
Yeah, it's different people and a different context every time.
↑ comment by Zian · 2013-01-22T05:05:08.342Z · LW(p) · GW(p)
What purpose do you want this to serve ... I work with a limited number of people and contexts, all of which I understand pretty well, so my problem clarification process is pretty simple.
In my experience as a programmer (who wore all the software-related hats), I found that even when I understood the domain quite well, inquired about the purpose multiple times, and wrote little stories illustrating my interpretation of the users' desires, I could walk away from early usability tests with major changes to the project.
In one particularly memorable instance, I got all the way through making paper prototypes and making pretend e-mails. Then, I convinced my manager to try out the system. The process started in a pre-existing e-mail package and then routed stuff to the proposed custom software. He sat down, opened up the pretend e-mail, and started to save the attached files. At that point, we discovered that there was no need for the custom software and killed the entire project.
comment by Insert_Idionym_Here · 2012-09-10T22:05:41.006Z · LW(p) · GW(p)
I have attempted using this in more casual decision making situations, and the response I get is nearly always something along the lines of "Okay, just let me propose this one solution, we won't get attached to it or anything, just hear me out..."
↑ comment by Shmi (shminux) · 2012-09-10T22:13:37.869Z · LW(p) · GW(p)
What do you do in this situation? Let them speak? Ask them to write down their solution, to be discussed later?
Oops... Couldn't resist proposing solutions.
↑ comment by Insert_Idionym_Here · 2012-09-11T05:08:44.824Z · LW(p) · GW(p)
To be perfectly honest, at the time I simply planted my face on the table in front of me a few times. I was at a dinner party with friends of my mother's; I would have sounded extremely condescending otherwise.
↑ comment by Shmi (shminux) · 2012-09-11T16:30:12.826Z · LW(p) · GW(p)
Ah yes, status mismatch in a not very rational crowd. Not much you can do there.
comment by Zian · 2014-11-28T08:19:34.745Z · LW(p) · GW(p)
There's a comment already asking for more modern articles/citations/research on this topic, but in case someone wants to run with this idea in real life, you can find a summary of Norman Maier's research at http://www.iaf-world.org/Libraries/IAF_Journals/Assets_and_Liabilities_in_Group_Problem_Solving.sflb.ashx
The article was written by Norman Maier in 1967 and reprinted in Psychological Review in 1999. For those of you with access to well-funded libraries, the citations are:
- Psychological Review, Volume 74, Number 4, Pages 239-249.
- Group Facilitation: A Research and Applications Journal — Volume 1, Number 1, Winter 1999, Pages 45-51
And, to be on really solid ground, you'd want the actual source article(s) that the above review refers to. They are:
- Hoffman, L. R., & Maier, N. R. F. The use of group decision to resolve a problem of fairness. Personnel Psychology, 1959, 12, 545-559
- Maier, N. R. F. Screening solutions to upgrade quality: A new approach to problem solving under conditions of uncertainty. Journal of Psychology, 1960, 49, 217-231.
- Maier, N. R. F. Problem solving discussions and conferences: Leadership methods and skills. New York: McGraw-Hill, 1963.
- Maier, N. R. F., & Hayes, J. J. Creative management. New York: Wiley, 1962.
- Solem, A. R. 1965: Almost anything I can do, we can do better. Personnel Administration, 1965, 28, 6-16.
comment by MiaNetmaking · 2019-01-10T14:55:43.840Z · LW(p) · GW(p)
From which edition of the book does the reference originate? At first glance it does not seem to be included in the second edition and I'm curious to read more about it.
comment by lesswronguser123 (fallcheetah7373) · 2024-04-06T10:15:10.495Z · LW(p) · GW(p)
https://intelligence.org/files/AIPosNegFactor.pdf is the missing citation.
comment by Linda Linsefors · 2024-08-08T11:29:08.172Z · LW(p) · GW(p)
I think the main important lesson is to not get attached to early ideas. Instead of banning early ideas, if anything comes up, you can just write it down and set it aside. I find this easier than a full ban, because it's just an easier move for my brain to make.
(I have a similar problem with rationalist taboo [? · GW]. Don't ban words; instead, require people to locally define their terms for the duration of the conversation. It solves the same problem, and it isn't a ban on thought or speech.)
The other important lesson of the post is to focus, in the early discussion, on increasing your shared understanding of the problem rather than on generating ideas. I.e., it's ok for ideas to come up (and when they do, you save them for later), but generating ideas is not the goal in the beginning.
Hm, thinking about it, I think the mechanism of classical brainstorming (where you think of as many ideas as you can up front) is to exhaust all the trivial, easy-to-think-of ideas as fast as you can, so that you're then forced to think deeper to come up with new ideas. I guess that's another way to do it. But I think this method is both ineffective and unreliable, since it only works through a secondary effect.
. . .
It is interesting to compare the advice in this post with the Game Tree of Alignment [LW · GW] or the Builder/Breaker Methodology, also here [LW · GW]. I've seen variants of this exercise popping up in lots of places in the AI Safety community. Some of them are probably inspired by each other, but I'm pretty sure (80%) that this method has been invented several times independently.
I think that GTA/BBM works for the same reason the advice in the post works. It also solves the problem of not getting attached, and as you keep breaking your ideas and exploring new territory, you expand your understanding of the problem. I think an active ingredient in this method is that the people playing this game know that alignment is hard, and go in expecting their first several ideas to be terrible. You know the exercise is about noticing the flaws in your plans and learning from your mistakes. Without this attitude, I don't think it would work very well.