A medium for more rational discussion

post by Adam Zerner (adamzerner) · 2014-02-24T17:20:49.248Z · LW · GW · Legacy · 21 comments


It would be cool if online discussions allowed you to 1) declare your claims, 2) declare how your claims depend on each other (i.e., make a dependency tree), 3) discuss the claims, and 4) update the status of each claim by saying whether or not you agree with it, using something like the text shorthand for uncertainty to say how confident you are in your agreement/disagreement.
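A minimal sketch of what a claim with dependencies and confidence-weighted votes might look like as a data structure (all names here are hypothetical, just one way to flesh out the idea):

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    depends_on: list = field(default_factory=list)  # parent claims this one rests on
    votes: list = field(default_factory=list)       # (agrees: bool, confidence: 0-1)

    def vote(self, agrees, confidence):
        self.votes.append((agrees, confidence))

    def consensus(self):
        """Confidence-weighted agreement, from -1 (disagree) to +1 (agree)."""
        if not self.votes:
            return 0.0
        total = sum(c if a else -c for a, c in self.votes)
        return total / len(self.votes)

# Build a tiny dependency tree: a conclusion resting on two premises.
premise1 = Claim("Structured discussion reduces repeated arguments")
premise2 = Claim("Newcomers can catch up faster from a claim map")
conclusion = Claim("A claim-mapping tool would improve discussions",
                   depends_on=[premise1, premise2])

conclusion.vote(agrees=True, confidence=0.8)
conclusion.vote(agrees=False, confidence=0.5)
print(round(conclusion.consensus(), 2))  # 0.15
```

A real tool would need identity, edit history, and the answer-wiki summaries on top of this, but the core is just a tree of claims with vote tallies attached.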

I think that mapping out these things visually would allow for more productive conversation. And it would also allow newcomers to the discussion to quickly and easily get up to date, rather than having to sift through tons of comments. On this note, there should also probably be something like an answer wiki for each claim to summarize the arguments and say what the consensus is.

I get the feeling that it should be flexible though. That probably means that it should be accompanied by the normal commenting system. Sometimes you don't actually know what your claims are, but need to "talk it out" in order to figure out what they are. Sometimes you don't really know how they depend on each other. And sometimes you have something tangential to say (on that note, there should probably be an area for tangential comments, or at least a way to flag them as tangential).

As far as who would be interested in this: obviously the Less Wrong community would be, and I think some other online communities definitely would too (Hacker News, some subreddits...).

Also, this may be speculative, but I would hope that it would develop a reputation as the most effective way to have a productive discussion. So much so that people would start saying, "go outline your argument on [name]". Maybe there'd even be pressure for politicians to do this. If so, then I think this could put pressure on society to be more rational.

What do you guys think?


EDIT: If anyone is actually interested in building this, you definitely have my permission (don't worry about "stealing the idea"). I want to build it, but 1) I don't think I'm a good enough programmer yet, and 2) I'm busy with my startup.

EDIT: Another idea: if you think that a statement commits an established fallacy, then you should be able to flag it (like this). And if enough other people agree, then the statement is underlined or highlighted or something. The advantage to this is that it makes the discussion less "bulky". A simple version of this would be flagging things as less than DH6. But there are obviously a bunch of other things worth flagging that Eliezer has talked about in the sequences that are pretty non-controversial.

EDIT: Here is a rough mockup of how it would look. Notes: 

- The claims should show how many votes of agreement/disagreement they got. Probably using text shorthand for uncertainty.

- The claims should be colored green if there is a lot of agreement, and red if there is a lot of disagreement.

- See edit above. Commenting in the discussion should be like this. And you should be able to flag statements as fallacious in a similar way. If there is enough agreement about the flag, the statement should be underlined in red or something.
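One way the green/red coloring from the notes above could be computed from vote tallies (the threshold is arbitrary, purely for illustration):

```python
def claim_color(agree_votes, disagree_votes, threshold=0.7):
    """Color a claim green on strong agreement, red on strong disagreement."""
    total = agree_votes + disagree_votes
    if total == 0:
        return "neutral"
    ratio = agree_votes / total
    if ratio >= threshold:
        return "green"
    if ratio <= 1 - threshold:
        return "red"
    return "neutral"

print(claim_color(9, 1))  # green
print(claim_color(2, 8))  # red
print(claim_color(5, 5))  # neutral
```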

21 comments

Comments sorted by top scores.

comment by trist · 2014-02-24T22:29:15.541Z · LW(p) · GW(p)

I've been working on a tool like this. Done well it would be applicable to more than just debate... If folks want to collaborate, I'm interested.

Replies from: trist
comment by trist · 2014-02-27T01:42:18.859Z · LW(p) · GW(p)

Update on the collaboration:

One person has contacted me so far. We're each prototyping to our own vision, with plans to share our results (with each other at least) at some point. We'd love to exchange previews with more people; please don't let our working on it stop you from prototyping your own vision of how it might work.

comment by Stefan_Schubert · 2014-02-24T19:35:17.092Z · LW(p) · GW(p)

Argumentation theorists have worked out models for structuring arguments. See, e.g. the Toulmin Model:

http://www-rohan.sdsu.edu/~digger/305/toulmin_model.htm

The problem with these models is that they can become a straitjacket - it might become too boring to argue using them. That said, I'm in principle all for making the structure of your arguments clearer and more explicit. Analytic philosophers have worked a lot on this, of course. Mathematical logic was in part invented for this purpose. Then again, even though it is good to have such tools available, it is sometimes better to write in ordinary prose without use of any special aids. You have to make a cost-benefit analysis: do you gain more than you lose by using a model such as Toulmin's or the one you envisage, or formal logic, than by writing in ordinary prose without any special aids?

Replies from: trist, chaosmage, adamzerner
comment by trist · 2014-02-24T22:57:27.074Z · LW(p) · GW(p)

The people who gain the most from structured arguments are the people who don't need to sift through ten blog posts and hundreds of comments. The gains for the writers are more along the lines of less time reiterating arguments in different contexts.

comment by chaosmage · 2014-02-25T12:29:46.152Z · LW(p) · GW(p)

Given a Toulmin-like model, shouldn't it be fairly easy to automate the process of writing it out as prose?

comment by Adam Zerner (adamzerner) · 2014-02-24T20:17:28.621Z · LW(p) · GW(p)

I agree that ordinary prose is needed sometimes. And that different situations call for different degrees of structuring to the argument. Which is why I think that it'd be very important to make this tool flexible.

comment by Gunnar_Zarncke · 2014-02-24T19:40:50.742Z · LW(p) · GW(p)

Whoever is going to implement this: there exists a wiki that seems near-optimally suited to build this, the SmallestFederatedWiki (see how it looks, the source on GitHub, and the Wired post about it). It is decentralized, simple to deploy, and most importantly easy to extend via scripting plugins. You should be able to easily add voting and a tree/graph view.

(I'm not affiliated with this, just toyed around with it. The code is readable and indeed small.)

comment by [deleted] · 2014-02-25T12:28:55.030Z · LW(p) · GW(p)

The Logical Structure of Objectivism is an example of a philosophical diagram. I am not an Objectivist, and the diagram and its philosophy are flawed, but it is an example of what you are proposing.

comment by Slider · 2014-02-25T09:39:45.470Z · LW(p) · GW(p)

I have had similar thoughts, and hereby announce a slight willingness to cooperate.

The main deviation from the mockup seems to be that I think prose is going to be dead weight to computers. Anything that requires participants to first read and figure out lots of prose will also miss out on computability. Thus I think the free prose should be reduced nearly to the level of simple tags; paragraphs surely won't do. The units should be language-independent but rendered (or approximated) to participants in their own language. Some "basic" relationships such as belief, implication, and conjunction are a given, but I think it would be vital for flexibility to be able to add new notions. Ideally the basic relationships would not be "built in" but could in principle be inputted from scratch by users. It's a little "make a big database and hope for the best". The critical part would be making it semi-legible and practicable to someone not versed in it.

The core of the notions would be various rules of inference. For example, the notion of modus ponens can be summed up as a rule regarding truths and implications. A rule on the same level would be appeal to authority. You could choose which notion principles you believe in (and they would be processed as beliefs as good as any). If many people do this (and set their beliefs as public), you could have a distribution of why people believe what they believe. You could also point new users toward arguments that have gone through for previous users who believed as they do (or maybe only those belief transitions that people have found positive). If an argument doesn't go through with you, you could point out why, which would allow further discussion. Actually, one argument could branch into several depending on who has issue with which side and counterpoint of it.
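As a toy sketch of the modus ponens idea, a rule over truths and implications can be applied by simple forward chaining over a belief set (not a proposal for the real representation, just an illustration of a machine-checkable inference rule):

```python
def forward_chain(facts, implications):
    """Repeatedly apply modus ponens: from P and (P -> Q), derive Q."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"it_rains"}
implications = [("it_rains", "ground_wet"), ("ground_wet", "slippery")]
print(sorted(forward_chain(facts, implications)))
# ['ground_wet', 'it_rains', 'slippery']
```

A user who rejects modus ponens (or accepts appeal to authority) would simply run a different set of rules over the same belief database.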

A big part of the cognitive work when using the system would be expressing a thought with such clarity that it can be explicitly marked up. However, this could be controlled by what kind of scope you allow for atomic notions. Say someone says they believe in some "-ism". That could be accepted as a simple belief->"-ism". Then, if big conflicts arise because of the ambiguity, people could offer more elaborate explicit expressions of it. Some of the "definitions" could be rejected or just fall out of fashion or focus. Explications could be a move in the game.

By keeping a record of which notion principles people believe in, you could play a game where you extract a set of principles foreign to your own and try to apply them to reach a particular conclusion. This could be done without direct live interaction. Another level of game goal could be that when a user with that set of principles logs in, they could entertain the argument and the system could check whether the argument is successful (with the user being the judge). One game could be that a person asks to have their beliefs checked for possibilities to derive "a" and "not a".

The tricky part is how to make the notion principles easy enough to express comprehensively, or coming up with the core or seed notion principles from which meaningful activity can be carried out.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2014-02-25T16:34:54.065Z · LW(p) · GW(p)

A big part of the cognitive work when using the system would be expressing a thought with such clarity that it can be explicitly marked up.

I think that that's too big an obstacle. And for that reason, I think it needs to be more lightweight. You could get a lot of the benefits of structure with it being lightweight, without pushing people away from using it.

comment by lmm · 2014-02-25T08:49:39.402Z · LW(p) · GW(p)

It's an interesting idea - but ideas are cheap. My advice is to make a working prototype as quickly as possible and see whether it feels useful.

comment by Douglas_Knight · 2014-02-25T03:35:40.517Z · LW(p) · GW(p)

I suggest first experimenting with maximum flexibility to figure out what kind of structure you really want. That is, experimenting in person with paper and pencil.

Replies from: mickytallor, adamzerner
comment by mickytallor · 2014-02-25T06:28:30.194Z · LW(p) · GW(p)

Nice, your logic is clear and straightforward.

comment by Adam Zerner (adamzerner) · 2014-02-25T03:45:54.458Z · LW(p) · GW(p)

Sounds like a good idea. I don't have the time or programming knowledge to build this now, but if no one else does, I'll get around to it eventually.

comment by [deleted] · 2014-02-24T23:57:54.214Z · LW(p) · GW(p)

I have often thought about methods of optimizing communication, mainly forums and blogs, since IRL is harder. My current interest is karma systems. For instance, I think it would be interesting to have three numbers: positive, negative, neutral. The goal of neutral is mainly to handle small ups or downs, mainly downs. For instance, if 1 person marks up, 40 mark neutral, and 3 mark down, you would be less likely to focus on the negatives. If your net score is negative 2 but neutral is 40, perhaps you merely pissed off reactionaries or feminists or some other group who is more motivated to throw down negatives or positives. I know I saw some people discussing what to do if you think a post is being unfairly downvoted but don't think it deserves an upvote.
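The three-number tally described above might be summarized like this (a rough sketch of the bookkeeping, using the 1-up/40-neutral/3-down example):

```python
from collections import Counter

def summarize_votes(votes):
    """votes: iterable of 'up' | 'neutral' | 'down' strings."""
    counts = Counter(votes)
    net = counts["up"] - counts["down"]
    return {"up": counts["up"], "neutral": counts["neutral"],
            "down": counts["down"], "net": net}

# The example from the comment: 1 up, 40 neutral, 3 down.
summary = summarize_votes(["up"] + ["neutral"] * 40 + ["down"] * 3)
print(summary)  # {'up': 1, 'neutral': 40, 'down': 3, 'net': -2}
```

The point of exposing all three counts rather than just the net is that a -2 score next to 40 neutrals reads very differently from a bare -2.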

This particular idea may not end up being supported by evidence as useful, but I think it's useful to think about such things even if you can't come up with a practically superior solution.

One argument against the above method is effort: will people bother with it? This is probably the most common complaint, that more effective methods (assuming they are more effective) will also require more effort.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2014-02-25T00:13:58.153Z · LW(p) · GW(p)

I agree - it should be something that people think about. And I like the idea of having neutral votes, for the reasons you say, and because I don't see much downside to it.

comment by [deleted] · 2014-02-27T22:30:00.401Z · LW(p) · GW(p)

I have a system like this, but slightly different.

Entries are structured as a DAG, where an edge between nodes indicates some degree of causal dependency. Each entry has a prediction market associated with it. Every entry is intended to be either a statement of existing fact, or a plan that can succeed or fail.
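A bare-bones version of that structure might look like the following (field names are guesses for illustration, not the actual system's):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entry:
    """A node in the planning DAG: a statement of fact, or a plan that can succeed/fail."""
    title: str
    parents: List["Entry"] = field(default_factory=list)  # causal dependencies
    market_probability: float = 0.5  # current prediction-market estimate of truth/success

fact = Entry("Team has two developers", market_probability=1.0)
plan = Entry("Ship prototype by June", parents=[fact], market_probability=0.4)
print(plan.parents[0].title)  # Team has two developers
```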

The software is pretty awful, though. It's written in PHP and is mostly unstable. I started rewriting it a long time ago in Python with Flask and haven't gotten far mostly due to a lack of time.

This is less for discussion, and more for planning. It was originally intended for group strategizing, but was never really used.

This will never be popular, though. You'll definitely not get anyone normal to use it. Plenty of systems like this exist (structured debate sites) and they're pretty tiny.

Also, using voting as the metric is pretty shitty for obvious reasons. Plenty of people agree with false things, especially rationalist/LW types (it's easier to convince yourself of what you already believe if you're better at convincing in general).

comment by Shmi (shminux) · 2014-02-24T17:32:44.122Z · LW(p) · GW(p)

Designing and coding a tool like that is probably easier than speccing it. If you posted a few sketches with use cases, scenarios and examples, someone might get excited enough to get involved.

Replies from: adamzerner, adamzerner
comment by Adam Zerner (adamzerner) · 2014-02-25T02:10:25.957Z · LW(p) · GW(p)

See the edit to the question for a mockup.

comment by Adam Zerner (adamzerner) · 2014-02-24T18:05:41.041Z · LW(p) · GW(p)

Sure, I'll probably do it later tonight.