Comments

Comment by drea on Circles of discussion · 2016-12-27T05:51:18.546Z · LW · GW

I would like to vote up this recommendation:

How about a technological solution for representing arguments with clarity so that both sides:

  • can see what is being said in clearly labeled propositions.
  • can identify errors in logic and mark them down.
  • can separate opinions from experimentally confirmed scientific facts.
  • can link to sources and have a way to recursively examine their 'truth rating' down to the most primary source (a rough sketch of this follows below).

This is an unexplored area, and it seems to me it would have a higher ROI than a deep dive into variations on voting/rating/reputation systems.
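
To make the recursive truth-rating idea concrete, here is a minimal sketch (hypothetical names and a deliberately crude scoring rule; this is not any existing platform's design): each proposition carries its own rating plus links to its sources, and its effective rating comes from walking the source chain all the way down to primary sources.

```python
# Minimal sketch of a recursive truth rating over an argument structure.
# All names and the min() scoring rule are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Proposition:
    text: str                        # the clearly labeled claim
    kind: str = "opinion"            # "opinion" | "fact" | "primary-source"
    truth_rating: float = 0.5        # 0.0 (refuted) .. 1.0 (confirmed)
    sources: list["Proposition"] = field(default_factory=list)

    def effective_rating(self) -> float:
        """Recursively examine sources: a proposition with no sources
        keeps its own rating; anything else is capped by the weakest
        rating found anywhere below it."""
        if not self.sources:
            return self.truth_rating
        return min(self.truth_rating,
                   min(s.effective_rating() for s in self.sources))

# Usage: a claim resting on a secondary source, which cites a primary one.
primary = Proposition("Raw dataset from the original experiment",
                      kind="primary-source", truth_rating=0.9)
summary = Proposition("Review article summarizing the experiment",
                      kind="fact", truth_rating=0.8, sources=[primary])
claim = Proposition("X causes Y", kind="fact",
                    truth_rating=0.95, sources=[summary])

print(claim.effective_rating())  # 0.8 -- capped by the weakest source
```

In this toy version a claim can never rate higher than its weakest source; a real system would want something more sophisticated than min(), but the recursive walk down to primary sources is the core of the idea.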

Comment by drea on On the importance of Less Wrong, or another single conversational locus · 2016-12-11T20:12:35.108Z · LW · GW

I see various people volunteering for different roles. I'd be interested in providing design research and user experience support, which would probably only be needed intermittently if we have someone acting as a product manager. It might also be nice to have someone in a lightweight graphic design role, which could be freelance.

Like ananda, I'm happy to do this as an open-contribution project rather than paid. I'll reach out to Vaniver via email.

Comment by drea on On the importance of Less Wrong, or another single conversational locus · 2016-12-11T19:59:36.728Z · LW · GW

I can see value in having LW as a prototype or scratch pad for simple modifications to existing discussion platforms (e.g. the improved moderator powers discussed above). Arbital can then do the harder work of building a collaborative truth-seeking platform, adding features to support, for example, Double Crux, fine-typed voting, or evidence (rather than comments).

Perhaps in the end there's a symbiosis, where LW is for discussion, and when a topic comes up that needs truth-seeking it's moved to Arbital. That frees Arbital from having to include a solved problem in its code base.

Comment by drea on Double Crux — A Strategy for Resolving Disagreement · 2016-12-11T19:20:25.196Z · LW · GW

Nice association.

I see this model as building on Laddering or the XY problem, because it also looks for a means of falsification.

It's closer to a two-sided use of Eric Ries' Lean Startup (the more scientific version), where a crux corresponds to a leap-of-faith assumption (LoFA). I've called the LoFA a "leap of faith hypothesis": your goal is to find the data that would tell you the assumption is wrong.

Another product design thinker with a similar approach is Tom Chi, who uses a conjecture -> experiment -> actuals -> decision framework.

In all of these methods, the hard work/thinking lies in actually finding a crux and figuring out how to falsify it. Having an "opponent" to collaborate with may make us better at this.

Comment by drea on Open thread, Dec. 05 - Dec. 11, 2016 · 2016-12-11T19:02:41.500Z · LW · GW

Thank you! I needed that work-around.