Which areas of rationality are underexplored? - Discussion Thread

post by casebash · 2016-12-01T22:05:27.780Z · LW · GW · Legacy · 69 comments

There seems to be real momentum behind this attempt at reviving Less Wrong. One of the oldest issues on LW has been the lack of content. For this reason, I thought it might be worthwhile to open a thread where people can suggest how to expand the scope of what people write about, so that we have sufficient content.

Does anyone have any ideas about which areas of rationality are underexplored? Please only list one area per comment.

69 comments

Comments sorted by top scores.

comment by gjm · 2016-12-01T22:39:31.127Z · LW(p) · GW(p)

(This is in the same general area as casebash's two suggestions, but I think it's different enough to be worth calling out separately.)

Most of the material on LW is about individual rationality: How can I think more clearly, approximate the truth better, achieve my goals? But an awful lot of what happens in the world is done not by individuals but by groups. Sometimes a single person is solely responsible for the group's aims and decision-making, in which case their individual rationality is what matters, but often not. How can we get better at group rationality?

(Some aspects of this will likely be better explored for commercial gain than for individual rationality, since many businesses have ample resources and strong motivation to spend them if the ROI is good; I bet there are any number of groups out there offering training in brainstorming and project planning, for instance. But I bet there's plenty of underexplored group-rationality memespace.)

Replies from: Viliam
comment by Viliam · 2016-12-05T14:13:54.373Z · LW(p) · GW(p)

One simple idea is to make a list of people who seem individually rational to you, and ask them what their areas of expertise are. Then, if you have a question related to the area, ask them. (An equivalent of "use google" or "ask at StackExchange", but perhaps better for questions where there is a lot of misinformation out there, or where your question would be dismissed as "too open" on SE, or where you want to find out about your unknown unknowns, etc.) If people start doing this regularly, then having an expert in the group will automatically increase the whole group's expertise. Most people don't mind talking about their hobbies; but with rationalists you may get the extra advantage of them telling you "actually, I don't know" when they happen not to know.

For instrumental rationality, find a group of people who actually want to improve at instrumental rationality (as opposed to people who merely visit LW to kill time), and create a private discussion. It's better if you can also see each other in real life, for example at meetups.

Robin Hanson would probably recommend having an internal prediction market and using it frequently. But that can create perverse incentives if you bet on stuff you can influence. (Maybe there is a way to fix this, but that needs to be considered specifically. You want a situation where people can benefit from helping a project, but not from sabotaging it. Like, when you believe the project will fail, the optimal strategy would be to abstain from betting on it, not to bet against it. But the people who bet on the project would still lose their points if the project fails. It's just that no one can gain points from a failing project.) It would probably be more useful when the group gets larger, so that people can bet on things they personally don't influence.
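A minimal sketch of the asymmetric scoring Viliam describes might look like the following (the class, names, and payout numbers are all illustrative assumptions, not a worked-out mechanism):

```python
# Toy sketch of an internal "prediction market" where members can only back
# projects, never short them: backers gain points if the project succeeds
# and lose their stake if it fails, so nobody can profit from sabotage.

class ProjectMarket:
    def __init__(self):
        self.stakes = {}  # project -> {member: points staked}

    def back(self, project, member, points):
        """Stake points on a project succeeding (there is no 'against' bet)."""
        book = self.stakes.setdefault(project, {})
        book[member] = book.get(member, 0) + points

    def resolve(self, project, succeeded, payout_ratio=1.0):
        """Return each member's point change when the project resolves."""
        results = {}
        for member, stake in self.stakes.pop(project, {}).items():
            # Success pays out; failure only burns the stake.
            results[member] = stake * payout_ratio if succeeded else -stake
        return results

market = ProjectMarket()
market.back("new-wiki", "alice", 10)
market.back("new-wiki", "bob", 5)
print(market.resolve("new-wiki", succeeded=False))  # {'alice': -10, 'bob': -5}
```

Whether the payout ratio should be fixed or depend on the number of backers is exactly the kind of detail that would need the careful design Viliam mentions.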

comment by moridinamael · 2016-12-02T14:53:18.975Z · LW(p) · GW(p)

Optimizing group norms for effectiveness. Could also be phrased as "team-level rationality."

There are certain group norms (or "cultural practices or attitudes") that are generally good to have in place, irrespective of what the goal of the group is. Many of these are so obvious and natural that almost all human cultures develop them organically. Some of them are more controversial, because they border on politicized topics. Some of them are yet undiscovered.

I would further editorialize that Less Wrong has historically been paralyzed by insinuations of phygishness whenever the topic of optimizing for group norms comes up. I find this annoying. You can't have the results of the (fictional) Bene Gesserit or Mentats, or the Beisutsukai Order, or for that matter the (actual) Navy SEALS, NASA Apollo program, gold medalist Olympic team, or McKinsey-level consulting firm without committing to the idea that you're going to be establishing a novel set of group norms geared toward optimizing some specific purpose.

As a group we're going to find it difficult to obtain extraordinary results if we rely on ordinary cultural technologies.

Replies from: oooo
comment by oooo · 2016-12-05T06:56:16.201Z · LW(p) · GW(p)

I upvoted you because the term "team-level rationality" piqued my interest. Is "team-level" or "group rationality" emphasized or taught in follow-on CFAR workshops?

This seems like a potential area of low-hanging fruit where existing "executive team coaching program" content could be adapted. Somebody hypothesized that the growing popularity of local meetups and professional growth is sapping LW readership. Group effectiveness content, especially in the context of the world-class teams/names/organizations that you listed, could potentially be implemented immediately in local meetups and in professional capacities.

I don't doubt, however, that group effectiveness content is even harder for a community to generate than individual rationality content has been: there is a perceived smaller set of individuals capable of proven and effective group enhancement, longer timeframes are needed to realize group results and outline group experiments, and it takes both the will and the capability to explain said technique progressions.

EDIT: Kahneman's "Thinking, Fast and Slow" has an anecdote about the "leaderless group challenge" as the inspiration for his illusion of validity cognitive bias. The group challenge is an example of the type of activity, often described as "team building exercises", that could be adapted specifically to raise a small group's collective acuity and coordination effectiveness. As far as I'm aware, no widely available content exists outside of business, military, or other domain-specific niches.

Another indirect tangent is the "Checklist Manifesto" by Atul Gawande, drawing on his experience with medical errors (especially in high-performing OR units). Although this is a huge step in the right direction, it still doesn't quite get to the root of formulating and internalizing a set of practices specific to enhancing collective effectiveness (even in small groups).

comment by sen · 2016-12-03T07:50:27.817Z · LW(p) · GW(p)

Non-bayesian reasoning. Seriously, pretty much everything here is about experimentation, conditional probabilities, and logical fallacies, and all of the above are derived from bayesian reasoning. Yes, these things are important, but there's more to science and modeling than learning to deal with uncertainty.

Take a look at the Wikipedia page on the Standard Model of particle physics, and count the number of times uncertainty and bayesian reasoning are mentioned. If your number is greater than zero, then they must have changed the page recently. Bayesian reasoning tells you what to expect given an existing set of beliefs. It doesn't tell you how to develop those underlying beliefs in the first place. For much of physics, that's pretty much squarely in the domain of group theory / symmetry. It's ironic that a group so heavily based on the sciences doesn't mention this at all.

Rationality is about more than empirical studies. It's about developing sensible models of the world. It's about conveying sensible models to people in ways that they'll understand them. It's about convincing people that your model is better than theirs, sometimes without having to do an experiment.

It's not like these things aren't well-studied. It's called math, and it's been studied for thousands of years. Everything on this site focuses on one tiny branch, and there's so much more out there.

Apologies for the rant. This has been bugging me for a while now. I tried to create a thread on this a little while ago and ran into the karma limitation. I didn't want to deal with it at the time, and now it's all coming back to me, rage and all.

Also, this discussion topic is suboptimal if your aim is to explore new areas of rationality, as it presumes that all unexplored areas will arise from direct discussion. It should have been paired with the question "How do we discover underexplored areas of rationality?" My answer to that is to encourage non-rational discussion where people believe, intuitively or otherwise, that it should be possible to make the discussion rational. You're not going to discover the boundaries of rationality by always staying within them. You need to look both outside and inside to see where the boundary might lie, and you need to understand non-rationality if you want any hope of expanding the boundaries of rationality.

End rant.

Replies from: Johannes_Treutlein, btrettel, TheAncientGeek, WhySpace_duplicate0.9261692129075527, ChristianKl, turchin
comment by Johannes Treutlein (Johannes_Treutlein) · 2017-01-10T13:22:50.416Z · LW(p) · GW(p)

Rationality is about more than empirical studies. It's about developing sensible models of the world. It's about conveying sensible models to people in ways that they'll understand them. It's about convincing people that your model is better than theirs, sometimes without having to do an experiment.

Hmm, I'm not sure I understand what you mean. Maybe I'm missing something? Isn't this exactly what Bayesianism is about? Bayesianism is just using the laws of probability theory to build an understanding of the world, given all the evidence that we encounter. Of course that's at the core just plain math. E.g., when Albert Einstein thought of relativity, that was an insight arrived at without doing any experiment, yet it is perfectly in accordance with Bayesianism.

Bayesian probability theory seems to be all we need to find out truths about the universe. In this framework, we can explain stuff like "Occam's Razor" in a formal way, and we can even include Popperian reasoning as a special case (a hypothesis has to condense probability mass on some of the outcomes in order to be useful. If we then receive evidence that would have been very unlikely given the hypothesis, we shift down the hypothesis' probability a lot (= falsification). If we receive confirming evidence that could have been explained just as well by other theories, this only slightly upshifts our probability; see EY's introduction.) But maybe this is not the point that you were trying to make?
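(To make the "falsification as a special case" point concrete, here is the standard calculation; the worked numbers are mine, not part of the original comment.)

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]

With a 50/50 prior: if the observed evidence is nearly impossible under $H$, say $P(E \mid H) = 0.01$ while $P(E \mid \neg H) = 0.5$, the posterior collapses to $0.005/0.255 \approx 0.02$, which is Popperian falsification expressed as an extreme Bayesian update. If instead the evidence is explained almost as well by rival theories, say $P(E \mid H) = 0.6$ versus $P(E \mid \neg H) = 0.5$, the posterior only creeps up to $0.3/0.55 \approx 0.55$.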

I also think that EY is not Bayesian sometimes. He often assigns something 100 percent probability without any empirical evidence, but because of the simplicity and beauty of the theory; for example, that MWI is the correct interpretation of QM. But if you put 0 probability on something (the other interpretations), it can't be updated by any evidence.

Hmm, I'm quite confident (not 100%) that he's just assigning a very high probability to it, since it seems to be the way more parsimonious and computationally "shorter" explanation, but of course not 100% :) (see Occam's razor link above for why Bayesians give shorter explanations more a priori credence.)

Regarding Kuhnianism: Maybe it's a good theory of how the social progress of science works, but how does it help me with having more accurate beliefs about the world? I don't know much about it, so would be curious about relevant information! :)

comment by btrettel · 2016-12-08T05:22:20.480Z · LW(p) · GW(p)

Is there a single book or resource you would recommend for learning how group theory/symmetry can be used to develop theories and models?

I work in fluid dynamics, and I've mainly seen group theory/symmetry mentioned when forming simplifying coordinate transformations. Fluid dynamicists call these "dimensionless parameters" or "similarity variables". I am certain other fields use different terminology.

Replies from: sen
comment by sen · 2016-12-09T10:44:07.842Z · LW(p) · GW(p)

See my response below to WhySpace on getting started with group theory through category theory. For any space-oriented field, I also recommend looking at the topological definition of a space. Also, for any calculus-heavy field, I recommend meditating on the Method of Lagrange Multipliers if you don't already have a visual grasp of it.

I don't know of any resource that tackles the problem of developing models via group theory. Developing models is a problem of stating and applying analogies, which is a problem in category theory. If you want to understand that better, you can look through the various classifications of functors since the notion of a functor translates pretty accurately to "analogy".

I have no background in fluid dynamics, so please filter everything I say here through your own understanding, and please correct me if I'm wrong somewhere.

I don't think there's any inherent relationship between dimensionless parameters and group theory. The reason being that dimensionless quantities can refer to too many things (i.e., they're not really dimensionless, and different dimensionlessnesses have different properties... or rather they may be dimensionless, but they're not typeless). Consider that the !∘sqrt∘ln of a dimensionless quantity is also technically a dimensionless quantity while also being almost-certainly useless and uninterpretable. I suppose if you can rewrite an equation in terms of dimensionless quantities whose relationships are restricted to have certain properties, then you can treat them like other well-known objects, and you can throw way more math at them.

For example, suppose your "dimensionless" quantity is a scaling parameter such that scale * scale → scale (the product of two scaling operations is equivalent to a single scaling operation). By converting your values to scales, you've gained a new operation to work with due to not having to re-translate your quantities on each successive multiplication: element-wise exponentiation. I'd personally see that as a gateway to applying generating series (because who doesn't love generating series?), but I guess a more mechanics-y application of that would be solving differential equations, which often require exponentiating things.

Any time you have a set of X quantities that can be applied to one another to get another of the X quantities, you have a group of some sort (with some exceptions). That's what's going on with the scaling example (x * x → x), and that's what's not going on with the !∘sqrt∘ln example. The scaling example just happens to be a particularly simple example of a group. You get less trivial examples when you have multiple "dimensionless" quantities that can interact with one another in standard ways. For example, if vector addition, scaling, and dot products are sensible, your vectors can form a Hilbert space, and you can use wonderful things like angles and vector calculus to meaningful effect.

I can probably give a better answer if I know more precisely what you're referring to. Do you have examples of fluid dynamicists simplifying equations and citing group theory as the justification?

Replies from: btrettel
comment by btrettel · 2016-12-10T01:26:57.651Z · LW(p) · GW(p)

Thanks for the detailed reply, sen. I don't follow everything you said, but I'll take a look at your recommendations and see after that.

I can probably give a better answer if I know more precisely what you're referring to. Do you have examples of fluid dynamicists simplifying equations and citing group theory as the justification?

Unfortunately, the subject is rather disjoint. Most fluid dynamicists would have no idea that group theory is relevant. My impression is that some mathematicians have interpreted what fluid dynamicists have done for a long time in terms of group theory, and extended their methods. Fluid dynamicists call the approach "dimensional analysis" if you reduce the number of input parameters, or "similarity analysis" if you reduce the number of independent variables of a differential equation (more on the latter later).

The goal generally is dimension reduction. For example, if you are to perform a simple factorial experiment with 3 variables and you want to sample 8 different values of each variable, you have 8^3 = 512 samples to make, and that's not even considering doing multiple trials. But, if you can determine a coordinate transformation which reduces those 3 variables to 1, then you only have 8 samples to make.

The Buckingham Pi theorem allows you to determine how many dimensionless variables are needed to fully specify the problem if you start with dimensional quantities. (If everything is dimensionless to begin with, there's no benefit from this technique, but other techniques might have benefit.)
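(A standard textbook illustration, added here for concreteness rather than taken from the comment above: suppose the period $t$ of a simple pendulum depends on its length $\ell$, its mass $m$, and gravitational acceleration $g$. That is four quantities built from three base dimensions,

\[
t\,[\mathrm{T}], \quad \ell\,[\mathrm{L}], \quad m\,[\mathrm{M}], \quad g\,[\mathrm{L\,T^{-2}}],
\]

so the Buckingham Pi theorem says there is $4 - 3 = 1$ dimensionless group, e.g. $\Pi = t\sqrt{g/\ell}$. Since $m$ is the only variable carrying mass, it cannot appear in any dimensionless combination, so $\Pi$ must be a constant and $t \propto \sqrt{\ell/g}$, all without ever writing down an equation of motion.)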

For a long list of examples of the dimensionless quantities, see Wikipedia. The Reynolds number is the most well known of these. (Also, contrary to common understanding, the Reynolds number doesn't really say anything about "how turbulent" a flow is; rather, it would be better thought of as a way to characterize the instability of a flow. There are multiple ways to measure "how turbulent" a flow is.)

For a "similarity variable", I'm not sure what the best place to point them out would be. Here's one example, though: If you take the 1D unbounded heat equation and change coordinates to \eta = x / \sqrt{\alpha t} (\alpha is the thermal diffusivity), you'll find the PDE is reduced to an ODE, and solution should be much easier now. The derivation of the reduction to an ODE is not on Wikipedia, but it is very straightforward.

Dimensional analysis is really only taught to engineers working on fluid mechanics and heat transfer. I am continually surprised by how few people are aware of it. It should be part of the undergraduate curriculum for any degree in physics. Statisticians, particularly those who work in experimental design, also should know it. Here's an interesting video of a talk with an application of dimensional analysis to experimental design. As I recall, one of the questions asked after the talk related the approach to Lie groups.

For an engineering viewpoint, I'd recommend Langhaar's book. This book does not discuss similarity variables, however. For something bridging the more mathematical and engineering viewpoints I have one recommendation. I haven't looked at this book, but it's one of the few I could find which discusses both the Buckingham Pi theorem and Lie groups. For something purely on the group theory side, see Olver's book.

Anyhow, I asked about this because I get the impression from some physicists that there's more to applications of group theory to building models than what I've seen.

Consider that the !∘sqrt∘ln of a dimensionless quantity is also technically a dimensionless quantity while also being almost-certainly useless and uninterpretable.

This is an important realization. The Buckingham Pi theorem doesn't tell you which dimensionless variables are "valid" or "useful", just the number of them needed to fully specify the problem. Whether or not a dimensionless number is "valid" or "useful" depends on what you are interested in.

Edit: Fixed some typos.

Replies from: sen
comment by sen · 2016-12-10T13:02:16.688Z · LW(p) · GW(p)

Regarding the Buckingham Pi Theorem (BPT), I think I can double my recommendation that you try to understand the Method of Lagrange Multipliers (MLM) visually. I'll try to explain in the following paragraph knowing that it won't make much sense on first reading.

For the Method of Lagrange Multipliers, suppose you have some number of equations in n variables. Consider the n-dimensional space containing the set of all solutions to those equations. The set of solutions describes a k-dimensional manifold (meaning the surface of the manifold forms a k-dimensional space), where k depends on the number of independent equations you have. The set of all points perpendicular to this manifold (the null space, or the space of points that, projected onto the manifold, give the zero vector) can be described by an (n-k)-dimensional space. Any (n-k)-dimensional space can be generated (by vector scaling and vector addition) from (n-k) independent vectors. For the Buckingham Pi Theorem, replace each vector with a matrix/group, vector scaling with exponentiation, and vector addition with multiplication. Your Buckingham Pi exponents are Lagrange multipliers, and your Pi groups are Lagrange perpendicular vectors (the gradient/normal vectors of your constraints/dimensions).
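(In more standard linear-algebra language, my paraphrase of the correspondence being drawn here: write the dimensions of each variable $q_i$ as an exponent vector over the base dimensions (mass, length, time, ...), and let $D$ be the matrix whose columns are those vectors. Then

\[
\prod_i q_i^{a_i} \ \text{is dimensionless} \iff D\mathbf{a} = \mathbf{0},
\]

so the Pi groups correspond to a basis of the null space of $D$, and there are $n - \operatorname{rank}(D)$ of them, which is the Buckingham Pi counting rule.)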

I guess in that sense, I can see why people would make the jump to Lie groups. The Pi groups / basis vectors form the generator of any other vector in that dimensionless space, and they're obviously invertible. Honestly, I haven't spent much time with Lie groups and Lie algebras, so I can't tell you why they're useful. If my earlier explanation of dimensionless quantities holds (which, after seeing the Buckingham Pi Theorem, I'm even more convinced it does), then it has something to do with symmetry with respect to scale. The reason I say "scale" as opposed to any other x * x → x quantity is that the scale kind of dimensionlessness seems to pop up in a lot of dimensionless quantities specific to fluid dynamics, including the Reynolds number.

Sorry, I know that didn't make much sense. I'm pretty sure it will though once you go through the recommendations in my earlier reply.

Regarding the Reynolds number, I suspect you're not going to see the difference between the dimensional and the dimensionless quantities until you try solving that differential equation at the bottom of the page. Try it both with and without converting to dimensionless quantities, and make sure to keep track of the semantics of each term as you go through the process. Here's one that's worked out for the dimensionless case. If you try solving it for the non-dimensionless case, you should see the problem.

It's getting really late. I'll go through your comments on similarity variables in a later reply.

Thanks for the references and your comments. I've learned a lot from this discussion.

Replies from: btrettel
comment by btrettel · 2016-12-10T21:49:09.864Z · LW(p) · GW(p)

Glad to help. I'll go through your recommendations later this month when I have more time.

Replies from: None
comment by [deleted] · 2016-12-12T13:25:23.553Z · LW(p) · GW(p)

Could you guys cooperate or something and write an intro Discussion or Main post on this for landlubbers? Pretty please?

I have glanced at a very brief introductory article on dim.an. in regard to the Reynolds number, when I wondered whether I could model the dissemination of fern spores within a ribbon-shaped population, or simply read about such a model, but it all seemed like so much trouble. And even worse, I had a weird feeling like 'oh this has to be so noisy, how do they even know how the errors are combined in these new parameters? Surely they don't just sum.'

(Um, a datapoint from a non-mathy person, I think I'm not alone in this.)

Replies from: btrettel
comment by btrettel · 2016-12-12T21:16:54.937Z · LW(p) · GW(p)

Sure, I'd be interested in writing an article on dimensional analysis and scaling in general. I might have time over my winter break. It's also worth noting that I posted on dimensional analysis before. Dimensional analysis is not as popular as principal components analysis, despite being much easier, and I think this is unfortunate.

I don't know what a "ribbon-shaped population" is, but I imagine that fern spores are blown off by wind and then dispersed by a combination of wind and turbulence. Turbulent dispersion of particles is essentially an entire field by itself. I have some experience in it from modeling water droplet trajectories for fire suppression, so I might be able to help you more, assuming I understand your problem correctly. Feel free to send me a message on here if you'd like help.

And even worse, I had a weird feeling like 'oh this has to be so noisy, how do they even know how the errors are combined in these new parameters? Surely they don't just sum.'

Could you explain this a little more? I'm not exactly following.

Because dimensional homogeneity is a requirement for physical models, any series of independent dimensionless variables you construct should be "correct" in a strict sense, but they are not unique, and consequently you might not naively pick "useful" variables. If this doesn't make sense, then I could explain in more detail or differently.

Replies from: None
comment by [deleted] · 2016-12-14T16:02:59.536Z · LW(p) · GW(p)

Yes, I remember that post. It was 'almost interesting' to me, because it is beyond my actual knowledge. So, if you could just maybe make it less scary, we landlubbers would love you to bits. If you'd like.

I agree about the wind and the turbulence, which is somewhat "dampened" by the prolonged period of spore dissemination and the possibility (I don't know how real) of re-dissemination of the ones that "didn't stick" the first time. The thing I am (was) most interested in - how fertilization occurs in the new organisms growing from the spores - is further complicated by the motility of sperm and the relatively big window of opportunity (probably several seasons)... so I am not sure if modeling the dissemination has any value, but still. This part is at least above-ground. It's really an example of looking for your keys under a lamplight.

re: errors. I mean that it seemed to me (probably wrongly) that if you measure a bunch of variables, and try to make a model from them, then realise you only want a few and the others can be screwed together into a dimensionless 'thing', then how do you know the, well, 'bounds of correctness' of the dimensionless thing? It was built from imperfect measurements that carried errors in them; where do the errors go when you combine variables into something new? (I mean, it is a silly question, but i haz it.)

('ribbon-shaped population' was my clumsy way of describing a long and narrow, but relatively uninterrupted population of plants that stretches along a certain landscape feature, like a beach. I can't recall the real word right now.)

Replies from: btrettel
comment by btrettel · 2016-12-15T22:33:30.685Z · LW(p) · GW(p)

Romashka, I appreciate the reply.

Yes, I remember that post. It was 'almost interesting' to me, because it is beyond my actual knowledge. So, if you could just maybe make it less scary, we landlubbers would love you to bits. If you'd like.

If you don't mind, could you highlight which parts you thought were too difficult?

Aside from adding more details, examples, and illustrations, I'm not sure what I could change. I will have to think about this more.

re: errors. I mean that it seemed to me (probably wrongly) that if you measure a bunch of variables, and try to make a model from them, then realise you only want a few and the others can be screwed together into a dimensionless 'thing', then how do you know the, well, 'bounds of correctness' of the dimensionless thing? It was built from imperfect measurements that carried errors in them; where do the errors go when you combine variables into something new? (I mean, it is a silly question, but i haz it.)

This is an important question to ask. After non-dimensionalizing the data and plotting it, if there aren't large gaps in the coverage of any dimensionless independent variable, then you can just use the ranges of the dimensionless independent variables.
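(On the narrower "where do the errors go" part of the question, the standard first-order propagation-of-uncertainty result, added here as an aside rather than quoted from the reply: for a power-law group built from independently measured quantities,

\[
\Pi = \prod_i x_i^{a_i} \quad\Rightarrow\quad \left(\frac{\delta\Pi}{\Pi}\right)^2 \approx \sum_i a_i^2 \left(\frac{\delta x_i}{x_i}\right)^2,
\]

i.e. the relative errors combine in quadrature, weighted by the exponents.)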

I could add some plots showing this more obviously in a discussion post.

Here are some example correlations from heat transfer. Engineers did heat transfer experiments in pipes and measured the heat flux as a function of different velocities. They then converted heat flux into the Nusselt number and the velocity/pipe diameter/viscosity into the Reynolds number, and had another term called the Prandtl number. There are plots of these experiments in the literature, and you can see where the data for the correlation starts and ends. As you do not always have a clear idea of what happens outside the data (unless you have a theory), this is usually where the limits come from.
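One widely used example of such a correlation (named here for concreteness; the comment above does not cite a specific one) is the Dittus-Boelter equation for turbulent flow in smooth pipes,

\[
\mathrm{Nu} = 0.023\,\mathrm{Re}^{0.8}\,\mathrm{Pr}^{0.4} \quad (\text{fluid being heated}),
\]

typically quoted as valid for roughly $\mathrm{Re} > 10^4$ and $0.6 < \mathrm{Pr} < 160$; those stated ranges are exactly the data-coverage limits described above.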

comment by TheAncientGeek · 2016-12-11T13:00:20.660Z · LW(p) · GW(p)

Bayesian reasoning tells you what to expect given an existing set of beliefs. It doesn't tell you how to develop those underlying beliefs in the first place

That's a very important point, and it is a pity that everyone decided to focus on the narrower point about physics.

There's a wider point, still, about ontological radicalism, doubling back, paradigm shifts and all that Kuhnian stuff that's completely missed by emphasising Bayes, and thereby implying that everything is a linear stepwise refinement of models under evidence.

comment by WhySpace_duplicate0.9261692129075527 · 2016-12-05T17:07:52.322Z · LW(p) · GW(p)

group theory / symmetry

The Wikipedia page for group theory seems fairly impenetrable. Do you have a link you'd recommend as a good place to get one’s feet wet in the topic? Same with symmetry.

Thanks!

Replies from: sen
comment by sen · 2016-12-06T06:19:54.539Z · LW(p) · GW(p)

"Group" is a generalization of "symmetry" in the common sense.

I can explain group theory pretty simply, but I'm going to suggest something else. Start with category theory. It is doable, and it will give you the magical ability of understanding many math pages on Wikipedia, or at least the hope of being able to understand them. I cannot overstate how large an advantage this gives you when trying to understand mathematical concepts. Also, I don't believe starting with group theory will give you any advantage when trying to understand category theory, and you're going to want to understand category theory if you're interested in reasoning.

When I was getting started with category theory, I went back and forth between several pages (Category Theory, Functor, Universal Property, Universal Object, Limits, Adjoint Functors, Monomorphism, Epimorphism). Here are some of the insights that made things click for me:

  • An "object" in category theory corresponds to a set in set theory. If you're a programmer, it's easier to think of a single categorical object as a collection (class) of OOP objects. It's also valid and occasionally useful to think of a single categorical object as a single OOP object (e.g., a collection of fields).
  • A "morphism" in category theory corresponds to a function in set theory. If you think of a categorical object as a collection of OOP objects, then a morphism takes as input a single OOP object at a time.
  • It's perfectly valid for a diagram to contain the same categorical object twice. Diagrams only show relations, and it's perfectly valid for an OOP object to be related to another OOP object of the same class. When looking at commutative diagrams that seem to contain the same categorical object twice, think of them as distinct categorical objects.
  • Diagrams don't only show relationships between OOP objects. They can also show relationships between categorical objects. For example, a diagram might state that there is a bijection between two categorical objects.
  • You're not always going to have a natural transformation between two functors of the same category.
  • When trying to understand universal properties, the following mapping is useful (look at the diagrams on Wikipedia): A is the Platonic Form of Y, U is a fire that projects only some subset of the aspects of being like A.
  • The duality between categorical objects and OOP objects is critical to understanding the difference between any diagram and its dual (reversed-morphisms). Recognizing this makes it much easier to understand limits and colimits.

Once you understand these things, you'll have the basic language down to understand group theory without much difficulty.
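A minimal code sketch of the object/morphism/functor vocabulary above (illustrative only; the original comment contains no code). Here the "analogy" is the list functor: it sends every type A to lists of A and every function f : A -> B to its elementwise lift, and it preserves composition.

```python
# Illustrative sketch: the list functor.
# Objects: types A, B.  Morphisms: ordinary functions f : A -> B.
# The functor maps A to "list of A" and f to fmap(f), which applies f
# elementwise; composing then lifting equals lifting then composing.

from typing import Callable, List, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def fmap(f: Callable[[A], B]) -> Callable[[List[A]], List[B]]:
    """Lift a function on elements to a function on lists."""
    return lambda xs: [f(x) for x in xs]

def compose(g: Callable[[B], C], f: Callable[[A], B]) -> Callable[[A], C]:
    return lambda x: g(f(x))

double = lambda n: n * 2
shout = lambda n: f"{n}!"

xs = [1, 2, 3]
# Functor law: lifting a composite is the same as composing the lifts.
assert fmap(compose(shout, double))(xs) == fmap(shout)(fmap(double)(xs))
print(fmap(compose(shout, double))(xs))  # ['2!', '4!', '6!']
```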

comment by ChristianKl · 2016-12-04T09:42:42.500Z · LW(p) · GW(p)

If you look at the recent posts, Double Crux is not about Bayesian reasoning.

Discussions about system 1 and system 2 and how to have the two in sync are not about Bayesian reasoning either.

There are also many other topics that are not about Bayes.

Replies from: sen
comment by sen · 2016-12-04T11:35:14.458Z · LW(p) · GW(p)

I don't think System 1 and 2 are relevant, since they're not areas of rationality; it's the difference between a design and an implementation. I don't think this thread is about implementation optimizations, and I do see numerous threads on that topic.

Regarding double crux, I actually don't see that when I browse through the recent threads, even going back several pages. Through the site search, I was able to find another post that links to a November 29th thread, which I think is the one you're talking about.

Here's an excerpt from that double crux thread.

Ideally, B is a statement that is somewhat closer to reality than A—it's more concrete, grounded, well-defined, discoverable, etc. It's less about principles and summed-up, induced conclusions, and more of a glimpse into the structure that led to those conclusions.

(It doesn't have to be concrete and discoverable, though—often after finding B it's productive to start over in search of a C, and then a D, and then an E, and so forth, until you end up with something you can research or run an experiment on).

That's not out of context. The entire game description and recommendations are written with the focal point of increasing precision and making beliefs more concrete.

I want you to take the time to seriously consider whether you think I'm crazy for thinking that "increasing precision" and "making beliefs more concrete" could possibly be a bad thing when trying to understand how someone thinks. Think about what your gut reaction was when you read that. Think about what alternative there could be. Please don't read on until you're sure I'm just trolling, so maybe you can see how screwed up this place is.


How about doing the exact opposite? How about making things less precise? How about throwing away useless structure and making it easier to reason by analogy, thereby letting people expose the full brunt of their intuition and experience that really leads to their beliefs? How about making beliefs less concrete, and therefore more abstract, more general, and easier to see relationships in other domains?

If you convince someone that A really might not lead to B and that there are n experiments you could use to tell, whoopee do, they are literally never going to use that again. If you discover that you believe uniforms lead to bullying because you mentally model social dynamics as particle systems, and bullying as a problem that occurs in high-chaos environments, and that uniforms go a long way in cooling the system thereby reducing the chaos and bullying... That's probably going to stick with you for a while, despite being a complete ungrounded non-sequitur.

comment by turchin · 2016-12-03T15:18:51.245Z · LW(p) · GW(p)

I remember there was a poll on LW about how people use Bayes' theorem in practical life (can't find the link). There were only a few answers about actual practical usage. There are not many practical situations where it is useful.

But it is good as a symbol of group membership and also in internet discussion.

I also think that EY is not Bayesian sometimes. He often assigns something 100 percent probability without any empirical evidence, but because of the simplicity and beauty of the theory; for example, that MWI is the correct interpretation of QM. But if you put 0 probability on something (the other interpretations), it can't be updated by any evidence. He also did it when he said that a self-improving paperclip maximizer is the main risk of AI. But there are other risks of AI which are also deadly (I counted around 100).

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-12-11T12:45:18.754Z · LW(p) · GW(p)

But it is good as a symbol of group membership

Is it good to think Bayes is this wonderful summum bonum of rationality, and not even notice how little use you yourself are making of it?

also in internet discussion.

Is it good to come across to someone with a pluralistic understanding of reasoning as a dogmatist?

I also think that EY is not Bayesian sometimes.

Elephant spotted.

comment by casebash · 2016-12-01T22:10:50.846Z · LW(p) · GW(p)

Social skills - these skills are incredibly important for actually getting anything done in the real world. The biggest issue I see with discussing this topic is that it will inevitably lead to discussion of PUA. This will force us to either censor the conversation or have people put off from Less Wrong by it. In particular, it could cause fewer women to contribute to this site.

Replies from: root, scarcegreengrass
comment by root · 2016-12-02T21:11:14.838Z · LW(p) · GW(p)

I vaguely remember a comment made by Vladimir_M, citing PUA as 'the elephant in the room'. I'd imagine there's some variant of Godwin's law in which someone will eventually say 'hey, why does nobody care about the elephant in the room?', so maybe the question should be 'Are we fully prepared and able to debunk PUA beliefs?'.

Replies from: Viliam, ChristianKl
comment by Viliam · 2016-12-03T19:34:28.955Z · LW(p) · GW(p)

maybe the question should be 'Are we fully prepared and able to debunk PUA beliefs?'.

No, it definitely shouldn't be.

First, you already have the bottom line written.

Second, beliefs are true or false individually; if you put a large set of beliefs (I am not even sure what exactly qualifies as a "PUA belief" these days) in one package, and try to reject the whole package (or accept it), you will almost certainly acquire some false beliefs.

Third, framing the statements as somehow belonging to an outgroup already removes rationality from the debate. (Also, what happens if some belief is shared, for example, by both evolutionary psychologists and PUAs? Do these also get dismissed as "PUA beliefs"? What if PUAs also believe that 2+2=4? Because I suspect many of them do.)

Replies from: root
comment by root · 2016-12-04T00:55:17.513Z · LW(p) · GW(p)

Do we really need to take the whole package in? If we have (n) beliefs, some number of them might be useful, some of them would be less effective than advertised, and some could be useless if not harmful.

Replies from: Viliam
comment by Viliam · 2016-12-04T18:51:30.453Z · LW(p) · GW(p)

Sure, there are at least two ways to go stupid about this.

One of them is saying "here is a package that contains at least one true statement, I am going to adopt it as a whole".

The other is hearing a statement in isolation and saying "hey, this statement is a part of this package, and we reject that package as a whole, right?"

comment by ChristianKl · 2016-12-04T09:58:16.581Z · LW(p) · GW(p)

PUA comes from a bunch of nerds trying to tackle the problem of how to "systematically win" in their interactions with females. Mostly with relatively little contact with prior art and little experience in existing frameworks for building skills in human interaction.

At the start most of the PUA framework was created by discussions on an online forum.

Those basic underlying factors lead to a lot of problems that PUA does have. They are also present on LessWrong and not easily debunked.

Replies from: Viliam
comment by Viliam · 2016-12-04T19:08:27.278Z · LW(p) · GW(p)

At the beginning, the community at least tried to be experiment-driven, but after it became more popular, it gradually became adsense-driven. The more outrage, the more money, regardless of whether the advice actually works for anyone. (If there even is any advice, instead of mere bragging, preferably unverifiable.)

Also, people are mostly unable to tell the difference between "X made me successful" and "I am a successful person who happened to write an article about X". So instead of advice that helps unattractive guys get a date, it became a list of things attractive guys can do and still get laid. Not the same thing.

But generally, it seems to me there is a repeating pattern -- there is an official socially accepted narrative which contains a few blind spots and outright falsehoods. Then come people who point out the falsity, and gain a following. Gradually the group develops its own narrative, also full of blind spots and falsehoods. Until at some moment someone successfully points out a mistake in the new narrative, and then the history repeats again.

Replies from: ChristianKl
comment by ChristianKl · 2016-12-05T00:04:01.030Z · LW(p) · GW(p)

At the beginning, the community at least tried to be experiment-driven, but after it became more popular, it gradually became adsense-driven.

I don't think most of the successful PUA memes win because of adsense optimization. Most of the ebooks and videos that many PUA community folks consume are pirated.

Money gets made by building a reputation and then charging high prices for bootcamps.

Eban Pegan might have made more money by selling Double Your Dating with adsense, but he's not the most popular guy within the PUA community.

comment by scarcegreengrass · 2016-12-02T13:37:57.750Z · LW(p) · GW(p)

Not /inevitably/. And, worst case, we could censor only the mindkilled parts of a discussion.

comment by ChristianKl · 2016-12-02T00:06:35.950Z · LW(p) · GW(p)

I believe that there are many areas where we can use the framework of prediction and calibration to assess expert experience. I think that's true in many domains and can allow us to have verified expertise in those domains in a way that's very different from verifying knowledge with college degrees.

comment by Sable · 2016-12-01T23:33:49.439Z · LW(p) · GW(p)

This is more of a practical suggestion than a theoretical one, but what if we had an instant message feature? Some kind of chat box like google hangouts, where we could talk in a more immediate sense to people rather than through comment and reply.

As an addendum, and as a way of helping newer members, maybe we could have some kind of Big/Little program? Nothing fancy, just a list of people who have volunteered to be 'Bigs,' who are willing to jump in and discuss things with newer members.

A 'Little' could ask their Big questions as they make their way through the literature, and both Bigs and Littles would gain a chance to practice rationality skills pertaining to discussion (controlling one's emotions, being willing to change one's mind, etc.) in real time. I think this would help reinforce these habits.

The LessWrong study hall on Complice is nice, but it's a place to get work done, not to chat or debate or teach.

Replies from: Vaniver, ChristianKl, NatashaRostova
comment by Vaniver · 2016-12-02T19:51:50.571Z · LW(p) · GW(p)

Like others pointed out, there's a Slack channel administered by Elo, a lesswrong IRC, and a SSC IRC. (I'm sometimes present in the first, but not the other two; I don't know how active they are now.)

As an addendum, and as a way of helping newer members, maybe we could have some kind of Big/Little program? Nothing fancy, just a list of people who have volunteered to be 'Bigs,' who are willing to jump in and discuss things with newer members.

Is the idea here pairing (Alice volunteers as a Big and is matched up with Bob, they swap emails / Hangouts / etc. and have one-on-one conversations about rationality / things that Bob doesn't understand yet) or in-need matching (Alice is the Big on duty at 7pm Eastern time, and Bob shows up in the chat channel to ask questions that Alice answers), or something else?

This also made me think of the possibility of something like "Dear Prudence"; maybe emails about some question that are then responded to in depth, or maybe chat discussions that get recorded and then shared, or so on.

(Somewhat tangential, but there are other things you can overlay on top of online communities in order to mimic some features of normal geographic communities, which seem like they make them more human-friendly but require lots of engagement on the part of individuals that may or may not be forthcoming.)

Replies from: Sable
comment by Sable · 2016-12-03T04:34:44.259Z · LW(p) · GW(p)

Thanks for the info - I'll check out some of the chat channels. I had no idea they existed.

As for the idea, I hadn't thought it through quite that far, but I was picturing something along the lines of your second suggestion. Any publicized and easily accessible way of asking questions that doesn't force newer members to post their own topics would be helpful.

I remember back when I was just starting out on LessWrong, and being terrified to ask really stupid questions, especially when everyone else here was talking about graduate level computer science and medicine. Having someone to ask privately would've sped things up considerably.

comment by ChristianKl · 2016-12-01T23:54:30.419Z · LW(p) · GW(p)

This is more of a practical suggestion than a theoretical one, but what if we had an instant message feature? Some kind of chat box like google hangouts, where we could talk in a more immediate sense to people rather than through comment and reply.

There's the Slack.

Replies from: scarcegreengrass
comment by scarcegreengrass · 2016-12-02T13:35:41.576Z · LW(p) · GW(p)

You can message Elo here on LW for more information about the Less Wrong slack. We have some great discussions!

comment by NatashaRostova · 2016-12-02T00:28:40.370Z · LW(p) · GW(p)

Join the SlateStarCodex IRC :)

comment by turchin · 2016-12-02T15:22:32.598Z · LW(p) · GW(p)

Rationality of life extension. Or maybe I don't know and it has already been explored?

Replies from: btrettel, Vaniver, ChristianKl
comment by btrettel · 2016-12-06T00:26:30.762Z · LW(p) · GW(p)

I think life extension should be discussed more here.

Many rationalists disappoint me with respect to life extension. Many of them seem to recognize that physical conditioning is important, yet very few seem to do the right things. Most rationalists who understand that physical conditioning is important think they should do something, but that something tends to be almost exclusively lifting weights with little to no cardiovascular exercise. (I consider walking to barely qualify as cardiovascular exercise, by the way.) I think both are important, but if you could only do one, I'd pick cardio because it's much easier to improve your cardiovascular capacity that way. (Cardiovascular capacity/VO2max correlates well with longevity, as discussed here.) I'm not alone in the belief that cardio is much more important; similar things have been said for a long time. I'd recommend Ken Cooper's first book for more on this perspective.

The inability of rationalists to regularly do cardiovascular exercise probably stems from problems similar to those that cause cryocrastination. I'd like to see more on actually implementing cardiovascular exercise routines. I have some notes on this which could help. Off the top of my head, I can remember that there's evidence morning runners tend to maintain the habit better, and that there's evidence that exercising in a group helps with compliance. I personally find Beeminder to help a little bit, but not much.

comment by Vaniver · 2016-12-02T19:20:38.951Z · LW(p) · GW(p)

It's unclear to me how rationality and life extension are related. Are you thinking about the following, or something different?

  1. Lots of philosophical / cultural effort has been put into accepting the inevitability of death, but this is mistakenly used to accept the nearness of death, even though changing technology means that nearness is now in play. Rationality helps carve out the parts of that which are no longer appropriate.

  2. Life extension is one of the generic instrumental goods, in that whatever specific goals you have, you can probably get more of them with a longer life than a shorter one. This makes it a candidate as a common interest of many causes.

  3. Rationality habits are especially useful in life extension research, because of the deep importance of reasoning from uncertain data; 30-year olds can't quite wait for a 60-year study of intermittent fasting to complete in order to determine whether or not they should do intermittent fasting starting when they are 30.

Replies from: turchin
comment by turchin · 2016-12-02T22:07:41.568Z · LW(p) · GW(p)

I have been thinking about all three things. I have strong connections with life extension community and we often discuss such topics.

I am planning to write about how much time you could buy by spending money on life extension, on a personal level and on a social level. I want to show that fighting aging is underestimated from an effective altruism point of view. I would name it the second most effective way to prevent suffering, after x-risk prevention.

I have a feeling that, as most EA people are young, they are less interested in fighting aging, as it is remote to them, and they also expect to survive until Strong AI anyway, which will either kill them or make them immortal (or even something better, which we can't guess).

Replies from: Vaniver, btrettel
comment by Vaniver · 2016-12-02T22:26:20.937Z · LW(p) · GW(p)

I have a feeling that, as most EA people are young, they are less interested in fighting aging, as it is remote to them, and they also expect to survive until Strong AI anyway, which will either kill them or make them immortal (or even something better, which we can't guess).

There's a general point that lots of futurists are the sort of people who would normally be very low time preference (that is, they have a low internal interest rate) but who behave in high time preference ways because of their beliefs about the world, and this causes lots of predictable problems and is not obviously the right way to cash out their beliefs about the world. (For example, consider the joke of 'the Singularity is my retirement plan,' which is not entirely a joke if you expect AI to hit in, say, 2040 but wouldn't be able to start collecting from an IRA until 2050.)

Maybe the right approach is that it's worth explicitly handling the short, medium, and long time horizons and investing effort along each of those lines. Things like life extension that make more sense in long time horizon worlds are probably still worth investing in, even if there's only a 10-30% chance we actually have that long.

comment by btrettel · 2016-12-06T00:27:39.006Z · LW(p) · GW(p)

I want to show that fighting aging is underestimated from an effective altruism point of view. I would name it the second most effective way to prevent suffering, after x-risk prevention.

I'd be very interested in seeing this.

comment by RomeoStevens · 2016-12-02T02:57:04.318Z · LW(p) · GW(p)

S1 training. I.e. creativity training, physical training, hedonic resetting, internal trust building etc.

Replies from: btrettel, username2
comment by btrettel · 2016-12-08T05:25:32.243Z · LW(p) · GW(p)

Creativity is another big area that seems neglected. I've read a fair amount on the subject, but feel I have barely touched the surface.

I also feel it probably is relevant to AI, so I'm somewhat surprised to see so little discussion of it here. (By "AI" I mean a number of things here. Might be easiest to see it as application of computers to solve problems.) At the moment, AI works when the actions one can take are clear (e.g., small number of valid moves in a game). When the possible actions are not precisely specified, the specification becomes the issue. Generating these possibilities is not trivial, and frequently this is what creativity is.

comment by username2 · 2016-12-05T09:06:31.564Z · LW(p) · GW(p)

Search term is "trigger action planning".

Replies from: RomeoStevens
comment by RomeoStevens · 2016-12-06T22:36:41.654Z · LW(p) · GW(p)

That is a very different beast from deliberate practice of Feldenkrais, for example.

Replies from: username2
comment by username2 · 2016-12-13T22:16:51.524Z · LW(p) · GW(p)

You asked for "S1 training." S1 means "System 1" right? System 1 training is done by reinforcement, and trigger action planning is the mechanism by which one optimally sets and resets such reinforcements.

So if that's not what you want, then what are you asking for?

Replies from: RomeoStevens
comment by RomeoStevens · 2016-12-15T09:49:45.271Z · LW(p) · GW(p)

System 1 training is done by reinforcement

I'm not sure this adequately describes S1 training.

trigger action planning is the mechanism by which one optimally sets and resets such reinforcements.

also not sure optimal is true.

Replies from: username2
comment by username2 · 2016-12-15T10:48:05.095Z · LW(p) · GW(p)

Care to explain? Reinforcement training relies on providing the right feedback at the right moment for maximal effect. A trigger action plan is how you set an "alarm" in advance to arrange for maximal impact. This enables optimal reinforcement per unit of effort put in (or if not, then that is indicative that a better trigger action plan could have been used).

Replies from: RomeoStevens
comment by RomeoStevens · 2016-12-17T04:34:31.407Z · LW(p) · GW(p)

You made a rather strong claim; I expressed skepticism. Reinforcement loops do play into deliberate practice (which is the strongest model of how experts attain expertise, AFAIK), but jumping from deliberate practice to TAPs is a non-obvious transformation.

comment by casebash · 2016-12-01T22:13:38.391Z · LW(p) · GW(p)

Debating - Given that we are a community that wants to have a good understanding of different arguments, and that it is also useful to be persuasive, I think that it would be worthwhile seeing what we can learn from the debating community.

Replies from: Vaniver, ChristianKl
comment by Vaniver · 2016-12-02T19:49:18.474Z · LW(p) · GW(p)

I think that it would be worthwhile seeing what we can learn from the debating community.

Consider the article Flowsheet Logic and Notecard Logic. I suspect most of the things we would learn would be antipatterns, but it's still useful to have negative examples (especially when those examples are widespread).

Replies from: casebash
comment by casebash · 2016-12-04T19:40:14.310Z · LW(p) · GW(p)

That's American debating. American debating is weird.

Replies from: username2
comment by username2 · 2016-12-05T09:05:32.507Z · LW(p) · GW(p)

Care to clarify the difference? Oxford-style debate isn't any better in this regard. Is there some other form of debate that is?

Replies from: casebash
comment by casebash · 2016-12-05T10:29:18.438Z · LW(p) · GW(p)

By Oxford-style do you mean British parliamentary (BP)? In BP, a) people speak at a rate that can actually be understood, and b) debating is about arguments having an impact, not just maximising the number of arguments.

comment by ChristianKl · 2016-12-02T00:40:50.175Z · LW(p) · GW(p)

The debating community doesn't have the goal of arguments being in touch with reality. The only thing that matters is whether a judge will accept the argument.

When it comes to thinking about whether a scientific paper makes an argument that's likely robust, that's quite different.

The skill of not being convinced by persuasive arguments that aren't in touch with reality is valuable.

Replies from: casebash
comment by casebash · 2016-12-02T09:48:37.544Z · LW(p) · GW(p)

It is true that you need additional skills, but that doesn't mean that debating isn't a good community for developing the particular skills that I mentioned.

comment by btrettel · 2016-12-06T00:08:51.865Z · LW(p) · GW(p)

A big gap I see is memory. Having read a few books on learning and memory, I think what's been posted on LessWrong has been fragmented and incomplete, and we're in need of a good summary/review of the entire literature. There's a lot of confusion on the subject here too, e.g., this article seems to think spaced repetition and mnemonics are mutually exclusive techniques, but they're not at all. When I used Anki I frequently used mnemonics as well. The article seems to be an argument against bad flash cards, not spaced repetition in general. Probably over a year ago I did start writing a sequence on memory enhancement, but it is a low-priority task for me, and I do not anticipate completing it any time soon.

comment by [deleted] · 2016-12-02T21:27:58.824Z · LW(p) · GW(p)

The case of ugh fields that subjectively seem useful.

For example, I don't care much about base rates for violence occurring to any defined population subgroup. I think it most useful to me, personally, to 1) consider my own (and sometimes other people's) safety my own responsibility at any given moment, 2) to bite the bullet when it appears that I have misjudged the situation. However, I am not sure if these limits were set 'rationally', or simply because I do not like to think about the subject.

Or to put it another way, "suppose thinking about something is SCARY beyond its actual worst impact on some instrumental goal, is there any practical point where you jury-rig some heuristics so that the goal is okay enough?"

comment by root · 2016-12-02T21:25:36.494Z · LW(p) · GW(p)

It could be a difficult endeavour, but I'd love to see what we can do with what we already have on LW. I don't see any easily-discoverable links to (for example) the Repository repository. Would anyone be so kind as to share links to some pages they believe are useful, but are not easily reachable?

Here's a possibly bad list, but some useful-looking results by searching for 'economics':

  • Here is a post with a few recommendations in the comments, which seem interesting but I don't really know if the recommendations are still good, or have been superseded by fresher material.

  • Here is an interesting analysis by Jonah Sinick.

  • Here is a collection of lectures about economics.

  • Here should be more, but I trust that the veterans could fill this in higher numbers and higher quality than I possibly could.

Replies from: scarcegreengrass
comment by scarcegreengrass · 2016-12-03T16:16:47.591Z · LW(p) · GW(p)

Can you clarify what you mean by the Repository repository? I'm not familiar with that term.

Replies from: arundelo
comment by arundelo · 2016-12-03T16:46:29.525Z · LW(p) · GW(p)

"Repository repository" -- a post listing various "repository" posts, like the "Solved Problems Repository", the "Useful Concepts Repository", the "Mistakes Repository", and the "Good things to have learned" post.

comment by gucciCharles · 2016-12-13T10:27:31.226Z · LW(p) · GW(p)

The consequences of our beliefs about status and signalling. For example, given how pervasive signaling is in our lives, should we optimize our lives for the most status, etc.?

We know that we care about status. We know that we can't talk to people about that in real life. Should we then make status our motivating terminal value?

comment by scarcegreengrass · 2016-12-02T13:39:56.544Z · LW(p) · GW(p)

Xenophobia maybe. People talk a lot about avoiding the distortion of your own subculture, but I'm also interested in how to avoid bias when thinking about the unfamiliar subcultures of others.

comment by Nate_Rausch · 2016-12-09T19:32:53.774Z · LW(p) · GW(p)

Well, the "dark arts" might deserve a second look.

We shouldn't pivot too far. Politics clearly is a mind-killer, and exploiting human weaknesses to further your cause is not inherently good.

But I think we have grouped too many things into one basket. In order for rationality to succeed, we must manage to find the balance between being effective and being pure in our ideals.

We do not want to be the stereotypical investment banker without any morals, who will do whatever works. Yet we also don't want to be the environmentalist who doesn't do anything that works, because people should just care about the environment.

Instead, I think, we need to be a bit nuanced with what works. Some actions work and are clearly immoral - like lying. Others work and are not in conflict with any value, like making people feel good.

I think a good model for this is Elon Musk. He seems to be as idealistic as is possible within a framework of getting things done. He does not lie, but he does care a lot about building a good product, for example, unlike a lot of other environmentalist entrepreneurs.

So I think we need to open this conversation. What is effective for influencing people? What is effective for garnering resources for a cause? What is effective in capturing people's attention?

And then when we have the answers, and the details of the strategies, then we can compare them against our values and rule out the ones that are in direct conflict.

comment by ChristianKl · 2016-12-04T23:04:48.165Z · LW(p) · GW(p)

Such as?

It's not like this topic is appearing for the first time. I have written plenty about it on LW.