Academic Rationality Research

post by ChristianKl · 2021-07-25T13:59:02.848Z · LW · GW · 4 comments


There are now at least two academic research groups in Germany working on rationality, in the sense we use the word here, that seem to be little known in the US rationality community. The point of this post is to tell you they exist if you didn't already know.

There's the Rationality Enhancement group led by Falk Lieder at the Max Planck Institute for Intelligent Systems in Tübingen, and there's the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin.

When Falk Lieder was at our European Community Weekend he repeatedly said that he's interested in collaborating with the wider rationality community. There's a list of his group's publications and also a YouTube channel that presents a few of their ideas.

The Adaptive Rationality group has decided to refer to the kind of techniques we would likely call applied rationality techniques as "boosting" decision-making, in contrast to the academic literature on nudging. I think it's worth exploring whether we should also adopt their term for the cluster of techniques like Double Crux.

4 comments

Comments sorted by top scores.

comment by dxu · 2021-07-27T03:55:14.100Z · LW(p) · GW(p)

This seems cool! Strongly upvoted for signal-boosting.

comment by goose000 · 2021-07-27T20:44:48.498Z · LW(p) · GW(p)

Cool, thanks for sharing.

I posted about my academic research interests here [LW · GW]. Do you know their research well enough to give input on whether my interests would be compatible? I would love to find a way to do my PhD in Europe, especially in Germany.

Replies from: ChristianKl
comment by ChristianKl · 2021-07-27T21:10:21.196Z · LW(p) · GW(p)

Your post suggests that your goal is to do research that's meant to influence AI. As far as I understand the two groups, their goals focus on improving human rationality.

My mental model of Falk Lieder would likely say something like: "Your background as an operations research team leader is interesting. Did you find a way to bring findings from computational game theory / cognitive science / system modeling / causal inference into something that you believe helps people in your organization make better decisions? If so, it would be great to study in an academically rigorous way whether those interventions lead to better outcomes."

Replies from: goose000
comment by goose000 · 2021-07-28T18:49:31.529Z · LW(p) · GW(p)

Ahh, I think I did not think through what "rationality enhancement" might mean; perhaps my own recent search and the AI context of Yudkowsky's original intent skewed me a little. I was thinking of something like "understanding and applying concepts of rationality" in a way that might include "anticipating misaligned AI" or "anticipating AI-human feedback responses". 

I like the way you've framed what's probably the more useful question. I'll need to think about that a bit more.