post by [deleted]

This is a link post for


Comments sorted by top scores.

comment by ChristianKl · 2018-05-05T18:02:31.242Z
> From what I learned so far, the LessWrong community did not appear to be a big fan of the reptilian brain, and wants to overcome biases.

Where did you learn that? It sounds to me like you're projecting preconceived notions you picked up elsewhere onto our community.

Our community isn't opposed to emotions. CFAR doesn't run classes centered on overcoming specific biases; it teaches various techniques, and those techniques acknowledge that human emotions matter.

Your post reminds me of a talk at the first Quantified Self conference, which I attended. A lot of attempts at optimizing according to simple feedback processes ignore the fact that second-order cybernetics matters: the act of measuring and optimizing changes the system being measured.

comment by ashtonabc · 2018-05-05T03:13:50.908Z

I like the thinking that went into this post, but I also think it's difficult to make any definitive statements here. None of the actions you've given are entirely independent of the others (for example, you can attend new events with friends). It's also difficult to use an algorithmic approach without a good way of measuring expected returns, which are difficult to intuit and change significantly over time.

Even setting aside how much individual experiences with online dating vary (conventionally attractive and non-minority individuals tend to have more success), there may be actions you can take to make it more efficient. It can also be done at times when you're unable to go out, and even in small increments of time.

I think algorithmic/rationalist approaches to dating are really interesting. I'm not certain that reinforcement learning is any different from non-algorithmic/rationalist approaches, though. Aren't humans always trying to maximize their expected reward?
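For what the reinforcement-learning framing might look like concretely, here is a minimal epsilon-greedy multi-armed bandit sketch. The action names and the idea of scoring dates with a numeric "reward" are purely illustrative assumptions on my part, not anything from the original post; it also ignores the non-independence and drift problems raised above:

```python
import random

# Hypothetical "dating actions" treated as bandit arms (illustrative only).
ACTIONS = ["online_dating", "attend_events", "ask_friends"]


def epsilon_greedy(estimates, epsilon=0.1):
    """With probability epsilon explore a random action; otherwise
    exploit the action with the highest estimated reward so far."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: estimates[a])


def update(estimates, counts, action, reward):
    """Incremental-mean update of the reward estimate for one action."""
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]


if __name__ == "__main__":
    estimates = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    # Each "round": pick an action, observe some (here: made-up) reward.
    for _ in range(100):
        action = epsilon_greedy(estimates)
        reward = random.random()  # stand-in for a real outcome measure
        update(estimates, counts, action, reward)
    print(estimates)
```

Even in this toy form, the hard part the parent comment points at is visible: everything hinges on a reward signal you can actually measure, and a stationary-bandit update like this assumes the expected returns don't shift over time, which for dating they clearly do.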