Is a 3-person group optimal for discussion?
post by AiresJL · 2020-07-13T22:23:44.644Z
This is a question post.
Assertion 1: Optimal size for a weekly discussion group is 3 and the optimal setting is in person
After an 8-year hiatus, I'd like to return to LessWrong. In my time away, I have learned that categorically the best medium for learning something is real human-to-human interaction, bar none. Therefore, to catch up with the latest body of knowledge on LessWrong, I'd like to ask for help.
Ask 1: Specifically, are there 2 volunteers who would like to meet for 90 minutes once a week, in person in SF, starting July/August, to discuss 2 things:
1. What is an emotion in the context of information and decision theory?
2. Can we replicate emotion in AI?
These questions may seem stupid at first, given the nature of the term "emotions." In practical terms, however, GPT-3 used for NLP will fail any version of the Turing test that analyzes whether the AI violates predictive emotional congruity. In fact, I would bet that all chatbots trained on just linguistic sequences will fail any and all emotional congruity tests.
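To illustrate what such a probe could look like, here is a toy sketch, with the caveat that the post does not specify a concrete test: the hypothetical `valence` and `congruity_violations` helpers below score each bot reply with a tiny word lexicon and flag abrupt, unmotivated swings in affect between consecutive turns.

```python
# Toy sketch of one possible "emotional congruity" probe (hypothetical;
# not a test defined in this post). It scores each reply's valence with
# a tiny lexicon and flags abrupt, unmotivated sign flips.

POSITIVE = {"glad", "happy", "wonderful", "love", "great"}
NEGATIVE = {"sad", "terrible", "hate", "awful", "sorry"}

def valence(utterance: str) -> int:
    """Crude valence score: +1 per positive word, -1 per negative word."""
    words = utterance.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def congruity_violations(dialogue: list[str], max_jump: int = 2) -> list[int]:
    """Indices where the bot's valence jumps more than `max_jump`
    between consecutive turns with no intervening cause."""
    scores = [valence(turn) for turn in dialogue]
    return [i for i in range(1, len(scores))
            if abs(scores[i] - scores[i - 1]) > max_jump]

bot_turns = [
    "I am so happy and glad you asked, that is wonderful!",
    "I hate this terrible awful topic, sorry.",  # abrupt, unmotivated flip
]
print(congruity_violations(bot_turns))  # -> [1]
```

A real test would of course need a far richer emotion model, but even this crude check captures the idea of holding a chatbot to a consistent affective trajectory.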
-----------------------------------------------------------------------------------------------
***Emotions are a trick of language. Fundamentally, there is an ordinal ranking of all actions a you may take at time t. Selecting an action means either picking at random or ranking actions by some objective function (more on this later) and taking the highest-ranked one. The values attached to each action, which must be comparable apples-to-apples, effectively form a vector v representing the relative value of each action. This vector is, I claim, the emotional vector corresponding to the free-energy-principle biological process proposed by Karl Friston. It manifests as what you feel, although our attentional queue is typically only sensitive to high-intensity emotions: emotional tensors t about an action a which, as an effect of emergent complexity, represent the entire spectrum of human thought. Through drop-out causation analysis, we can radically reduce the solution space of the so-called mind-body problem in philosophy.
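To make the ranking picture concrete, here is a minimal sketch in Python, assuming a hypothetical `select_action` helper and a toy `payoff` objective (neither name comes from the post): the value vector v is just the objective evaluated over the candidate actions, and selection is either uniform-random or argmax over v.

```python
# Minimal sketch of the action-selection picture above (my own
# formalization of this post's framing, not Friston's model): each
# candidate action gets a scalar value from an objective function; the
# value vector v plays the role of the "emotional vector", and selection
# is argmax over v (or uniform-random if no objective is supplied).

import random

def select_action(actions, objective=None):
    """Pick an action: highest-value under `objective`, else random."""
    if objective is None:
        return random.choice(actions)          # random selection
    v = [objective(a) for a in actions]        # the "emotional vector"
    return actions[max(range(len(v)), key=v.__getitem__)]

# Hypothetical example: three actions valued by expected payoff.
payoff = {"rest": 0.1, "eat": 0.7, "explore": 0.4}
print(select_action(list(payoff), objective=payoff.get))  # -> "eat"
```

The point of the sketch is only the structure: whatever the objective is, the comparison happens in a single common currency, which is exactly what forces the apples-to-apples vector v.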
TL;DR: Let me know if you'd like to join a small (3-person) discussion group interested in emotions, decision theory, and AI.
I'll also have friends drop in, such as Peter Dayan, Joscha Bach, Kenneth Stanley, Matei Zaharia, and Jeff Clune.