Comments
I've noticed that some skydivers wear necklaces with a "closing pin." Skydiving really is a lifestyle, and I don't think that many people outside of skydiving would recognize a closing pin or wear it as jewelry.
The answer seems to be yes.
On the Manifund page it says the following:
Virtual AISC - Budget version
- Software etc: $2K
- Organiser salaries, 2 ppl, 4 months: $56K
- Stipends for participants: $0
- Total: $58K
In the Budget version, the organisers do the minimum job required to get the program started, but provide no continuous support to AISC teams during their projects and have no time for evaluations and improvements for future versions of the program. Salaries are calculated based on $7K per person per month.
Based on the minimum funding threshold of $28K, that would seem to cover about 2 people for 2 months.
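Spelling out the arithmetic (assuming the same $7K per person per month rate): $28K ÷ $7K ≈ 4 person-months, i.e. 2 people for 2 months, compared with 2 people × 4 months × $7K = $56K (plus $2K software = $58K) in the full Budget version.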
In my country the packaging says to take 1-2 paracetamol, so that might be the cause of the confusion.
Thank you for your feedback! This is a mistake on my part. I will take the article down until I've looked into this and have updated my resources.
Edit: I have updated the article. It should be better now :)
After learning about the Portfolio Diet I have been doing the same! Whenever I'm cooking I tend to ask three questions:
- Can I add nuts or a nut paste to this dish? I love adding tahini!
- Can I add more fiber? This tends to mean replacing the source of carbohydrates with black rice, quinoa, bulgur or whole wheat bread. And I always have oats for breakfast.
- Can I add some plant protein? Either by replacing something or by adding something extra.
For me these questions work because I'm already eating plenty of fruits and vegetables. And I haven't really added plant sterols to my diet yet.
Good luck to you!
I didn't know these numbers and I didn't know about the Taeuber paradox, but they definitely put Part 5 into perspective.
I wonder whether early treatment should be considered a refinement. That is debatable and I honestly don't know the answer. But it does put an upper bound on the benefits of starting treatment early, for which I'm grateful.
You make some good points, but thinking about the fact that researchers should correct for multiple-hypothesis testing always makes me a little sad—this almost never happens. Do you have an example where a study does this really nicely?
Also, do you have any input on the hypothesis that treating early is a worthwhile risk?
There is one ingredient I highly recommend from my personal experience: stop watching porn and stop masturbating.
We have an incredible drive to procreate, and in my experience this is dampened by watching porn and masturbating, which makes a lot of sense: we're quieting our drive to connect in an intimate way with a poor substitute that presses similar buttons.
I notice myself becoming more outgoing, more confident, more motivated to do sports, etc. It makes sense that we would have a built-in drive to be sexy.
I have wanted to make this comment for a while now, but I was worried that it wasn't worth posting because it assumes some familiarity with multi-agent systems (and it might sound completely nuts to many people). But since your writing on this topic has influenced me more than anyone else's, I'll give it a go. Curious what you think.
I agree with the other commenters that having GPT-3 actively meddle with actual communications would feel a little off.
Although intuitively it might feel off (for me it does as well), in practice GPT-3 would be just another agent interacting with all the other agents through the global neural workspace (mediated by vision).
Everyone is very worried about transferring their thinking and talking to a piece of software, but to me this seems to come from a false sense of agency. It is true that I have no agency over GPT-3, but the same goes for my thoughts. All my thoughts simply appear in my mind and I have absolutely no say in choosing them. From my perspective there is no difference, other than automatically identifying with thoughts as "me".
There might be a big difference in performance, though. I can systematically test and tune GPT-3. If I don't like how it performs, then I shouldn't let it influence my colony of agents. But if I do like it, then it would be the first time that I have added an agent to my mind that has the qualities I have chosen.
It is very easy to read about non-violent communication (or other topics like biases) and think to yourself, "that is how I want to write and act", but in practice it is hard to change. By introducing GPT-3 as another agent that is tuned for this exact purpose, I might be able to make that change orders of magnitude easier.
I think that is a fair point, I honestly don't know.
Intuitively, the translation seems like it would do more to help me become less reactive. I can think of two reasons:
- It would unload some of the cognitive effort when it is most difficult, making it easier to switch the focus to feelings and needs.
- It would make the message different every time, attuned to the situation. I would worry that the pop-up would be automatically clicked away after a while, because it is always the same.
But having said that, it is a fair point and I would definitely be open to any solution that would achieve the same result.
Can you explain why you think this is blunting feelings? My intention would be for it to do the exact opposite.
If we look at the example you give, shielding a partner from having to read a "fuck you": in that case you would read the message containing "fuck you" first, and GPT-3 might suggest that this person is feeling sad, vulnerable and angry. From my perspective this is exactly the opposite of "blunting feelings". I would be pointed towards the inner life of the other person.
Whenever I receive a message containing high-conflict content and I have enough peace of mind, I tend to translate it in my head into feelings and needs. So in this case there is also a function involved, F(feelings, original message) = output, but it is running on biological hardware.
When I do this myself I'm not 100% right. In fact, I'm probably not even 50% right, and this really doesn't matter. The translation doesn't have to be perfect. The moment I realize that this person might be sad because a certain need is not being fulfilled, I can start asking questions.
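To make that F(feelings, original message) idea a bit more concrete, here is a minimal sketch of what such a translation step could look like with GPT-3's completion endpoint (using the openai Python library as it existed at the time). The prompt wording, the parameters, and the guess_feelings_and_needs helper are illustrative assumptions on my part, not a tested implementation:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: an OpenAI API key is available

def guess_feelings_and_needs(message: str) -> str:
    """Ask GPT-3 to guess the feelings and needs behind a high-conflict message."""
    prompt = (
        "Rewrite the following message as a guess about the sender's feelings "
        "and unmet needs, in the style of non-violent communication.\n\n"
        f"Message: {message}\n"
        "Feelings and needs:"
    )
    response = openai.Completion.create(
        engine="davinci",   # illustrative choice of GPT-3 engine
        prompt=prompt,
        max_tokens=60,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

# Example: guess_feelings_and_needs("Fuck you, you never listen to me!")
# might come back with something like "They sound hurt and angry, and they may
# need to feel heard and taken seriously."
```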
It takes some effort to learn what is alive in the other person and it often doesn't matter if your guess is wrong. People deeply appreciate when you're curious about their inner lives.
On the sender's side it can do exactly the same thing. GPT-3 guesses what is alive in me, opening up the opportunity for me to find out myself. You're worried about 10% being lost in translation. In my personal experience, most people (including myself), when they're upset, are completely unaware of their feelings and needs. They get so absorbed by their situation that it causes a lot of unnecessary suffering.
I agree that if your device automatically translated everything you send and receive without you having any control over it, it would be incredibly dystopian, but that is not what I'm suggesting.
This would be a service that people actively choose and pay for; GPT-3 is not free. If you don't like it, don't use it.
I agree that once you're using the service it would be very difficult to ignore the suggestions, but again, that is the whole point. The goal of the app is to intervene in high-conflict situations and point you in the direction of the other person's feelings and needs, or to remind you of your own.
Now, it might be true that at this point GPT-3 is not very good at reminding you of the other person's feelings and needs; I can give you that. And it might have been a mistake on my part to only use examples generated by GPT-3.
But whenever I speak to someone who is angry or upset, I wish that I could hear their feelings and needs. I believe that people, just like me, find it difficult to express or hear those when shit hits the fan.