Could this be made a report-based system? If a user reports potential spam, the submission process could ask for their reasons and for consent to look over the messages (between the reporter and the alleged spammer); if multiple people report the same account, it becomes obvious that the account is spamming via DM.
Edit: just saw a previous comment on this too.
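A minimal sketch of the counting idea behind such a report-based flag (names, fields, and the threshold are all made up for illustration; a real system would store the reasons and DM-review consent for moderators):

```python
from collections import defaultdict

# Hypothetical report store: maps an alleged spammer's account id to the set
# of distinct users who reported them (a set avoids double-counting).
reports: dict[str, set[str]] = defaultdict(set)

def submit_report(reporter_id: str, reported_id: str, reason: str,
                  consent_to_review_dms: bool) -> None:
    """Record one spam report; only the distinct-reporter count is used for flagging."""
    reports[reported_id].add(reporter_id)

def is_likely_dm_spammer(reported_id: str, threshold: int = 3) -> bool:
    """Flag an account once several independent users have reported it."""
    return len(reports[reported_id]) >= threshold

# Example: three different users report the same account.
submit_report("alice", "spam_account", "unsolicited crypto DM", True)
submit_report("bob", "spam_account", "same message as others", True)
submit_report("carol", "spam_account", "copy-pasted pitch", False)
print(is_likely_dm_spammer("spam_account"))  # True
```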
Thanks, I was thinking more of the latter (human irrationality), but found the first part of your comment interesting as well. I understand irrationality has been studied in psychology and economics, and I was wondering about the modeling of irrationality in particular, for 1-2 players but also for a group of agents. For example, there are arguments that for a group of irrational agents, the group choice could still be rational depending on the group structure, etc. For individual irrationality and continued group irrationality, I think we would need to estimate the level (and prevalence) of irrationality in some way that captures unconscious preferences or incomplete information. How to best combine these? Maybe it would just end up being more data-driven.
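To make the "irrational individuals, rational group" point concrete, here is a quick Condorcet-style simulation; it crudely models individual irrationality as independent noise (each agent is only slightly better than chance), and the numbers are purely illustrative:

```python
import random

def group_choice_accuracy(n_agents: int, p_correct: float, n_trials: int = 10_000) -> float:
    """Fraction of trials in which a simple majority of independent, noisy agents
    picks the 'correct' one of two options."""
    wins = 0
    for _ in range(n_trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_agents))
        if correct_votes > n_agents / 2:
            wins += 1
    return wins / n_trials

random.seed(0)
# Individually noisy agents (55% accurate) vs. a majority vote over 101 of them.
print(group_choice_accuracy(1, 0.55))    # ~0.55
print(group_choice_accuracy(101, 0.55))  # ~0.84 - the group outperforms any individual
```

This only holds under particular group structures (independent votes, shared goal); correlated errors or strategic behavior can break it.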
I am not sure it needs to be conditional on whether the event is unusual, or on whether it would happen again in a forward-looking sense. Could you explain why the restriction is there? Especially regarding <We do not call any behavior or emotional pattern ‘trauma’ if it is obviously adaptive.>
How do we best model an irrational world rationally? I would assume we would need to at least understand how irrationality works?
Sharing an interesting report on the state of AI: https://www.stateof.ai/
It covers multiple aspects of the current state of AI and is reasonably good on the technical side.
Just saw that the OP replied in another comment that he is offering advice.
It’s probably based less on the whole internet and more on the RLHF guidelines (I imagine the human reviewers receive guidelines based on advice from the LLM-training company’s policy, legal, and safety experts). I don’t disagree, though, that it could present a relatively more objective view on some topics than a particular individual would (depending on the definition of bias).
Yeah for sure!
For PII - a relatively recent survey paper: https://arxiv.org/pdf/2403.05156
- For PII/memorization generally:
  - https://arxiv.org/pdf/2302.00539
  - https://arxiv.org/abs/2202.07646
  - Labs' LLM safety reports typically have a PII/memorization section
- For demographics inference:
For bias/fairness - survey paper: https://arxiv.org/pdf/2309.00770
This is probably far from complete, but I think the references in the survey papers and in the Staab et al. paper should have some additional good ones as well.
This is a relatively common topic in responsible AI; glad to see the reference to Staab et al., 2023! For PII (Personally Identifiable Information), RLHF is typically the go-to method for getting models to refuse such prompts, but since refusals are easy to undo, effort has also gone into cleaning PII out of the pretraining data. For demographics inference, it seems to be bias-related as well.
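A rough sketch of what cleaning PII out of pretraining data can look like; the regexes and placeholder tokens below are toy examples for illustration only, and real pipelines use trained PII/NER detectors with much broader coverage (names, addresses, IDs, ...):

```python
import re

# Very rough patterns for two common PII types.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Replace matched PII spans with placeholder tokens before the text
    enters the pretraining corpus."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(scrub_pii("Contact jane.doe@example.com or +1 (555) 123-4567 for details."))
# -> "Contact [EMAIL] or [PHONE] for details."
```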
No worries; thanks!
Examples of right-leaning projects that he rejected due to his political affiliation, and whether these examples are AI-safety related.
Out of curiosity - “it's because Dustin is very active in the democratic party and doesn't want to be affiliated with anything that is right-coded” - are these projects related to AI safety, or just projects generally? And what are some examples?
1. Maybe it would be different for everyone. It might be hard to have a standard formula for finding obsessions. Sometimes it may come naturally through life events/observations/experiences. If no such experience exists yet, or one seems to be interested in multiple things, I have received the advice to try different things and see what you like (and I agree with it). Now that I think about it, it would also be fun to survey people and ask how they found their passion/came to do what they do (and to derive some standard formula/common elements if possible)!
2. I think maybe we can approach it with "the best of one's ability", and once we reach that, the rest may depend a lot on luck and other things too. Maybe over time we get better eventually, or some observations/insights happen by accident and we find a breakthrough point, given the right accumulation of previous experience/knowledge.
https://arxiv.org/pdf/1803.10122 I have a similar question and found this paper. One thing I am not sure of is whether this is still the same concept (or close enough to it) that people currently talk about, or whether this is the origin.
https://www.sciencedirect.com/science/article/pii/S0893608022001150 This paper seems to suggest something, at least about multimodal perception in a reinforcement learning/agent type of setup.
“A direction: asking if and how humans are stably aligned.” I think this is a great direction, and the next step seems to be breaking down what humans are aligned to - the examples here seem to point to some internal value alignment, but I am wondering if it would also mean alignment to an external value system.