Right now, the USG seems to very much be in [prepping for an AI arms race] mode. I hope there's some way to structure this that is both legal and does not require the explicit consent of the US government. I also somewhat worry that the US government does its own capabilities research, as hinted at in the "datacenters on federal lands" EO. I also also worry that OpenAI's culture is not sufficiently safety-minded right now to actually sign onto this; most of what I've been hearing from them is accelerationist.
Interesting class of miscommunication that I'm starting to notice:
A: I'm considering a job in industries 1 and 2
B: Oh I work in 2, [jumps into explanation of things that will be relevant if A goes into industry 2].
A: Oh maybe you didn't hear me, I'm also interested in industry 1.
B: I... did hear you?
More generally, B gave the only relevant information they could from their domain knowledge, but A mistook that for anchoring on only one of the options. It took until I was on both sides of this interaction for me to be like "huh, maybe I should debug this." I suspect this is one of those issues where just being aware of it makes you less likely to fall into it.
I saw that news as I was polishing up a final draft of this post. I don't think it's terribly relevant to AI safety strategy, I think it's just an instance of the market making a series of mistakes in understanding how AI capabilities work. I won't get into why I think this is such a layered mistake here, but it's another reminder that the world generally has no idea what's coming in AI. If you think that there's something interesting to be gleaned from this mistake, write a post about it! Very plausibly, nobody else will.
Did you collect the data for their actual median timelines, or just their position relative to 2030? If you collected higher-resolution data, are you able to share it somewhere?
I really appreciate you taking the time to write a whole post in response to mine, essentially. I think I fundamentally disagree with the notion that any part of this game is adversarial, however. There are competing tensions, one pulling to communicate more overtly about one's feelings, and one pulling to be discreet and communicate less overtly. I don't see this as adversarial because I don't model the event where one person finds out that the other is into them as terminally bad, just instrumentally bad; it is bad because it can cause the bad outcomes that a large part of my post is dedicated to describing.
I find it much more useful to model this as a cooperative game, but one in which the flirter is cooperating with two different counterfactual versions of the other person: the one who reciprocates the attraction and the one who does not. The flirter is trying to maximize both people's values by flirting in the way I define in this post; there's just uncertainty over which world they live in. If they knew which world they lived in, then the strategy for maximizing both people's values would look a lot less conflicted and complicated; either they do something friendship-shaped or something romance-shaped, probably.
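One way to make this concrete (my own sketch, not something from either post): let $p$ be the flirter's credence that the attraction is reciprocated, and let $U_{\text{yes}}(s)$ and $U_{\text{no}}(s)$ be the combined value of a flirting strategy $s$ to both people in each counterfactual world. The flirter is then roughly solving

$$s^* = \arg\max_s \big[\, p \cdot U_{\text{yes}}(s) + (1-p) \cdot U_{\text{no}}(s) \,\big],$$

which is a decision under uncertainty about which world they're in, not a move in a game against an adversary.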
Ah, that's interesting, thanks for finding that. I've never read it before, so it wasn't directly where I was drawing any of my ideas from, but maybe the post's content made its way into something else I did read. I feel like that post is mostly missing the point about flirting, but I agree that it's descriptively outlining the same thing as I am.