Comments
I agree. I also think this is a topic that needs to be seriously considered and discussed, because neglecting it may leave a hidden hindrance to accurate collective assessment and planning for AI risks. Contrary to our conceits and aspirations, our judgements are not at all immune to the sway of biases, flawed assumptions, and human emotions. I'm not sure how to put this, but people on this forum don't come off as very worldly, if that makes sense. Many are in technical professions where understanding of political realities seems to be lacking. The US and China stand to be the two major drivers of AI development in the coming decades. Increasingly they don't see eye to eye, and an arms-race dynamic might develop. So I feel there's been a lot of focus on the technical and theoretical side of things, but not enough concern for the practical side of development, the geopolitical implications, and all that might entail.
I haven't read the entire post, but I wanted to chime in to express my support for this proposal.