Comments
Humans won’t figure out how to make systems with goals that are compatible with human welfare and realizing human values
This is a very interesting risk, but in my opinion an overinflated one. I feel that goals without motivations, desires, or feelings are simply a means to an end. I don't see why we couldn't build programmed objectives into our systems that are compatible with human values.
The new UI is great, and I agree with the thinking behind de-emphasizing karma votes at the top. Showing karma before a post can bias a reader's expectations (whether the score is high or low) before they have even read it; it makes more sense at the end of the post.
I also welcome everyone's comments, feedback, and suggestions. This is the first edition of Systema Robotica, and I intend to build upon this early framework.
If you're a robotics founder or roboticist and would like to add your robot to the Robot Archive, you can do so here: https://systemarobotica.com/archive
The Robot Archive is a dynamic public wiki that codifies all robots within the robot taxonomy.