awg's Shortform
post by awg · 2023-02-27T16:31:56.204Z · LW · GW · 4 comments
Comments sorted by top scores.
comment by awg · 2023-05-07T15:41:03.577Z · LW(p) · GW(p)
«Boundaries» and AI safety compilation [LW · GW] and Embedded Agents [LW · GW] got me thinking:
Cancerous cells are misaligned subsystems with respect to the human body. Their misalignment results in behavior that violates the usual functional boundaries of other subsystems.
comment by awg · 2023-04-06T17:55:34.472Z · LW(p) · GW(p)
One thing I have observed in myself as I've followed AI more closely, especially as the pace has seemed to escalate in the past few weeks/months, is that my level of concern about climate change has dropped significantly. (Maybe irrationally, to some degree.) I find myself being bored by appeals to climate change risk at this point, especially longer-term risks. They feel paltry in comparison to the risks posed by AGI. Assuming timelines of <30–50 years, either AGI goes well, in which case climate change becomes a solved problem, or AGI doesn't go well, in which case climate change is no longer a concern.
Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-06T20:05:22.263Z · LW(p) · GW(p)
A model of the world that has superintelligence in its future straightforwardly predicts that the scope of counterfactually worse climate change is much smaller than standard estimates, which ignore this consideration. After making that update, the emotional impression of caring less correctly tracks the underlying concern [LW · GW].