What are the mutual benefits of AGI-human collaboration that would otherwise be unobtainable?
post by M. Y. Zuo
This is a question post.
By mutual benefits I mean activities, endeavours, and designs that both future AGI(s) and future human groups, or society at large, could invest in, and that would yield net returns for both sides.
The most commonly mentioned benefits involve AGI enhancing existing human activities (statistical analysis, stock markets, forecasting, optimization, etc.) that could be carried on with or without AGI involvement. In these cases AGI would be a supplement.
However, what can collaboration exclusively bring about?
It would be nice to hear some thoughts on what novel developments may be possible. (Also, in what ways AGI(s) could benefit, other than gaining more compute, since there will likely be synergistic effects.)
Comments sorted by top scores.
comment by Charlie Steiner ·
2021-11-17T19:59:38.095Z · LW(p) · GW(p)
I'm not sure if I understand how you want your answers shaped. Why does it need to be AGI? Are these new activities happening in a radically transformed world run by AGIs, or are we just imagining cool things to do with an AGI in today's world?
comment by Gyrodiot ·
2021-11-17T21:15:36.146Z · LW(p) · GW(p)
I second Charlie Steiner's questions, and add my own: why collaboration? A nice property of an (aligned) AGI would be that we could defer activities to it... I would even say that the full extent of "do what we want" at superhuman level would encompass pretty much everything we care about (assuming, again, alignment).
↑ comment by M. Y. Zuo ·
2021-11-18T02:28:14.851Z · LW(p) · GW(p)
Because human deference is usually conditioned on motives beyond deference for its own sake. Thus even in that case there would still need to be some collaboration.