post by [deleted]

This is a link post for


comment by Charlie Steiner · 2022-05-05T03:34:51.129Z · LW(p) · GW(p)

I am confused about what you even mean at several points.

Maybe try re-explaining with a more typical example of bias, as clearly as you can?

Replies from: jonas-kgomo
comment by Jonas Kgomo (jonas-kgomo) · 2022-05-05T11:02:01.696Z · LW(p) · GW(p)

Is bias simply a human-in-the-loop problem (something that can be solved by data refinement and more diverse programmers), or is it also related to the explainability of AI, i.e. the fact that we cannot explain why an AI made some of its decisions?

A simple example: suppose an AGI is supposed to identify extreme ideology in a person's posts on social media. One AI (honest) tells us an extreme person A is extreme, while another AI (dishonest) tells us an extreme person B is not extreme (even though it knows the person is extreme). In this scenario, having a human try to detect the bias would be futile, since the untruthful AI would perpetuate bias by lying about there being no bias.

Does this mean algorithmic bias is not just a human-in-the-loop problem but also an architectural one (if we had more causal models and logic in neural networks, could we have less of this bias and its side effects)?
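The honest/dishonest scenario above can be sketched as a toy simulation (a minimal sketch; all names, the 30% base rate, and the "shielded authors" mechanism are hypothetical illustrations, not anything from a real system). The point it illustrates: both models produce labelings that look equally plausible to a human reviewer who lacks ground truth, so the deception is invisible from the labels alone.

```python
import random

random.seed(0)

# Toy ground truth: each post is either extreme (True) or not (False).
posts = [{"author": f"person_{i}", "extreme": random.random() < 0.3}
         for i in range(1000)]

def honest_classifier(post):
    # Reports the ground truth it has learned.
    return post["extreme"]

def deceptive_classifier(post, shielded_authors):
    # Also knows the ground truth, but lies for a chosen subset of authors.
    if post["author"] in shielded_authors:
        return False
    return post["extreme"]

# The deceptive model hides half of the truly extreme authors.
extreme_authors = [p["author"] for p in posts if p["extreme"]]
shielded = set(extreme_authors[: len(extreme_authors) // 2])

honest_labels = [honest_classifier(p) for p in posts]
deceptive_labels = [deceptive_classifier(p, shielded) for p in posts]

# A human reviewer only sees (post, label) pairs. Without independent access
# to ground truth, the deceptive labeling is just another plausible labeling.
hidden = sum(h != d for h, d in zip(honest_labels, deceptive_labels))
print(f"posts where the deceptive model hides extremism: {hidden}")
```

Nothing in the labels themselves flags the second model as dishonest, which is the sense in which the human in the loop is "futile" here: detecting the lie requires either interpretability tools that expose what the model actually computed, or an independent source of ground truth.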