Does anyone worry about A.I. forums like this, where they reinforce each other’s biases / are led by big tech?

post by misabella16 · 2020-10-13T15:14:07.699Z · score: 4 (3 votes) · LW · GW · 1 comment

This is a question post.


I came across this A.I. programme, https://inspired-minds.co.uk/inspired-ai-events/, mostly being run by tech companies and their employees, and it worries me that they are making A.I. and the fast development/deployment of A.I. seem less worrisome by influencing people. I can circle back with their points if I get in and see their presentations, but what concerns me is that it is mostly Microsoft and the like dominating the conversation, instead of academics or people I suppose less focused on a mentality of “move fast and break things” when they are breaking people.

Answers

answer by Dagon · 2020-10-13T17:33:35.822Z · score: 4 (2 votes) · LW(p) · GW(p)

Not worried — these are not mutually exclusive, and there's plenty of room for many different formats of discussion, with different foci.

IMO, "move fast and break things" is a fine counterpoint to "move slow and suffer the current broken-ness for longer", and I really want both viewpoints represented and considered.  Also, I don't think academics are necessarily better aligned or more knowledgeable than corporate- (or government-)funded groups.

answer by Kenny · 2020-10-13T19:03:52.985Z · score: 1 (1 votes) · LW(p) · GW(p)

I don't think they're dominating the conversation – just some conversations, i.e. the ones they pay for. I don't think them doing this is negatively affecting any other conversations, e.g. by academics or "people I suppose less focused on a mentality of “move fast and break things” when they are breaking people".

(I'm not sure what you mean by "when they are breaking people" – any details or specifics you can share about this?)

If anything, I've been pleasantly surprised at how open those same big tech companies are to 'friendly AI'.

It also doesn't hurt, though, that deploying (and maintaining) effective AI systems seems fairly difficult.

1 comment

Comments sorted by top scores.

comment by CardinalExponentiation · 2020-10-14T12:49:22.086Z · score: 1 (1 votes) · LW(p) · GW(p)

I agree that this is worrisome! At the same time, I would like to flag that academic researchers aren’t necessarily free from corporate influence either. Arguably, many technology companies fund academic research in order to shape the research agenda pursued in academia. To mention one example, one might argue that big technology companies are funding so much research about the ethics of artificial intelligence because this ultimately makes policymakers less likely to legally restrict the deployment of controversial technologies (cf. https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence).