Algo trading is a central example of AI risk
post by Vanessa Kosoy (vanessa-kosoy) · 2018-07-28T20:31:55.422Z
I suspect this observation is far from original, but I didn't see it explicitly said anywhere, so it seemed worthwhile spelling it out.
The paperclip maximizer is a popular example of unfriendly AI, but it's not the most realistic one (and obviously it wasn't meant to be). It might be useful to think about which applications of AI are the most realistic examples, i.e. which applications are both likely to use state-of-the-art AI and especially prone to failure modes (which is not to say that other applications are not dangerous). In particular, if AI risk considerations ever make it into policy, such an analysis is one thing that might help inform it.
One application that stands out is algorithmic trading. Consider the following:
- Algorithmic trading is obviously lucrative and has strong economic incentives encouraging it.
- Algorithmic trading has some aspects that are zero-sum, leading to an especially vicious technological race. In this race, there is no natural stopping point of "good enough": the more powerful your algorithm, the better.
- Even ignoring the possibility of competing AIs, there is no "good enough" point: acquiring more money is a goal that is either unlimited or, if it does have a limit, reaching that limit already requires enough power for a pivotal event.
- The domain is such that it would be very advantageous for your AI to build detailed models of the world as a whole (at least the human world) and understand how to control it, including in terms of human psychology, economics and technological development. These capabilities are precisely what a pivotal event would require.
- Algorithmic trading doesn't require anything close to AGI in order to start paying off. Indeed, it is already a very active domain. This means that the transition from subhuman to superhuman intelligence is more likely to happen in a way that is unintended and unplanned, as the algorithm is gradually scaled up, whether in terms of computing power or in other ways.
- Last but definitely not least, the utility function is exceptionally simple. Formally specifying a "paperclip" might still be complicated, but here we only need something like "the amount of money in a given bank account". This means that this application requires almost nothing in the way of robust and transparent learning algorithms: sufficiently powerful straightforward reinforcement learning might do absolutely fine. Because of this, an algorithmic trading AI might lack even those safety mechanisms that other applications would require before scaling up to superintelligence.
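To illustrate just how little specification work this objective takes, here is a minimal sketch (hypothetical names throughout, not a real brokerage API) of an RL environment whose entire reward signal is the change in an account balance:

```python
# Minimal sketch of how simple the objective is to specify.
# `broker` is a hypothetical interface, not a real brokerage API.

class TradingEnv:
    """Bare-bones RL environment whose reward is the change in account value."""

    def __init__(self, broker):
        self.broker = broker
        self.last_value = broker.account_value()  # hypothetical call

    def step(self, action):
        self.broker.execute(action)               # buy / sell / hold
        value = self.broker.account_value()
        reward = value - self.last_value          # the entire objective: more money
        self.last_value = value
        return self.broker.market_state(), reward, False, {}
```

Nothing in this specification constrains how the agent makes the number go up; that is precisely what makes the simplicity dangerous.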
5 comments
comment by the gears to ascension (lahwran) · 2018-07-28T21:18:06.862Z
Agreed, this is a good point. Here are some thoughts my contrarian comment generator had in response to this:
It's also not a particularly lucrative place to apply the upper end of powerful agent intelligence. While ultimately everything boils down to algorithmic trading, the most lucrative trades are made by starting an actual company around a world-changing product. A trading agent would often want to actually start a company to make more money, and that action isn't available until it is far enough down the diminishing-returns curve of world modeling that it can plan through manipulating stock prices to communicate.
Also, high-frequency trading is not likely to use heavy ML any time soon, due to strict latency constraints, and longer-term trading is competing against human traders who, while imperfect, are still some of the least inadequate in the world at predicting the future.
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-07-29T19:26:00.242Z
These are interesting contrarian comments.
Regarding ML in high-frequency trading, I'm not sure there is a significant impediment. What one would do there (and maybe someone already does?) is use ML to control the parameters of, and ultimately design from scratch, the algorithms that do the trading itself, so that the ML runs with high latency in the background while the algorithms operate in real time.
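Concretely, a sketch of that split could look something like the following (all names, such as `market`, `history` and `fit_model`, are hypothetical placeholders, not a real trading API): a cheap fixed-latency rule handles every quote in the fast path, while a high-latency learner retunes its parameters in the background.

```python
# Sketch of the slow-learner / fast-executor split described above.
# All names (market, history, fit_model) are hypothetical placeholders.
import time

params = {"spread_threshold": 0.02, "order_size": 10}  # shared, hot-swappable

def fast_loop(market):
    """Real-time path: a cheap fixed-latency rule, parameterized by `params`."""
    while True:
        quote = market.next_quote()                     # microsecond-scale
        if quote.spread > params["spread_threshold"]:
            market.submit_order(size=params["order_size"])

def slow_loop(history, fit_model):
    """Background path: high-latency ML periodically retunes the fast rule."""
    while True:
        params.update(fit_model(history.recent()))      # seconds-to-minutes scale
        time.sleep(60)                                  # no latency constraint here
```

The point of the design is that only the trivially simple fast path has to meet the latency budget; all the learning happens off the critical path.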
comment by avturchin · 2018-07-29T10:55:34.455Z
There was a similar idea presented on the r/ControlProblem subreddit a few days ago.
A quote from a longer post: "Instead of using the 'seed AI' analogy, I want to describe a different scenario where, paradoxically, it is not the core intelligence that drives improvement, but external economic incentives. One can think of such an AI as a kind of economic system. At its core, it is pretty dumb, but it happens to work very well in a complex environment. Instead of having intelligence at its core, it outsources the parts of problem solving requiring intelligence to third parties which use the system as a trading platform."
I commented there and would add here again that this looks like Bitcoin, which even outsources intelligence for ASIC building, and I could imagine it creating incentives for some miners to buy weapons to protect their network. In the end, a fully automated ascending economy could emerge from Bitcoin, and I would not be surprised if the universe ended up tiled by miners. The idea of Bitcoin as a paperclipper comes up often now, fueled by its growth as an electricity consumer.
comment by Gurkenglas · 2018-07-30T15:24:51.546Z
Wasn't this averted by there being finitely many potential bitcoins, such that eventually miners only receive whatever fees the transactions are willing to pay?
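(For reference: the block subsidy started at 50 BTC and halves every 210,000 blocks, so total issuance is bounded by a geometric series,

$$\sum_{k=0}^{\infty} 210{,}000 \cdot \frac{50}{2^{k}} = 210{,}000 \cdot 50 \cdot 2 = 21{,}000{,}000\ \text{BTC},$$

after which miners are paid only out of transaction fees.)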
comment by avturchin · 2018-07-30T15:51:49.979Z
In Bitcoin's case probably yes, but there are other cryptocurrencies, which now consume around half of all mining electricity. It is a good example of a case where the initial "AI" has some anti-paperclipping properties, but its counterfactual copies don't.