You know what i find really interesting

Discussion in 'Automated Trading' started by pinetboltz, Jan 31, 2017.

  1. pinetboltz

    pinetboltz

    So there's been a lot of discussion about deep learning's impact on the future of trading.

    Chess, Go, and now Poker seem to have been cracked by deep learning algos. Bloomberg had an article today: https://www.bloomberg.com/news/arti...year-quest-to-build-computers-that-play-poker

    However, there's a section towards the bottom of the article, which contains a very interesting nugget of info:

    (Emphasis added)

    If "flawed" here means irrational, then in a strange way, it would seem that the Achilles heel of these algos would actually be irrational behavior by their opponents? Irrational behavior seems to be a feature of the financial markets (or at least to the extent there are enough participants who are not following a perfectly rational strategy).

    All of this adds an interesting dimension, in that the questions that come up are:
    - If the algos can pick off professionals but "fall apart" when irrational players are added to the mix, then the algos have exploitable vulnerabilities whenever irrational behavior is present. What is the inherent source of these vulnerabilities in the algos' strategies (e.g., does the randomness of opponents' irrational behavior erode the mathematical expectation of their predictive models)? What is it about irrational behavior that causes the models to fall apart?

    - Given the descriptions of how the deep learning algos use parallel computing (the equivalent of thousands of cores of computational power), their strategies must be about as optimized as it gets. Why haven't the vulnerabilities to irrational behavior been patched? That is, is it inevitable that a deep learning strategy that is "perfect" in theory will still "fall apart" in practice, or is it simply that the algos haven't yet been optimized for the presence of irrational play?
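To make the expectation-erosion idea concrete, here's a toy sketch (not from the article; the opponent frequencies are made up). In rock-paper-scissors, a best response tuned to a modeled opponent earns a positive edge against that model, but goes negative when the real opponent deviates erratically, while the equilibrium (Nash) strategy is merely indifferent:

```python
# Moves: 0 = rock, 1 = paper, 2 = scissors.
def expected_payoff(p, q):
    """Expected payoff of mixed strategy p against opponent mix q."""
    win = {(0, 2), (1, 0), (2, 1)}  # (my move, their move) pairs I win
    ev = 0.0
    for i in range(3):
        for j in range(3):
            if (i, j) in win:
                ev += p[i] * q[j]
            elif (j, i) in win:
                ev -= p[i] * q[j]
    return ev

# Suppose our model says the opponent overplays rock (60% of the time).
modelled = [0.6, 0.2, 0.2]
best_response = [0.0, 1.0, 0.0]   # always paper: the exploit of that model

# Against the modelled opponent, the exploit earns roughly +0.4 per game...
print(expected_payoff(best_response, modelled))

# ...but an erratic ("flawed") opponent who actually overplays scissors
# turns the same exploit into roughly -0.4 expectation per game.
erratic = [0.2, 0.2, 0.6]
print(expected_payoff(best_response, erratic))

# The equilibrium mix is indifferent to either opponent: EV is zero,
# so it can't be hurt by irrationality, but it can't profit from it either.
nash = [1/3, 1/3, 1/3]
print(expected_payoff(nash, erratic))
```

The exploitative strategy's edge comes entirely from the opponent model being right, which is one way to read why the poker algos "fall apart" when flawed players break the model.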
     
    tommcginnis likes this.
  2. vanzandt

    vanzandt

    Bingo! They can be trick fvcked in a big way if one has the AUM to do it.

    See: Star Trek Episode 36, Season 2:
    Wolf in the Fold

    Spock orders the computer to calculate pi to the last digit.
     
    pinetboltz and tommcginnis like this.
  3. Baron

    Baron ET Founder

    As a supplemental resource to this thread, here's a site that provides a quick 6-minute overview of the various components of Artificial Intelligence such as Machine Learning, Deep Learning, NLP, etc. so you can wrap your head around everything.

    A 6 minute Intro to AI
     
  4. pinetboltz

    pinetboltz

    Thanks for the replies guys.

    Another thought came to me earlier today, re the vulnerabilities of deep learning.

    It seems that the "flawed" players threw a wrench in the works when they were added to the multiplayer games, as measured by the decay in the deep learning algos' performance. Could it be because the deep learning models were constructed by having the algos train on professional play? If so, when the "flawed" players joined the multiplayer games, the algos could not easily optimize against their erratic behavior.

    It's like Tolstoy's line, "All happy families are alike; each unhappy family is unhappy in its own way." The deep learning models perform in the first place precisely because they were trained on professional players (the happy families here), but in multiplayer games involving flawed players there is no longer a single consistent paradigm, because the flawed players deviate from the "professional" paradigm in all kinds of ways (similar to the wide variety among the unhappy families).

    Perhaps the deep learning algos could then be beaten when (a) the games are multiplayer, (b) flawed players are present, i.e., players whose strategies deviate enough from the professionals', and (c) the deep learning algo's performance suffers as it tries to respond to the flawed players' behavior, which opens up an opportunity to profit at the expense of both the flawed players and the deep learning algos.
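One way to picture the train-on-professionals problem is as a distribution shift. A minimal sketch, with made-up action frequencies (not from the article): a model fit on "professional" play is measurably more surprised, i.e., has higher cross-entropy, on "flawed" play than on the play it was trained on:

```python
import math

# Hypothetical action frequencies, purely illustrative.
professional = {"fold": 0.6, "call": 0.3, "raise": 0.1}   # training distribution
flawed       = {"fold": 0.2, "call": 0.2, "raise": 0.6}   # erratic player

def cross_entropy(true, model):
    """Expected surprise (in bits) of `model` on data drawn from `true`.
    Lower is better; it is minimized when model == true."""
    return -sum(p * math.log2(model[a]) for a, p in true.items())

# A model fit on professional play predicts professionals well...
h_pro = cross_entropy(professional, professional)
# ...but is badly surprised by flawed play, because it assigns low
# probability to the raises the flawed player actually makes.
h_flawed = cross_entropy(flawed, professional)

print(h_pro, h_flawed)   # the mismatch cost is the gap between them
```

The gap between the two numbers is exactly the "no single consistent paradigm" problem: the model's predictions are only as good as the match between its training distribution and the players actually at the table.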
     
  5. quant1

    quant1

    A few things come to mind:

    From a trading perspective:

    - Poorly written algorithms are very easily confused by irrationality. Spoofing is one example of a way to trick an algorithm.

    - Many algorithms are designed to capture arbitrage opportunities. In this case, the target is in fact the irrationality you speak of. For example, the algo knows that A should be worth $2 when B is $1. If someone trades B at $1.50, the algo buys 1 A and sells 2 B for a net credit of (-$2 + 2 x $1.50) = $1. This is "free" money courtesy of irrationality.
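The arbitrage arithmetic above as a few lines of code (the 2-to-1 relationship and the prices are the illustrative numbers from the example, not real instruments):

```python
RATIO = 2          # fair value: 1 unit of A is worth RATIO units of B
price_a = 2.00     # A still trades at fair value
price_b = 1.50     # someone "irrationally" pays up for B

def arb_credit(price_a, price_b, ratio=RATIO):
    # Buy 1 A, sell `ratio` B: the position is flat relative to the fair
    # 1-to-RATIO relationship, so any cash left over is locked-in profit.
    return -price_a + ratio * price_b

print(arb_credit(price_a, price_b))  # 1.0, the $1 credit from the example
print(arb_credit(2.00, 1.00))        # 0.0: at fair prices there is no arb
```

Note the profit exists only while someone keeps trading B away from fair value, which is why these algos want irrationality rather than being broken by it.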

    From a math perspective:

    - Chess, Go, and many other games are solvable in principle: there are only finitely many states that can exist. If a game cannot be quantified in this manner, optimal solutions are often derived from heuristics and deep learning methods. This enters the world of probability, in which we say "this move is optimal most of the time." Once we lose determinism, the path to cracking a game is far more difficult.

    - By construction, deep learning methods are mostly deterministic. I have not come across many algorithms that have a random component. One exception is a limit hold'em computer that apparently has a randomized bluffing component.
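The "finitely many states" point can be seen in miniature: tic-tac-toe is small enough to solve exhaustively with plain minimax, no heuristics or probabilities needed. A sketch (standard textbook algorithm, nothing thread-specific):

```python
def winner(b):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] is not None and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Game value with perfect play: +1 if X wins, -1 if O wins, 0 if drawn."""
    w = winner(b)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, c in enumerate(b) if c is None]
    if not moves:                      # board full, no winner: draw
        return 0
    scores = []
    for m in moves:                    # enumerate every reachable state
        b2 = list(b)
        b2[m] = player
        scores.append(minimax(b2, 'O' if player == 'X' else 'X'))
    return max(scores) if player == 'X' else min(scores)

# Perfect play from the empty board is a draw: value 0.
print(minimax([None] * 9, 'X'))
```

Chess and Go have the same finite structure, just with state spaces far too large to enumerate this way, which is exactly why heuristic and deep learning evaluations enter the picture.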

    Great question and responses
     
    pinetboltz likes this.
  6. The biggest issue here seems to be that "the equilibrium strategy that the AIs use falls apart in multiplayer games." Equilibrium doesn't even exist in complex systems, at most a steady state. Any thoughts?
     
  7. userque

    userque

    Points to consider:

    • Not all algos work the same. Simply because one algo can become confused by such-and-such, doesn't imply all algos can be fooled similarly.
    • Most algos will perform less than optimally when presented with inputs they weren't trained on. Solution: train the algo on those types of inputs.
    • Randomness makes things difficult, but not necessarily impossible. Algos can learn to ignore random inputs, or more accurately, inputs that contribute nothing to the solution. Where randomness is great, the algo can learn to rely more upon probabilities, money management, etc., whether in the financial sense or the poker sense.
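One standard instance of "rely more upon probabilities and money management" (my example, not mentioned in the thread) is Kelly bet sizing: the edge and the odds set the fraction of bankroll to risk, and no edge means no bet, regardless of how random the individual outcomes are:

```python
def kelly_fraction(p, b):
    """Kelly fraction for a bet paying b-to-1 with win probability p:
    f* = p - (1 - p) / b.  Clamped at 0: a negative value means no edge."""
    f = p - (1 - p) / b
    return max(f, 0.0)

# A 55% winner on an even-money bet sizes at about 10% of bankroll...
print(kelly_fraction(0.55, 1.0))
# ...a fair coin flip sizes at zero: randomness alone is not a reason to bet.
print(kelly_fraction(0.50, 1.0))
```

This is the money-management side of the point above: the algo doesn't need to predict any single random outcome, only to size positions so its probabilistic edge compounds.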