The current market and regulatory effects of AI are driven as much by its limitations as by its capabilities, but this is a short-term situation. The impact of hallucination-prone language models won't generalize to the possibilities and dangers of next-generation AIs as the cognitive capability frontier of domain-specific financial AI expands.

Baseline model

We can explore this through a simple game-theoretic simulation that lets us represent the impact of structural cognitive advantages. First, we model a financial transaction as a game where

  1. Nature chooses a future state of the world.

  2. The Company and its Customer choose their actions simultaneously, knowing the probability distribution over Nature's choices but not the choice itself.

  3. Payoffs for Company and Customer are revealed based on the combination of their choices and Nature's, with everybody aware of the payoff matrix before the game is played.
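The basic game can be sketched as follows. The 2×2 sizes and uniform payoff distributions are illustrative assumptions for this sketch, not necessarily the parameters used in the actual simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (an assumption): two world states, two actions
# each for Company and Customer.
n_states, n_company, n_customer = 2, 2, 2

# Nature's prior over world states, known to both players.
prior = rng.dirichlet(np.ones(n_states))

# Payoff tensors indexed as payoff[state, company_action, customer_action],
# known to both players before the game is played.
company_payoff = rng.uniform(-1, 1, (n_states, n_company, n_customer))
customer_payoff = rng.uniform(-1, 1, (n_states, n_company, n_customer))

# Players move simultaneously without observing Nature's draw, so each
# evaluates actions through expected payoffs under the prior.
exp_company = np.tensordot(prior, company_payoff, axes=1)   # shape (2, 2)
exp_customer = np.tensordot(prior, customer_payoff, axes=1)
```

With both players restricted to the prior, the game reduces to an ordinary bimatrix game over these expected payoffs, which is what the regulator in the meta-game can then analyze.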

To introduce the impact of regulation, we turn this into a meta-game simulating a continuously changing industry:

  1. Innovators propose a random game with the structure described above but a random payoff matrix and probability distribution over world states.

  2. A regulator vetoes the game if any of its Nash equilibria has a negative payoff for the Customer.

  3. The Company refuses to offer the transaction if the expected value of its payoffs over Nash equilibria is negative.

  4. If the game hasn't been vetoed and the Company chooses to, the game is played.
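The four steps above can be sketched as a single round of the meta-game. Restricting attention to pure-strategy equilibria of the expected-payoff game (and dropping games without one) is a simplifying assumption of this sketch:

```python
import numpy as np

def pure_nash_equilibria(A, B):
    """Pure-strategy Nash equilibria of a bimatrix game: A holds the
    Company's (row player's) payoffs, B the Customer's (column player's)."""
    return [(i, j)
            for i in range(A.shape[0]) for j in range(A.shape[1])
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()]

def simulate_round(rng):
    """One step of the meta-game; returns the equilibrium (Company, Customer)
    payoff pairs if the game is played, or None if vetoed or declined."""
    # 1. Innovators propose a random game (2x2 sizes are illustrative).
    prior = rng.dirichlet(np.ones(2))
    cp = rng.uniform(-1, 1, (2, 2, 2))   # Company payoffs by state
    up = rng.uniform(-1, 1, (2, 2, 2))   # Customer payoffs by state
    A = np.tensordot(prior, cp, axes=1)  # expected payoffs under the prior
    B = np.tensordot(prior, up, axes=1)
    eqs = pure_nash_equilibria(A, B)
    # 2. Regulator vetoes if any equilibrium hurts the Customer (games
    #    with no pure equilibrium are also dropped here, a simplification).
    if not eqs or any(B[e] < 0 for e in eqs):
        return None
    # 3. Company declines if its expected payoff over equilibria is negative.
    if np.mean([A[e] for e in eqs]) < 0:
        return None
    # 4. Otherwise the game is played.
    return [(A[e], B[e]) for e in eqs]

rng = np.random.default_rng(0)
played = [g for g in (simulate_round(rng) for _ in range(1000)) if g]
```

By construction, every game that survives vetting has non-negative Customer payoffs at all of its evaluated equilibria, which is what produces the healthy baseline distribution described below.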

This is the aggregated behavior from simulations of this industry model. As expected:

  • Companies and Customers show a healthy distribution of positive payoffs.

  • Successful regulation protects Customers from negative payoffs.

The impact of significantly advanced AI

We modify our simulation to include a significant gap in cognitive capabilities between Company and Customer, and, implicitly, the regulator.

  • Random games are generated and vetted by the regulator as in the previous model.

  • When a game is played, the Company knows the state of the world and can simulate and predict Customer behavior.

  • Neither the Customer nor the regulator is aware of these capabilities.
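A minimal sketch of the modified simulation, using the same illustrative 2×2 setup and pure-strategy equilibrium check as before: vetting is unchanged, but at play time the Company observes Nature's draw and best-responds to the Customer's predicted equilibrium action.

```python
import numpy as np

def pure_nash(A, B):
    """Pure-strategy equilibria of the bimatrix game (A, B)."""
    return [(i, j) for i in range(2) for j in range(2)
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()]

rng = np.random.default_rng(1)
customer_payoffs = []
for _ in range(10_000):
    prior = rng.dirichlet(np.ones(2))
    cp = rng.uniform(-1, 1, (2, 2, 2))   # Company payoffs by state
    up = rng.uniform(-1, 1, (2, 2, 2))   # Customer payoffs by state
    A = np.tensordot(prior, cp, axes=1)  # expected payoffs under the prior
    B = np.tensordot(prior, up, axes=1)
    eqs = pure_nash(A, B)
    # Vetting is exactly as in the baseline meta-game...
    if not eqs or any(B[e] < 0 for e in eqs) or np.mean([A[e] for e in eqs]) < 0:
        continue
    # ...but play is not: the Company observes Nature's draw and
    # best-responds to the Customer's predicted equilibrium action.
    state = rng.choice(2, p=prior)
    _, j = eqs[rng.integers(len(eqs))]   # Customer sticks to equilibrium play
    i = int(cp[state, :, j].argmax())    # Company exploits the true state
    customer_payoffs.append(up[state, i, j])
```

Even though every played game passed the equilibrium-based veto, `customer_payoffs` will typically contain negative realized values: the Company's state-aware best responses lie outside the equilibria the regulator evaluated.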

From the point of view of the Customer and regulator, the structure of the game remains the simultaneous-move game described above, played under a shared prior over Nature's choice.

But the Company is really playing a different game: one where it acts knowing Nature's choice and the Customer's predicted behavior.

This stylized model of significantly advanced AI captures two important features:

  • Advanced inference capabilities can give a company effective access to information that regulators and Customers assume it doesn't have.

  • It's easier to hide from customers and regulators the true extent of AI-driven capabilities than it is to hide traditional legal or illegal information sources.

The combination of these features has a structural impact on market outcomes, as seen in the changed aggregated behavior from the modified simulation.

First we observe significantly improved payoffs for the Company. This is natural: better information is being leveraged to obtain better outcomes.

More surprising, and most relevant to structural and regulatory concerns, is the fact that Customers are now exposed to negative payoffs even if regulators retain their information access and veto power.

This is a straightforward consequence of how the model was built. Regulators evaluate possible games by looking at their Nash equilibria, which is a reasonable process for the game as it was defined in the original model. But the Company's increased capabilities allow it to engage with what's in practice a different system, one with optimal behaviors outside the equilibria evaluated by regulators.

The simplicity of this mechanism shouldn't detract from its importance. As a conceptual generalization, we see how the presence of significantly advanced unknown AI capabilities doesn't just lead to better outcomes for companies but can, on its own, drastically undermine the effectiveness of regulators.

Conclusions

The market and regulatory risks of significantly advanced AI, just as much as its promise, come not from absolute capability levels but from a widening and possibly opaque gap between the capabilities of companies and those of customers and, especially, regulators.

Paradoxically, we can already see the seeds of this problem in the AI investment patterns of regulatory bodies and ancillary organizations. Investments in tools, training, and conceptual methods are being driven by the marketing push of companies selling contemporary AI tools rather than by the particular needs and issues of the finance industry. That most of the industry is doing the same is irrelevant: given the competitive advantages of an AI gap, regulatory risks will be strongly influenced not by the median company but by the most advanced ones.

A proper strategic response doesn't require levels of AI investment above those already planned, but rather a reformulation of those investments around the core cognitive activities of financial and regulatory organizations, focusing on deep, specialized tools and aiming at significant new developments rather than incremental cost savings. Companies and regulators will need to engage in the same exercise: the former because of competitive possibilities and necessities, the latter to be able to keep up with them.

(Originally posted on my blog.)
