Machines to the Rescue

8 Min Read By: Thomas Vartanian

The following article is an excerpt from 200 Years of American Financial Panics: Crashes, Recessions, Depressions And The Technology That Will Change It All


Artificial intelligence has allowed us to enter the age of Big Data, where extremely large collections of digitized data can be analyzed computationally through the application of complex algorithms to reveal patterns, trends, and associations relating to human behavior and interactions. If you believe that history merely repeats itself, Big Data can be enormously profitable to the extent that it allows users to better predict economic outcomes.

The gap in this seamless evolution of technology is the government. If banks are now technology companies, the government should regulate them as such. That means that government regulators must also understand and use technology. But federal and state banking agencies still ground many decisions on the results of manually collected historical data and physical on-site examinations. There is still an important role for an examiner’s ability to look into the eyes of bank executives and discuss and debate the operations and safety and soundness of a bank during an on-site examination. It is also a critical way to identify and evaluate potential fraud and other misdeeds. But it can no longer be the main tool in a real-time environment.

The Panic of 2008 pointed regulators in the direction of evaluating future risks. For example, regulators now oversee the creation of elaborate bank resolution plans called living wills, sophisticated capital and stress testing under alternative financial scenarios as part of the Comprehensive Capital Analysis and Review (CCAR), and measurements of liquidity and risk management plans under similar duress. But the supervisory function should move to the next level and become fully focused on the comprehensive, real-time collection of data that can be analyzed by artificial intelligence algorithms to assess present and predict future economic and financial behavior.
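To make that idea concrete, the sketch below shows one rudimentary form such an early-warning tool could take: a simple classifier trained on historical macroeconomic indicators to score the probability of a stress event in the following year. It is only an illustration; the data file, the column names, and the "stress_next_year" label are hypothetical stand-ins for whatever a real-time supervisory data feed would actually supply.

```python
# A minimal sketch, assuming a hypothetical quarterly panel of macro indicators.
# Nothing here reflects an actual supervisory system; it only illustrates the
# shape of a predictive early-warning model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = pd.read_csv("macro_indicators.csv")            # hypothetical file, e.g., 1965 onward
features = ["household_leverage", "credit_growth",
            "house_price_to_income", "short_term_rate"]
X = data[features]
y = data["stress_next_year"]                          # 1 = stress event within four quarters

# Keep time order: train only on the past, then score the most recent quarters.
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
latest_risk = model.predict_proba(X_test)[:, 1]

# A supervisor might treat elevated probabilities as a prompt for closer
# review, not as a verdict.
print(latest_risk[-4:])
```

Even a toy like this makes the dependency plain: the output is only as good as the breadth and timeliness of the data behind it.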

Predicting the next financial crisis is comparable to forecasting the next hurricane. There are endless human, operational, and financial variables that may affect the outcome and timing. Artificial intelligence can be the bridge between the historically based microeconomic analysis that financial supervisors focus on and the predictive macroprudential regulation that can use Big Data to build a safer and sounder financial services network. The risks embedded in the financial statements of a bank are only a part of the challenge that it must confront. The risks inherent in the overall economy and financial networks will often have as much of an impact on the quality of the credit a bank has extended, and on its performance, as its own financial predicament, if not more.

Our current system of financial regulation is not only seriously challenged when it comes to averting or mitigating financial crises, it can often exacerbate them. Technology provides a solution because the supervision of financial institutions relies on “the evaluation of a vast quantity of objective and factual data against an equally vast body of well-defined rules with explicit objectives.”

Consider how artificial intelligence and Big Data could have impacted the Panic of 2008. Assume that a huge amount of macroeconomic and financial industry data going back to 1965 had been compiled and was being analyzed by sophisticated computer algorithms beginning in 2000. That data input would have covered the inception of interest and usury rate controls, the most volatile interest rate environment the country had ever experienced, the failure of a massive number of S&Ls and banks, the collapse of oil prices, risky lending in Latin America, several real estate development recessions, the junk bond boom and bust, the stock market collapse of 1987, dramatic changes in demography, the rise of mutual and money market funds, the emergence of asset management businesses, and the internet and social media explosion.

An integrated approach to the evaluation of financial data could also have included information related to the financial incentives and behavior, rational and irrational, that were built into the system. Socialized risk and short-term compensation incentives could have been factored into the mix, perhaps leading to a quicker grasp of how, for example, the securitization of assets ranging from home mortgages to credit cards had skewed the risk/reward formula. With better data sets and analysis, the government and industry executives would have had more reliable indications of developing crises years before they arrived.

What would have occurred if, years before the Panic of 2008, regulators and executives had accessed these new databases and run simulations that began to show red flags emerging? They would have seen, as early as 2000, disturbing data about the impact of increases in the amounts of outstanding credit, leverage, second and third mortgages, and default rates, and about the potential impact of several generations of variable-rate mortgages in rising-rate and decreasing-home-value scenarios. Intelligent machines could have analyzed data the government already had in ways the government itself could not. Red flags about the interrelated impact of reductions in credit quality, increases in credit availability, and the proliferation and interaction of shiny new financial products such as MBS, collateralized debt obligations, and credit default swaps would have been seen earlier and more clearly. The excessive risk created by parties with no skin in the game and few downside concerns would have been noticed, and financial incentives could perhaps have been adjusted. Intelligent computers would have produced alternative economic scenarios that regulators could have evaluated. If regulators had spent less time micro-supervising less important matters, they would have had the time to war-game how these events might intersect and to make appropriate course corrections.
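As a purely illustrative example of the kind of red-flag scan described above, the toy code below re-prices a hypothetical pool of adjustable-rate mortgages under a rising-rate scenario and flags the loans whose payments would outrun borrower income. The loan records, shock size, and debt-to-income threshold are invented for the illustration.

```python
# Illustrative only: flag adjustable-rate mortgages whose payments would breach
# a debt-to-income limit if rates rose sharply. All figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Loan:
    balance: float          # outstanding principal
    rate: float             # current annual rate, e.g., 0.04
    term_months: int        # remaining term in months
    monthly_income: float   # borrower's monthly income

def monthly_payment(balance, annual_rate, term_months):
    """Standard amortizing payment for a fixed rate over the remaining term."""
    r = annual_rate / 12
    return balance * r / (1 - (1 + r) ** -term_months)

def flag_payment_shock(loans, rate_shock=0.03, dti_limit=0.45):
    """Return (loan, shocked_payment) pairs whose payment-to-income ratio
    would exceed the limit after rates rise by rate_shock."""
    flagged = []
    for loan in loans:
        shocked = monthly_payment(loan.balance, loan.rate + rate_shock,
                                  loan.term_months)
        if shocked / loan.monthly_income > dti_limit:
            flagged.append((loan, shocked))
    return flagged

pool = [Loan(300_000, 0.04, 360, 6_000), Loan(450_000, 0.035, 300, 6_000)]
for loan, payment in flag_payment_shock(pool):
    print(f"red flag: shocked payment {payment:,.0f} vs income {loan.monthly_income:,.0f}")
```

Scaled up to millions of loan-level records and joined with data on securitization exposures and counterparty positions, this is the sort of simulation the paragraph above imagines regulators running years in advance.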

Congress, bank and investment banking executives, the SEC, and the Federal Reserve might have had the chance to realize that, under the developing circumstances, the capitalization and leverage ratios of firms like Bear Stearns and Lehman Brothers were dangerously low and were creating a massive systemic threat. Similarly, regulators and executives might have seen much earlier that AIG could not sustain a credit default swap exposure that was effectively insuring all of Wall Street. Better data and predictive analysis could have led to fuller public securities disclosures by Bear Stearns, Lehman Brothers, AIG, and Merrill Lynch about possible risk factors the companies were facing. That would have given shareholders the opportunity to speak through their platforms and, perhaps, alter the course of future events.

Technology, and particularly artificial intelligence, brings with it significant challenges. Artificial intelligence is a tool that relies on the integrity of the program, the programmer, and the data being used. It can be wrong, biased, corrupted, hijacked, stale, or simply based on bad data. Trusting artificial intelligence is an exercise in caution and discretion. Whether factual or not, the parable about the US Navy’s testing of artificial intelligence is instructive. As it goes, when the navy’s artificial intelligence application sensed that a simulated convoy was moving too slowly, it simply sank the two slowest ships in the convoy to speed up the convoy’s overall progress. That is hardly a solution that would work in the field of financial regulation.

The issues of “explainability” and “accountability” are extraordinarily important in the financial world. How does a financial institution explain why the predictive conclusions of a machine were followed or rejected, particularly after the outcome goes wrong? How can a decision made by an intelligent machine be challenged? How is the use of artificial intelligence impacted by privacy laws and the ability or inability to identify an accountable party? Can machines explain what their algorithms did or how they did it to satisfy the kinds of legal obligations that are imposed by the Fair Credit Reporting Act, the Equal Credit Opportunity Act, the Fair Housing Act, and the European General Data Protection Regulation to provide the borrower or customer with an explanation about why credit was denied?  
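One way to make the explainability question concrete: for a simple linear credit model, the contributions that pushed an applicant's score below a baseline can be ranked, and the most negative ones reported as adverse-action reasons, which is roughly the shape of explanation the statutes above contemplate. The weights, features, and baseline in this sketch are hypothetical, and whether output like this actually satisfies ECOA, FCRA, or GDPR obligations is a legal judgment, not a computation.

```python
# A minimal sketch of reason-code generation for a hypothetical linear credit
# model. Weights, baseline, and applicant values are invented for illustration.
import numpy as np

features = ["credit_utilization", "recent_delinquencies",
            "income_ratio", "account_age_years"]
weights = np.array([-2.0, -1.5, 0.8, 0.5])      # hypothetical fitted coefficients
baseline = np.array([0.30, 0.0, 1.0, 8.0])      # e.g., a typical approved applicant
applicant = np.array([0.85, 2.0, 0.9, 1.5])

# Contribution of each feature to the gap between this applicant and the baseline.
contributions = weights * (applicant - baseline)
score_gap = contributions.sum()

# The most negative contributions become the stated principal reasons for denial.
order = np.argsort(contributions)
reasons = [features[i] for i in order if contributions[i] < 0][:2]

print(f"score gap vs. baseline: {score_gap:+.2f}")
print("principal reasons:", reasons)
```

The harder questions in the paragraph above remain: who is accountable for the weights, how a borrower challenges them, and whether this kind of after-the-fact attribution counts as an explanation at all.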

Big Data, superintelligent and quantum computers, the cloud, complex algorithms, and artificial intelligence will increasingly provide governments with tools that will dramatically increase their ability to predict and avert future economic disasters. While those systems will never be foolproof, they will increase the opportunity for the government and businesses to make course corrections based on a wider and clearer field of vision. They will potentially give regulators better intelligence and more time to improve and adapt financial regulation, monetary and interest rate controls, and economic responses to impending downturns. Imagine being able to avoid the next financial crisis or, more realistically, to lessen its impact because of decisions made based on information produced by algorithms feverishly analyzing sets of Big Data years before. Having substantially more data that can be analyzed quickly by intelligent machines could alter the course of financial history and create a smarter and more effective system of financial supervision. Every day that passes without this technological tool in the government’s pocket is another day the economy potentially creeps closer to the next financial Armageddon without any clear warning.


200 Years of American Financial Panics: Crashes, Recessions, Depressions And The Technology That Will Change It All is available from Prometheus Books and all online book outlets.

