Chapter 3

A New Balance


Of course, the Bank of England has long used analytics. But prior to Carney’s arrival and its new supervisory mandates, the pace of analytics was not typically at the speed associated with big-data analytics, where large volumes of data can be gathered at frequencies approaching real time.

To help create datasets to support One Bank’s analytics goals, the Bank took its first-ever data inventory to see what kinds of datasets it had in house. Inventory “sounds quite boring,” says Hogg, “but it’s pretty fundamental. We need to know what we’ve got to know how to manage it.” Another reason the inventory was important: It would make it easier to aggregate datasets to help with policy decisions.

The inventory took most of a year and turned up nearly 1,000 datasets. Choueiri says he set up a data inventory tool to tag each dataset across a list of 14 categories, which are searchable on the Bank’s intranet. The inventory would make it clear which datasets can be used for which purposes; for instance, when the Bank collects data from an external source, the inventory also captures the purpose for which the Bank has agreed to use the data. The data inventory thus helps ensure that the Bank is compliant with legal restrictions on the data it has.
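A purpose-aware catalog entry of this kind can be sketched in a few lines. The field names and category values below are illustrative assumptions, not the Bank's actual 14-category schema, which has not been published:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """One hypothetical inventory entry; fields are illustrative."""
    name: str
    source: str                # internal division or external provider
    permitted_purposes: set    # uses the Bank has agreed to
    tags: dict = field(default_factory=dict)  # e.g. frequency, format

def may_use(record: DatasetRecord, purpose: str) -> bool:
    """Check a proposed use against the purposes agreed with the source."""
    return purpose in record.permitted_purposes

# A made-up example entry.
loans = DatasetRecord(
    name="mortgage-loan-level",
    source="external-regulated-firms",
    permitted_purposes={"financial-stability", "supervision"},
    tags={"frequency": "quarterly", "format": "structured"},
)

print(may_use(loans, "supervision"))      # permitted
print(may_use(loans, "monetary-policy"))  # not permitted
```

Making the permitted purposes an explicit field is what turns the catalog from a passive list into a compliance check: any query for data can be filtered against the agreed use before access is granted.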

Analytics requires a balancing act of sorts at the Bank, given the different missions of the institution. Monetary policy and insurance regulation, for instance, use vastly different data and aim to accomplish different goals. While various parts of the Bank often need access to the same data, some of it has to be kept restricted for limited use because of regulatory provisions or because the Bank has agreed to use it for only certain purposes. But much of the data does not need to be restricted, and creating broader access can boost policy making because of reduced duplication of effort. Better policy making expands the value of the information.

Along with the data inventory, the Bank’s IT department was also putting in place the tools and structures it wanted for advanced, big data–style analytics. The datasets used for the Bank’s macroeconomic charter — measures like unemployment, consumer pricing, and productivity — are comprehensive for their purposes, but are neither especially large nor do they operate in anything approaching real time. “Historically, data collection has been very specific, with systems built for each one of the collections,” Choueiri says. The Bank is moving to more general tools to increase its flexibility, a move it is undertaking as part of the three-year One Bank data architecture program.

Next stages for data management will include building a data architecture to more effectively handle the various kinds of structured and especially unstructured data, such as text, that the Bank has or expects to get in order to help policy makers. And the Bank has worked to consolidate the use of tools for analyzing data and to move people off of Excel as the primary analytics tool. Reducing the number of specialized data tools in use at the Bank should make it easier for people from different parts of the Bank to share data and even work together on certain projects, with the end result being better policy decisions.

“This Stuff Is Brilliant”

Any time an organization tries to centralize control, it runs the risk of rebellion. Choueiri says he’s aware of CDOs who find themselves fighting pitched battles within their organization as they try to bring data together. He says the One Bank platform has largely helped him to avoid this at the Bank of England.

It helps, says Hogg, both that the Bank is analytically inclined by its nature and that people who work there do so out of a sense of public service. “I’ve found here that if what you’re doing is clearly in the interests of the mission of the institution, people tend to welcome it,” she says. “And this stuff is brilliant, right? I mean, your ability to be able to get a handle on different sources of data is really powerful, and people can see how that will benefit their work.”

Sujit Kapadia, head of research, is one of those beneficiaries. An economist who has been at the Bank since 2005, he says One Bank offers “a natural mechanism” for bringing together different perspectives from within the Bank. Adding this kind of diversity is valuable as the Bank looks to apply lessons learned in the 2008 economic crisis, which Kapadia says “caused us to rethink some of the conventional ways of approaching economics and finance and regulation.” There were also huge increases in the quantity and level of detail in the available data.

In July 2014, those factors all went into a workshop called “Big Data and Central Banks,” where the Bank brought together people from 20 central banks across the globe and external topic experts to discuss the impact of big data, defined as “datasets that are granular, high frequency, and/or non-numeric,” on central bank policy making.18 Breaking these characteristics down: granular means per item (for each loan or each security), high frequency means frequently updated, and non-numeric means data, such as text, drawn from widely varied sources. Historically, the Bank of England has used little in the way of big data; its datasets were typically highly structured and (when reported) were typically reported quarterly. But the Bank had been a fairly advanced adopter, for a central bank, of nontraditional data — for example, using Google data to look at housing and employment market conditions in 2011, examining the impact of high-frequency trading on stock markets by looking at equity transactions, and looking at credit swaps and liquidity management using high-frequency datasets.19

Such high-frequency datasets have not traditionally been in wide use for macroeconomic policy recommendations by the Bank. Speakers at the workshop (who were not identified by name) showed results from their work demonstrating that micro data could yield macro patterns, especially when visualization tools were used effectively.

For the Bank of England, with 300 years of macroeconomic data available on its website, the addition of much higher-frequency data represents an interesting development. It opens datasets that, for instance, can enhance understanding of how a monetary policy action like changing an interest rate affects the financial system.

Joy and Stress (Tests)

The mere creation of a chief data office sparked unusual emotion in some corners of the Bank. “I was whooping with joy, literally,” says Nathanael Benjamin, head of division for financial risk and resilience at the Bank. He knew that a CDO would give him easier access to the data and tools he needed to do his job. A major reason that mattered was the stress tests for banks and insurers. Stress tests use analytics to look at a bank’s financial structure and evaluate whether it could withstand different kinds of severe but plausible financial shocks: from short, sharp ones like a stock market crash to waves of bad economic developments that play out over months or even years. If it can’t, policy makers look at why, and tell the bank what it must do to prepare itself.
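The core of a stress test can be illustrated with a toy calculation. This is a deliberately stylized sketch: the balance-sheet numbers, loss rates, and the 4.5% minimum capital ratio below are all illustrative assumptions, not figures from the Bank's actual tests:

```python
# Apply a severe-but-plausible shock scenario to a stylized balance
# sheet and check whether capital stays above a regulatory minimum.

def stressed_capital_ratio(assets, capital, loss_rates):
    """Apply per-asset-class loss rates; losses reduce both assets and capital."""
    losses = sum(assets[k] * loss_rates.get(k, 0.0) for k in assets)
    stressed_assets = sum(assets.values()) - losses
    stressed_capital = capital - losses
    return stressed_capital / stressed_assets

# Illustrative balance sheet (in arbitrary currency units).
bank = {"mortgages": 600.0, "corporate_loans": 300.0, "securities": 100.0}
capital = 80.0

# Scenario: sharp house-price fall plus a corporate downturn.
scenario = {"mortgages": 0.04, "corporate_loans": 0.08, "securities": 0.10}

MINIMUM_RATIO = 0.045  # illustrative threshold, not the actual rule

ratio = stressed_capital_ratio(bank, capital, scenario)
print(f"stressed capital ratio: {ratio:.1%}")
print("passes" if ratio >= MINIMUM_RATIO else "fails: remedial action needed")
```

In this toy scenario the shock wipes out most of the capital buffer and the bank fails the test, which is exactly the point where, as Benjamin describes, policy makers would look at why and prescribe remedial action. Real tests run many such scenarios over multi-year horizons with far richer balance-sheet data.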

The rise to prominence of stress tests was triggered by the 2008 crisis, and Benjamin was involved in the early days of this evolution due to his experience in quantitative risk analysis and in regulation. He was on temporary assignment to the Federal Reserve Bank of New York from 2008 to 2010, and took part in the very first supervisory stress tests of major U.S. banks. “That worked really well, but it was painful,” he says. “It was the first time we were asking firms for this type of data, and it was the first time the firms had to provide it to us — and even to themselves sometimes. We found ourselves in a lot of situations where firms weren’t able to get hold of the data in a timely manner and really struggled to drill down and aggregate that risk data.”

In the end, it worked. But Benjamin saw the need to manage — with conviction — a well-defined data strategy for the risk-related data relevant to stress testing. The Bank of England now has such a strategy. Although it involves a great deal more data collection than before, it is being carried out under a very different regime from the one in place in the U.S., where central banks tend to seek out and gather every morsel of data. At the Bank of England, the data will be big, but it won’t be all-encompassing.

For stress tests, “we’re trying to get the cut of the data that tells us what we need to know, but not necessarily much more,” says Benjamin. He says vacuuming up large quantities of this data is very resource-intensive, requiring processing, checking, translating, and analyzing. There can be diminishing returns in asking for more. “We are, on purpose, targeting a middle ground in terms of the data we ask for,” he says.

The Bank of England is now running stress tests concurrently at seven banks each year. When it started, it was only able to perform them sequentially and could only do two banks a year. The increase is valuable, not just for scope but also because concurrent stress tests provide a better idea of the overall strength of the banking sector and permit the consistent exercise of supervisory judgment through benchmarking. In short, these tests help regulators ask the right questions.

Advancing Analytics Across the Bank

Part of the charge for analytics has fallen to Andrew Haldane, who had been executive director for financial stability prior to Carney’s arrival and is now the Bank’s chief economist. Haldane is a prolific researcher who has established himself as a bold, almost maverick, economist, talking publicly about setting negative interest rates20 and replacing cash with digital currency. As part of Carney’s sweeping reorganization, announced in March 2014, Haldane swapped jobs with then-chief economist Spencer Dale.21 Haldane has embraced a cross-departmental structure and created the research unit that Kapadia heads. In this unit, five or six full-time employees are charged with working on cross-cutting research projects spanning all of the Bank’s responsibilities. Additionally, members of different departments at the Bank rotate in for various project periods, almost like research fellows. Haldane also brought Paul Robinson on board to build the advanced analytics unit — basically a center of analytics expertise within the Bank.

Robinson is another returnee, having left for several years for the private sector. When he arrived back at the Bank, the advanced analytics unit had just four people. Now, it has 12 to 13 people, mostly new. They come from nontraditional backgrounds like physics and computer science.

“We have lots of extremely sophisticated, very highly qualified, and highly numerate economists. We wanted to supplement them with people who had a different background and were used to modeling other sorts of phenomena,” Robinson says. Bringing in people from other spheres of knowledge also expanded techniques that could be used; for instance, among the techniques being used more frequently now are agent-based modeling and network analysis.

The analysis underlying the recommendations set out in the June 2014 Financial Stability Report helped underscore that the new analytics wasn’t just a nice set of tools for the already analytical parts of the organization. It helped to show just what the Bank might be able to do with its new access to transaction data. And it reinforced the Bank’s efforts to improve cross-group work. The potential for improved policy decisions is obvious, as is the likelihood that the Bank will be able to respond more quickly to market events.

Unstructuring the Data

The Bank’s experience with analytics was largely derived from its use of structured data. It ran a creative experiment analyzing new kinds of unstructured data when Scottish voters were preparing to vote on whether to leave the United Kingdom in September 2014. One IT staff member built a feed from Twitter to look for signs of a potential run on Scottish banks.22

The feed looked for terms like “run” and financial institutions such as “RBS” (Royal Bank of Scotland) and the like. The Sunday before the referendum, Twitter saw a spike of “RBS” mentions. It turns out that it wasn’t a sign people were planning to flood the Royal Bank of Scotland the next morning, but that an American football game was starting — the mentions of “RBS” identified in the Twitter feed were references to running backs. The football players didn’t stiff-arm the whole test, however; there were enough relevant tweets to show that unstructured data sources could provide useful information to Bank policy makers if they needed to quickly respond to something — an important lesson about unstructured data. Plus, it hadn’t been hard to do; the IT developer who built the feed did it working at home, in his bedroom.
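The monitoring logic described here can be sketched simply. The tweets, keyword lists, and context filter below are made-up assumptions (the real feed's implementation details were not published), but they show both the basic counting and one way to guard against the "running backs" problem, by requiring a financial context word alongside the keyword:

```python
import re
from collections import Counter

# Illustrative keyword lists; the actual terms monitored were not disclosed.
KEYWORDS = {"rbs", "run", "withdraw"}
# Requiring a co-occurring financial term filters out football chatter.
FINANCIAL_CONTEXT = {"bank", "branch", "cash", "account", "queue", "queues"}

def count_hits(tweets):
    """Count keyword mentions, keeping only tweets with financial context."""
    hits = Counter()
    for t in tweets:
        words = set(re.findall(r"[a-z]+", t.lower()))
        if words & KEYWORDS and words & FINANCIAL_CONTEXT:
            hits.update(words & KEYWORDS)
    return hits

# Hypothetical sample feed.
tweets = [
    "Queuing at my RBS branch to withdraw cash before Thursday",
    "Huge run by the RBS running backs tonight!",   # football noise
    "Is there a run on the bank? RBS queues everywhere",
]
print(count_hits(tweets))  # the football tweet is filtered out
```

In a live version, counts per time window would be compared against a rolling baseline so that a genuine spike, rather than a single busy minute, triggers attention.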

Another experiment for analytics was to set up a Hadoop data framework, an open-source platform for handling large amounts of data on relatively inexpensive hardware. It was built as part of a data lab project meant to give the Bank an analytics sandbox to play in, a tool to experiment with cutting-edge analytics techniques on things like what an entire day’s trading records on a stock exchange might mean for bank stability.

Zinging results out of a Hadoop cluster sparked active debate within the traditional IT department. Some members raised valid concerns: The cluster didn’t have the typical IT controls; it was unclear how it would be secured or managed; and it wasn’t even clear who would handle system backups, since the cluster was set up outside of IT. This kind of discussion often takes place when an organization adopts a dual IT structure, adding a group for emerging technologies to run in parallel to the traditional organization. In this case, the issues were resolved by isolating the Hadoop cluster from key regulatory systems.

Overcoming Overfitting

Central banks deal not just in real-world economic conditions but also in theoretical scenarios and in rare events like major financial crises, making it harder to use actual circumstances to prove the models are accurate. Robinson calls this a key challenge for his unit, one that means the unit has to show rigor and robust explanations for its decisions. Organizations expanding into big-data analytics must have someone looking out for some decidedly abstract concerns, such as overfitting in the models. In overfitting, as the number of variables that might explain a set of observations increases, the chances grow that the models will come up with spurious relationships among the variables. “Then, as soon as you start using them outside the sample, they are utterly hopeless,” Robinson says.
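The failure mode Robinson describes is easy to reproduce. In this minimal demonstration, a linear model is fit to pure noise using many candidate variables relative to the number of observations; it appears to explain the sample well, then does far worse on fresh data. The sizes and seed are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Few observations, many candidate explanatory variables.
n_train, n_test, n_features = 30, 30, 25
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))
y_train = rng.normal(size=n_train)  # pure noise: no real relationship
y_test = rng.normal(size=n_test)

# Ordinary least squares fit on the training sample.
beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def r_squared(X, y, beta):
    """Fraction of variation around zero explained by the fitted model."""
    resid = y - X @ beta
    return 1 - (resid @ resid) / (y @ y)

print(f"in-sample R^2:     {r_squared(X_train, y_train, beta):.2f}")
print(f"out-of-sample R^2: {r_squared(X_test, y_test, beta):.2f}")
```

With 25 variables and only 30 observations, the in-sample fit looks impressive even though there is nothing to find; out of sample, the same coefficients are, in Robinson's phrase, utterly hopeless. Guarding against this with held-out data and simpler models is part of the rigor his unit has to demonstrate.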

Choueiri says the Bank isn’t yet really doing big data. He says there’s a huge variety of data analyzed at the Bank, but the volumes are not yet anywhere near what the private sector examines. That will change. His big-data platform will launch in 2016 and run in parallel to the existing systems to make sure it’s ready to handle heavier data analytical workloads. The other twist with big data: as data gets more granular, it is also demanded faster, and that speed might mean a drop in accuracy. “We’re potentially getting away from the notion that data has to be 100% accurate,” says Choueiri. “What we cannot do is wait months to obtain a highly accurate dataset” in some instances. That’s a huge cultural shift for central bankers.