The NBA added the 3-point line in the 1979-1980 season. At the time, I’m guessing it felt more like a gimmick than a revolution. The NBA Finals that year featured a grand total of one made 3-pointer, across six games.
Suffice it to say, things have changed. In the 2017-2018 season, the Houston Rockets attempted 3,470 3-pointers, more than the total number of 3-pointers attempted by any NBA team throughout the entire 1980s. That translates to nearly one 3-point attempt per minute of game time, all season.
It took 40 years and the overwhelming success of pioneering teams like the Rockets and the Golden State Warriors, but now all NBA teams are pushing a relatively obvious idea — the possibility of 3 points is better than the possibility of 2 points — to its logical extreme.
When the rules of the game change, new opportunities emerge. But it can take a long time for market participants to take notice and adjust their strategies accordingly.
And this is why I find the emergence of modern artificial intelligence (AI) techniques like machine learning (ML) so interesting. Because these new analytic techniques have changed the game, and most companies in most industries (including financial services) haven’t caught on yet.
The AI Rule Change
Today, banks sit on massive amounts of data, which they’ve spent the last 15-20 years wrangling. They then take very small subsets of that data and write static decision rules in a variety of different legacy systems, with the end goal of transforming that data into answers that will help them run their business.
In this environment, you want to avoid biting off more than you can chew. If your dataset is too large or unorganized, you won’t be able to efficiently analyze it. If you write too many rules to help analyze the data, you may overburden your organization with a codebase that is difficult to maintain (point of reference: there are already 220 billion lines of COBOL code in use by banks today).
So the best practice is to be disciplined. Don’t overreach. Eat the elephant one bite at a time.
AI turns this entire paradigm on its head. At a fundamental level, what specific AI techniques like machine learning allow banks to do is to combine the data that they have with the answers that they want to look for in order to generate rules and models that will help them make better predictions and better decisions.
In this new environment, data goes from being the thing companies are drowning in to the fuel that drives their ability to deliver compelling and differentiated outcomes to their customers. The more data you have, the faster you can train machine learning algorithms, which can then produce better-performing rules and models that you can use to run your business.
So the new best practice, enabled by AI, is to reorient your business model to generate as much proprietary data as possible. In other words, to go from eating the elephant one bite at a time to stockpiling as many elephants as you can get your hands on.
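To make the contrast concrete, here is a minimal sketch of the difference between a hand-written static rule and a rule whose parameter is learned from data plus answers. The fraud-flagging scenario, the dataset, and the cutoff values are all invented for illustration; no real bank's logic is this simple.

```python
# Toy contrast: a hand-written static rule vs. a rule "learned" from labeled data.
# Everything here (the scenario, the numbers) is a hypothetical illustration.

def static_rule(amount):
    """Hand-written rule: flag any transaction over a fixed, analyst-chosen cutoff."""
    return amount > 500

def learn_threshold(amounts, labels):
    """'Train' a one-parameter model: choose the cutoff that best separates
    flagged (True) from unflagged (False) transactions in the historical data."""
    def accuracy(cutoff):
        return sum((a > cutoff) == lab for a, lab in zip(amounts, labels))
    return max(sorted(set(amounts)), key=accuracy)

# Hypothetical labeled history: (transaction amount, was it fraud?)
history = [(120, False), (340, False), (610, False), (980, True),
           (1500, True), (870, True), (450, False), (760, False)]
amounts = [a for a, _ in history]
labels = [lab for _, lab in history]

cutoff = learn_threshold(amounts, labels)   # 760 on this data
learned_rule = lambda amount: amount > cutoff
```

The static rule encodes an analyst's guess; the learned rule improves as more labeled history accumulates, which is exactly why data volume becomes the scarce resource.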
Just as in the early days of the 3-pointer, there aren’t many examples of companies that are actively trying to capitalize on this new proprietary data opportunity. However, there are a couple of industries where the impacts of this ‘AI rule change’ are starting to play out.
The automotive industry is my favorite example.
Solving for Fully Autonomous Driving
The hardware necessary to facilitate autonomous driving — cameras, LIDAR, GPS — generates a lot of data. In fact, a single autonomous car driving on the road generates about as much data as 3,000 people going about their everyday activities, according to estimates from Intel. So if we get to 1 million autonomous-capable cars on the road (which isn’t a huge stretch), we’ll generate 3 billion people’s worth of data.
At first glance, that sounds like an unmanageable flood of data. But if you approach the problem from the perspective of ‘data as fuel’, you might not think so.
Tesla has 500,000+ vehicles equipped with self-driving hardware on the road today. Each of these vehicles is constantly, passively collecting data. Collectively, this fleet drives approximately 15 million miles a day. That extrapolates to nearly 5.5 billion miles a year, or 200 times the miles expected to be driven by Tesla’s biggest autonomous driving competitor (Waymo) a year from now.
What’s notable about Tesla (and something you’ll already know if you’re a customer or stockholder) is that they haven’t exactly been great at delivering, on time, the end vision of fully autonomous driving. Tesla’s Hardware 2 (included in all cars manufactured after October 2016) doesn’t come close to facilitating Level 5 autonomous driving (as defined by the Society of Automotive Engineers). And it may never be able to (at least not without additional upgrades). Fully autonomous driving is an incredibly difficult challenge to solve.
But that shouldn’t obscure this important fact — Tesla has a huge lead on Waymo and every other major player in the automotive space when it comes to fully autonomous vehicles because, very early on, they recognized that autonomous driving wouldn’t be feasible without AI-powered self-driving algorithms and you wouldn’t be able to train algorithms capable of Level 5 autonomous driving without a ton of training data.
So Tesla built into each of its cars the technology to capture the data necessary to train self-driving algorithms, even though those algorithms, years later, still aren’t close to being commercially viable.
In the age of AI and ML, the rapid creation and collection of data (particularly proprietary data) is the competitive advantage. Whether it’s Tesla and self-driving cars, Amazon and the Echo, or Netflix and original content, forward-thinking companies are putting strategies in place to create a sustainable competitive advantage built around data.
A Data-First Strategy in Financial Services
In financial services, the question is: what does a similarly forward-looking strategy look like? Here are a few suggestions:
Use your data better. Internal silos often prevent banks from leveraging the full value of the data they already have. The recent growth of small business lending by non-banks provides a good example of what can be achieved in the absence of such silos. Payment providers like Amazon, Square, PayPal, and Stripe have made massive progress in lending to small businesses, not by offering super competitive pricing, but by leveraging the transactional payment data they already have to proactively identify clients that may need extra liquidity and then making that liquidity instantly available (in the form of pre-approved loan offers). It’s a very effective strategy, as PayPal’s results prove (from a recent Jim Marous (@JimMarous) article on the Financial Brand):
In the first quarter of 2019, PayPal announced that it had provided more than $10 billion in loans to more than 225,000 small businesses around the globe after only a little more than five years. “It took PayPal twenty-three months to get to the first $1 billion in lending and now we’re hitting more than $1 billion per quarter,” said Darrell Esch, vice president of global credit at PayPal.
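A highly simplified sketch of that "payment data as underwriting" approach might look like the following. The qualification rule, the thresholds, and the 10%-of-volume offer sizing are invented for illustration; this is not PayPal's (or anyone's) actual underwriting logic.

```python
# Hypothetical sketch: use payment data a provider already holds to
# pre-approve small-business loan offers. All thresholds are made up.

def preapproved_offer(monthly_volumes, min_months=6, offer_rate=0.10):
    """Return a pre-approved loan amount for a merchant, or None.

    monthly_volumes: the merchant's recent monthly payment volumes, oldest
    first. A merchant qualifies if we have enough history and volume is
    trending up (recent 3-month average above the earlier average)."""
    if len(monthly_volumes) < min_months:
        return None  # not enough on-us history to underwrite from
    recent = sum(monthly_volumes[-3:]) / 3
    earlier = sum(monthly_volumes[:-3]) / len(monthly_volumes[:-3])
    if recent <= earlier:
        return None  # flat or shrinking business: no proactive offer
    return round(recent * offer_rate, 2)

# A merchant with six months of growing volume gets an instant offer:
offer = preapproved_offer([8000, 8500, 9000, 10000, 11000, 12000])
```

The point of the sketch is the inversion: the lender never waits for an application, because the transaction data it already holds surfaces the opportunity first.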
Deepen your data. In much the same way that Tesla built a data advantage by building sensors into each of its vehicles, so too can banks deepen their data advantage by much more precisely measuring their customers’ behaviors. As the majority of banking interactions shift to digital channels, this becomes significantly easier. These channels are capable of capturing a wealth of detailed behavioral signals — from a customer’s typing rhythm to their gait to the angle at which they take a selfie. This data can then be used to train algorithms to do all kinds of interesting stuff, from continuous behavioral biometric identity authentication (which Mercator Advisory Group recently wrote will “irrevocably alter the authentication landscape”) to interpreting an anonymous applicant’s intentions during the account opening process (for more on this, check out Neuro-ID’s inaugural FinTech Friction Index Report).
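As a toy illustration of just one of those signals (typing rhythm), here is a sketch that reduces keystroke timings to two simple features and checks a new sample against an enrolled profile. Real behavioral-biometric systems use far richer features and learned models; every number and threshold here is invented.

```python
# Toy behavioral-biometrics sketch: typing rhythm as an identity signal.
# Feature set, tolerance, and all timings are invented for illustration.
from statistics import mean, stdev

def rhythm_features(key_times):
    """Reduce keystroke timestamps (seconds) to two features:
    the mean and standard deviation of the gaps between keystrokes."""
    gaps = [b - a for a, b in zip(key_times, key_times[1:])]
    return mean(gaps), stdev(gaps)

def matches_profile(key_times, profile, tolerance=2.0):
    """Crude check: does the new sample's mean inter-key gap fall within
    `tolerance` enrolled-standard-deviations of the enrolled mean gap?"""
    sample_mean, _ = rhythm_features(key_times)
    enrolled_mean, enrolled_std = profile
    return abs(sample_mean - enrolled_mean) <= tolerance * enrolled_std

# Enroll a user from a known-good session, then score new samples:
enrolled = rhythm_features([0.00, 0.18, 0.35, 0.55, 0.71, 0.90])
```

Because the signal is captured passively during a session, a check like this can run continuously rather than once at login, which is what makes behavioral biometrics interesting for authentication.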
Acquire more data. Maximizing ‘on-us data’ is, comparatively speaking, easy. The next challenge is acquiring ‘off-us data’ from third-parties that can further extend a bank’s data advantage. The emergence of open banking — helped along by regulations like the ones in the U.K. and Australia, standards organizations like FDX, and technology from data aggregators like Plaid and Finicity — is one path forward. The linking of on-us data with anonymized, aggregate-level off-us data for competitive benchmarking and strategy development is another. In all cases, the trick is to blend this off-us data (which is, by definition, not proprietary) into a bank’s on-us data in a way that generates uniquely valuable insights.
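One simple way to picture the competitive-benchmarking use case: rank an on-us metric against anonymized, aggregate-level peer figures. The metric and all peer numbers below are placeholder values; in practice the benchmark data would come from a consortium or aggregator feed.

```python
# Sketch of blending on-us data with anonymized off-us benchmarks.
# The metric and all peer figures are invented placeholders.

def percentile_rank(value, peer_values):
    """Share of peer observations at or below `value`, as a percentage."""
    at_or_below = sum(1 for p in peer_values if p <= value)
    return 100.0 * at_or_below / len(peer_values)

# On-us metric: our average deposit-account balance.
our_avg_balance = 4200.0
# Off-us benchmark: anonymized peer averages (no customer-level data shared).
peer_avg_balances = [3100.0, 3600.0, 3900.0, 4400.0,
                     4700.0, 5200.0, 5600.0, 6100.0]

rank = percentile_rank(our_avg_balance, peer_avg_balances)  # 37.5
```

The insight (here, "we sit in the bottom half of our peer group on balances") only exists because the non-proprietary off-us data was joined to the bank's own numbers.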
Adjust your business model. Banks that are committed to building a sustainable data advantage will, after picking all the low-hanging fruit, need to make some difficult choices. Generating proprietary data that can be used to train machine learning algorithms isn’t usually an investment that shows a return next quarter. A longer time horizon is necessary to evaluate a proposed change to a business model or strategy that might, on the surface, look like a bad bet.
The U.S. mortgage industry is a great example. Today, the vast majority of mortgages are resold on the secondary market, with their servicing rights passing from the originator to a new third party, which is responsible for collecting payments and administering the loans. This model is very effective at bringing additional liquidity into the mortgage market (via investors in the secondary market) and de-risking the process for each participant (if you’re good at underwriting, just focus on that).
Yet despite the efficiency of our current mortgage lending model, I wonder if banks are missing a broader opportunity. The way it is handled today, mortgage servicing is a simple and deeply impersonal process (you set up the auto-draft and never think about it again), and yet a mortgage is usually the biggest, most complex, and most enduring financial relationship that a consumer will enter into. It can literally last for a third of your life! Imagine how different mortgage servicing would be if banks looked at it not just as a fee-generating business, but as a 30-year-long opportunity to gather data on the behaviors and aspirations of an entire household.
The Bank of England recently released a report on the use of machine learning in the U.K. financial services market. According to BoE’s survey:
ML is increasingly being used in UK financial services. Two thirds of respondents report they already use it in some form. The median firm uses live ML applications in two business areas and this is expected to more than double within the next three years.
Ron Shevlin (@rshevlin) at Cornerstone Advisors describes the looming AI gap between megabanks and community banks and credit unions, driven by discrepancies in investment dollars and the availability of data (particularly on the marketing/revenue-enhancement side of the house).
Penny Crosman (@PennyCrosman) at American Banker takes a very detailed look at the ways in which AI is being woven into the DNA of TD Bank. The article touches on the challenges of managing data, Canadian consumers’ attitudes toward AI, and how TD is using AI in mortgage lending(!):
The AI engine examines the bank’s data for signs that a customer may be interested in buying a home…Knowing that a customer is likely to buy a house in six months, the bank rep might talk about savings products or debt capacity, to help get the customer set up for a mortgage.
Somewhat surprisingly, bank employees are excited to collaborate with algorithms, according to Accenture:
two-thirds of banking workers said they believe AI will create opportunities for their work. They expect it to make their jobs simpler (72 percent) and to improve their work-life balance (67 percent). Only 37 percent said it would threaten jobs in their organization, with 57 percent expecting it to expand their career prospects.
For a smarter, more detailed description of how machine learning changes the way we analyze data and write rules, take a look at this video from Google’s I/O 2019 conference: