The Data Dilemma: Is Digital Profiling Discriminatory in Banking Tech?
Digital profiling has become an increasingly prevalent practice in today's digital age. Think of the faces and behaviors of people captured by smart cameras in public spaces, vehicles on motorways, travel details of airline passengers, web browsing habits, tax records, or household energy consumption. All of this amounts to personal data collection.
This huge amount of digital data is used by governments and businesses for a variety of purposes: for example, better service provision and commercial market research, but also anticipating social security risks such as fraud, terrorism, and motor vehicle theft. However, experts are raising concerns about its potential invasion of privacy.
What Is Digital Profiling?
Digital profiling is the process of collecting and analyzing data from individuals’ online activities to create detailed profiles that can be used for targeted advertising, personalized recommendations, and even influencing behavior.
It involves the use of surveillance technologies to track users' digital footprints, including their browsing history, social media interactions, and online purchases. This data is then analyzed to identify patterns and preferences, allowing companies to tailor their marketing strategies to specific individuals. It has also become a useful tool for financial institutions, especially when deciding whether to issue loans to borrowers.
Banking Tech Today
Over the last few years, commercial banks and the new wave of digital-only financial institutions have embraced microfinancing. How do they recognize a reliable borrower? The answer is, again, digital data.
Digital data collection replaces conventional paper-based methods, expediting the collection and analysis of information and eliminating the risk of human errors that can occur during manual data entry. The shift to digital data collection has allowed for the real-time processing of information, which is vital in the expedited approval process of instant loans.
By gathering and analyzing the required financial information online, lenders can assess creditworthiness faster than traditional methods, which in turn helps them make faster decisions on loans - reducing the time it takes to approve loan applications from days to minutes.
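To make the idea concrete, here is a minimal sketch of how an automated, rule-based creditworthiness check might work. All field names, thresholds, and weights are illustrative assumptions for this article, not any lender's actual scoring model.

```python
# Minimal sketch of a rule-based creditworthiness check on digitally
# collected application data. Thresholds and weights are invented.

def assess_application(income: float, monthly_debt: float,
                       missed_payments: int, years_of_history: float) -> dict:
    """Score a loan application and return an approve/decline decision."""
    # Debt-to-income ratio: annual debt payments over annual income.
    dti = monthly_debt * 12 / income if income > 0 else float("inf")
    score = 100
    score -= min(missed_payments * 15, 60)   # penalize missed payments, capped
    score -= 30 if dti > 0.4 else 0          # penalize high debt-to-income
    score += min(years_of_history * 2, 10)   # reward a longer credit history
    return {"score": score, "approved": score >= 70}

print(assess_application(income=45_000, monthly_debt=800,
                         missed_payments=1, years_of_history=5))
# → {'score': 95, 'approved': True}
```

Because every input arrives as structured digital data, a decision like this runs in milliseconds, which is what makes the days-to-minutes approval times described above possible.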
Digital Lending and Risk
Fraudsters are increasingly targeting micro loan offers, using a combination of known attacks and techniques:

- Identity theft: opening a new account with stolen credentials, with no intention of repaying the debt.
- Going AWOL: exploiting lending organizations' need for frictionless experiences by opening accounts quickly, borrowing, and vanishing.
- Account takeover: phishing for existing account holders' information and exploiting it to borrow money in their name.
Digital Banking Could Lead to Discrimination
However, digital lending carries a subtler and arguably more dangerous risk: discrimination. Traditionally, credit scoring calculates a score based on your financial history. That score, calculated by various credit bureaus, determines whether you can get a loan.
Mick McAteer, a former board member at the Financial Conduct Authority (FCA), said lenders and insurers were gaining access to tools to more accurately identify “unprofitable” or costly customers, increasing the risk of exclusion for certain sections of society.
As reported by The Guardian, the issue has gained fresh attention after an algorithm used to set credit limits for the new Apple Card sparked claims of gender discrimination. David Heinemeier Hansson, a tech entrepreneur, said he had been offered 20 times more credit than his wife.
The Case Study of Latin-American Community in the US
According to a recent UC Berkeley report on bias in the digital banking sector, Latinx and African-American borrowers pay 7.9 and 3.6 basis points more for purchase and refinance mortgages respectively, costing them $765 million per year in aggregate in extra interest. FinTech algorithms also discriminate, but 40% less than face-to-face lenders.
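To see what a few basis points mean in money terms (one basis point is 0.01 percentage points), here is a rough, illustrative calculation on a hypothetical mortgage balance; the $250,000 figure is an assumption, not from the report.

```python
# Illustrative only: annual cost of a 7.9 basis-point rate premium
# on a hypothetical $250,000 mortgage balance (simple-interest approximation).
principal = 250_000
premium_bps = 7.9
extra_per_year = principal * premium_bps / 10_000  # 1 bp = 0.01% = 1/10,000
print(f"${extra_per_year:.2f} extra interest per year")
# → $197.50 extra interest per year
```

Small per-loan premiums like this, spread across millions of borrowers, are how the aggregate figure in the report accumulates.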
These results are consistent with both FinTech and non-FinTech lenders extracting monopoly rents in weaker competitive environments, or profiling borrowers with low shopping behavior. Such strategic pricing is not illegal per se, but under the law it cannot result in discrimination.
With algorithmic credit scoring, the nature of discrimination changes from being primarily concerned with human biases – racism and in-group/out-group bias – to being primarily concerned with illegitimate applications of statistical discrimination.
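One common way to probe a scoring system for this kind of statistical discrimination is a disparate-impact audit: compare approval rates across protected groups and flag large gaps. The sketch below uses the "four-fifths rule" heuristic on invented data; the group labels and records are purely illustrative.

```python
# Sketch of a basic fairness audit: compare approval rates across groups.
# Data and group labels are invented for illustration.
from collections import defaultdict

applications = [  # (group, approved?)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in applications:
    totals[group] += 1
    approved[group] += ok

rates = {g: approved[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio: {ratio:.2f}")
# A ratio below 0.8 is a common red flag under the four-fifths heuristic.
```

Audits like this only detect outcome gaps; they cannot by themselves distinguish legitimate risk-based pricing from illegitimate statistical discrimination, which is why regulators look at the underlying features as well.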
Challenges and Opportunities
The use of personality profiling in finance and insurance is in a state of flux. Technical capabilities are racing ahead while governments struggle to catch up with legislation such as the GDPR in the EU.
The latest Facebook scandal alerted the public to the scale and potential risks of profiling. Public attitudes are also changing as individuals become more aware of how data can be both used and abused. The evidence is clear, however, that individuals will continue to sign away their data privacy rights as long as the incentive is attractive enough.
In the UK, firms are not allowed to segment customers based on gender, race or physical ability, but it has become easier to identify customers based on technical analysis of data such as spending habits and income. The challenge for insurers and lenders is to strike a balance between technological possibility, market opportunity, legislation and public perception.