What Are Big Data Use Cases?


Written by Amir Halfon | 10 April 2012

So far I have been focusing on Big Data technology within the Financial Services context. Now, let’s turn our attention to the domain itself and look at actual use cases within the industry.

Precise Customer Targeting

Most banks are paying much closer attention to their customers these days than they have in the past, especially as they are being forced to spin off their proprietary trading businesses. What this means is that many of them are looking at ways to offer new, targeted services to their customers in order to reduce churn and increase customer engagement, and by extension the banks' revenue. In some ways this is no different from retailers wanting to fine-tune their cross-selling, up-selling and discounting strategies, and the attention that mobile wallets have been getting recently attests to the importance all parties involved are placing on these types of analytics (which obviously become even more powerful once location information is added to the mix).

Banks, however, have additional concerns, as their products all revolve around risk, and the ability to accurately assess the risk profile of an individual or a loan is therefore paramount when offering (or denying) services to a customer. The availability of troves of web data about almost any individual - including spending habits, risky behavior, etc. - provides valuable information that can target service offerings with a high degree of sophistication. Additionally, web data can signal customer life events (marriage, childbirth, house purchase, etc.) that introduce opportunities to offer even more targeted services. Add location information (available from almost every cell phone) and you can achieve almost surgical customer targeting. Again, this is something retailers and telco providers are also quite keen on.
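To make this a bit more concrete, here is a minimal Python sketch of the idea: joining a bank's own customer records with hypothetical external signals (an inferred life event, a recent location) to pick a next-best offer. Every field name, signal and rule below is an illustrative assumption, not an actual targeting engine.

    # Minimal sketch: combining internal customer records with hypothetical
    # external signals (inferred life events, location) to pick a targeted offer.
    # Every field name, signal, and rule here is an illustrative assumption.
    customers = {
        "c001": {"segment": "mass-affluent", "products": {"checking"}},
        "c002": {"segment": "retail", "products": {"checking", "mortgage"}},
    }
    external_signals = {
        "c001": {"life_event": "house_purchase", "location": "near new branch"},
        "c002": {"life_event": "childbirth", "location": None},
    }
    OFFERS = {
        "house_purchase": "mortgage",
        "childbirth": "education savings plan",
        "marriage": "joint account",
    }

    def next_best_offer(customer_id):
        """Map a detected life event to an offer, skipping products already held."""
        signal = external_signals.get(customer_id, {})
        offer = OFFERS.get(signal.get("life_event"))
        if offer and offer not in customers[customer_id]["products"]:
            return offer
        return None

    for cid in sorted(customers):
        print(cid, "->", next_best_offer(cid))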

Sentiment Analysis

Whether looking for broad economic indicators, market indicators, or sentiments concerning a specific organization or its stocks, there is obviously a trove of data on the web to be harvested, available from traditional as well as social media sources. While keyword analysis and entity extraction have been with us for a while, and are available from several data vendors, the availability of social media sources is relatively new, and has certainly captured the attention of many people looking to gauge public sentiment.

Sentiment analysis can be considered straightforward, as the data resides outside the firm and is therefore not bound by organizational boundaries. In fact, sentiment analysis is becoming so popular that a couple of hedge funds are basing their entire strategies on trading signals generated by tweeter feed analytics. While this is an extreme example, most firms at this point are using sentiment analysis to gauge public opinion about specific companies, markets or the economy as a whole.
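As a rough sketch of what sits underneath such analytics, consider the toy Python example below: it scores social-media messages against a tiny sentiment lexicon and aggregates the result per ticker. The lexicon, tickers and messages are made up for illustration; real systems rely on trained language models and far richer entity extraction.

    # Toy sketch: lexicon-based sentiment scoring of social-media messages,
    # aggregated per ticker. Lexicon, tickers and messages are illustrative only.
    import re
    from collections import defaultdict

    POSITIVE = {"beat", "upgrade", "strong", "bullish", "growth"}
    NEGATIVE = {"miss", "downgrade", "weak", "bearish", "writedown"}

    def score_message(text):
        """+1 for each positive lexicon hit, -1 for each negative one."""
        words = re.findall(r"[a-z]+", text.lower())
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    def aggregate_sentiment(messages):
        """Sum message scores per ticker from (ticker, text) pairs."""
        totals = defaultdict(int)
        for ticker, text in messages:
            totals[ticker] += score_message(text)
        return dict(totals)

    sample = [
        ("XYZ", "Analysts upgrade XYZ after strong earnings beat"),
        ("XYZ", "Bearish chatter about an XYZ writedown"),
        ("ABC", "ABC growth outlook weak, downgrade expected"),
    ]
    print(aggregate_sentiment(sample))  # {'XYZ': 1, 'ABC': -1}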

Predictive Analytics

These analytics are the bread and butter of all Capital Markets firms. They include correlation analysis, strategy back-testing, Monte Carlo simulations, etc., and are relevant for pricing and valuation as well as risk management and strategy development.

The large amounts of historical market data, and the speed at which new data sometimes needs to be evaluated (e.g. complex derivatives valuations), certainly make this a Big Data problem. And while traditionally these types of analytics have been processed by large compute grids, today more and more firms are looking at technologies that bring the compute workloads closer to the data in order to speed things up. These types of analytics have also mostly been executed using proprietary tools in the past, whereas today they are starting to move to open source frameworks such as R and Hadoop (see previous posts).
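To give a feel for the compute profile, here is a minimal Monte Carlo sketch in Python/NumPy: valuing a European call option under geometric Brownian motion. The parameters are arbitrary, and a real valuation engine would of course add path dependence, calibration and many more instruments; the point is that this same calculation, repeated across millions of paths and thousands of positions, quickly becomes a data-parallel workload.

    # Minimal Monte Carlo sketch: European call valuation under geometric
    # Brownian motion. Parameters are arbitrary illustrations, not market data.
    import numpy as np

    def monte_carlo_call(spot, strike, rate, vol, maturity, n_paths=100_000, seed=42):
        """Estimate the call price by simulating terminal prices and discounting."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n_paths)
        # Risk-neutral terminal price: S_T = S0 * exp((r - 0.5*vol^2)*T + vol*sqrt(T)*Z)
        terminal = spot * np.exp((rate - 0.5 * vol**2) * maturity
                                 + vol * np.sqrt(maturity) * z)
        payoff = np.maximum(terminal - strike, 0.0)
        return np.exp(-rate * maturity) * payoff.mean()

    print(monte_carlo_call(spot=100.0, strike=105.0, rate=0.02, vol=0.25, maturity=1.0))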

Risk Management

Broader risk calculations such as the aggregation of counterparty exposure or VaR also fall within the realm of Big Data, if only due to the mounting pressure to speed these up well beyond the capacity of current systems, while dealing with ever-growing volumes of data. New computing paradigms that parallelize data access as well as computation are gaining a lot of traction in this space. A somewhat related topic is the integration of risk and finance, as risk-adjusted returns and P&L require ever-increasing amounts of data to be integrated from multiple, uncorrelated sources across the firm, and accessed and analyzed on the fly.
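A minimal sketch of what such aggregation-style analytics look like in code is below: historical-simulation VaR computed by summing per-position P&L vectors across the book (the data is randomly generated purely for illustration). In practice the hard part is exactly what the paragraph above describes: pulling those position-level vectors from many sources and parallelizing both the data access and the aggregation.

    # Minimal sketch: historical-simulation VaR on an aggregated book.
    # Position-level P&L scenarios are random placeholders, not real data.
    import numpy as np

    def historical_var(portfolio_pnl, confidence=0.99):
        """VaR = loss at the given confidence level across historical scenarios."""
        return -np.quantile(portfolio_pnl, 1.0 - confidence)

    rng = np.random.default_rng(0)
    n_scenarios, n_positions = 500, 1_000
    position_pnl = rng.normal(0.0, 1_000.0, size=(n_scenarios, n_positions))
    portfolio_pnl = position_pnl.sum(axis=1)   # aggregate across all positions
    print(f"99% VaR: {historical_var(portfolio_pnl):,.0f}")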

Rogue Trading

Somewhat related to the topic of finance and accounting, this use case may not be as common, but it is considered frequently as we face the ever-increasing implications of rogue trading. Deep analytics that correlate accounting data with position tracking and order management systems can provide valuable insights that are not available using traditional data management tools. In a couple of well-known cases (UBS and Société Générale), inconsistencies between data managed by different systems could have raised red flags if found early on, and might have prevented at least part of the huge losses incurred by the affected firms. Here too, a lot of data needs to be crunched from multiple, inconsistent sources in a very dynamic way, requiring some of the technologies and patterns discussed in earlier posts.
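At its core, such a control is a reconciliation at scale: joining positions from the order-management side with the books-and-records side and flagging breaks. The hypothetical Python sketch below shows the shape of that check on a handful of made-up positions; the Big Data challenge is doing the same across millions of rows from inconsistent systems, continuously.

    # Minimal sketch: reconciling positions between an order-management system
    # and the accounting (books-and-records) system. Keys and quantities are fake.
    oms_positions = {"desk1/XYZ": 10_000, "desk1/ABC": -2_500, "desk2/XYZ": 7_000}
    accounting_positions = {"desk1/XYZ": 10_000, "desk1/ABC": -2_500, "desk2/XYZ": 1_000}

    def reconcile(oms, accounting, tolerance=0):
        """Yield (key, oms_qty, acct_qty) wherever the two systems disagree."""
        for key in sorted(set(oms) | set(accounting)):
            oms_qty, acct_qty = oms.get(key, 0), accounting.get(key, 0)
            if abs(oms_qty - acct_qty) > tolerance:
                yield key, oms_qty, acct_qty

    for key, oms_qty, acct_qty in reconcile(oms_positions, accounting_positions):
        print(f"BREAK {key}: OMS={oms_qty} accounting={acct_qty}")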

Fraud Detection

Fraud detection is also related, inasmuch as a similar point can be made: correlating data from multiple, unrelated sources has the potential to catch more fraudulent activities earlier than current methods. Consider, for instance, the potential of correlating point-of-sale data (available to any credit card issuer) with web behavior analysis (either on the bank's site or externally), and potentially with data from other financial institutions or service providers such as First Data or SWIFT, to detect suspicious activities.

This would go above and beyond the current Know Your Customer initiatives, watch list screening, and the application of fundamental rules. Correlating heterogeneous data sets has the potential to dramatically improve fraud detection, and could also significantly decrease the number of false positives (e.g. using a card while traveling).
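As a toy illustration of that last point, the Python sketch below correlates a card transaction with the customer's recent online-banking login locations: a foreign purchase is only flagged if no recent login came from the same country. Everything here (field names, the three-day window, the sample records) is an assumption for illustration, not an actual fraud model.

    # Toy sketch: correlating card transactions with recent login locations to
    # suppress false "card used abroad" alerts. All data and rules are illustrative.
    from datetime import datetime, timedelta

    transactions = [
        {"card": "1234", "country": "FR", "time": datetime(2012, 4, 9, 14, 0)},
        {"card": "5678", "country": "BR", "time": datetime(2012, 4, 9, 15, 0)},
    ]
    recent_logins = {
        # card -> (country of most recent online-banking session, when it happened)
        "1234": ("FR", datetime(2012, 4, 9, 9, 30)),
        "5678": ("US", datetime(2012, 4, 8, 20, 0)),
    }

    def is_suspicious(txn, logins, window=timedelta(days=3)):
        """Flag a transaction only if no recent login matches its country."""
        login = logins.get(txn["card"])
        if login and login[0] == txn["country"] and txn["time"] - login[1] <= window:
            return False   # web behaviour corroborates the location (likely a traveller)
        return True

    for txn in transactions:
        print(txn["card"], "SUSPICIOUS" if is_suspicious(txn, recent_logins) else "ok")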

Summary

This discussion started and ended with web data, as it still seems to dominate current Big Data discussions, and involves minimal friction when executing POCs and projects. This does not mean, however, that one should lose sight of the firm’s internal data, which is so central to the business. It is the combination of internal and external, structured and unstructured data, from disparate sources, that has the potential to truly change our industry. Oracle is one of the few companies that can provide the full spectrum of integrated technologies to cover this gamut and enable firms to tackle all aspects of the Big Data continuum.

Amir Halfon

Amir Halfon is a Senior Director of Technology for Financial Services at Oracle. He is in charge of developing Oracle’s industry-specific data management solutions and strategy, which target industry challenges such as Big Data analytics, on-demand risk management and timely regulatory compliance. Amir possesses a wealth of technical and industry experience, and is a frequent speaker at conferences such as SIBOS, SemTech, A-Team Insight Exchange and Oracle OpenWorld.
