Keynote Big Data in CeBIT 2014
1,521 views

Big data is just technology, a tool, but one which if used properly can provide great value to banks and customers alike.

Published in: Economy & Finance

  • Good afternoon. It’s a great opportunity to be here today at Code_N. Two reasons I am excited to be here talking about Big Data in Financial Services.
    First off, big data is such an interesting topic.
    I have been working in IT for two decades; more than half of that at GFT. The first decade was dominated by the creation of the Internet: revolutionary. I feel very lucky to have entered the field at that time. The second decade was dominated by mobility. Both have made profound changes to financial services.
    I think the next decade will be big data’s.
    Secondly, this is the right time to be talking about big data in financial services. A year ago, there would have been little to say, and a year from now, the horse will be out of the barn.
    Today, banks have begun to use big data technologies successfully, but have yet to reap their full benefits.
  • Across all industries, big data remains a new technology, with only 40% of companies increasing their capability and 27% investing in improved data storage. Most surprising is that only 4% have something up and running today.
    Few industries manage as much data as financial services. For this reason, it's not surprising that banks have shown great interest and have invested in big data technologies.
    As compared with other industries, interest, investment, and use are all higher.
    It is interesting to note that many companies have begun to test the waters with big data, but few have jumped in, putting systems into production.
    No doubt, it’s still early days.
  • So what are banks doing with big data?
    Retail banks serve their clients, so the more they know about them, the better. To do that, they need a “single view of the client”, gathering data from across the organization to better understand their needs and the services they currently purchase – in the end, to improve the customer relationship. Big data technologies provide a means to join large volumes of disparate data and make sense of them.
    The opportunities here are great – better customer satisfaction, improved cross sell, reduced customer churn; even the re-establishment of trust with the bank.
    In investment banking, the trade is the central piece of data. For each of the tens of millions of trades that a large investment bank has open at one time, the bank needs to know the related profit or loss and the amount of market and credit risk. With an exponential increase in the number of regulations, banks are struggling to report all the facts and figures required of them. Big data is also a way to do algorithmic (or automatic, rules-based) trading.
    Big data technologies allow the investment bank to manage more data and process it more quickly, thus streamlining operational procedures.
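As an illustration of the trade-as-central-datum idea above, here is a minimal sketch of aggregating per-desk P&L across open trades. The record layout, field names, and desk names are invented for this example, not any bank's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    trade_id: str
    desk: str            # hypothetical desk label, e.g. "fx", "equities"
    notional: float      # amount paid to open the position
    market_value: float  # current value of the position

def desk_pnl(trades):
    """Aggregate per-desk P&L (market value minus notional) over open trades."""
    totals = {}
    for t in trades:
        totals[t.desk] = totals.get(t.desk, 0.0) + (t.market_value - t.notional)
    return totals

trades = [
    Trade("T1", "fx", 100.0, 103.0),
    Trade("T2", "fx", 200.0, 198.0),
    Trade("T3", "equities", 50.0, 55.0),
]
print(desk_pnl(trades))  # {'fx': 1.0, 'equities': 5.0}
```

In a real bank the same aggregation would run over tens of millions of trades on a distributed platform; the logic per trade stays this simple, which is exactly why it parallelizes well.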
  • In investment banking, so far the focus has been to improve existing processes: more, bigger, faster.
    At GFT we have participated in a good number of interesting projects.
    Trade repository holding years of historical trades – distributed storage provided the capacity to access all this data; a petabyte is 10^15 bytes
    Trade event store unifying daily events about all trades within the bank – big data technologies were able to make this a real-time process
    Trade accounting – distributed processing makes short work of the calculation of 100s of millions of daily balances
    ETL – MapReduce transforms 750 million rows of data in 2 hours, replacing a huge overnight batch process
    Volcker – Forbes Magazine published an article in December, “Volcker Compliance: The Ultimate Big Data Challenge”; the Volcker Rule is a key piece of the US Dodd-Frank Wall Street Reform Act, which restricts proprietary trading by banks. We proved them right, using huge volumes of data to calculate the inventory age of each trade, which helps measure the level of proprietary trading.
    This is what I call an evolutionary step. The new technologies help do what they currently do, better.
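The MapReduce-based ETL mentioned in the list above can be sketched in miniature. This toy version runs the map, shuffle, and reduce phases in-process over a handful of rows to show the shape of the computation; the row schema is hypothetical, and a real job would distribute these phases across a Hadoop cluster.

```python
from collections import defaultdict

# Toy MapReduce: roll up daily balance movements per account.
# The row format ({"account": ..., "amount": ...}) is invented for illustration.

def map_phase(rows):
    """Emit (key, value) pairs: one (account, amount) per input row."""
    for row in rows:
        yield row["account"], row["amount"]

def shuffle(pairs):
    """Group all values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum each account's movements into a single balance."""
    return {account: sum(amounts) for account, amounts in groups.items()}

rows = [
    {"account": "A", "amount": 10.0},
    {"account": "B", "amount": 5.0},
    {"account": "A", "amount": -3.0},
]
balances = reduce_phase(shuffle(map_phase(rows)))
print(balances)  # {'A': 7.0, 'B': 5.0}
```

The point of the pattern is that map and reduce are independent per key, so 750 million rows can be split across many machines with no change to the per-row logic.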
  • These projects have proven big data’s capacity to provide real value to the banks:
    More efficient operational procedures
    Improved SLAs; timely reporting
    New insights based upon broader data sets
    But we haven’t really changed anything fundamental. So far, we have only done the same, better.
    To understand the next step, let’s take a quick look at how an investment bank works.
  • Don’t worry, this will be quick and painless.
    An investment bank is divided between front, middle and back office activities. The front office is where the traders and client managers work, capturing and initiating each trade. The middle office supports the front office, enriching the data on the trade and initiating the settlement process. The back office is where all the accounting and all the heavy calculations are done: P&L, etc.
    Data flows from front to back, from system to system; in fact, a reconciliation process runs constantly to ensure that these data flows are correct. But data also flows in from the sides, making things a bit more complicated.
    OK, that wasn’t so bad…
    This is the idea that I had of an investment bank when I started working at GFT.
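The front-to-back reconciliation mentioned above can be illustrated with a toy example: compare the trades held by two systems and report what is missing on either side, plus any value breaks. The record layout (trade id mapped to a single value) is invented for illustration; real reconciliations compare many fields per trade.

```python
def reconcile(front, back):
    """Compare two {trade_id: value} feeds from different systems.

    Returns trades missing from the back office, trades missing from the
    front office, and trades present in both but with differing values.
    """
    front_ids, back_ids = set(front), set(back)
    missing_in_back = front_ids - back_ids
    missing_in_front = back_ids - front_ids
    breaks = {tid for tid in front_ids & back_ids if front[tid] != back[tid]}
    return missing_in_back, missing_in_front, breaks

front = {"T1": 100.0, "T2": 200.0, "T3": 50.0}
back  = {"T1": 100.0, "T2": 201.0}

missing_in_back, missing_in_front, breaks = reconcile(front, back)
print(missing_in_back, breaks)  # {'T3'} {'T2'}
```

Every pair of connected systems in the spaghetti diagram needs a process like this, which is one reason point-to-point architectures become so expensive to run.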
  • This is the image of an investment bank that I had after one of my first projects. This ridiculous diagram shows the flows of data between systems for one single functional area: the calculation of market risk. Each line represents a different kind of data that has to flow from the trading systems at the top, to the back-office systems at the bottom. On each side are complementary systems which manage reference, static, and market data.
    The problem comes from the fact that an investment bank is highly siloed. Each business area trading a certain kind of product (foreign exchange, equities, fixed income, derivatives, etc.) has its own systems – investment banks have naturally grown this way. Point-to-point connections between systems are created.
    When a new product is traded (let’s say weather derivatives), the bank starts by trading off an Excel worksheet. From there, a user-built system emerges, and finally a fully supported IT system. Obviously, this creates a mess like this one.
    OK, there is only one appropriate response to this: there must be a better way.
  • If we diagram this problem schematically, it might look like this: many sources of data being processed for different uses, all independently of each other. What is wrong with this:
    . Data is duplicated
    . Processes are duplicated
    . Inconsistency and redundancy
    A simple change to our schema changes the picture: centralize the data and consolidate the processing. This way, each trade is represented once. The result of each process that affects the data is stored back into the same repository, so no later process has to repeat the same calculation.
    . Data is not duplicated
    . Processes are not duplicated
    . Consistency and no redundancy
    To manage all these data centrally, however, you need the right technologies. Big data enables this.
    How? And what is different from before? This requires huge storage and computing power and big data gives us this via distributed storage and distributed processing. One needs to parallelize and synchronize a vast amount of processing.
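The consolidated model described above can be sketched as a single trade store that caches each process's result, so later consumers reuse it instead of recomputing. The class and process names are hypothetical; a real implementation would sit on distributed storage and distributed processing rather than in-memory structures.

```python
class TradeStore:
    """Sketch of a central repository: trades held once, results shared."""

    def __init__(self, trades):
        self.trades = trades   # each trade represented exactly once
        self.results = {}      # process name -> cached result

    def run(self, name, process):
        # A process only runs if its result is not already stored;
        # every later consumer reads the same stored result.
        if name not in self.results:
            self.results[name] = process(self.trades)
        return self.results[name]

store = TradeStore([
    {"id": "T1", "value": 100.0},
    {"id": "T2", "value": -20.0},
])

total = store.run("total_value", lambda ts: sum(t["value"] for t in ts))
again = store.run("total_value", lambda ts: 0.0)  # cache hit: not recomputed
print(total, again)  # 80.0 80.0
```

This is the structural difference from the siloed picture: no duplicated data, no duplicated processing, and by construction no inconsistency between consumers of the same result.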
  • The objective here is to consolidate the data, rationalize the architecture, and gain a complete view of the trading activity of the bank.
    The benefits are clear: consistent data, no redundancy in processes, costs savings due to the decommissioning of systems, cost-effective data administration, and of course a complete data view.
    With all the data together, we can have a clearer, more accurate view of the business and are able to take better business decisions.
    I admit that the idea does not seem very revolutionary; it is what investment banks have been trying to do for a long time. But big data will make a huge difference in how investment banking IT is implemented and how, in the end, the banks are run.
  • So what is the future for big data in financial services?
    Let’s start with an obvious truth: there will be more data. Here is a graph taken from Oracle, obviously an interested party.
    In banks, the process of data consolidation will continue with a focus on producing a “Single view of the client” and “Consolidated view of the trade”.
    But more doesn’t mean better. In banking there is a strong emphasis on data quality. A lot of banks have created the role of “Chief Data Officer” to resolve issues of data quality.
    For banks to take meaning from these data, there will need to be a strong effort in data analytics – “un-hiding value from big data” – something which has only just begun to happen.
    Which returns me to what I said at the beginning of my talk: the time is right.
    Less than one bank in 10 has some big data solution in production. It’s still early days and the possibilities have yet to be tapped. It’s a great time to be working in IT.
    Thank you.
  • Transcript

    • 1. Big Data in Financial Services Solving Banking’s Biggest Problem with Big Data Karl Rieder, Code_N, March 11, 2014
    • 2. GFT, Page 2, 20.03.14 – Adoption of Big Data: FS vs. All Industries – “Considerably enhance big data capability” “Invest in improved data storage” “Have Hadoop solution in production”
    • 3. What are Banks Doing? Retail, Private, and Commercial Banks:  “Single view of the client”  Customer portfolio analysis  Social media analytics  Credit card fraud – improving customer service and sales; re-establishing trust. Investment Banks:  Calculating daily P&L  Measuring market and credit risk  Regulatory reporting  Algorithmic trading – streamlining operational procedures; improving efficiency
    • 4. Evolution – In investment banking, the current trend is evolutionary… improving existing processes – larger data stores and faster processing. In 2013, GFT built:  Trade repository: holds 10 billion trade states (6 petabytes)  Trade event store: records millions of daily events  Real-time trade processing and accounting system: calculates 100s of millions of daily balances  MapReduce-based ETL: processes 750 million rows of data  Volcker rule calculation: scours 4 petabytes of historical data
    • 5. Evolution – The evolutionary approach has been clearly beneficial for the banks:  More efficient operational procedures  Reduced timelines  New calculations and insights  Accurate (regulatory) reporting … but greater gains could be made from a more revolutionary approach
    • 6. A Simplified View of an Investment Bank
    • 7. The Silo Problem – A sadly common problem in all large investment banks…
    • 8. Revolution – The Solution: Source A, Source B, Source C, Source D, Source E → Process 1, Process 2, Process 3, Process 4 → Result 1, Result 2, Result 3, Result 4
    • 9. Revolution – The Solution: Source A, Source B, Source C, Source D, Source E → Data and Processing Consolidation (distributed storage, distributed processing) → Result 1, Result 2, Result 3, Result 4
    • 10. Revolution – Objectives and Benefits. Objectives:  Data consolidation  Architectural rationalization  Analysis of full data sets  Extraction of new information and insights. Benefits:  Consistent, golden-sourced data – single source of truth  Elimination of system and data redundancy  Reduction of operational costs  Cost-effective data administration  Unified, complete, and aggregated view of business  Improved analytical capability. A more complete view of banking activity, at a lower price!
    • 11. The Future:  Lots more data  Data quality is critical  Value comes from analytics. The time is now!
    • 12. GFT Page 1220.03.14© Copyright GFT Technologies AG, 2014 Thank you Karl Rieder GFT IT Consulting, S.L.U., Avenida de la Generalitat, 163–167 08174 Sant Cugat del Vallès España T +34 93 172 7071 M +34 649 845 130 karl.rieder@gft.com