Primatics Financial - Parallel, High Throughput Risk Calculations On The Cloud
Published in: Technology, Economy & Finance
1 Comment
  • I have a question about cloud computing in general, and I think it applies to the financial world more than any...

    It seems to me that cloud computing is to the digital network world what nuclear power plants are to the energy world. There is no question that it is cheaper, more efficient, and more sustainable in the long term than the local networks and coal plants of the past. But aren't they flawed in terms of the one-in-a-billion accident? What protects the information if the cloud fails for an extended amount of time?

    Is it like the nuclear plants whose backup diesel generators are in worse shape than the rest of the plant at the moment they are needed? It would seem far too easy to outgrow the brick-and-mortar backup.

    I know Microsoft has already had an incident or two. Granted, these are the early days, but it has also only been a couple of years. What happens when we get a big one?

    It seems like the financial records of an already fragile economy need to be grounded at some level.

    Thanks

    Finance Guy
  • Transcript

    • 1. Evolv™ Risk: Cloud Computing Use Case for JavaOne 2009. Nati Shalom, CTO, GigaSpaces (Natishalom.typepad.com, Twitter.com/natishalom); Francis de la Cruz, Manager, and Argyn Kuketayev, Senior Consultant, Primatics Financial
    • 2. Use Case Outline
      • Business Challenge
        • Client Needs and Application Evolution
        • Business Needs
      • Architectural Challenges
        • Risk 1.5 Application Stack
        • Risk 2.0 Application Stack
        • Processing Unit diagram
        • Lessons Learned
    • 3. Business Case
      • Many mid-size regional banks with sizeable mortgage portfolios
      • Outsource sophisticated mortgage credit risk modeling
      • Outsource IT infrastructure to perform costly computations, suited for on-demand PaaS
    • 4. Primatics’ Story in Cloud Computing
      • Company with roots in financial services and IT
      • Multiple clients in financial services, 2 products
      • 2007
        • EVOLV Risk V1: Web-based hosted solution, a regional bank
      • 2008
        • V2: embrace Cloud computing, Amazon EC2
        • V1.5: big national bank M&A, portfolio valuation, AWS EC2
      • 2009
        • V2 “reloaded”: ground-up redesign, GigaSpaces
    • 5. Business Needs: Accounting-Cashflows Risk Analytics Platform
      • Stochastic (Monte Carlo) simulation: for each loan, run 100 paths (360 months each)
      • Input
        • Loan portfolio (~100 MB): 100k loans x 1 KB of data each
        • Forecast scenarios (~100 MB): 1 MB per simulation path
        • Market history (<1 MB)
        • Assumptions, settings, dials (<1 KB)
      • Output
        • Cashflows (~3 GB): 100k loans x 360 months x 10 doubles
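The sizing figures on slide 5 can be checked with quick arithmetic. The following minimal sketch (class and method names are illustrative, not from the deck) reproduces the ~100 MB input and ~3 GB output estimates from the stated per-loan figures:

```java
// Back-of-the-envelope sizing for one Evolv Risk job, using the figures
// from the slide: 100k loans, 1 KB of data per loan, and cashflow output
// of 360 months x 10 doubles per loan. Names here are illustrative.
public class JobSizing {
    static final long LOANS = 100_000;
    static final long MONTHS = 360;
    static final long DOUBLES_PER_LOAN_MONTH = 10;

    /** Input portfolio size in bytes: 100k loans x 1 KB each. */
    static long inputBytes() {
        return LOANS * 1_024;
    }

    /** Output cashflow size in bytes: loans x months x doubles x 8 bytes. */
    static long outputBytes() {
        return LOANS * MONTHS * DOUBLES_PER_LOAN_MONTH * 8;
    }

    public static void main(String[] args) {
        System.out.printf("input  ~%.1f MB%n", inputBytes() / 1e6);  // ~102.4 MB
        System.out.printf("output ~%.2f GB%n", outputBytes() / 1e9); // ~2.88 GB
    }
}
```

The 2.88 GB result is consistent with the ~3 GB cashflow output quoted on the slide.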
    • 6. Business Needs: Scale (diagram: Risk Analytics Platform)
      • Many jobs by the same user
      • Many users in a firm
      • Many firms
      • More loans, more simulations, longer time horizons
    • 7. Overarching Challenges
      • Quantitative financial applications (models) with heavy CPU load
      • Large volumes of data I/O
      • Infrastructure to support growing number of clients, and users
      • Varying load on the system, with spikes of heavy load
      • Avoid lock-in to cloud vendors
    • 8. Evolv Risk 1.5 vs 2.0 Software Stack (flattened stack diagram)
      • Shared application layer: web interface (SSL), reporting warehouse, model API, scenarios, loss & valuation, loans & securities, pluggable client models, custom templates, validation, model validation & statistics, cohorts, HPI rates, interest rates, market rates, discount rates, loss forecasts, valuations, OLAP reports
      • Risk 1.5: J2EE platform with a prior grid framework, Sun JMS broker (MQ), in-house discovery and persistence manager, inbuilt models; scaling, provisioning, monitoring, billing, AWS registration, and AWS management were manual processes; auto throttling, space-based architecture, SLA, and node management were not in the architecture
      • Risk 2.0: GigaSpaces space-based architecture with provisioning, monitoring, job scheduling, failover/recovery, load balancing, auto throttling, SLA, node management, and AWS billing/registration/management
    • 9. Performance, Scalability, Data Security, Data Integrity (architecture diagram)
    • 10. Performance, Scalability, Data Security, Data Integrity (architecture diagram: Primatics Gateway, AWS provisioning)
    • 11. Lessons Learned : Cloud Integration
      • Provisioning
        • Configuration API vs. Using Scripts
      • Auto Throttling vs. Manual Throttling
      • Failover and Recovery provided by framework
      • Monitoring via the GS Console and API vs. an in-house application
      • The infrastructure took 5 weeks to develop vs. 8 weeks
      • Achieving linear scalability was a matter of creating Processing Units.
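The processing-unit point above can be sketched as follows: partition the loan portfolio by a routing key so each partition is an independent unit of work. GigaSpaces handles the routing and deployment of such units; here plain threads stand in for the grid so the sketch is self-contained, and all names are illustrative rather than from the deck:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the processing-unit idea behind the linear-scalability point:
// loans are hash-partitioned by id, and each partition is computed
// independently, so adding units adds capacity roughly linearly.
public class PartitionedRisk {

    /** Map a loan id to one of nPartitions, like a space routing key. */
    static int partitionOf(long loanId, int nPartitions) {
        return (Long.hashCode(loanId) & 0x7fffffff) % nPartitions;
    }

    /** Split loan ids into per-partition work lists. */
    static List<List<Long>> partition(List<Long> loanIds, int nPartitions) {
        List<List<Long>> parts = new ArrayList<>();
        for (int i = 0; i < nPartitions; i++) parts.add(new ArrayList<>());
        for (long id : loanIds) parts.get(partitionOf(id, nPartitions)).add(id);
        return parts;
    }

    public static void main(String[] args) throws Exception {
        List<Long> loans = new ArrayList<>();
        for (long i = 0; i < 10_000; i++) loans.add(i);

        int n = 4; // number of processing units
        ExecutorService pool = Executors.newFixedThreadPool(n);
        List<Future<Double>> results = new ArrayList<>();
        for (List<Long> part : partition(loans, n)) {
            // Each unit prices its own loans; the sum is a stand-in for a
            // real cashflow simulation over that partition.
            results.add(pool.submit(() -> part.stream().mapToDouble(id -> id * 0.01).sum()));
        }
        double total = 0;
        for (Future<Double> f : results) total += f.get();
        pool.shutdown();
        System.out.printf("priced %d loans across %d units, total %.2f%n", loans.size(), n, total);
    }
}
```

Because no state is shared across partitions, the same split works whether the units run as threads, processes, or grid nodes.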
    • 12. Lessons Learned : Data and Processing
      • Data Distribution
        • Uses the space-based architecture vs. an in-house JMS broker and discovery service to push data to the grid
      • Deployment
        • Uses the GS API vs. pushing data to the grid or using machine images
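The space-vs-broker contrast on this slide can be illustrated with a toy master/worker exchange in the space-based style: the master writes task objects into a shared space, a worker takes them and writes results back. A `BlockingQueue` stands in for the space so the sketch runs without GigaSpaces, and all names are made up for illustration:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy space-style exchange: instead of routing messages through a broker,
// master and worker decouple by reading and writing shared objects.
public class SpaceStyleExchange {
    static final class Task {
        final long loanId;
        Task(long loanId) { this.loanId = loanId; }
    }
    static final class Result {
        final long loanId;
        final double value;
        Result(long loanId, double value) { this.loanId = loanId; this.value = value; }
        @Override public String toString() { return "Result{loanId=" + loanId + ", value=" + value + "}"; }
    }

    /** Toy valuation standing in for the real loan model. */
    static double price(long loanId) {
        return loanId * 0.01;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Task> taskSpace = new LinkedBlockingQueue<>();
        BlockingQueue<Result> resultSpace = new LinkedBlockingQueue<>();

        // Worker: take a task from the "space", compute, write the result back.
        Thread worker = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) {
                    Task t = taskSpace.take();
                    resultSpace.put(new Result(t.loanId, price(t.loanId)));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // Master: write tasks into the space, then collect results.
        for (long id = 1; id <= 3; id++) taskSpace.put(new Task(id));
        for (int i = 0; i < 3; i++) System.out.println(resultSpace.take());
        worker.join();
    }
}
```

The point of the pattern is that neither side knows about the other's location or lifecycle, which is what removed the in-house broker and discovery code mentioned above.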
    • 13. Lessons Learned:
      • Problems are now high-class problems
        • The GigaSpaces framework spared us from low-level problems
        • The code we write now answers ‘why’, not ‘how’
      • GigaSpaces:
        • Provisioning out of the box
        • Failover and Recovery much easier
        • Monitoring through API and Console
        • SLA out of the box
        • Integrated into the Cloud
        • Significantly lowers the barrier to entry for cloud computing
