Computational Finance with Map-Reduce in Scala
Presentation Transcript

    • Computational Finance with Map-Reduce in Scala
      Jianfeng Zhang
    • Abstract
      This paper presents results of computational finance experiments using map-reduce in Scala. The authors observe super-linear speedup, super-efficiency, and evidence of a high degree of compute and I/O overlap in the median runtimes, using “naïve,” memory-bound, fine-grain, and coarse-grain parallel algorithms on three different hardware platforms.
    • Computational finance is a multidisciplinary field at the crossroads of mathematical finance and computer science. The emphasis is on development and utilization of numerically intensive methods for pricing, risk analysis, forecasting, automated trading, and other applications.
    • Map-reduce is a framework for speeding up data analysis using distributed computing. While map-reduce has been applied to many different problem domains, most of a data-intensive nature, almost no attention has been given to opportunities in computational finance, which involves a mixture of floating-point and data-intensive operations.
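    As an illustration of the pattern (not the paper's implementation), map-reduce can be sketched in plain Scala: map each record to a key-value pair, group by key, and reduce each group. The `Trade` record type and the symbols used here are hypothetical.

    ```scala
    // Hedged sketch of the map-reduce pattern in plain Scala.
    // Trade is a hypothetical record type, not taken from the paper.
    case class Trade(symbol: String, notional: Double)

    // Generic map-reduce over an in-memory collection: map each record to a
    // (key, value) pair, group by key, then reduce each group's values.
    def mapReduce[A, K, V](records: Seq[A])(mapper: A => (K, V))(reducer: (V, V) => V): Map[K, V] =
      records
        .map(mapper)
        .groupBy(_._1)
        .map { case (k, kvs) => k -> kvs.map(_._2).reduce(reducer) }

    // Usage: total notional per symbol.
    val trades = Seq(Trade("IBM", 100.0), Trade("IBM", 50.0), Trade("MSFT", 25.0))
    val totals = mapReduce(trades)(t => (t.symbol, t.notional))(_ + _)
    // totals("IBM") == 150.0
    ```

    A distributed framework such as Hadoop adds sharding, shuffling, and fault tolerance around this same map/group/reduce shape.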
    • Scala is a modern, high-level Java Virtual Machine (JVM) language that blends object-oriented and functional programming styles with actors, a shared-nothing model of concurrent computation inspired by physics theories. Proponents have argued that Scala's language features are suited to solving large-scale computing tasks on inexpensive, commodity multicore and multiprocessor platforms in an expressive manner that avoids the concurrency hazards and runtime inefficiencies of shared, mutable-state programs. Indeed, the function-oriented style of Scala would seem to lend itself precisely to coding the mathematical expressions that characterize quantitative operations.
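    For example, the textbook present value of a fixed-coupon bond, PV = Σ_{t=1..n} c/(1+r)^t + F/(1+r)^n, maps almost directly onto a single functional expression. The function and parameter names below are illustrative, not the paper's.

    ```scala
    // Textbook bond present value written functionally.
    // c = coupon per period, r = discount rate per period,
    // n = number of periods, face = face value (all names illustrative).
    def presentValue(c: Double, r: Double, n: Int, face: Double): Double =
      (1 to n).map(t => c / math.pow(1 + r, t)).sum + face / math.pow(1 + r, n)
    ```

    The code is nearly a transliteration of the formula: the summation becomes a `map` over `1 to n` followed by `sum`, with no mutable accumulator.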
    • Related work
      • The literature shows enduring interest in speeding up computational finance algorithms.
      • The literature furthermore indicates that map-reduce is a widely accepted approach to speeding up computation for various problem classes.
    • Method
      • Bond pricing theory
      • Bond generation algorithm
      • I/O design
      • Pricing algorithms
      • Serial algorithms
      • Parallel naïve algorithm
      • Parallel coarse-grain algorithm
      • Parallel fine-grain algorithm
    • Experimental design
      • Environment
      • Trials
      • Speed-up calculations
    • Environment
    • Speed-up calculations
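    The slide itself carries no formulas, so the standard definitions are assumed here: speed-up S(n) = T(1)/T(n) and efficiency E(n) = S(n)/n, where T(1) is the serial time and T(n) the time on n workers; S(n) > n (equivalently E(n) > 1) is super-linear, or super-efficient.

    ```scala
    // Standard speed-up and efficiency definitions (assumed, since the
    // paper's exact formulas are not reproduced on this slide).
    // Both times must be in the same units.
    def speedup(serialTime: Double, parallelTime: Double): Double =
      serialTime / parallelTime

    def efficiency(serialTime: Double, parallelTime: Double, workers: Int): Double =
      speedup(serialTime, parallelTime) / workers

    // e.g. a 100 s serial run that finishes in 20 s on 4 workers:
    val s = speedup(100.0, 20.0)        // 5.0
    val e = efficiency(100.0, 20.0, 4)  // 1.25, i.e. super-efficient
    ```

    On this definition, the super-linearity reported later in the results means the parallel runs beat the serial baseline by more than the worker count alone would predict.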
    • Results
      • Parallel naïve results
      • Parallel fine-grain results
      • Parallel coarse-grain results
    • Parallel naïve results
    • Parallel fine-grain results
    • Parallel coarse-grain results
    • The naïve algorithm appears to be the best performing overall end-to-end, achieving super-linearity and super-efficiency at certain levels of u, depending on the processor type. For instance, the more modern processors, the W3540 and i5, realize super-linearity and super-efficiency for u as small as 64.
    • I/O is broadly sub-linear, which, by itself, is not surprising. However, I/O does not appear to be a processing bottleneck, since the difference between compute and memory-bound compute plus memory-bound I/O over the range of u appears to be insignificant.
    • Conclusion
      • The authors would like to explore changes to H-S to support multiprocessor parallelism.
      • There are open questions on how to “shard” or parallelize the data.
      • Scala's parallel collections were briefly mentioned as a further direction.
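    Scala's parallel collections, mentioned in the conclusion, turn a serial collection pipeline into a multicore one with a single `.par` call. A minimal sketch, using a hypothetical zero-coupon pricing function (note: on Scala 2.13+, `.par` requires the separate `scala-parallel-collections` module and its `CollectionConverters` import; on 2.12 and earlier it is built in):

    ```scala
    // Minimal parallel-collections sketch: price many hypothetical
    // zero-coupon bonds and sum the results. On Scala 2.13+ this needs
    // the scala-parallel-collections dependency and
    // `import scala.collection.parallel.CollectionConverters._`.
    def zeroCouponPrice(face: Double, r: Double, n: Int): Double =
      face / math.pow(1 + r, n)

    // Illustrative workload: 1000 bonds with maturities cycling 1..30.
    val maturities = Vector.tabulate(1000)(i => 1 + i % 30)

    // The serial and parallel versions differ only by `.par`.
    val serialTotal   = maturities.map(n => zeroCouponPrice(100.0, 0.05, n)).sum
    val parallelTotal = maturities.par.map(n => zeroCouponPrice(100.0, 0.05, n)).sum
    ```

    Because the pricing function is pure and the collection is immutable, the parallel version needs no locks, which is exactly the shared-nothing argument made earlier in the deck.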