3. Abstract
This paper presents results of computational
finance experiments using map-reduce in Scala.
The authors observe super-linear speedup, super-efficiency, and evidence of a high degree of
compute and I/O overlap in the median
runtimes, using “naïve,” memory-bound, fine-grain, and coarse-grain parallel algorithms on
three different hardware platforms.
4. Computational finance is a multidisciplinary field
at the crossroads of mathematical finance and
computer science. The emphasis is on the
development and use of numerically
intensive methods for pricing, risk analysis,
forecasting, automated trading, and other
applications.
6. Map-reduce is a framework for speeding up data analysis using distributed computing.
While map-reduce has been applied to different
problem domains, many of a data-intensive
nature, almost no attention has been given to
opportunities in computational finance, which
mixes floating-point and data-intensive
operations.
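As a minimal illustration of the map-reduce pattern itself (the data and factor below are hypothetical, not taken from the paper): map applies a per-item computation, and reduce combines the partial results.

```scala
// Minimal map-reduce sketch over hypothetical bond prices.
// map: apply a per-item computation; reduce: combine partial results.
object MapReduceSketch {
  def main(args: Array[String]): Unit = {
    val prices = Vector(101.5, 98.2, 103.7)      // hypothetical inputs
    val discounted = prices.map(p => p * 0.97)   // map step
    val total = discounted.reduce(_ + _)         // reduce step
    println(f"total = $total%.3f")
  }
}
```

The same map/reduce split is what lets the map step be farmed out to many workers while the reduce step aggregates their results.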
8. Scala is a modern, high-level Java Virtual Machine (JVM)
language that blends object-oriented and functional
programming styles with actors, a shared-nothing model
of concurrent computation inspired by physics theories.
Proponents have argued that Scala's language features are
suited to solving large-scale computing tasks on
inexpensive, commodity multicore and multiprocessor
platforms in an expressive manner that avoids the
concurrency hazards and runtime inefficiencies of shared,
mutable-state programs. Indeed, the functional
style of Scala would seem to lend itself precisely to
coding the mathematical expressions that characterize
quantitative operations.
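As one sketch of that claim: the present value of a coupon bond, PV = Σₜ c/(1+r)ᵗ + F/(1+r)ⁿ, maps almost directly onto Scala's collection operations. The parameters below are hypothetical, not drawn from the paper.

```scala
// Present value of a coupon bond in a functional style.
// PV = sum_{t=1..n} c/(1+r)^t + F/(1+r)^n  (hypothetical parameters).
object BondPV {
  def price(face: Double, coupon: Double, r: Double, n: Int): Double =
    (1 to n).map(t => coupon / math.pow(1 + r, t)).sum +
      face / math.pow(1 + r, n)

  def main(args: Array[String]): Unit =
    // A bond whose coupon rate equals the discount rate prices at par.
    println(f"${price(1000.0, 50.0, 0.05, 10)}%.2f")
}
```

The one-to-one correspondence between the summation and the `map`/`sum` pipeline is the kind of expressiveness the proponents' argument appeals to.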
10. Related work
• The literature shows enduring interest in
speeding up computational finance
algorithms.
• The literature further indicates that map-reduce is a widely accepted approach to
speeding up computation for various problem
classes.
11. Method
• Bond pricing theory
• Bond generation algorithm
• I/O design
• Pricing algorithms
• Serial algorithms
• Parallel naïve algorithm
• Parallel coarse-grain algorithm
• Parallel fine-grain algorithm
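The serial and naïve parallel variants listed above can be sketched as follows, under the assumption of a hypothetical zero-coupon pricing kernel (the paper's actual algorithms are not reproduced here). The naïve version simply spawns one task per bond, with none of the batching that the coarse- and fine-grain variants would add.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

// Serial vs. naïve parallel pricing over a hypothetical portfolio.
object NaiveParallel {
  // Hypothetical pricing kernel: zero-coupon bond, 10 years to maturity.
  def price(r: Double): Double = 1000.0 / math.pow(1 + r, 10)

  def main(args: Array[String]): Unit = {
    val rates = (1 to 8).map(_ * 0.01)            // hypothetical yields, 1%..8%
    val serial = rates.map(price).sum             // serial baseline
    // Naïve parallelism: one Future per bond, no batching.
    val parallel = Await.result(
      Future.sequence(rates.map(r => Future(price(r)))),
      Duration.Inf).sum
    assert(math.abs(serial - parallel) < 1e-9)    // same answer either way
    println(f"portfolio = $serial%.2f")
  }
}
```

A coarse-grain variant would instead partition `rates` into one chunk per core; a fine-grain variant would split the work inside each pricing call.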
21. • The naïve algorithm appears to be the best-performing
overall end-to-end, achieving super-linearity and
super-efficiency at levels of u that depend on the
processor type. For instance, the more modern
processors, the W3540 and i5, realize super-linearity
and super-efficiency for u as small as 64.
• I/O is broadly sub-linear which, by itself, is not
surprising. However, I/O does not appear to be a
processing bottleneck, since the difference between
compute and memory-bound compute plus
memory-bound I/O over the range of u appears to be
insignificant.
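For reference, the measures behind these claims are speedup S = T₁/Tₚ and efficiency E = S/p, where super-linearity means S > p and super-efficiency means E > 1. The timings below are hypothetical, not the paper's measurements.

```scala
// Speedup S = T1 / Tp and efficiency E = S / p (hypothetical timings).
object SpeedupDemo {
  def speedup(t1: Double, tp: Double): Double = t1 / tp
  def efficiency(t1: Double, tp: Double, p: Int): Double = speedup(t1, tp) / p

  def main(args: Array[String]): Unit = {
    val (t1, tp, p) = (120.0, 25.0, 4) // hypothetical median runtimes (s), 4 cores
    val s = speedup(t1, tp)            // 4.8 > 4  => super-linear
    val e = efficiency(t1, tp, p)      // 1.2 > 1  => super-efficient
    println(f"S = $s%.1f, E = $e%.1f")
  }
}
```

Super-linear speedup on p cores typically signals cache effects: each core's working set shrinks enough to fit in faster memory.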
22. Conclusion
• The authors would like to explore changes to H-S to
support multiprocessor parallelism.
• There are open questions on how to “shard”
or parallelize the data.
• The paper briefly mentions Scala’s parallel
collections.