Scientific Computation on JRuby
Prasun Anand
github.com/prasunanand
Optimizing scientific computation programs on JRuby.

  1. Scientific Computation on JRuby
     Prasun Anand
     github.com/prasunanand
  2. NMatrix
     NMatrix is SciRuby's numerical matrix core, implemented for dense and sparse
     matrices. NMatrix is built on ATLAS/CBLAS/CLAPACK and standard LAPACK; NMatrix
     for JRuby has been implemented using Apache Commons Math.
  3. y = NMatrix.new([2,3], [1,2,3,4,5,6], dtype: :float32, stype: :dense)
     pp y
     [
       [1,2,3]
       [4,5,6]
     ]
  4. Optimization and Speed
     Memory Management
     Chaining Java methods
  5. Benchmark

     require 'java'
     require 'benchmark'

     b = Java::double[15_000, 15_000].new
     c = Java::double[15_000, 15_000].new
     # b = Array.new(25000000)
     index = 0
     puts Benchmark.measure {
       (0...15000).each do |i|
         (0...15000).each do |j|
           b[i][j] = index
           index += 1
         end
       end
     }
     index = 0
     puts Benchmark.measure {
       (0...15000).each do |i|
         (0...15000).each do |j|
           c[i][j] = b[i][j]
           index += 1
         end
       end
     }
  6. # prints
     # 43.260000   3.250000  46.510000 ( 39.606356)
     # 67.790000   0.070000  67.860000 ( 65.126546)
     # RAM consumed => 5.4GB
  7. Chaining Java methods
     Large arrays. Converting a flat_array to a two-dimensional matrix. Coercion.
     Each Fixnum object can take anywhere from 64 to 128 bytes of memory, depending
     on the platform and how the JVM lays the objects out. So even at the low end,
     100M numeric elements would be 6.4GB of objects, and the Array itself will be
     at least 100M * 4-8 bytes, or at least 400MB. That is a difference of up to
     7.5GB compared to MRI. (A flat-array sketch follows the transcript.)
  8. Memory Management
     Overcopying
     Type Guessing
     Autoboxing
     (A type-guessing sketch follows the transcript.)
  9. Speed
     Never upset the Garbage Collector.
     Speed improved roughly 1000 times (from 25s to 0.022s) :).
     (A bulk-copy sketch follows the transcript.)
  10. Mixed-models
      Mixed models are statistical models which predict the value of a response
      variable as a result of fixed and random effects. The mixed-models library is
      like a Ruby version of lme4 (an R package), and it has been successfully
      ported to JRuby too :). (The model equation is written out after the
      transcript.)
  11. Matrix Multiplication
  12. Matrix Addition
  13. Matrix Subtraction
      (An NMatrix usage sketch for these three operations follows the transcript.)
  14. Thank You
      GitHub: prasunanand
      Twitter: @prasun_anand
      Blog: prasunanand.com
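
A minimal JRuby sketch of the flat_array-to-matrix idea from slide 7. The helper
name to_java_2d is hypothetical (it is not an NMatrix method); the point is that
the data ends up in a primitive Java double[][] instead of a Ruby Array holding
one boxed object per element.

    require 'java'

    # Hypothetical helper: copy a flat Ruby array into a 2-D Java primitive
    # array, so the JVM stores unboxed doubles rather than one object per element.
    def to_java_2d(flat_array, rows, cols)
      out = Java::double[rows, cols].new
      rows.times do |i|
        cols.times do |j|
          out[i][j] = flat_array[i * cols + j]
        end
      end
      out
    end

    m = to_java_2d([1, 2, 3, 4, 5, 6], 2, 3)
    puts m[1][2]   # => 6.0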
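
A hedged sketch of the "Type Guessing" point on slide 8 (guess_dtype is a
hypothetical name, not NMatrix's internal API): decide on a dtype by inspecting
the Ruby elements, so the backing store can be a primitive Java array and
per-element autoboxing is avoided.

    # Hypothetical helper: pick a dtype from the Ruby elements so storage can be
    # a primitive Java long[]/double[] instead of boxed objects.
    def guess_dtype(elements)
      elements.all? { |e| e.is_a?(Integer) } ? :int64 : :float64
    end

    guess_dtype([1, 2, 3])     # => :int64
    guess_dtype([1, 2.5, 3])   # => :float64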
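
One way to read "never upset the Garbage Collector" on slide 9. This is an
illustrative sketch, not the talk's actual change: a per-element Ruby loop does
interpreter and GC work on every iteration, while a single bulk copy stays on
the JVM side.

    require 'java'
    require 'benchmark'

    n   = 10_000_000
    src = Java::double[n].new
    dst = Java::double[n].new

    # Slow path: one Ruby block call per element.
    puts Benchmark.measure { n.times { |i| dst[i] = src[i] } }

    # Fast path: a single JVM-side bulk copy, no per-element Ruby work.
    puts Benchmark.measure { java.lang.System.arraycopy(src, 0, dst, 0, n) }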
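
For slide 10, the standard linear mixed model behind "fixed and random effects"
is

    y = Xβ + Zb + ε,   b ~ N(0, G),   ε ~ N(0, R)

where β holds the fixed effects, b the random effects, and ε the residual error.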
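
The operations on slides 11-13 correspond to the following NMatrix calls; a small
usage sketch, assuming the nmatrix gem is installed (the 2x2 values are arbitrary):

    require 'nmatrix'

    a = NMatrix.new([2, 2], [1, 2, 3, 4], dtype: :float64)
    b = NMatrix.new([2, 2], [5, 6, 7, 8], dtype: :float64)

    a.dot(b)   # matrix multiplication
    a + b      # matrix addition
    a - b      # matrix subtraction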
