From Java Threads to Lambdas
Andrii Rodionov
@AndriiRodionov
http://jug.ua/
One of the main motivations for adding lambda expressions to Java 8 was to simplify writing multithreaded programs. Using a simple computational task as an example, I will show the evolution of Java's concurrency tools, starting with Java Threads and ending with lambda expressions and the Stream API, and in the end we will see what came of it all.

  1. From Java Threads to Lambdas. Andrii Rodionov, @AndriiRodionov, http://jug.ua/
  2. The Green Project
  3. The Star7 PDA
• SPARC based, handheld wireless PDA
• with a 5" color LCD with touchscreen input
• a new 16-bit color hardware double-buffered NTSC framebuffer
• 900 MHz wireless networking
• multi-media audio codec
• a new power supply/battery interface
• a version of Unix (SolarisOS) that runs in under a megabyte, including drivers for PCMCIA
• radio networking
• flash RAM file system
  4. + Oak
• a new small, safe, secure, distributed, robust, interpreted, garbage collected, multi-threaded, architecture neutral, high performance, dynamic programming language
• a set of classes that implement a spatial user interface metaphor, a user interface methodology which uses animation, audio, spatial cues, gestures
• All of this, in 1992!
  5. Why all of this?
• If Oak was intended for devices like these, at a time when multiprocessor machines were still rare (and nobody even dreamed of a phone with 4 cores), why did it include thread support from the very beginning?
  6. We will implement the same task several ways:
• Sequential algorithm
• Java Threads
• java.util.concurrent (Thread pool)
• Fork/Join
• Java 8 Stream API (Lambda)
  7. And also …
• we will compare the performance of each approach
  8. Microbenchmarking?!
  9. Do you do microbenchmarking? Then we're coming to you! (The Art Of) (Java) Benchmarking http://shipilev.net/
  10. http://openjdk.java.net/projects/code-tools/jmh/ JMH is a Java harness for building, running, and analysing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM.
  11. Our task: numerical integration
• using the rectangle method
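For reference, the rectangle method the slide names approximates the integral as a sum of rectangle areas of width step (written here as h); this is the formula all of the following code variants compute:

```latex
\int_{a}^{b} f(x)\,dx \;\approx\; \sum_{i=0}^{n-1} f(a + i\,h)\,h,
\qquad h = \text{step}, \quad n = \left\lfloor \tfrac{b-a}{h} \right\rfloor
```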
  12. Sequential algorithm
  13. Sequential v.1
// assumes: import static java.lang.Math.*;
public class SequentialCalculate {
    public double calculate(double start, double end, double step) {
        double result = 0.0;
        double x = start;
        while (x < end) {
            result += step * (sin(x) * sin(x) + cos(x) * cos(x));
            x += step;
        }
        return result;
    }
}
  15. Sequential v.2: with Functional interface
public interface Function<T, R> {
    R apply(T t);
}
public class SequentialCalculate {
    private final Function<Double, Double> func;
    public SequentialCalculate(Function<Double, Double> func) {
        this.func = func;
    }
    public double calculate(double start, double end, double step) {
        double result = 0.0;
        double x = start;
        while (x < end) {
            result += step * func.apply(x);
            x += step;
        }
        return result;
    }
}
  17. Sequential v.2 with Functional interface
SequentialCalculate sc = new SequentialCalculate(
    new Function<Double, Double>() {
        public Double apply(Double x) {
            return sin(x) * sin(x) + cos(x) * cos(x);
        }
    }
);
  18. Performance
• Intel Core i7-4770, 3.4 GHz, 4 physical cores + 4 Hyper-Threading = 8 CPUs
• Sun UltraSPARC T1, 1.0 GHz, 8 physical cores * 4 Light Weight Processes = 32 CPUs
[Chart: sequential execution time, t(sec): 54.832 and 1.877 on the two machines]
  19. Java Threads
  20. How will we parallelize?
[Diagram: the integration interval is split into chunks processed by Thread 1, Thread 2, …, Thread N]
  21. class CalcThread extends Thread {
    private final double start;
    private final double end;
    private final double step;
    private double partialResult;
    public CalcThread(double start, double end, double step) {
        this.start = start;
        this.end = end;
        this.step = step;
    }
    public void run() {
        double x = start;
        while (x < end) {
            partialResult += step * func.apply(x); // func: field of the enclosing class
            x += step;
        }
    }
}
  22. public double calculate(double start, double end, double step, int chunks)
        throws InterruptedException {
    CalcThread[] calcThreads = new CalcThread[chunks];
    double interval = (end - start) / chunks;
    double st = start;
    for (int i = 0; i < chunks; i++) {
        calcThreads[i] = new CalcThread(st, st + interval, step);
        calcThreads[i].start();
        st += interval;
    }
    double result = 0.0;
    for (CalcThread cs : calcThreads) {
        cs.join(); // join() throws InterruptedException
        result += cs.partialResult;
    }
    return result;
}
  23. The same code, annotated: the chunk-splitting loop plays the role of a Spliterator, and the result-merging loop plays the role of a Collector.
  24. [Chart: execution time t(sec), 0–2, vs. number of threads (1, 2, 4, 8, 16, 32) for Simple Threads]
  25. [Chart: speedup, 0–6, vs. number of threads (2, 4, 8, 16, 32) for Simple Threads]
  26. Limitations of the classic approach
• "thread-per-task" works well only for a small number of long-running tasks
• mixing the low-level code responsible for multithreaded execution with the high-level code responsible for the application's core functionality leads to so-called "spaghetti code"
• thread management is difficult
• a thread occupies a relatively large amount of memory, ~1 MB
• running a new task requires starting a new thread, one of the most resource-intensive operations
  27. java.util.concurrent
  28. Thread pool
• A thread pool is a queue combined with a fixed group of worker threads, using wait() and notify() to signal waiting threads that new work has arrived.
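The mechanism the slide describes can be sketched in a few lines. This is an illustrative toy, not code from the talk and not how `ThreadPoolExecutor` is actually implemented; the class name `SimpleThreadPool` is invented here, and real applications should use `java.util.concurrent` instead:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal fixed-size thread pool: a shared task queue plus worker
// threads that block in wait() until notify() signals new work.
class SimpleThreadPool {
    private final Queue<Runnable> tasks = new ArrayDeque<>();

    SimpleThreadPool(int nThreads) {
        for (int i = 0; i < nThreads; i++) {
            Thread worker = new Thread(() -> {
                while (true) {
                    Runnable task;
                    synchronized (tasks) {
                        while (tasks.isEmpty()) {
                            try {
                                tasks.wait();          // sleep until work arrives
                            } catch (InterruptedException e) {
                                return;                // interrupted: worker exits
                            }
                        }
                        task = tasks.poll();
                    }
                    task.run();                        // run outside the lock
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    void submit(Runnable task) {
        synchronized (tasks) {
            tasks.add(task);
            tasks.notify();                            // wake one waiting worker
        }
    }
}
```

Note the two classic details: the `wait()` sits in a loop (to re-check the condition after a spurious wakeup), and the task runs outside the `synchronized` block so a slow task does not block submissions.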
  29. class CalcThread implements Callable<Double> {
    private final double start;
    private final double end;
    private final double step;
    public CalcThread(double start, double end, double step) {
        this.start = start;
        this.end = end;
        this.step = step;
    }
    public Double call() {
        double partialResult = 0.0;
        double x = start;
        while (x < end) {
            partialResult += step * func.apply(x); // func: field of the enclosing class
            x += step;
        }
        return partialResult;
    }
}
  30. public double calculate(double start, double end, double step, int chunks)
        throws Exception {
    ExecutorService executorService = Executors.newFixedThreadPool(chunks);
    Future<Double>[] futures = new Future[chunks];
    double interval = (end - start) / chunks;
    double st = start;
    for (int i = 0; i < chunks; i++) {
        futures[i] = executorService.submit(new CalcThread(st, st + interval, step));
        st += interval;
    }
    executorService.shutdown();
    double result = 0.0;
    for (Future<Double> partRes : futures) {
        result += partRes.get(); // get() blocks until the partial result is ready
    }
    return result;
}
  31. The same code, annotated: the submission loop plays the role of a Spliterator, and the loop that sums the Futures plays the role of a Collector.
  32. [Chart: execution time t(sec), 0–2, vs. number of threads (1–32) for Simple Threads and Thread Pool]
  33. [Chart: speedup, 0–6, vs. number of threads (2–32) for Simple Threads and Thread Pool]
  34. "Being determines consciousness"
The currently dominant hardware platforms shape the approach to designing languages, libraries, and systems
• Java has had support for threads and concurrency (Thread, synchronized, volatile, …) since the language's inception
• However, the concurrency primitives introduced in 1995 reflected the hardware reality of the time: most commodity systems offered no parallelism at all, and even the most expensive systems offered it only on a limited scale
• In those days threads were mostly used to express asynchrony rather than concurrency, and as a result these mechanisms were adequate for their time
  35. The road to parallelism
• As the dominant hardware platform changes, the software platform must change with it
• As multiprocessor systems became cheaper, applications were expected to exploit ever more of the hardware parallelism those systems provide. Programmers then discovered that writing parallel programs with the low-level primitives offered by the language and class library is hard and error-prone
• java.util.concurrent enabled "coarse-grained" parallelism (a thread per request), but that may not be enough, since a single request can itself take a long time to process
• We need tools for "finer-grained" parallelism
[Diagram: a web server dispatching requests to threads Th1…ThN, contrasting coarse-grained with finer-grained parallelism]
  36. Fork/Join
  37. Fork/Join
// PSEUDOCODE
Result solve(Problem problem) {
    if (problem.size < SEQUENTIAL_THRESHOLD)
        return solveSequentially(problem);
    else {
        Result left, right;
        INVOKE-IN-PARALLEL {
            left = solve(extractLeftHalf(problem));
            right = solve(extractRightHalf(problem));
        }
        return combine(left, right);
    }
}
• Fork/Join is now one of the most widespread techniques for building parallel algorithms
  38. Work stealing: work-stealing schedulers
A ForkJoinPool can execute a substantially larger number of tasks on a small number of threads
  39. Work stealing
• Work-stealing schedulers balance the load "automatically": threads left without tasks find and take "free" tasks from other threads on their own. Whether the "victim" thread is active or idle does not matter.
• The main advantages over a scheduler with a shared task pool:
– no shared pool :), that is, no point of global synchronization
– better data locality, because in most cases a thread executes the tasks it spawned itself
  40. public class ForkJoinCalculate extends RecursiveTask<Double> {
    ...
    static final long SEQUENTIAL_THRESHOLD = 500;
    ...
    @Override
    protected Double compute() {
        if ((end - start) / step < SEQUENTIAL_THRESHOLD) {
            return sequentialCompute();
        }
        double mid = start + (end - start) / 2.0;
        ForkJoinCalculate left = new ForkJoinCalculate(func, start, mid, step);
        ForkJoinCalculate right = new ForkJoinCalculate(func, mid, end, step);
        left.fork();
        double rightAns = right.compute();
        double leftAns = left.join();
        return leftAns + rightAns;
    }
}
  41. protected double sequentialCompute() {
    double x = start;
    double result = 0.0;
    while (x < end) {
        result += step * func.apply(x);
        x += step;
    }
    return result;
}
  42. How to launch recursive execution:
ForkJoinPool pool = new ForkJoinPool();
ForkJoinCalculate calc = new ForkJoinCalculate(sqFunc, start, end, step);
double sum = pool.invoke(calc);
  43. The same ForkJoinCalculate code, annotated: the recursive splitting in compute() plays the role of a Spliterator, and joining the partial sums plays the role of a Collector.
  44. [Chart: execution time t(sec), 0–2, vs. number of threads (1–32) for Simple Threads, Thread Pool, and Fork/Join]
  45. [Chart: speedup, 0–6, vs. number of threads (2–32) for Simple Threads, Thread Pool, and Fork/Join]
  46. Fork/Join effectiveness
• Local task queues and work stealing are only utilized when worker threads actually schedule new tasks in their own queues
– If this doesn't occur, the ForkJoinPool is just a ThreadPoolExecutor with extra overhead
– If input tasks are already split (or are splittable) into tasks of approximately equal computing load, it is less efficient than just using a ThreadPoolExecutor directly
• If tasks have variable computing load and can be split into subtasks, then ForkJoinPool's built-in load balancing is likely to make it more efficient than using a ThreadPoolExecutor
  47. The F/J framework: criticism
• exceedingly complex
– the code looks more like an old C language program segmented into classes than an O-O structure
• a design failure
– its primary uses are fully-strict, compute-only, recursively decomposing processing of large aggregate data structures; it is for compute-intensive tasks only
• lacking in industry professional attributes
– no monitoring, no alerting or logging, no availability for general application usage
• misusing parallelization
– recursive decomposition has a narrower performance window; an academic exercise
• inadequate in scope
– you must be able to express things in terms of apply, reduce, filter, map, cumulate, sort, uniquify, paired mappings, and so on; no general-purpose application programming here
• special purpose
  48. F/J source code
  49. F/J restrictions
• Recursive decomposition has a narrower performance window. It only works well:
– on balanced tree structures (DAG)
– where there are no cyclic dependencies
– where the computation duration is neither too short nor too long
– where there is no blocking
• Recommended restrictions:
– must be plain (between 100 and 10,000 basic computational steps in the compute method)
– compute-intensive code only
– no blocking, no I/O, no synchronization
[Venn diagram: within "All problems", F/J fits only the subset expressible as reduce, filter, map, cumulate, sort, uniquify, paired mappings, …]
  52. Lambda
  53. 1994
"He (Bill Joy) would often go on at length about how great Oak would be if he could only add closures and continuations and parameterized types"
"While we all agreed these were very cool language features, we were all kind of hoping to finish this language in our lifetimes and get on to creating cool applications with it"
"It is also interesting that Bill was absolutely right about what Java needs long term. When I go look at the list of things he wanted to add back then, I want them all. He was right, he usually is"
Patrick Naughton, one of the creators of Java
  56. Ingredients of a lambda expression
• A lambda expression has three ingredients:
– A block of code
– Parameters
– Values for the free variables; that is, the variables that are not parameters and not defined inside the code
while (x < end) {
    result += step * (sin(x) * sin(x) + cos(x) * cos(x));
    x += step;
}
step * (sin(x) * sin(x) + cos(x) * cos(x));
[Annotated: the expression is the block of code, x is the parameter, step is a free variable]
  62. Lambda expression
step * (sin(x) * sin(x) + cos(x) * cos(x));
x -> step * (sin(x) * sin(x) + cos(x) * cos(x));
Function<Double, Double> func =
    x -> step * (sin(x) * sin(x) + cos(x) * cos(x));
Split into two composed functions, G(F(x)):
Function<Double, Double> func = x -> sin(x) * sin(x) + cos(x) * cos(x);
Function<Double, Double> calcFunc = y -> step * y;
Function<Double, Double> sqFunc = func.andThen(calcFunc);
  67. SequentialCalculate sc = new SequentialCalculate(
    new Function<Double, Double>() {
        public Double apply(Double x) {
            return sin(x) * sin(x) + cos(x) * cos(x);
        }
    }
);
// as a lambda:
SequentialCalculate sc =
    new SequentialCalculate(x -> sin(x)*sin(x) + cos(x)*cos(x));
// or with a named variable:
Function<Double, Double> func = x -> sin(x) * sin(x) + cos(x) * cos(x);
SequentialCalculate sc = new SequentialCalculate(func);
  70. Stream API
  71. Integral calculation
double step = 0.001;
double start = 0.0;
double end = 10_000.0;
Function<Double, Double> func = x -> sin(x) * sin(x) + cos(x) * cos(x);
Function<Double, Double> calcFunc = y -> step * y;
Function<Double, Double> sqFunc = func.andThen(calcFunc);
double sum = Stream.
    iterate(0.0, s -> s + step).
    limit((long) ((end - start) / step)).
    map(sqFunc).
    reduce(0.0, Double::sum);
  76. java.util.function.*
public interface Function<T, R> {
    R apply(T t);
}
public interface DoubleUnaryOperator {
    double applyAsDouble(double x);
}
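The practical difference between the two interfaces is boxing. A small illustrative comparison (not from the talk): the generic Function<Double, Double> boxes every value to a Double object, while DoubleUnaryOperator works on primitive double throughout, which is why the primitive-specialized DoubleStream variant shown next is cheaper:

```java
import java.util.function.DoubleUnaryOperator;
import java.util.function.Function;

public class BoxingDemo {
    public static void main(String[] args) {
        // Every call autoboxes the argument and the result to Double.
        Function<Double, Double> boxed = x -> x * x;

        // Stays on primitive double: no allocation per call.
        DoubleUnaryOperator primitive = x -> x * x;

        double a = boxed.apply(3.0);            // Double unboxed back to double
        double b = primitive.applyAsDouble(3.0);
        System.out.println(a == b);             // prints "true": both compute 9.0
    }
}
```

The functional shapes are identical; only the calling convention differs, so switching a hot loop from Function<Double, Double> to DoubleUnaryOperator removes one allocation per element.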
  77. DoubleStream
public interface DoubleUnaryOperator {
    double applyAsDouble(double x);
}
DoubleUnaryOperator funcD = x -> sin(x) * sin(x) + cos(x) * cos(x);
DoubleUnaryOperator calcFuncD = y -> step * y;
DoubleUnaryOperator sqFuncDouble = funcD.andThen(calcFuncD);
double sum = DoubleStream.
    iterate(0.0, s -> s + step).
    limit((long) ((end - start) / step)).
    map(sqFuncDouble).
    sum();
  80. What's the difference? [Chart: execution time, ~1.7–2.2 s, for Sequential, Generic Stream, Double Stream]
  81. Stream parallel
double sum = DoubleStream.
    iterate(0.0, s -> s + step).
    limit((long) ((end - start) / step)).
    parallel().
    map(sqFuncDouble).
    sum();
  83. and … http://mail.openjdk.java.net/pipermail/lambda-dev/2013-June/010019.html
  86. Stream parallel v.2
double sum = LongStream.
    range(0, (long) ((end - start) / step)).
    parallel().
    mapToDouble(i -> start + step * i).
    map(sqFuncDouble).
    sum();
  88. The same code, annotated: LongStream.range supplies a Spliterator that splits cheaply and evenly, and sum() plays the role of a Collector.
  89. [Chart: execution time, ~0.35–0.39 s, for Simple Threads, Thread Pool, Fork/Join, Double Stream Parallel]
  90. http://stackoverflow.com/questions/20375176/should-i-always-use-a-parallel-stream-when-possible?rq=1
  91. Parallel stream problem
https://bugs.openjdk.java.net/browse/JDK-8032512
• The problem is that all parallel streams use the common fork-join thread pool, and if you submit a long-running task, you effectively block all threads in the pool.
• By default, all streams use the same ForkJoinPool, configured with as many threads as there are cores on the machine running the program.
• So, for computation-intensive stream evaluation, one should always use a dedicated ForkJoinPool in order not to block other streams.
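The "dedicated ForkJoinPool" advice on the slide is usually implemented with the following widely used trick (a sketch, not code from the talk): when the terminal operation runs inside a task submitted to an explicit ForkJoinPool, the stream's work executes in that pool's workers rather than in the common pool. Note this relies on observed Stream API behavior, not a documented guarantee:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.stream.LongStream;

public class DedicatedPoolDemo {
    public static void main(String[] args) throws Exception {
        // Dedicated pool with 4 workers; long computations here
        // cannot starve the shared common pool.
        ForkJoinPool pool = new ForkJoinPool(4);
        try {
            long sum = pool.submit(() ->
                LongStream.range(0, 1_000_000).parallel().sum()
            ).get();                       // block until the pipeline finishes
            System.out.println(sum);       // prints 499999500000
        } finally {
            pool.shutdown();
        }
    }
}
```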
  92. [Chart: execution time, 0–2.5 s, for Sequential, Generic Stream, Double Stream, Simple Threads, Thread Pool, Fork/Join, Double Stream Parallel]
  93. Thank you!
  94. javaday.org.ua
