Ben Christensen
Developer – Edge Engineering at Netflix
@benjchristensen
http://techblog.netflix.com/
QConSF - November 2014
Reactive Programming with Rx
Watch the video with slide synchronization on InfoQ.com!
http://www.infoq.com/presentations/rx-service-architecture
Presented at QCon San Francisco
www.qconsf.com
RxJava
http://github.com/ReactiveX/RxJava
http://reactivex.io
Maven Central: 'io.reactivex:rxjava:1.0.+'
Iterable<T> (pull)        Observable<T> (push)
T next()                  onNext(T)
throws Exception          onError(Exception)
returns;                  onCompleted()
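The pull/push duality above can be sketched without any library: a familiar pull-based Iterator loop next to a minimal, hypothetical push-based observer (the MiniObserver names below are illustrative, not the real rx.Observer API):

```java
import java.util.Iterator;
import java.util.List;

public class PullVsPush {
    // Push side: a minimal, hypothetical observer -- illustrative only,
    // not the real rx.Observer interface.
    interface MiniObserver<T> {
        void onNext(T t);          // producer pushes each value
        void onError(Exception e); // terminal: mirrors "throws Exception"
        void onCompleted();        // terminal: mirrors a normal return
    }

    static void pushAll(List<String> source, MiniObserver<String> observer) {
        try {
            for (String s : source) observer.onNext(s);
            observer.onCompleted();
        } catch (Exception e) {
            observer.onError(e);
        }
    }

    public static void main(String[] args) {
        List<String> data = List.of("a", "b", "c");

        // Pull: the consumer drives iteration with next()
        Iterator<String> it = data.iterator();
        while (it.hasNext()) System.out.println("pulled " + it.next());

        // Push: the producer drives; the consumer only reacts
        pushAll(data, new MiniObserver<String>() {
            public void onNext(String t) { System.out.println("pushed " + t); }
            public void onError(Exception e) { e.printStackTrace(); }
            public void onCompleted() { System.out.println("done"); }
        });
    }
}
```

Either way the consumer sees the same three signals; only who drives the loop changes.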
// Iterable<String> or Stream<String>
// that contains 75 Strings
getDataFromLocalMemory()
    .skip(10)
    .limit(5)
    .map(s -> s + "_transformed")
    .forEach(t -> System.out.println("onNext => " + t))

// Observable<String>
// that emits 75 Strings
getDataFromNetwork()
    .skip(10)
    .take(5)
    .map(s -> s + "_transformed")
    .forEach(t -> System.out.println("onNext => " + t))
         Single                 Multiple
Sync     T getData()            Iterable<T> getData()
                                Stream<T> getData()
Async    Future<T> getData()    Observable<T> getData()
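The matrix reads as five method shapes on one hypothetical service; a sketch, with a placeholder Observable type standing in for rx.Observable (the interface and method names below are illustrative):

```java
import java.util.concurrent.Future;
import java.util.stream.Stream;

public class DataShapes {
    // Placeholder standing in for rx.Observable<T>; illustrative only.
    interface Observable<T> {}

    interface DataService {
        String getDataSync();               // sync, single value
        Iterable<String> getDataIterable(); // sync, multiple values
        Stream<String> getDataStream();     // sync, multiple values (lazy)
        Future<String> getDataFuture();     // async, single value
        Observable<String> getData();       // async, multiple values
    }
}
```

Observable is the only cell that is both asynchronous and multi-valued, which is the gap Rx fills.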
Observable.create(subscriber -> {
    subscriber.onNext("Hello World!");
    subscriber.onCompleted();
}).subscribe(System.out::println);

Observable.create(subscriber -> {
    subscriber.onNext("Hello");
    subscriber.onNext("World!");
    subscriber.onCompleted();
}).subscribe(System.out::println);

// shorten by using helper method
Observable.just("Hello", "World!")
    .subscribe(System.out::println);

// add onError and onComplete listeners
Observable.just("Hello", "World!")
    .subscribe(System.out::println,
        Throwable::printStackTrace,
        () -> System.out.println("Done"));
// expand to show full classes
Observable.create(new OnSubscribe<String>() {
    @Override
    public void call(Subscriber<? super String> subscriber) {
        subscriber.onNext("Hello World!");
        subscriber.onCompleted();
    }
}).subscribe(new Subscriber<String>() {
    @Override
    public void onCompleted() {
        System.out.println("Done");
    }

    @Override
    public void onError(Throwable e) {
        e.printStackTrace();
    }

    @Override
    public void onNext(String t) {
        System.out.println(t);
    }
});
// add error propagation
Observable.create(subscriber -> {
    try {
        subscriber.onNext("Hello World!");
        subscriber.onCompleted();
    } catch (Exception e) {
        subscriber.onError(e);
    }
}).subscribe(System.out::println);

// add error propagation
Observable.create(subscriber -> {
    try {
        subscriber.onNext(throwException());
        subscriber.onCompleted();
    } catch (Exception e) {
        subscriber.onError(e);
    }
}).subscribe(System.out::println);

// add error propagation
Observable.create(subscriber -> {
    try {
        subscriber.onNext("Hello World!");
        subscriber.onNext(throwException());
        subscriber.onCompleted();
    } catch (Exception e) {
        subscriber.onError(e);
    }
}).subscribe(System.out::println);

// add concurrency (manually)
Observable.create(subscriber -> {
    new Thread(() -> {
        try {
            subscriber.onNext(getData());
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    }).start();
}).subscribe(System.out::println);
// add concurrency (using a Scheduler)
Observable.create(subscriber -> {
    try {
        subscriber.onNext(getData());
        subscriber.onCompleted();
    } catch (Exception e) {
        subscriber.onError(e);
    }
}).subscribeOn(Schedulers.io())
    .subscribe(System.out::println);
Learn more about parameterized concurrency and virtual time with
Rx Schedulers at https://github.com/ReactiveX/RxJava/wiki/Scheduler
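subscribeOn only chooses where the producing work runs. The underlying idea can be sketched with a plain ExecutorService (this shows the concept, not the RxJava Scheduler API; produceOn is a hypothetical helper):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class SchedulerSketch {
    // Run the producing work on a worker thread and push results to the
    // consumer callbacks -- roughly what subscribeOn(Schedulers.io()) arranges.
    static void produceOn(ExecutorService worker, Consumer<String> onNext, Runnable onCompleted) {
        worker.submit(() -> {
            onNext.accept("data from " + Thread.currentThread().getName());
            onCompleted.run();
        });
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService io = Executors.newSingleThreadExecutor();
        // The caller returns immediately; emissions arrive on the worker thread.
        produceOn(io, s -> System.out.println("onNext => " + s),
                      () -> System.out.println("onCompleted"));
        io.shutdown();
        io.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

The Scheduler abstraction lets the same chain be retargeted (io, computation, test/virtual time) without rewriting the thread-handoff code by hand.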
// add operator
Observable.create(subscriber -> {
    try {
        subscriber.onNext(getData());
        subscriber.onCompleted();
    } catch (Exception e) {
        subscriber.onError(e);
    }
}).subscribeOn(Schedulers.io())
    .map(data -> data + " --> at " + System.currentTimeMillis())
    .subscribe(System.out::println);
// add error handling
Observable.create(subscriber -> {
    try {
        subscriber.onNext(getData());
        subscriber.onCompleted();
    } catch (Exception e) {
        subscriber.onError(e);
    }
}).subscribeOn(Schedulers.io())
    .map(data -> data + " --> at " + System.currentTimeMillis())
    .onErrorResumeNext(e -> Observable.just("Fallback Data"))
    .subscribe(System.out::println);
// infinite
Observable.create(subscriber -> {
    int i = 0;
    while (!subscriber.isUnsubscribed()) {
        subscriber.onNext(i++);
    }
}).subscribe(System.out::println);
Note: No backpressure support here. See Observable.from(Iterable)
or Observable.range() for actual implementations
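The backpressure the note refers to is a request protocol: the consumer asks for n items and the producer emits at most that many. A minimal self-contained sketch of that idea (illustrative only, not the RxJava internals):

```java
public class BackpressureSketch {
    // Producer that only emits while outstanding demand remains.
    static class RangeProducer {
        private final int count;
        private int emitted = 0;

        RangeProducer(int count) { this.count = count; }

        // Consumer signals demand; producer emits up to n more items.
        int request(int n, java.util.function.IntConsumer onNext) {
            int sent = 0;
            while (sent < n && emitted < count) {
                onNext.accept(emitted++);
                sent++;
            }
            return sent; // how many were actually delivered
        }
    }

    public static void main(String[] args) {
        RangeProducer producer = new RangeProducer(100);
        // Consumer pulls in batches of 10 instead of being flooded.
        producer.request(10, i -> System.out.println("onNext => " + i));
    }
}
```

The raw while-loop on the slide has no such demand signal, which is why it can overwhelm a slow subscriber.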
Hot: emits whether you're ready or not
  examples: mouse and keyboard events, system events, stock prices

Cold: emits when requested (generally at a controlled rate)
  examples: database query, web service request, reading a file

// hot
Observable.create(subscriber -> {
    // register with data source
})

// cold
Observable.create(subscriber -> {
    // fetch data
})
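The distinction can be sketched without RxJava: a cold source re-runs its work for every subscriber, while a hot source emits to whoever happens to be listening (the class and method names below are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class HotVsCold {
    // Cold: each subscribe call re-executes the producing work, on demand.
    static <T> void subscribeCold(Supplier<T> fetch, Consumer<T> onNext) {
        onNext.accept(fetch.get()); // work happens per subscriber
    }

    // Hot: emissions happen regardless; subscribers just tap the stream.
    static class HotSource<T> {
        private final List<Consumer<T>> listeners = new ArrayList<>();
        void subscribe(Consumer<T> onNext) { listeners.add(onNext); }
        void emit(T value) { listeners.forEach(l -> l.accept(value)); } // fires whether or not anyone is ready
    }

    public static void main(String[] args) {
        subscribeCold(() -> "query result", v -> System.out.println("cold: " + v));

        HotSource<String> clicks = new HotSource<>();
        clicks.emit("click-1");                    // nobody listening: value is lost
        clicks.subscribe(v -> System.out.println("hot: " + v));
        clicks.emit("click-2");                    // delivered to the subscriber
    }
}
```

Losing "click-1" above is exactly why hot streams need flow control: the source will not wait for you.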
Hot streams call for flow control; cold streams support flow control & backpressure.
Abstract Concurrency
Cold Finite Streams
public Observable<Void> handle(HttpServerRequest<ByteBuf> request, HttpServerResponse<ByteBuf> response) {
    // first request User object
    return getUser(request.getQueryParameters().get("userId")).flatMap(user -> {
        // then fetch personal catalog
        Observable<Map<String, Object>> catalog = getPersonalizedCatalog(user)
            .flatMap(catalogList -> {
                return catalogList.videos().<Map<String, Object>> flatMap(video -> {
                    Observable<Bookmark> bookmark = getBookmark(video);
                    Observable<Rating> rating = getRating(video);
                    Observable<VideoMetadata> metadata = getMetadata(video);
                    return Observable.zip(bookmark, rating, metadata, (b, r, m) -> {
                        return combineVideoData(video, b, r, m);
                    });
                });
            });
        // and fetch social data in parallel
        Observable<Map<String, Object>> social = getSocialData(user).map(s -> {
            return s.getDataAsMap();
        });
        // merge the results
        return Observable.merge(catalog, social);
    }).flatMap(data -> {
        // output as SSE as we get back the data (no waiting until all is done)
        return response.writeAndFlush(new ServerSentEvent(SimpleJson.mapToJson(data)));
    });
}
flatMap

Observable<R> b = Observable<T>.flatMap({ T t ->
    Observable<R> r = ... transform t ...
    return r;
})
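The same operator shape exists on Java 8 Streams, which makes it easy to try synchronously (Stream.flatMap rather than Observable.flatMap):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FlatMapDemo {
    public static void main(String[] args) {
        // Each input element is transformed into a stream of results,
        // and the inner streams are flattened into one output stream.
        List<String> out = Stream.of("a", "b")
            .flatMap(s -> Stream.of(s + "1", s + "2"))
            .collect(Collectors.toList());
        System.out.println(out); // [a1, a2, b1, b2]
    }
}
```

With Observable, the inner sources are asynchronous, so flatMap also merges their emissions as they arrive.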
zip

Observable.zip(a, b, (a, b) -> {
    ... operate on values from both a & b ...
    return Arrays.asList(a, b);
})
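zip pairs the nth element of each source. A synchronous sketch over lists shows the pairing rule (Observable.zip additionally waits for both async sources to produce their nth value):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

public class ZipDemo {
    // Combine sources pairwise; stops at the shorter source, like zip.
    static <A, B, R> List<R> zip(List<A> as, List<B> bs, BiFunction<A, B, R> f) {
        List<R> out = new ArrayList<>();
        int n = Math.min(as.size(), bs.size());
        for (int i = 0; i < n; i++) out.add(f.apply(as.get(i), bs.get(i)));
        return out;
    }

    public static void main(String[] args) {
        List<String> zipped = zip(List.of("bookmark", "rating"),
                                  List.of(1, 2),
                                  (a, b) -> a + ":" + b);
        System.out.println(zipped); // [bookmark:1, rating:2]
    }
}
```

This is why zip is the right tool in the handle() example: bookmark, rating, and metadata for one video must be combined into a single result.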
class VideoService {
    def VideoList getPersonalizedListOfMovies(userId);
    def VideoBookmark getBookmark(userId, videoId);
    def VideoRating getRating(userId, videoId);
    def VideoMetadata getMetadata(videoId);
}
class VideoService {
    def Observable<VideoList> getPersonalizedListOfMovies(userId);
    def Observable<VideoBookmark> getBookmark(userId, videoId);
    def Observable<VideoRating> getRating(userId, videoId);
    def Observable<VideoMetadata> getMetadata(videoId);
}
...create an Observable API
instead of a blocking API...
Non-Opinionated Concurrency
Decouples Consumption from Production
[Diagram: several API requests on an event loop each call new Command().observe(); a Collapser batches them into a single call to the dependency behind the REST API]
// first request User object
return getUser(request.getQueryParameters().get("userId")).flatMap(user -> {
// then fetch personal catalog
Observable<Map<String, Object>> catalog = getPersonalizedCatalog(user)
.flatMap(catalogList -> {
return catalogList.videos().<Map<String, Object>> flatMap(video -> {
Observable<Bookmark> bookmark = getBookmark(video);
Observable<Rating> rating = getRating(video);
Observable<VideoMetadata> metadata = getMetadata(video);
return Observable.zip(bookmark, rating, metadata, (b, r, m) -> {
return combineVideoData(video, b, r, m);
});
});
});
// and fetch social data in parallel
Observable<Map<String, Object>> social = getSocialData(user).map(s -> {
return s.getDataAsMap();
});
// merge the results
return Observable.merge(catalog, social);
})
~5 network calls
(#3 and #4 may result in more due to windowing)
Clear API Communicates Potential Cost
class VideoService {
  def Observable<VideoList> getPersonalizedListOfMovies(userId);
  def Observable<VideoBookmark> getBookmark(userId, videoId);
  def Observable<VideoRating> getRating(userId, videoId);
  def Observable<VideoMetadata> getMetadata(videoId);
}
Implementation Can Differ
BIO Network Call
Local Cache
Collapsed Network Call
Implementation Can Differ and Change
Local Cache
Collapsed Network Call
Collapsed Network Call
BIO to NIO Network Call
Retrieval, Transformation, Combination
all done in same declarative manner
What about … ?
Error Handling
Observable.create(subscriber -> {
throw new RuntimeException("failed!");
}).onErrorResumeNext(throwable -> {
return Observable.just("fallback value");
}).subscribe(System.out::println);
Observable.create(subscriber -> {
throw new RuntimeException("failed!");
}).onErrorReturn(throwable -> {
return "fallback value";
}).subscribe(System.out::println);
Observable.create(subscriber -> {
throw new RuntimeException("failed!");
}).retryWhen(attempts -> {
return attempts.zipWith(Observable.range(1, 3), (throwable, i) -> i)
.flatMap(i -> {
System.out.println("delay retry by " + i + " second(s)");
return Observable.timer(i, TimeUnit.SECONDS);
}).concatWith(Observable.error(new RuntimeException("Exceeded 3 retries")));
})
.subscribe(System.out::println, t -> t.printStackTrace());
Concurrency
an Observable is sequential
(no concurrent emissions)
scheduling and combining Observables
enables concurrency while retaining sequential emission
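That sequential-emission contract can be sketched without RxJava. The following is a stdlib-only illustration (the class and names are hypothetical, not RxJava internals): two threads produce concurrently, but a lock around the emission path guarantees the downstream observer sees one item at a time, which is the guarantee operators like merge provide.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch only: concurrent producers, strictly sequential delivery.
// The synchronized block plays the role RxJava's serialization plays inside merge.
public class SerializedEmitter {
    public static void main(String[] args) throws InterruptedException {
        List<String> received = new ArrayList<>();
        Object gate = new Object();
        Consumer<String> onNext = item -> {
            synchronized (gate) { // one emission at a time, never concurrent
                received.add(item);
            }
        };
        Thread a = new Thread(() -> { for (int i = 0; i < 100; i++) onNext.accept("a" + i); });
        Thread b = new Thread(() -> { for (int i = 0; i < 100; i++) onNext.accept("b" + i); });
        a.start(); b.start();
        a.join(); b.join();
        // all 200 items arrive; without the gate, the ArrayList could be corrupted
        System.out.println(received.size());
    }
}
```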
// merging async Observables allows each
// to execute concurrently
Observable.merge(getDataAsync(1), getDataAsync(2))
merge
// concurrently fetch data for 5 items
Observable.range(0, 5).flatMap(i -> {
return getDataAsync(i);
})
Observable.range(0, 5000).window(500).flatMap(work -> {
return work.observeOn(Schedulers.computation())
.map(item -> {
// simulate computational work
try { Thread.sleep(1); } catch (Exception e) {}
return item + " processed " + Thread.currentThread();
});
})
Observable.range(0, 5000).buffer(500).flatMap(is -> {
return Observable.from(is).subscribeOn(Schedulers.computation())
.map(item -> {
// simulate computational work
try { Thread.sleep(1); } catch (Exception e) {}
return item + " processed " + Thread.currentThread();
});
})
Flow Control
(backpressure)
Observable.from(iterable).take(1000).map(i -> "value_" + i).subscribe(System.out::println);
no backpressure needed
synchronous on same thread
(no queueing)
Observable.from(iterable).take(1000).map(i -> "value_" + i)
.observeOn(Schedulers.computation()).subscribe(System.out::println);
backpressure needed
asynchronous
(queueing)
Flow Control Options
Hot
emits whether you’re ready or not
examples: mouse and keyboard events, system events, stock prices
Observable.create(subscriber -> {
// register with data source
})
flow control

Cold
emits when requested (generally at controlled rate)
examples: database query, web service request, reading file
Observable.create(subscriber -> {
// fetch data
})
flow control & backpressure
Block
(callstack blocking and/or park the thread)
Hot or Cold Streams
Temporal Operators
(batch or drop data using time)
Hot Streams
Observable.range(1, 1000000).sample(10, TimeUnit.MILLISECONDS).forEach(System.out::println);
110584
242165
544453
942880
Observable.range(1, 1000000).throttleFirst(10, TimeUnit.MILLISECONDS).forEach(System.out::println);
1
55463
163962
308545
457445
592638
751789
897159
Observable.range(1, 1000000).debounce(10, TimeUnit.MILLISECONDS).forEach(System.out::println);
1000000
Observable.range(1, 1000000).buffer(10, TimeUnit.MILLISECONDS)
.toBlocking().forEach(list -> System.out.println("batch: " + list.size()));
batch: 71141
batch: 49488
batch: 141147
batch: 141432
batch: 195920
batch: 240462
batch: 160410
/* The following will emit a buffered list as it is debounced */
// first we multicast the stream ... using refCount so it handles the subscribe/unsubscribe
Observable<Integer> burstStream = intermittentBursts().take(20).publish().refCount();
// then we get the debounced version
Observable<Integer> debounced = burstStream.debounce(10, TimeUnit.MILLISECONDS);
// then the buffered one that uses the debounced stream to demark window start/stop
Observable<List<Integer>> buffered = burstStream.buffer(debounced);
// then we subscribe to the buffered stream so it does what we want
buffered.toBlocking().forEach(System.out::println);
[0, 1, 2]
[0, 1, 2]
[0, 1, 2, 3, 4, 5, 6]
[0, 1, 2, 3, 4]
[0, 1]
[]
https://gist.github.com/benjchristensen/e4524a308456f3c21c0b
Observable.range(1, 1000000).window(50, TimeUnit.MILLISECONDS)
.flatMap(window -> window.count())
.toBlocking().forEach(count -> System.out.println("num items: " + count));
num items: 477769
num items: 155463
num items: 366768
Observable.range(1, 1000000).window(500000)
.flatMap(window -> window.count())
.toBlocking().forEach(count -> System.out.println("num items: " + count));
num items: 500000
num items: 500000
Reactive Pull
(dynamic push-pull)
Push (reactive) when consumer
keeps up with producer.
Switch to Pull (interactive)
when consumer is slow.
Bound all* queues.
*vertically, not horizontally
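The switch between push and pull can be sketched with a bounded queue (a stdlib-only illustration with hypothetical names, not the RxJava implementation): the producer pushes freely while the consumer keeps up, and blocks at the queue boundary (falling back to the consumer's pull rate) when it doesn't. The bounded queue is the "bound all queues" part.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of dynamic push-pull: bounded memory regardless of producer speed.
public class PushPull {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16); // bounded, never grows
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 1000; i++) queue.put(i); // blocks when the queue is full
                queue.put(-1); // terminal signal
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int count = 0;
        while (true) {
            int item = queue.take(); // consumer pulls at its own rate
            if (item == -1) break;
            count++;
        }
        producer.join();
        System.out.println(count);
    }
}
```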
Reactive Pull
hot vs cold
Reactive Pull
cold supports pull
Observable.from(iterable)
Observable.range(0, 100000)
Cold Streams
emits when requested
(generally at controlled rate)
examples
database query
web service request
reading file
Reactive Pull
hot receives signal
*including Observables that don’t
implement reactive pull support
hotSourceStream.onBackpressureBuffer().observeOn(aScheduler);
hotSourceStream.onBackpressureDrop().observeOn(aScheduler);
stream.onBackpressure(strategy).subscribe
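For hot sources that cannot be pulled, the strategies above come down to a choice of what happens at the queue boundary. A stdlib-only sketch (hypothetical, not the RxJava operators) contrasting the buffer and drop strategies:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of two backpressure strategies at a queue boundary:
// "buffer" keeps everything (memory risk), "drop" sheds overflow past a capacity.
public class Strategies {
    public static void main(String[] args) {
        int capacity = 3;
        Deque<Integer> buffer = new ArrayDeque<>(); // unbounded: nothing lost
        Deque<Integer> drop = new ArrayDeque<>();   // bounded: overflow discarded
        for (int i = 0; i < 10; i++) {
            buffer.add(i);                // grows without limit
            if (drop.size() < capacity) { // items beyond capacity are silently dropped
                drop.add(i);
            }
        }
        System.out.println(buffer.size() + " " + drop.size());
    }
}
```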
Hot Infinite Streams
MantisJob
.source(NetflixSources.moviePlayAttempts())
.stage(playAttempts -> {
return playAttempts.groupBy(playAttempt -> {
return playAttempt.getMovieId();
})
})
.stage(playAttemptsByMovieId -> {
playAttemptsByMovieId
.window(10, TimeUnit.MINUTES, 1000) // buffer for 10 minutes, or 1000 play attempts
.flatMap(windowOfPlayAttempts -> {
return windowOfPlayAttempts
.reduce(new FailRatioExperiment(playAttemptsByMovieId.getKey()), (experiment, playAttempt) -> {
experiment.updateFailRatio(playAttempt);
experiment.updateExamples(playAttempt);
return experiment;
}).doOnNext(experiment -> {
logToHistorical("Play attempt experiment", experiment.getId(),experiment); // log for offline analysis
}).filter(experiment -> {
return experiment.failRatio() >= DYNAMIC_PROP("fail_threshold").get();
}).map(experiment -> {
return new FailReport(experiment, runCorrelations(experiment.getExamples()));
}).doOnNext(report -> {
logToHistorical("Failure report", report.getId(), report); // log for offline analysis
})
})
})
.sink(Sinks.emailAlert(report -> toEmail(report))) // anomalies trigger events (simple email here)
[Diagram: groupBy(movieId) splits the play-attempt stream into per-movie groups, e.g. movieId=12345 and movieId=34567]
[Diagram: Stage 1 applies groupBy to the incoming event streams; the groups for movieId 12345, 56789, and 34567 are routed to their Stage 2 workers and on to the sink]
10mins || 1000
4:40-4:50pm
4:50-5:00pm
5:00-5:07pm
(burst to 1000)
5:07-5:17pm
stream.onBackpressure(strategy?).subscribe
stream.onBackpressure(buffer).subscribe
stream.onBackpressure(drop).subscribe
stream.onBackpressure(sample).subscribe
stream.onBackpressure(scaleHorizontally).subscribe
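In RxJava 1.0 these strategies spell out as concrete operators — onBackpressureBuffer(), onBackpressureDrop(), sample(...) — while scaling horizontally is what a system like Mantis does across machines. The buffer-versus-drop decision is the same one made at any bounded queue; a stdlib-only analogy (no Rx dependency) where offer() on a full queue implements "drop":

```java
import java.util.concurrent.ArrayBlockingQueue;

public class BackpressureAnalogy {
    public static void main(String[] args) {
        // A bounded buffer of 5 slots, analogous to onBackpressureBuffer(5).
        ArrayBlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(5);
        int dropped = 0;
        for (int i = 0; i < 10; i++) {
            // offer() returns false when the buffer is full: the "drop" strategy.
            if (!buffer.offer(i)) {
                dropped++;
            }
        }
        System.out.println("buffered=" + buffer.size() + " dropped=" + dropped);
        // prints: buffered=5 dropped=5
    }
}
```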
Reactive-Streams
https://github.com/reactive-streams/reactive-streams
https://github.com/ReactiveX/RxJavaReactiveStreams
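The Reactive Streams contract is four small interfaces (Publisher, Subscriber, Subscription, Processor) with demand signaled via Subscription.request(n); the same interfaces were later incorporated into the JDK as java.util.concurrent.Flow. A minimal demand-driven subscriber against the JDK's SubmissionPublisher — a sketch using the later JDK API, not the 2014-era org.reactivestreams jar:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        StringBuilder seen = new StringBuilder();
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // signal demand: this is the backpressure channel
                }
                public void onNext(Integer item) {
                    seen.append(item);
                    subscription.request(1); // ask for one more
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete() { done.countDown(); }
            });
            publisher.submit(1);
            publisher.submit(2);
            publisher.submit(3);
        } // close() signals onComplete after delivering buffered items
        done.await();
        System.out.println(seen);
        // prints: 123
    }
}
```

The publisher only emits what the subscriber has requested, which is exactly the mechanism the interop examples below rely on to move items safely between RxJava, Akka Streams, and Reactor.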
final ActorSystem system = ActorSystem.create("InteropTest");
final FlowMaterializer mat = FlowMaterializer.create(system);
// RxJava Observable
Observable<GroupedObservable<Boolean, Integer>> oddAndEvenGroups = Observable.range(1, 1000000)
.groupBy(i -> i % 2 == 0)
.take(2);
Observable<String> strings = oddAndEvenGroups.<String>flatMap(group -> {
// schedule odd and even on different event loops
Observable<Integer> asyncGroup = group.observeOn(Schedulers.computation());
// convert to Reactive Streams Publisher
Publisher<Integer> groupPublisher = RxReactiveStreams.toPublisher(asyncGroup);
// convert to Akka Streams Source and transform using Akka Streams 'map' and 'take' operators
Source<String> stringSource = Source.from(groupPublisher).map(i -> i + " " + group.getKey()).take(2000);
// convert back from Akka to Rx Observable
return RxReactiveStreams.toObservable(stringSource.runWith(Sink.<String> fanoutPublisher(1, 1), mat));
});
strings.toBlocking().forEach(System.out::println);
system.shutdown();
compile 'io.reactivex:rxjava:1.0.+'
compile 'io.reactivex:rxjava-reactive-streams:0.3.0'
compile 'com.typesafe.akka:akka-stream-experimental_2.11:0.10-M1'
// RxJava Observable
Observable<GroupedObservable<Boolean, Integer>> oddAndEvenGroups = Observable.range(1, 1000000)
.groupBy(i -> i % 2 == 0)
.take(2);
Observable<String> strings = oddAndEvenGroups.<String>flatMap(group -> {
// schedule odd and even on different event loops
Observable<Integer> asyncGroup = group.observeOn(Schedulers.computation());
// convert to Reactive Streams Publisher
Publisher<Integer> groupPublisher = RxReactiveStreams.toPublisher(asyncGroup);
// convert to Reactor Stream and transform using Reactor Stream 'map' and 'take' operators
Stream<String> linesStream = Streams.create(groupPublisher).map(i -> i + " " + group.getKey()).take(2000);
// convert back from Reactor Stream to Rx Observable
return RxReactiveStreams.toObservable(linesStream);
});
strings.toBlocking().forEach(System.out::println);
compile 'io.reactivex:rxjava:1.0.+'
compile 'io.reactivex:rxjava-reactive-streams:0.3.0'
compile 'org.projectreactor:reactor-core:2.0.0.M1'
compile 'io.reactivex:rxjava:1.0.+'
compile 'io.reactivex:rxjava-reactive-streams:0.3.0'
compile 'io.ratpack:ratpack-rx:0.9.10'
try {
RatpackServer server = EmbeddedApp.fromHandler(ctx -> {
Observable<String> o1 = Observable.range(0, 2000)
.observeOn(Schedulers.computation()).map(i -> {
return "A " + i;
});
Observable<String> o2 = Observable.range(0, 2000)
.observeOn(Schedulers.computation()).map(i -> {
return "B " + i;
});
Observable<String> o = Observable.merge(o1, o2);
ctx.render(
ServerSentEvents.serverSentEvents(RxReactiveStreams.toPublisher(o), e ->
e.event("counter").data("event " + e.getItem()))
);
}).getServer();
server.start();
System.out.println("Port: " + server.getBindPort());
} catch (Exception e) {
e.printStackTrace();
}
RxJava 1.0 Final
November 18th
Mental Shift
imperative → functional
sync → async
pull → push
Concurrency and async are non-trivial.
Rx doesn’t trivialize it.
Rx is powerful and rewards those
who go through the learning curve.
          Single                 Multiple
Sync      T getData()            Iterable<T> getData()
                                 Stream<T> getData()
Async     Future<T> getData()    Observable<T> getData()
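The two single-value cells of this quadrant can be shown with the JDK alone (Observable<T> fills the async/multiple cell and needs RxJava); a minimal sketch of how the async variant composes where the sync one blocks:

```java
import java.util.concurrent.CompletableFuture;

public class SingleValueDuality {
    // Sync / single: T getData()
    static String getDataSync() {
        return "value";
    }

    // Async / single: Future<T> getData()
    static CompletableFuture<String> getDataAsync() {
        return CompletableFuture.supplyAsync(() -> "value");
    }

    public static void main(String[] args) {
        System.out.println(getDataSync());   // caller blocks for the value
        getDataAsync()                       // caller composes without blocking
                .thenApply(String::toUpperCase)
                .thenAccept(System.out::println)
                .join();                     // wait only at the very edge
        // prints: value, then VALUE
    }
}
```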
Abstract Concurrency
Non-Opinionated Concurrency
Decouple Production from Consumption
Powerful Composition
of Nested, Conditional Flows
First-class Support of
Error Handling, Scheduling
& Flow Control
RxJava
http://github.com/ReactiveX/RxJava
http://reactivex.io
Reactive Programming in the Netflix API with RxJava http://techblog.netflix.com/2013/02/rxjava-netflix-api.html
Optimizing the Netflix API http://techblog.netflix.com/2013/01/optimizing-netflix-api.html
Reactive Extensions (Rx) http://www.reactivex.io
Reactive Streams https://github.com/reactive-streams/reactive-streams
Ben Christensen
@benjchristensen
RxJava
https://github.com/ReactiveX/RxJava
@RxJava
RxJS
http://reactive-extensions.github.io/RxJS/
@ReactiveX
jobs.netflix.com
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
Neo4j
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
Matthew Sinclair
 
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
SOFTTECHHUB
 
20240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 202420240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 2024
Matthew Sinclair
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
James Anderson
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Aggregage
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
Neo4j
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
Safe Software
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
Neo4j
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ana-Maria Mihalceanu
 

Recently uploaded (20)

Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
 
UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
 
By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
Video Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the FutureVideo Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the Future
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
 
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
 
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
 
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
 
20240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 202420240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 2024
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
 

Reactive Programming with Rx

  • 1. Ben Christensen, Developer – Edge Engineering at Netflix. @benjchristensen http://techblog.netflix.com/ QConSF - November 2014. Reactive Programming with Rx
  • 6. Iterable<T> is pull: T next(), throws Exception, returns. Observable<T> is push: onNext(T), onError(Exception), onCompleted().

    // Iterable<String> or Stream<String> that contains 75 Strings
    getDataFromLocalMemory()
      .skip(10)
      .limit(5)
      .map(s -> s + "_transformed")
      .forEach(t -> System.out.println("onNext => " + t));

    // Observable<String> that emits 75 Strings
    getDataFromNetwork()
      .skip(10)
      .take(5)
      .map(s -> s + "_transformed")
      .forEach(t -> System.out.println("onNext => " + t));
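The push contract the slide contrasts with Iterable's pull can be sketched from scratch. The following is a hypothetical, minimal toy (the class and method names only mirror RxJava for illustration, this is not the library): an Observable calls onNext zero or more times, then exactly one of onError or onCompleted.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

// Minimal sketch of the push-based Observer contract: onNext*, then
// (onError | onCompleted). Hypothetical code, NOT RxJava itself.
class MiniObservable<T> {

    interface Observer<T> {
        void onNext(T t);
        void onError(Throwable e);
        void onCompleted();
    }

    private final Consumer<Observer<T>> onSubscribe;

    private MiniObservable(Consumer<Observer<T>> onSubscribe) {
        this.onSubscribe = onSubscribe;
    }

    static <T> MiniObservable<T> create(Consumer<Observer<T>> onSubscribe) {
        return new MiniObservable<>(onSubscribe);
    }

    @SafeVarargs
    static <T> MiniObservable<T> just(T... values) {
        return create(obs -> {
            for (T v : values) obs.onNext(v); // push each value downstream
            obs.onCompleted();                // then signal terminal completion
        });
    }

    // map transforms each pushed value on its way downstream.
    <R> MiniObservable<R> map(Function<T, R> f) {
        return create(downstream -> onSubscribe.accept(new Observer<T>() {
            public void onNext(T t) { downstream.onNext(f.apply(t)); }
            public void onError(Throwable e) { downstream.onError(e); }
            public void onCompleted() { downstream.onCompleted(); }
        }));
    }

    void subscribe(Consumer<T> consumer) {
        onSubscribe.accept(new Observer<T>() {
            public void onNext(T t) { consumer.accept(t); }
            public void onError(Throwable e) { e.printStackTrace(); }
            public void onCompleted() { /* no-op */ }
        });
    }
}
```

Note the inversion relative to Iterable: the consumer never calls next(); values, errors, and completion are all delivered to it.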
  • 8.
                Single                   Multiple
      Sync      T getData()              Iterable<T> getData() / Stream<T> getData()
      Async     Future<T> getData()      Observable<T> getData()
  • 9. Observable.create(subscriber -> {
        subscriber.onNext("Hello World!");
        subscriber.onCompleted();
    }).subscribe(System.out::println);
  • 11. // shorten by using helper method
    Observable.just("Hello", "World!")
        .subscribe(System.out::println);
  • 12. // add onError and onComplete listeners
    Observable.just("Hello", "World!")
        .subscribe(System.out::println,
                   Throwable::printStackTrace,
                   () -> System.out.println("Done"));
  • 13. // expand to show full classes
    Observable.create(new OnSubscribe<String>() {
        @Override
        public void call(Subscriber<? super String> subscriber) {
            subscriber.onNext("Hello World!");
            subscriber.onCompleted();
        }
    }).subscribe(new Subscriber<String>() {
        @Override
        public void onCompleted() {
            System.out.println("Done");
        }

        @Override
        public void onError(Throwable e) {
            e.printStackTrace();
        }

        @Override
        public void onNext(String t) {
            System.out.println(t);
        }
    });
  • 14. // add error propagation
    Observable.create(subscriber -> {
        try {
            subscriber.onNext("Hello World!");
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    }).subscribe(System.out::println);
  • 15. // add error propagation
    Observable.create(subscriber -> {
        try {
            subscriber.onNext(throwException());
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    }).subscribe(System.out::println);
  • 16. // add error propagation
    Observable.create(subscriber -> {
        try {
            subscriber.onNext("Hello World!");
            subscriber.onNext(throwException());
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    }).subscribe(System.out::println);
  • 17. // add concurrency (manually)
    Observable.create(subscriber -> {
        new Thread(() -> {
            try {
                subscriber.onNext(getData());
                subscriber.onCompleted();
            } catch (Exception e) {
                subscriber.onError(e);
            }
        }).start();
    }).subscribe(System.out::println);
  • 18. // add concurrency (using a Scheduler)
    Observable.create(subscriber -> {
        try {
            subscriber.onNext(getData());
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    }).subscribeOn(Schedulers.io())
      .subscribe(System.out::println);
    Learn more about parameterized concurrency and virtual time with Rx Schedulers at https://github.com/ReactiveX/RxJava/wiki/Scheduler
  • 19. // add operator
    Observable.create(subscriber -> {
        try {
            subscriber.onNext(getData());
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    }).subscribeOn(Schedulers.io())
      .map(data -> data + " --> at " + System.currentTimeMillis())
      .subscribe(System.out::println);
  • 20. // add error handling
    Observable.create(subscriber -> {
        try {
            subscriber.onNext(getData());
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    }).subscribeOn(Schedulers.io())
      .map(data -> data + " --> at " + System.currentTimeMillis())
      .onErrorResumeNext(e -> Observable.just("Fallback Data"))
      .subscribe(System.out::println);
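The idea behind onErrorResumeNext on the previous slide can be reduced to a plain Java sketch: if the primary computation fails, resume with a fallback value instead of propagating the error. This is a hypothetical helper illustrating the concept, not RxJava's operator.

```java
import java.util.function.Supplier;

// Sketch of the onErrorResumeNext idea: swap in a fallback on failure
// rather than surfacing the error to the caller. Hypothetical helper.
class Fallback {
    static <T> T getWithFallback(Supplier<T> primary, T fallback) {
        try {
            return primary.get();  // happy path: primary data
        } catch (RuntimeException e) {
            return fallback;       // resume with fallback data on failure
        }
    }
}
```

In Rx the same decision is expressed declaratively in the chain, so every subscriber gets the fallback behavior without try/catch at each call site.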
  • 21. // infinite
    Observable.create(subscriber -> {
        int i = 0;
        while (!subscriber.isUnsubscribed()) {
            subscriber.onNext(i++);
        }
    }).subscribe(System.out::println);
    Note: No backpressure support here. See Observable.from(Iterable) or Observable.range() for actual implementations.
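The cooperative cancellation that isUnsubscribed() provides can be sketched without RxJava: the producer checks a shared flag on every iteration, so an infinite loop stops as soon as the consumer cancels. InfiniteSource and takeFirst are hypothetical names for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of cooperative unsubscription: the infinite producer honors a
// flag that the consumer flips, mirroring subscriber.isUnsubscribed().
class InfiniteSource {
    static List<Integer> takeFirst(int n) {
        AtomicBoolean unsubscribed = new AtomicBoolean(false);
        List<Integer> received = new ArrayList<>();
        int i = 0;
        while (!unsubscribed.get()) {      // producer checks before each emission
            received.add(i++);
            if (received.size() >= n) {
                unsubscribed.set(true);    // consumer cancels after n items
            }
        }
        return received;
    }
}
```

The key point is that cancellation is cooperative: an infinite source only terminates because it checks the flag itself, which is why the slide's loop conditions on isUnsubscribed().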
  • 22. Hot vs Cold
    Hot: emits whether you're ready or not. Examples: mouse and keyboard events, system events, stock prices. Needs flow control.
      Observable.create(subscriber -> { /* register with data source */ })
    Cold: emits when requested (generally at a controlled rate). Examples: database query, web service request, reading a file. Supports flow control & backpressure.
      Observable.create(subscriber -> { /* fetch data */ })
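The hot/cold distinction can be shown with two tiny from-scratch shapes (hypothetical names, not RxJava): a cold source re-runs its work for every subscriber, while a hot source emits on its own schedule and late subscribers only see events that arrive after they attach.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of cold vs hot sources. Not RxJava; illustrative only.
class ColdVsHot {
    // Cold: each subscribe re-runs the producer, like re-issuing a query.
    static void coldSubscribe(Consumer<Integer> subscriber) {
        for (int i = 0; i < 3; i++) {
            subscriber.accept(i); // "fetch data" happens per subscriber
        }
    }

    // Hot: one shared stream; events fire whether anyone listens or not.
    private final List<Consumer<Integer>> listeners = new ArrayList<>();

    void attach(Consumer<Integer> listener) {
        listeners.add(listener);
    }

    void emit(int event) {
        listeners.forEach(l -> l.accept(event)); // late subscribers miss past events
    }
}
```

This is also why the slide pairs backpressure with cold sources: a producer that runs on demand can be asked to slow down, whereas mouse events and stock ticks cannot.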
  • 29. public Observable<Void> handle(HttpServerRequest<ByteBuf> request,
                                       HttpServerResponse<ByteBuf> response) {
        // first request User object
        return getUser(request.getQueryParameters().get("userId")).flatMap(user -> {
            // then fetch personal catalog
            Observable<Map<String, Object>> catalog = getPersonalizedCatalog(user)
                .flatMap(catalogList -> {
                    return catalogList.videos().<Map<String, Object>> flatMap(video -> {
                        Observable<Bookmark> bookmark = getBookmark(video);
                        Observable<Rating> rating = getRating(video);
                        Observable<VideoMetadata> metadata = getMetadata(video);
                        return Observable.zip(bookmark, rating, metadata, (b, r, m) -> {
                            return combineVideoData(video, b, r, m);
                        });
                    });
                });
            // and fetch social data in parallel
            Observable<Map<String, Object>> social = getSocialData(user).map(s -> {
                return s.getDataAsMap();
            });
            // merge the results
            return Observable.merge(catalog, social);
        }).flatMap(data -> {
            // output as SSE as we get back the data (no waiting until all is done)
            return response.writeAndFlush(new ServerSentEvent(SimpleJson.mapToJson(data)));
        });
    }
  • 32. flatMap
    Observable<R> b = Observable<T>.flatMap({ T t ->
        Observable<R> r = ... transform t ...
        return r;
    })
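The flatten step that makes flatMap different from map can be sketched over plain lists; this hypothetical helper ignores RxJava's asynchrony and interleaving and keeps only the shape: each element maps to a sub-sequence, and the sub-sequences are merged into one output.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch of flatMap over in-memory lists: map each element to a
// sub-sequence, then flatten. Not RxJava's concurrent implementation.
class FlatMapSketch {
    static <T, R> List<R> flatMap(List<T> source, Function<T, List<R>> f) {
        List<R> out = new ArrayList<>();
        for (T t : source) {
            out.addAll(f.apply(t)); // flatten each sub-sequence into the output
        }
        return out;
    }
}
```

With Observables the sub-sequences arrive asynchronously, so flatMap also acts as the merge point for concurrent requests, which is how the handle() example fans out per video and recombines.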
  • 44. zip
    Observable.zip(a, b, (a, b) -> {
        ... operate on values from both a & b ...
        return Arrays.asList(a, b);
    })
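zip's pairing behavior can be sketched over lists: combine the n-th element of each source with a combiner function, completing when the shorter source runs out. A hypothetical helper, not RxJava's streaming implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Sketch of zip over in-memory lists: pair elements index-by-index and
// apply a combiner; stop at the shorter source. Illustrative only.
class ZipSketch {
    static <A, B, R> List<R> zip(List<A> a, List<B> b, BiFunction<A, B, R> f) {
        List<R> out = new ArrayList<>();
        int n = Math.min(a.size(), b.size()); // complete with the shorter source
        for (int i = 0; i < n; i++) {
            out.add(f.apply(a.get(i), b.get(i)));
        }
        return out;
    }
}
```

In the handle() example this is what joins the bookmark, rating, and metadata responses for one video into a single combined record once all three have arrived.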
  • 47.
  • 48. public Observable<Void> handle(HttpServerRequest<ByteBuf> request, HttpServerResponse<ByteBuf> response) { // first request User object return getUser(request.getQueryParameters().get("userId")).flatMap(user -> { // then fetch personal catalog Observable<Map<String, Object>> catalog = getPersonalizedCatalog(user) .flatMap(catalogList -> { return catalogList.videos().<Map<String, Object>> flatMap(video -> { Observable<Bookmark> bookmark = getBookmark(video); Observable<Rating> rating = getRating(video); Observable<VideoMetadata> metadata = getMetadata(video); return Observable.zip(bookmark, rating, metadata, (b, r, m) -> { return combineVideoData(video, b, r, m); }); }); }); // and fetch social data in parallel Observable<Map<String, Object>> social = getSocialData(user).map(s -> { return s.getDataAsMap(); }); // merge the results return Observable.merge(catalog, social); }).flatMap(data -> { // output as SSE as we get back the data (no waiting until all is done) return response.writeAndFlush(new ServerSentEvent(SimpleJson.mapToJson(data))); }); }
  • 49. public Observable<Void> handle(HttpServerRequest<ByteBuf> request, HttpServerResponse<ByteBuf> response) { // first request User object return getUser(request.getQueryParameters().get("userId")).flatMap(user -> { // then fetch personal catalog Observable<Map<String, Object>> catalog = getPersonalizedCatalog(user) .flatMap(catalogList -> { return catalogList.videos().<Map<String, Object>> flatMap(video -> { Observable<Bookmark> bookmark = getBookmark(video); Observable<Rating> rating = getRating(video); Observable<VideoMetadata> metadata = getMetadata(video); return Observable.zip(bookmark, rating, metadata, (b, r, m) -> { return combineVideoData(video, b, r, m); }); }); }); // and fetch social data in parallel Observable<Map<String, Object>> social = getSocialData(user).map(s -> { return s.getDataAsMap(); }); // merge the results return Observable.merge(catalog, social); }).flatMap(data -> { // output as SSE as we get back the data (no waiting until all is done) return response.writeAndFlush(new ServerSentEvent(SimpleJson.mapToJson(data))); }); }
• 50.
instead of a blocking api...

class VideoService {
    def VideoList getPersonalizedListOfMovies(userId);
    def VideoBookmark getBookmark(userId, videoId);
    def VideoRating getRating(userId, videoId);
    def VideoMetadata getMetadata(videoId);
}

...create an observable api:

class VideoService {
    def Observable<VideoList> getPersonalizedListOfMovies(userId);
    def Observable<VideoBookmark> getBookmark(userId, videoId);
    def Observable<VideoRating> getRating(userId, videoId);
    def Observable<VideoMetadata> getMetadata(videoId);
}
• 51. (the same handler, repeated on slides 51–52 with six numbered highlights marking its steps)
• 62.
Decouples Consumption from Production

// first request User object
return getUser(request.getQueryParameters().get("userId")).flatMap(user -> {
    // then fetch personal catalog
    Observable<Map<String, Object>> catalog = getPersonalizedCatalog(user)
        .flatMap(catalogList -> {
            return catalogList.videos().<Map<String, Object>> flatMap(video -> {
                Observable<Bookmark> bookmark = getBookmark(video);
                Observable<Rating> rating = getRating(video);
                Observable<VideoMetadata> metadata = getMetadata(video);
                return Observable.zip(bookmark, rating, metadata, (b, r, m) -> {
                    return combineVideoData(video, b, r, m);
                });
            });
        });
    // and fetch social data in parallel
    Observable<Map<String, Object>> social = getSocialData(user).map(s -> {
        return s.getDataAsMap();
    });
    // merge the results
    return Observable.merge(catalog, social);
})
• 65. Decouples Consumption from Production — [diagram: many API requests on an event loop each invoke new Command().observe(); a request Collapser batches them into a single call to the dependency REST API]
• 66. (the same composition, annotated) ~5 network calls (#3 and #4 may result in more due to windowing)
• 67.
Clear API Communicates Potential Cost

class VideoService {
    def Observable<VideoList> getPersonalizedListOfMovies(userId);
    def Observable<VideoBookmark> getBookmark(userId, videoId);
    def Observable<VideoRating> getRating(userId, videoId);
    def Observable<VideoMetadata> getMetadata(videoId);
}
• 68.
Implementation Can Differ

class VideoService {
    def Observable<VideoList> getPersonalizedListOfMovies(userId);
    def Observable<VideoBookmark> getBookmark(userId, videoId);
    def Observable<VideoRating> getRating(userId, videoId);
    def Observable<VideoMetadata> getMetadata(videoId);
}

BIO Network Call / Local Cache / Collapsed Network Call
• 69.
Implementation Can Differ and Change

class VideoService {
    def Observable<VideoList> getPersonalizedListOfMovies(userId);
    def Observable<VideoBookmark> getBookmark(userId, videoId);
    def Observable<VideoRating> getRating(userId, videoId);
    def Observable<VideoMetadata> getMetadata(videoId);
}

Local Cache / Collapsed Network Call / Collapsed Network Call / BIO Network Call → NIO Network Call
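The same point can be shown without Rx at all: as long as a method returns a callback-accepting type, a cache hit can emit synchronously on the caller's thread while a miss fetches on another thread, and the call site never changes. The `Async` interface and `getMetadata` helper below are hypothetical stand-ins for the Observable-returning service methods, a minimal sketch rather than RxJava's API:

```java
import java.util.Map;
import java.util.concurrent.*;
import java.util.function.Consumer;

// Toy stand-in for Observable<T>: the caller just subscribes with a callback,
// so whether the value arrives synchronously (cache hit) or asynchronously
// (simulated network call) is invisible at the call site.
interface Async<T> {
    void subscribe(Consumer<T> onNext);
}

public class ImplementationCanDiffer {
    static final Map<String, String> CACHE = new ConcurrentHashMap<>();
    static final ExecutorService POOL = Executors.newSingleThreadExecutor();

    // Same signature either way; only the body differs.
    static Async<String> getMetadata(String videoId) {
        String cached = CACHE.get(videoId);
        if (cached != null) {
            // synchronous: emit immediately on the caller's thread
            return onNext -> onNext.accept(cached);
        }
        // asynchronous: simulate a network fetch on another thread
        return onNext -> POOL.submit(() -> {
            String fetched = "metadata-for-" + videoId;
            CACHE.put(videoId, fetched);
            onNext.accept(fetched);
        });
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch latch = new CountDownLatch(1);
        getMetadata("v1").subscribe(m -> {   // async path (cache miss)
            System.out.println("first:  " + m);
            latch.countDown();
        });
        latch.await();
        getMetadata("v1").subscribe(m ->     // sync path (cache hit)
            System.out.println("second: " + m));
        POOL.shutdown();
    }
}
```

Both subscriptions look identical to the caller; only the first one actually crossed a thread boundary.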
• 70. Retrieval, Transformation, Combination — all done in the same declarative manner
• 73.
Observable.create(subscriber -> {
    throw new RuntimeException("failed!");
}).onErrorResumeNext(throwable -> {
    return Observable.just("fallback value");
}).subscribe(System.out::println);
• 74.
Observable.create(subscriber -> {
    throw new RuntimeException("failed!");
}).onErrorReturn(throwable -> {
    return "fallback value";
}).subscribe(System.out::println);
• 75.
Observable.create(subscriber -> {
    throw new RuntimeException("failed!");
}).retryWhen(attempts -> {
    return attempts.zipWith(Observable.range(1, 3), (throwable, i) -> i)
        .flatMap(i -> {
            System.out.println("delay retry by " + i + " second(s)");
            return Observable.timer(i, TimeUnit.SECONDS);
        }).concatWith(Observable.error(new RuntimeException("Exceeded 3 retries")));
}).subscribe(System.out::println, t -> t.printStackTrace());
• 80.
Concurrency

An Observable is sequential (no concurrent emissions). Scheduling and combining Observables enables concurrency while retaining sequential emission.
• 81.
// merging async Observables allows each
// to execute concurrently
Observable.merge(getDataAsync(1), getDataAsync(2))
• 82.
// concurrently fetch data for 5 items
Observable.range(0, 5).flatMap(i -> {
    return getDataAsync(i);
})
• 83.
Observable.range(0, 5000).window(500).flatMap(work -> {
    return work.observeOn(Schedulers.computation())
        .map(item -> {
            // simulate computational work
            try { Thread.sleep(1); } catch (Exception e) {}
            return item + " processed " + Thread.currentThread();
        });
})
• 84.
Observable.range(0, 5000).buffer(500).flatMap(is -> {
    return Observable.from(is).subscribeOn(Schedulers.computation())
        .map(item -> {
            // simulate computational work
            try { Thread.sleep(1); } catch (Exception e) {}
            return item + " processed " + Thread.currentThread();
        });
})
• 88.
no backpressure needed: synchronous on same thread (no queueing)

Observable.from(iterable).take(1000).map(i -> "value_" + i)
    .subscribe(System.out::println);
• 90.
backpressure needed: asynchronous (queueing)

Observable.from(iterable).take(1000).map(i -> "value_" + i)
    .observeOn(Schedulers.computation()).subscribe(System.out::println);
• 92.
Hot: emits whether you're ready or not. Examples: mouse and keyboard events, system events, stock prices. Needs flow control.

    Observable.create(subscriber -> {
        // register with data source
    })

Cold: emits when requested (generally at a controlled rate). Examples: database query, web service request, reading a file. Needs flow control & backpressure.

    Observable.create(subscriber -> {
        // fetch data
    })
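The hot/cold distinction can be demonstrated with plain Java, no Rx required: a cold source runs its producer once per subscriber, while a hot source shares one producer, so late subscribers miss earlier values. The `coldSubscribe`/`hotEmit` helpers below are illustrative toys, not RxJava API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal sketch of hot vs cold sources (illustrative, not RxJava).
public class HotVsCold {
    // Cold: the producer runs once per subscriber, on demand.
    static void coldSubscribe(Consumer<Integer> onNext) {
        for (int i = 0; i < 3; i++) onNext.accept(i);   // "fetch data" per subscriber
    }

    // Hot: one shared producer; subscribers only see what is emitted
    // while they are registered.
    static final List<Consumer<Integer>> listeners = new ArrayList<>();
    static void hotEmit(int value) {
        for (Consumer<Integer> l : listeners) l.accept(value);
    }

    public static void main(String[] args) {
        List<Integer> a = new ArrayList<>();
        List<Integer> b = new ArrayList<>();
        coldSubscribe(a::add);
        coldSubscribe(b::add);          // both get the full 0,1,2 sequence

        List<Integer> late = new ArrayList<>();
        hotEmit(0);                     // nobody registered yet: value is missed
        listeners.add(late::add);
        hotEmit(1);
        hotEmit(2);                     // late subscriber sees only 1, 2
        System.out.println(a + " " + b + " " + late);
        // → [0, 1, 2] [0, 1, 2] [1, 2]
    }
}
```

This is exactly why a hot stream needs flow control (it emits regardless of readiness) while a cold stream can simply be asked to produce more slowly.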
• 93. Block (callstack blocking and/or park the thread) — Hot or Cold Streams
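The blocking strategy can be sketched with a bounded stdlib queue: when the consumer is slow, `ArrayBlockingQueue.put` parks the producer thread, so nothing is dropped and memory stays bounded. A toy illustration of the idea, not how RxJava implements it:

```java
import java.util.concurrent.ArrayBlockingQueue;

// Flow control by blocking: a bounded queue parks the producer thread
// whenever the consumer falls behind.
public class BlockingFlowControl {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 100; i++) {
                try {
                    queue.put(i);   // blocks when the queue (bound = 10) is full
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        producer.start();

        int sum = 0;
        for (int i = 0; i < 100; i++) {
            Thread.sleep(1);        // slow consumer
            sum += queue.take();
        }
        producer.join();
        System.out.println("sum = " + sum);  // all 100 values delivered, none dropped
    }
}
```

The cost is a parked producer thread, which is why the next slide's temporal operators (batch or drop) are often preferred for hot streams.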
• 94. Temporal Operators (batch or drop data using time) — Hot Streams
• 98.
Observable.range(1, 1000000).buffer(10, TimeUnit.MILLISECONDS)
    .toBlocking().forEach(list -> System.out.println("batch: " + list.size()));

batch: 71141
batch: 49488
batch: 141147
batch: 141432
batch: 195920
batch: 240462
batch: 160410
• 100.
/* The following will emit a buffered list as it is debounced */
// first we multicast the stream ... using refCount so it handles the subscribe/unsubscribe
Observable<Integer> burstStream = intermittentBursts().take(20).publish().refCount();
// then we get the debounced version
Observable<Integer> debounced = burstStream.debounce(10, TimeUnit.MILLISECONDS);
// then the buffered one that uses the debounced stream to demarcate window start/stop
Observable<List<Integer>> buffered = burstStream.buffer(debounced);
// then we subscribe to the buffered stream so it does what we want
buffered.toBlocking().forEach(System.out::println);

[0, 1, 2]
[0, 1, 2]
[0, 1, 2, 3, 4, 5, 6]
[0, 1, 2, 3, 4]
[0, 1]
[]
• 105. https://gist.github.com/benjchristensen/e4524a308456f3c21c0b
• 106.
Observable.range(1, 1000000).window(50, TimeUnit.MILLISECONDS)
    .flatMap(window -> window.count())
    .toBlocking().forEach(count -> System.out.println("num items: " + count));

num items: 477769
num items: 155463
num items: 366768
• 107.
Observable.range(1, 1000000).window(500000)
    .flatMap(window -> window.count())
    .toBlocking().forEach(count -> System.out.println("num items: " + count));

num items: 500000
num items: 500000
• 110.
Push (reactive) when consumer keeps up with producer.
Switch to Pull (interactive) when consumer is slow.
Bound all* queues.
*vertically, not horizontally
• 114.
Cold Streams: emits when requested (generally at a controlled rate). Examples: database query, web service request, reading a file → Pull

Observable.from(iterable)
Observable.range(0, 100000)
• 116. Reactive Pull — hot receives signal (*including Observables that don't implement reactive pull support)
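The reactive-pull handshake itself is small: the consumer asks for n items, the producer emits at most n, then waits for the next request, so push degrades gracefully into pull when the consumer is slow. The `ReactivePull` class below is a stdlib-only sketch of that contract (RxJava's real mechanism is `Subscriber.request(n)` backed by a `Producer`):

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the reactive-pull handshake: the consumer requests n items,
// the producer emits at most n, then waits for the next request.
public class ReactivePull {
    private final Iterator<Integer> source;
    private final Consumer<Integer> onNext;

    ReactivePull(List<Integer> data, Consumer<Integer> onNext) {
        this.source = data.iterator();
        this.onNext = onNext;
    }

    // Called by the consumer whenever it has capacity for n more items;
    // a fast consumer requesting Long.MAX_VALUE effectively gets pure push.
    void request(long n) {
        for (long i = 0; i < n && source.hasNext(); i++) {
            onNext.accept(source.next());
        }
    }
}
```

A slow consumer would call `request(1)` after processing each item; a fast one requests a large batch up front, which is why this model bounds queues without penalizing the fast path.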
• 124.
MantisJob
    .source(NetflixSources.moviePlayAttempts())
    .stage(playAttempts -> {
        return playAttempts.groupBy(playAttempt -> {
            return playAttempt.getMovieId();
        })
    })
    .stage(playAttemptsByMovieId -> {
        playAttemptsByMovieId
            .window(10, TimeUnit.MINUTES, 1000) // buffer for 10 minutes, or 1000 play attempts
            .flatMap(windowOfPlayAttempts -> {
                return windowOfPlayAttempts
                    .reduce(new FailRatioExperiment(playAttemptsByMovieId.getKey()),
                        (experiment, playAttempt) -> {
                            experiment.updateFailRatio(playAttempt);
                            experiment.updateExamples(playAttempt);
                            return experiment;
                        }).doOnNext(experiment -> {
                            logToHistorical("Play attempt experiment", experiment.getId(), experiment); // log for offline analysis
                        }).filter(experiment -> {
                            return experiment.failRatio() >= DYNAMIC_PROP("fail_threshold").get();
                        }).map(experiment -> {
                            return new FailReport(experiment, runCorrelations(experiment.getExamples()));
                        }).doOnNext(report -> {
                            logToHistorical("Failure report", report.getId(), report); // log for offline analysis
                        })
            })
    })
    .sink(Sinks.emailAlert(report -> { return toEmail(report) })) // anomalies trigger events (simple email here)
• 126. (the same MantisJob, annotated: the source is a Hot Infinite Stream)
  • 133. [diagram: event streams flow from Stage 1 through groupBy into Stage 2, then to the sink]
  • 134–141. [diagram build: grouped streams keyed 12345, 56789, 34567 are partitioned from Stage 1 across multiple Stage 2 workers]
  • 161. final ActorSystem system = ActorSystem.create("InteropTest");
final FlowMaterializer mat = FlowMaterializer.create(system);

// RxJava Observable
Observable<GroupedObservable<Boolean, Integer>> oddAndEvenGroups =
    Observable.range(1, 1000000)
        .groupBy(i -> i % 2 == 0)
        .take(2);

Observable<String> strings = oddAndEvenGroups.<String> flatMap(group -> {
    // schedule odd and even on different event loops
    Observable<Integer> asyncGroup = group.observeOn(Schedulers.computation());
    // convert to Reactive Streams Publisher
    Publisher<Integer> groupPublisher = RxReactiveStreams.toPublisher(asyncGroup);
    // convert to Akka Streams Source and transform using Akka Streams 'map' and 'take' operators
    Source<String> stringSource = Source.from(groupPublisher)
        .map(i -> i + " " + group.getKey())
        .take(2000);
    // convert back from Akka to Rx Observable
    return RxReactiveStreams.toObservable(
        stringSource.runWith(Sink.<String> fanoutPublisher(1, 1), mat));
});

strings.toBlocking().forEach(System.out::println);
system.shutdown();

compile 'io.reactivex:rxjava:1.0.+'
compile 'io.reactivex:rxjava-reactive-streams:0.3.0'
compile 'com.typesafe.akka:akka-stream-experimental_2.11:0.10-M1'
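The interop above works because RxJava, Akka Streams, and Reactor all speak the small Reactive Streams contract: Publisher, Subscriber, and Subscription. A minimal, dependency-free sketch of that contract in plain Java (the interfaces are declared locally here rather than imported from org.reactivestreams, and the demand handling is deliberately simplistic):

```java
import java.util.ArrayList;
import java.util.List;

public class MiniStreams {
    // Local stand-ins for the org.reactivestreams interfaces.
    interface Subscription { void request(long n); void cancel(); }
    interface Subscriber<T> {
        void onSubscribe(Subscription s);
        void onNext(T t);
        void onError(Throwable t);
        void onComplete();
    }
    interface Publisher<T> { void subscribe(Subscriber<? super T> s); }

    // A Publisher over a fixed list that honors demand: it emits only
    // as many elements as the subscriber has requested (backpressure).
    static <T> Publisher<T> fromList(List<T> items) {
        return sub -> sub.onSubscribe(new Subscription() {
            int index = 0;
            public void request(long n) {
                for (long i = 0; i < n && index < items.size(); i++) {
                    sub.onNext(items.get(index++));
                }
                if (index == items.size()) sub.onComplete();
            }
            public void cancel() { index = items.size(); }
        });
    }

    public static void main(String[] args) {
        List<String> out = new ArrayList<>();
        fromList(List.of("a", "b", "c")).subscribe(new Subscriber<String>() {
            public void onSubscribe(Subscription s) { s.request(2); }
            public void onNext(String t) { out.add(t); }
            public void onError(Throwable t) { out.add("error"); }
            public void onComplete() { out.add("complete"); }
        });
        System.out.println(out); // only the two requested elements arrive
    }
}
```

RxReactiveStreams.toPublisher and toObservable are adapters across exactly this boundary, which is what lets an Rx group flow through Akka or Reactor operators and back.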
  • 163. // RxJava Observable
Observable<GroupedObservable<Boolean, Integer>> oddAndEvenGroups =
    Observable.range(1, 1000000)
        .groupBy(i -> i % 2 == 0)
        .take(2);

Observable<String> strings = oddAndEvenGroups.<String> flatMap(group -> {
    // schedule odd and even on different event loops
    Observable<Integer> asyncGroup = group.observeOn(Schedulers.computation());
    // convert to Reactive Streams Publisher
    Publisher<Integer> groupPublisher = RxReactiveStreams.toPublisher(asyncGroup);
    // convert to Reactor Stream and transform using Reactor Stream 'map' and 'take' operators
    Stream<String> linesStream = Streams.create(groupPublisher)
        .map(i -> i + " " + group.getKey())
        .take(2000);
    // convert back from Reactor Stream to Rx Observable
    return RxReactiveStreams.toObservable(linesStream);
});

strings.toBlocking().forEach(System.out::println);

compile 'io.reactivex:rxjava:1.0.+'
compile 'io.reactivex:rxjava-reactive-streams:0.3.0'
compile 'org.projectreactor:reactor-core:2.0.0.M1'
  • 165. compile 'io.reactivex:rxjava:1.0.+'
compile 'io.reactivex:rxjava-reactive-streams:0.3.0'
compile 'io.ratpack:ratpack-rx:0.9.10'

try {
    RatpackServer server = EmbeddedApp.fromHandler(ctx -> {
        Observable<String> o1 = Observable.range(0, 2000)
            .observeOn(Schedulers.computation()).map(i -> {
                return "A " + i;
            });
        Observable<String> o2 = Observable.range(0, 2000)
            .observeOn(Schedulers.computation()).map(i -> {
                return "B " + i;
            });
        Observable<String> o = Observable.merge(o1, o2);
        ctx.render(
            ServerSentEvents.serverSentEvents(RxReactiveStreams.toPublisher(o),
                e -> e.event("counter").data("event " + e.getItem()))
        );
    }).getServer();
    server.start();
    System.out.println("Port: " + server.getBindPort());
} catch (Exception e) {
    e.printStackTrace();
}
  • 168. Mental Shift imperative → functional sync → async pull → push
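The pull-to-push shift can be shown in miniature: with an Iterable the consumer drives iteration by calling next(), while with an Observable the producer drives it by invoking callbacks. A minimal sketch (MiniObservable is a made-up stand-in for Rx's Observable, kept to two callbacks for brevity):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class PushVsPull {
    // Push-style source: the producer calls onNext/onCompleted,
    // mirroring the onNext/onError/onCompleted contract of Observable.
    interface MiniObservable<T> {
        void subscribe(Consumer<T> onNext, Runnable onCompleted);
    }

    static MiniObservable<String> just(List<String> items) {
        return (onNext, onCompleted) -> {
            items.forEach(onNext); // the producer drives the iteration
            onCompleted.run();
        };
    }

    public static void main(String[] args) {
        List<String> received = new ArrayList<>();
        just(List.of("a", "b", "c"))
            .subscribe(s -> received.add(s + "_transformed"),
                       () -> received.add("completed"));
        System.out.println(received);
    }
}
```

Nothing here is asynchronous yet; the point is that the same subscribe call works unchanged whether the producer emits synchronously, from another thread, or from a network callback, which is exactly why push composes over async sources where pull does not.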
  • 169. Concurrency and async are non-trivial. Rx doesn’t trivialize it. Rx is powerful and rewards those who go through the learning curve.
  • 170.
              Single                 Multiple
  Sync        T getData()            Iterable<T> getData() / Stream<T> getData()
  Async       Future<T> getData()    Observable<T> getData()
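The table's four quadrants can be written as Java signatures. CompletableFuture stands in for Future here, and because Observable is an Rx type rather than a JDK one, the async-multiple cell is approximated with a future of a list, which loses the per-element streaming that Observable provides:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Stream;

public class ReturnTypes {
    // Sync / Single: one value, computed before returning.
    static String getDataSync() { return "value"; }

    // Sync / Multiple: many values, still pulled by the caller.
    static Stream<String> getDataStream() { return Stream.of("a", "b"); }

    // Async / Single: one value, delivered later.
    static CompletableFuture<String> getDataAsync() {
        return CompletableFuture.supplyAsync(() -> "value");
    }

    // Async / Multiple: Observable<T> in Rx; approximated here as a
    // future of a list, which only completes once ALL elements exist.
    static CompletableFuture<List<String>> getDataAsyncMany() {
        return CompletableFuture.supplyAsync(() -> List.of("a", "b"));
    }

    public static void main(String[] args) {
        System.out.println(getDataAsync().join());
    }
}
```

The gap the approximation exposes is the argument for Observable: it is the only cell that delivers many values asynchronously as they become available.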
  • 173. Decouple Production from Consumption
  • 174. Powerful Composition of Nested, Conditional Flows
  • 175. First-class Support of Error Handling, Scheduling & Flow Control
  • 177.
  Reactive Programming in the Netflix API with RxJava: http://techblog.netflix.com/2013/02/rxjava-netflix-api.html
  Optimizing the Netflix API: http://techblog.netflix.com/2013/01/optimizing-netflix-api.html
  Reactive Extensions (Rx): http://www.reactivex.io
  Reactive Streams: https://github.com/reactive-streams/reactive-streams
  RxJava: https://github.com/ReactiveX/RxJava (@RxJava)
  RxJS: http://reactive-extensions.github.io/RxJS/ (@ReactiveX)
  Ben Christensen @benjchristensen
  jobs.netflix.com