
2017 meetup-apache-kafka-nov

Kafka 1.0 Highlights


  1. Apache Kafka Goes to 1.0
     Paris Apache Kafka Meetup - November 21, 2017
     Florian Hussonnois - Data Architect @Zenika - @fhussonnois
  2. Why does Kafka go to 1.0?
     https://twitter.com/jaykreps/status/925758458908917760
  3. Kafka Core 1.0.0
     KIP-112: Tolerate single disk failure in a JBOD (Just a Bunch Of Disks) broker
     - Prior to Kafka 1.0.0: the broker shuts down if any disk fails.
     - As of Kafka 1.0.0: the broker takes the replicas on bad log directories offline at runtime and notifies the controller, which triggers a leader election.
     Offline replicas and log directories can be monitored through metrics:
     - OfflineReplicasCount: the number of offline replicas on a live broker.
     - OfflineLogDirectoriesCount: the number of offline log directories on a live broker.
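     In practice, JBOD here means pointing a single broker at several independent data directories through log.dirs. A minimal broker configuration sketch (the paths and broker id are illustrative, not taken from the slides):

         # server.properties - one directory per physical disk (JBOD)
         broker.id=0
         log.dirs=/data/disk1/kafka-logs,/data/disk2/kafka-logs,/data/disk3/kafka-logs

     With 1.0.0, losing /data/disk2 only takes the replicas stored there offline; the broker keeps serving the partitions hosted on the remaining directories.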
  4. Kafka Streams 1.0.0
     KIP-120: Cleanup Kafka Streams builder API
     Deprecated classes:
     - org.apache.kafka.streams.processor.TopologyBuilder
     - org.apache.kafka.streams.kstream.KStreamBuilder
     New APIs:

         final StreamsBuilder builder = new StreamsBuilder();
         builder.<String, String>stream("my-topic")
                .mapValues(v -> v.toUpperCase())
                .to("my-uppercase-topic");
         final Topology topology = builder.build();

         TopologyDescription description = topology.describe();

         KafkaStreams kafkaStreams = new KafkaStreams(topology, streamsConfig);
         kafkaStreams.start();

     TopologyDescription has public interfaces:
     - Subtopology (Node, Source, Sink, Processor)
     - GlobalStore
     May be useful for debugging (static information).
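     Since the description is plain static metadata, it can simply be logged at startup. A short sketch continuing the slide's snippet (only the printing code is added; it assumes the TopologyDescription accessors subtopologies(), id(), nodes() and name()):

         // toString() already yields a human-readable dump of the whole topology
         System.out.println(description);

         // ...or walk the sub-topologies and their nodes programmatically
         for (TopologyDescription.Subtopology subtopology : description.subtopologies()) {
             System.out.println("Sub-topology: " + subtopology.id());
             subtopology.nodes().forEach(node -> System.out.println("  node: " + node.name()));
         }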
  5. Kafka Streams 1.0.0
     KIP-130: Expose the state of active tasks at runtime
     Source: https://docs.confluent.io/current/streams/architecture.html
  6. Kafka Streams 1.0.0
     KIP-130: Expose the state of active tasks at runtime
     New method and classes:

         final StreamsBuilder builder = new StreamsBuilder();
         builder.<String, String>stream("my-topic")
                .mapValues(v -> v.toUpperCase())
                .to("my-uppercase-topic");
         final Topology topology = builder.build();

         KafkaStreams kafkaStreams = new KafkaStreams(topology, streamsConfig);
         kafkaStreams.start();

         Set<ThreadMetadata> localStreamsThreads = kafkaStreams.localThreadsMetadata();

     ThreadMetadata has public classes:
     - TaskMetadata (TopicPartition)
     May be useful for debugging (runtime information).
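     Continuing the slide's snippet, the runtime metadata can be dumped once the instance is running. A hedged sketch, assuming the ThreadMetadata accessors threadName(), threadState() and activeTasks() and the TaskMetadata accessors taskId() and topicPartitions():

         for (ThreadMetadata thread : kafkaStreams.localThreadsMetadata()) {
             System.out.println("thread: " + thread.threadName() + ", state: " + thread.threadState());
             for (TaskMetadata task : thread.activeTasks()) {
                 // each active task owns a set of input topic-partitions
                 System.out.println("  task " + task.taskId() + " -> " + task.topicPartitions());
             }
         }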
  7. Kafka Streams 1.0.0
     KIP-160: Enhancements to the print() and writeAsText() methods
     Deprecated methods:
     - org.apache.kafka.streams.kstream.KStream#print()
     - org.apache.kafka.streams.kstream.KStream#writeAsText()
     New method:

         final StreamsBuilder builder = new StreamsBuilder();
         KStream<String, String> stream = builder.<String, String>stream("my-topic")
                                                 .mapValues(v -> v.toUpperCase());

         // Example 1
         stream.print(Printed.toFile("streams-debug.txt"));

         // Example 2
         Printed<String, String> sysOutPrinted = Printed.toSysOut()
             .withLabel("my-uppercase-stream")
             .withKeyValueMapper((k, v) -> String.format("key:%s -> value: %s", k, v));
         stream.print(sysOutPrinted);
  8. Kafka Streams 1.0.0
     KIP-161: streams deserialization exception handlers
     Prior to Kafka 1.0.0, a deserialization error left two options:
     - close the entire streams pipeline, or
     - implement complex error-handling mechanisms using the map, flatMap and branch methods.
     Kafka 1.0.0 ships two implementations of the DeserializationExceptionHandler interface:
     - LogAndContinueExceptionHandler: logs the error and returns CONTINUE
     - LogAndFailExceptionHandler: logs the error and returns FAIL

         // {@code default.deserialization.exception.handler}
         streamsConfig.put(DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                           LogAndFailExceptionHandler.class.getName());
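     Beyond the two built-in handlers, the DeserializationExceptionHandler interface can also be implemented directly. A minimal sketch (the class name and the skip threshold are illustrative, not from the slides) that skips poison records but gives up after too many failures:

         public class CountingExceptionHandler implements DeserializationExceptionHandler {

             private long failures = 0;

             @Override
             public DeserializationHandlerResponse handle(final ProcessorContext context,
                                                          final ConsumerRecord<byte[], byte[]> record,
                                                          final Exception exception) {
                 failures++;
                 // skip the corrupted record, but fail the application after 100 of them
                 return failures <= 100 ? DeserializationHandlerResponse.CONTINUE
                                        : DeserializationHandlerResponse.FAIL;
             }

             @Override
             public void configure(final Map<String, ?> configs) {
                 // no-op: read custom settings here if needed
             }
         }

         // registered the same way as the built-in handlers
         streamsConfig.put(DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                           CountingExceptionHandler.class.getName());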
  9. More KIPs
     Kafka Streams
     - KIP-138: Change punctuate semantics (see the sketch after this slide)
     Kafka Connect
     - KIP-196: Add metrics to Kafka Connect framework
     Core Monitoring
     - KIP-164: Add UnderMinIsrPartitionCount and per-partition UnderMinIsr metrics
     - KIP-187: Add cumulative count metric for all Kafka rate metrics
     - KIP-188: Add new metrics to support health checks
     Core Security
     - KIP-152: Improve diagnostics for SASL authentication failures
     https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
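     For KIP-138, the old Processor#punctuate(long) callback is deprecated in favour of Punctuator callbacks scheduled explicitly against stream time or wall-clock time. A minimal sketch (the processor body is illustrative), assuming the 1.0.0 ProcessorContext#schedule(interval, PunctuationType, Punctuator) signature:

         public class PeriodicCommitProcessor implements Processor<String, String> {

             private ProcessorContext context;

             @Override
             public void init(final ProcessorContext context) {
                 this.context = context;
                 // fire every 10 seconds of wall-clock time, independently of record arrival
                 context.schedule(10_000L, PunctuationType.WALL_CLOCK_TIME,
                                  timestamp -> context.commit());
             }

             @Override
             public void process(final String key, final String value) {
                 // per-record processing would go here
             }

             @Override
             public void punctuate(final long timestamp) { }  // deprecated in 1.0.0, unused

             @Override
             public void close() { }
         }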
  10. KEEP CALM AND STREAM YOUR DATA
      www.meetup.com/fr-FR/Paris-Apache-Kafka-Meetup/
