Yuto Kawamura
LINE / Z Part Team

At LINE we operate Apache Kafka as a company-wide shared data pipeline that services use for storing and distributing data. Kafka underlies many of our services in some way: not only the messaging service but also AD, Blockchain, Pay, Timeline, Cryptocurrency trading, and more. Many services feed data into our cluster, which handles over 250 billion messages per day and 3.5 GB of incoming data per second, making it one of the largest-scale deployments in the world. At the same time, it must be stable and performant at all times, because many important services rely on it as a backend.

In this talk I will give an overview of how Kafka is used at LINE and how we operate it. I will also cover some of the engineering work we did to maximize its performance and to solve problems caused in particular by hosting huge volumes of data from many services, leveraging advanced techniques like kernel-level dynamic tracing.