Building Large Java Codebase with Bazel - CodeOne (Natan Silnitsky)
Continuous integration of a large, interconnected Java codebase can be very challenging.
The traditional solution is to break the code up into small, cohesive repositories and define semantically versioned modules in each one (e.g. using Maven or Gradle) in a manner that won't break APIs. This leads to technical debt and stagnation.
Bazel lets you set versions aside and work purely with source-code dependencies, whether in the local repository or an external one. It can handle very large Java codebases using aggressive caching and a high degree of parallelism.
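As a minimal sketch of what "source-code dependencies" look like in practice (package and target names here are hypothetical), a Bazel BUILD file declares dependencies on sibling packages directly, with no version numbers on internal targets:

```
# BUILD file for a hypothetical "orders" package
java_library(
    name = "orders",
    srcs = glob(["src/main/java/**/*.java"]),
    deps = [
        # source dependency on a sibling package in the same repo
        "//billing:api",
        # external artifact, pinned once repo-wide (e.g. via rules_jvm_external)
        "@maven//:com_google_guava_guava",
    ],
)
```

Because every target is built from source, Bazel can cache each compiled target and rebuild only what a change actually affects.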
Apache Maven - Java Project Management and Build Automation (Tiziano Serritella)
Apache Maven is a project management and build automation tool, used primarily for Java projects, whose goal is to simplify, standardize, and automate the build process of complex systems.
This presentation / guide covers the problems and shortcomings of traditional build automation tools (make and Apache Ant), then shows how to install and configure Maven; the tool's features, goals, and strengths; the lifecycle phases; plugins and goals; dependencies, scopes, and the resolution of any conflicts; repositories; "external" plugins; and multi-module projects.
The presentation is full of practical examples.
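As a small illustration of the dependencies and scopes covered above (the artifact versions here are only examples), a pom.xml fragment might declare:

```xml
<dependencies>
  <!-- compile scope (the default): on the compile and runtime classpaths -->
  <dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>33.0.0-jre</version>
  </dependency>
  <!-- test scope: only on the test classpath, never shipped -->
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```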
Drupal can be considered a CMS that allows rapid creation of portals with fairly standard functionality, but it can also be considered a tool for building sites that leverages the framework capabilities the tool provides.
This talk presents the system's APIs, which enable broad extensibility and fast code writing. It also describes the anatomy of a module, presenting its structure and how it interacts with the base system.
The talk closes with a rundown of the advantages and disadvantages of using Drupal, highlighting how it differs from classic frameworks.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk charts the course of that journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
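The atomicity challenge mentioned above (updating the database and producing a domain event as one unit) is commonly solved with a transactional-outbox style approach. The sketch below is an in-memory illustration of the idea only, not Wix's actual implementation; all names are hypothetical, and the synchronized block stands in for a real DB transaction.

```java
import java.util.ArrayList;
import java.util.List;

// In-memory sketch of the transactional-outbox idea: the entity change and
// the event are committed together, and a relay later publishes the events.
public class OutboxSketch {
    static final List<String> entities = new ArrayList<>();
    static final List<String> outbox = new ArrayList<>();    // written in the same "transaction"
    static final List<String> published = new ArrayList<>(); // simulated Kafka topic

    // Both writes happen in one synchronized block, standing in for one DB transaction,
    // so there is never a state change without its corresponding event.
    static synchronized void updateEntity(String entity, String event) {
        entities.add(entity);
        outbox.add(event);
    }

    // A relay (e.g. a poller or CDC consumer) publishes pending events.
    static void relay() {
        published.addAll(outbox);
        outbox.clear();
    }

    public static void main(String[] args) {
        updateEntity("order-1:PAID", "OrderPaid(order-1)");
        relay();
        System.out.println(published); // [OrderPaid(order-1)]
    }
}
```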
Effective Strategies for Wix's Scaling Challenges - GeeCon (Natan Silnitsky)
This session unveils the multifaceted horizontal scaling strategies that power Wix's robust infrastructure. From Kafka consumer scaling and dynamic traffic routing for site segments to DynamoDB sharding and custom MySQL cluster routing with ProxySQL, we dissect the mechanisms that ensure scalability and performance at Wix.
Attendees will learn about the art of sharding and routing key selection across different systems, and how to apply these strategies to their own infrastructure. We'll share insights into choosing the right scaling strategy for various scenarios, balancing between managed services and custom solutions.
Key Takeaways:
- Grasp various sharding techniques and routing strategies used at Wix.
- Understand key considerations for sharding key and routing rule selection.
- Learn when and why to choose specific horizontal scaling strategies.
- Gain practical knowledge for applying these strategies to achieve scalability and high availability.
Join us to gain a blueprint for scaling your systems horizontally, drawing from Wix's proven practices.
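The common thread in the takeaways above is deterministic routing from a key to a shard. A minimal sketch of the property (the hashing choice and names are illustrative, not Wix's actual scheme):

```java
// Deterministic key-to-shard routing: the same key always lands on the same
// shard, which is the property sharded DBs and Kafka partitioners rely on.
public class ShardRouter {
    public static int shardFor(String key, int shardCount) {
        // Mask off the sign bit so the modulo result is non-negative.
        return (key.hashCode() & 0x7fffffff) % shardCount;
    }

    public static void main(String[] args) {
        int s1 = shardFor("site-1234", 8);
        int s2 = shardFor("site-1234", 8);
        System.out.println(s1 == s2); // true: routing is stable for a given key
    }
}
```

Note that changing `shardCount` remaps most keys under this naive scheme, which is one reason systems that expect to reshard often reach for consistent hashing instead.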
Discover how Wix transitioned from complex event sourcing and CQRS to streamlined CRUD services, optimizing their vast platform for better scalability, performance, and resiliency.
Wix's platform, designed to accommodate diverse business needs, boasts:
* 3.5 Billion daily HTTP transactions
* 70 Billion Kafka messages per day
* Roughly 4000 microservices in production
This session will highlight the simplification of Wix's architecture through domain events, resilient Kafka messaging, and advanced techniques like materialization and caching. By standardizing APIs and employing tools like protobuf and gRPC, Wix has enhanced the developer experience, both internally and externally, and fostered an open, integrative platform.
Attendees will gain insights into Wix's strategies for microservice coordination, ensuring system resilience and data consistency, as well as query performance optimization through innovative 2-level caching solutions.
Workflow Engines & Event Streaming Brokers - Can they work together? [Current... (Natan Silnitsky)
Workflow engines and event streaming brokers offer very different solutions to the same requirement - an optimal implementation of microservices communication.
At Wix, we have had good experience with event-driven architecture for our 2500 microservices using Apache Kafka. Apache Kafka provides:
* Support for very high throughput
* Fault tolerance
* Very loose coupling
* A huge connector ecosystem
Temporal workflow orchestration has interesting features:
* Support for long-running tasks
* Visual tracking of business flows
* Easy-to-follow imperative-style programming
In this talk we will learn about the tradeoffs between the two technologies and how to implement various use cases in each architecture, including those that need a little more work.
DevSum - Lessons Learned from 2000 Microservices (Natan Silnitsky)
Wix has a huge scale of event-driven traffic: more than 70 billion Kafka business events per day.
Over the past few years Wix has made a gradual transition to an event-driven architecture for its 2000 microservices.
We have made mistakes along the way, but have improved and learned a lot about how to keep our production systems maintainable, performant, and resilient.
In this talk you will hear about the lessons we learned, including:
1. The importance of atomic operations for databases and events
2. Avoiding data consistency issues due to out-of-order and duplicate processing
3. Having essential event-debugging and quick-fix tools in production
and a few more.
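Lesson 2 above (out-of-order and duplicate processing) is typically addressed by making consumers idempotent, e.g. tracking a per-key version and discarding stale or repeated updates. A hypothetical in-memory sketch of that idea:

```java
import java.util.HashMap;
import java.util.Map;

// Idempotent consumer sketch: apply an update only if its version is newer
// than the last one seen for that key, so duplicates and stale events are dropped.
public class IdempotentConsumer {
    private final Map<String, Long> lastSeenVersion = new HashMap<>();

    public boolean apply(String key, long version) {
        Long last = lastSeenVersion.get(key);
        if (last != null && version <= last) {
            return false; // duplicate or out-of-order event: ignore it
        }
        lastSeenVersion.put(key, version);
        return true; // event applied
    }

    public static void main(String[] args) {
        IdempotentConsumer c = new IdempotentConsumer();
        System.out.println(c.apply("order-1", 1)); // true
        System.out.println(c.apply("order-1", 1)); // false (duplicate)
        System.out.println(c.apply("order-1", 3)); // true
        System.out.println(c.apply("order-1", 2)); // false (out-of-order)
    }
}
```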
GeeCon - Lessons Learned from 2000 Microservices (Natan Silnitsky)
Wix has a huge scale of event-driven traffic: more than 70 billion Kafka business events per day.
Over the past few years Wix has made a gradual transition to an event-driven architecture for its 2000 microservices.
We have made mistakes along the way, but have improved and learned a lot about how to keep our production systems maintainable, performant, and resilient.
In this talk you will hear about the lessons we learned, including:
1. The importance of atomic operations for databases and events
2. Avoiding data consistency issues due to out-of-order and duplicate processing
3. Having essential event-debugging and quick-fix tools in production
and a few more.
Migrating to Multi-Cluster Managed Kafka - ApacheKafkaIL (Natan Silnitsky)
As Wix's Kafka usage grew to 2.5B messages per day, more than 20K topics, and more than 100K leader partitions serving 2000 microservices,
we decided to migrate from a self-operated single cluster per data center to a managed cloud service (like Amazon MSK or Confluent Cloud) with a multi-cluster setup.
The classic approach would be to perform this transition once all incoming traffic has been removed from the data center.
But draining an entire data center for an undetermined period of time, until all 2000 services complete the switch, was too risky for us.
This talk is about how we gradually migrated all of our Kafka consumers and producers with zero downtime while they continued to handle regular traffic. You will learn practical steps you can take to greatly reduce the risks and speed up the migration timeline.
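One way to picture a gradual, zero-downtime switch (purely illustrative, not Wix's actual mechanism) is a per-service flag that routes each service to the old or new cluster, so services can move one at a time while traffic keeps flowing:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-service gradual cutover: each service is flipped independently
// from the old cluster to the new one, with no global data-center drain required.
public class ClusterSwitch {
    private final Map<String, Boolean> migrated = new HashMap<>();

    public String clusterFor(String service) {
        return migrated.getOrDefault(service, false)
                ? "managed-multi-cluster"
                : "self-hosted";
    }

    public void migrate(String service) {
        migrated.put(service, true);
    }

    public static void main(String[] args) {
        ClusterSwitch cs = new ClusterSwitch();
        System.out.println(cs.clusterFor("orders"));  // self-hosted
        cs.migrate("orders");
        System.out.println(cs.clusterFor("orders"));  // managed-multi-cluster
        System.out.println(cs.clusterFor("billing")); // self-hosted: unaffected
    }
}
```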
BuildStuff - Lessons Learned from 2000 Event-Driven Microservices (Natan Silnitsky)
Wix has a huge scale of event-driven traffic: more than 70 billion Kafka business events per day.
Over the past few years Wix has made a gradual transition to an event-driven architecture for its 2000 microservices.
We have made mistakes along the way, but have improved and learned a lot about how to keep our production systems maintainable, performant, and resilient.
In this talk you will hear about the lessons we learned, including:
1. The importance of atomic operations for databases and events
2. Avoiding data consistency issues due to out-of-order and duplicate processing
3. Having essential event-debugging and quick-fix tools in production
and a few more.
Lessons Learned from 2000 Event-Driven Microservices - Reversim (Natan Silnitsky)
Wix has a huge scale of event-driven traffic: more than 70 billion Kafka business events per day.
Over the past few years Wix has made a gradual transition to an event-driven architecture for its 2000 microservices.
We have made mistakes along the way, but have improved and learned a lot about how to keep our production systems maintainable, performant, and resilient.
In this talk you will hear about the lessons we learned, including:
1. The importance of atomic operations for databases and events
2. Avoiding data consistency issues due to out-of-order and duplicate processing
3. Having essential event-debugging and quick-fix tools in production
and a few more.
Devoxx Ukraine - Kafka-based Global Data Mesh (Natan Silnitsky)
As your organization rapidly grows in scale, so does the number of challenges.
Growing scale comes in multiple dimensions: traffic, geographic presence, product portfolio, technology variety, number of developers, and more.
Coming up with an architecture that can handle all of the data flows in a universal, simple way is key.
This talk is about Wix's Kafka-based global data architecture and platform,
and how we made it very easy for Wix's 2000 microservices to publish and subscribe to data, no matter where in the world they are deployed or what technology stack they use.
All while offering various SDKs (some of them open-source), tools, and features for adapting to growing scale and ensuring high resilience.
Devoxx UK - Migrating to Multi-Cluster Managed Kafka (Natan Silnitsky)
As Wix's Kafka usage grew to 2.5B messages per day, more than 20K topics, and more than 100K leader partitions serving 2000 microservices,
we decided to migrate from a self-operated single cluster per data center to a managed cloud service (like Amazon MSK or Confluent Cloud) with a multi-cluster setup.
The classic approach would be to perform this transition once all incoming traffic has been removed from the data center.
But draining an entire data center for an undetermined period of time, until all 2000 services complete the switch, was too risky for us.
This talk is about how we gradually migrated all of our Kafka consumers and producers with zero downtime while they continued to handle regular traffic. You will learn practical steps you can take to greatly reduce the risks and speed up the migration timeline.
Dev Days Europe - Kafka-based Global Data Mesh at Wix (Natan Silnitsky)
As your organization rapidly grows in scale, so does the number of challenges.
Growing scale comes in multiple dimensions: traffic, geographic presence, product portfolio, technology variety, number of developers, and more.
Coming up with an architecture that can handle all of the data flows in a universal, simple way is key.
This talk is about Wix's Kafka-based global data architecture and platform,
and how we made it very easy for Wix's 2000 microservices to publish and subscribe to data, no matter where in the world they are deployed or what technology stack they use.
All while offering various SDKs (some of them open-source), tools, and features for adapting to growing scale and ensuring high resilience.
Kafka Summit London - Kafka-based Global Data Mesh at Wix (Natan Silnitsky)
As your organization rapidly grows in scale, so does the number of challenges.
Growing scale comes in multiple dimensions: traffic, geographic presence, product portfolio, technology variety, number of developers, and more.
Coming up with an architecture that can handle all of the data flows in a universal, simple way is key.
This talk is about Wix's Kafka-based global data architecture and platform,
and how we made it very easy for Wix's 2000 microservices to publish and subscribe to data, no matter where in the world they are deployed or what technology stack they use.
All while offering various SDKs (some of them open-source), tools, and features for adapting to growing scale and ensuring high resilience.
Migrating to Multi-Cluster Managed Kafka - Conf42 CloudNative (Natan Silnitsky)
As Wix's Kafka usage grew to 2.5B messages per day, more than 20K topics, and more than 100K leader partitions serving 2000 microservices,
we decided to migrate from a self-operated single cluster per data center to a managed cloud service (like Amazon MSK or Confluent Cloud) with a multi-cluster setup.
The classic approach would be to perform this transition once all incoming traffic has been removed from the data center.
But draining an entire data center for an undetermined period of time, until all 2000 services complete the switch, was too risky for us.
This talk is about how we gradually migrated all of our Kafka consumers and producers with zero downtime while they continued to handle regular traffic. You will learn practical steps you can take to greatly reduce the risks and speed up the migration timeline.
5 Takeaways from Migrating a Library to Scala 3 - Scala Love (Natan Silnitsky)
Scala 3 will make Scala easier to write, and especially to read, with more powerful features like enums and fewer misleading keywords like implicit.
But first we need to migrate our old Scala 2.12 / 2.13 codebase to Scala 3.
This talk tells the story of how I tried to migrate the Greyhound open-source library to Scala 3, with partial success.
You will hear about what works, what doesn't, and a few pitfalls to avoid.
Migration takeaways include:
1. Use migration tools; don't do it manually
2. Which popular 3rd-party libraries can and can't be used from Scala 3 code
and many more.
Migrating to Multi-Cluster Managed Kafka - DevopStars 2022 (Natan Silnitsky)
As Wix's Kafka usage grew to 2.5B messages per day, more than 20K topics, and more than 100K leader partitions serving 2000 microservices,
we decided to migrate from a self-operated single cluster per data center to a managed cloud service (like Amazon MSK or Confluent Cloud) with a multi-cluster setup.
The classic approach would be to perform this transition once all incoming traffic has been removed from the data center.
But draining an entire data center for an undetermined period of time, until all 2000 services complete the switch, was too risky for us.
This talk is about how we gradually migrated all of our Kafka consumers and producers with zero downtime while they continued to handle regular traffic. You will learn practical steps you can take to greatly reduce the risks and speed up the migration timeline.
Open sourcing a successful internal project - Reversim 2021 (Natan Silnitsky)
About a year ago, the Data Streams team at Wix released its Kafka client SDK wrapper, Greyhound, to open source.
Greyhound offers rich functionality like message-processing parallelization and batching, various fault-tolerant retry policies, and much more.
This talk shows how the team designed Greyhound with a layered architecture to allow both public and private parts, as well as different levels of flexible configuration.
It also covers how the team automatically syncs only the relevant code from the private repo to the public one, and how it securely accepts public PRs back into the private repo.
Outline:
* A quick intro on what Greyhound is and its history at Wix
* Greyhound's layered architecture, designed to allow both public and private parts and different levels of flexible configuration
* How it automatically syncs only relevant code from the private repo to the public one using the Copybara tool
* How it securely accepts public PRs back into the private repo
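Greyhound's retry policies are richer than this (and its API differs), but the basic shape of a fault-tolerant retry with exponential backoff can be sketched as follows; all names here are illustrative:

```java
import java.util.function.Supplier;

// Minimal exponential-backoff retry sketch: retry a failing operation a bounded
// number of times, doubling the delay between attempts.
public class RetrySketch {
    public static <T> T retry(Supplier<T> op, int maxAttempts, long baseDelayMs)
            throws InterruptedException {
        long delay = baseDelayMs;
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // exponential backoff
                }
            }
        }
        throw last; // all attempts exhausted
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        String result = retry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient failure");
            return "ok";
        }, 5, 1);
        System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```

Production consumers (Greyhound included) often prefer non-blocking retries, e.g. re-publishing failed messages to dedicated retry topics rather than sleeping in the consumer thread.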
How to successfully manage a ZIO fiber's lifecycle - Functional Scala 2021 (Natan Silnitsky)
Fibers are the backbone of ZIO's highly performant, asynchronous, and concurrent abilities. They are lightweight "green threads" implemented by the ZIO runtime system.
In this lightning talk you will learn about:
* How to handle a fiber dying due to an unexpected failure
* How to guarantee a ZIO fiber is interrupted
* How to set up fiber tracking and execute a fiberDump for increased debuggability
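The ZIO specifics belong to the talk, but the "guaranteed cleanup on interruption" idea has a plain-Java analogue: a try/finally around an interruptible task, so the finalizer runs whether the task completes or is cancelled. The sketch below is that analogue only, not ZIO code; names are illustrative.

```java
import java.util.concurrent.*;

// Java analogue of guaranteed finalization: the finally block runs whether the
// task finishes or is interrupted, loosely mirroring ZIO's `ensuring` on a fiber.
public class InterruptSketch {
    static volatile boolean cleanedUp = false;

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CountDownLatch started = new CountDownLatch(1);
        Future<?> task = pool.submit(() -> {
            try {
                started.countDown();
                Thread.sleep(60_000); // long-running, interruptible work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status
            } finally {
                cleanedUp = true; // guaranteed finalizer
            }
        });
        started.await();   // make sure the task is running before cancelling it
        task.cancel(true); // request interruption, like interrupting a fiber
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("cleaned up: " + cleanedUp); // cleaned up: true
    }
}
```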
Advanced Caching Patterns used by 2000 microservices - Code Motion (Natan Silnitsky)
Wix has a huge scale of traffic: more than 500 billion HTTP requests and more than 1.5 billion Kafka business events per day. This talk goes through 3 caching patterns that are used by Wix's 2000 microservices in order to provide the best experience for Wix users, while saving costs and increasing availability.
A cache reduces latency by avoiding a costly query to a DB, an HTTP request to another Wix service, or a call to a 3rd-party service. It reduces the scale needed to serve these costly requests. It also improves reliability, by making sure some data can be returned even if the aforementioned DB or 3rd-party service is currently unavailable.
The patterns include:
* Configuration data cache - persisted locally or to S3
* HTTP reverse-proxy caching - using Varnish Cache
* (Dynamo)DB + CDC based cache - for unlimited capacity, with a continuously updating LRU cache on top
Each pattern is optimal for different use cases, but all reduce costs and improve performance and resilience.
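The "LRU cache on top" in the last pattern can be sketched with java.util.LinkedHashMap in access-order mode, which evicts the least-recently-used entry once capacity is exceeded (the capacity and keys here are only illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: LinkedHashMap in access-order mode evicts the
// least-recently-used entry when the capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true enables LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict once over capacity
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a" so "b" becomes least recently used
        cache.put("c", "3"); // evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

In the DB+CDC pattern, a change-data-capture feed would keep such an in-process cache continuously updated, while the backing store provides the "unlimited capacity" tier.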
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Effective Strategies for Wix's Scaling challenges - GeeConNatan Silnitsky
This session unveils the multifaceted horizontal scaling strategies that power Wix's robust infrastructure. From Kafka consumer scaling and dynamic traffic routing for site segments to DynamoDB sharding and MySQL clusters custom routing with ProxySQL, we dissect the mechanisms that ensure scalability and performance at Wix.
Attendees will learn about the art of sharding and routing key selection across different systems, and how to apply these strategies to their own infrastructure. We'll share insights into choosing the right scaling strategy for various scenarios, balancing between managed services and custom solutions.
Key Takeaways:
- Grasp various sharding techniques and routing strategies used at Wix.
- Understand key considerations for sharding key and routing rule selection.
- Learn when and why to choose specific horizontal scaling strategies.
- Gain practical knowledge for applying these strategies to achieve scalability and high availability.
Join us to gain a blueprint for scaling your systems horizontally, drawing from Wix's proven practices.
Discover how Wix transitioned from complex event sourcing and CQRS to streamlined CRUD services, optimizing their vast platform for better scalability, performance, and resiliency.
Wix's platform, designed to accommodate diverse business needs, boasts:
* 3.5 Billion daily HTTP transactions
* 70 Billion Kafka messages per day
* Roughly 4000 microservices in production
This session will highlight the simplification of Wix's architecture through domain events, resilient Kafka messaging, and advanced techniques like materialization and caching. By standardizing APIs and employing tools like protobuf and gRPC, Wix has enhanced the developer experience, both internally and externally, and fostered an open, integrative platform.
Attendees will gain insights into Wix's strategies for microservice coordination, ensuring system resilience and data consistency, as well as query performance optimization through innovative 2-level caching solutions.
Workflow Engines & Event Streaming Brokers - Can they work together? [Current...Natan Silnitsky
Workflow engines and event streaming brokers offer very different solutions to the same requirement - an optimal implementation of microservices communication.
At Wix, we have a good experience with event-driven architecture for our 2500 microservices using Apache Kafka. Apache Kafka provides:
* support for very high throughput
* Fault tolerance
* very loose coupling
* Huge connectors eco-system
Temporal workflow orchestration has interesting features:
* Support for long running tasks
* business flows visual tracking
* Easy to follow imperative style programming
In this talk we will learn about the tradeoffs between the two technologies and how to implement various use cases in each architecture, including those that need a little more work.
DevSum - Lessons Learned from 2000 microservicesNatan Silnitsky
Wix has a huge scale of event driven traffic. More than 70 billion Kafka business events per day.
Over the past few years Wix has made a gradual transition to an event-driven architecture for its 2000 microservices.
We have made mistakes along the way but have improved and learned a lot about how to make sure our production is still maintainable, performant and resilient.
In this talk you will hear about the lessons we learned including:
1. The importance of atomic operations for databases and events
2. avoiding data consistency issues due to out-of-order and duplicate processing
3. Having essential events debugging and quick-fix tools in production
and a few more
GeeCon - Lessons Learned from 2000 microservicesNatan Silnitsky
Wix has a huge scale of event driven traffic. More than 70 billion Kafka business events per day.
Over the past few years Wix has made a gradual transition to an event-driven architecture for its 2000 microservices.
We have made mistakes along the way but have improved and learned a lot about how to make sure our production is still maintainable, performant and resilient.
In this talk you will hear about the lessons we learned including:
1. The importance of atomic operations for databases and events
2. avoiding data consistency issues due to out-of-order and duplicate processing
3. Having essential events debugging and quick-fix tools in production
and a few more
Migrating to Multi Cluster Managed Kafka - ApacheKafkaILNatan Silnitsky
As Wix Kafka usage grew to 2.5B messages per day, >20K topics and >100K leader partitions serving 2000 microservices,
we decided to migrate from self-operated single cluster per data-center to a managed cloud service (Like Amazon MSK or Confluent Cloud) with a multi-cluster setup.
The classic approach would be to perform this transition when all incoming traffic is removed from the data center.
But draining an entire data-center for an undetermined period of time, until all 2000 services complete the switch was too risky for us.
This talk is about how we gradually migrated all of our Kafka consumers and producers with 0 downtime while they continued to handle regular traffic. You will learn practical steps you can take to greatly reduce the risks and speed up the migration timeline.
Wix has a huge scale of event driven traffic. More than 70 billion Kafka business events per day.
Over the past few years Wix has made a gradual transition to an event-driven architecture for its 2000 microservices.
We have made mistakes along the way but have improved and learned a lot about how to make sure our production is still maintainable, performant and resilient.
In this talk you will hear about the lessons we learned including:
1. The importance of atomic operations for databases and events
2. avoiding data consistency issues due to out-of-order and duplicate processing
3. Having essential events debugging and quick-fix tools in production
and a few more
BuildStuff - Lessons Learned from 2000 Event Driven MicroservicesNatan Silnitsky
Wix has a huge scale of event driven traffic. More than 70 billion Kafka business events per day.
Over the past few years Wix has made a gradual transition to an event-driven architecture for its 2000 microservices.
We have made mistakes along the way but have improved and learned a lot about how to make sure our production is still maintainable, performant and resilient.
In this talk you will hear about the lessons we learned including:
1. The importance of atomic operations for databases and events
2. avoiding data consistency issues due to out-of-order and duplicate processing
3. Having essential events debugging and quick-fix tools in production
and a few more
Lessons Learned from 2000 Event Driven Microservices - ReversimNatan Silnitsky
Wix has a huge scale of event driven traffic. More than 70 billion Kafka business events per day.
Over the past few years Wix has made a gradual transition to an event-driven architecture for its 2000 microservices.
We have made mistakes along the way but have improved and learned a lot about how to make sure our production is still maintainable, performant and resilient.
In this talk you will hear about the lessons we learned including:
1. The importance of atomic operations for databases and events
2. avoiding data consistency issues due to out-of-order and duplicate processing
3. Having essential events debugging and quick-fix tools in production
and a few more
Devoxx Ukraine - Kafka based Global Data MeshNatan Silnitsky
As your organization rapidly grows in scale, so do the amount of challenges.
Growing scale comes in multiple dimensions - traffic, geographic presence, products portfolio, various technologies, amount of developers, etc.
Coming up with an architecture that can handle all of the data flows in a universal, simple way is key.
This talk is about Wix's Kafka based global data architecture and platform.
How we made it very easy for Wix 2000 microservices to publish and subscribe to data, no matter where they are deployed in the world, or what technological stack they use.
All the while offering various SDKs (some of them open-source), tools, and features for adapting to growing scale and insuring high resilience.
Devoxx UK - Migrating to Multi Cluster Managed KafkaNatan Silnitsky
As Wix Kafka usage grew to 2.5B messages per day, >20K topics and >100K leader partitions serving 2000 microservices,
we decided to migrate from self-operated single cluster per data-center to a managed cloud service (Like Amazon MSK or Confluent Cloud) with a multi-cluster setup.
The classic approach would be to perform this transition when all incoming traffic is removed from the data center.
But draining an entire data-center for an undetermined period of time, until all 2000 services complete the switch was too risky for us.
This talk is about how we gradually migrated all of our Kafka consumers and producers with 0 downtime while they continued to handle regular traffic. You will learn practical steps you can take to greatly reduce the risks and speed up the migration timeline.
Dev Days Europe - Kafka based Global Data Mesh at WixNatan Silnitsky
Kafka Summit London - Kafka based Global Data Mesh at WixNatan Silnitsky
Migrating to Multi Cluster Managed Kafka - Conf42 - CloudNative Natan Silnitsky
5 Takeaways from Migrating a Library to Scala 3 - Scala LoveNatan Silnitsky
Scala 3 will make Scala easier to write, and especially to read: more powerful features like enums, and fewer misleading keywords like implicit.
But first we need to migrate our old Scala 2.12 / 2.13 codebase to Scala 3.
This talk tells the story of how I tried to migrate the Greyhound open-source library to Scala 3, with partial success.
You will hear about what works, what doesn't, and a few pitfalls to avoid.
Migration takeaways include:
1. Use migration tools; don't do it manually
2. Which popular 3rd-party libraries can and can't be used by Scala 3 code
and many more
Migrating to Multi Cluster Managed Kafka - DevopStars 2022Natan Silnitsky
Open sourcing a successful internal project - Reversim 2021Natan Silnitsky
About a year ago, the data streams team at Wix released its Kafka client SDK wrapper, Greyhound, to open source.
Greyhound offers rich functionality like message processing parallelization and batching, various fault-tolerant retry policies, and much more.
This talk will show how the team designed Greyhound with a layered architecture that allows both public and private parts, as well as different levels of flexible configuration.
It will also show how the team automatically syncs only the relevant code from the private repo to the public one, and how it securely accepts public PRs back into the private repo.
Outline:
* Quick intro on what Greyhound is and its history at Wix
* Greyhound's layered architecture, designed to allow both public and private parts as well as different levels of flexible configuration
* How it automatically syncs only the relevant code from the private repo to the public one using the Copybara tool
* How it securely accepts public PRs back into the private repo
How to successfully manage a ZIO fiber’s lifecycle - Functional Scala 2021Natan Silnitsky
Fibers are the backbone of the highly performant, asynchronous and concurrent abilities of ZIO. They are lightweight “green threads” implemented by the ZIO runtime system.
In this lightning talk you will learn about:
* How to handle fiber dying due to unexpected failure
* How to guarantee a ZIO fiber is interrupted
* How to set up fiber tracking and execute a fiberDump for increased debuggability
Advanced Caching Patterns used by 2000 microservices - Code MotionNatan Silnitsky
Wix handles traffic at huge scale: more than 500 billion HTTP requests and more than 1.5 billion Kafka business events per day. This talk goes through 3 caching patterns used by Wix's 2000 microservices to provide the best experience for Wix users while saving costs and increasing availability.
A cache reduces latency by avoiding a costly query to a DB, an HTTP request to another Wix service, or a call to a 3rd-party service. It reduces the scale needed to serve these costly requests. It also improves reliability, by making sure some data can be returned even if the aforementioned DB or 3rd-party service is currently unavailable.
The patterns include:
* Configuration Data Cache - persisted locally or to S3
* HTTP Reverse Proxy Caching - using Varnish Cache
* (Dynamo)DB + CDC based cache and more - for unlimited capacity with a continuously updating LRU cache on top
Each pattern is optimal for different use cases, but all of them reduce costs and improve performance and resilience.
2. In Maven, pom files determine the
build units:
group_id:artifact_0
group_id:artifact_1
├── pom.xml
├── m0
│ ├── pom.xml
│ └── src
│ ├── main
│ │ └── scala
│ │ └── com
│ │ └── example
│ │ └── Example.scala
│ └── test
│ └── scala
│ └── com
│ └── example
│ └── ExampleTest.scala
└── m1
├── pom.xml
└── src
└── main
3. In Bazel, you control the level of
granularity of the build.
scala_project
├── WORKSPACE
└── src
└── main
└── scala
└── com
├── example
│ ├── A.scala
│ ├── B.scala
│ └── C.scala
└── example2
├── D.scala
├── E.scala
└── F.scala
4. The unit of
organization is the
package
scala_project
├── WORKSPACE
└── src
└── main
└── scala
└── com
├── example
│ ├── A.scala
│ ├── B.scala
│ └── C.scala
└── example2
├── D.scala
├── E.scala
└── F.scala
Package
5. To define a package
you need to declare a BUILD file in it
scala_project
├── WORKSPACE
└── src
└── main
└── scala
└── com
├── example
│ ├── A.scala
│ ├── B.scala
│ ├── BUILD
│ └── C.scala
└── example2
├── D.scala
├── E.scala
└── F.scala
Package
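A minimal BUILD file for the example package above could look roughly like the following sketch. It assumes the scala_library rule is provided by the rules_scala project under its conventional @io_bazel_rules_scala repository name; the exact load path depends on your workspace setup.

```python
# Hypothetical BUILD file at src/main/scala/com/example/BUILD
# (assumes rules_scala is set up in the WORKSPACE).
load("@io_bazel_rules_scala//scala:scala.bzl", "scala_library")

scala_library(
    name = "example",
    srcs = ["A.scala", "B.scala", "C.scala"],
)
```

Declaring this file is what turns the directory into a Bazel package.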
6. The elements of a
package are called
Targets
scala_project
├── WORKSPACE
└── src
└── main
└── scala
└── com
├── example
│ ├── A.scala
│ ├── B.scala
│ ├── BUILD
│ └── C.scala
└── example2
├── D.scala
├── E.scala
└── F.scala
Target
Targets are of two principal kinds:
files and rules.
7. A target is an instance
of a rule
scala_project
├── WORKSPACE
└── src
└── main
└── scala
└── com
├── example
│ ├── A.scala
│ ├── B.scala
│ ├── BUILD
│ └── C.scala
└── example2
├── D.scala
├── E.scala
└── F.scala
A relationship between a set of input files
and a set of output files
Target
B.scala -> scala_library -> B.jar
Foo.cc + Foo.h -> cc_library -> Foo.dll
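For instance, the B.scala -> scala_library -> B.jar relationship above corresponds to a target sketch like this (again assuming rules_scala is available; the exact output file name depends on the rule implementation):

```python
# Sketch: a scala_library target whose input is B.scala and whose
# output is a compiled jar produced by the rule.
scala_library(
    name = "b",
    srcs = ["B.scala"],
)
```

Running `bazel build` on this target compiles B.scala and places the resulting jar under the bazel-bin output tree.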
8. scala_project
├── WORKSPACE
└── src
├── main
│ └── scala
│ └── com
│ └── example
│ ├── A.scala
│ ├── B.scala
│ ├── BUILD
│ ├── C.scala
│ └── Example.scala
└── test
└── scala
└── com
└── example
├── BUILD
├── ExampleTest.scala
└── ExampleTest2.scala
BUILD target examples:
scala_library(
    name = "a",
    srcs = ["A.scala"],
    deps = [":c"],
)

scala_library(
    name = "c",
    srcs = ["C.scala"],
)

scala_library(
    name = "example_test",
    srcs = glob(["Example*.scala"]),
)
Targets: a, c, example_test
9. All Bazel builds take
place in a workspace.
scala_project
├── WORKSPACE
└── src
├── main
│ └── scala
│ └── com
│ └── example
│ ├── A.scala
│ ├── B.scala
│ ├── BUILD
│ ├── C.scala
│ └── Example.scala
└── test
└── scala
└── com
└── example
├── ExampleTest.scala
A workspace is a directory in the file
system that contains a file named
WORKSPACE.
The file contains references to external
dependencies, or it can be empty.
10. Workspace file example
scala_project
├── WORKSPACE
└── src
├── main
│ └── scala
│ └── com
│ └── example
│ ├── A.scala
│ ├── B.scala
│ ├── BUILD
│ ├── C.scala
│ └── Example.scala
└── test
└── scala
└── com
└── example
├── ExampleTest.scala
maven_server(
    name = "default",
    url = "http://repo.dev.wixpress.com/artifactory/libs-snapshots",
)

maven_jar(
    name = "guava",
    artifact = "com.google.guava:guava:18.0",
)
...
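With the maven_jar workspace rule shown above (since removed from Bazel in favor of newer dependency rules), a BUILD file could then depend on the downloaded jar roughly like this; //jar was the target that maven_jar exposed inside the generated external repository:

```python
# Sketch: consuming the @guava external repository declared in WORKSPACE.
scala_library(
    name = "uses_guava",
    srcs = ["UsesGuava.scala"],  # hypothetical source file
    deps = ["@guava//jar"],
)
```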
11. All build targets
are referenced using labels.
The target labels are relative to the
location of the WORKSPACE file.
//src/main/scala/com/example:a
a
scala_project
├── WORKSPACE
└── src
├── main
│ └── scala
│ └── com
│ └── example
│ ├── A.scala
│ ├── B.scala
│ ├── BUILD
│ ├── C.scala
│ └── Example.scala
└── test
└── scala
└── com
└── example
├── ExampleTest.scala
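To make the label syntax concrete, here is a small illustrative Python sketch (not part of Bazel) that splits an absolute label into its package path and target name, including the shorthand where //some/pkg means //some/pkg:pkg:

```python
def parse_label(label):
    """Split an absolute Bazel-style label into (package, target).

    Illustrative only: real Bazel label parsing handles more cases
    (external repositories, relative labels, validation).
    """
    if not label.startswith("//"):
        raise ValueError("expected an absolute label starting with //")
    body = label[2:]
    if ":" in body:
        package, target = body.split(":", 1)
    else:
        # //some/pkg is shorthand for //some/pkg:<last path segment>
        package, target = body, body.rsplit("/", 1)[-1]
    return package, target

print(parse_label("//src/main/scala/com/example:a"))
# ('src/main/scala/com/example', 'a')
```

So the label //src/main/scala/com/example:a names the target a inside the package at src/main/scala/com/example, relative to the workspace root.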