Apache Avro is a data serialization system that is compact, fast, and provides support for RPC mechanisms and data evolution. It uses JSON schemas and efficient binary encoding. Avro is built for large datasets and supports features like schema validation, dynamic typing, sorting, and protocol definitions for language-independent RPCs.
Thrift vs Protocol Buffers vs Avro - Biased Comparison
Igor Anishchenko
Odessa Java TechTalks
Lohika - May, 2012
Let's take a step back and compare data serialization formats, of which there are plenty. What are the key differences between Apache Thrift, Google Protocol Buffers and Apache Avro? Which is "The Best"? The truth of the matter is that they are all very good and each has its own strong points. Hence, the answer is as much a personal choice as it is an understanding of the historical context for each and a correct identification of your own individual requirements.
If you are building a service-oriented system for scale as well as flexibility, there are a few questions you need to make sure are asked and answered regarding the data interchange between services and the offline persistence of service data. Questions such as:
- How can I change a service API without breaking other services?
- How do I keep data from services consistent over time?
This talk covers the challenges we tackled while building our new service-oriented system, summarizing what we realized would be bad ideas to do and what the better approaches to data consistency are.
It includes a dive into the Apache Avro technology and how we used it.
It also covers the supporting infrastructure we created to help us achieve the goal of a consistent yet flexible system.
JSON has by now become a regular part of most applications and services. Do we, however, really want to transfer human-readable information, or are we looking for a binary protocol that is as debuggable as JSON? CBOR, the Concise Binary Object Representation, offers the best of JSON plus an extremely efficient binary representation.
http://www.cbor.io
Apache Avro and Messaging at Scale in LivePerson - LivePerson
This talk covers the challenges we tackled while building our new service-oriented system: what we realized would be bad ideas to do, what the better approaches to data consistency are, how we used Apache Avro, and what other supporting infrastructure we created to help us achieve the goal of a consistent yet flexible system.
Amihay Zer-Kavod is a Senior Software Architect at LivePerson.
RESTLess Design with Apache Thrift: Experiences from Apache Airavata - smarru
Apache Airavata is software for providing services to manage scientific applications on a wide range of remote computing resources. Airavata can be used by both individual scientists to run scientific workflows as well as communities of scientists through Web browser interfaces. It is a challenge to bring all of Airavata’s capabilities together in the single API layer that is our prerequisite for a 1.0 release. To support our diverse use cases, we have developed a rich data model and messaging format that we need to expose to client developers using many programming languages. We do not believe this is a good match for REST style services. In this presentation, we present our use and evaluation of Apache Thrift as an interface and data model definition tool, its use internally in Airavata, and its use to deliver and distribute client development kits.
Designing Payloads for Event-Driven Systems | Lorna Mitchell, Aiven - Hosted by Confluent
Event-driven systems come in different shapes and sizes, and the rules for payload construction are: there are no rules (but there are guidelines). Flexible payloads are both the best and worst thing about event streaming - you never quite know what to expect from each system's payloads.
Just like when you met your first NoSQL datastore, this sounds like chaos! In this session we will cover strategies for designing the payloads you stream over Kafka. From fields to include, common mistakes to avoid, and what to do when the data structure changes over time, this session has real-world advice and examples that you can apply in your own projects.
We will also look at other aspects, such as when to use a self-contained data format such as JSON or XML, or when a serialization format like Avro is best - and how to handle the schemas. This session is recommended for anyone who wants to design their payloads right first time and have all their applications playing nicely together.
End-to-end Data Governance with Apache Avro and Atlas - DataWorks Summit
Aeolus is Comcast’s new internal Big Data system for providing access to an integrated view of a wide variety of high-quality, near-real-time and batch data. Such integration can enable data scientists to uncover otherwise hidden trends, anomalies, and powerful predictors of business successes and failures. But integrating data across silos in a large enterprise is fraught with peril. There typically are few standards on naming conventions and data representation, and spotty documentation at best. The old rule of thumb often applies: 70% of the analysts’ time goes into data wrangling, while only 30% goes toward the actual analyses and simulations. The goal of the Athene Data Governance Platform within Aeolus is to invert this ratio. This talk will explain how Comcast is using Apache Avro and Atlas for end-to-end data governance, the challenges faced, and methods used to address these challenges.
Avro provides a lingua franca for data representation, data integration, and schema evolution. All data published for community consumption must have an associated avro schema in Atlas. Every step in its journey through Aeolus, in flight or at rest, is captured in Atlas. Atlas’ extensibility has allowed us to add or update various entity types (e.g., avro schemas, kafka topics, object store pseudo-directories) and lineage types (e.g., storing streaming data in object storage; embellishing and re-publishing streaming data; performing aggregations and other transformations on data at rest; and evolution of schemas with compatibility flags). Transformation services notify Atlas of lineage links via custom asynchronous kafka messaging.
Atlas provides self-service data discovery and lineage browsing and querying, via full-text search, DSL query language, or gremlin graph query language. Example queries: “Where is data from kafka topic X stored?” “Display the journey of data currently stored in pseudo-directory X since it entered the Aeolus system”. “Show me all earlier versions of schema S, and whether they are forward/backward compatible with each other.”
Apache Arrow Workshop at VLDB 2019 / BOSS Session - Wes McKinney
Technical deep dive for database system developers in the Arrow columnar format, binary protocol, C++ development platform, and Arrow Flight RPC.
See demo Jupyter notebooks at https://github.com/wesm/vldb-2019-apache-arrow-workshop
Why you should care about data layout in the file system, with Cheng Lian and ... - Databricks
Efficient data access is one of the key factors for having a high performance data processing pipeline. Determining the layout of data values in the filesystem often has fundamental impacts on the performance of data access. In this talk, we will show insights on how data layout affects the performance of data access. We will first explain how modern columnar file formats like Parquet and ORC work and explain how to use them efficiently to store data values. Then, we will present our best practice on how to store datasets, including guidelines on choosing partitioning columns and deciding how to bucket a table.
Efficient Schemas in Motion with Kafka and Schema Registry - Pat Patterson
Apache Avro allows data to be self-describing, but carries an overhead when used with message queues such as Apache Kafka. Confluent’s open source Schema Registry integrates with Kafka to allow Avro schemas to be passed ‘by reference’, minimizing overhead, and can be used with any application that uses Avro. Learn about Schema Registry, using it with Kafka, and leveraging it in your application.
Apache Drill is a new open source Apache Incubator project for interactive analysis of large-scale datasets, inspired by Google's Dremel. It enables users to query terabytes of data in seconds. Apache Drill supports a broad range of data formats, including Protocol Buffers, Avro and JSON, and leverages Hadoop and HBase as data sources. Drill's primary query language, DrQL, is compatible with Google BigQuery. In this talk we provide an overview of the Drill project, including its design goals and architecture.
Presenter: Jason Frantz, Software Architect, MapR Technologies
(Randall Hauch, Confluent) Kafka Summit SF 2018
The Kafka Connect framework makes it easy to move data into and out of Kafka, and you want to write a connector. Where do you start, and what are the most important things to know? This is an advanced talk that will cover important aspects of how the Connect framework works and best practices of designing, developing, testing and packaging connectors so that you and your users will be successful. We’ll review how the Connect framework is evolving, and how you can help develop and improve it.
2. unstructured data
• Databases provide their own internal serialization
– A fast but completely closed system
– Not perfect for unstructured data
• Systems requiring the ability to process arbitrary datasets generated by a wide range of programs need open solutions
– XML – gigantic, glacial to parse
– JSON – big, slow to parse
– >> Something else in the middle? <<
– Binary – small, fast and inflexible
• Size matters!
– 1 gigabyte or 6 maybe doesn't matter
– 1 petabyte or 6 usually matters
$ ls -l order.*
-rw-r--r-- 1 tm tm 152 May 28 02:12 order.bin
-rw-r--r-- 1 tm tm 798 May 28 02:05 order.json
-rw-r--r-- 1 tm tm 958 May 28 02:04 order.xml
3. avro
• Apache Avro is a data serialization system
• What?
– Data Serialization – Apache Avro is first and foremost an excellent means for reading and writing data structures in an open and dynamic way
– Compact – Apache Avro uses an efficient binary data format
– Fast – Apache Avro data can be parsed quickly with low overhead
– RPC Support – Apache Avro supplies RPC mechanisms using the Apache Avro serialization system for platform and language independence
– Reach – Apache Avro supports a wide range of languages (some supported outside of the project itself): C, C++, Java, Python, PHP, C#, Ruby and other externally supported languages
– Flexibility – Apache Avro supports data evolution, allowing data structures to change over time without breaking existing systems
– No IDL – Apache Avro uses schemas to describe data structures; schemas are encoded with the data, eliminating the need for IDL code generation and repetitive field tags embedded with the data
– JSON – Apache Avro schemas are described using JSON
– Hadoop – Apache Avro is built into the Hadoop Framework
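To make the workflow concrete, here is a minimal write/read round trip, a sketch assuming the official Apache Avro Python package (pip install avro); the schema, field names and file name are invented for the example:

import json
import avro.schema
from avro.datafile import DataFileReader, DataFileWriter
from avro.io import DatumReader, DatumWriter

# An illustrative record schema, described in JSON as Avro requires
schema = avro.schema.parse(json.dumps({
    "type": "record",
    "name": "Order",
    "namespace": "com.example",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "item", "type": "string"},
    ],
}))

# Write records to a container file; the schema travels with the data
writer = DataFileWriter(open("orders.avro", "wb"), DatumWriter(), schema)
writer.append({"id": 1, "item": "widget"})
writer.close()

# Read them back; no IDL code generation or generated classes required
reader = DataFileReader(open("orders.avro", "rb"), DatumReader())
for order in reader:
    print(order)
reader.close()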
4. different?
• Built for big data
– Unlike other systems, Apache Avro is particularly focused on features making it efficient for use with large data sets
• Schemas are always present
– The schema is required to parse the data
• Apache Avro leverages dynamic typing, making it possible for dynamic programming languages to generate data types from Apache Avro schemas on the fly
• Completely generic data processing systems can be developed with no preexisting knowledge of the data formats to be processed
• Apache Avro data structures are based on field names, not field IDs
– This makes the selection and consistency of field names important to Apache Avro readers and writers
Schema: the structure of a data system described in a formal language
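Because resolution is by field name, a reader can supply its own, newer schema and still decode older data, with defaults filling any field the writer did not record. A hedged sketch continuing the Python example above (the readers_schema keyword is assumed from that library's DatumReader):

import json
import avro.schema
from avro.datafile import DataFileReader
from avro.io import DatumReader

# A newer reader schema: the added "qty" field has a default, so records
# written without it still decode; matching is by field name, not field ID
reader_schema = avro.schema.parse(json.dumps({
    "type": "record",
    "name": "Order",
    "namespace": "com.example",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "item", "type": "string"},
        {"name": "qty", "type": "int", "default": 1},
    ],
}))

reader = DataFileReader(open("orders.avro", "rb"),
                        DatumReader(readers_schema=reader_schema))
for order in reader:
    print(order)  # e.g. {'id': 1, 'item': 'widget', 'qty': 1}
reader.close()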
5. schematypes
• Primitive types
– null: no value
– boolean: a binary value
– int: 32-bit signed integer
– long: 64-bit signed integer
– float: single precision (32-bit) IEEE 754 floating-point number
– double: double precision (64-bit) IEEE 754 floating-point number
– bytes: sequence of 8-bit unsigned bytes
– string: unicode character sequence
• Complex types
– records – named sets of fields
• Fields are JSON objects with a name, type and optionally a default value
– enums – named set of strings; instances may hold any one of the string values
– fixed – a named fixed-size set of bytes
– arrays – a collection of a single type
– maps – a string-keyed set of key/value pairs
– unions – an array of types; instances may assume any of the types
Records, enums and fixed types have fullnames. Fullnames contain two parts:
• Namespace (e.g. com.example)
• Name (e.g. Order)
Namespaces are dot-separated sequences and case sensitive. Elements referenced by undotted name alone inherit the most closely enclosing namespace. Fullnames must be unique and defined before use.
Avro Java strings are: org.apache.avro.util.Utf8
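As an illustration of the complex types and fullnames, here is a schema sketch that exercises most of them (all names are invented for the example), again using the Python package to validate it:

import json
import avro.schema

order_schema = avro.schema.parse(json.dumps({
    "type": "record",                    # record: a named set of fields
    "name": "Order",
    "namespace": "com.example",          # fullname: com.example.Order
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "status",               # enum: a named set of strings
         "type": {"type": "enum", "name": "Status",
                  "symbols": ["NEW", "PAID", "SHIPPED"]}},
        {"name": "checksum",             # fixed: named fixed-size bytes
         "type": {"type": "fixed", "name": "MD5", "size": 16}},
        {"name": "items",                # array: collection of one type
         "type": {"type": "array", "items": "string"}},
        {"name": "attributes",           # map: string-keyed key/value pairs
         "type": {"type": "map", "values": "string"}},
        {"name": "note",                 # union: any one of the listed types
         "type": ["null", "string"], "default": None},
    ],
}))
print(order_schema.fullname)  # com.example.Order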
6. encoding
• Apache Avro supports Binary and JSON encoding schemes
• Binary encoding:
– null: 0 bytes
– boolean: 1 byte
– int & long: varint/zigzag
– float: 4 bytes
– double: 8 bytes
– bytes: a long-encoded length followed by the bytes
– string: a long-encoded length (in bytes) followed by UTF-8 characters
– records: fields encoded in the order declared
– enums: int encoded as the 0-based position of the value in the enum array
– fixed: a fixed-size set of bytes
– arrays: a set of blocks
• A block is typically a long-encoded count followed by that many items
• Small arrays are represented with a single block
• Large data sets can be broken up into multiple blocks such that each block can be buffered in memory
• The final block has a count of 0 and no items
• A negative count flags abs(count) many items but is immediately followed by a long-encoded size in bytes; this allows the reader to skip ahead quickly without decoding individual items (which may be of variable length)
– maps: like arrays but with key/value pairs for items
– unions: a long-encoded 0-based index identifying the type to use, followed by the value
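The int/long wire format is the heart of the encoding, so here is a small self-contained Python sketch of it: zigzag-encode the value so small negative numbers stay small, then emit it as a base-128 varint (expected outputs follow the Avro specification):

def zigzag_encode(n: int) -> int:
    # Map signed to unsigned: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
    return (n << 1) ^ (n >> 63)

def encode_long(n: int) -> bytes:
    # Emit the zigzagged value 7 bits at a time, low bits first;
    # the high bit of each byte signals that more bytes follow
    z = zigzag_encode(n)
    out = bytearray()
    while True:
        byte = z & 0x7F
        z >>= 7
        if z:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

assert encode_long(0) == b"\x00"
assert encode_long(-1) == b"\x01"
assert encode_long(1) == b"\x02"
assert encode_long(64) == b"\x80\x01"  # multi-byte varint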
7. containers
• Apache Avro schemas must always be present
• The Apache Avro system defines a container file used to house a schema and its data
• Files consist of
– A header which contains metadata and a sync marker
– One or more data blocks containing data as defined in the schema
• Metadata
– The header metadata can be any data useful to the file's author
– Apache Avro assigns two metadata values:
• avro.schema – containing the schema in JSON format
• avro.codec – defining the name of the compression codec used on the blocks, if any
– Mandatory codecs are null and deflate; null (no compression) is assumed if the metadata field is absent
– Snappy is another common codec
• Data Blocks
– Data blocks contain a long count of objects in the block, a long size in bytes, the serialized objects (possibly compressed) and the 16-byte sync marker
– The block prefix and sync marker allow data to be efficiently skipped during HDFS mapred splits and other similar processing
• Corrupt blocks can also be detected
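In the Python package, the codec is chosen when the container file is written; a sketch (the codec keyword argument is assumed from that library's DataFileWriter, and "deflate" is one of the two mandatory codecs):

import json
import avro.schema
from avro.datafile import DataFileWriter
from avro.io import DatumWriter

schema = avro.schema.parse(json.dumps({
    "type": "record", "name": "Order", "namespace": "com.example",
    "fields": [{"name": "id", "type": "long"}],
}))

# codec="deflate" is recorded in the header as the avro.codec metadata
# value, and every data block in the file is deflate-compressed
writer = DataFileWriter(open("orders-deflate.avro", "wb"), DatumWriter(),
                        schema, codec="deflate")
for i in range(1000):
    writer.append({"id": i})
writer.close()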
8. sortorder
• Apache Avro defines a sort order for data
– This can be a key optimization in large data processing environments
• Data items with identical schemas can be compared
• Record fields can optionally specify one of three sort orders
– ascending – standard sort order
– descending – reverse sort order
– ignore – values are to be ignored when sorting the records
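The sort order is declared per field with an "order" attribute in the schema; an illustrative fragment (field names invented):

# Record fields declaring their sort order in the schema itself
fields = [
    {"name": "timestamp", "type": "long", "order": "descending"},
    {"name": "user", "type": "string"},  # ascending is the default
    {"name": "payload", "type": "bytes", "order": "ignore"},
]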
9. protocols
• Apache Avro defines RPC interfaces using protocol schemas
• Protocols are defined in JSON and contain
– protocol – the name of the protocol (i.e. service)
– namespace – the containing namespace (optional)
– types – the data types defined within the protocol
– messages – the RPC message sequences supported
• Messages define an RPC method
– request – the list of parameters passed
– response – the normal return type
– error – the possible error return types (optional)
– one-way – optional Boolean for request-only messages
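A minimal protocol declaration following that layout might look like this sketch (the service and message names are invented; a one-way message must declare a null response):

# A hypothetical Avro protocol with one request/response message and
# one one-way message, expressed as the JSON structure described above
order_protocol = {
    "protocol": "OrderService",
    "namespace": "com.example",
    "types": [
        {"type": "record", "name": "Order",
         "fields": [{"name": "id", "type": "long"},
                    {"name": "item", "type": "string"}]},
    ],
    "messages": {
        "submit": {
            "request": [{"name": "order", "type": "Order"}],
            "response": "boolean",
        },
        "ping": {"request": [], "response": "null", "one-way": True},
    },
}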
11. handshakes
• Schemas must always be present
• Therefore, when using protocols:
– The client must send the request schema to the server
– The server must send the response and error schema to the client
• Stateless transports require a schema handshake before each request
• Stateful transports can maintain a schema cache, eliminating the schema exchange on successive calls
• Handshakes use a hash of the schema to avoid sending schemas which are already consistent on both sides
{
"type": "record",
"name": "HandshakeRequest", "namespace":"org.apache.avro.ipc",
"fields": [
{"name": "clientHash",
"type": {"type": "fixed", "name": "MD5", "size": 16}},
{"name": "clientProtocol", "type": ["null", "string"]},
{"name": "serverHash", "type": "MD5"},
{"name": "meta", "type": ["null", {"type": "map", "values": "bytes"}]}
]
}
{
"type": "record",
"name": "HandshakeResponse", "namespace": "org.apache.avro.ipc",
"fields": [
{"name": "match",
"type": {"type": "enum", "name": "HandshakeMatch",
"symbols": ["BOTH", "CLIENT", "NONE"]}},
{"name": "serverProtocol",
"type": ["null", "string"]},
{"name": "serverHash",
"type": ["null", {"type": "fixed", "name": "MD5", "size": 16}]},
{"name": "meta",
"type": ["null", {"type": "map", "values": "bytes"}]}
]
}
Apache Avro Handshake Records
15. versions
• The Avro serialization system was originally developed by Doug Cutting for use with Hadoop; it became an Apache Software Foundation project in 2009.
• 0.0.0 Apache Inception 2009-04-01
• 1.0.0 released 2009-07-15
• 1.1.0 released 2009-09-15
• 1.2.0 released 2009-10-15
• 1.3.0 released 2010-02-26
• 1.4.0 released 2010-09-08
• 1.5.0 released 2011-03-11
• 1.6.0 released 2011-11-02
• 1.7.0 released 2012-06-11
• 1.7.6 released 2014-01-22
Open Source
Community Developed
Apache License Version 2.0
16. avro resources
• Web
– avro.apache.org
– github.com/apache/avro
• Mail
– Users: user@avro.apache.org
– Developers: dev@avro.apache.org
• Chat
– #avro
• Book
– White (2012), Hadoop: The Definitive Guide, O'Reilly Media Inc. [http://www.oreilly.com]
– Features 20 pages of Apache Avro coverage
Randy Abernethy
ra@apache.org