
High Performance Avro Schema Versioning in Rocana Ops – Part 1

    
July 14, 2016 Author: Alan Gardner

Version dependency is an important consideration, especially in large distributed systems where it's usually impossible to upgrade all components in unison. This is the first in a series of posts about how we use Avro schema versioning in Rocana Ops to minimize this dependency with minimal performance impact.

In Rocana Ops, all the data points we collect – syslog messages, application metrics, netflows, etc. – are encoded as Apache Avro records and written to Kafka. A key feature of Avro is that it uses a schema: a definition of the field names and types that exist in each record. Our Avro schema is an essential part of the contract between Kafka producers and consumers. Producers are required to publish records to Kafka using our Avro schema, which ensures consumers will be able to deserialize them later when they receive them from Kafka. This requirement extends to Rocana Ops core components as well as third-party integrations that might produce their own records – everything is an open format with an open, predictable schema.
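For readers who haven't worked with Avro before, here's a minimal sketch of what producing a record against a schema looks like with Avro's generic Java API. The event schema and field names are purely illustrative stand-ins, not our actual Rocana Ops event schema:

import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class EventEncodeSketch {

  // A deliberately tiny "event" schema for illustration; the real Rocana Ops
  // schema has many more fields.
  static final Schema EVENT_SCHEMA = new Schema.Parser().parse(
      "{\"type\": \"record\", \"name\": \"Event\", \"fields\": ["
      + "{\"name\": \"ts\", \"type\": \"long\"},"
      + "{\"name\": \"host\", \"type\": \"string\"},"
      + "{\"name\": \"body\", \"type\": \"string\"}]}");

  // Serialize one record into the raw bytes a producer would publish to Kafka.
  static byte[] encode(GenericRecord event) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<>(EVENT_SCHEMA);
    BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
    writer.write(event, encoder);
    encoder.flush();
    return out.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    GenericRecord event = new GenericData.Record(EVENT_SCHEMA);
    event.put("ts", System.currentTimeMillis());
    event.put("host", "web01.example.com");
    event.put("body", "GET /index.html 200");
    System.out.println(encode(event).length + " encoded bytes");
  }
}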

Changing our event schema is relatively rare, but when it does change we always want to guarantee backwards compatibility. We want new Kafka consumers, which know about the new schema, to be able to read any existing events queued up in Kafka. We also want to support a mix of new and old producers, so users aren't required to upgrade every component in lock step.

In general, the process by which schemas change is known as schema evolution. If you aren't familiar with the problem of schema evolution, Martin Kleppmann has a great comparison of how various formats (including Avro) handle decoding old events once the schema has changed. For us, what's relevant is that Avro requires decoders to have the exact schema that was used to encode a record – our consumers have to know which version of the schema every individual record was encoded with.
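To make that requirement concrete, here's a sketch of the consumer side using Avro's generic Java API. Avro's schema resolution can cope with fields that were added or removed between versions, but only if the decoder is handed the writer's schema alongside the reader's. Figuring out which writer schema applies to a given record is exactly the problem described next:

import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DecoderFactory;

public class EventDecodeSketch {

  // Avro resolves differences between the schema the bytes were written with
  // and the schema this consumer was built against, but the decoder must be
  // constructed with the exact schema that produced the bytes.
  static GenericRecord decode(byte[] payload, Schema writerSchema, Schema readerSchema)
      throws IOException {
    GenericDatumReader<GenericRecord> reader =
        new GenericDatumReader<>(writerSchema, readerSchema);
    BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(payload, null);
    return reader.read(null, decoder);
  }
}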

This requirement is hard to satisfy when working with Kafka: multiple records from different producers (potentially using different schema versions) may be interleaved on a single partition. A consumer may receive a record with any version of the schema at any time. Initially we didn't plan to handle schema evolution at all – we just produced records with our "v1" schema directly into Kafka. Inevitably this led to some pain when we wanted to roll out v2: we wouldn't know which records used the old schema and which used the new one.

When the time came to make a change, our solution was to add what we called the "wrapper schema". The idea was simple: when we encode an Avro record, we get a bunch of bytes with no indication of which schema version produced them. The wrapper schema would contain the version, as an integer, plus a byte array containing the encoded event itself – the bytes for the actual record would be wrapped in a second, outer record. We would never need to add or remove fields from the outer record, so we didn't have to worry about schema evolution there. The inner byte array could change as much as we wanted, as long as we changed the version field accordingly.
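A rough sketch of what such a wrapper might look like with Avro's generic Java API is below. The schema and field names here are illustrative, not the exact wrapper schema we ship:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class WrapperEncodeSketch {

  // Hypothetical wrapper schema: a schema version number plus the opaque bytes
  // of the inner, already-encoded event.
  static final Schema WRAPPER_SCHEMA = new Schema.Parser().parse(
      "{\"type\": \"record\", \"name\": \"EventWrapper\", \"fields\": ["
      + "{\"name\": \"version\", \"type\": \"int\"},"
      + "{\"name\": \"event\", \"type\": \"bytes\"}]}");

  // Wrap an encoded inner event and serialize the outer record; these are the
  // bytes that would actually be published to Kafka.
  static byte[] wrap(int schemaVersion, byte[] encodedEvent) throws IOException {
    GenericRecord wrapper = new GenericData.Record(WRAPPER_SCHEMA);
    wrapper.put("version", schemaVersion);
    wrapper.put("event", ByteBuffer.wrap(encodedEvent));

    ByteArrayOutputStream out = new ByteArrayOutputStream();
    GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<>(WRAPPER_SCHEMA);
    BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
    writer.write(wrapper, encoder);
    encoder.flush();
    return out.toByteArray();
  }
}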

This was a pretty straightforward design: we could give anybody the two schemas, and they could encode or decode records using a standard Avro library. We could introduce new versions of the schema without sacrificing backwards compatibility. However, we had a suspicion this approach was really slow. Running two decoders for each event (one for the wrapper and one for the record itself) would use more CPU cycles, and crucially it would produce more garbage. Since we have many consumers in our system, and each consumer has to decode each record separately, every record in Kafka ends up being decoded six or more times. We decided to do some microbenchmarking to confirm our fears and try to find a better way.
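To see where the extra work comes from, here's an assumed sketch of the consumer-side path with the wrapper approach: one decode for the outer record, then a second decode of the inner bytes using the writer schema selected by the version field. The "schema registry" is just a plain map here for illustration:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Map;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DecoderFactory;

public class WrapperDecodeSketch {

  // Same illustrative wrapper schema as on the producer side.
  static final Schema WRAPPER_SCHEMA = new Schema.Parser().parse(
      "{\"type\": \"record\", \"name\": \"EventWrapper\", \"fields\": ["
      + "{\"name\": \"version\", \"type\": \"int\"},"
      + "{\"name\": \"event\", \"type\": \"bytes\"}]}");

  // Two decode passes per Kafka message: one for the wrapper, one for the inner
  // event. Each pass allocates intermediate objects, which is where the extra
  // CPU time and garbage come from.
  static GenericRecord unwrap(byte[] kafkaPayload,
                              Map<Integer, Schema> writerSchemasByVersion,
                              Schema readerSchema) throws IOException {
    GenericDatumReader<GenericRecord> wrapperReader =
        new GenericDatumReader<>(WRAPPER_SCHEMA);
    BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(kafkaPayload, null);
    GenericRecord wrapper = wrapperReader.read(null, decoder);

    int version = (Integer) wrapper.get("version");
    ByteBuffer eventBytes = (ByteBuffer) wrapper.get("event");
    byte[] inner = new byte[eventBytes.remaining()];
    eventBytes.get(inner);

    // Second decode: resolve the inner record from its writer schema (looked up
    // by version) to the schema this consumer was built against.
    GenericDatumReader<GenericRecord> eventReader =
        new GenericDatumReader<>(writerSchemasByVersion.get(version), readerSchema);
    BinaryDecoder innerDecoder = DecoderFactory.get().binaryDecoder(inner, null);
    return eventReader.read(null, innerDecoder);
  }
}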

In the second post of this series, Avro Schema Versioning and Performance – Part 2, we look at microbenchmarking the various schemas using JMH to ensure we get repeatable, realistic results. In the final post, Avro Schema Versioning and Performance – Part 3, we compare two alternate Avro event decoding approaches using the JMH benchmark to determine which delivers the best runtime performance in Rocana Ops.

Learn More...

Learn About Rocana Ops: The Central Nervous System for IT Operations