- We’re sharing Tulip, a binary serialization protocol supporting schema evolution.
- Tulip assists with data schematization by addressing protocol reliability and other issues simultaneously.
- It replaces multiple legacy formats used in Meta's data platform and has achieved significant performance and efficiency gains.
Meta's data platform is made up of numerous heterogeneous services, such as warehouse data storage and various real-time systems, all exchanging large amounts of data among themselves as they communicate via service APIs. As we continue to grow the number of AI- and machine learning (ML)-related workloads in our systems that leverage data for tasks such as training ML models, we're continually working to make our data logging systems more efficient.
Schematization of data plays an important role in a data platform at Meta's scale. These systems are designed with the knowledge that every decision and trade-off can impact the reliability, performance, and efficiency of data processing, as well as our engineers' developer experience.
Making big bets, like changing serialization formats for the entire data infrastructure, is challenging in the short term, but offers greater long-term benefits that help the platform evolve over time.
The challenge of a data platform at exabyte scale
The data analytics logging library is present in the web tier as well as in internal services. It is responsible for logging analytical and operational data via Scribe (Meta's persistent and durable message queuing system). Numerous services read and ingest data from Scribe, including (but not limited to) the data platform Ingestion Service and real-time processing systems such as Puma, Stylus, and XStream. The data analytics reading library correspondingly assists in deserializing data and rehydrating it into a structured payload. While this article will focus only on the logging library, the narrative applies to both.
At the scale at which Meta's data platform operates, thousands of engineers create, update, and delete logging schemas every month. These logging schemas see petabytes of data flowing through them every day over Scribe.
Schematization is important to ensure that any message logged in the present, past, or future, relative to the version of the (de)serializer, can be (de)serialized reliably at any point in time with the highest fidelity and no loss of data. This property is called safe schema evolution via forward and backward compatibility.
This article will focus on the on-wire serialization format chosen to encode data that is finally processed by the data platform. We motivate the evolution of this design, the trade-offs considered, and the resulting improvements. From an efficiency perspective, the new encoding format needs 40 percent to 85 percent fewer bytes and uses 50 percent to 90 percent fewer CPU cycles to (de)serialize data, compared with the previously used serialization formats, namely Hive Text Delimited and JSON serialization.
How we developed Tulip
An overview of the data analytics logging library
The logging library is used by applications written in various languages (such as Hack, C++, Java, Python, and Haskell) to serialize a payload according to a logging schema. Engineers define logging schemas according to business needs. These serialized payloads are written to Scribe for durable delivery.
The logging library itself comes in two flavors:
- Code-generated: In this flavor, statically typed setters for each field are generated for type-safe usage. Additionally, post-processing and serialization code are also code-generated (where applicable) for maximum efficiency. For example, Hack's thrift serializer uses a C++ accelerator, where code generation is partially employed.
- Generic: A C++ library called Tulib (not to be confused with Tulip) is provided to perform (de)serialization of dynamically typed payloads. In this flavor, a dynamically typed message is serialized according to a logging schema. This mode is more flexible than the code-generated mode because it allows (de)serialization of messages without rebuilding and redeploying the application binary. A hypothetical sketch contrasting the two flavors appears after this list.
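To make the contrast concrete, here is a hypothetical sketch; the class, type, and field names are illustrative and do not reflect Tulib's or the code generator's actual APIs. The code-generated flavor exposes statically typed setters, while the generic flavor operates on a dynamically typed message keyed by field name.

```cpp
// Hypothetical illustration only: neither class reflects Tulib's real API.
#include <cstdint>
#include <map>
#include <string>
#include <variant>

// Code-generated flavor: one statically typed setter per field, checked at
// compile time; the serializer itself would also be generated.
class BookLoggerGenerated {
 public:
  void setIsbn(int64_t isbn) { isbn_ = isbn; }
  void setTitle(const std::string& title) { title_ = title; }

 private:
  int64_t isbn_ = 0;
  std::string title_;
};

// Generic flavor: fields are set by name on a dynamically typed message, so a
// schema change does not require rebuilding or redeploying the binary.
using DynamicValue = std::variant<int64_t, double, std::string>;
using DynamicMessage = std::map<std::string, DynamicValue>;

DynamicMessage makeBookMessage() {
  DynamicMessage msg;
  msg["isbn"] = int64_t{9781234567897};
  msg["title"] = std::string{"Tulip"};
  return msg;
}
```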
Legacy serialization format
The logging library writes data to several back-end systems that have historically dictated their own serialization mechanisms. For example, warehouse ingestion uses Hive Text Delimiters during serialization, while other systems use JSON serialization. There are various problems when using one or both of these formats for serializing payloads.
- Standardization: Previously, each downstream system had its own format, and there was no standardization of serialization formats. This increased development and maintenance costs.
- Reliability: The Hive Text Delimited format is positional in nature. To maintain deserialization reliability, new columns can be added only at the end. Any attempt to add fields in the middle or to delete columns will shift all subsequent columns, making the row impossible to deserialize (since a row is not self-describing, unlike in JSON). The updated schema is distributed to readers in real time. A simplified illustration of this positional fragility appears after this list.
- Efficiency: Both the Hive Text Delimited and JSON protocols are text-based and inefficient compared with binary (de)serialization.
- Correctness: Text-based protocols such as Hive Text require escaping and unescaping of control characters, field delimiters, and line delimiters. This is done by every writer/reader and puts an additional burden on library authors. It is challenging to deal with legacy or buggy implementations that only check for the presence of such characters and reject the entire message instead of escaping the problematic characters.
- Forward and backward compatibility: It is desirable for consumers to be able to consume payloads that were serialized with a serialization schema both before and after the version that the consumer sees. The Hive Text Protocol does not provide this guarantee.
- Metadata: Hive Text Serialization does not trivially permit the addition of metadata to the payload. Propagation of metadata to downstream systems is critical for implementing features that benefit from its presence. For example, certain debugging workflows benefit from having a hostname or a checksum transferred along with the serialized payload.
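To illustrate the reliability problem, the following self-contained snippet (not Meta's actual reader code) shows how a purely positional, delimited row is misread once a column is added in the middle. Hive Text uses the Ctrl-A (\x01) control character as its default field delimiter.

```cpp
// Simplified illustration of positional fragility in a delimited format.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Split a row on the \x01 field delimiter.
static std::vector<std::string> splitRow(const std::string& row) {
  std::vector<std::string> cols;
  std::stringstream ss(row);
  std::string col;
  while (std::getline(ss, col, '\x01')) {
    cols.push_back(col);
  }
  return cols;
}

int main() {
  // Reader assumes the original column order: isbn, title, price.
  std::string oldRow = "9781234567897\x01Tulip\x01" "42";
  // Writer later inserts "authors" after "isbn": isbn, authors, title, price.
  std::string newRow = "9781234567897\x01Meta\x01Tulip\x01" "42";

  // Old row: column 1 really is the title.
  std::cout << "title (old writer): " << splitRow(oldRow)[1] << '\n';  // Tulip
  // New row: the same positional reader now reports the author as the title.
  std::cout << "title (new writer): " << splitRow(newRow)[1] << '\n';  // Meta
  return 0;
}
```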
The fundamental problem that Tulip solved is the reliability concern, by ensuring a safe schema evolution format with forward and backward compatibility across services that have their own deployment schedules.
One could have imagined solving the other problems independently by pursuing a different strategy, but the fact that Tulip was able to solve all of them at once made it a much more compelling investment than the alternatives.
Tulip serialization
The Tulip serialization protocol is a binary serialization protocol that uses Thrift's TCompactProtocol for serializing a payload. It follows the same rules for numbering fields with IDs as one would expect an engineer to use when updating IDs in a Thrift struct.
When engineers author a logging schema, they specify a list of field names and types. Field IDs are not specified by engineers, but are instead assigned by the data platform management module.
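As a rough sketch of what this looks like on the wire, the snippet below writes a payload with the open-source Apache Thrift TCompactProtocol API, using field IDs that would come from the serialization schema. The "Book" struct, its fields, and the IDs are hypothetical, and Meta's internal library is not shown here.

```cpp
// Sketch: serialize a dynamically typed payload with explicit field IDs,
// using the open-source Apache Thrift C++ API (illustrative names only).
#include <thrift/protocol/TCompactProtocol.h>
#include <thrift/transport/TBufferTransports.h>

#include <memory>
#include <string>

using apache::thrift::protocol::TCompactProtocol;
using apache::thrift::protocol::T_I64;
using apache::thrift::protocol::T_STRING;
using apache::thrift::transport::TMemoryBuffer;

std::string serializeBookRecord(int64_t isbn, const std::string& title) {
  auto buffer = std::make_shared<TMemoryBuffer>();
  TCompactProtocol proto(buffer);

  proto.writeStructBegin("Book");

  // Field IDs come from the serialization schema, not from the engineer.
  proto.writeFieldBegin("isbn", T_I64, /*fieldId=*/1);
  proto.writeI64(isbn);
  proto.writeFieldEnd();

  proto.writeFieldBegin("title", T_STRING, /*fieldId=*/2);
  proto.writeString(title);
  proto.writeFieldEnd();

  proto.writeFieldStop();
  proto.writeStructEnd();

  return buffer->getBufferAsString();
}
```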

This figure shows the user-facing workflow when an engineer creates or updates a logging schema. Once validation succeeds, the changes to the logging schema are published to various systems in the data platform.
The logging schema is translated into a serialization schema and stored in the serialization schema repository. A serialization config holds lists of (field name, field type, field ID) for a corresponding logging schema, as well as the field history. A transactional operation is performed on the serialization schema when an engineer wishes to update a logging schema.
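A minimal sketch, under assumed type and field names, of the kind of information such a serialization config could hold; the exact representation inside Meta's data platform is not shown here.

```cpp
// Hypothetical representation of a serialization config: every
// (name, type, ID) triple ever assigned, including fields that have since
// been deleted or retyped, so old payloads always remain decodable.
#include <cstdint>
#include <string>
#include <vector>

enum class FieldType { I64, STRING, DOUBLE /* ... */ };

struct FieldEntry {
  std::string name;  // field name from the logging schema
  FieldType type;    // field type at the time the ID was assigned
  int16_t id;        // Thrift field ID, assigned once and never reused
  bool deleted;      // retained for history even after deletion
};

struct SerializationConfig {
  std::string loggingSchemaName;
  std::vector<FieldEntry> fields;  // full field history, append-only
};
```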

The figure above shows the creation and subsequent updates of a logging schema and their impact on the serialization schema over time.
- Field addition: When a new field named "authors" is added to the logging schema, a new ID is assigned in the serialization schema.
- Field type change: Similarly, when the type of the field "isbn" is changed from "i64" to "string", a new ID is associated with the new field, but the ID of the original "i64"-typed "isbn" field is retained in the serialization schema. When the underlying data store does not allow field type changes, the logging library disallows this change.
- Field deletion: IDs are never removed from the serialization schema, allowing full backward compatibility with already serialized payloads. Fields in the serialization schema of a logging schema are indelible, even as fields in the logging schema itself are added or removed.
- Field rename: There is no concept of a field rename; this operation is treated as a field deletion followed by a field addition. A sketch of these ID-assignment rules appears after this list.
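The ID-assignment rules above can be summarized in a short, hedged sketch that reuses the hypothetical SerializationConfig and FieldEntry types from the earlier sketch: additions and type changes append a new entry with a fresh ID, while deletions only mark an entry, so IDs are never reused.

```cpp
// Sketch of the ID-assignment rules, reusing the hypothetical
// SerializationConfig / FieldEntry / FieldType types defined above.
#include <algorithm>
#include <cstdint>
#include <string>

// The next field ID is always one past the largest ID ever handed out.
int16_t nextFieldId(const SerializationConfig& cfg) {
  int16_t maxId = 0;
  for (const auto& f : cfg.fields) {
    maxId = std::max(maxId, f.id);
  }
  return static_cast<int16_t>(maxId + 1);
}

// Adding a field, or changing its type, appends a new (name, type, id) entry.
void addOrRetypeField(SerializationConfig& cfg, const std::string& name,
                      FieldType type) {
  cfg.fields.push_back({name, type, nextFieldId(cfg), /*deleted=*/false});
}

// Deleting a field never removes its entry; old payloads remain decodable.
void deleteField(SerializationConfig& cfg, const std::string& name) {
  for (auto& f : cfg.fields) {
    if (f.name == name && !f.deleted) {
      f.deleted = true;
    }
  }
}
```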
Acknowledgements
We would like to thank all the members of the data platform team who helped make this project a success. Without the XFN support of these teams and engineers at Meta, this project would not have been possible.
A particular thank-you to Sriguru Chakravarthi, Sushil Dhaundiyal, Hung Duong, Stefan Filip, Manski Fransazov, Alexander Gugel, Paul Harrington, Manos Karpathiotakis, Thomas Lento, Harani Mukkala, Pramod Nayak, David Pletcher, Lin Qiao, Milos Stojanovic, Ezra Stuetzel, Huseyin Tan, Bharat Vaidhyanathan, Dino Wernli, Kevin Wilfong, Chong Xie, Jingjing Zhang, and Zhenyuan Zhao.