Gorilla

This post writes down my first impressions after reading Facebook’s Gorilla paper, which describes their in-house TSDB.

Long story short, Facebook designed their own time-series database (TSDB) to collect and expose system metrics for monitoring and debugging (similar to OpenTSDB). Their main claim to fame is that the system handles their enormous volume of data (10M data points/s) while still supporting wicked-fast querying, because recent data is cached in memory (older data is put into HBase). The benefit for them is that they can run complex anomaly/correlation detection algorithms on their metrics to aid super-rapid issue detection and debugging.

For me, the most novel part of the paper is how they compressed their data (again, to allow them to cache it in memory). Basically it comes down to two observations they made about the data:

  1. Consecutive timestamps tend to be spaced at even intervals
  2. Consecutive values tend to have relatively small differences

Based on these two characteristics of the data, they were able to compress the data by:

  1. Only storing the delta of deltas between consecutive timestamps. For example, if a system pushes data every 60 seconds, almost 100% of the time the timestamps would be t, t+60, t+120, etc., so the delta of deltas is 0 except in the rare event that a point is delayed (e.g. t, t+60, t+121, where the deltas are 60 and 61, so the delta of deltas is 1). See the sketch after this list.
  2. Storing how each value differs from the previous one (the paper XORs the raw floating-point bits rather than subtracting), using the smallest number of bits possible to represent that difference. This gets a bit more complicated since they try to optimize as much as possible, so there end up being some bookkeeping bits that track how many bits are actually meaningful. But the basic intuition is: consecutive values are close, so the difference is mostly zero bits, and you only spend bits on what’s left.
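
To make the two tricks concrete, here’s a toy Python sketch of both ideas. It’s only the shape of the scheme: the real paper packs these results into a variable-length bit stream with control bits and carefully tuned ranges, which I’m glossing over here.

```python
import struct

def f64_to_bits(value: float) -> int:
    """Reinterpret a float64 as a 64-bit unsigned integer."""
    return struct.unpack(">Q", struct.pack(">d", value))[0]

def timestamp_deltas_of_deltas(timestamps):
    """Store the first timestamp raw, the rest as delta-of-deltas."""
    out = [timestamps[0]]
    prev_ts, prev_delta = timestamps[0], 0
    for ts in timestamps[1:]:
        delta = ts - prev_ts
        out.append(delta - prev_delta)  # usually 0 for evenly spaced data
        prev_ts, prev_delta = ts, delta
    return out

def value_xors(values):
    """Store the first value raw, the rest XORed with their predecessor."""
    bits = [f64_to_bits(v) for v in values]
    return [bits[0]] + [cur ^ prev for prev, cur in zip(bits, bits[1:])]

# Regular 60s spacing with one delayed point: after the first two entries,
# the stored numbers are tiny and compress to almost nothing.
print(timestamp_deltas_of_deltas([1000, 1060, 1120, 1181, 1241]))
# -> [1000, 60, 0, 1, -1]

# Slowly moving values: the XOR of neighbours is mostly zero bits.
print([hex(x) for x in value_xors([12.0, 12.0, 12.25])])
# -> ['0x4028000000000000', '0x0', '0x800000000000']
```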

The paper then goes on to describe the data structure they use to organize the data in memory (called TSmap) and also describes how they persist data to disk. Basically the in-memory data structure has to deal with concurrency, and overall the system prioritizes recent data over older data. A rough sketch of the idea is below.
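
For a feel of what that looks like, here’s a minimal sketch of the TSmap idea, assuming a simple two-level locking scheme: one lock guarding the name-to-series map, and one lock per series so concurrent writes to different series don’t contend. The paper’s actual implementation is more refined (a vector of shared pointers plus a case-insensitive name map behind a read-write spin lock, with a one-byte spinlock per time series), so treat this as the shape of the idea, not their code.

```python
import threading

class TimeSeries:
    def __init__(self):
        self.lock = threading.Lock()  # per-series lock (paper: 1-byte spinlock)
        self.points = []              # would be compressed blocks in Gorilla

    def append(self, timestamp, value):
        with self.lock:
            self.points.append((timestamp, value))

class TSmap:
    def __init__(self):
        self.map_lock = threading.Lock()  # guards the name -> series map
        self.series = {}

    def get_or_create(self, name):
        key = name.lower()  # the paper's name map is case-insensitive
        with self.map_lock:
            if key not in self.series:
                self.series[key] = TimeSeries()
            return self.series[key]

    def put(self, name, timestamp, value):
        self.get_or_create(name).append(timestamp, value)

tsmap = TSmap()
tsmap.put("cpu.load", 1000, 0.42)
```

The point of the per-series lock is that the hot path (appending a point to one series) never blocks on writes to other series, only on the brief map lookup.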

Finally, the paper goes through how the system is designed for fault tolerance and provides some examples of how Gorilla was used at Facebook to do great things.

Overall I think what they did at Facebook isn’t a particular breakthrough for me. In general, understanding the character of your data allows you to compress it in the most efficient manner while keeping the information you really need from it. The concept of prioritizing recent data over historical data is almost always the correct way to build data applications (e.g. time-series caches, lambda architecture), since far more often than not, historical data is about trends while recent data is about events/incidents. Although, based on my experience with OpenTSDB, it’s one thing not to be surprised in theory and another thing entirely to put those ideas/concepts into practice. 🙂

Edit: Gorilla has been open-sourced as Beringei.