"The old MongoDB had a wider latency distribution, but now our 95th percentile latency is really flat and low... Latency translates directly into better end user experience, plus it gives us confidence that the database cluster has plenty of room to scale."
Juho Mäkinen, Head of Technical Operations at Unity

Unity is a game development ecosystem: a powerful rendering engine fully integrated with a complete set of intuitive tools and rapid workflows for creating interactive 3D and 2D content; easy multiplatform publishing; thousands of quality, ready-made assets in the Asset Store; and a knowledge-sharing community. My role is to lead Technical Operations for the new Unity Ads and Everyplay community services. Unity Ads gives game developers the ability to show video ads in their games, letting them both monetise their players and promote new games.

Migrating from MongoDB

The datastore has a few hard requirements: it needs to scale linearly to several terabytes of storage, it can't have any single point of failure, it needs to support distributing data in an active master-master fashion across multiple datacenters, and it needs to support a TTL for each record.
We were using MongoDB as our key database before, but we outgrew its comfort zone long ago as our active dataset grew. We weren't happy with its sharding, and it doesn't really support a master-master setup across multiple datacenters. As some of our operators had prior Cassandra experience, it was a natural candidate as a replacement. We installed a six-node Cassandra cluster, modified our core services to duplicate all data requests to it, and ran this setup for half a year in parallel with the original MongoDB. During this time MongoDB was still the production database, and the data in the Cassandra cluster wasn't used for anything other than testing the cluster's performance. Once we were confident it suited our use case, we switched our application to use the Cassandra cluster as the source of truth, and we'll soon be decommissioning the old MongoDB database.

As the database contained around 500 million documents, we first spent some time thinking about how to model it in Cassandra. We ended up choosing a time-series-style list of events per tracked object. In our data model each object has a unique id, which works nicely as the partition key. Each tracked event is then stored as a row in that partition, with the timestamp as the second part of the primary key (the clustering key). This was already an improvement over the old MongoDB model, because we felt confident we could store more detailed data in Cassandra than in MongoDB. We also used a few static columns to store common properties for each object; a sketch of this kind of table layout is shown below. With the old model, each time the database is used the frontend node.js server queries MongoDB for a document, modifies it and saves the updated data back to MongoDB.
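A minimal sketch of this kind of table layout, created from a node.js service with the cassandra-driver package; the connection settings, keyspace, table and column names here are illustrative placeholders rather than our actual schema:

```javascript
const cassandra = require('cassandra-driver');

// Placeholder connection settings.
const client = new cassandra.Client({
  contactPoints: ['10.0.0.1'],
  localDataCenter: 'datacenter1',
  keyspace: 'ads'
});

// One partition per tracked object, one row per tracked event,
// plus a few static columns for per-object common properties.
const createTable = `
  CREATE TABLE IF NOT EXISTS tracked_events (
    object_id   uuid,                -- unique id of the tracked object: the partition key
    event_time  timestamp,           -- clustering key: one row per event, ordered by time
    event_type  text,
    payload     text,
    first_seen  timestamp static,    -- static columns are shared by all rows in the partition
    country     text static,
    PRIMARY KEY (object_id, event_time)
  )`;

client.execute(createTable)
  .then(() => console.log('table ready'))
  .catch(err => console.error('failed to create table', err));
```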

We used a shadow launch approach, which essentially means we implemented as much of the desired functionality in production as possible without actually relying on it. First we altered our node.js frontend code to save a copy of all data into Cassandra (sketched below). This allowed us to watch closely how the incoming data settled into the new Cassandra cluster and to track its performance. Once we were satisfied, we altered the frontend code to also issue queries to the Cassandra cluster, but not to use the query responses in any way, so production was still running off the old MongoDB database. We let the system run like this for a couple of months, which resulted in around one terabyte of data in the Cassandra cluster. During this phase we practiced Cassandra operations: removing and adding nodes, running repairs with different methods, creating network topology splits, doing rolling upgrades to a new minor version, and verifying that we had good monitoring of and a good understanding of every part of the cluster.
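A minimal sketch of what that dual-write step can look like, reusing the illustrative table above and hypothetical function and field names; the MongoDB write stays authoritative, and a Cassandra failure never affects the user-facing request:

```javascript
// Shadow write: MongoDB remains the source of truth, Cassandra gets a copy.
async function saveEvent(mongoCollection, cassandraClient, event) {
  // Production path.
  await mongoCollection.updateOne(
    { _id: event.objectId },
    { $push: { events: event } },
    { upsert: true }
  );

  // Shadow path: duplicate the write, but swallow any Cassandra error so it
  // only shows up in our logs and metrics, never in the response to the player.
  const query = 'INSERT INTO tracked_events (object_id, event_time, event_type, payload) ' +
                'VALUES (?, ?, ?, ?)';
  cassandraClient
    .execute(query, [event.objectId, event.time, event.type, event.payload], { prepare: true })
    .catch(err => console.error('shadow write to Cassandra failed', err));
}
```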

After this we truncated the Cassandra database and modified our frontend code to start reading from Cassandra, use the data from Cassandra, and fall back to reading from MongoDB if Cassandra didn't contain the data; from this point on, data was always saved to Cassandra. As the migration started, the miss ratio from Cassandra quickly fell from 100% to around 30% within a couple of days and then gradually decreased as more and more documents were migrated. Eventually we can just disable the old MongoDB code path and conclude that the migration is fully complete.
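A minimal sketch of that read path during the migration, again with illustrative names; since all saves now go to Cassandra, a document effectively migrates the first time it is read, modified and saved back, which is why the miss ratio shrinks over time:

```javascript
// Read path during the migration: try Cassandra first, fall back to MongoDB
// on a miss. The subsequent save (not shown) always goes to Cassandra.
async function loadEvents(cassandraClient, mongoCollection, objectId) {
  const result = await cassandraClient.execute(
    'SELECT * FROM tracked_events WHERE object_id = ?',
    [objectId],
    { prepare: true }
  );
  if (result.rowLength > 0) {
    return result.rows;                          // already in Cassandra
  }

  // Miss: serve the request from the old MongoDB document instead.
  const doc = await mongoCollection.findOne({ _id: objectId });
  return doc ? doc.events : [];
}
```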

This kind of shadow launch and gradual migration is usually pretty easy to implement, assuming the code accessing the database tier is structured with good programming practices and patterns. It also allowed us to keep 100% uptime during the entire migration. Another option would have been to migrate the data offline from MongoDB to Cassandra, but that could easily have taken days, so we didn't even consider it.

Improved latency for ads

Our biggest single user is Unity Ads, which stores history data on shown ads across the entire ad network. This is a real-time need: the database is touched each time ads are shown in a game, and the data is used to calculate which ads to show to the player. We use Cassandra's time-to-live (TTL) feature heavily to clean old records out of the database.
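A minimal sketch of how such per-record TTLs can be set from the node.js driver; the 30-day expiry and the names are illustrative assumptions, not our actual values:

```javascript
// Every ad-history row is written with a TTL, so Cassandra expires it
// automatically and no separate cleanup job is needed.
const THIRTY_DAYS_IN_SECONDS = 30 * 24 * 60 * 60;   // illustrative retention period

function recordShownAd(cassandraClient, objectId, adId) {
  const query = 'INSERT INTO tracked_events (object_id, event_time, event_type, payload) ' +
                'VALUES (?, ?, ?, ?) USING TTL ?';
  return cassandraClient.execute(
    query,
    [objectId, new Date(), 'ad_shown', adId, THIRTY_DAYS_IN_SECONDS],
    { prepare: true }
  );
}
```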

The biggest benefit we've received is better request times from the database tier. The old MongoDB had a wider latency distribution, but now our 95th percentile latency is really flat and low. There are still some optimisations to be done to get a smoother 99th percentile, but we're already really happy with the performance. Latency translates directly into better end user experience, plus it gives us confidence that the database cluster has plenty of room to scale. The programmers are happy with the CQL language. It still doesn't have many features compared to a full SQL implementation, so there's a lot of room to make CQL easier to use by implementing more SQL-like features.


We are currently running 15 c3.2xlarge nodes with Cassandra 2.0.11 in a single AWS region with a replication factor of three. We did try i2.xlarge instances, but they lacked the CPU power for our data model. Because the CPU-to-SSD ratio of c3.2xlarge instances is not optimal for Cassandra, we run parts of our middle tier on the same machines, which benefits from the reduced latency to the database. We run everything inside Docker containers, which hasn't been a problem for Cassandra at all.

We're in the process of migrating some smaller datasets from MongoDB to Cassandra so that we gain the ability to distribute our production across multiple data centers (both AWS and private cloud). The end goal is to use geoip DNS to direct users to the closest data center. We're using the OpsCenter community edition to get an overview of the cluster, and the metrics-graphite plugin to feed instrumentation into our Graphite monitoring setup. We monitor latencies for each table, both on the Cassandra side and on the application side.
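For the replication side of that plan, a minimal sketch of what a keyspace definition could look like, assuming NetworkTopologyStrategy and illustrative data center names; the current single-region setup with replication factor three is one entry, and the planned second data center would just be another entry in the map:

```javascript
const cassandra = require('cassandra-driver');

// Placeholder connection settings, as in the earlier sketches.
const client = new cassandra.Client({
  contactPoints: ['10.0.0.1'],
  localDataCenter: 'aws-us-east'
});

// Three replicas in the current AWS data center; a second (e.g. private-cloud)
// data center would get its own 'name': replicas entry in this map.
const createKeyspace = `
  CREATE KEYSPACE IF NOT EXISTS ads
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'aws-us-east': 3
  }`;

client.execute(createKeyspace)
  .then(() => console.log('keyspace ready'))
  .catch(err => console.error('failed to create keyspace', err));
```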

It's well known that there are several Cassandra clusters over an order of magnitude bigger than ours, so I have no reason to doubt that we can scale well into the future. I'm also excited about the master-master multi-data-center deployment, which should give us even better uptime and faster response times. The recently released 2.1.0 also contains several features which we're hoping to use in the future.

Cassandra tradeoffs & community

Cassandra is a great choice if the dataset is larger than what a few big machines could handle with some other database. Anything smaller can usually be handled by a single machine, and if a single machine is enough, there are other databases with richer feature sets available; because Cassandra has chosen to support linear scalability and master-master deployments across data centers, its design comes with some tradeoffs.

The community is really helpful. I've been most active in the Apache-hosted Jira, which is used to track all Cassandra development. All my bug reports have been answered quickly, and the actual bugs have usually been fixed in the next revision. The discussion is really helpful and gives me confidence that the product is under active development.
