January 29th, 2014



“We knew we had a winner with Apache Cassandra.”

– Dave Cocca, VP of Engineering at Retailigence





Retailigence provides local retail product availability and price information through an API to consumer applications, mobile applications, and mobile ad solutions.  We provide retailers with increased foot traffic from consumers with stronger purchase intent, and give brands an added “path-to-purchase” layer for their brand advertising.  Retailigence analytics also gives brands and retailers insight into the needs of consumers searching for products when they are “out and about”.


I am VP of Engineering, responsible for the design and development of Retailigence’s software solutions.  I manage the technology team at Retailigence, which consists of IT and QA specialists and data and software engineers across three countries.

Making sure you are always in stock with Cassandra

We use Cassandra as our primary datastore behind our API.  We store product and availability data for tens of millions of products across hundreds of thousands of locations, as well as user behavior and search history.  We began with Cassandra version 1.1.1; currently we run version 1.2.8 in production and 2.0.3 in development.

We are currently using Hector as a Java client to Cassandra.  We are very excited about the improvements made in Cassandra 2.x and the addition of the DataStax Java driver and CQL 3.0.  We plan to use both in the near future.
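As a rough illustration (not Retailigence’s actual code), connecting and querying with the DataStax Java driver and CQL 3 might look something like the sketch below.  The contact point, the “retail” keyspace, and the availability_by_product table and its columns are made up for this example.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class AvailabilityLookup {
        public static void main(String[] args) {
            // Contact point and keyspace are placeholders for illustration only.
            Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1")
                    .build();
            Session session = cluster.connect("retail");

            // A CQL 3 query against a hypothetical product-availability table.
            PreparedStatement ps = session.prepare(
                    "SELECT store_id, quantity FROM availability_by_product WHERE product_id = ?");
            ResultSet rs = session.execute(ps.bind("sku-12345"));
            for (Row row : rs) {
                System.out.printf("store=%s quantity=%d%n",
                        row.getString("store_id"), row.getInt("quantity"));
            }

            cluster.close();
        }
    }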

A journey from MySQL to MongoDB to Cassandra

We began with a MySQL solution in our alpha days.  MySQL quickly became a bottleneck for us as we amassed large amounts of retail data in a short period of time.  We scaled our hardware vertically as far as possible and tuned our code, our servers, and our workload to keep things running smoothly.  Eventually we had nothing left to tune.


We evaluated many different technologies and short-listed Cassandra and MongoDB as candidates to move forward.  We chose MongoDB at the time because it was the path of least resistance given our codebase, while Cassandra was seen as a greater risk due to its relative infancy (version 0.6.3).


We used MongoDB for approximately a year before we were forced to re-evaluate our selection.  MongoDB’s global write lock limited our ability to scale further without a much larger investment in hardware and considerably more complexity in our deployment and codebase.  At that point Cassandra looked far more attractive to us than it had in the past.


We built a prototype and put it through a barrage of load and stress tests based on our data sets and access patterns.  We were very pleased with its performance predictability, even under a punishing read/write load.  Overall read performance was lower than we were used to, but write performance exceeded our expectations.  In our experience, writes are much more difficult to scale horizontally than reads, and our workload is very write-heavy.  By tuning our read patterns, de-normalizing data, and replicating, we were able to reduce the effect that longer reads would have on our product.  Based on our needs, we knew we had a winner with Apache Cassandra.

Evolving nodes to meet demand

We have 3 data centers in 3 countries.  Most of our Cassandra nodes have SSDs (solid-state drives).  In terms of data, we have 300+ retailers, 100k+ retail locations, and 10M+ products, which amounts to close to 7.5B inventory records.  We currently have about 6 Cassandra nodes per data center.  Over the last two years we have added new nodes to meet data demands, and removed nodes to consolidate onto larger servers thanks to the availability of larger SSDs and the virtual nodes introduced in Cassandra 1.2.

Words of wisdom

Don’t be afraid to de-normalize.  Forget everything you’ve ever learned about first normal form.  Think about your data access patterns and model your data the way you will use it.  Experiment.  Don’t expect to get the data model right the first time.
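To make that concrete, here is a hedged sketch of query-driven de-normalization, with the same inventory fact written to two tables, one per read pattern.  The schema, table names, and values are hypothetical, not Retailigence’s actual model.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class DenormalizedSchema {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("retail");

            // One table answers "which stores carry this product?"...
            session.execute(
                    "CREATE TABLE availability_by_product (" +
                    "  product_id text, store_id text, quantity int, price decimal, " +
                    "  PRIMARY KEY (product_id, store_id))");

            // ...and a second answers "what does this store have in stock?" -- same data, duplicated.
            session.execute(
                    "CREATE TABLE availability_by_store (" +
                    "  store_id text, product_id text, quantity int, price decimal, " +
                    "  PRIMARY KEY (store_id, product_id))");

            // Every write goes to both tables so each read is a single partition lookup.
            session.execute("INSERT INTO availability_by_product (product_id, store_id, quantity, price) " +
                    "VALUES ('sku-12345', 'store-77', 4, 19.99)");
            session.execute("INSERT INTO availability_by_store (store_id, product_id, quantity, price) " +
                    "VALUES ('store-77', 'sku-12345', 4, 19.99)");

            cluster.close();
        }
    }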

SSDs are highly recommended for performance.  Familiarize yourself with wide-row indexing techniques for searching.  Determine and use the weakest consistency level that your application can correctly handle (one size doesn’t fit all in this category).
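For instance, the DataStax Java driver lets the consistency level be set per statement, so heavy writes and user-facing reads can each use the weakest level they can tolerate.  This is only an illustrative sketch; the table, column, and level choices are assumptions, not Retailigence’s settings.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class ConsistencyExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("retail");

            // High-volume inventory update: acknowledged by a single replica.
            SimpleStatement write = new SimpleStatement(
                    "UPDATE availability_by_product SET quantity = 3 " +
                    "WHERE product_id = 'sku-12345' AND store_id = 'store-77'");
            write.setConsistencyLevel(ConsistencyLevel.ONE);
            session.execute(write);

            // A read that should reflect recent writes within the local data center.
            SimpleStatement read = new SimpleStatement(
                    "SELECT quantity FROM availability_by_product " +
                    "WHERE product_id = 'sku-12345' AND store_id = 'store-77'");
            read.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
            ResultSet rs = session.execute(read);
            System.out.println(rs.one().getInt("quantity"));

            cluster.close();
        }
    }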

And don’t forget to run nodetool repair on all of your keyspaces!

A helping community

Cassandra is a moving target; much has changed since 1.1.1.  We keep ourselves up to date by reading community forums, tutorials, retrospectives, and so on.  Planet Cassandra is required daily reading.  If we run into a problem, we turn to the community forums for insight.  If the community were not as active as it is, I don’t think we would have been able to achieve what we have with Cassandra.


Retailigence Use Case Summary

Functional: Collections
Technical: Primary Datastore
Deployment: Cassandra 1.2.8, 18 nodes, 3 data centers, SSDs