February 20th, 2014


“Scaling is built into the heart of Cassandra…”

– Arya Goudarzi, Software Engineer at CardSpring






At CardSpring we have two primary products: the CardSpring API is a commerce platform that enables developers to build applications for credit/debit cards and point-of-sale systems (e.g. discount, loyalty, and digital receipt apps); CardSpring Connect lets merchants switch on those card-linked apps on demand, promote their business to new customers, and track their performance. You can read more about how a publisher like Foursquare uses our API to link offers to its users' payment cards, and how merchants can track their performance using CardSpring Connect here.

I am responsible for building the infrastructure.


Cassandra at CardSpring

We use Cassandra as our primary storage for real-time and batch transaction processing.

We are a small team that got bombarded with a huge data set and didn't have the resources to implement sharded MySQL. I had experience with both sharded MySQL and Cassandra, and decided to go with Cassandra.

We chose Cassandra because it fit our use case: our data size is huge and we only store blobs. We knew we were going to have hundreds of gigabytes of data from the start, and we knew that scaling that on a system like MySQL requires special sharding code and ongoing effort and resources. Scaling is built into the heart of Cassandra, so we never had to worry about handling sharding again. Secondly, high availability was one of our top criteria, as we are an API company; that too is built into the heart of Cassandra, so we never have to worry about a node going down causing an outage. Thirdly, our tests showed better performance for our mixed read/write workload. Lastly, we had someone on our team who was experienced with it (me), so moving to C* was an easy decision.
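To make the "no sharding code, no single point of failure" point concrete, here is a minimal sketch of how replication is declared once per keyspace with the DataStax Python driver; the keyspace, datacenter name, and addresses are hypothetical, not our production schema. Cassandra then handles data placement and availability on its own:

    from cassandra.cluster import Cluster

    # Connect to any node; the driver discovers the rest of the ring.
    cluster = Cluster(['10.0.0.1'])
    session = cluster.connect()

    # Replication is declared once, per keyspace. Cassandra places the
    # replicas and tolerates node failures -- no application-side sharding.
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS txn_store
        WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east': 3}
    """)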


Shrinking their cluster

We recently shrank our Cassandra cluster from 24 m1.xlarge nodes to 6 hi1.4xlarge nodes in EC2 using Priam. The 6 new nodes are significantly beefier than the nodes we started with, and each is handling more work than 4 of the old nodes combined. In a separate post I describe the process we went through to shrink the cluster and replace the nodes with beefier ones without downtime: rather than swapping nodes one at a time, we created an additional virtual region with the beefier nodes and switched traffic over to it.

You can read the post Shrinking the Cassandra cluster to fewer nodes for a more in-depth look.
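The core of that approach is Cassandra's datacenter-aware replication: bring up the beefier nodes as a second virtual datacenter, replicate into it, move client traffic over, then drop the old datacenter. A rough sketch of the keyspace side of that switch (names here are hypothetical; see the linked post for the real procedure):

    from cassandra.cluster import Cluster

    session = Cluster(['10.0.0.1']).connect()

    # Step 1: replicate into the new virtual datacenter alongside the old one.
    session.execute("""
        ALTER KEYSPACE txn_store
        WITH replication = {'class': 'NetworkTopologyStrategy',
                            'old-dc': 3, 'new-dc': 3}
    """)
    # Then run `nodetool rebuild old-dc` on each new node to stream the
    # existing data, and point the application at the new datacenter.

    # Step 2: once all traffic is served by the new nodes, drop the old
    # datacenter from the replication map and decommission its nodes.
    session.execute("""
        ALTER KEYSPACE txn_store
        WITH replication = {'class': 'NetworkTopologyStrategy', 'new-dc': 3}
    """)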


Advice on Cassandra

It is an advanced system that should not be treated as a black box. You must know Cassandra inside and out: how to design correct data structures and access patterns ahead of time so that you do not hit Cassandra's anti-patterns, and so on. If you use Cassandra, be prepared to monkey around in its code and figure out how it works. Thanks to open source, you can. Never use the latest version in production; I usually wait for five minor releases to go out on a major release before I consider an upgrade.
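As one concrete example of designing access patterns ahead of time: model each table around the query you will actually run, rather than normalizing and reaching for secondary indexes later (a classic anti-pattern for high-cardinality data). A hypothetical sketch, with invented table and column names:

    from cassandra.cluster import Cluster

    session = Cluster(['10.0.0.1']).connect('txn_store')

    # The query is "latest events for a card", so that pattern is baked
    # directly into the primary key and clustering order.
    session.execute("""
        CREATE TABLE IF NOT EXISTS events_by_card (
            card_id    text,
            event_time timeuuid,
            payload    blob,
            PRIMARY KEY (card_id, event_time)
        ) WITH CLUSTERING ORDER BY (event_time DESC)
    """)

    rows = session.execute(
        "SELECT payload FROM events_by_card WHERE card_id = %s LIMIT 20",
        ('card-123',))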