Real-time big-data!

Simon Woodhead

14th June 2013

‘Big data’ is a bit of a buzz phrase lately and is typically used to refer to large volumes of data, crunched over prolonged periods to determine long-term trends. Real-time data, on the other hand, tends to be much smaller and needs to be accessed rapidly. But what about real-time big-data? That is something we have to deal with here, and we thought we’d share some details of our solution.

Background

We’ve used MySQL for over 15 years and love it. All our transactional data is stored in it and, until our recent new stack release, it was also involved in call processing: we’d authenticate customers against the database, check balances and so on in real-time, albeit with help from memcached.

We moved from bigger and bigger database boxes to smaller SSD-based ones years ago, when SSDs were a new thing. We’ve also used MEMORY tables for our call routing for many years, as the only way of getting close to the performance we needed when querying several million route options in real-time. Despite all of that, MySQL inevitably gets slower as you fill it with more and more vital data, and even before that point the latency is such that the options for running multiple queries against it in the fraction of a second we have to route a call are severely limited.

For our new stack we wanted to run literally dozens of checks to offer the unique anti-fraud and security features we do, but we also wanted to reduce the time it took to connect calls. We needed something that could respond in microseconds, would perform linearly no matter how much data we threw at it, and would enable us to push the boundaries and offer new features. We found it in Redis and built our new stack on it – there’s not a database query in sight now, calls connect quicker than ever, yet we do far, far more to protect customers before doing so. Result!

Redis

Redis is a key-value store. It stores data in RAM and so is often compared to memcached. Whilst it can be used like memcached, it is so much more, as it has advanced data structures. It is also fast; really fast. 100k reads or writes per second on commodity hardware is quite achievable, and we’ve heard of people achieving over 1m on workstation-grade hardware. It responds in microseconds, which means the bottleneck to performance is usually network latency, even when that is sub-millisecond on our LAN.

Where it is like memcached is that it is key-based. You cannot query for patterns or on multiple parameters as you might in SQL; instead you read and write by key. As such, like many NoSQL solutions, it requires you to think about what you want out of it very early in the design process and to denormalise data to get there.
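
To make that concrete, here is a minimal sketch (using the redis-py client; the key names and values are purely illustrative, not our actual schema) of the “one key per question” style this pushes you towards:

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    # Illustrative keys only. There is no "SELECT ... WHERE account = 'X'" to
    # fall back on, so each answer we will want later lives under its own key.
    r.set("balance:acct:X", "42.50")                # current balance, one GET away
    r.incr("channels:acct:X")                       # channels in use, maintained as calls start/stop
    r.incrbyfloat("spend:acct:X:20130614", 0.017)   # running spend for the day

    # Reads are then single, O(1) key lookups.
    print(r.get("balance:acct:X"), r.get("channels:acct:X"))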

Big data

We handle a large number of calls. For round figures, let us assume 1m per day, mostly occurring during the 8-hour business day. That is approximately 34 per second (1,000,000 calls over 28,800 seconds), assuming they’re received linearly (they aren’t!). For each of those 34 calls per second we perform approaching 100 checks, pushing us to around 3,400 lookups per second just for real-time call routing.

Redis handles that effortlessly. In fact our Redis instances run as single-core VMs on the Simwood vSphere clusters in all our sites and handle 5-10k requests per second each. That is 5-10k per second with a load average of 0 and CPU usage of 0-5%. Did we say it was fast?

Historically, all our analysis was retrospective, against MySQL or using map-reduce techniques. Either way, whilst we could do quick calculations with latency acceptable for management reporting, complicated analyses took hours to run. At that kind of latency they serve as information only, not actionable intelligence.

There were multiple metrics we wanted, and we wanted them in real-time so we could use them in our real-time routing. For example, we may want to monitor the performance of each onward carrier, for each customer, for each destination, so we can route calls most appropriately for that customer at that time, rather than blindly firing them down the cheapest route as our competitors do.

Similarly, what if a customer is sending us fraudulent traffic or predatory traffic? We want to identify that as early as possible and handle it. Rather than blocking traffic to all destinations for that customer, or blocking that destination for all customers, we may want to modify our routing for that customer to that destination alone to contain the problem. But we may only want to do that after a certain trigger or for certain calls to that destination.

Wow, that is an awful lot of variables and that is only two examples. And we want to do this in real-time?!

Redis to the rescue! We mentioned that Redis has numerous data structures; one such structure is a sorted set, or ZSET. This holds a set of members, each kept sorted by a score. Scores can be updated, and ranges of members retrieved by score or by rank. The common use of this in examples is league tables, e.g. the top 10 scoring users. Whilst it is perfect for that, it is way more powerful than that.
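
A minimal sketch of a ZSET in action via redis-py (the leaderboard key and member names are invented for illustration):

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    # Illustrative key and members. Adding a member again simply updates its score.
    r.zadd("leaderboard", {"alice": 120, "bob": 95, "carol": 210})
    r.zadd("leaderboard", {"bob": 150})

    # Top 10 by score, highest first, with the scores themselves.
    print(r.zrange("leaderboard", 0, 9, desc=True, withscores=True))

    # Or a range query by score rather than by rank.
    print(r.zrangebyscore("leaderboard", 100, 200))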

ZSETs can be huge and can be combined in various ways. You can add them together (ZUNIONSTORE), compute the members in common (ZINTERSTORE), and apply weights to the scores as you do so, which lets you compute differences between scores. You can do this for two sets or many more. They’re stored in RAM, so these operations happen fast.
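
For instance, here is a sketch (key names invented) of using an intersection with weights to turn two timestamp-scored sets into call durations, the sort of score arithmetic we mention in the notes below:

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    # Illustrative data: one set scored by answer time, one by hangup time,
    # with the same call IDs as members.
    r.zadd("calls:answered", {"call-1": 1371204000, "call-2": 1371204010})
    r.zadd("calls:ended",    {"call-1": 1371204063, "call-2": 1371204190})

    # Intersect with weights +1/-1: the default SUM aggregate then leaves each
    # member scored with (end - answer), i.e. the call duration in seconds.
    r.zinterstore("calls:duration", {"calls:ended": 1, "calls:answered": -1})
    print(r.zrange("calls:duration", 0, -1, withscores=True))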

For us, for every leg of every call we add a unique identifier to a ZSET with (usually) a timestamp as the score. We repeat this for every event on that call and for every parameter we want to report on. So every account has a ZSET, every carrier has a ZSET, calls failing for each reason have a ZSET, and so on. We don’t always use time as the score either, which adds more possibilities. Some sets have millions of entries in them.
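
A minimal sketch of what that write path might look like (redis-py again; the key naming scheme and fields are illustrative assumptions, not our actual schema):

    import time
    import uuid

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def record_call_leg(account, carrier, destination, failure_reason=None):
        """Index one call leg under every dimension we may later report on.
        Key names here are illustrative only."""
        call_id = str(uuid.uuid4())
        now = time.time()

        pipe = r.pipeline()
        pipe.zadd("calls:acct:%s" % account, {call_id: now})
        pipe.zadd("calls:carrier:%s" % carrier, {call_id: now})
        pipe.zadd("calls:dest:%s" % destination, {call_id: now})
        if failure_reason:
            pipe.zadd("calls:failed:%s" % failure_reason, {call_id: now})
        pipe.execute()
        return call_id

    record_call_leg("X", "Y", "Z")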

We can report on any combination of these. So let’s say we want to know the calls for account X that were routed over carrier Y between two points in time and that we billed to destination Z? Easy.
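
In Redis terms, that question is an intersection of the per-account, per-carrier and per-destination sets, followed by a range query on the timestamps (a sketch, reusing the illustrative key names from above):

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    # Members common to all three sets; scores (timestamps) aggregated with MAX
    # so the result can still be filtered by time.
    r.zinterstore("report:X:Y:Z",
                  ["calls:acct:X", "calls:carrier:Y", "calls:dest:Z"],
                  aggregate="MAX")

    # Only the calls between two points in time.
    start, end = 1371168000, 1371254400
    print(r.zrangebyscore("report:X:Y:Z", start, end))

    # Throwaway result keys can be expired rather than left lying around.
    r.expire("report:X:Y:Z", 60)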

Now, you may be thinking that is much easier with SQL. Of course it is, and we love SQL for that, but how long will it take, and can you even write the several million underlying records needed in real-time beforehand? If you look at your MySQL slow log you’ll find queries taking minutes, hours, or sometimes days, unless your data set is tiny or you’ve tuned for very specific requirements. If we look at our Redis slow log, the slowest query we have ever done was 24ms, whilst concurrently writing in real-time at a crazy rate.

Now, even tens of milliseconds for each of 100 or so checks adds up to a few seconds before network latency. That is unacceptable – we want to be completely done in the low tens of milliseconds.

So, taking account of the fact that Redis is RAM-based and read/write operations are really cheap, we do all this out of band. We have daemons processing historic data for every combination and permutation we’re interested in, storing time-series data and snapshot data back into Redis using its other powerful data structures.
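
A heavily simplified sketch of what such an out-of-band worker might do (the metric, the key names and the one-second cadence are assumptions for illustration):

    import time

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def snapshot_channels(account):
        """Every second, count how many calls an account started in the last
        second and keep both a latest snapshot and a time series for it."""
        while True:
            now = int(time.time())
            count = r.zcount("calls:acct:%s" % account, now - 1, now)

            # Latest value as a plain key (a single GET for the routing layer)...
            r.set("snapshot:channels:%s" % account, count)
            # ...and the history as a time series, scored by timestamp so it
            # can be range-queried later ("this time last week", etc.).
            r.zadd("timeseries:channels:%s" % account,
                   {"%d:%d" % (now, count): now})

            time.sleep(1)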

Now, if we want to know the margin for a customer to a destination in the last second, that information is available in microseconds. If we want to know the max/min/avg channels a customer had in use this time last week, that is microseconds too. Microseconds make it quick enough to include in our real-time routing, and the data is less than a second old in many cases and a few minutes old at most.
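
On the read side, the routing logic then only ever touches those pre-computed keys (again a sketch with invented key names, maintained by daemons like the one above):

    import time

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    # Latest snapshot: one GET, answered by the server in microseconds.
    margin_now = r.get("snapshot:margin:acct:X:dest:Z")

    # "This time last week": a score-range query on the pre-built time series.
    week_ago = int(time.time()) - 7 * 24 * 3600
    channels_then = r.zrangebyscore("timeseries:channels:X",
                                    week_ago - 30, week_ago + 30)

    print(margin_now, channels_then)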

The future

Having the kind of information that previously we’d report on monthly available in real-time is transformative. We can deliver best-quality routing to customers already, but as we are able to push more fraud intelligence into the mix we can protect customers more. Naturally, we can also protect ourselves more from bad debts and predatory traffic, which feeds through to us needing to build less margin into pricing. We think better pricing for performance-based call routing that watches your back for fraud is quite a good deal!

When you consider that our competitors route statically, and that any dynamic checks are based on legacy databases (SQL Server 2000 in one case, we think) with retrospective checks against the same, why wouldn’t you send all your traffic to Simwood?

Notes

Performance proof

Aside from our own services and our customers’ experience, we apply anything we develop and prove to our Professional Services work. One long-running project is for a major US telco, where we’ve migrated a legacy SS7 stack to a dynamic one with a SIP-based core. We have also migrated them to a Redis-based LCR. The time taken for them to switch a call fell as a result from 1.2 seconds to just 30ms, and due to the reduced connection time they’re seeing more calls connect and are using their SS7 infrastructure more efficiently.

For Redis aficionados

We know ZSETs are expensive in terms of RAM and we could instead use bitmaps. Bitmaps would be much quicker too, but there are a few challenges here surrounding identifiers and the fact that we don’t simply want to count entries. A lot of our intersects compute score differences to calculate time elapsed. It would be possible, but it would make things more complicated. Given we own our own network and have an abundance of RAM available, we elected to do it this way. Repeating it on the public cloud, with plenty of time available, you might prefer to use bitmaps to replace some of the sorted sets.
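
For anyone tempted down that route, a minimal sketch of the bitmap alternative (it assumes you can map each call to a small sequential integer, which is exactly the identifier challenge mentioned above; key names are illustrative):

    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    # Each call gets a sequential integer ID; set that bit in one bitmap per dimension.
    call_idx = 1234567
    r.setbit("bits:acct:X", call_idx, 1)
    r.setbit("bits:carrier:Y", call_idx, 1)
    r.setbit("bits:dest:Z", call_idx, 1)

    # Intersections become a BITOP AND and counting a BITCOUNT -- very fast and
    # very compact, but you only get membership counts, not scores to diff.
    r.bitop("AND", "bits:X:Y:Z", "bits:acct:X", "bits:carrier:Y", "bits:dest:Z")
    print(r.bitcount("bits:X:Y:Z"))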
