
The week the game changed

Simon Woodhead

29th March 2013

An opinion post by Simon Woodhead, Managing Director.

It should come as little surprise to anyone that this week saw a rather large DDoS against SpamHaus. The publicity machine at CloudFlare has gone into overdrive and reporting in the mainstream media has been light on fact. For the record, I’m unimpressed with some of the things they’ve said; they have had enough publicity and I don’t wish to add to it. Let us instead look at the facts, since many customers have been asking.

What actually happened?

There is an excellent video featuring Bill Woodcock from Packet Clearing House which explains the mechanism of the attack.
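The arithmetic behind a reflection attack of this kind is worth seeing in isolation. A minimal sketch, with purely illustrative figures (the byte counts and attacker bandwidth below are assumptions for the example, not measurements from this attack):

```python
# An open-resolver amplification attack turns a small spoofed query
# into a much larger response aimed at the victim.
query_bytes = 64        # size of a typical DNS query (assumption)
response_bytes = 3000   # large response with many records (assumption)
amplification = response_bytes / query_bytes

attacker_gbps = 1.0     # spoofed query traffic the attacker can originate
victim_gbps = attacker_gbps * amplification

print(round(amplification), "x amplification")
print(victim_gbps, "Gb/s arriving at the victim")
```

The point is simply that the attacker’s own capacity is multiplied by the amplification factor, which is why relatively modest botnets can produce enormous floods.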

What can be done about it?

Similarly, Patrick Gilmore from Akamai has posted a good technical piece urging other ISPs to take the necessary action to avoid originating the spoofed packets which enable such an attack.
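The ingress filtering Patrick urges can be reduced to one check at the network edge: only forward packets whose source address belongs to the prefixes actually routed behind the port they arrived on. A minimal sketch (the prefixes and addresses are illustrative):

```python
import ipaddress

def ingress_filter(src_ip, customer_prefixes):
    """Return True if the packet should be forwarded.

    An edge router forwards a packet only if its source address falls
    within the prefixes assigned to the interface it arrived on;
    anything else carries a spoofed source and is dropped.
    """
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in customer_prefixes)

# Prefixes legitimately routed behind this customer port (illustrative).
prefixes = [ipaddress.ip_network("192.0.2.0/24")]

print(ingress_filter("192.0.2.17", prefixes))    # genuine source: forward
print(ingress_filter("198.51.100.9", prefixes))  # spoofed source: drop
```

If every access network applied this check, spoofed-source reflection attacks of the kind seen this week could not be launched in the first place.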

Both of the above know what they’re talking about and I’d encourage you to consider them the authoritative version of events. I agree with them both and support their positions wholeheartedly.

However, you may sense from the title that I have something more to say, and that I am left feeling a little anxious. You’d be correct. In short, I fear we’ve entered a new era for DDoS and need a new solution.

This attack was technically unsophisticated and, as Bill and Patrick explain, entirely avoidable. There are many modes of attack and, as the stakes continue to rise, we can be sure the originators of these attacks will find new, more sophisticated techniques. Sophistication doesn’t scare me terribly though, as existing mechanisms will evolve to handle it. Floods scare me, and this week we’ve seen a huge flood against a relatively low-profile target. This is a first.

A year or more ago we developed solutions and bought expensive hardware to mitigate attacks against our own network and those of customers. We also partnered with a major DDoS mitigation specialist in the US for anything we couldn’t handle on-net. At the time the average DDoS was 300Mb/s and this gave us the capability to deal with several on-net at once and the fallback to call in the big guns if we needed to. Sophistication wouldn’t affect that decision, only scale.

Whilst a sophisticated attack may seek to exhaust resources on a server or even a farm of servers, a blunt flood such as that seen this week can also saturate links. If your links are full you’ve lost, no matter what you do to filter packets at the thin end of the link. Think of it like squirting yourself in the mouth with a hose-pipe – you can close your mouth but you’re still going to get wet. What you really need is for someone to turn the hose-pipe off closer to source.

We have always maintained links to peers and our transit suppliers that are at least 5 times our average utilisation. Our reasons are largely QoS related but this also gives us the headroom to contain a flood, i.e. drop packets when they arrive on-net without saturating the link.
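The headroom rule above is simple arithmetic. A sketch with illustrative figures only (these are not our actual link capacities):

```python
def absorbable_flood_gbps(link_capacity, average_utilisation):
    """Headroom available to soak up a flood before the link saturates.

    Packets arriving within this headroom can be dropped on-net;
    anything beyond it saturates the link and filtering on our side
    no longer helps.
    """
    return link_capacity - average_utilisation

# Illustrative: a 10Gb/s link run to the "at least 5x average" rule
# carries 2Gb/s on average, leaving 8Gb/s of headroom.
link = 10.0
average = link / 5
print(absorbable_flood_gbps(link, average), "Gb/s of flood can be contained")
```

The uncomfortable consequence, as discussed below, is that containing ever-larger floods this way means buying ever-larger links that sit mostly idle.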

What has changed this week, in my mind at least, is the scale of an attack against a profile of target who could easily be a customer (PayPal, Visa and other historic recipients of massive attacks are not) or even ourselves. The solution is bigger pipes still and bigger equipment on-net to contain floods, but that comes at a price which has to be justified. The DDoS mitigation specialists charge so much because they have to pay to have masses of capacity there ready and waiting. After this week I suspect they and other networks will be reviewing their headroom, all of which is going to cost money and feed through to prices.

I don’t think that approach is sustainable. Many small networks operate with very little headroom and were already vulnerable before this week. Others such as us who have engineered in over-capacity cannot economically keep multiplying the headroom just in case. Whilst larger networks have larger links and by implication more headroom, the relative problem is the same. The mitigation specialists are going to need more capacity which means their services will be more expensive and further out of reach for smaller networks and businesses. Do you see my concern?

My suggested approach uses what we already have differently, for collective gain. As an industry we already have massive (Terabit+) resource that we interconnect over in the form of the Internet Exchange Points (IXPs). In an attack there is a destination network and multiple source networks, but the chances are that a large proportion of the traffic is already traversing an IXP. The IXPs are the massively fat pipes, and the place where we could collectively and mutually contain an attack, increasing capacity as required. I’d urge my peers and the IXPs we’re a member of to consider a few things:

  1. Making larger ports the default. I view congestion anywhere as everybody’s problem, not just the destination network’s, and therefore it is in everyone’s interest that it is avoided. It is feasible for everyone to have more capacity than they need (just in case) while charging commercially for what they actually use. The present model is a charge based on port size whether or not you use it, and some of the entry-level options are in nobody’s interest.
  2. Enabling effective filtering at IXP level. This could range from black-hole mechanisms standardised amongst peers (as is routine with transit) through to ACLs on member ports. I’ll dispense with the ‘how’ as that is the easy part. The bigger challenge is convincing peers that this is good and necessary whilst, amongst bright technical people, avoiding over-intellectualising things. As an IXP member, peer, and Internet citizen my position is clear: such protective measures are both good and necessary.
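The black-hole mechanism in point 2 amounts to a destination-based filter: the victim network signals a host route it wants discarded, and the exchange drops matching traffic before it crosses the member’s port. A minimal sketch of that decision, assuming a destination-triggered model as used with transit blackholing (the addresses are illustrative):

```python
import ipaddress

def should_drop(dst_ip, blackholed):
    """Destination-based blackholing at the exchange.

    Traffic towards an address the destination network has asked its
    peers to discard is dropped at the IXP, sparing the member's own
    link from saturation.
    """
    dst = ipaddress.ip_address(dst_ip)
    return any(dst in net for net in blackholed)

# The victim network signals a host route to be discarded (illustrative).
blackholed = {ipaddress.ip_network("203.0.113.5/32")}

print(should_drop("203.0.113.5", blackholed))  # attack traffic: dropped at the IXP
print(should_drop("203.0.113.6", blackholed))  # other traffic: unaffected
```

The trade-off is familiar from transit blackholing: the targeted address goes dark, but the rest of the network stays reachable and the member’s port stays usable.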

The Internet community has faced challenges before but has always overcome them. Unlike other communities and industries it has overcome them by coming together, with examples ranging from the formation of the IXPs themselves through to the open-source software powering many applications that sit on the Internet. Whilst I remain anxious after this week, I look forward to us coming together again to look at what we have in a new light to solve today’s issues.
