With the Olympics ever closer and our warnings over commodity Internet access remaining, we wanted to share a tiny change you can make to dramatically improve the speed of your on-line experience – set the right DNS server.
In the olden days users would set their DNS servers to whatever their ISP told them. These would generally be the servers closest to the user and would work reliably. In recent years there’s been a trend towards using public DNS servers such as those of OpenDNS and, more recently, Google. One obvious benefit of using these is the phishing and malware filtering they can offer, but there is a major potential downside – performance. If you read no further, know this: you will substantially increase your download speed by using your ISP’s DNS servers.
When you request a page in your web browser, you generally do so using a fully qualified domain name (e.g. www.google.com), which must first be resolved by a DNS lookup. Naturally, the further away the DNS server is, the longer each lookup takes. The page returned will potentially reference a few hundred other resources (the images etc. making up the page), and in some cases these sit on different fully qualified domain names – more lookups, each against a far-away server. This can add up, but we’re still potentially only talking milliseconds of added delay, aren’t we?
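You can measure the resolver’s contribution yourself. The short Python sketch below (the function name is ours, purely illustrative) times the system resolver via socket.getaddrinfo; note that a cached answer returns almost instantly, so only the first, uncached lookup reflects the real round trip to the DNS server:

```python
import socket
import time

def resolve_ms(hostname):
    """Time one DNS resolution via the system resolver, in milliseconds.
    Results may be cached by the OS or a local resolver, so only an
    uncached lookup reflects the round trip to the DNS server."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    return (time.perf_counter() - start) * 1000.0

# Compare the first (uncached) call against a repeat for a few hostnames
for name in ("www.bbc.co.uk", "www.google.com"):
    print(name, round(resolve_ms(name), 1), "ms, repeat:",
          round(resolve_ms(name), 1), "ms")
```

Run this once with your ISP’s DNS servers configured and once with a public resolver to see the difference per lookup, then multiply by the number of distinct hostnames on a typical page.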
For the DNS lookups themselves we are, but consider this: most major web properties now use Content Distribution Networks (CDNs), globally distributed arrays of servers designed to deliver content – ranging from software updates through images to videos – from the point closest to the user. For the most part they identify the closest content source from the DNS lookup. In other words, they identify the content source closest to the DNS server making the request, not necessarily to you. Consider the traceroutes below, showing the path and latency to example content for a typical Simwood co-location customer:
static.bbci.co.uk (the FQDN used for BBC iPlayer content) using Google’s DNS servers
traceroute to a1638.g.akamai.net (184.108.40.206), 30 hops max, 52 byte packets
 2  cs0.the.london.uk.core.simwood.com (220.127.116.11)  0.168 ms  0.248 ms  0.150 ms
 3  xe-1-3-2-63.lon10.ip4.tinet.net (18.104.22.168)  0.184 ms  0.183 ms  0.177 ms
 4  xe-5-0-0.lon21.ip4.tinet.net (22.214.171.124)  0.304 ms  0.266 ms
    xe-5-1-0.lon21.ip4.tinet.net (126.96.36.199)  0.253 ms
 5  188.8.131.52 (184.108.40.206)  0.347 ms  0.319 ms  0.280 ms
 6  ldn-bb2-link.telia.net (220.127.116.11)  136.674 ms
    ldn-bb2-link.telia.net (18.104.22.168)  0.373 ms
    ldn-bb1-link.telia.net (22.214.171.124)  28.307 ms
 7  prs-bb1-link.telia.net (126.96.36.199)  8.365 ms
    prs-bb2-link.telia.net (188.8.131.52)  7.934 ms
    prs-bb1-link.telia.net (184.108.40.206)  8.376 ms
 8  prs-b8-link.telia.net (220.127.116.11)  8.651 ms
    prs-b8-link.telia.net (18.104.22.168)  8.670 ms
    prs-b8-link.telia.net (22.214.171.124)  8.950 ms
 9  195-12-231-42.customer.teliacarrier.com (126.96.36.199)  8.738 ms  8.288 ms  8.813 ms
Content is being returned from a content array in Paris which is over 8ms from the London server making the request.
static.bbci.co.uk using Simwood DNS servers
traceroute to a1638.g.akamai.net (188.8.131.52), 30 hops max, 52 byte packets
 2  cs0.the.london.uk.core.simwood.com (184.108.40.206)  0.249 ms  0.216 ms  0.158 ms
 3  lonap.netarch.akamai.com (220.127.116.11)  3.413 ms  0.545 ms  0.277 ms
 4  a92-123-154-18.deploy.akamaitechnologies.com (18.104.22.168)  0.257 ms  0.277 ms  0.288 ms
Content is returned from a content array in London, under 0.3ms from the same London server making the request.
A simple change of DNS server has taken 5 hops out of the path to the content and reduced the latency by 96%. For domestic DSL users the absolute latency will be substantially higher, as DSL itself adds latency, but there should still be a marked improvement from using a nearby DNS server.
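The arithmetic is easy to check from the final hop of each traceroute. The figures below are the first reported times on the last hop of each trace; a quick calculation confirms a reduction in the high-90s percent range:

```python
# First reported latency on the final hop of each traceroute above (ms)
via_public_dns = 8.738  # content array in Paris (Google DNS)
via_isp_dns = 0.257     # content array in London (Simwood DNS)

reduction = 1 - via_isp_dns / via_public_dns
print(f"latency reduced by {reduction:.0%}")
```

The exact percentage varies slightly depending on which of the three probe times you pick, but any combination gives a reduction of well over 95%.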
But why do a few milliseconds matter? I have an 80Mb/s fibre-to-the-home connection, so I’ll get the content at that speed, won’t I? No, you won’t, and this is the key point: the actual speed of a download is inversely proportional to the latency, and even a slow connection will be barely utilised for a high-latency transfer.
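To put a number on that relationship: TCP can have at most one window of data in flight per round trip, so throughput is capped at window size divided by round-trip time, regardless of line speed. A minimal sketch of that ceiling (the function name is ours):

```python
def max_throughput_mbps(window_bytes, rtt_ms):
    """Hard ceiling on TCP throughput with a fixed window:
    at most one full window can be delivered per round trip."""
    return window_bytes * 8 / (rtt_ms / 1000.0) / 1_000_000

# A classic 64kB window at 128ms RTT can never exceed ~4.1Mb/s,
# however fast the line is; at 20ms the same window allows ~26Mb/s.
print(max_throughput_mbps(64 * 1024, 128))
print(max_throughput_mbps(64 * 1024, 20))
```

Halve the latency and you double the ceiling – which is exactly why shaving milliseconds off the path to the content matters more than adding megabits to the line.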
Put simply, think of bandwidth as lanes on a motorway. You could have 6 lanes between you and the site you’re accessing, but if you only have one car, 5 of those lanes are useless. In the case of a web page you usually have two cars, each able to carry a few boxes, and there are 100–200 boxes to move from one end to the other. The time it takes to travel down the motorway is what determines how fast you can get all the boxes across, not how many lanes the motorway has!
More technically, neither the server nor the client knows how much bandwidth is available between them, nor how much of it is presently in use. To overcome this the server first sends a few segments of data (usually 1.4kB to 1.5kB each) and waits for an acknowledgement that they were received. For each acknowledgement it sends an additional segment, doubling the amount in flight with each round trip, until it reaches the limit of the connection or the maximum window of 64kB.
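That growth pattern can be sketched as follows, assuming a typical 1,460-byte segment and the classic 64kB ceiling. This is an idealised model – real stacks vary in their initial window and segment size, and switch to slower congestion-avoidance growth after loss – so treat the exact round-trip counts as indicative:

```python
MSS = 1460               # typical TCP segment payload, in bytes
MAX_WINDOW = 64 * 1024   # classic receive-window ceiling without scaling

def slow_start_windows(mss=MSS, cap=MAX_WINDOW):
    """Bytes in flight on each successive round trip under idealised
    slow start: one segment to begin with, then doubling per RTT."""
    windows = []
    w = mss
    while True:
        windows.append(min(w, cap))
        if w >= cap:
            break
        w *= 2
    return windows

print(slow_start_windows())
# [1460, 2920, 5840, 11680, 23360, 46720, 65536]
```

Notice how many round trips pass before the connection is moving a full window at a time – each of those round trips costs a whole RTT regardless of line speed.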
As a consequence, you will have a faster experience on a low-bandwidth, low-latency network than on a high-bandwidth, high-latency one! 128ms of latency will reach the maximum 64kB window without even stressing a 5Mb/s line. But it gets worse: you will only reach that maximum 64kB window after 9 round trips and the transfer of over 250kB of data. Considering the exchange above potentially applies to every resource on a page (images etc.), and the vast majority of web content of that sort is less than 250kB, you can see how latency is far more important than headline bandwidth. A simple 10kB image will need 3 round trips and will never achieve beyond an instantaneous 1.1Mb/s, assuming 60ms of latency.
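Extending the same idealised model to a single small object shows how round trips, not bandwidth, dominate a small transfer (again, the exact figures depend on the initial window and segment size, so this is a sketch rather than a definitive calculation):

```python
def transfer_rtts(size_bytes, mss=1460, cap=64 * 1024):
    """Round trips needed to deliver size_bytes under idealised
    slow-start doubling (connection handshake excluded)."""
    sent, w, rtts = 0, mss, 0
    while sent < size_bytes:
        sent += min(w, cap)
        w = min(w * 2, cap)
        rtts += 1
    return rtts

def avg_throughput_mbps(size_bytes, rtt_ms, **kw):
    """Average throughput when transfer time is dominated by round trips."""
    seconds = transfer_rtts(size_bytes, **kw) * rtt_ms / 1000.0
    return size_bytes * 8 / seconds / 1_000_000

# A 10kB image at 60ms RTT takes 3 round trips and averages
# well under 1Mb/s in this model - on any line speed.
print(transfer_rtts(10_000), avg_throughput_mbps(10_000, 60))
```

Double the object size and the cost barely changes; double the latency and every one of those round trips doubles in cost. That is the asymmetry the motorway analogy is describing.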
Your own ISP will advise on the best DNS servers to use. Switching to them will greatly improve your performance, and may make the difference between your connection being usable or not during the Olympics if you are on a commodity service provider. For ultra-low-latency, guaranteed-capacity fibre connectivity, please contact us.