Winning The War On Latency
As modern society’s reliance on internet connectivity continues to increase, latency is becoming an ever-more pressing issue for online enterprises. Defined as the delay before data transfer begins in response to an instruction, latency’s impact varies hugely across industries. An ecommerce platform might tolerate perceptible levels of lag, but a cloud-hosted gaming portal would rightly regard delays of more than 30ms as unacceptable.
What Causes Latency?
Latency is an inevitable by-product of the internet’s architecture. Rather than a direct one-to-one connection, online data streams are subdivided into individual packets by host servers. Each packet is dispatched along the least congested network route at that precise millisecond, which means three consecutive packets may take very different routes to their destination.
Each router, server or other distribution node adds a fractional delay to a packet’s delivery. The same is true of the physical distance travelled along fiber cables. If data travels from Santa Monica to Chicago via Buenos Aires and Toronto, it’s naturally going to take longer than it would have done following the digital equivalent of Route 66. Satellite internet connections are notoriously slow, incurring delays of up to 600ms. This explains why OneWeb’s forthcoming network of broadband-emitting satellites will hover just 750 miles above the Earth’s surface, compared to the 22,000-mile orbit of current data satellites.
Recipient devices receive packets in no particular order before compiling them into a unified format using header and footer data. Reassembly also increases latency, particularly if a packet is lost or corrupted along the way and has to be resent. That becomes more likely during peak traffic periods, with Christmas Day typically representing the high point of data access each year. And while most of us rely on Wi-Fi networks, the current generation of routers and hubs often struggles to project signals more than twenty feet in any direction. They’re also susceptible to interference from wireless devices like baby monitors and microwave ovens, which compete for the same 2.4GHz frequency band. If interference corrupts a packet, the data has to be resent and latency levels rocket.
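The reassembly step described above can be sketched in a few lines. This is an illustrative model rather than a real transport protocol: packets here are simple dictionaries carrying a sequence number, and a gap in the sequence signals a lost packet that would need resending.

```python
# Illustrative packet reassembly: order arriving packets by sequence
# number and flag any gaps that would force a retransmission (and
# therefore extra latency). The packet format is hypothetical.

def reassemble(packets):
    """Return (payload, missing_seqs) from out-of-order packets."""
    by_seq = {p["seq"]: p["data"] for p in packets}  # dedupes repeats
    expected = range(min(by_seq), max(by_seq) + 1)
    missing = [s for s in expected if s not in by_seq]
    payload = b"".join(by_seq[s] for s in expected if s in by_seq)
    return payload, missing

# Packets arriving in no particular order, with seq 2 lost in transit
arrived = [
    {"seq": 3, "data": b"ld!"},
    {"seq": 0, "data": b"Hel"},
    {"seq": 1, "data": b"lo "},
]
payload, missing = reassemble(arrived)  # missing == [2]
```

The missing sequence number is exactly what triggers the retransmission delays described above.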
Another leading cause of latency is insufficient capacity on server networks. If a streaming service can’t call upon additional servers under peak load, it may become overwhelmed and deliver stuttering performance. This is equally true for client-side cellular networks, which are an improvement on Noughties 3G networks but remain impractical for reliably streaming anything more taxing than low-resolution games or podcasts.
The Latent Effects
Clearly, reducing latency to zero on a remote network is impossible – even in the most advanced fiber cables, light travels at only around two-thirds of its speed in a vacuum. And while the human eye might not notice latency of 10ms, there’s still a delayed response to user inputs between host and recipient devices. Indeed, financial services companies like stock trading platforms regard even the tiniest delay as significant, since modern trading relies on instantaneous algorithmically calculated responses to live events. It’s been suggested that being one millisecond faster could save a brokerage firm $100 million in a single year, while Google recently spent almost $2 billion constructing a data center in New York for Stock Exchange clients.
Longer latency periods are more acceptable for media platforms – but not by much. An online chess game ought to cope with 100ms of latency, whereas FPS games become laggy and glitchy with anything more than 30ms. Stuttering graphics and delayed inputs ruin the experience, which is completely unacceptable for gaming services like Steam or GOG.
Non real-time platforms also suffer from latency. Home workers attempting to video conference their colleagues might become frustrated by pixelation and distorted audio, while shared software platforms like Trello and Slack become glitchy if one user’s bandwidth is slower than everyone else’s. It’s well documented that consumers will abandon websites that haven’t loaded within three seconds. Months of hard work on SEO and content production could be undone in a flash by slow page loading, which is unthinkable in today’s mature online marketplace where almost every business has an established presence.
Identifying latency can be a challenge, but online tools are freely available. These commonly involve conducting ping tests, which measure the number of milliseconds taken for a test packet to complete a journey from device to server and back.
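A basic ping test can be approximated in a few lines. A true ping uses ICMP echo packets, which require raw-socket privileges, so the sketch below times TCP handshakes instead – a common unprivileged stand-in. The host and port in the usage comment are placeholders.

```python
import socket
import time

def tcp_ping(host, port, attempts=3, timeout=2.0):
    """Approximate round-trip latency (in ms) by timing TCP handshakes."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass  # connection established; close it straight away
        samples.append((time.perf_counter() - start) * 1000.0)
    return min(samples), sum(samples) / len(samples)

# Usage (illustrative host): best_ms, avg_ms = tcp_ping("example.com", 443)
```

Taking the best of several attempts filters out one-off spikes, while the average hints at jitter – the variation between round trips.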
Another way to observe changing network conditions is through adaptive bitrates and transcoding middleware. This encodes sections of material at different quality levels, before distributing the highest file size existing bandwidth capacity will support. The first few seconds of on-demand streaming are often quite pixelated and low resolution, while the recipient device builds a buffer of material and demonstrates its ability to accept higher quality files. Since each segment might be sent at a different quality according to network variations, those file quality adjustments provide a clear real-time benchmark of connection performance.
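The quality-selection step behind adaptive bitrate streaming can be sketched simply. The bitrate ladder below is hypothetical, and the 0.8 safety factor is an illustrative choice that leaves headroom so the playback buffer keeps growing:

```python
# Hypothetical bitrate ladder (kbps): one piece of content encoded at
# several quality levels by transcoding middleware.
LADDER_KBPS = [400, 1200, 2500, 5000, 8000]

def choose_rendition(throughput_kbps, safety=0.8):
    """Pick the highest bitrate at most safety * measured throughput."""
    budget = throughput_kbps * safety
    affordable = [b for b in LADDER_KBPS if b <= budget]
    return affordable[-1] if affordable else LADDER_KBPS[0]

choose_rendition(3500)  # a 3.5Mbps link affords the 2500kbps rendition
choose_rendition(300)   # a congested link falls back to the lowest tier
```

Re-running the selection as each segment downloads is what produces the visible quality shifts described above.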
The Latency Reduction Myths
It’s long been suggested that boosting average connection speeds will resolve latency issues. However, bandwidth and latency are not directly connected. Average internet connection speeds now run at 7.2Mbps, 15 per cent faster than last year. At the same time, more and more information is being piped to our homes as the Internet of Things takes off and streaming media replaces conventional broadcasting. And with the fragmentation of domestic entertainment habits across various media providers, connection speeds are arguably falling behind our voracious appetite for digital content.
Worms and viruses are often blamed for causing latency, even though they’re more likely to hog available bandwidth at the expense of other programs or devices. It’s worth running regular malware scans of root directories and folders, since Trojans can make identifying malware difficult. However, viruses are rarely designed to cause latency deliberately – they tend to trigger it as a side effect, by uploading hard drive contents to a malicious server or conscripting a device into a botnet for future DDoS attacks.
The Solutions for Reducing Latency
If more bandwidth and antivirus packages don’t offer solutions to latency, what can we do to minimize its effect on our homes and workplaces? Strange as it may seem, good housekeeping is an important place to start. Devices with full hard drives or a dozen open applications will struggle to process data as effectively as they otherwise might. Older Cat 5 Ethernet cables carry data at a maximum of 100Mbps, whereas Cat5e cables are up to ten times faster. There’s no point paying for ultrafast connectivity and then hobbling it with faulty telephone microfilters or interference from other wireless devices. It’s worth adopting hardwired client-side connections wherever possible, such as plugging a games console into the unused Ethernet ports of an adjacent Wi-Fi router. Similarly, Powerline adaptors pipe bandwidth from a router to electrical sockets far more quickly and dependably than wireless signals.
An obvious way of reducing latency involves algorithmic analysis of data routes. This replaces routing protocols that simply favor the shortest path with more sophisticated ones that analyze routing metrics in real time, sending information along whichever path currently offers the lowest latency. This is commonly undertaken by streaming media providers, who also deploy a number of complementary techniques. The adaptive streaming mentioned earlier ensures playback even under testing conditions, while the fragmented MP4 files used by the MPEG-DASH standard provide efficient data transfers compatible with most modern devices.
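In miniature, latency-aware routing is a shortest-path problem where the edge weights are measured delays rather than hop counts. The sketch below runs Dijkstra’s algorithm over a hypothetical three-node network; real routing protocols distribute this computation across many routers rather than solving it centrally.

```python
import heapq

def lowest_latency_path(graph, src, dst):
    """graph: {node: {neighbor: latency_ms}} -> (total_ms, path)."""
    queue = [(0.0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, ms in graph.get(node, {}).items():
            if nbr not in visited:
                heapq.heappush(queue, (cost + ms, nbr, path + [nbr]))
    return float("inf"), []

# The direct A->C link has fewer hops but higher delay, so the
# latency-aware route goes via B instead.
net = {"A": {"B": 5, "C": 40}, "B": {"C": 10}, "C": {}}
total, path = lowest_latency_path(net, "A", "C")  # 15ms via A->B->C
```

The example illustrates why “fewest hops” and “lowest latency” can disagree: two fast links beat one slow one.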
The forthcoming 5G revolution will radically improve cellular networks, with latency slashed to as little as 1ms and support for hundreds of thousands of connections per square mile. Upload speeds could be a hundred times faster than those offered by 4G, though we’ll have to wait a couple more years for 5G to make its debut in major cities across America. Underpinning everything from autonomous vehicles to remotely operated surgical appliances, it’s hoped 5G will provide the always-on connectivity we’ve long been promised outside our homes and offices.
Effective website design can also go a long way to reducing latency, from caching static page elements through to compressing file sizes to slash loading times. Every request to the site’s host servers for objects or files adds latency, so efficient page design with minimal elements is hugely beneficial. That’s good news for mobile users, who now comprise the majority of internet traffic; page loading speeds on mobile devices are evaluated by search engines and incorporated into ranking results, with complex or slow-loading pages marked down.
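Two of the levers mentioned above – compressing files and marking static elements cacheable – can be sketched as follows. The header values are illustrative defaults rather than a universal recommendation, and a production site would normally let its web server or CDN handle both.

```python
import gzip

def compress_body(body: bytes) -> bytes:
    """Gzip a response body to cut transfer time over slow links."""
    return gzip.compress(body)

def static_asset_headers(max_age=86400):
    """Headers telling browsers and CDNs to reuse a static file for a day."""
    return {
        "Cache-Control": f"public, max-age={max_age}",
        "Content-Encoding": "gzip",
    }

# Repetitive markup compresses extremely well
page = b"<html>" + b"<div>repeated content</div>" * 200 + b"</html>"
smaller = compress_body(page)  # far fewer bytes on the wire
```

A cached asset avoids a round trip to the host server entirely, which matters more than raw bandwidth on high-latency mobile connections.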
The Last Word: Hosting
Finally, it’s crucial to choose a hosting provider with high-speed data centers in key locations, to minimize the number of nodes each data packet travels through. WestHost’s Tier 3 data center near Salt Lake City has redundant power sources and multiple network carriers, for impressive network performance and reliability. We can’t abolish latency entirely, but we can dramatically reduce its impact by ensuring hosted servers have sufficient CPU power and memory to handle spikes in traffic. After all, latency below 25ms is effectively undetectable to humans – and that should be sufficient for most of our needs…