by Adam Gervin
How Network Performance Batters Your Bottom Line
We all know that bad networks – with high latency, latency variation (jitter), and packet loss – lead to user frustration, leaving your workers, and your customers, unhappy. That frustration leads to low usage and churn. Then lost opportunities, and lower profits.
If you're responsible for keeping your WAN Always-On, bad networks can give you worse sleep, and maybe even put your job at risk. So when we say that Mode is the No-Worry Network for your SD-WAN, how do you really know? When you look at that SLA and those performance guarantees, how does that really translate into performance, and how does that performance actually tie into profitability? Stick with me, dear reader, to find out.
Obviously, this is a pretty complicated series of questions. Time to give it some quantitative color.
Metrics and Technical Impact
Let's start with the network metrics that matter: latency, latency variation (jitter), and packet loss. Each of them can drag down performance – from standard web applications, to real-time applications, to bandwidth-intensive applications like backup and recovery. Some research even suggests that cloud service pricing is causally linked with these metrics. But their impact differs depending on the use case. Let's look into that.
Example: VoIP, videoconferencing
Also: Twitch/Mixer, Real-time Workloads, IoT Streams
Let's consider communications applications, like VoIP and videoconferencing.
Latency itself doesn't affect the quality of the delivered audio, but it can ruin a good conversation. At 100ms of latency, people start talking on top of each other. At 300ms, the conversation becomes unintelligible.
High latency variation is behind all those gloriously bizarre glitches that always seem to happen during the most critical part of a meeting. Hey, Adam, you sound like a robot underwater. Can you try a different network?
High latency variation can even lead to dropped packets when those packets arrive with excessive delay, causing the worst glitch of all – the dropout. I'm sorry Bob, we can't hear you. Can you check your headset?
Real-time communications are typically UDP-based, and for the most part the Internet and its routing designs treat UDP (and other small packet protocols) as second-class citizens. UDP packets are more likely to get dropped as a result of deprioritization. This, and drops due to timing errors and other causes, result in missing conversation and collaboration data, and more dreaded dropouts.
Quantitative to Qualitative
Collaboration application performance is often graded by the Mean Opinion Score (MOS), a universal metric for measuring and classifying the quality of VoIP calls and videoconferences. It ranges from 1.0 (low) to 5.0 (high), with 3.0 deemed the very limit of acceptability. For each 100ms of latency, the MOS score drops by one point. Since roughly 150ms is the physical latency limit for a round-the-world trip, you can see how very long distances automatically put voice and video quality at risk: such a session starts at a MOS of 3.5, and just 50ms of added latency drags the conversation down to the 3.0 limit of acceptable performance.
For example, a long-distance video conference with 100ms of physical latency needs only 100ms of added latency, or 50ms of latency variation, to push the collaboration session to the line of acceptability. In real time, every millisecond matters. Which is another way of saying the network matters, a lot, for real-time application performance.
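That arithmetic can be sketched in a few lines of Python. The one-point-per-100ms rule is the article's rule of thumb; counting jitter at double weight is an assumption on my part (a common effective-latency heuristic) that happens to match the 50ms-of-variation example above. The function name is illustrative, not a standard API.

```python
def estimate_mos(physical_latency_ms: float,
                 added_latency_ms: float = 0.0,
                 jitter_ms: float = 0.0) -> float:
    """Rough MOS estimate: one point lost per 100ms of effective latency.

    Effective latency = physical + added + 2 * jitter (assumed heuristic).
    Result is clamped to the MOS scale of 1.0 (low) to 5.0 (high).
    """
    effective_ms = physical_latency_ms + added_latency_ms + 2 * jitter_ms
    mos = 5.0 - effective_ms / 100.0
    return max(1.0, min(5.0, mos))

# 100ms physical + 100ms added latency -> right at the 3.0 acceptability limit
print(estimate_mos(100, added_latency_ms=100))   # 3.0
# 100ms physical + 50ms of jitter -> same effective result
print(estimate_mos(100, jitter_ms=50))           # 3.0
```

Under this sketch, a round-the-world session (150ms physical) starts at 3.5, exactly as described above.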
Even though we're talking voice and video, the effect is essentially the same for all real-time applications – one of the fastest-growing segments of enterprise applications: real-time collaboration and workloads, IoT streaming, and highly distributed data requiring fast assembly and analysis. But what about other enterprise use cases?
SaaS Applications
Modern SaaS application performance deteriorates steeply with latency variation beyond 80ms, and with packet loss beyond 0.5%.
Backup, Recovery and Large Files
Both backup and recovery are bandwidth-intensive and sensitive to network quality: backup degrades with latency variation beyond 25ms and packet loss beyond 0.75%, while recovery degrades with latency variation beyond 10ms and packet loss beyond 0.25%.
So while latency, latency variation, and packet loss are all killers for real-time application performance, they can affect other SaaS and IaaS performance just as easily.
Guess what? When you use the best-effort Internet with your SD-WAN, you put your business at risk. Why? Performance and security. Learn more about Mode security here.
According to a recent study (with over 320 million data points collected over four weeks, among 32 last-mile locations, 24 cloud instances, two cloud providers, and four continents):
- Mode SD-CORE reduced average latency over the Internet by 20%.
- It brought latency variation down by 79%.
- It virtually eliminated packet loss, dropping it by an average of 85%.
- And it costs about the same as business Internet. Go figure.
Latency, latency variation, and packet loss can crush all types of SaaS and IaaS performance, battering your bottom line. Mode is the No-Worry Network for your SD-WAN, letting you stress less and do more – BECAUSE we keep your WAN metrics far away from the danger zone.
Best of all, we can get your SD-WAN out of the danger zone in under sixty seconds. All you have to do is ask us how. And that's got to help you sleep better at night.