by Adam Gervin
I remember leaving camp with my parents on visiting day, late 70s, New Hampshire. It was hot and sticky and bright green out as we drove to the Dartmouth campus and the Kiewit Computation Center. Inside was cool and crisp. White and sterile, with the hint of a hum among the rows of machines. On display was connectivity, and it was mesmerizing.
A few years later, in 1982, I leveraged this family memory and asked my father for an Apple II for my bar mitzvah. It took five seconds flat for him to resist nostalgia and turn me down. By 1984 he had relented, and I had a shiny, new Mac on my desk. My first act: connecting with my Hayes modem over that day's X.25 network to CompuServe. Awe once more. Back then, just the idea of connectivity was inspiring, a blank slate of potential and inspiration, there for anyone to embrace.
Boy, have things changed. Reverence for connectivity has given way to frustration. Today, the network, the internet specifically, seems to be holding all of us back. Consumers curse their access providers when their show gets interrupted or their gameplay gets laggy and drops. Businesses have gone hybrid, running multiple networks despite the cost and complexity, because the internet just can't cut it alone. And app developers, particularly those who need resilient reliability and/or low or ultra-low latency (ULL) performance, have in many cases been forced to become their own network operators, all just to avoid the pitfalls of the open internet.
It's really not surprising. The internet was built to serve web pages, to run HTTP over TCP over IP. It wasn't designed for newer protocols like WebRTC, or for handling a large flow of small packets in a highly performant, consistent manner. The internet's core in particular is a best-effort service, with over 99.95% of latency variance happening in the first and middle miles. Add to this the fact that routing and peering have largely been designed to serve economics first and foremost, not performance.
No biggie, right? I mean, frustration with networking isn't entirely new, and we've always found ways to improve things to meet demand. Wireless is a good example, where app developers screamed for faster data rates. I can hear Andy Rubin banging his head in frustration at Danger, trying to get the Hiptop to work on that era's infuriating wireless networks. Those developers got a steady march of improved protocols, and faster and faster throughput. Problem solved. CDNs cached popular video files at the edge, and Netflix flourished. Problem solved.
This time is different. It's not just about throughput or proximity. It's about the fundamental layers of the OSI model. All of the clever tricks and optimizations, from WANOP, to compression, to pattern recognition, to tuning — none of them changes the fact that the way data is routed on the internet, and for that matter on all networks, has become the true limit to performance. If you believe that packets MUST always flow, and that data should travel at the limits of physical law, you have to completely rethink the way packet data has been routed to this point in time. And the ultimate result of that exercise is quite simple: autonomy.
Since the original ARPANET, packet data routing has been heuristic. That's a shame, because it turns out that the routing of packet data on a network can be defined as a control system, and its characteristic equations derived. Armed with this pure math truth, you can approach the theoretical limit of packet data routing performance. Implementing this discovery as a virtual router, and using it as the basis of a pure software-defined network, gives a packet-size- and protocol-agnostic boost to infrastructure efficiency of many multiples, and the near elimination of latency variance. Perhaps best of all, you get an inherent, autonomous parallelization of routing solutions, with each node self-optimizing in real time. Given ten, ten thousand, or ten million nodes, the routing ability of an SDN employing this algorithm approaches perfection regardless of scale.
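The essay doesn't publish the characteristic equations, but the control-system idea can be sketched in miniature: treat each node as a small feedback loop over its own latency measurements, penalizing jitter so traffic converges on stable paths rather than merely cheap ones. Everything below — the class name, the smoothing gain, the jitter penalty — is a hypothetical illustration of that general idea, not Mode's actual algorithm.

```python
class NodeController:
    """Toy per-node routing controller.

    Each node keeps a smoothed latency estimate (an exponentially weighted
    moving average) and a smoothed jitter estimate per next hop, then picks
    the hop that minimizes mean latency plus jitter. The jitter term is what
    makes the loop favor low *variance*, not just low average latency.
    """

    def __init__(self, next_hops, alpha=0.2):
        self.alpha = alpha  # smoothing gain of the feedback loop
        # (mean latency, jitter) per next hop; None until first sample
        self.stats = {hop: (None, 0.0) for hop in next_hops}

    def observe(self, hop, latency_ms):
        """Feedback step: fold one latency sample into the hop's estimates."""
        mean, jitter = self.stats[hop]
        if mean is None:
            self.stats[hop] = (latency_ms, 0.0)
            return
        deviation = abs(latency_ms - mean)
        mean += self.alpha * (latency_ms - mean)
        jitter += self.alpha * (deviation - jitter)
        self.stats[hop] = (mean, jitter)

    def best_hop(self):
        """Control action: route to the hop with the best mean + jitter score."""
        return min(self.stats, key=lambda h: self.stats[h][0] + self.stats[h][1])


node = NodeController(["A", "B"])
for latency in (30, 32, 31):      # hop A: slower on average, but steady
    node.observe("A", latency)
for latency in (10, 80, 12):      # hop B: lower mean latency, wildly jittery
    node.observe("B", latency)
print(node.best_hop())            # → A
```

Run on these samples, the controller picks the steady hop A even though B's average latency is lower, which is one way to read the essay's claim about the "near elimination of latency variance": each node optimizes autonomously, from local measurements, with no central coordinator.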
What does all this mean? A new era in routing is here, and it makes any network built around it performance-first. The efficiency it provides translates into economy as well, so you get reliability, resiliency, performance, and cloud flexibility — at a business-internet price point. Extending SD-WAN. Enhancing UCaaS. Embracing MPLS. Empowering ULL.
Mode is a new backbone for a new world. Often, a post-HTTP world. And for me, personally — connectivity is cool again.