Why Service Mesh Is The Key To Cloud Modernization

Service Mesh Is Critical To Digital Transformation & The Adoption Of Cloud-Native Infrastructure

While COVID-19 has disrupted virtually all aspects of life and business, it has also significantly accelerated "digital transformation", the ongoing information technology (IT) mega-trend. IDC's FutureScape: Worldwide Digital Transformation 2021 Predictions noted that, despite COVID-19, direct digital transformation investment is expected to approach $6.8 trillion by 2023. Digital transformation impacts enterprise IT from the applications consumed by lines of business down to the underlying infrastructure and IT estates upon which those applications are built. Looking specifically at IT infrastructure, digital transformation certainly includes migration to the cloud, a trend which continues to accelerate. It is, however, far more than mere cloud migration: it means implementing cloud-native architectures and instrumenting them so they can be managed effectively and efficiently. Making this change, in turn, profoundly affects almost everything about the enterprise operating environment, including: 

  • Development of Workloads – Application workloads will now be developed in an iterative, agile manner, as opposed to a sequential waterfall process.
  • Deconstruction & Heterogeneity of Workloads – Functionality in formerly monolithic applications will be deconstructed and decoupled into microservices, which are naturally more heterogeneous and operate at much higher scale.
  • Distribution of Workloads – These decoupled workloads become distributed and shift closer to the end user, with a focus on minimizing latency and maximizing uptime for end users/consumers.
  • Security Footprint of Workloads – These changes lead to a dynamic and distributed environment, where security risks increase and maintaining a zero-trust architecture becomes more challenging.

Migration To Cloud-Native Architectures Is An Iterative Process

While the journey to cloud-native architectures brings many benefits to an enterprise (e.g., cloud economics, increased scale, modularity for quick iteration, greater abstractions for developers), it also poses new challenges. When you deconstruct monolithic applications and break them into thousands of smaller services, complexity naturally increases. For instance, enabling service discovery and visibility into associated dependencies becomes paramount, as does scaling those services while continuing to meet SLOs for resiliency and availability for end users. Meanwhile, debugging services, while ensuring future releases are safe, becomes more complex.

Despite the advantages of this transformation, no large enterprise can or will simply "lift and shift" its entire IT estate. Too many dependencies exist to do a wholesale rip and replace, particularly for mission-critical applications that run the core of the enterprise and drive critical business operations. The footprint of these legacy IT infrastructure estates is massive, so just hitting the proverbial "reset" button is a non-starter. The key point: digital transformation and migration to cloud-native architectures is a gradual, iterative process, and it creates the need to intelligently bridge legacy and modern. Service mesh, we believe, will be not only a critical layer of modern infrastructure, but also this vital bridge.

What Is A Service Mesh?

A "service mesh" is fundamentally a combination of (1) a data plane of network proxies attached to services (typically as sidecars) and (2) a control plane that manages the "mesh" of proxies. The concept of a service mesh was driven by the need of web-scale companies like Lyft, Google, Twitter and Netflix to handle application traffic at very large scale. The traditional three-tier (web, application, database/storage) service architecture could not scale to the demand they faced. Instead, over time, they created a software infrastructure layer for controlling and monitoring internal, service-to-service traffic in microservices applications, giving operators the ability to ensure reliability, security and visibility. Just as containers and Kubernetes have standardized how application services are deployed, the service mesh can standardize how an application behaves at runtime. In doing so, it gives operations engineers the ability, among other things, to cache, load balance, firewall and control the flow of traffic per service and at scale. For more details, refer to William Morgan's excellent article The History of Service Mesh.
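
To make the data-plane half of that architecture concrete, below is a minimal, illustrative sketch in Go of the narrowest slice of what a sidecar proxy does: it listens on its own port next to an application container, intercepts HTTP requests, records basic telemetry, and forwards each request to the local application. The listen port and upstream address are hypothetical placeholders; a production data-plane proxy such as Envoy additionally handles mTLS, retries, load balancing and routing policy pushed down from the control plane.

```go
// Illustrative sidecar-style proxy (not Envoy): intercepts service traffic,
// records per-request telemetry, and forwards to the co-located application.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// Hypothetical upstream: the application container this sidecar fronts.
	upstream, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		// A real mesh proxy would also apply policy pushed by the control
		// plane (routing rules, mTLS, retries); here we only forward and log.
		proxy.ServeHTTP(w, r)
		log.Printf("method=%s path=%s duration=%s", r.Method, r.URL.Path, time.Since(start))
	})

	// Hypothetical sidecar port; service-to-service calls would be
	// redirected here instead of hitting the application directly.
	log.Fatal(http.ListenAndServe(":15001", handler))
}
```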

Why Is Service Mesh So Important & Why Now?

Service meshes are not new, so why are they receiving more attention now, and why is venture funding flowing into the segment? In our view, the answer is simple: they are a necessary part of the digital transformation process described above, a journey more and more enterprises are beginning. Much like the web-scale businesses that first created service meshes, enterprises are recognizing the need for observability as they move workloads from virtualized ("legacy") to containerized ("modern") environments, and then for the ability to manage their application traffic as it becomes more complex. They are moving from raw curiosity through the education ("help me!") stage to being ready for product adoption. We hear their recognition of these challenges in conversations with many customers, but more importantly, we see it in their actions. Their "pull" of service mesh solutions told us very clearly that now was the time to go to market with urgency. Service mesh is poised to become the window through which to observe service traffic as it moves from virtualized to containerized environments, as well as the virtual "spinal cord" of a dynamic and distributed cloud once transformation is complete.

Why Tetrate?

We first became aware of Tetrate after attending a variety of talks at KubeCon + CloudNativeCon North America in Seattle in December 2018. During our first meeting with Varun Talwar and Jeyappragash Jeyakeerthi (JJ), the co-founders of Tetrate, their vision was clear and deeply resonated with what we felt was a massive opportunity: accelerating the digital transformation of enterprises by safely bridging legacy and modern infrastructure… and we knew that this was the team to do it. Prior to co-founding Tetrate, Varun was the co-creator of Istio and gRPC at Google, and JJ led Twitter's Cloud Infrastructure Management Platform. As we got to know the extended founding engineering team better, we recognized that they represented a key group of core contributors and maintainers of Istio, Envoy and Apache SkyWalking (a distributed tracing project). The team also understood the power of a strong community supporting these projects, actively driving contributions into the upstream code bases and working with end users on downstream adoption. That community strength was evident as early as Service Mesh Day 2019, the first industry conference on service mesh, hosted by Tetrate.

At the time of Intel Capital's initial investment in Tetrate's $12.5 million Series A Round in March 2019, we were excited to back such a preeminent team, one with a deep understanding not only of how to drive community-first adoption, but also of the opportunity and the core components necessary to build an enterprise-class product over time. Since the Series A round, the Tetrate team has continued to support the community through upstream contributions to Istio, Envoy, SkyWalking and other projects, as well as by:

  • Building and launching GetEnvoy, an open source project to make it easier to adopt, use and extend the Envoy Proxy within the wider community;
  • Building and launching GetIstio, an open source project to make it easier to deploy, use and maintain certified Istio distributions; and
  • Partnering closely with NIST to define service mesh security standards and to implement NIST's authorization framework, Next Generation Access Control.

In parallel with these community efforts, Tetrate built and launched Tetrate Service Bridge (TSB), an enterprise-grade, comprehensive service mesh platform that provides a unified, consistent way to secure and manage services and traditional workloads across complex, heterogeneous deployment environments. Large Fortune 200 and government customers see this value clearly, and they are proactively reaching out to Tetrate to leverage TSB to simplify the adoption and instrumentation of a service mesh in the most demanding environments, gaining centralized management, multitenancy, audit logging, workflows, a global service inventory, comprehensive lifecycle management, and consistent configuration standards.

Tetrate's growth in 2020 made it clear that it was time to accelerate investment in this exciting company and space, and Intel Capital is thrilled to continue our support for the team as part of their $40 million Series B Round, which will provide more fuel for the amazing opportunity ahead. The new funding will also accelerate Tetrate's ongoing efforts to bring a SaaS version of TSB to market.