10 May 2021

Public cloud connectivity: does latency matter?

Due to changes driven by the pandemic, the number of companies migrating their critical applications to the cloud is growing rapidly. In these projects, connectivity is often neglected, to the detriment of the user experience…

According to the latest ISG Provider Lens report, the pandemic has fuelled the gradual move of companies’ workloads to cloud infrastructures.

However, it’s not migration towards a single hyperscale cloud that’s trending, but migration towards THE clouds, plural. To improve resilience, companies are distributing their workloads and data across several clouds, while retaining some on-site (especially legacy applications).

Costs are entering into the equation more than ever, and companies also intend to optimise expenses by using the competitiveness of the cloud landscape to their advantage. Some players have even started to market tools that evaluate the cost of a workload according to whether it runs on a private cloud or on one of the public clouds…

Driven by this shift of workloads towards cloud computing, France has seen growing demand for application modernisation services, interest in DevOps methods, and a need for support for cloud-native applications.

But when using the cloud to support the operation of specialist – often critical – applications, it is essential to take the latency of inter-cloud exchanges (public <-> public <-> private) into account.

Multi-cloud: beware of latency risks

It’s not uncommon to hear talk of latency between clouds, or even of delays in sending and receiving information to and from public clouds. This latency can degrade the performance of a system that relies on an overloaded internet connection, or of applications communicating between different cloud platforms such as AWS and Azure.
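Inter-cloud latency is easy to observe for yourself. As a minimal sketch – the hostnames below are placeholders, not real endpoints – the time taken to complete a TCP handshake gives a rough lower bound on the round-trip latency between your vantage point and each cloud:

```python
import socket
import time


def tcp_rtt_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Rough latency estimate: time to complete a TCP handshake, in ms.

    Returns -1.0 if the endpoint is unreachable within the timeout.
    """
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:  # covers DNS failures, refusals and timeouts
        return -1.0


if __name__ == "__main__":
    # Placeholder probe targets; substitute the endpoints your
    # applications actually talk to in each cloud.
    for host in ("app.region-a.cloud-one.example", "app.region-a.cloud-two.example"):
        print(f"{host}: {tcp_rtt_ms(host, 443):.1f} ms")
```

Comparing such probes from on-site, from one public cloud towards another, and towards the end users quickly shows where the inter-cloud paths are slow.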

To make the best use of applications requiring low latencies, best practice involves carrying out transactions as close to the operations as possible, most often on-site or in a nearby data centre. Geographically distributing the workloads of time-critical applications, in order to be as close as possible to the users, systems, data and commercial ecosystems (partners and clients) that interact with them the most, maximises the prospects of success. 

Enterprises’ strategies of choice for reducing latency include monitoring the infrastructure supporting the applications (and fixing any problems manually), shifting virtual workloads on to virtual clusters, and using hybrid storage arrays, Fibre Channel connectivity and low-latency network components.

Choosing a reliable connectivity partner

At the inter-cloud level, you must then be able to rely on Layer 3 network connectivity based on standard protocols, on Service Level Agreements (SLAs), monitoring, measurement of performance indicators (latency, stability, packet loss, etc.), and optimisation… The ability to extend the network reliably to new geographies and new Points of Presence, and to benefit from infrastructure located as close to the user as possible, must also be considered.
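Those indicators can be derived directly from raw probe results. A minimal sketch – the sample values are made up for illustration, and jitter is computed here as the mean absolute difference between consecutive samples, one common simplification – turning round-trip samples into the usual KPIs of mean latency, jitter and packet loss:

```python
import statistics


def link_kpis(rtts_ms: list) -> dict:
    """Summarise round-trip samples (None = lost probe) into SLA-style KPIs."""
    received = [r for r in rtts_ms if r is not None]
    if not received:
        return {"latency_ms": float("inf"), "jitter_ms": 0.0, "loss_pct": 100.0}
    if len(received) > 1:
        # Mean absolute difference between consecutive samples.
        jitter = statistics.fmean(
            abs(a - b) for a, b in zip(received, received[1:])
        )
    else:
        jitter = 0.0
    return {
        "latency_ms": statistics.fmean(received),
        "jitter_ms": jitter,
        "loss_pct": 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms),
    }


# Made-up samples: four probes answered, one lost.
print(link_kpis([12.0, 14.0, None, 13.0, 15.0]))
```

Tracked over time and per destination, these figures are what allow an SLA to be verified rather than merely promised.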

When companies are looking to set up low-latency, multi-cloud infrastructures with top performance and high availability, partnerships with connectivity suppliers become essential. “Good enough”, which is to say a network experience based on internet connections, isn’t an option for the companies that are accelerating their digital transformation in light of the needs of a post-Covid digital economy.

Pulling off this transformation requires transparent connectivity: a dedicated, private, reliable and highly secure connection between the various sites and critical applications, with unlimited regional coverage.


The original article was published here in French.