By Matthew Lempriere, Head of Asia Pacific, BSO
When low-latency networks started to grab headlines about a decade ago, most of the focus was on how straight your line could be. It was all about large-scale construction projects, whether that meant blasting through mountains with dynamite or taking advantage of old radio towers.
The name of the game was going from point A to point B in the shortest way possible, so simply being able to build a network with minimal deviation from the line represented a huge step forward.
Low-latency trading, of course, still depends on having the straightest, most efficient network possible. But we’ve come a long way, and some of the more interesting conversations these days concern what you do with the network in terms of performance and resiliency. These issues keep coming up in panel discussions I’ve taken part in and in conversations with customers looking to push the boundaries.
One of the areas where the boundaries are being pushed furthest is the FPGA space. Using FPGA technology to achieve maximum bandwidth, for instance, holds the promise of doing much more on the network, especially given the relentless growth in data usage.
In fact, as FPGA technology has spread into many new areas outside low-latency trading, costs have come down dramatically. Some chips that would have cost nearly ten times as much a few years ago now go for about $2,000.
But what’s particularly interesting about FPGA is how it is merging with cloud technology to give network builders and trading firms new options. By putting FPGAs in the cloud, a firm can do its development and testing there before deploying physical hardware. The growth of FPGA has also been accompanied by a lot of latency-related open-source technology, potentially making it even more cost-efficient to develop FPGA-based solutions.
Data distribution needs, link management and bandwidth optimisation are some of the issues that FPGA technology can help firms address. Those are common problems that need to be considered in any network strategy, over and above the basic A-to-B latency question.
The key thing is that a network provider needs to build the network that suits the customer’s needs best. For those market participants that wish to pay a premium, that may mean being the absolute fastest. For a wider market, that may mean still being extremely fast but possibly taking a more cost-effective route that is more focused on capitalising on hardware advances to achieve a quantum leap in performance.
Radio frequency networking continues to be a big focus, with firms showing a lot of interest in integrating RF networks with traditional fibre. There are also some emerging technologies we’re keeping an eye on. Low-earth-orbit satellites may have potential, although the non-deterministic aspect of their use at the moment presents some challenges. Long wave and hollow-core fibre are also interesting developments.
For now, though, the future still revolves around fibre. Guaranteed performance at scale is something low-latency traders simply must have, so a residual challenge lies in constantly finding new routes tailored to companies’ trading footprints.
Building those pathways is one part of the challenge, and doing more once they’re built is another. Network suppliers and hardware makers have shown they can do a lot together to develop low-latency equipment that is fit for purpose and resilient. That’s definitely an area where we would expect to see more attention.
It all adds up to a much richer and more dynamic conversation than when low-latency networking first emerged. We still need to get from A to B, but we can think about it much differently when we start factoring in the cloud, FPGA, hardware developments, open source and a whole raft of emerging technologies.
Does your network meet your evolving needs?