One of the major tenets of Regulation NMS (Reg NMS) is the Order Protection Rule (“OPR”), known as Rule 611. In essence, it requires that market participants adopt, maintain and enforce policies and procedures reasonably designed to avoid trading through a better-priced quote published by another market center. As a practical matter, the rule requires the routing of orders to access the best price available in the marketplace. The rule also requires that market centers wishing to have their quotes protected and routed to must be automated. This was a direct shot across the bow of the NYSE when enacted, because it forced the NYSE to prioritize electronic order flow over its traditional manual auction market traders.
When the rule was enacted, it had the unintended consequence of rewarding speed over all other aspects of execution quality, as market participants competing for trade executions deployed ever-faster routing and execution systems to access better-priced away quotes. As a result of that emphasis on speed, in the mid-to-late 2000s, with the introduction of Reg NMS into the national market system, the trading ecosystem became transfixed with latency. Latency determined how quickly an order could access a quote, and therefore liquidity; how quickly a quote or order could be canceled, and therefore risk management and exposure; and, ultimately, how up-to-date one’s technology was.
Those that entered the fray with older technology, such as mainframe computers or older network technologies, found themselves continually unable to access liquidity before competitors did or, worse, unable to cancel quotes quickly enough and therefore left exposed in the market.
Eventually, most market participants upgraded their technologies, the playing field leveled among competitors, and latency arbitrage became less of an issue. But before that, firms drilled holes through the mountains of Pennsylvania, invested in microwave technology, or bought the fastest chips available in order to be the fastest. Those technologies have since become commoditized.
Market centers, realizing the need to keep their market makers available for the long term, offered the ultimate in latency-advantaged access: co-location of participants’ servers in the market centers’ data centers. The major equity market data centers have been stable for over a decade and are all located within a 250-mile radius in the northeast, so the latency calculus is well understood among players. Co-location offered firms willing to pay, market makers and others, the opportunity to be as close to the “front door” as possible. Regulators, realizing the advantageous nature of co-location, required market centers to emulate a delay equal to the difference between the front door of a market center and the matching engines and data publishers within it, so as not to provide a microsecond advantage to co-location customers. Market centers were able to accomplish that and thereby achieve an equalization effect.
The goal was to create a market that was fair to all who were willing to invest in the technology necessary to reach the best prices for customers, and one that rewarded displayed limit orders.
The Cloud
Cloud technology decentralizes, democratizes and potentially disrupts the ecosystem that currently exists by changing the narrative around latency. The very nature of cloud deployments has at its center multiple large data centers that are geographically diversified. Latency, if nothing else, is a measure of how fast light can travel through “glass” over specific distances, so at its heart it is about geography and distance.
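To put rough numbers on that relationship: light moves through fiber at roughly two-thirds of its speed in a vacuum, which works out to about five microseconds of one-way delay per kilometer of glass, before any switching or processing. The sketch below is only an illustration of that arithmetic; the refractive index and the distances are assumptions, not measured fiber routes.

```python
# A minimal sketch of the distance/latency arithmetic, using assumed values.
SPEED_OF_LIGHT_VACUUM_KM_S = 299_792          # km per second in a vacuum
FIBER_REFRACTIVE_INDEX = 1.47                 # assumed typical single-mode fiber
SPEED_IN_FIBER_KM_S = SPEED_OF_LIGHT_VACUUM_KM_S / FIBER_REFRACTIVE_INDEX

def one_way_latency_us(distance_km: float) -> float:
    """Theoretical one-way propagation delay through fiber, in microseconds."""
    return distance_km / SPEED_IN_FIBER_KM_S * 1_000_000

# Roughly 5 microseconds per kilometer of glass, ignoring equipment delays:
for km in (1, 50, 350):
    print(f"{km:>4} km  ->  {one_way_latency_us(km):8.1f} microseconds one way")
```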
One of the many positives of cloud implementations is that data can be easily replicated and made fully redundant, so disaster recovery becomes less of an issue in a cloud environment. In some cloud instances, the data necessary to run programs, and the programs themselves, might run in different locations from time to time.
One of the key things electronic traders focus on is consistency of experience. If it takes 5 microseconds to access a market or cancel a quote, they rely on that figure to implement their strategies. Without that certainty, a strategy becomes less reliable and potentially less profitable.
Therefore, a cloud implementation of a market center potentially increases the latency to reach that market center and its best prices, and may also change the consistency of that latency, which would be anathema to high-speed traders.
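A toy illustration of that point, with entirely made-up numbers: a strategy tuned to a steady 5-microsecond experience behaves very differently when the average is not only higher but arrives with a wide spread. The samples below are assumptions for illustration, not measurements of any venue.

```python
# Hypothetical latency samples (microseconds): same kind of measurement,
# very different consistency. All numbers are invented for illustration.
from statistics import mean, pstdev

colocated_us = [5.0, 5.1, 4.9, 5.0, 5.1, 4.9, 5.0, 5.0]    # tight, predictable
cloud_us     = [3.2, 8.7, 4.1, 9.5, 2.8, 7.9, 3.5, 10.3]   # higher on average, far less consistent

for name, samples in (("co-located", colocated_us), ("cloud (assumed)", cloud_us)):
    print(f"{name:>16}: mean {mean(samples):5.2f} us, jitter (std dev) {pstdev(samples):5.2f} us")
```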
But market centers looking to get out of the expensive data center business may find that a cloud implementation provides cost savings compared to running their own data center and disaster recovery center, and in certain failover and fallback scenarios.
The SEC has already approved a delayed experience in the form of the Investors’ Exchange (IEX), where, under the rubric of customer protection, the exchange implemented a 350-microsecond delay for every message entering the system to “even the playing field” and diminish the advantage that more technologically advanced order routers may enjoy when accessing other market centers. IEX, to the best of my knowledge, does not impose that delay on outbound messages to other market centers. But regulators are very focused on co-location services, making sure that there are no intra-market advantages.
The Dilemma
If one market center were to move to the cloud without any other market center moving, it would automatically change the latency dynamic that exists today (e.g., changing the distance a message would have to travel to or from a cloud data center in New Jersey or Virginia or in a different zone). And since latency is defined as the time it takes a message to traverse a certain distance, the further away the data resides, the more latent the market center will be. For high-speed traders, the experience may also vary, and therefore become unpredictable and unprofitable.
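A back-of-the-envelope comparison, under the same assumptions as the earlier sketch: a participant sitting near the existing New Jersey data centers reaching a venue that has migrated to a cloud region in Northern Virginia. The distances are rough straight-line guesses; real fiber paths run longer.

```python
# Hypothetical before/after comparison for one venue moving to the cloud.
# Distances are assumed straight-line figures, not actual fiber routes.
SPEED_IN_FIBER_KM_S = 299_792 / 1.47      # ~204,000 km/s through glass

def one_way_us(distance_km: float) -> float:
    return distance_km / SPEED_IN_FIBER_KM_S * 1_000_000

nj_to_nj_venue_km = 15      # assumed: short hop between New Jersey data centers
nj_to_virginia_km = 350     # assumed: New Jersey to a Northern Virginia cloud region

print(f"today (NJ to NJ venue):     ~{one_way_us(nj_to_nj_venue_km):,.0f} microseconds one way")
print(f"after migration (NJ to VA): ~{one_way_us(nj_to_virginia_km):,.0f} microseconds one way")
```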
In terms of co-location, many high-speed traders are highly secretive about their technologies and sometimes hide their servers within cages at the market centers’ data centers. In addition, they may not allow market center personnel to touch their servers, preferring to service the technology themselves. In a cloud co-location model, it is not clear that a high-speed trader could ever achieve that level of control. And if the cloud provider is constantly moving data and servers from location to location, then the latency experience will differ, and so will the service needs.
From a market center perspective, while a cloud implementation may deliver cost savings, market centers would have to weigh the added latency for their own market center and for their own routing to best prices on behalf of their customers. In addition, they would have to consider the impact on co-location revenues.
Possible Remedies
In a perfect world, every exchange would be required to move to the cloud and latency would be less of an issue. Unpredictability would still be a concern unless the cloud providers hardened their market center instances, meaning the data and applications would not move from location to location; they would be static. Disaster recovery would still be a net positive, but the DR scenarios would have to be hardened too.
Cloud redundancy would be an issue, and there would have to be some regulatory coordination and agreement on how many cloud companies could host exchanges and how they back each other up for business continuity and disaster recovery. Today, the SEC requires that NYSE and Nasdaq back each other up, in effect forcing a de facto data center backup plan as well. I see no difference in the cloud scenario; that is, the SEC would potentially require multiple cloud providers.
Latency becomes less of an issue if the SEC removes the Order Protection Rule from Reg NMS and puts in its place a best-price rule that places the responsibility on the shoulders of firms. This would alleviate the need for all the routing that takes place today among market centers and from firms to market centers. Many market prognosticators have called for this change as a potential cure to today’s market structure ills, but it is unclear whether the SEC has the willingness to make such a sweeping change. It is also unclear what unintended consequences such a change would bring. So, building a cloud strategy with this potential change as a key component would seem to be less than optimal.
Could there be latency-mitigating technology solutions not yet introduced into the marketplace that might resolve some of these issues? Possibly. Could cloud providers harden their offerings for specific customers like market centers and co-locators? Possibly. Would the SEC require everyone to move to the cloud? Improbable, and even if the SEC were to do so, the implementation of such a migration would be costly and time-consuming. The SEC would also face the conundrum of how to sequence such a migration, since the market participants that moved to the cloud first would be at a latency disadvantage to those that continued to operate, however temporarily, under the current system. Will the SEC replace the Order Protection Rule? Not without much debate. Market participants, having invested heavily in developing speed-driven order routing and execution technologies, may be opposed to such a move, especially if it leaves firms that migrate to the cloud at a competitive disadvantage against non-migrating firms.
Are the financial benefits of moving to the cloud deep enough for market centers to offset lost co-location revenues? Possibly.
These issues are thorny and highly interconnected, and one navigates them at one’s own peril. It is clear that a change at one end of the ecosystem produces winners and losers. Better to know all the permutations before making a change, advocating for a change, or becoming the collateral damage of a change.
Global Markets Advisory Group is a financial markets advisory partnership focused on the intersection of compliance, technology and operations.