By Sara Peters, Enterprise Efficiency
17 January 2013
URL: http://www.enterpriseefficiency.com/author.asp?section_id=1134&doc_id=257568&
All trading on the stock market comes with risk. The potential for technological errors and disruptions in stock exchanges is now just a standard variable in the risk equation. Traders have to account for it when they’re deciding which exchanges to use. And the exchanges have to account for it when they’re deciding how to strike the right balance between the need for speed and the potential for error.
In the past few weeks, the U.S. stock exchanges have been plagued with technological snafus — including multiple service disruptions (one that lasted nearly an hour), data reporting system malfunctions, and the discovery of a programming mistake that, over the past four years, caused a total of roughly 435,000 trades to be executed at the wrong price. Last week, the New York Times documented the recent string of mishaps. From the story:
Regulators and traders have said that malfunctions are inevitable in any complex computer system. But many of these same people say that such problems were less frequent before the nation’s stock exchanges were thrown into a technological arms race in the middle of the last decade as a host of upstart exchanges like BATS [Global Markets] challenged incumbents like the New York Stock Exchange.
Is this “technological arms race” a real thing, and if so, what are CIOs at stock exchanges, and in the financial services industry at large, supposed to do to minimize the negative impact this arms race is having on trading software quality assurance?
“It’s real,” says Steve Rubinow, CIO of FX Alliance and former CIO of NYSE Euronext, “because for years now, the customers have said that they can have an advantage if they can be a fraction of a second in front of someone else.”
To stay competitive, stock exchanges have pushed the limits of technology to shave down the time it takes to complete a transaction to mere microseconds. Therefore, any software error or service disruption can make a big impact very quickly.
“Considering the fact that these systems have millions of transactions a second, with individual component response times of microseconds,” said Rubinow, “while I’m not saying that problems are ever acceptable, I would say that the track record overall isn’t bad, given the proper context.”
Would investors agree? Or does every little technological slip-up chip away at investors’ confidence, and ultimately have a damaging impact on the whole market?
“As a general principle I would say no,” said Rubinow. “When things go awry the effects can be profound. However, one has to assume that there are going to be no perfect systems, that they will fail from time to time or just act oddly, so you must be prepared to properly address most reasonable eventualities.”
Fair enough. Risk management is a trader’s bread and butter, after all, so they should be able to handle the possibility of things going awry on the software front. Nevertheless, if you’re the CIO of a stock exchange, you’d prefer it if things went awry with somebody else’s software — preferably one of your competitors’. The best way to avoid system errors, of course, is to do a stellar job at quality assurance and lots and lots of testing. But that’s easier said than done. Because the stock market is so complex and intertwined, thorough testing means you have to consider not just your own internal systems and situations, but also the systems and situations of all the other players in the marketplace.
“It is impossible to test for all possible conditions,” said Rubinow. “And you can’t take forever, because you’re trying to be competitive.”
Rubinow said that trading software could be made more pristine if everyone conducted more elaborate testing over longer periods of time and collaborated with the other players in the market ecosystem. But, “it would take more time and involve more people, it would be more costly, and still we couldn’t simulate everything, so it is unlikely that the system would be bulletproof.”
Well, there goes that idea. If that degree of elaborate testing isn’t possible, at least make sure that your developers get the software as close to perfect as possible before it’s rolled out… and put in controls that let you know when it proves itself to be imperfect.
Despite business pressures for speed, Rubinow warns CIOs and development teams against falling into the trap of pushing out inadequately tested code just because they’re pressed for time. “Code has to be robust when it leaves a developer’s hands and goes to the quality assurance department,” he said. “Quality and testing is everyone’s responsibility.”
“Organizations need to have risk mitigation tools and controls that kick in,” said Rubinow. “It’s important to be able to recognize when something is out of the ordinary, and then the system should first signal it, or if it’s so egregious then it should be shut off until what’s going on is understood and the market can return to normal.”
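Rubinow doesn’t describe how any particular exchange implements such controls, but the shape of the mechanism he outlines (watch a metric, signal when it looks out of the ordinary, shut off when it’s egregious, and resume only after review) can be illustrated with a minimal sketch. The names, thresholds, and statistics below are purely hypothetical, chosen for illustration rather than drawn from any real trading system:

```python
# Hypothetical sketch of a risk control of the kind Rubinow describes:
# monitor a metric, signal anomalies, and halt until operators clear it.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass, field
from statistics import mean, stdev


@dataclass
class RiskControl:
    warn_sigmas: float = 3.0      # signal when a value drifts this far from the mean
    halt_sigmas: float = 6.0      # shut off when the deviation is egregious
    history: list = field(default_factory=list)
    halted: bool = False

    def observe(self, value: float) -> str:
        """Record a new observation (e.g., order rate or price deviation)
        and return the action the system should take."""
        if self.halted:
            return "halted"          # stay off until resume() is called

        self.history.append(value)
        if len(self.history) < 30:   # not enough data yet to judge "ordinary"
            return "ok"

        mu = mean(self.history[:-1])
        sigma = stdev(self.history[:-1]) or 1e-9
        deviation = abs(value - mu) / sigma

        if deviation >= self.halt_sigmas:
            self.halted = True
            return "halt"            # egregious: shut off and alert operators
        if deviation >= self.warn_sigmas:
            return "signal"          # out of the ordinary: flag for review
        return "ok"

    def resume(self) -> None:
        """Called by operators once the anomaly is understood and resolved."""
        self.halted = False
```

The point is not the statistics, which real exchanges handle with far more sophisticated monitoring, but the sequence of the control itself: detect, signal, halt, and return to normal only after what’s going on is understood.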
Another option, of course, is to get scared and steer clear of all these technological advancements that make trading faster. But Rubinow would advise against being too risk-averse.
“Those who are swift of foot and can run with the changes, they have all kinds of opportunities,” said Rubinow. “If you stop to ponder for a few years you’re going to be left in the dust.”
And why get left in the dust when there are truly good reasons to keep up with the changes?
“Implying that it’s an arms race minimizes the fact that the advances in technology and speed have had advantages, for example delivering efficiency for investors,” said Rubinow. And he doesn’t see the push towards greater speed lessening. “As long as there is a financial incentive, people are going to continue to chase it.”