An online guide to more efficient automated FX trade execution for buy-side firms

How to remain compliant throughout the FX algorithmic trading process

By Howard Grubb and Stephan von Massenbach, Directors at Modular FX Services

The use of algorithms in FX trading to interact with electronic wholesale market venues is widespread and, increasingly, this is for “algorithmic order execution”, which uses pre-defined rules to dynamically match large execution requirements to the (instantaneously) available liquidity. The use of such Execution Algorithms is likely to increase further as less liquid instruments (NDFs, FX swaps) become more widely available on electronic trading venues.

Algorithms may be used for the decisions to trade (“signal generation”), which primarily capture the “business logic” to achieve the execution requirement, as well as for the interactions with the market (“order placement”), which are the “mechanical” steps involved, incorporating, for instance, local (matching) rules of each trading venue and order state management during executions. There are various aspects of regulatory compliance and good practice to consider throughout any Algorithmic Trading application. Users must ensure that these are met before deployment, and must monitor the algorithm’s behaviour and performance during execution.

Users of algorithms may apply an in-house developed and tested solution, or be consumers of algorithmic services from other Providers (“sellside”). Providers may also make use of third-party components in their algorithmic infrastructure (“outsourcing”). Where a User makes use of a Provider’s algorithms, or of third-party components, they retain the regulatory responsibility for their use, and should ensure that compliance can be suitably evidenced by the Provider(s), in line with the Principles of the FX Global Code.

Providers of Execution Algorithms may offer Direct Market Access to (“buy-side”) Users, together with their algorithmic capability, or just access to their “Systematic Internaliser” against which the User can execute. In the latter case, the Execution Algorithms may not access wholesale markets directly – the algorithm is simply a complex order type between two counterparties – so that regulatory aspects relate more to the agreed trading relationship between the User (client) and the Provider (counterparty).

In this article we will consider some more detailed technical aspects of Algorithmic Trading, which can be considered as “good practice”. In addition to specific regulatory requirements (for example, the “Regulatory Technical Standards” of ESMA), regulatory regimes require that Algorithmic Trading be conducted by people with suitable competencies and knowledge. Good practice, throughout the algorithmic process, therefore falls under the Principles-based aspects of regulation.

It is helpful to break the Algorithmic Trading process down into some generic stages:

  1. Design/choice of algorithm
  2. Calibration of parameters
  3. Testing, deployment
  4. Live monitoring during execution
  5. Offline review, recalibration

We will consider each stage of the process below, but first a general point on data, that applies to all stages.

Data

The availability of appropriate market data is a critical element and, together with the business logic, the specific data used forms a part of any Algorithmic Trading application. It is important to bear in mind four aspects of the data, to understand the sensitivity and robustness of the algorithmic rules:

  1. There should be sufficient data to support decisions at each stage. We discuss this briefly below.
  2. The data capture should be consistent with the execution context. While a decision-making algorithm (“signal”) can be developed against aggregate/sampled data, specific calibration for execution should be undertaken with data that are representative of the microstructure of the market that the algorithm will face (order book structure, timing, sampling, intensity).
  3. The data should cover a range of reasonably likely market conditions. If these have not been seen in the historic data available, “sensitivity analysis” is desirable to understand how the algorithm choice and parameters will change under different conditions. Note that the historic data “envelope” used in calibration is the only set of conditions under which the algorithm can be considered reliable.
  4. There will also be assumptions in any “backtest” or calibration – such as rate of market liquidity refresh, likely order fill-ratio, slippage, etc. These should be made explicit and their effect considered, as illustrated in the sketch after this list.
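
To make point 4 concrete, here is a minimal, purely illustrative Python sketch (not from the original guide): a toy cost model in which the assumed fill ratio and per-fill slippage are varied over a grid, to show how much a backtested outcome can move with these assumptions. All function names, quantities and prices are hypothetical.

```python
# Minimal sketch (not a production backtest): vary the assumed fill ratio and
# slippage in a toy cost model to see how sensitive the backtested outcome is
# to these assumptions. All numbers and names are illustrative.

def backtest_cost(order_qty, child_qty, mid_prices, fill_ratio, slippage_bps):
    """Estimated cost (in bps vs the first mid) of slicing order_qty into child
    orders of child_qty, given an assumed fill ratio and per-fill slippage."""
    filled, cost, i = 0.0, 0.0, 0
    arrival = mid_prices[0]
    while filled < order_qty and i < len(mid_prices):
        qty = min(child_qty * fill_ratio, order_qty - filled)
        px = mid_prices[i] * (1 + slippage_bps / 1e4)   # pay the assumed slippage
        cost += qty * px
        filled += qty
        i += 1
    avg_px = cost / filled if filled else float("nan")
    return (avg_px / arrival - 1) * 1e4                  # cost in bps

# Illustrative mid-price path; in practice this would be recorded market data.
mids = [1.1000 + 0.00005 * k for k in range(200)]

for fill_ratio in (0.5, 0.75, 1.0):
    for slippage_bps in (0.1, 0.3, 0.5):
        c = backtest_cost(10_000_000, 500_000, mids, fill_ratio, slippage_bps)
        print(f"fill_ratio={fill_ratio:.2f} slippage={slippage_bps}bps -> {c:.2f}bps")
```

If the estimated cost changes materially across plausible assumption values, the backtest result should be treated with corresponding caution.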

We now consider the issues at each stage of the process.

Design/choice

It is important for Users to have clear, quantifiable objectives in using an algorithm, so that the choice, calibration and monitoring can meet these. This will involve understanding the available liquidity and its characteristics, as well as the execution requirement and its degrees of freedom (quantity, timing, acceptable slippage and shortfall). “Best execution” requirements need to be taken into account in these objectives, with regard to the liquidity (which may differ by the User’s specific view of any trading venue), as well as to how the suitability and performance of the algorithm will be judged and reviewed after execution (below). This means that alternative execution pathways or liquidity sources should also be considered, to ensure that algorithmic execution is suitable to achieve the objectives.

There should be sufficient historic market data available to inform the choice of algorithm (as well as for the subsequent calibration step). This could be approached as a statistical concept (formal “significance”), but at the very least, consideration should be given to whether the data are sufficient to distinguish between different algorithm choices (such as a “null” or simple alternative approach), to avoid unduly complex algorithms.

Also, the data used should adequately cover market conditions that might be expected to be encountered (times of day and historic patterns). This may be difficult to achieve with available data, but some degree of “sensitivity testing” is prudent, to understand whether the choice of algorithm is robust under different market conditions. As mentioned above, any assumptions used should be made explicit, and the sensitivity of the algorithm choice to these assumptions investigated.
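
As a hedged illustration of comparing a candidate algorithm against a “null” or simple alternative, the hypothetical Python sketch below bootstraps the difference in mean slippage between the two. The slippage figures and function names are invented for illustration only.

```python
# Minimal sketch, assuming per-execution slippage figures (in bps) are available
# for a candidate algorithm and for a simple "null" alternative such as a plain
# TWAP schedule. A bootstrap of the mean difference gives a rough sense of
# whether the available data can actually distinguish the two choices.
import random
random.seed(0)

candidate_bps = [0.8, 1.4, 0.5, 1.1, 0.9, 1.6, 0.7, 1.2]   # illustrative numbers
baseline_bps  = [1.0, 1.5, 0.9, 1.3, 1.2, 1.7, 1.1, 1.4]

def bootstrap_mean_diff(a, b, n_boot=10_000):
    diffs = []
    for _ in range(n_boot):
        ra = [random.choice(a) for _ in a]
        rb = [random.choice(b) for _ in b]
        diffs.append(sum(ra) / len(ra) - sum(rb) / len(rb))
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

lo, hi = bootstrap_mean_diff(candidate_bps, baseline_bps)
print(f"95% interval for mean slippage difference: [{lo:.2f}, {hi:.2f}] bps")
# If the interval straddles zero, the data do not support preferring the more
# complex algorithm over the simple alternative.
```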

Calibration

Calibration is where the parameters of the particular chosen algorithm are optimised to the intended execution objectives. The same rules (“business logic”) may behave quite differently when calibrated to, or encountering, different data sets (even for the same financial instrument).

Again, it is important to ensure sufficient, good quality, data are available, covering a realistic range of market conditions, and that the objective function used for the calibration is defined clearly. Ideally these data should be independent of those used to choose the algorithm, perhaps using statistical “resampling”, or partitioning of the available data.

The calibration data should be captured under the same conditions as those under which the algorithm will be executed (live market data). Some historic data sources (which can be suitable for benchmarking, or to develop high level signals) may differ in timing, or aggregation/sampling, from the specific live data feed used for execution. These differences can have an important effect on executions. For Users of others’ algorithms, this may require the calibration to be performed and evidenced by the Provider.

The sensitivity of the calibration process to various perturbations of the data (e.g. timing variance/offset) should be explored to give confidence in its robustness. Similarly, resampling will show the sensitivity of the parameters to the particular set of data used, and any assumptions can be varied within plausible values to understand the effect of these on the resulting parameter settings.

The use of the algorithm should be restricted to those market conditions that are represented in the historical calibration data (defining an “operational envelope”) and to the defined execution requirement, otherwise behaviour may be unknown, particularly for complex, non-linear or dynamic algorithmic models. Changing the requirement or parameters (for instance, by manual intervention during operation) effectively invalidates any calibration, and the algorithm’s behaviour with these new settings may be unknown. It is therefore essential to test the behaviour under reasonable changes from the chosen settings, and, given sufficient data, to calibrate separately to datasets partitioned into different market scenarios. Parameters may then be switched between pre-calibrated sets, if needed. This requires an understanding of the market characteristics that an algorithm will be sensitive to, and defining suitable metrics and conditions under which to make such a switch. This approach is effectively a meta-algorithm, the switching conditions of which need explicit consideration.
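
To illustrate the resampling point, here is a minimal hypothetical sketch: a one-parameter algorithm (a participation rate) is recalibrated on bootstrap resamples of toy historical episodes, and the spread of the calibrated parameter indicates how fragile the calibration is with respect to the particular data used. The cost model and all numbers are invented for illustration.

```python
# Minimal sketch of resampling a calibration, assuming a one-parameter algorithm
# (a participation rate) and a toy per-episode cost model. The point is not the
# model itself but the spread of the calibrated parameter across resamples.
import random
random.seed(1)

# Illustrative historical "episodes": (market volatility, available liquidity).
episodes = [(random.uniform(0.5, 2.0), random.uniform(0.5, 1.5)) for _ in range(100)]

def episode_cost(rate, vol, liq):
    # Toy trade-off: trading faster increases impact, trading slower increases risk.
    impact = rate ** 2 / liq
    risk = vol * (1 - rate)
    return impact + risk

def calibrate(sample):
    grid = [i / 100 for i in range(5, 96)]
    return min(grid, key=lambda r: sum(episode_cost(r, v, liq) for v, liq in sample))

calibrated = []
for _ in range(200):                                    # bootstrap resamples
    sample = [random.choice(episodes) for _ in episodes]
    calibrated.append(calibrate(sample))

calibrated.sort()
print(f"median rate {calibrated[len(calibrated)//2]:.2f}, "
      f"range {calibrated[0]:.2f}-{calibrated[-1]:.2f}")
# A wide range signals that the calibration is fragile with respect to the data.
```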

Testing

Testing and deployment for algorithms should concentrate on the market circuit breakers and other controls that ensure orderly execution in any market environment. This is in addition to standard “conformance” testing, which would be necessary for any normal systems deployment. It is important to include non-standard situations – such as missing data, crossed/one-sided market prices, timestamp errors – which, while rare in live market connections, may nevertheless occur in edge cases, or during extraordinary events. Controls and circuit breakers (which also use these data connections) need to be robust to such deviations. Should there be any controls to be used during the operation of an algorithm (a “kill switch”, but also potentially switching parameters under pre-tested scenarios), then the behaviour of these should be tested, along with the metrics and triggers used to decide on their activation.
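
As a hedged example of the kind of non-standard data that such tests should exercise, the hypothetical sketch below validates incoming ticks for missing sides, crossed/locked prices and non-increasing timestamps; field names and values are illustrative, not taken from any particular venue.

```python
# Minimal sketch of the data-sanity checks that testing should exercise:
# a validator that flags missing fields, crossed or locked prices, and
# timestamps that run backwards. Field names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tick:
    ts: float                 # epoch seconds
    bid: Optional[float]
    ask: Optional[float]

def validate(tick: Tick, prev_ts: float) -> list:
    issues = []
    if tick.bid is None or tick.ask is None:
        issues.append("one-sided or missing price")
    elif tick.bid >= tick.ask:
        issues.append("crossed or locked market")
    if tick.ts <= prev_ts:
        issues.append("non-increasing timestamp")
    return issues

# Deliberately malformed test ticks -- the controls and circuit breakers that
# consume this feed should be shown to behave sensibly on each of them.
tests = [Tick(10.0, 1.1000, 1.1002), Tick(11.0, None, 1.1003),
         Tick(12.0, 1.1010, 1.1004), Tick(11.5, 1.1001, 1.1003)]
prev = 0.0
for t in tests:
    print(t, validate(t, prev) or "ok")
    prev = max(prev, t.ts)
```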

Live monitoring

Market conditions may vary from those seen in any historic data, so it is critical to have adequate real-time monitoring of a range of metrics, both of the algorithm’s performance and of the market itself. Algorithms should be stopped if their behaviour deviates significantly from that calibrated, or if the market conditions are not representative of those in the calibration. An investigation can then be undertaken to understand the cause and the changes required. If the execution requirement is urgent, an alternative (perhaps manual) execution pathway may need to be available.

As discussed above, calibrated parameter values are intrinsic to the use of any algorithm and should not be under User control during execution, otherwise the (modified) algorithm is effectively untested. Varying parameters during execution turns it into a form of manual execution, using particular order generation rules, and it should be governed appropriately. If an algorithm is expected to need different parameter settings under different market conditions, then these settings should have been tested/pre-calibrated, and the conditions under which the parameters are to be changed should be defined. This way, subsequent analysis of executions resulting from the different settings can be systematic and feed back into the review stage.
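
As a minimal illustration of such monitoring (hypothetical; the thresholds and metric names are our own, not prescribed by the guide), the sketch below checks realised slippage and observed volatility against a pre-calibrated “operational envelope” and flags a halt when either leaves it.

```python
# Minimal sketch of an in-flight check, assuming calibrated bounds (an
# "operational envelope") are available: if realised slippage or observed
# volatility leaves the envelope, execution is halted for investigation.
# All thresholds and field names are illustrative.
from dataclasses import dataclass

@dataclass
class Envelope:
    max_slippage_bps: float
    max_volatility_bps: float

@dataclass
class Snapshot:
    realised_slippage_bps: float
    market_volatility_bps: float

def halt_reasons(snap: Snapshot, env: Envelope) -> list:
    reasons = []
    if snap.realised_slippage_bps > env.max_slippage_bps:
        reasons.append("slippage outside calibrated envelope")
    if snap.market_volatility_bps > env.max_volatility_bps:
        reasons.append("market volatility outside calibrated envelope")
    return reasons

env = Envelope(max_slippage_bps=2.0, max_volatility_bps=15.0)
for snap in (Snapshot(1.2, 9.0), Snapshot(2.6, 11.0), Snapshot(1.0, 22.0)):
    reasons = halt_reasons(snap, env)
    print("HALT -> " + "; ".join(reasons) if reasons else "continue")
```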

Offline review

Once a suitable number of live trades are available from running an algorithm, it is important to review the performance. These trades allow the performance of the algorithm to be assessed against the objectives, and they provide valuable information which may not be available from “backtesting” or calibration (since these cannot fully account for market impact effects – these features were discussed in a previous article, along with the common metrics used). Live trades will help to inform the assumptions that may have been made in the choice and calibration stages, and may lead to a need to explore other algorithms, or to incorporate elements from these live data in a subsequent recalibration process.
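
As a simple hypothetical illustration of a post-trade review metric, the sketch below computes per-order slippage of live fills against the arrival mid, which could then be compared with the calibrated or backtested expectation. The data structure and numbers are invented for illustration.

```python
# Minimal sketch of a post-trade review metric, assuming each live execution is
# recorded as (arrival mid, fills). It computes per-order slippage versus the
# arrival price. Data structures and values are illustrative.
executions = [
    # (arrival_mid, [(fill_price, fill_qty), ...])
    (1.1000, [(1.10010, 3_000_000), (1.10015, 2_000_000)]),
    (1.0950, [(1.09490, 5_000_000)]),
]

def slippage_bps(arrival, fills):
    qty = sum(q for _, q in fills)
    vwap = sum(p * q for p, q in fills) / qty
    return (vwap / arrival - 1) * 1e4   # positive = worse than arrival for a buy

per_order = [slippage_bps(a, f) for a, f in executions]
print("per-order slippage (bps):", [round(x, 2) for x in per_order])
print("average (bps):", round(sum(per_order) / len(per_order), 2))
```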

Summary

It is important for Users to have clearly defined execution requirements and objectives in using any algorithm. Data used in the choice and calibration of an algorithm should be sufficient for any decisions, be captured under the same conditions as the execution environment, and represent a range of likely market conditions. Sensitivity to the particular data, as well as to any underlying assumptions regarding executed trades, is an important aspect of understanding the robustness of the choice and settings of the algorithm. The data and the resulting calibration are intrinsic parts of any algorithmic application. Parameters should not be changed while executing without prior calibration of those settings; scenarios of differing market conditions should be investigated beforehand to provide different sets of parameters to be used under defined conditions. Monitoring should use appropriate metrics of both the market and the algorithm’s performance, and, if deviations are detected, algorithmic execution should be stopped, otherwise the behaviour may be unknown. Offline review of an algorithm’s performance is an integral part of feeding live execution information back into the selection and calibration processes. This helps to refine these and make them more robust, as well as to evidence that the defined execution requirements have been met.
