An online guide to more efficient automated FX trade execution for buy-side firms

Watching your electronic footprints in the market: FX algo trading and the FX Global Code

By Howard Grubb and Stephan von Massenbach, Directors at Modular FX Services.

Industry surveys point to the growing importance of algorithmic trading in FX, particularly for execution requirements that are large relative to the instantaneously available liquidity. This rise in the use of FX algorithmic execution is generally attributed to more efficient execution, lower costs and greater transparency, and is supported by advances in technology as well as regulatory requirements. We will differentiate between “FX algorithmic order execution” capabilities (“FX Algo Trading Services”) made available to clients (or other market participants), often referred to as “FX Algo Trading”, and the automated interaction with an increasingly electronic market, which necessarily involves “algorithms” of some kind, from basic retry logic to the more sophisticated matching algorithms on trading venues. For this article, we will use the term “FX Algo Trading” to refer to market participants (typically buy-side) accessing algorithmic order execution capability (“execution algorithms” or services) provided by other participants or third-party providers, as distinct from using their own in-house developed algorithms, or simply accessing market liquidity via standard automated interfaces that have retry and other execution logic. This application is also distinct from execution algorithms that manage a dynamically changing position, as in sell-side market-making functions, rather than cases where the “execution requirement” is defined in advance, for instance in asset management or corporate treasury hedging applications.


Execution algorithms are designed to give clients both greater transparency and greater control. The critical element here is that the algorithm user remains responsible for choosing the parameters of the algorithm (which control the rate and style of execution), based upon appropriate calibration to their execution requirements and the available market liquidity. As with any trading activity, the performance of algorithmic trading depends on overall market conditions and the risks inherent to trading in financial markets. In the past, with a considerable proportion of FX market activity undertaken by manual trading, the distribution and nature of market liquidity tended to be more discrete and (relatively) long-lived. Now, with increased electronic trading, liquidity is more continuous and dynamic, so the market risk aspects of FX have become more like those of equity markets. The nature of FX liquidity means that even relatively “small” execution requirements can benefit from an algorithmic approach, as liquidity is updated continuously and dynamically in response to very short-term changes in market activity. Increased adoption and demand from clients has been met with the development of more advanced “algo execution strategies”. So-called first-generation algos use rather static parameters, although users might be able to change and interact with a pre-defined set of strategies. Second- and third-generation algos are increasingly dynamic and add varying degrees of predictive capability to the execution algorithm. More recent advances include real-time features that inform the execution process. In all cases the user must exercise care in calibrating the parameters to their requirements and the market, as well as being mindful of the regulatory implications of executing via algorithms.


Principle 18 is (at present) the main Principle in the FX Global Code (FXGC) that relates to algorithmic trading. It states that “Market Participants providing algorithmic trading or aggregation services to Clients should provide adequate disclosure regarding how they operate.” However, this Principle also emphasises that “Clients of algorithmic trading providers should use such data and disclosed information in order to evaluate, on an ongoing basis, the appropriateness of the trading strategy to their execution strategy.” A recent Statement on month-end volatility around fixing orders issued by the Global FX Committee (GFXC) was a timely reminder to market participants. It reiterated that market participants should be mindful of available liquidity and should calibrate their execution parameters appropriately (FXGC Principles 9-11). Principle 18 requires users of FX Algo Trading Services to use data to evaluate the algorithmic service: the choice of execution algorithm, and its calibration, is made by the client. This suggests that a robust process (proportionate to a client’s requirements) needs to be defined, implemented and documented to satisfy this particular obligation. A key component here is the data used in any calibration, which also defines the “operating envelope” within which calibration can be considered reliable. If market conditions are outside this envelope (e.g. liquidity is thin, or spreads are unusually wide), then any calibration will be invalid and the algorithm may not be relied upon to operate as intended. Note that this raises significant challenges for market users in appropriately calibrating “adaptive” (real-time) algorithms. The GFXC recently decided to review the FXGC’s existing guidance around algorithmic trading, given the increasing usage of algorithmic execution in FX markets. Importantly, it was also decided to consider how Transaction Cost Analysis (TCA) could be incorporated into the FXGC’s guidance.
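The idea of an “operating envelope” can be made concrete with a short sketch. The check below compares live market conditions against the range of conditions observed in the calibration data; the field names and thresholds are purely illustrative assumptions, not taken from any particular platform or provider.

```python
# Hypothetical sketch: is the live market inside the "operating envelope"
# defined by the data used to calibrate the algorithm? Field names and
# thresholds are illustrative assumptions only.

def within_operating_envelope(market, envelope):
    """Return True if current conditions are inside the calibration envelope."""
    spread_ok = market["spread"] <= envelope["max_spread"]
    depth_ok = market["top_of_book_depth"] >= envelope["min_depth"]
    return spread_ok and depth_ok

# Envelope derived (hypothetically) from the calibration data set:
envelope = {"max_spread": 0.00012, "min_depth": 2_000_000}

# A thin, wide market falls outside the envelope, so the calibration
# should not be relied upon:
market = {"spread": 0.00035, "top_of_book_depth": 500_000}
print(within_operating_envelope(market, envelope))  # False
```

A production check would of course cover many more dimensions (time of day, volatility regime, venue availability), but the principle is the same: if conditions fall outside the range seen in calibration, the algorithm’s expected behaviour is no longer supported by the data.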


Associated with the rise in FX Algo Trading Services, TCA has also seen a particular increase in usage by buy-side firms such as asset managers and corporate treasuries, partly attributed to “best execution” obligations under various regulatory regimes. The UK Senior Managers and Certification Regime has recently been extended to include senior managers on the “buy-side”. Article 27 of the MiFID II Directive defines the obligation to achieve best execution and states that an investment firm must take all sufficient steps to obtain the best possible result for its client when executing a client order (language strengthened from “all reasonable steps” under MiFID I). Generally speaking, TCA is used to determine the effectiveness of transactions. In equity markets, TCA has been widely used for some time and is an established tool, often in conjunction with the use of execution algorithms. Widespread adoption in FX markets relies upon the availability of good-quality data at an appropriate time frequency: regulatory trade reporting typically uses timestamps that are not of the same resolution as the frequency of market activity, and the distributed nature of the FX market means that a “consolidated price” can be difficult to construct without third-party providers. More advanced TCA, including sophisticated analysis of “markouts” to assess the market impact of transactions (defined below), has been available for a while. These metrics have been the basis for more detailed performance (and “cost”) attributions in trading functions, but advances in technology and data analysis are now leading to more widespread use.


A key consideration for any trading activity is managing the “footprint” – more often referred to as the “market impact” – of the executions. This involves attempting to match the execution requirement with available liquidity, which will result in the most efficient risk transfer for all counterparties. As the FXGC makes clear in Principle 10, market impact is a critical issue for orders which are “large” relative to the liquidity available in particular instruments and at particular market times (for instance around “benchmark” times, or for Stop Loss orders with specific triggers). However, the same issue affects algorithmic order execution, where “parent” orders are split into smaller “child” orders for individual execution following prescribed rules. In this application, each “child” order may not be considered “large”, but the cumulative consumption of liquidity throughout the duration of the algorithm can be. To assess market impact, market participants also need to clearly understand how the “FX Algo orders” will be handled and transacted in the market (Principle 18, in conjunction with Principle 9), the role (principal vs. agency risk management) in which their algorithmic execution provider is acting, and the nature of the liquidity (last look vs. non-last look, full amount) being executed against. A detailed analysis of market impact will also inform the user about the choice of risk management strategies of their counterparties and the potential effects of these on execution quality. This interaction between the algorithm and the counterparty’s risk management should be considered part of one’s electronic “footprint”.
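The parent/child mechanics described above can be illustrated with a minimal sketch. The function below splits a hypothetical parent quantity into near-equal child orders (a simple, TWAP-style schedule); real execution algorithms use far more sophisticated, liquidity-aware schedules, so this is illustration only.

```python
# Illustrative sketch: splitting a "parent" order into near-equal "child"
# orders (a naive TWAP-style schedule). Quantities are invented; real
# algorithms size and time child orders dynamically against liquidity.

def slice_parent_order(parent_qty, n_children):
    """Split parent_qty into n_children near-equal child order sizes."""
    base = parent_qty // n_children
    remainder = parent_qty - base * n_children
    # Spread any remainder across the first few child orders
    return [base + (1 if i < remainder else 0) for i in range(n_children)]

children = slice_parent_order(10_000_000, 6)
print(children)      # [1666667, 1666667, 1666667, 1666667, 1666666, 1666666]
print(sum(children)) # 10000000 — the cumulative liquidity consumed is the parent
```

The point the sketch makes is the one in the text: no single child order of 1.67m looks “large”, but the algorithm still consumes the full 10m of liquidity over its duration, which is what drives the cumulative footprint.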


Market impact is typically measured by “markouts”: a specialised but well-understood concept using statistical methods to characterise average market price moves conditional on executed trades. These methods require care in application, particularly around the sample sizes used in the summaries and accounting for sources of variation. Underlying market variance (volatility) remains a dominant component in such statistics; other elements of variation include market time (as a proxy for liquidity), trade sizes and, notably, the counterparties involved in the trade requests, since market participants have different risk management strategies and horizons, all of which affect the nature of an execution. In a more complete and advanced analysis, measures of market impact can be extended to include market price moves conditional on a price request (i.e. to include rejected and not-traded deals in the analysis). One can also look at the market impact immediately before trading (arrival time), which is particularly relevant for algorithmic execution, where many “child” orders are correlated. These analyses are typically highly statistical with large variance, so there is a particular need for an appropriate amount of good-quality data (at appropriate time resolution) and suitable specialised statistical techniques for analysis and subsequent interpretation of the results. Such analyses are made more complex for algorithmic trading, as market activity relating to the whole “parent” execution requirement needs to be considered, not just that for the individual “child” orders and executions. (Note that this same issue was highlighted in a recent FCA Policy Statement on “Transaction cost disclosure in workplace pensions”, which requires the arrival price of the first child order to be used as the arrival price of all subsequent child orders in any analysis.) Both counterparties to each transaction are interested in such analyses.
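As a minimal illustration of the markout concept, the sketch below computes the average mid-price move at a single fixed horizon after each executed trade, signed by trade direction (so a positive value indicates the market moving away from the trade). The data layout and the single horizon are simplifying assumptions; practical markout analysis uses many horizons, continuous time-series data and the variance controls discussed above.

```python
# Hedged sketch of a basic markout: the average signed mid-price move a
# fixed horizon after each executed trade. Data structures, the integer
# time grid and the single horizon are simplifying assumptions.

def markout(trades, mids, horizon):
    """Average signed mid move `horizon` time units after each trade.

    trades: list of (time, side, trade_mid), side +1 for buy / -1 for sell
    mids:   dict mapping time -> mid price
    """
    moves = [side * (mids[t + horizon] - trade_mid)
             for t, side, trade_mid in trades
             if t + horizon in mids]
    return sum(moves) / len(moves) if moves else None

mids = {0: 1.1000, 1: 1.1002, 2: 1.1004, 3: 1.1003}
trades = [(0, +1, 1.1000), (1, +1, 1.1002)]  # two buys
print(markout(trades, mids, horizon=2))  # positive: the market moved up after the buys
```

Even this toy example shows why sample size matters: two observations tell you almost nothing against background volatility, which is why real analyses aggregate across many trades and condition carefully on the sources of variation listed above.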
The liquidity providers (LPs) want to understand how the “client” is executing: their prices will typically be offered either with the understanding of “full amount” execution, or with the expectation of being part of a larger transaction, following matching rules set by the execution venue. LPs may need to adjust their prices appropriately to the chosen execution style. The client is especially interested in an analysis versus the risk transfer price of the whole execution requirement, so will include market impact as well as the cost of the algorithmic service (which is often incorporated in the executed rate). They will need to understand how each of the LPs who see or respond to their execution requests are managing the risk transfer (particularly with regard to Principle 8 and the capacity in which the Market Participant is acting) or making use of this information in their hedging/internalisation strategy. In addition, for the client it is particularly important for any analysis to include trade requests that were not executed (depending on the nature of the engagement, e.g. RFQ vs. ESP). However, this analysis depends upon the availability of anonymous counterparty identifiers, as well as detailed request information from the trading venues, and great care is required when attributing effects.
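The arrival-price convention mentioned earlier (using the arrival price of the first child order as the benchmark for all child fills) can be sketched as follows; the fill quantities and prices are invented for illustration.

```python
# Sketch: volume-weighted slippage of a parent execution versus a single
# arrival price, i.e. the arrival price of the FIRST child order applied
# to ALL child fills. Quantities and prices are illustrative only.

def parent_slippage(child_fills, arrival_price, side):
    """Volume-weighted slippage of all child fills vs the parent arrival price.

    child_fills: list of (quantity, fill_price); side +1 buy / -1 sell.
    Positive result = execution cost (e.g. paid above arrival on a buy).
    """
    total_qty = sum(q for q, _ in child_fills)
    vwap = sum(q * p for q, p in child_fills) / total_qty
    return side * (vwap - arrival_price)

# Three child fills drifting away from the arrival price of the first child:
fills = [(3_000_000, 1.10010), (3_000_000, 1.10020), (4_000_000, 1.10030)]
cost = parent_slippage(fills, arrival_price=1.10000, side=+1)
print(round(cost, 6))  # ≈ 0.00021 per unit, i.e. about 2.1 pips of cost
```

Benchmarking every fill against the first child’s arrival price, rather than each child’s own arrival price, is precisely what makes the cumulative impact of correlated child orders visible in the analysis.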


More recently, market participants and other third-party providers of FX Algo Trading Services have looked towards neutral third-party checks to confirm that these services are aligned with the principles of the FX Global Code. Clients of these services are also seeking independent assurances. The specialised nature of these analyses and the reliance on central (venue) data mean that third-party analytics providers have a role to play. As already discussed in this article, more advanced transaction cost analysis, such as specialised analysis of “markouts” to assess the market impact of transactions, has been available for a while and is finding more widespread application. These analyses originated largely from sell-side attribution models, but their application to a client’s use of FX Algo Trading Services requires additional information (relating to all counterparties that they interact with) and a degree of additional analysis specific to both algorithmic execution and the client view of the transaction requests.


Algorithmic order execution is an increasing element of global FX markets. When using “FX Algo Trading Services” (whether provided by market participants or by third parties), clients remain responsible for calibrating and monitoring their use of the available algorithms: a specialised task which depends on good-quality data and analytics. Post-trade analytics of algorithmic execution performance are typically concerned more with “market impact” throughout the whole execution requirement than with the TCA (mark to market) of each “child” order. Market impact is conventionally measured by “markouts”, an analysis which is well established but needs considerable care around the data, sample sizes and attribution of effects. Aspects specific to algorithmic order execution include linking child orders to the parent execution requirement and ensuring that trades that were not dealt are included in the analysis. There is a role for third-party providers of such analyses: partly to ensure neutrality, but also because of the specialised nature of the analyses and the availability of suitable (central) data.
