Peter O'Neill

Assistant Professor at the University of New South Wales


Research Interests

Education

Working Papers

A new market design, an “Automated Market Maker” (AMM), completely automates trade matching and liquidity provision by leveraging decentralized finance technologies, including smart contracts. Can AMMs improve the trading of traditional assets? We derive a model of an AMM’s equilibrium liquidity for a given set of asset characteristics. Calibrating the model with 39 million AMM transactions and the returns and volumes of traditional assets, we find that AMMs can make trading more efficient for assets with high volume and low volatility, including foreign exchange and large-cap equities, but are unlikely to be competitive in volatile and thinly traded assets.
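To fix ideas, the sketch below (in Python, with made-up pool sizes and fee) implements the canonical constant-product design used by protocols such as Uniswap; it is an illustration of how an AMM automates pricing and liquidity provision, not the equilibrium-liquidity model derived in the paper.

    # A minimal constant-product AMM (x * y = k), the canonical design behind
    # protocols such as Uniswap. Pool sizes and the fee are made-up numbers;
    # this is not the paper's equilibrium-liquidity model.
    class ConstantProductAMM:
        def __init__(self, reserve_x, reserve_y, fee=0.003):
            self.x = reserve_x   # units of asset X held by the pool
            self.y = reserve_y   # units of asset Y held by the pool
            self.fee = fee       # proportional fee earned by liquidity providers

        def price(self):
            # Marginal price of X in terms of Y implied by current reserves.
            return self.y / self.x

        def swap_x_for_y(self, dx):
            # Trader deposits dx of X; reserves keep x * y constant (net of the
            # fee) and the pool pays out dy of Y. Larger trades move the price
            # more, which is the AMM's implicit cost of liquidity.
            k = self.x * self.y
            dx_net = dx * (1 - self.fee)
            dy = self.y - k / (self.x + dx_net)
            self.x += dx
            self.y -= dy
            return dy

    pool = ConstantProductAMM(reserve_x=1_000_000, reserve_y=1_000_000)
    print(pool.price())               # 1.0 before any trade
    print(pool.swap_x_for_y(10_000))  # Y received: below 10,000 due to fee and slippage
    print(pool.price())               # pool price has moved against the trader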


In this paper, we characterise the liquidity provision and price discovery roles of dealers and high-frequency traders (HFTs) in the FX spot market over a sample period from 2012 to 2015. We find that the two groups respond differently to adverse market conditions: HFT liquidity provision is less sensitive to spikes in market-wide volatility, while dealer bank liquidity is more robust ahead of scheduled macroeconomic news announcements, when adverse selection risk is high. In periods of extreme volatility, such as the ‘Swiss De-peg’ event in our sample, HFTs appear to withdraw almost all liquidity while dealers remain. In normal times, we also find that HFTs contribute to market liquidity by passively trading against the pricing errors created by dealers’ aggressive trade flows. On price discovery, HFTs contribute the dominant share, mostly through high-frequency quote updates that incorporate public information. In contrast, dealers contribute to price discovery more through trades that impound private information.


We examine the design and effectiveness of the WMR 4pm Fix, the most important benchmark in foreign exchange markets, using unique trader-identified data from a major inter-dealer trading venue. We propose new measures of benchmark quality and examine changes to market liquidity and trader behavior around two events: (i) the revelations of benchmark rigging in June 2013, and (ii) the reform of the benchmark calculation methodology in February 2015. We find that benchmark quality, measured as price efficiency and robustness, improves after the 2015 reform, but at the cost of a significant increase in tracking error for users of the benchmark. We also find that quoted spreads and price impact increase following the reform, with HFTs trading more aggressively during the Fix.


Revise and Resubmit at the Journal of Economic Dynamics and Control

Using proprietary order book data with participant-level message traffic and matching engine time stamps, we investigate stale reference pricing in dark pools. We document a substantial amount of stale trading, which imposes large adverse selection costs on passive dark pool participants. We show that HFTs almost never provide dark liquidity, instead frequently consuming it, in particular to take advantage of stale reference prices. Finally, we examine several market design interventions to mitigate stale trades, showing that only mechanisms that protect passive dark liquidity, such as random uncrossings, are effective at ensuring accurate reference prices.
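The mechanism can be seen in a stylized numerical example (hypothetical prices, sketched in Python): a midpoint dark pool crosses orders at the lit-market midpoint, and when that midpoint is stale the passive side bears the adverse selection cost.

    # Stylized stale reference pricing in a midpoint dark pool (hypothetical
    # prices). The pool crosses orders at the lit-market midpoint; if that
    # midpoint is stale, the passive side bears the adverse selection cost.
    stale_bid, stale_ask = 99.0, 101.0    # lit quotes used as the reference
    fresh_bid, fresh_ask = 101.0, 103.0   # lit quotes after a public price move

    stale_mid = (stale_bid + stale_ask) / 2   # 100.0, price the dark pool still uses
    fresh_mid = (fresh_bid + fresh_ask) / 2   # 102.0, the up-to-date price

    # A fast buyer who observes the lit-market update first buys in the dark
    # pool at the stale midpoint; the resting (passive) seller loses the gap.
    adverse_selection_cost = fresh_mid - stale_mid
    print(f"Passive seller's per-share loss from the stale reference: {adverse_selection_cost:.2f}")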


Forthcoming in the Journal of Finance

In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer-feedback, and (iii) is underestimated by participants. 
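A toy example (with simulated numbers, not the study's data, sketched in Python) contrasts the two notions of uncertainty: a standard error from one team's analysis versus a non-standard error measured as the dispersion of estimates across teams analysing the same sample.

    # Illustrative contrast between a standard error (sampling uncertainty
    # within one analysis) and a non-standard error (dispersion of estimates
    # across teams analysing the same sample). Numbers are simulated, not
    # taken from the study.
    import statistics

    # One team's estimate of a population mean, with its usual standard error.
    sample = [0.8, 1.2, 0.9, 1.1, 1.0, 1.3, 0.7, 1.0]
    estimate = statistics.mean(sample)
    standard_error = statistics.stdev(sample) / len(sample) ** 0.5

    # Many teams' estimates of the same quantity from the same sample, which
    # differ only in analytical choices; their dispersion is the non-standard error.
    team_estimates = [0.95, 1.10, 0.80, 1.25, 1.02, 0.88, 1.15, 0.99]
    non_standard_error = statistics.stdev(team_estimates)

    print(f"standard error:     {standard_error:.3f}")
    print(f"non-standard error: {non_standard_error:.3f}")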

Publications

We use stock exchange message data to quantify the negative aspect of high-frequency trading, known as “latency arbitrage.” The key difference between message data and widely familiar limit order book data is that message data contain attempts to trade or cancel that fail. This allows the researcher to observe both winners and losers in a race, whereas in limit order book data you cannot see the losers, so you cannot directly see the races. We find that latency arbitrage races are very frequent (about one per minute per symbol for FTSE 100 stocks), extremely fast (the modal race lasts 5–10 millionths of a second), and account for a remarkably large portion of overall trading volume (about 20%). Race participation is concentrated, with the top six firms accounting for over 80% of all race wins and losses. The average race is worth just a small amount (about half a price tick), but because of the large volumes the stakes add up. Our main estimates suggest that races constitute roughly one-third of price impact and the effective spread (key microstructure measures of the cost of liquidity), that latency arbitrage imposes a roughly 0.5 basis point tax on trading, that market designs that eliminate latency arbitrage would reduce the market’s cost of liquidity by 17%, and that the total sums at stake are on the order of $5 billion per year in global equity markets alone.
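The sketch below (Python, with invented messages and a hypothetical 50-microsecond window) illustrates why message data matter: failed takes and failed cancels reveal multiple participants contesting the same stale quote, which is invisible in limit order book data. It is a simplified illustration, not the race definition used in the paper.

    # Stylized race detection in message data (invented messages, hypothetical
    # 50-microsecond window; not the race definition used in the paper).
    # Message data record failed attempts, so several firms can be seen
    # contesting the same stale quote within microseconds.

    # (timestamp in microseconds, firm, action, outcome) for one symbol
    messages = [
        (1_000_001, "FirmA", "take", "success"),   # winner hits the stale quote
        (1_000_004, "FirmB", "take", "fail"),      # loser: the quote is already gone
        (1_000_006, "FirmC", "cancel", "fail"),    # liquidity provider too slow to cancel
        (2_500_000, "FirmD", "take", "success"),   # unrelated trade much later
    ]

    RACE_WINDOW_US = 50   # group messages arriving within 50 microseconds

    races, current = [], []
    for msg in sorted(messages):
        if current and msg[0] - current[0][0] > RACE_WINDOW_US:
            if len(current) > 1:          # a race needs at least two contestants
                races.append(current)
            current = []
        current.append(msg)
    if len(current) > 1:
        races.append(current)

    for race in races:
        winners = [firm for _, firm, _, outcome in race if outcome == "success"]
        losers = [firm for _, firm, _, outcome in race if outcome == "fail"]
        print(f"race at t={race[0][0]}us  winners={winners}  losers={losers}")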


We analyze the relationship between transaction costs and venue choice using proprietary transaction-level data on institutional trade executions in the UK equity market. We show that a higher share of dark trading (midpoint dark pools) is associated with lower execution costs. In the context of a recent ban on dark trading, we provide evidence consistent with the existence of significant participation externalities on substitute trading venues such as periodic auctions. We further provide micro-level evidence on the pecking order theory of venue choice of Menkveld et al. (2017) over the life cycle of large parent orders.


The Fix for precious metals is a global pricing benchmark that provides pricing and liquidity for market participants. We exploit the gradual change in the century-old auction process to quantify the efficiencies related to more transparent pricing. Our focus is on the market impact of this change on exchange-listed products. We find that reforms to the Fix have reduced quoted and effective bid-ask spreads and improved overall market depth. The results imply a positive spillover effect stemming from timelier and more accurate pricing information. The conditions under which we observe the benefits from transparency are related to product liquidity and the degree of market segmentation.


Matching algorithms are important for well-functioning financial markets. This paper examines LIFFE's 2007 move from pure pro-rata to time pro-rata allocation for the Euribor, Short Sterling, and Euroswiss futures contracts. We show that the removal of pure pro-rata matching reduces market depth but suggest that this outcome improves execution quality for market participants. Our results are consistent with suggestions in the literature that the former regime creates incentives for traders to “drown” the order book with large orders and that the addition of a time element to the algorithm alters their behavior. We provide evidence that traders increase the amount of order splitting in the new framework, consistent with local optimization, but argue that this may hinder overall market efficiency.
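The sketch below (Python, with made-up order sizes and an illustrative time weighting rather than LIFFE's exact formula) contrasts the two allocation rules: under pure pro-rata, fills depend only on order size, while adding a time element rewards earlier arrival.

    # Pure pro-rata versus time pro-rata allocation (made-up order sizes;
    # the time weighting below is illustrative, not LIFFE's exact formula).
    # Resting orders at the best price, listed from earliest to latest arrival.
    resting_orders = [
        ("early_large", 500),
        ("late_large", 500),   # same size, arrived later
        ("late_small", 100),
    ]

    def pure_pro_rata(orders, incoming_qty):
        # Fill in proportion to size only; arrival time is irrelevant, which
        # rewards posting very large orders ("drowning" the book).
        total = sum(size for _, size in orders)
        return {name: incoming_qty * size / total for name, size in orders}

    def time_pro_rata(orders, incoming_qty):
        # Weight each order by size divided by queue position, so of two
        # equal-sized orders the earlier one receives the larger share.
        weights = {name: size / rank for rank, (name, size) in enumerate(orders, start=1)}
        total = sum(weights.values())
        return {name: incoming_qty * w / total for name, w in weights.items()}

    incoming = 300
    print("pure pro-rata:", pure_pro_rata(resting_orders, incoming))
    print("time pro-rata:", time_pro_rata(resting_orders, incoming))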

Employment

Knowledge Areas