Cybersecurity Best Paper 2023

Award Winner

The 2023 Winner in the track “Best Practical Paper” is:

Kaihua Qin, Liyi Zhou, and Arthur Gervais, "Quantifying Blockchain Extractable Value: How dark is the forest?", 2022 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 2022, pp. 198-214.


Miner Extractable Value, or more broadly Blockchain Extractable Value (BEV), denotes the financial gain that can be extracted from a blockchain and its decentralized finance landscape. It has emerged as a significant security concern for permissionless blockchains such as Ethereum: the prospect of these gains tempts miners and validators to instigate chain forks, jeopardizing consensus security.
The paper "Quantifying Blockchain Extractable Value: How dark is the forest?" sets out to quantify the severity of the BEV problem. The authors find that over a 32-month period, BEV extraction yielded a profit of $540.54 million, distributed among 11,289 unique addresses and spanning 49,691 distinct cryptocurrencies and 60,830 on-chain markets. The single largest BEV extraction they identified netted $4.1 million, 616.6 times the standard Ethereum block reward.
The authors also present a transaction replay (or naive imitation) algorithm that maximizes profit by duplicating and front-running victim transactions without needing to understand the transactions' underlying logic. They estimate that, had this algorithm been deployed, it could have extracted an additional $35.37 million in BEV over the studied 32-month span. The paper further highlights a worrisome trend: the emergence of centralized BEV relayers, also termed "front-running as a service," which act as intermediaries between BEV traders and miners. Their existence amplifies the risk of consensus-layer attacks, undermining the overall security of blockchain systems.
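The replay idea can be sketched as follows. This is a hypothetical simplification, not the authors' implementation: the `Tx` shape, the `simulate_profit` stand-in, and the single hard-coded "profitable" contract address are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tx:
    sender: str
    to: str          # target contract address
    calldata: bytes  # encoded call, copied verbatim from the victim
    gas_price: int   # bid, in wei per gas

def simulate_profit(tx: Tx, executor: str) -> int:
    """Hypothetical stand-in for executing `tx` against a local fork of
    the chain state and measuring the executor's balance change.
    Toy model: only calls to one made-up contract pay out; everything
    else just burns gas."""
    return 10**18 if tx.to == "0xArbContract" else -tx.gas_price * 21_000

def naive_replay(victim: Tx, attacker: str) -> Optional[Tx]:
    """Copy a pending victim transaction, substitute the attacker as
    sender, and bid a higher gas price so miners order it first. The
    calldata is duplicated blindly -- no understanding of the victim's
    transaction logic is required."""
    candidate = Tx(sender=attacker,
                   to=victim.to,
                   calldata=victim.calldata,
                   gas_price=victim.gas_price + 1)  # outbid the victim
    # Only front-run when the simulated replay is profitable for us.
    return candidate if simulate_profit(candidate, attacker) > 0 else None
```

The key point the sketch captures is that profitability is decided purely by simulation: the attacker never parses or reasons about the victim's calldata, only copies it and checks the simulated balance change.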
In summary, the paper provides a thorough analysis of BEV extraction and its ramifications for blockchain security: it quantifies the magnitude of BEV extraction, introduces new techniques such as the transaction replay algorithm, and highlights the growing threat posed by centralized BEV relayers.

The 2023 Winner in the track “Best Machine Learning and Security Paper” is:

Hammond Pearce, Baleegh Ahmad, Benjamin Tan, Brendan Dolan-Gavitt, and Ramesh Karri, "Asleep at the Keyboard? Assessing the Security of GitHub Copilot’s Code Contributions", 2022 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 2022, pp. 754-768.


AI-based code-writing systems are increasingly being used to help automatically generate computer code. The most notable of these comes in the form of the first self-described ‘AI pair programmer’, GitHub Copilot, which is a language model trained over open-source GitHub code. However, code often contains bugs—and so, given the vast quantity of unvetted code that Copilot has processed, it is certain that the language model will have learned from exploitable, buggy code. This raises concerns about the security of Copilot’s code contributions. In this work, we systematically investigate the prevalence and conditions that can cause GitHub Copilot to recommend insecure code. To perform this analysis, we prompt Copilot to generate code in scenarios relevant to high-risk cybersecurity weaknesses, e.g., those from MITRE’s “Top 25” Common Weakness Enumeration (CWE) list. We explore Copilot’s performance on three distinct code generation axes—examining how it performs given the diversity of weaknesses, the diversity of prompts, and the diversity of domains. In total, we produced 89 different scenarios for Copilot to complete, producing 1,689 programs. Of these, we found approximately 40% to be vulnerable.
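To make the study's setup concrete, here is a hypothetical illustration of one weakness class from MITRE's CWE Top 25, CWE-89 (SQL injection). The function names and table schema are invented for illustration; the point is that an assistant completing a prompt like "fetch the user row for `username`" could plausibly emit either body, and only one is safe.

```python
import sqlite3

def get_user_insecure(db: sqlite3.Connection, username: str):
    # Vulnerable (CWE-89): untrusted input is spliced into the SQL string,
    # so a crafted username can rewrite the query itself.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return db.execute(query).fetchall()

def get_user_secure(db: sqlite3.Connection, username: str):
    # Safe: a parameterized query passes the input as data, keeping it
    # out of the SQL grammar entirely.
    return db.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With a payload such as `' OR '1'='1`, the insecure variant returns every row in the table, while the parameterized variant returns nothing, since no user has that literal name.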

For the track “Best Theoretical Paper”:

After the evaluation process, the Cybersecurity Award committee did not select a winning paper for this track.