Adversarial News and Lost Profits: Manipulating Headlines in LLM-Driven Algorithmic Trading
Rizvani, A., Apruzzese, G., Laskov, P., IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 2025
Oneliner: A single day of human-imperceptible headline manipulations (Unicode homoglyphs, hidden text) misleads LLM sentiment used by an algorithmic trading system and cuts annual returns by up to 17.7 percentage points.
Abstract. Large Language Models (LLMs) are increasingly adopted in the financial domain. Their strong ability to analyse textual data makes them well-suited for inferring the sentiment of finance-related news. Such sentiment signals can be leveraged by algorithmic trading systems (ATS) to guide buy/sell decisions. However, this practice bears the risk that a threat actor may craft “adversarial news” intended to mislead an LLM. In particular, a news headline may include “malicious” content that remains invisible to human readers but is still ingested by the LLM. Although prior work has studied textual adversarial examples, their system-wide impact on LLM-supported ATS has not yet been quantified in terms of monetary risk.
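As an illustration of the sentiment-inference step the abstract refers to, here is a minimal sketch using a publicly available FinBERT checkpoint through Hugging Face transformers; the checkpoint name (ProsusAI/finbert), its label set, and the signed-score convention are assumptions of this sketch, not details taken from the paper.

    # Hedged sketch: LLM-derived headline sentiment via a FinBERT checkpoint.
    from transformers import pipeline

    # Assumed checkpoint; the paper evaluates FinBERT but the exact model ID is not given here.
    finbert = pipeline("text-classification", model="ProsusAI/finbert")

    def headline_sentiment(headline: str) -> float:
        """Collapse the classifier output into one signed score in [-1, 1]."""
        result = finbert(headline)[0]  # e.g. {'label': 'positive', 'score': 0.93}
        sign = {"positive": 1.0, "negative": -1.0, "neutral": 0.0}.get(result["label"].lower(), 0.0)
        return sign * result["score"]

    print(headline_sentiment("Company X beats quarterly earnings expectations"))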
To address this threat, we consider an adversary with no direct access to an ATS but able to alter stock-related news headlines on a single day. We evaluate two human-imperceptible manipulations in a financial context: Unicode homoglyph substitutions that misroute models during stock-name recognition, and hidden-text clauses that alter the sentiment of the news headline. We implement a realistic ATS in Backtrader that fuses an LSTM-based price forecast with LLM-derived sentiment (FinBERT, FinGPT, FinLLaMA, and six general-purpose LLMs), and quantify the monetary impact using portfolio metrics. Experiments on real-world data show that a single one-day attack within a 14-month trading period can reliably mislead LLMs and reduce annual returns by up to 17.7 percentage points. To assess real-world feasibility, we analyse popular scraping libraries and trading platforms and survey 27 FinTech practitioners, confirming our hypotheses. We notified trading platform owners of this security issue.
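A hedged sketch of the two headline manipulations; the specific look-alike characters and the display:none hiding mechanism are assumptions for illustration, since the abstract does not spell out the exact perturbation mechanics.

    # Sketch of the two human-imperceptible manipulations described above.

    # Homoglyphs: code points that render (nearly) identically to their Latin
    # counterparts, so the text looks unchanged to a reader while the LLM's
    # tokenizer no longer matches the original stock name.
    HOMOGLYPHS = {
        "A": "\u0391",  # Greek capital Alpha
        "E": "\u0395",  # Greek capital Epsilon
        "a": "\u0430",  # Cyrillic small a
        "e": "\u0435",  # Cyrillic small ie
        "o": "\u043e",  # Cyrillic small o
    }

    def homoglyph_attack(headline: str, stock_name: str) -> str:
        """Replace characters of the stock name with look-alikes so that
        stock-name recognition is misrouted while readers see no change."""
        perturbed = "".join(HOMOGLYPHS.get(c, c) for c in stock_name)
        return headline.replace(stock_name, perturbed)

    def hidden_text_attack(headline_html: str, clause: str) -> str:
        """Append a sentiment-flipping clause that a browser does not render,
        but that a naive scraper extracting all text still feeds to the LLM."""
        return f'{headline_html}<span style="display:none"> {clause}</span>'

    original = "Company X beats quarterly earnings expectations"
    print(homoglyph_attack(original, "Company X"))
    print(hidden_text_attack(original, "despite reports of an accounting fraud probe"))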

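On the trading side, a minimal Backtrader sketch of how an ATS could fuse an LSTM price forecast with the LLM sentiment score; the equal-weight fusion rule, the threshold, and the sentiments/forecaster interfaces are assumptions, not the paper's actual strategy logic.

    import backtrader as bt

    class SentimentFusionStrategy(bt.Strategy):
        """Enter long when the fused signal (price forecast + news sentiment)
        is positive enough; exit when it turns sufficiently negative."""

        params = dict(
            threshold=0.1,    # minimum fused signal required to act
            sentiments=None,  # dict: date -> LLM sentiment score in [-1, 1]
            forecaster=None,  # object with .predict(data) -> expected return
        )

        def __init__(self):
            self.sentiments = self.p.sentiments or {}

        def next(self):
            today = self.datas[0].datetime.date(0)
            forecast = self.p.forecaster.predict(self.datas[0]) if self.p.forecaster else 0.0
            sentiment = self.sentiments.get(today, 0.0)

            signal = 0.5 * forecast + 0.5 * sentiment  # equal-weight fusion (placeholder rule)

            if not self.position and signal > self.p.threshold:
                self.buy()
            elif self.position and signal < -self.p.threshold:
                self.close()

    # Example wiring (data feed, sentiment dict, and LSTM wrapper are placeholders):
    # cerebro = bt.Cerebro()
    # cerebro.adddata(bt.feeds.YahooFinanceCSVData(dataname="AAPL.csv"))
    # cerebro.addstrategy(SentimentFusionStrategy,
    #                     sentiments=daily_llm_scores, forecaster=lstm_model)
    # cerebro.run()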