Research Citations
Peer-Reviewed Research
Deepfakes
[1] Chesney, R., & Citron, D. (2019)
“Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security”
California Law Review, 107(6), 1753-1820
DOI: 10.15779/Z38RV0D15J
[2] Tolosana, R., et al. (2020)
“DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection”
Information Fusion, 64, 131-148
DOI: 10.1016/j.inffus.2020.06.014
Prompt Injection
[4] Perez, F., & Ribeiro, I. (2022)
“Ignore Previous Prompt: Attack Techniques For Language Models”
NeurIPS ML Safety Workshop
arXiv: 2211.09527
[5] Greshake, K., et al. (2023)
“Not What You’ve Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection”
ACM CCS
DOI: 10.1145/3576915.3623106
[6] Liu, Y., et al. (2023)
“Prompt Injection Attack Against LLM-Integrated Applications”
arXiv: 2306.05499
Government Standards
[7] NIST (2023)
AI Risk Management Framework (AI RMF 1.0)
https://www.nist.gov/itl/ai-risk-management-framework
[8] CISA (2024)
Securing AI Systems
https://www.cisa.gov/ai-security
[9] OWASP (2024)
OWASP Top 10 for Large Language Model Applications
https://owasp.org/www-project-top-10-for-large-language-model-applications/
Industry Reports
[10] Sensity AI (2023) - State of Deepfakes
[11] Microsoft Security (2024) - AI Red Team Findings
[12] IBM Security (2024) - Cost of a Data Breach Report
Last Updated: October 31, 2025