Threat Intelligence
Current Threat Landscape (2024-2025)
Deepfake Trends
Source: Sensity AI - “State of Deepfakes 2024”
Key Findings:
- 500% increase in deepfake videos (2022-2024)
- 96% are non-consensual intimate content
- $250M+ in documented fraud losses
- 73% of deepfakes target women
Emerging Threats:
- Real-time deepfakes (live video calls)
- Voice cloning (< 3 seconds of audio needed)
- Full-body deepfakes (entire person synthesis)
- Deepfake-as-a-Service (DaaS) platforms
Prompt Injection Trends
Source: Microsoft Security - “AI Red Team Report 2024”
Key Findings:
- 73% of tested LLM applications vulnerable
- 300% increase in attack attempts (2023-2024)
- $4.5M average breach cost
- 45% of attacks succeed on first attempt
Attack Evolution:
- Multi-turn attacks (conversation hijacking)
- Indirect injection via documents
- Encoding-based bypasses
- Automated attack tools
Threat Actor Profiles
Financial Criminals
Motivation: Monetary gain
Methods:
- CEO voice impersonation
- Fake video calls for wire transfers
- Investment scams
Average Loss: $243,000 per incident
Case: UK Energy Company (2019)
- AI voice cloning of CEO
- $243K transferred to fraudulent account
- Detected after 3rd transfer attempt
Nation-State Actors
Motivation: Political influence, espionage
Methods:
- Political deepfakes
- Disinformation campaigns
- Intelligence gathering
Attribution: Difficult due to sophistication
Example: 2024 election interference attempts (multiple countries)
Harassment Campaigns
Motivation: Revenge, intimidation
Methods:
- Non-consensual intimate deepfakes
- Reputation damage
- Targeted harassment
Impact: 96% target women
Attack Tools & Platforms
Deepfake Creation Tools
Open Source:
- DeepFaceLab (GitHub: 40K+ stars)
- FaceSwap (GitHub: 48K+ stars)
- Wav2Lip (GitHub: 8K+ stars)
Commercial:
- Synthesia (text-to-video)
- Respeecher (voice cloning)
- D-ID (talking head generation)
Barrier to Entry: LOW
- Free tools available
- Minimal technical knowledge required
- Cloud computing accessible
Prompt Injection Tools
Research Tools:
- PromptInject (academic research)
- Garak (LLM vulnerability scanner)
Malicious Use:
- Automated jailbreak generators
- Injection payload databases
- Underground forums sharing techniques
Indicators of Compromise (IoCs)
Deepfake IoCs
class DeepfakeIoC:
indicators = {
'visual': [
'inconsistent_lighting',
'blurry_boundaries',
'unnatural_blinking',
'mismatched_skin_tone'
],
'audio': [
'robotic_cadence',
'background_noise_inconsistency',
'unnatural_breathing'
],
'metadata': [
'missing_exif',
'software_mismatch',
'timestamp_anomaly'
]
}
Prompt Injection IoCs
class InjectionIoC:
patterns = [
r'ignore\s+(all\s+)?previous',
r'system\s+prompt',
r'admin\s+mode',
r'debug\s+mode',
r'\[SYSTEM\]',
r'jailbreak',
r'DAN\s+mode'
]
behavioral = [
'excessive_output_length',
'policy_violation',
'out_of_scope_response',
'system_information_leak'
]
Threat Intelligence Feeds
Public Sources
-
MITRE ATT&CK for AI (ATLAS)
- https://atlas.mitre.org/
- Adversarial tactics and techniques
-
CISA Alerts
- https://www.cisa.gov/news-events/cybersecurity-advisories
- Government threat notifications
-
OWASP AI Security
- https://owasp.org/www-project-ai-security-and-privacy-guide/
- Vulnerability database
Commercial Feeds
- Sensity AI - Deepfake detection platform
- Microsoft Threat Intelligence - AI security
- Recorded Future - AI threat tracking
Emerging Threats (2025+)
Real-Time Deepfakes
Technology: Live face-swapping during video calls
Risk:
- Business email compromise
- Remote authentication bypass
- Virtual meeting infiltration
Detection: Liveness detection, behavioral biometrics
Multimodal Attacks
Combination: Deepfake + Prompt Injection
Scenario:
- Deepfake video of executive
- Prompt injection to AI assistant
- Automated approval of fraudulent transaction
Mitigation: Multi-factor verification, human oversight
AI-Generated Phishing
Evolution: LLMs create personalized phishing
Effectiveness:
- Traditional phishing: 3% click rate
- AI-generated: 15-20% click rate
Defense: Security awareness training, email authentication
Threat Modeling
STRIDE Framework (AI-Adapted)
class AIThreatModel:
def analyze(self, ai_system):
threats = {
'Spoofing': ['Deepfake identity theft'],
'Tampering': ['Training data poisoning'],
'Repudiation': ['Deny AI-generated content'],
'Information_Disclosure': ['Prompt injection data leak'],
'Denial_of_Service': ['Resource exhaustion attacks'],
'Elevation_of_Privilege': ['Jailbreak attempts']
}
return threats
Research Citations
- Sensity AI (2024) - State of Deepfakes Report
- Microsoft Security (2024) - AI Red Team Findings
- IBM Security (2024) - Cost of Data Breach
- MITRE ATLAS - https://atlas.mitre.org/
- CISA - https://www.cisa.gov/ai-security
Course Complete! Review Summary