AIM BLOG

Latest Insights.

Read the latest insights on AI security technologies, industry trends, and prompt engineering from the AIM Intelligence research and engineering teams.
SECURITY NOV 15, 2024

Indirect Prompt Injection Attacks Against Web Agents

Explore how EIA, AdvWeb, and WIPI attack methods exploit vulnerabilities in VLM-powered web agents, revealing serious security concerns for AI systems that interact with web environments.

Read Post →
SECURITY NOV 9, 2024

Defending Web Agents: Advanced Security Strategies through AdvWeb and BrowserART

Explore cutting-edge methodologies for identifying and mitigating vulnerabilities in VLM-powered web agents, including the AdvWeb attack framework and BrowserART red teaming toolkit.

Read Post →
RESEARCH NOV 9, 2024

Refining Vision-Language Model Benchmarks: Base Query Generation and Toxicity Analysis

In existing VLM safety benchmarks, the text query alone is sometimes informative enough to answer without the image. We explore base query generation and toxicity measurement methods that address this gap.

Read Post →
RESEARCH NOV 8, 2024

AIM RED TEAM: Insights from the KAIST Lab Meeting on Persona-Based Jailbreak Strategies

This week, we held a productive meeting with the KAIST lab to refine the direction of our ongoing research project and to solidify our experimental design. The focus was on integrating psychological approaches with LLMs to design jailbreak prompts.

Read Post →
RESEARCH NOV 2, 2024

Evaluating Text-based VLM Attack Methods: An In-depth Look at FigStep

Evaluating VLM safety requires methods that account for the unique characteristics of vision-language models. We analyze the FigStep and RTVLM datasets to assess typographic visual prompt attacks.

Read Post →

Ready to secure your AI?

Consult with AIM Intelligence's security experts and request a free red teaming demo optimized for your system.

EXPLORE PLATFORM