Chejian Xu
DiffScene: Diffusion-Based Safety-Critical Scenario Generation for Autonomous Vehicles
The field of Autonomous Driving (AD) has witnessed significant progress in recent years. Among the various challenges faced, the safety evaluation of autonomous vehicles (AVs) stands out as a critical concern. Traditional evaluation method…
COMMIT: Certifying Robustness of Multi-Sensor Fusion Systems Against Semantic Attacks
Multi-sensor fusion systems (MSFs) play a vital role as the perception module in modern autonomous vehicles (AVs). Therefore, ensuring their robustness against common and realistic adversarial semantic transformations, such as rotation and…
MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models
Multimodal foundation models (MMFMs) play a crucial role in various applications, including autonomous driving, healthcare, and virtual assistants. However, several studies have revealed vulnerabilities in these models, such as generating …
AdvWave: Stealthy Adversarial Jailbreak Attack against Large Audio-Language Models
Recent advancements in large audio-language models (LALMs) have enabled speech-based user interactions, significantly enhancing user experience and accelerating the deployment of LALMs in real-world applications. However, ensuring the safe…
AdvAgent: Controllable Blackbox Red-teaming on Web Agents
Foundation model-based agents are increasingly used to automate complex tasks, enhancing efficiency and productivity. However, their access to sensitive resources and autonomous decision-making also introduce significant security risks, wh…
EIA: Environmental Injection Attack on Generalist Web Agents for Privacy Leakage
Generalist web agents have demonstrated remarkable potential in autonomously completing a wide range of tasks on real websites, significantly boosting human productivity. However, web tasks, such as booking flights, usually involve users' …
ChatScene: Knowledge-Enabled Safety-Critical Scenario Generation for Autonomous Vehicles
We present ChatScene, a Large Language Model (LLM)-based agent that leverages the capabilities of LLMs to generate safety-critical scenarios for autonomous vehicles. Given unstructured language instructions, the agent first generates textu…
KnowHalu: Hallucination Detection via Multi-Form Knowledge Based Factual Checking
This paper introduces KnowHalu, a novel approach for detecting hallucinations in text generated by large language models (LLMs), utilizing step-wise reasoning, multi-formulation query, multi-form knowledge for factual checking, and fusion-…
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains li…
Copy Motion From One to Another: Fake Motion Video Generation
One compelling application of artificial intelligence is to generate a video of a target person performing arbitrary desired motion (from a source person). While the state-of-the-art methods are able to synthesize a video demonstrating sim…
SafeBench: A Benchmarking Platform for Safety Evaluation of Autonomous Vehicles
As shown by recent studies, machine intelligence-enabled systems are vulnerable to test cases resulting from either adversarial manipulation or natural distribution shifts. This has raised great concerns about deploying machine learning al…
SemAttack: Natural Textual Attacks via Different Semantic Spaces
Recent studies show that pre-trained language models (LMs) are vulnerable to textual adversarial attacks. However, existing attack methods either suffer from low attack success rates or fail to search efficiently in the exponentially large…
COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks
As reinforcement learning (RL) has achieved near human-level performance in a variety of tasks, its robustness has raised great attention. While a vast body of research has explored test-time (evasion) attacks in RL and corresponding defen…
A Survey on Safety-Critical Driving Scenario Generation -- A Methodological Perspective
Autonomous driving systems have witnessed a significant development during the past years thanks to the advance in machine learning-enabled sensing and decision-making algorithms. One critical challenge for their massive deployment in the …
Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models
Large-scale pre-trained language models have achieved tremendous success across a wide range of natural language understanding (NLU) tasks, even surpassing human performance. However, recent studies reveal that the robustness of these mode…