Mengwei Xu
Preparation of Monoclonal Antibodies Against the gD Protein of Feline Herpesvirus Type-1 by mRNA Immunization
This study aimed to develop monoclonal antibodies (mAbs) against the gD protein of FHV-1 for rapid and specific virus detection. The gD protein, a highly conserved part of the FHV-1 envelope, is crucial for viral entry into host cells, mak…
Ubiquitous memory augmentation via mobile multimodal embedding system
Forgetting is inevitable in human memory. Recently, multimodal embedding models have been proposed to vectorize multimodal reality into a unified embedding space. Once generated, these embeddings allow mobile users to quickly retrieve rele…
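For readers unfamiliar with the retrieval pattern these memory-augmentation papers build on, the sketch below shows generic embedding-based retrieval: items are encoded into a shared vector space and a query is matched by cosine similarity. It is a minimal, self-contained illustration; the `encode` stand-in and the `MemoryStore` class are hypothetical and do not represent the system described in the article.

```python
# Minimal sketch of embedding-based retrieval (illustrative only; not the
# article's system). Assumes an `encode` function that maps any item --
# text, image caption, audio transcript -- to a fixed-size unit vector.
import numpy as np

def encode(item: str) -> np.ndarray:
    # Placeholder encoder: a real system would call a multimodal embedding
    # model here. This hash-seeded stand-in only keeps the example runnable.
    rng = np.random.default_rng(abs(hash(item)) % (2**32))
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)

class MemoryStore:
    """Stores embeddings of past observations and retrieves the closest ones."""
    def __init__(self):
        self.items, self.vectors = [], []

    def add(self, item: str):
        self.items.append(item)
        self.vectors.append(encode(item))

    def retrieve(self, query: str, k: int = 3):
        q = encode(query)
        sims = np.stack(self.vectors) @ q          # cosine similarity (unit vectors)
        top = np.argsort(-sims)[:k]
        return [(self.items[i], float(sims[i])) for i in top]

store = MemoryStore()
for obs in ["parked car on level B2", "left umbrella at cafe", "meeting notes from Monday"]:
    store.add(obs)
print(store.retrieve("where did I park?"))
```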
MobiEdit: Resource-efficient Knowledge Editing for Personalized On-device LLMs
Large language models (LLMs) are deployed on mobile devices to power killer applications such as intelligent assistants. LLMs pre-trained on general corpora often hallucinate when handling personalized or unseen queries, leading to incorre…
GUI-Shift: Enhancing VLM-Based GUI Agents through Self-supervised Reinforcement Learning
Training effective Vision-Language Models (VLMs) for GUI agents typically depends on large-scale annotated datasets, whose collection is both labor-intensive and error-prone. We introduce K-step GUI Transition, a self-supervised inverse dy…
LoRASuite: Efficient LoRA Adaptation Across Large Language Model Upgrades
As Large Language Models (LLMs) are frequently updated, LoRA weights trained on earlier versions quickly become obsolete. The conventional practice of retraining LoRA weights from scratch on the latest model is costly, time-consuming, and …
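As background for the LoRA-based work listed here, the sketch below shows the standard low-rank adapter formulation, h = W x + (alpha/r) B A x, in PyTorch. It illustrates plain LoRA only, under the usual assumptions (frozen base weight, trainable A and B); LoRASuite's cross-upgrade adaptation is not reproduced here.

```python
# Standard LoRA layer (general formulation only; LoRASuite's adaptation
# across model upgrades is not shown here).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():           # freeze pretrained weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # y = base(x) + scale * x A^T B^T  (only A and B are trained)
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 512])
```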
Uncertain Machine Ethics Planning
Machine Ethics decisions should consider the implications of uncertainty over decisions. Decisions should be made over sequences of actions to reach preferable outcomes long term. The evaluation of outcomes, however, may invoke one or more…
Does Chain-of-Thought Reasoning Help Mobile GUI Agent? An Empirical Study
Reasoning capabilities have significantly improved the performance of vision-language models (VLMs) in domains such as mathematical problem-solving, coding, and visual question-answering. However, their impact on real-world applications re…
Ubiquitous Memory Augmentation via Mobile Multimodal Embedding System
Forgetting is inevitable in human memory. Recently, multimodal embedding models have been proposed to vectorize multimodal reality into a unified embedding space. The generated embeddings can be easily retrieved to help mobile users rememb…
Resource-efficient Algorithms and Systems of Foundation Models: A Survey
Large foundation models, including large language models, vision transformers, diffusion, and large language model based multimodal models, are revolutionizing the entire machine learning lifecycle, from training to deployment. However, th…
Proceedings Sixth International Workshop on Formal Methods for Autonomous Systems: Manchester, UK, 11th and 12th of November 2024
This EPTCS volume contains the papers from the Sixth International Workshop on Formal Methods for Autonomous Systems (FMAS 2024), which was held between the 11th and 12th of November 2024. FMAS 2024 was co-located with the 19th International C…
PhoneLM: an Efficient and Capable Small Language Model Family through Principled Pre-training
The interest in developing small language models (SLM) for on-device deployment is fast growing. However, the existing SLM design hardly considers the device hardware characteristics. Instead, this work presents a simple yet effective prin…
A Practical Operational Semantics for Classical Planning in BDI Agents
Implementations of the Belief-Desire-Intention (BDI) architecture have a long tradition in the development of autonomous agent systems. However, most practical implementations of the BDI framework rely on a pre-defined plan library for dec…
MobileViews: A Million-scale and Diverse Mobile GUI Dataset
Visual language models (VLMs) empower mobile GUI agents to interpret complex mobile screens and respond to user requests. Training such capable agents requires large-scale, high-quality mobile GUI data. However, existing mobile GUI dataset…
Recall: Empowering Multimodal Embedding for Edge Devices
Human memory is inherently prone to forgetting. To address this, multimodal embedding models have been introduced, which transform diverse real-world data into a unified embedding space. These embeddings can be retrieved efficiently, aidin…
Elastic On-Device LLM Service
On-device Large Language Models (LLMs) are transforming mobile AI, catalyzing applications like UI automation without privacy concerns. Nowadays the common practice is to deploy a single yet powerful LLM as a general task solver for multip…
FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts
As Large Language Models (LLMs) push the boundaries of AI capabilities, their demand for data is growing. Much of this data is private and distributed across edge devices, making Federated Learning (FL) a de-facto alternative for fine-tuni…
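For context on the mixture-of-experts building block, the sketch below is a minimal top-k MoE layer in PyTorch: a gate scores the experts for each input and the top-k expert outputs are combined with softmax weights. The layer sizes and routing loop are illustrative assumptions; FedMoE's federated, heterogeneous expert assignment is not shown.

```python
# Minimal top-k mixture-of-experts layer (general MoE routing only; the
# federated, heterogeneous construction in FedMoE is not shown here).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])
        self.gate = nn.Linear(dim, num_experts)
        self.k = k

    def forward(self, x):                      # x: (batch, dim)
        scores = self.gate(x)                  # (batch, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # inputs routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

moe = TopKMoE(dim=64)
print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```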
ShortcutsBench: A Large-Scale Real-world Benchmark for API-based Agents
Recent advancements in integrating large language models (LLMs) with application programming interfaces (APIs) have gained significant interest in both academia and industry. Recent work demonstrates that these API-based agents exhibit rel…
Large Language Models on Mobile Devices: Measurements, Analysis, and Insights
Deploying large language model (LLM) inference on mobile devices is cost-efficient for companies and addresses users' privacy concerns. However, the limited computation capacity and memory constraints of mobile devices hinde…
Proceedings of the Workshop on Edge and Mobile Foundation Models
Deploying large language model (LLM) inference on mobile devices is cost-efficient for companies and addresses users' privacy concerns. However, the limited computation capacity and memory constraints of mobile devices hinder…
WiP: Efficient LLM Prefilling with Mobile NPU
Large language models (LLMs) play a crucial role in various Natural Language Processing (NLP) tasks, prompting their deployment on mobile devices for inference. However, a significant challenge arises due to high waiting latency, especiall…
Poster: Efficient and Accurate Mobile Task Automation through Learning from Code
With the emergence and continuous prosperity of large language models (LLMs), artificial intelligence (AI) agents have experienced rapid advancements. Most mobile AI agents merely imitate human operations, executing actions based on the hu…
Mobile Foundation Model as Firmware
In the current AI era, mobile devices such as smartphones are tasked with executing a myriad of deep neural networks (DNNs) locally. It presents a complex landscape, as these models are highly fragmented in terms of architecture, operators…
Deciphering the Enigma of Satellite Computing with COTS Devices: Measurement and Analysis
In the wake of the rapid deployment of large-scale low-Earth orbit satellite constellations, exploiting the full computing potential of Commercial Off-The-Shelf (COTS) devices in these environments has become a pressing issue. However, und…
The CAP Principle for LLM Serving: A Survey of Long-Context Large Language Model Serving
We survey the large language model (LLM) serving area to understand the intricate dynamics between cost-efficiency and accuracy, which is magnified by the growing need for longer contextual understanding when deploying models at a massive …
LlamaTouch: A Faithful and Scalable Testbed for Mobile UI Task Automation
The emergent large language/multimodal models facilitate the evolution of mobile agents, especially in mobile UI task automation. However, existing evaluation approaches, which rely on human validation or established datasets to compare ag…
A Survey of Backpropagation-free Training For LLMs
Large language models (LLMs) have achieved remarkable performance in various downstream tasks. However, training LLMs is computationally expensive and requires a large amount of memory. To address this issue, backpropagation-free (BP-free) t…