Dave Braines
Coalitions of Large Language Models Increase the Robustness of AI Agents
The emergence of Large Language Models (LLMs) has fundamentally altered the way we interact with digital systems and has led to the pursuit of LLM-powered AI agents to assist in daily workflows. LLMs, whilst powerful and capable of demon…
Can we Constrain Concept Bottleneck Models to Learn Semantically Meaningful Input Features?
Concept Bottleneck Models (CBMs) are regarded as inherently interpretable because they first predict a set of human-defined concepts which are used to predict a task label. For inherent interpretability to be fully realised, and ensure tru…
Towards a Deeper Understanding of Concept Bottleneck Models Through End-to-End Explanation
Concept Bottleneck Models (CBMs) first map raw input(s) to a vector of human-defined concepts, before using this vector to predict a final classification. We might therefore expect CBMs to be capable of predicting concepts based on distinct regi…
Cognitive analysis in sports: Supporting match analysis and scouting through artificial intelligence
In elite sports, there is an opportunity to take advantage of rich and detailed datasets generated across multiple threads of the sporting business. Challenges currently exist due to the time constraints on analysing the data, as well as the qua…
The Science Library: Curation and visualization of a science gateway repository
Scientific publications from a group or consortium often form a coherent larger body of work with underlying threads and relationships. Rich social, structural, and topical networks between authors and organizations can be identifi…
An Experimentation Platform for Explainable Coalition Situational Understanding
We present an experimentation platform for coalition situational understanding research that highlights capabilities in explainable artificial intelligence/machine learning (AI/ML) and integration of symbolic and subsymbolic AI/ML approach…
Towards human-agent knowledge fusion (HAKF) in support of distributed coalition teams
Future coalition operations can be substantially augmented through agile teaming between human and machine agents, but in a coalition context these agents may be unfamiliar to the human users and expected to operate in a broad set of scena…
Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI
Artificial intelligence (AI) systems hold great promise as decision-support tools, but we must be able to identify and understand their inevitable mistakes if they are to fulfill this potential. This is particularly true in domains where t…
Explainable AI for Intelligence Augmentation in Multi-Domain Operations
Central to the concept of multi-domain operations (MDO) is the utilization of an intelligence, surveillance, and reconnaissance (ISR) network consisting of overlapping systems of remote and autonomous sensors, and human intelligence, distr…
gl2vec: Learning Feature Representation Using Graphlets for Directed Networks
Learning network representation has a variety of applications, such as network classification. Most existing work in this area focuses on static undirected networks and does not account for the presence of directed edges or temporal chan…
Learning Features of Network Structures Using Graphlets
Networks are fundamental to the study of complex systems, ranging from social contacts and message transactions to biological regulation and economic networks. In many realistic applications, these networks may vary over time. Modeling an…
Stakeholders in Explainable AI
There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable. However, there is no general consensus over what is meant by 'explainable' and 'interpret…
Hows and Whys of Artificial Intelligence for Public Sector Decisions: Explanation and Evaluation
Evaluation has always been a key challenge in the development of artificial intelligence (AI) based software, due to the technical complexity of the software artifact and, often, its embedding in complex sociotechnical processes. Recent ad…
Network Classification in Temporal Networks Using Motifs
Network classification has a variety of applications, such as detecting communities within networks and finding similarities between those representing different aspects of the real world. However, most existing work in this area focuses on …
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom is it interpretable. We describe a m…
Sherlock: Experimental Evaluation of a Conversational Agent for Mobile Information Tasks
Controlled Natural Language (CNL) has great potential to support human-machine interaction (HMI) because it provides an information representation that is both human readable and machine processable. We investigated the effectiveness of a …
Human Computer Collaboration at the Edge: Enhancing Collective Situation Understanding with Controlled Natural Language
Effective coalition operations require support for dynamic information gathering, processing, and sharing at the network edge for Collective Situation Understanding (CSU). To enhance CSU and leverage the combined strengths of humans and ma…