Michael Hind
Developing a Risk Identification Framework for Foundation Model Uses
As foundation models grow in both popularity and capability, researchers have uncovered a variety of ways that these models can pose risks to their owners, users, or others. Despite efforts to measure these risks via benchmarks an…
Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations
Large language models (LLMs) are susceptible to a variety of risks, from non-faithful output to biased and toxic generations. Due to several limiting factors surrounding LLMs (training cost, API access, data availability, etc.), it may not…
Quantitative AI Risk Assessments: Opportunities and Challenges
Although AI systems are increasingly being leveraged to provide value to organizations, individuals, and society, significant attendant risks have been identified and have manifested. These risks have led to proposed regulations, litigatio…
AI Explainability 360: Impact and Design
As artificial intelligence and machine learning algorithms become increasingly prevalent in society, multiple stakeholders are calling for these algorithms to provide explanations. At the same time, these stakeholders, whether they be affe…
Evaluating a Methodology for Increasing AI Transparency: A Case Study
In reaction to growing concerns about the potential harms of artificial intelligence (AI), societies have begun to demand more transparency about how AI models and systems are created and used. To address these concerns, several efforts ha…
A Methodology for Creating AI FactSheets
As AI models and services are used in a growing number of high-stakes areas, a consensus is forming around the need for a clearer record of how these models and services are developed to increase trust. Several proposals for higher quality …
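The FactSheets methodology centers on eliciting facts about a model or service from the team that built it. As a rough illustration only (the paper prescribes a process, not a fixed schema; every field name below is hypothetical), a completed FactSheet can be represented as structured data:

```python
import json

# A minimal sketch of a completed FactSheet as structured data.
# Field names and values are illustrative, not the paper's schema.
factsheet = {
    "model_name": "loan-approval-classifier",
    "purpose": "Rank loan applications for human review.",
    "intended_domain": "Consumer lending",
    "training_data": "Internal applications, 2015-2019, anonymized",
    "metrics": {"accuracy": 0.87, "disparate_impact": 0.93},
    "caveats": "Not validated for applicants outside the US.",
}

# Render the record for reviewers or downstream consumers.
print(json.dumps(factsheet, indent=2))
```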
Trust and Transparency in Contact Tracing Applications
The global outbreak of COVID-19 has led to a focus on efforts to manage and mitigate the continued spread of the disease. One of these efforts includes the use of contact tracing to identify people who are at risk of developing the disease th…
Experiences with Improving the Transparency of AI Models and Services
AI models and services are used in a growing number of high-stakes areas, resulting in a need for increased transparency. Consistent with this, several proposals for higher quality and more consistent documentation of AI data, models, and s…
Consumer-Driven Explanations for Machine Learning Decisions: An Empirical Study of Robustness
Many proposed methods for explaining machine learning predictions are, in fact, challenging for nontechnical consumers to understand. This paper builds upon an alternative consumer-driven approach called TED that asks for explanations to be …
One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they…
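The toolkit described here, AI Explainability 360 (AIX360), is available as the open-source aix360 Python package. Below is a minimal sketch, assuming the package is installed, of one of its methods (ProtoDash, which summarizes a dataset by a small weighted set of prototype examples); the argument order and return values follow my reading of the AIX360 examples and should be checked against the toolkit docs:

```python
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer  # pip install aix360

# Toy data: 200 points in 4 dimensions; substitute a real feature matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))

# Select m prototype rows that best summarize X (here X summarizes itself).
explainer = ProtodashExplainer()
weights, indices, _ = explainer.explain(X, X, m=5)

print("prototype rows:", indices)
print("importance weights:", np.round(weights, 3))
```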
FactSheets: Increasing trust in AI services through supplier's declarations of conformity
Accuracy is an important concern for suppliers of artificial intelligence (AI) services, but considerations beyond accuracy, such as safety (which includes fairness and explainability), security, and provenance, are also critical elements …
Teaching AI to Explain its Decisions Using Embeddings and Multi-Task Learning
Using machine learning in high-stakes applications often requires predictions to be accompanied by explanations comprehensible to the domain user, who has ultimate responsibility for decisions and outcomes. Recently, a new framework for pr…
TED: Teaching AI to Explain its Decisions
Artificial intelligence systems are being increasingly deployed due to their potential to increase the efficiency, scale, consistency, fairness, and accuracy of decisions. However, as many of these systems are opaque in their operation, th…
Promoting Distributed Trust in Machine Learning and Computational Simulation via a Blockchain Network
Policy decisions are increasingly dependent on the outcomes of simulations and/or machine learning models. The ability to share and interact with these outcomes is relevant across multiple fields and is especially critical in the disease m…
AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
Fairness is an increasingly important concern as machine learning models are used to support decision making in high-stakes applications such as mortgage lending, hiring, and prison sentencing. This paper introduces a new open source Pytho…
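The toolkit, AI Fairness 360 (AIF360), ships as the open-source aif360 Python package. A minimal sketch, assuming the package is installed and using a made-up toy dataset: compute a group-fairness metric, then apply one of the toolkit's pre-processing mitigations:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset       # pip install aif360
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [10, 8, 9, 4, 7, 3, 2, 1],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})
ds = BinaryLabelDataset(df=df, label_names=["label"],
                        protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]
metric = BinaryLabelDatasetMetric(ds, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
# Ratio of favorable-outcome rates (1.0 means parity between groups).
print("disparate impact:", metric.disparate_impact())

# Reweighing: adjust example weights so outcomes become independent
# of the protected attribute before a model is trained.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
ds_transf = rw.fit_transform(ds)
print("transformed weights:", ds_transf.instance_weights)
```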
Trusted Multi-Party Computation and Verifiable Simulations: A Scalable Blockchain Approach
Large-scale computational experiments, often running over weeks and over large datasets, are used extensively in fields such as epidemiology, meteorology, computational biology, and healthcare to understand phenomena, and design high-stake…
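The paper's specific protocol is more involved, but the core property it relies on, that any retroactive edit to a recorded result is detectable, can be illustrated with a plain hash chain. This sketch is a generic stand-in, not the authors' design:

```python
import hashlib, json

def record(chain, payload):
    """Append a tamper-evident entry linked to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev, "payload": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev, "payload": entry["payload"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
record(chain, {"experiment": "sim-42", "result": 0.87})
record(chain, {"experiment": "sim-43", "result": 0.91})
print(verify(chain))                      # True
chain[0]["payload"]["result"] = 0.99      # tamper with a recorded result
print(verify(chain))                      # False: edit detected
```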
Collaborative Human-AI (CHAI): Evidence-Based Interpretable Melanoma Classification in Dermoscopic Images
Automated dermoscopic image analysis has witnessed rapid growth in diagnostic performance. Yet adoption faces resistance, in part, because no evidence is provided to support decisions. In this work, an approach for evidence-based classi…
Teaching Meaningful Explanations
The adoption of machine learning in high-stakes applications such as healthcare and law has lagged in part because predictions are not accompanied by explanations comprehensible to the domain user, who often holds the ultimate responsibili…
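In its simplest instantiation, the TED framework described in these papers trains on the Cartesian product of the decision label and the expert-supplied explanation, treating each (label, explanation) pair as one combined class that a standard classifier can learn and that is decoded at prediction time. A minimal sketch with scikit-learn; the features and explanation codes are invented for illustration:

```python
from sklearn.ensemble import RandomForestClassifier

# Toy training data: features X, decision label Y, and an explanation
# code E supplied by the domain expert for each example (all invented).
X = [[0, 1], [1, 1], [1, 0], [0, 0], [1, 1], [0, 1]]
Y = [1, 1, 0, 0, 1, 0]
E = ["good_credit", "good_credit", "low_income", "low_income",
     "stable_job", "thin_file"]

# Encode each (label, explanation) pair as a single combined class.
YE = [f"{y}|{e}" for y, e in zip(Y, E)]
clf = RandomForestClassifier(random_state=0).fit(X, YE)

# Decode the combined prediction back into a label and an explanation.
pred = clf.predict([[1, 1]])[0]
label, explanation = pred.split("|", 1)
print(label, explanation)
```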