Jonas Schuett
A Grading Rubric for AI Safety Frameworks
Over the past year, artificial intelligence (AI) companies have been increasingly adopting AI safety frameworks. These frameworks outline how companies intend to keep the potential risks associated with developing and deploying frontier AI…
On Regulating Downstream AI Developers
Foundation models – models trained on broad data that can be adapted to a wide range of downstream tasks – can pose significant risks, ranging from intimate image abuse and cyberattacks to bioterrorism. To reduce these risks, policymakers ar…
Third-party compliance reviews for frontier AI safety frameworks
Safety frameworks have emerged as a best practice for managing risks from frontier artificial intelligence (AI) systems. However, it may be difficult for stakeholders to know if companies are adhering to their frameworks. This paper explor…
Towards risk-based AI regulation
In this thesis, I explore key elements of risk-based artificial intelligence (AI) regulation. First, I discuss how the scope of risk-based AI regulations should be defined (Chapter 1). Then, I analyse the key risk management provision in t…
Safety case template for frontier AI: A cyber inability argument
Frontier artificial intelligence (AI) systems pose increasing risks to society, making it essential for developers to provide assurances about their safety. One approach to offering such assurances is through a safety case: a structured, e…
Safety cases for frontier AI
As frontier artificial intelligence (AI) systems become more capable, it becomes more important that developers can explain why their systems are sufficiently safe. One way to do so is via safety cases: reports that make a structured argum…
Frontier AI developers need an internal audit function
This article argues that frontier artificial intelligence (AI) developers need an internal audit function. First, it describes the role of internal audit in corporate governance: internal audit evaluates the adequacy and effectiveness of a…
From Principles to Rules: A Regulatory Approach for Frontier AI
Several jurisdictions are starting to regulate frontier artificial intelligence (AI) systems, i.e. general-purpose AI systems that match or exceed the capabilities present in the most advanced systems. To reduce risks from these systems, r…
Risk thresholds for frontier AI
Frontier artificial intelligence (AI) systems could pose increasing risks to public safety and security. But what level of risk is acceptable? One increasingly popular approach is to define capability thresholds, which describe AI capabili…
How to design an AI ethics board
The development and deployment of artificial intelligence (AI) systems poses significant risks to society. To reduce these risks to an acceptable level, AI companies need an effective risk management process and sound risk governance. In t…
Three lines of defense against risks from AI
Organizations that develop and deploy artificial intelligence (AI) systems need to manage the associated risks—for economic, legal, and ethical reasons. However, it is not always clear who is responsible for AI risk management. The three l…
Towards Publicly Accountable Frontier LLMs: Building an External Scrutiny Ecosystem under the ASPIRE Framework
With the increasing integration of frontier large language models (LLMs) into society and the economy, decisions related to their training, deployment, and use have far-reaching implications. These decisions should not be left solely in th…
Coordinated pausing: An evaluation-based coordination scheme for frontier AI developers
As artificial intelligence (AI) models are scaled up, new capabilities can emerge unintentionally and unpredictably, some of which might be dangerous. In response, dangerous capabilities evaluations have emerged as a new risk assessment to…
Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives
Recent decisions by leading AI labs to either open-source their models or restrict access to them have sparked debate about whether, and how, increasingly capable AI models should be shared. Open-sourcing in AI typically refers t…
Risk assessment at AGI companies: A review of popular risk assessment techniques from other safety-critical industries
Companies like OpenAI, Google DeepMind, and Anthropic have the stated goal of building artificial general intelligence (AGI) - AI systems that perform as well as or better than humans on a wide variety of cognitive tasks. However, there ar…
Frontier AI Regulation: Managing Emerging Risks to Public Safety
Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term "frontier AI" models: highly capable foundation models that co…
Auditing large language models: a three-layered approach
Large language models (LLMs) represent a major advance in artificial intelligence (AI) research. However, the widespread use of LLMs is also coupled with significant ethical and social challenges. Previous research has pointed towards audi…
Towards best practices in AGI safety and governance: A survey of expert opinion
A number of leading AI companies, including OpenAI, Google DeepMind, and Anthropic, have the stated goal of building artificial general intelligence (AGI) - AI systems that achieve or exceed human performance across a wide range of cogniti…
Risk Management in the Artificial Intelligence Act
The proposed Artificial Intelligence Act (AI Act) is the first comprehensive attempt to regulate artificial intelligence (AI) in a major jurisdiction. This article analyses Article 9, the key risk management provision in the AI Act. It giv…
Defining the scope of AI regulations
The paper argues that the material scope of AI regulations should not rely on the term "artificial intelligence (AI)". The argument is developed by proposing a number of requirements for legal definitions, surveying existing AI definitions…
Corporate Governance of Artificial Intelligence in the Public Interest
Corporations play a major role in artificial intelligence (AI) research, development, and deployment, with profound consequences for society. This paper surveys opportunities to improve how corporations govern their AI activities so as to …
AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries
As artificial intelligence (AI) systems are increasingly deployed, principles for ethical AI are also proliferating. Certification offers a method to both incentivize adoption of these principles and substantiate that they have been implem…