Model evaluation for extreme risks
Toby Shevlane, Sebastian Farquhar, Ben Garfinkel, Mary Phuong, Jess Whittlestone, Jade Leung, Daniel Kokotajlo, Nahema Marchal, Markus Anderljung, Noam Kolt, Lewis Ho, Divya Siddarth, Shahar Avin, Will Hawkins, Been Kim, Iason Gabriel, Vijay Bolina, Jack Clark, Yoshua Bengio, Paul Christiano, Allan Dafoe
2023 · Open Access · DOI: https://doi.org/10.48550/arxiv.2305.15324 · OA: W4378474292
Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why model evaluation is critical for addressing extreme risks. Developers must be able to identify dangerous capabilities (through "dangerous capability evaluations") and the propensity of models to apply their capabilities for harm (through "alignment evaluations"). These evaluations will become critical for keeping policymakers and other stakeholders informed, and for making responsible decisions about model training, deployment, and security.