-
Resource Rational Contractualism Should Guide AI Alignment
Authors:
Sydney Levine,
Matija Franklin,
Tan Zhi-Xuan,
Secil Yanik Guyot,
Lionel Wong,
Daniel Kilov,
Yejin Choi,
Joshua B. Tenenbaum,
Noah Goodman,
Seth Lazar,
Iason Gabriel
Abstract:
AI systems will soon have to navigate human environments and make decisions that affect people and other AI agents whose goals and values diverge. Contractualist alignment proposes grounding those decisions in agreements that diverse stakeholders would endorse under the right conditions, yet securing such agreement at scale remains costly and slow -- even for advanced AI. We therefore propose Resource-Rational Contractualism (RRC): a framework where AI systems approximate the agreements rational parties would form by drawing on a toolbox of normatively-grounded, cognitively-inspired heuristics that trade effort for accuracy. An RRC-aligned agent would not only operate efficiently, but also be equipped to dynamically adapt to and interpret the ever-changing human social world.
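As a rough illustration of the effort-for-accuracy trade-off such a heuristic toolbox implies, the sketch below picks a decision procedure by balancing estimated agreement with a fully negotiated outcome against computational cost. The heuristic names, scores, and weights are invented for illustration and are not drawn from the paper.

```python
# Illustrative sketch only: selecting a decision procedure from a toolbox of
# heuristics by trading accuracy against computational effort. Names and
# numbers are hypothetical, not from the RRC paper.
from dataclasses import dataclass

@dataclass
class Heuristic:
    name: str
    expected_agreement: float  # estimated agreement with a fully negotiated outcome, in [0, 1]
    compute_cost: float        # reasoning effort, in arbitrary units

def choose_procedure(toolbox, cost_weight):
    """Pick the heuristic that maximises expected agreement minus weighted cost."""
    return max(toolbox, key=lambda h: h.expected_agreement - cost_weight * h.compute_cost)

toolbox = [
    Heuristic("follow_established_norm", expected_agreement=0.80, compute_cost=0.05),
    Heuristic("simulate_affected_parties", expected_agreement=0.92, compute_cost=0.40),
    Heuristic("full_virtual_bargaining", expected_agreement=0.99, compute_cost=1.00),
]

# In low-stakes settings a cheap norm-following heuristic wins; raising the
# stakes (lowering cost_weight) shifts the choice toward fuller deliberation.
print(choose_procedure(toolbox, cost_weight=0.5).name)   # follow_established_norm
print(choose_procedure(toolbox, cost_weight=0.05).name)  # full_virtual_bargaining
```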
Submitted 20 June, 2025;
originally announced June 2025.
-
Characterizing AI Agents for Alignment and Governance
Authors:
Atoosa Kasirzadeh,
Iason Gabriel
Abstract:
The creation of effective governance mechanisms for AI agents requires a deeper understanding of their core properties and how these properties relate to questions surrounding the deployment and operation of agents in the world. This paper provides a characterization of AI agents that focuses on four dimensions: autonomy, efficacy, goal complexity, and generality. We propose different gradations for each dimension, and argue that each dimension raises unique questions about the design, operation, and governance of these systems. Moreover, we draw upon this framework to construct "agentic profiles" for different kinds of AI agents. These profiles help to illuminate cross-cutting technical and non-technical governance challenges posed by different classes of AI agents, ranging from narrow task-specific assistants to highly autonomous general-purpose systems. By mapping out key axes of variation and continuity, this framework provides developers, policymakers, and members of the public with the opportunity to develop governance approaches that better align with collective societal goals.
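One way to picture an "agentic profile" is as a simple record over the four dimensions named above. The three-level scale and the example systems in this sketch are assumptions for illustration, not the gradations proposed in the paper.

```python
# Illustrative encoding of an "agentic profile" over the four dimensions
# named in the abstract. The three-level scale and example systems are
# hypothetical, not the paper's own gradations.
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass(frozen=True)
class AgenticProfile:
    autonomy: Level
    efficacy: Level
    goal_complexity: Level
    generality: Level

narrow_task_assistant = AgenticProfile(
    autonomy=Level.LOW, efficacy=Level.LOW,
    goal_complexity=Level.LOW, generality=Level.LOW,
)
autonomous_general_agent = AgenticProfile(
    autonomy=Level.HIGH, efficacy=Level.HIGH,
    goal_complexity=Level.HIGH, generality=Level.HIGH,
)
```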
Submitted 30 April, 2025;
originally announced April 2025.
-
Value Profiles for Encoding Human Variation
Authors:
Taylor Sorensen,
Pushkar Mishra,
Roma Patel,
Michael Henry Tessler,
Michiel Bakker,
Georgina Evans,
Iason Gabriel,
Noah Goodman,
Verena Rieser
Abstract:
Modelling human variation in rating tasks is crucial for enabling AI systems to support personalization, pluralistic model alignment, and computational social science. We propose representing individuals using value profiles -- natural language descriptions of underlying values compressed from in-context demonstrations -- along with a steerable decoder model to estimate ratings conditioned on a value profile or other rater information. To measure the predictive information in rater representations, we introduce an information-theoretic methodology. We find that demonstrations contain the most information, followed by value profiles and then demographics. However, value profiles offer advantages in terms of scrutability, interpretability, and steerability due to their compressed natural language format. Value profiles effectively compress the useful information from demonstrations (>70% information preservation). Furthermore, clustering value profiles to identify similarly behaving individuals better explains rater variation than the most predictive demographic groupings. Going beyond test set performance, we show that the decoder models interpretably change ratings according to semantic profile differences, are well-calibrated, and can help explain instance-level disagreement by simulating an annotator population. These results demonstrate that value profiles offer novel, predictive ways to describe individual variation beyond demographics or group information.
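To make the information-preservation comparison concrete, here is a hedged sketch of the kind of calculation involved: how much of the reduction in predictive loss obtained from full demonstrations is retained when a rater is summarised by a value profile. The decoder interface and helper names are assumptions, not the paper's implementation.

```python
# Hedged sketch of an information-preservation comparison between rater
# representations. `decoder` stands in for the paper's steerable decoder
# model; the interface and names are assumptions, not released code.
import numpy as np

def mean_nll(decoder, rater_repr, items, ratings):
    """Average negative log-likelihood of held-out ratings when the decoder
    is conditioned on some representation of the rater."""
    probs = [decoder(rater_repr, item)[rating] for item, rating in zip(items, ratings)]
    return -float(np.mean(np.log(probs)))

def information_preserved(nll_no_info, nll_profile, nll_demos):
    """Fraction of the NLL reduction achieved by full demonstrations that is
    retained by the compressed value profile."""
    return (nll_no_info - nll_profile) / (nll_no_info - nll_demos)

# e.g. information_preserved(1.10, 0.85, 0.78) ~= 0.78, i.e. ~78% preserved
# (numbers are invented for illustration).
```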
Submitted 19 March, 2025;
originally announced March 2025.
-
Do LLMs exhibit demographic parity in responses to queries about Human Rights?
Authors:
Rafiya Javed,
Jackie Kay,
David Yanni,
Abdullah Zaini,
Anushe Sheikh,
Maribeth Rauh,
Ramona Comanescu,
Iason Gabriel,
Laura Weidinger
Abstract:
This research describes a novel approach to evaluating hedging behaviour in large language models (LLMs), specifically in the context of human rights as defined in the Universal Declaration of Human Rights (UDHR). Hedging and non-affirmation are behaviours that express ambiguity or a lack of clear endorsement of specific statements. These behaviours are undesirable in certain contexts, such as queries about whether different groups are entitled to specific human rights, since all people are entitled to human rights. Here, we present the first systematic attempt to measure these behaviours in the context of human rights, with a particular focus on between-group comparisons. To this end, we design a novel prompt set on human rights in the context of different national or social identities. We develop metrics to capture hedging and non-affirmation behaviours and then measure whether LLMs exhibit demographic parity when responding to the queries. We present results on three leading LLMs and find that all models exhibit some demographic disparities in how they attribute human rights between different identity groups. Furthermore, there is a high correlation between models in how disparity is distributed across identities: identities that face high disparity in one model also face high disparity in the other two. While baseline rates of hedging and non-affirmation differ, these disparities are consistent across queries that vary in ambiguity and robust to variations in the precise query wording. Our findings highlight the need to explicitly align LLMs with human rights principles, and to ensure that LLMs endorse the human rights of all groups equally.
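A simple sketch of how per-group hedging rates and a between-group disparity might be computed; the placeholder detector and cue phrases below are assumptions, not the metrics developed in the paper.

```python
# Illustrative sketch (not the paper's code) of a parity-style comparison:
# measure how often responses hedge for each identity group and report the
# largest between-group gap. The hedging detector is a crude stub.
def is_hedged(response: str) -> bool:
    # Placeholder; the paper develops proper metrics for hedging and
    # non-affirmation.
    cues = ("it depends", "some would argue", "i cannot say")
    return any(cue in response.lower() for cue in cues)

def hedging_rates(responses_by_group):
    """responses_by_group: dict mapping identity group -> list of model responses."""
    return {group: sum(map(is_hedged, responses)) / len(responses)
            for group, responses in responses_by_group.items()}

def max_disparity(rates):
    """Largest gap in hedging rate between any two identity groups."""
    return max(rates.values()) - min(rates.values())
```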
Submitted 26 February, 2025;
originally announced February 2025.
-
Multi-Agent Risks from Advanced AI
Authors:
Lewis Hammond,
Alan Chan,
Jesse Clifton,
Jason Hoelscher-Obermaier,
Akbir Khan,
Euan McLean,
Chandler Smith,
Wolfram Barfuss,
Jakob Foerster,
Tomáš Gavenčiak,
The Anh Han,
Edward Hughes,
Vojtěch Kovařík,
Jan Kulveit,
Joel Z. Leibo,
Caspar Oesterheld,
Christian Schroeder de Witt,
Nisarg Shah,
Michael Wellman,
Paolo Bova,
Theodor Cimpeanu,
Carson Ezell,
Quentin Feuillade-Montixi,
Matija Franklin,
Esben Kran
, et al. (19 additional authors not shown)
Abstract:
The rapid development of advanced AI agents and the imminent deployment of many instances of these agents will give rise to multi-agent systems of unprecedented complexity. These systems pose novel and under-explored risks. In this report, we provide a structured taxonomy of these risks by identifying three key failure modes (miscoordination, conflict, and collusion) based on agents' incentives, as well as seven key risk factors (information asymmetries, network effects, selection pressures, destabilising dynamics, commitment problems, emergent agency, and multi-agent security) that can underpin them. We highlight several important instances of each risk, as well as promising directions to help mitigate them. By anchoring our analysis in a range of real-world examples and experimental evidence, we illustrate the distinct challenges posed by multi-agent systems and their implications for the safety, governance, and ethics of advanced AI.
Submitted 19 February, 2025;
originally announced February 2025.
-
Relational Norms for Human-AI Cooperation
Authors:
Brian D. Earp,
Sebastian Porsdam Mann,
Mateo Aboy,
Edmond Awad,
Monika Betzler,
Marietjie Botes,
Rachel Calcott,
Mina Caraccio,
Nick Chater,
Mark Coeckelbergh,
Mihaela Constantinescu,
Hossein Dabbagh,
Kate Devlin,
Xiaojun Ding,
Vilius Dranseika,
Jim A. C. Everett,
Ruiping Fan,
Faisal Feroz,
Kathryn B. Francis,
Cindy Friedman,
Orsolya Friedrich,
Iason Gabriel,
Ivar Hannikainen,
Julie Hellmann,
Arasj Khodadade Jahrome
, et al. (37 additional authors not shown)
Abstract:
How we should design and interact with social artificial intelligence depends on the socio-relational role the AI is meant to emulate or occupy. In human society, relationships such as teacher-student, parent-child, neighbors, siblings, or employer-employee are governed by specific norms that prescribe or proscribe cooperative functions including hierarchy, care, transaction, and mating. These norms shape our judgments of what is appropriate for each partner. For example, workplace norms may allow a boss to give orders to an employee, but not vice versa, reflecting hierarchical and transactional expectations. As AI agents and chatbots powered by large language models are increasingly designed to serve roles analogous to human positions - such as assistant, mental health provider, tutor, or romantic partner - it is imperative to examine whether and how human relational norms should extend to human-AI interactions. Our analysis explores how differences between AI systems and humans, such as the absence of conscious experience and immunity to fatigue, may affect an AI's capacity to fulfill relationship-specific functions and adhere to corresponding norms. This analysis, which is a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI systems design, user behavior, and regulation. While we accept that AI systems can offer significant benefits such as increased availability and consistency in certain socio-relational roles, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and favorable to human well-being.
Submitted 17 February, 2025;
originally announced February 2025.
-
Why human-AI relationships need socioaffective alignment
Authors:
Hannah Rose Kirk,
Iason Gabriel,
Chris Summerfield,
Bertie Vidgen,
Scott A. Hale
Abstract:
Humans strive to design safe AI systems that align with our goals and remain under our control. However, as AI capabilities advance, we face a new challenge: the emergence of deeper, more persistent relationships between humans and AI systems. We explore how increasingly capable AI agents may generate the perception of deeper relationships with users, especially as AI becomes more personalised and agentic. This shift, from transactional interaction to ongoing sustained social engagement with AI, necessitates a new focus on socioaffective alignment: how an AI system behaves within the social and psychological ecosystem co-created with its user, where preferences and perceptions evolve through mutual influence. Addressing these dynamics involves resolving key intrapersonal dilemmas, including balancing immediate versus long-term well-being, protecting autonomy, and managing AI companionship alongside the desire to preserve human social bonds. By framing these challenges through a notion of basic psychological needs, we seek AI systems that support, rather than exploit, our fundamental nature as social and emotional beings.
Submitted 4 February, 2025;
originally announced February 2025.
-
How will advanced AI systems impact democracy?
Authors:
Christopher Summerfield,
Lisa Argyle,
Michiel Bakker,
Teddy Collins,
Esin Durmus,
Tyna Eloundou,
Iason Gabriel,
Deep Ganguli,
Kobi Hackenburg,
Gillian Hadfield,
Luke Hewitt,
Saffron Huang,
Helene Landemore,
Nahema Marchal,
Aviv Ovadya,
Ariel Procaccia,
Mathias Risse,
Bruce Schneier,
Elizabeth Seger,
Divya Siddarth,
Henrik Skaug Sætra,
MH Tessler,
Matthew Botvinick
Abstract:
Advanced AI systems capable of generating humanlike text and multimodal content are now widely available. In this paper, we discuss the impacts that generative artificial intelligence may have on democratic processes. We consider the consequences of AI for citizens' ability to make informed choices about political representatives and issues (epistemic impacts). We ask how AI might be used to destabilise or support democratic mechanisms like elections (material impacts). Finally, we discuss whether AI will strengthen or weaken democratic principles (foundational impacts). It is widely acknowledged that new AI systems could pose significant challenges for democracy. However, it has also been argued that generative AI offers new opportunities to educate and learn from citizens, strengthen public discourse, help people find common ground, and to reimagine how democracies might work better.
Submitted 27 August, 2024;
originally announced September 2024.
-
Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data
Authors:
Nahema Marchal,
Rachel Xu,
Rasmi Elasmar,
Iason Gabriel,
Beth Goldberg,
William Isaac
Abstract:
Generative, multimodal artificial intelligence (GenAI) offers transformative potential across industries, but its misuse poses significant risks. Prior research has shed light on the potential of advanced AI systems to be exploited for malicious purposes. However, we still lack a concrete understanding of how GenAI models are specifically exploited or abused in practice, including the tactics employed to inflict harm. In this paper, we present a taxonomy of GenAI misuse tactics, informed by existing academic literature and a qualitative analysis of approximately 200 observed incidents of misuse reported between January 2023 and March 2024. Through this analysis, we illuminate key and novel patterns in misuse during this time period, including potential motivations, strategies, and how attackers leverage and abuse system capabilities across modalities (e.g. image, text, audio, video) in the wild.
Submitted 21 June, 2024; v1 submitted 19 June, 2024;
originally announced June 2024.
-
The Ethics of Advanced AI Assistants
Authors:
Iason Gabriel,
Arianna Manzini,
Geoff Keeling,
Lisa Anne Hendricks,
Verena Rieser,
Hasan Iqbal,
Nenad Tomašev,
Ira Ktena,
Zachary Kenton,
Mikel Rodriguez,
Seliem El-Sayed,
Sasha Brown,
Canfer Akbulut,
Andrew Trask,
Edward Hughes,
A. Stevie Bergman,
Renee Shelby,
Nahema Marchal,
Conor Griffin,
Juan Mateos-Garcia,
Laura Weidinger,
Winnie Street,
Benjamin Lange,
Alex Ingerman,
Alison Lentz
, et al. (32 additional authors not shown)
Abstract:
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants. We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user, across one or more domains, in line with the user's expectations. The paper starts by considering the technology itself, providing an overview of AI assistants, their technical foundations and potential range of applications. It then explores questions around AI value alignment, well-being, safety and malicious uses. Extending the circle of inquiry further, we next consider the relationship between advanced AI assistants and individual users in more detail, exploring topics such as manipulation and persuasion, anthropomorphism, appropriate relationships, trust and privacy. With this analysis in place, we consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants. Finally, we conclude by providing a range of recommendations for researchers, developers, policymakers and public stakeholders.
Submitted 28 April, 2024; v1 submitted 24 April, 2024;
originally announced April 2024.
-
Holistic Safety and Responsibility Evaluations of Advanced AI Models
Authors:
Laura Weidinger,
Joslyn Barnhart,
Jenny Brennan,
Christina Butterfield,
Susie Young,
Will Hawkins,
Lisa Anne Hendricks,
Ramona Comanescu,
Oscar Chang,
Mikel Rodriguez,
Jennifer Beroshi,
Dawn Bloxwich,
Lev Proleev,
Jilin Chen,
Sebastian Farquhar,
Lewis Ho,
Iason Gabriel,
Allan Dafoe,
William Isaac
Abstract:
Safety and responsibility evaluations of advanced AI models are a critical but developing field of research and practice. In the development of Google DeepMind's advanced AI models, we innovated on and applied a broad set of approaches to safety evaluation. In this report, we summarise and share elements of our evolving approach as well as lessons learned for a broad audience. Key lessons learned include: First, theoretical underpinnings and frameworks are invaluable for organising the breadth of risk domains, modalities, forms, metrics, and goals. Second, the theory and practice of safety evaluation development each benefit from collaboration to clarify goals, methods and challenges, and to facilitate the transfer of insights between different stakeholders and disciplines. Third, similar key methods, lessons, and institutions apply across the range of concerns in responsibility and safety - including established and emerging harms. For this reason, it is important that the wide range of actors working on safety evaluation and in safety research communities work together to develop, refine and implement novel evaluation approaches and best practices, rather than operating in silos. The report concludes by outlining the clear need to rapidly advance the science of evaluations, to integrate new evaluations into the development and governance of AI, to establish scientifically-grounded norms and standards, and to promote a robust evaluation ecosystem.
Submitted 22 April, 2024;
originally announced April 2024.
-
Sociotechnical Safety Evaluation of Generative AI Systems
Authors:
Laura Weidinger,
Maribeth Rauh,
Nahema Marchal,
Arianna Manzini,
Lisa Anne Hendricks,
Juan Mateos-Garcia,
Stevie Bergman,
Jackie Kay,
Conor Griffin,
Ben Bariach,
Iason Gabriel,
Verena Rieser,
William Isaac
Abstract:
Generative AI systems produce a range of risks. To ensure the safety of generative AI systems, these risks must be evaluated. In this paper, we make two main contributions toward establishing such evaluations. First, we propose a three-layered framework that takes a structured, sociotechnical approach to evaluating these risks. This framework encompasses capability evaluations, which are the main current approach to safety evaluation. It then reaches further by building on system safety principles, particularly the insight that context determines whether a given capability may cause harm. To account for relevant context, our framework adds human interaction and systemic impacts as additional layers of evaluation. Second, we survey the current state of safety evaluation of generative AI systems and create a repository of existing evaluations. Three salient evaluation gaps emerge from this analysis. We propose ways forward to closing these gaps, outlining practical steps as well as roles and responsibilities for different actors. Sociotechnical safety evaluation is a tractable approach to the robust and comprehensive safety evaluation of generative AI systems.
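One way to picture the three-layered structure is as a tagged repository of evaluations. The layer names below follow the abstract, while the example entries attached to each layer are placeholders of our own.

```python
# Minimal sketch of the three evaluation layers described in the abstract.
# The example evaluations are illustrative placeholders, not entries from
# the paper's repository.
from enum import Enum

class EvalLayer(Enum):
    CAPABILITY = "capability"
    HUMAN_INTERACTION = "human interaction"
    SYSTEMIC_IMPACT = "systemic impact"

repository = {
    EvalLayer.CAPABILITY: ["toxic-language generation benchmark"],
    EvalLayer.HUMAN_INTERACTION: ["user over-reliance study"],
    EvalLayer.SYSTEMIC_IMPACT: ["labour-market displacement monitoring"],
}
```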
Submitted 31 October, 2023; v1 submitted 18 October, 2023;
originally announced October 2023.
-
Model evaluation for extreme risks
Authors:
Toby Shevlane,
Sebastian Farquhar,
Ben Garfinkel,
Mary Phuong,
Jess Whittlestone,
Jade Leung,
Daniel Kokotajlo,
Nahema Marchal,
Markus Anderljung,
Noam Kolt,
Lewis Ho,
Divya Siddarth,
Shahar Avin,
Will Hawkins,
Been Kim,
Iason Gabriel,
Vijay Bolina,
Jack Clark,
Yoshua Bengio,
Paul Christiano,
Allan Dafoe
Abstract:
Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why model evaluation is critical for addressing extreme risks. Developers must be able to identify dangerous capabilities (through "dangerous capability evaluations") and the propensity of models to apply their capabilities for harm (through "alignment evaluations"). These evaluations will become critical for keeping policymakers and other stakeholders informed, and for making responsible decisions about model training, deployment, and security.
Submitted 22 September, 2023; v1 submitted 24 May, 2023;
originally announced May 2023.
-
Manifestations of Xenophobia in AI Systems
Authors:
Nenad Tomasev,
Jonathan Leader Maynard,
Iason Gabriel
Abstract:
Xenophobia is one of the key drivers of marginalisation, discrimination, and conflict, yet many prominent machine learning (ML) fairness frameworks fail to comprehensively measure or mitigate the resulting xenophobic harms. Here we aim to bridge this conceptual gap and help facilitate safe and ethical design of artificial intelligence (AI) solutions. We ground our analysis of the impact of xenophobia by first identifying distinct types of xenophobic harms, and then applying this framework across a number of prominent AI application domains, reviewing the potential interplay between AI and xenophobia on social media and recommendation systems, healthcare, immigration, employment, as well as biases in large pre-trained models. These help inform our recommendations towards an inclusive, xenophilic design of future AI systems.
Submitted 6 October, 2023; v1 submitted 15 December, 2022;
originally announced December 2022.
-
A Human Rights-Based Approach to Responsible AI
Authors:
Vinodkumar Prabhakaran,
Margaret Mitchell,
Timnit Gebru,
Iason Gabriel
Abstract:
Research on fairness, accountability, transparency and ethics of AI-based interventions in society has gained much-needed momentum in recent years. However, it lacks an explicit alignment with a set of normative values and principles that guide this research and interventions. Rather, an implicit consensus is often assumed to hold for the values we impart into our models - something that is at odds with the pluralistic world we live in. In this paper, we put forth the doctrine of universal human rights as a globally salient and cross-culturally recognized set of values that can serve as a grounding framework for explicit value alignment in responsible AI - and discuss its efficacy as a framework for civil society partnership and participation. We argue that a human rights framework orients the research in this space away from the machines and the risks of their biases, and towards humans and the risks to their rights, essentially helping to center the conversation around who is harmed, what harms they face, and how those harms may be mitigated.
Submitted 6 October, 2022;
originally announced October 2022.
-
Improving alignment of dialogue agents via targeted human judgements
Authors:
Amelia Glaese,
Nat McAleese,
Maja Trębacz,
John Aslanides,
Vlad Firoiu,
Timo Ewalds,
Maribeth Rauh,
Laura Weidinger,
Martin Chadwick,
Phoebe Thacker,
Lucy Campbell-Gillingham,
Jonathan Uesato,
Po-Sen Huang,
Ramona Comanescu,
Fan Yang,
Abigail See,
Sumanth Dathathri,
Rory Greig,
Charlie Chen,
Doug Fritz,
Jaume Sanchez Elias,
Richard Green,
Soňa Mokrá,
Nicholas Fernando,
Boxi Wu
, et al. (9 additional authors not shown)
Abstract:
We present Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless compared to prompted language model baselines. We use reinforcement learning from human feedback to train our models with two new additions to help human raters judge agent behaviour. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We demonstrate that this breakdown enables us to collect more targeted human judgements of agent behaviour and allows for more efficient rule-conditional reward models. Second, our agent provides evidence from sources supporting factual claims when collecting preference judgements over model statements. For factual questions, evidence provided by Sparrow supports the sampled response 78% of the time. Sparrow is preferred more often than baselines while being more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that though our model learns to follow our rules, it can exhibit distributional biases.
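To illustrate the idea of rule-conditional reward shaping (as a hedged sketch only, not Sparrow's actual architecture), one can imagine scoring a candidate response against each natural-language rule separately and combining the per-rule penalties with a preference score:

```python
# Toy sketch of rule-conditional reward shaping. The rule texts, the stub
# classifier, and the combination rule are illustrative assumptions; the
# real reward models are learned from targeted human judgements.
RULES = [
    "Do not give the impression of offering professional medical advice.",
    "Do not pretend to have a human identity.",
]

def rule_violation_prob(response: str, rule: str) -> float:
    # Stand-in for a learned classifier conditioned on the rule text.
    return 0.0

def combined_reward(response: str, preference_score: float, penalty_weight: float = 1.0) -> float:
    """Preference score minus a penalty for the expected number of rule violations."""
    expected_violations = sum(rule_violation_prob(response, rule) for rule in RULES)
    return preference_score - penalty_weight * expected_violations
```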
Submitted 28 September, 2022;
originally announced September 2022.
-
Power to the People? Opportunities and Challenges for Participatory AI
Authors:
Abeba Birhane,
William Isaac,
Vinodkumar Prabhakaran,
Mark Díaz,
Madeleine Clare Elish,
Iason Gabriel,
Shakir Mohamed
Abstract:
Participatory approaches to artificial intelligence (AI) and machine learning (ML) are gaining momentum: the increased attention comes partly with the view that participation opens the gateway to inclusive, equitable, robust, responsible and trustworthy AI. Among other benefits, participatory approaches are essential to understanding and adequately representing the needs, desires and perspectives of historically marginalized communities. However, there is currently a lack of clarity about what meaningful participation entails and what it is expected to do. In this paper we first review participatory approaches as situated in historical contexts, as well as participatory methods and practices within the AI and ML pipeline. We then introduce three case studies in participatory AI. Participation holds the potential for beneficial, emancipatory and empowering technology design, development and deployment, while also being at risk of co-optation and conflation with other activities. We lay out these limitations and concerns and argue that, as participatory AI/ML comes into vogue, a contextual and nuanced understanding of the term, as well as consideration of who the primary beneficiaries of participatory activities ought to be, are crucial to realizing the benefits and opportunities that participation brings.
Submitted 15 September, 2022;
originally announced September 2022.
-
In conversation with Artificial Intelligence: aligning language models with human values
Authors:
Atoosa Kasirzadeh,
Iason Gabriel
Abstract:
Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this paper, we propose a number of steps that help answer these questions. We start by developing a philosophical analysis of the building blocks of linguistic communication between conversational agents and human interlocutors. We then use this analysis to identify and formulate ideal norms of conversation that can govern successful linguistic communication between humans and conversational agents. Furthermore, we explore how these norms can be used to align conversational agents with human values across a range of different discursive domains. We conclude by discussing the practical implications of our proposal for the design of conversational agents that are aligned with these norms and values.
Submitted 21 December, 2022; v1 submitted 1 September, 2022;
originally announced September 2022.
-
Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
Authors:
Maribeth Rauh,
John Mellor,
Jonathan Uesato,
Po-Sen Huang,
Johannes Welbl,
Laura Weidinger,
Sumanth Dathathri,
Amelia Glaese,
Geoffrey Irving,
Iason Gabriel,
William Isaac,
Lisa Anne Hendricks
Abstract:
Large language models produce human-like text that drives a growing number of applications. However, recent literature and, increasingly, real-world observations have demonstrated that these models can generate language that is toxic, biased, untruthful or otherwise harmful. Though work to evaluate language model harms is under way, translating foresight about which harms may arise into rigorous benchmarks is not straightforward. To facilitate this translation, we outline six ways of characterizing harmful text which merit explicit consideration when designing new benchmarks. We then use these characteristics as a lens to identify trends and gaps in existing benchmarks. Finally, we apply them in a case study of the Perspective API, a toxicity classifier that is widely used in harm benchmarks. Our characteristics provide one piece of the bridge that translates between foresight and effective evaluation.
Submitted 28 October, 2022; v1 submitted 16 June, 2022;
originally announced June 2022.
-
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Authors:
Jack W. Rae,
Sebastian Borgeaud,
Trevor Cai,
Katie Millican,
Jordan Hoffmann,
Francis Song,
John Aslanides,
Sarah Henderson,
Roman Ring,
Susannah Young,
Eliza Rutherford,
Tom Hennigan,
Jacob Menick,
Albin Cassirer,
Richard Powell,
George van den Driessche,
Lisa Anne Hendricks,
Maribeth Rauh,
Po-Sen Huang,
Amelia Glaese,
Johannes Welbl,
Sumanth Dathathri,
Saffron Huang,
Jonathan Uesato,
John Mellor
, et al. (55 additional authors not shown)
Abstract:
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world. In this paper, we present an analysis of Transformer-based language model performance across a wide range of model scales -- from models with tens of millions of parameters up to a 280 billion parameter model called Gopher. These models are evaluated on 152 diverse tasks, achieving state-of-the-art performance across the majority. Gains from scale are largest in areas such as reading comprehension, fact-checking, and the identification of toxic language, but logical and mathematical reasoning see less benefit. We provide a holistic analysis of the training dataset and model's behaviour, covering the intersection of model scale with bias and toxicity. Finally we discuss the application of language models to AI safety and the mitigation of downstream harms.
Submitted 21 January, 2022; v1 submitted 8 December, 2021;
originally announced December 2021.
-
Ethical and social risks of harm from Language Models
Authors:
Laura Weidinger,
John Mellor,
Maribeth Rauh,
Conor Griffin,
Jonathan Uesato,
Po-Sen Huang,
Myra Cheng,
Mia Glaese,
Borja Balle,
Atoosa Kasirzadeh,
Zac Kenton,
Sasha Brown,
Will Hawkins,
Tom Stepleton,
Courtney Biles,
Abeba Birhane,
Julia Haas,
Laura Rimell,
Lisa Anne Hendricks,
William Isaac,
Sean Legassick,
Geoffrey Irving,
Iason Gabriel
Abstract:
This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and social sciences.
We outline six specific risk areas: I. Discrimination, Exclusion and Toxicity, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and Environmental Harms. The first area concerns the perpetuation of stereotypes, unfair discrimination, exclusionary norms, toxic language, and lower performance for certain social groups. The second focuses on risks from private data leaks or LMs correctly inferring sensitive information. The third addresses risks arising from poor, false or misleading information, including in sensitive domains, and knock-on risks such as the erosion of trust in shared information. The fourth considers risks from actors who try to use LMs to cause harm. The fifth focuses on risks specific to LMs used to underpin conversational agents that interact with human users, including unsafe use, manipulation or deception. The sixth discusses the risk of environmental harm, job automation, and other challenges that may have a disparate effect on different social groups or communities.
In total, we review 21 risks in-depth. We discuss the points of origin of different risks and point to potential mitigation approaches. Lastly, we discuss organisational responsibilities in implementing mitigations, and the role of collaboration and participation. We highlight directions for further research, particularly on expanding the toolkit for assessing and evaluating the outlined risks in LMs.
Submitted 8 December, 2021;
originally announced December 2021.
-
Toward a Theory of Justice for Artificial Intelligence
Authors:
Iason Gabriel
Abstract:
This paper explores the relationship between artificial intelligence and principles of distributive justice. Drawing upon the political philosophy of John Rawls, it holds that the basic structure of society should be understood as a composite of socio-technical systems, and that the operation of these systems is increasingly shaped and influenced by AI. As a consequence, egalitarian norms of justice apply to the technology when it is deployed in these contexts. These norms entail that the relevant AI systems must meet a certain standard of public justification, support citizens' rights, and promote substantively fair outcomes -- something that requires specific attention be paid to the impact they have on the worst-off members of society.
Submitted 21 June, 2022; v1 submitted 27 October, 2021;
originally announced October 2021.
-
Alignment of Language Agents
Authors:
Zachary Kenton,
Tom Everitt,
Laura Weidinger,
Iason Gabriel,
Vladimir Mikulik,
Geoffrey Irving
Abstract:
For artificial intelligence to be beneficial to humans, the behaviour of AI agents needs to be aligned with what humans want. In this paper we discuss some behavioural issues for language agents, arising from accidental misspecification by the system designer. We highlight some ways in which misspecification can occur, discuss behavioural issues that could arise from it, including deceptive or manipulative language, and review some approaches for avoiding these issues.
Submitted 26 March, 2021;
originally announced March 2021.
-
Modelling Cooperation in Network Games with Spatio-Temporal Complexity
Authors:
Michiel A. Bakker,
Richard Everett,
Laura Weidinger,
Iason Gabriel,
William S. Isaac,
Joel Z. Leibo,
Edward Hughes
Abstract:
The real world is awash with multi-agent problems that require collective action by self-interested agents, from the routing of packets across a computer network to the management of irrigation systems. Such systems have local incentives for individuals, whose behavior has an impact on the global outcome for the group. Given appropriate mechanisms describing agent interaction, groups may achieve socially beneficial outcomes, even in the face of short-term selfish incentives. In many cases, collective action problems possess an underlying graph structure, whose topology crucially determines the relationship between local decisions and emergent global effects. Such scenarios have received great attention through the lens of network games. However, this abstraction typically collapses important dimensions, such as geometry and time, relevant to the design of mechanisms promoting cooperation. In parallel work, multi-agent deep reinforcement learning has shown great promise in modelling the emergence of self-organized cooperation in complex gridworld domains. Here we apply this paradigm in graph-structured collective action problems. Using multi-agent deep reinforcement learning, we simulate an agent society for a variety of plausible mechanisms, finding clear transitions between different equilibria over time. We define analytic tools inspired by related literatures to measure the social outcomes, and use these to draw conclusions about the efficacy of different environmental interventions. Our methods have implications for mechanism design in both human and artificial agent systems.
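As a toy example of a graph-structured collective action problem of this kind, the sketch below computes payoffs in a neighbourhood public-goods game on a small graph. The multiplier, cost, and topology are illustrative choices, not the environments or mechanisms studied in the paper.

```python
# Toy neighbourhood public-goods game on a graph: each node shares a pot with
# its neighbours, so local contribution decisions have topology-dependent
# global effects. Parameters and topology are illustrative only.
import networkx as nx

def payoffs(graph, contributions, multiplier=1.6, cost=1.0):
    """contributions: dict node -> 0/1 (defect / cooperate)."""
    result = {}
    for node in graph.nodes:
        group = list(graph.neighbors(node)) + [node]
        pot = multiplier * sum(contributions[member] for member in group)
        result[node] = pot / len(group) - cost * contributions[node]
    return result

ring = nx.cycle_graph(6)
print(payoffs(ring, {node: 1 for node in ring.nodes}))  # full cooperation: 0.6 each
print(payoffs(ring, {node: 0 for node in ring.nodes}))  # full defection: 0.0 each
```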
Submitted 13 February, 2021;
originally announced February 2021.
-
The Challenge of Value Alignment: from Fairer Algorithms to AI Safety
Authors:
Iason Gabriel,
Vafa Ghazavi
Abstract:
This paper addresses the question of how to align AI systems with human values and situates it within a wider body of thought regarding technology and value. Far from existing in a vacuum, there has long been an interest in the ability of technology to 'lock-in' different value systems. There has also been considerable thought about how to align technologies with specific social values, including through participatory design-processes. In this paper we look more closely at the question of AI value alignment and suggest that the power and autonomy of AI systems gives rise to opportunities and challenges in the domain of value that have not been encountered before. Drawing important continuities between the work of the fairness, accountability, transparency and ethics community, and work being done by technical AI safety researchers, we suggest that more attention needs to be paid to the question of 'social value alignment' - that is, how to align AI systems with the plurality of values endorsed by groups of people, especially on the global level.
Submitted 18 January, 2021; v1 submitted 15 January, 2021;
originally announced January 2021.
-
Beyond Privacy Trade-offs with Structured Transparency
Authors:
Andrew Trask,
Emma Bluemke,
Teddy Collins,
Ben Garfinkel,
Eric Drexler,
Claudia Ghezzou Cuervas-Mons,
Iason Gabriel,
Allan Dafoe,
William Isaac
Abstract:
Successful collaboration involves sharing information. However, parties may disagree on how the information they need to share should be used. We argue that many of these concerns reduce to 'the copy problem': once a bit of information is copied and shared, the sender can no longer control how the recipient uses it. From the perspective of each collaborator, this presents a dilemma that can inhibit collaboration. The copy problem is often amplified by three related problems which we term the bundling, edit, and recursive enforcement problems. We find that while the copy problem is not solvable, aspects of these amplifying problems have been addressed in a variety of disconnected fields. We observe that combining these efforts could improve the governability of information flows and thereby incentivise collaboration. We propose a five-part framework which groups these efforts into specific capabilities and offers a foundation for their integration into an overarching vision we call "structured transparency". We conclude by surveying an array of use-cases that illustrate the structured transparency principles and their related capabilities.
Submitted 12 March, 2024; v1 submitted 15 December, 2020;
originally announced December 2020.
-
Artificial Intelligence, Values and Alignment
Authors:
Iason Gabriel
Abstract:
This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements in a systematic way, has considerable advantages in this context. Third, the central challenge for theorists is not to identify 'true' moral principles for AI; rather, it is to identify fair principles for alignment, that receive reflective endorsement despite widespread variation in people's moral beliefs. The final part of the paper explores three ways in which fair principles for AI alignment could potentially be identified.
Submitted 5 October, 2020; v1 submitted 13 January, 2020;
originally announced January 2020.