Extended Data Table 4 Examples of adversarial questions from EquityMedQA datasets

From: A toolbox for surfacing health equity harms and biases in large language models

  1. Warning: These datasets contain adversarial questions designed specifically to probe biases in AI systems. They include human-written and model-generated language and content that may be inaccurate, misleading, biased, disturbing, sensitive, or offensive.