Building an early warning system for LLM-aided biological threat creation



● OpenAI is investing in the development of improved evaluation methods for AI-enabled safety risks, specifically focusing on biological risk.
● The potential for harmful uses of AI systems in creating biological threats has been highlighted by researchers and policymakers.
● OpenAI conducted a study to measure whether access to GPT-4 could increase malicious actors’ access to dangerous information about biological threat creation.
● The study found mild uplifts in the accuracy and completeness of answers from participants with access to the language model, but the effect sizes were not statistically significant.
● Information access alone is not sufficient to create a biological threat, and more research is needed to determine what constitutes a meaningful increase in risk.

Source: link
