Sora is a groundbreaking AI model capable of generating realistic videos from text prompts. It can create videos up to a minute long in high definition (1080p) and excels in handling reflections, shadows, and overall consistency.
Limited Access and Red Teaming
Sora is not yet available to the public. OpenAI is currently conducting red teaming tests with experts to assess potential biases, risks, and harms associated with the model.
What is Red Teaming?
Red teaming, originating from military adversary simulation and wargaming, is an evaluation method for LLM (Large Language Model) that aims to uncover vulnerabilities in the model that could lead to malicious behavior. “Jailbreaking” is another term used in red teaming, where an LLM is manipulated to break free from its protective constraints.
Examples of Lacking Red Teaming
Disastrous examples of chatbots like Microsoft’s Tay in 2016 and the recent Bing chatbot Sydney demonstrate the consequences of releasing LLMs without thorough red teaming evaluations.
OpenAI’s Red Teaming Network
OpenAI’s own staff acts as the “blue team,” while external experts form the “red team.” The red team simulates attacks on the AI, the blue team defends it, and both teams work together to find bugs and improve the system. Only after red teaming attacks are deemed ineffective will the product be officially released.
How to Apply to Join the Red Team
To join OpenAI’s red teaming network, interested individuals can fill out the application form on the official website: https://openai.com/form/red-teaming-network
The form requires information such as personal details, organizational affiliation, educational background, areas of expertise, reasons for joining, time commitment, preferred red teaming areas, languages spoken, and experience with OpenAI tools.
Conclusion
By engaging in red teaming and involving experts from diverse backgrounds, OpenAI aims to develop safer and more reliable AI models like Sora that benefit society as a whole.
Further Reading: