Open Trustworthy AI —
Trust Is the Ultimate Form of Intelligence
Embodied AI
Introducing VLABench
VLABench is an open-source benchmark for evaluating Vision-Language-Action models, featuring 100 real-world tasks with natural language instructions. Designed to assess both action and language capabilities, it supports the development of more robust AI systems. Join us in advancing trustworthy Embodied AI research through this community-driven initiative.
Mar 1, 2025
#Survey
Releasing Large Model Safety Survey
Our latest survey, "Safety at Scale: A Comprehensive Survey of Large Model Safety," systematically analyzes the safety threats facing today's large AI models, covering vision foundation models (VFMs), LLMs, vision-language pretraining (VLP) models, VLMs, and text-to-image (T2I) diffusion models. Our findings highlight the current landscape of AI safety research and the urgent need for robust safety measures and collaborative efforts to ensure trustworthy AI development.
Feb 13, 2025
#Vision
Introducing the VisionSafety Platform
The safety of vision models is critical to trustworthy AI. We are proud to launch the VisionSafety Platform, a cutting-edge initiative that rigorously evaluates model robustness using highly transferable adversarial attacks and million-scale adversarial datasets. This platform represents a major step forward in securing vision-based AI systems against emerging threats.
Dec 24, 2024
Advancing Trustworthy AI Through Open Collaboration

OpenTAI is an open platform where researchers collaborate to accelerate practical Trustworthy AI solutions. We prioritize tools, benchmarks, and platforms over papers, bridging research and real-world impact.

Research
Benchmarks
Datasets
Tools
Our Collaborators' Institutions