CAIS Engineering Docs
The Center for AI Safety is a research organization dedicated to studying the risks associated with advanced artificial intelligence (AI) systems and developing methods to ensure their safe and ethical use. We work with researchers, policymakers, and industry leaders to develop and promote best practices in AI safety and governance.
Getting Started
To get started with the Center for AI Safety, please refer to our about page for more information on our mission, values, and research projects. You can also learn more about our team and collaborators on our people page.
Documentation
This documentation website provides resources and information on a range of topics related to AI safety, including:
- Risk assessment and management for AI systems
- Ethical considerations in AI development and deployment
- Governance frameworks for AI policy and regulation
- Technical approaches to ensuring AI safety and reliability
Contributing
We welcome contributions from researchers, industry professionals, policymakers, and other stakeholders to advance our mission of promoting AI safety and governance. If you would like to contribute, please refer to our contributing guidelines for more information on how to get involved.
OCI draw.io Libraries
We use the Oracle Cloud Infrastructure (OCI) stencil libraries for our diagrams; you can download them here.
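If you haven't loaded a custom stencil library before, one common approach (a sketch based on draw.io's general library-loading features, not something these docs specify) is to open it via File > Open Library From > URL in the editor, or to pass it to the web app with the `clibs` URL parameter, using a `U` prefix and a URL-encoded address:

```
https://app.diagrams.net/?clibs=Uhttps%3A%2F%2Fexample.com%2Foci-stencils.xml
```

The `example.com` address above is a placeholder; substitute the actual URL of the downloaded OCI library XML file, or open the file directly from your device instead.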