Are you interested in how we can design AI responsibly and safely? AI Safety Aachen e.V. offers you the ideal platform to get involved and grow in this important field.
We work to make AI systems safer and to promote their responsible development, combining technical expertise with political engagement.
1-on-1 career counseling for your path into AI safety research and practice.
Talks by experts such as Jan Kirchner (then at OpenAI), Clark Barrett (Stanford University), and other leading figures from industry and academia.
Our highlight: the EU AI Safety Forum, with guests from Stanford University, the European Parliament, the EU AI Office, the TÜV AI Lab, DFKI, the Dutch government, and more.
Collaborate with key stakeholders such as leading politicians (AI Act), Mary Phuong (Google DeepMind), and Duncan Eddy (Stanford Center for AI Safety). Our network also includes alumni such as Tom Lieberum (DeepMind AI Safety Team), Lennart Heim (RAND Washington), and Rafael Albert (Jane Street London).
Gain insights into AI safety over a drink and have engaging conversations with other students interested in the field.
Work with leading experts from the Stanford Center for AI Safety to give politicians a tangible understanding of the technical context behind AI. We build interactive apps that let people "touch" the results of the most important AI papers. You'll develop technical skills along the way and learn how to sell them.
We help you find the right project and support our teams throughout execution. Gain practical technical and non-technical skills in the Catalyst Program while working in a team, an ideal way to build your software development skills.