Trustworthy AI refers to the development of Artificial Intelligence that operates ethically and responsibly, with a focus on the safety of people and the protection of their data. It is intended to help build trust in AI and its applications.
Developing AI according to trustworthy design principles, or deploying trustworthy AI, pays off in two ways. First, it ensures that the legal requirements soon to be placed on AI solutions are met, making investments in AI technology future-proof. Second, it is the only way to gain the trust of users, which will be a key factor for market success.
Support on the path to trustworthy AI
- Legal and technical advice on trustworthy AI
- AI self-assessment tools for an initial risk analysis in companies (a minimal sketch follows after this list)
- Supporting companies in research projects
- Development of metrics and certification criteria
- Training courses and lectures
- Technological support
- Certifications and seals of approval
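As a purely illustrative sketch of how such a self-assessment might score risk: the questions, weights, and thresholds below are assumptions made for demonstration, not Know Center's actual tool or criteria.

```python
# Hypothetical sketch of an AI risk self-assessment. The questions, weights,
# and thresholds are illustrative assumptions only, not an official tool.

QUESTIONS = {
    "Does the system process personal data?": 3,
    "Can decisions significantly affect individuals (e.g. credit, hiring)?": 4,
    "Is the model a black box without explanations for its outputs?": 2,
    "Is there no human review of automated decisions?": 3,
}

def risk_score(answers: dict) -> int:
    """Sum the weights of all questions answered 'yes'."""
    return sum(weight for q, weight in QUESTIONS.items() if answers.get(q))

def risk_category(score: int) -> str:
    """Map a raw score to a coarse risk band (thresholds are assumptions)."""
    if score >= 8:
        return "high risk: in-depth legal and technical review advised"
    if score >= 4:
        return "medium risk: targeted mitigation measures advised"
    return "low risk: document the assessment and monitor changes"

if __name__ == "__main__":
    answers = {q: True for q in QUESTIONS}  # worst case: 'yes' to everything
    score = risk_score(answers)
    print(f"score={score} -> {risk_category(score)}")
```

In practice, a questionnaire like this only serves as a first triage; higher-risk results would be followed by the in-depth legal and technical advice listed above.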
Know Center 360° Model
The Know Center 360° model was created based on the core requirements set out by the High-Level Expert Group on AI. Our interdisciplinary research addresses robustness, security, data governance, transparency, diversity, non-discrimination, fairness, and human oversight over the long term.
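To make one of these requirements concrete: fairness is commonly quantified with statistical metrics. The sketch below computes the demographic parity difference, a standard measure of outcome disparity between groups; it is a generic illustration, not a metric defined by the Know Center 360° model.

```python
# Demographic parity difference: a standard fairness metric, shown here as a
# generic illustration (not a metric from the Know Center 360° model).

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups.

    predictions: iterable of 0/1 model outputs
    groups:      iterable of group labels (e.g. "A"/"B"), same length
    Assumes exactly two groups; a value of 0 means both groups receive
    positive outcomes at the same rate.
    """
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    a, b = rates.values()
    return abs(a - b)

# Example: group "A" gets positives at 3/4, group "B" at 1/4 -> gap of 0.5
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```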
The High-Level Expert Group was set up by the European Commission to advise on its strategy for artificial intelligence, and it has put forward ethics guidelines to steer AI towards sustainability, growth, competitiveness, and inclusion. These ethics guidelines serve as a blueprint for the AI Act, which is currently being drafted.