In the face of enormous global challenges ranging from pandemics to climate change, artificial intelligence (AI), as a universal tool, can make an important contribution to economic and social sustainability. At the same time, the use of AI applications entails uncertainties and risks.
The European Commission recently presented its draft ‘Artificial Intelligence Act’, a regulation governing systems that use artificial intelligence. The fundamental idea of steering AI’s evolution in the right direction and building long-term trust is widely welcomed. At the same time, however, there is a risk that competitiveness will be lost and that the development of AI systems will become cumbersome and cost-intensive.
The regulation’s main focus is on AI systems in high-risk sectors such as education, human resource management, critical infrastructure and medicine. In the future, compliant AI systems must carry a CE marking, and providers will largely be responsible for the conformity assessment of AI systems in high-risk sectors. In certain particularly high-risk areas, the assessment must be carried out by an external party.
Providers must meet extensive compliance requirements, such as establishing risk and quality management, maintaining technical documentation and logs, and ensuring adequate accuracy, robustness and cybersecurity. While assessment is mandatory only for high-risk sectors, the regulation encourages providers of all other applications to undergo assessment as well.
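To make the record-keeping obligation more concrete, the minimal sketch below shows one way a provider might attach an audit log to a model’s predictions, capturing inputs, outputs, a model version and a timestamp for later review. This is purely a hypothetical illustration: the Act does not prescribe any particular format or API, and the `model` object with its `predict` method is an assumption for the example.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical illustration only: the AI Act requires logging for
# high-risk systems but does not prescribe a concrete format or API.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def logged_predict(model, features: dict) -> dict:
    """Run a prediction and append one audit record per call."""
    prediction = model.predict(features)  # assumed model interface
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": getattr(model, "version", "unknown"),
        "input": features,       # assumed to be JSON-serializable
        "output": prediction,
    }
    logging.info(json.dumps(record))
    return prediction
```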
Users (except in cases of purely private use) are obliged to monitor AI systems, to notify providers of any risks or malfunctions, and to suspend system operation if necessary.
Overall, the Artificial Intelligence Act offers a sound starting point for setting global standards and advancing a secure and transparent digital internal market in the EU. Avoiding a “patchwork” of national regulations is one of the regulation’s main objectives. However, extensive obligations could create uncertainty, overburden companies, and reduce or even prevent the added value of AI. Tangible support and funding programmes are crucial to avoid slowing down the innovative power of companies and the commercialization of research results.
We at Know-Center, one of Europe’s leading research centers for self-explanatory, privacy-respecting AI, offer practical guidance and support services for companies. The EU’s planned AI sandbox programs, which will give priority to SMEs and startups, are also worth mentioning in this context: AI systems are tested and trained in a regulatory “sandbox” (an isolated environment sealed off from its surroundings) for a limited period of time under the supervision of national authorities.