George Sharkov, Christina Todorova, Pavel Varbanov. "Strategies, Policies, and Standards in the EU Towards a Roadmap for Robust and Trustworthy AI Certification." Vol. 50 (2021), pp. 11-22.
Keywords: artificial intelligence, certification, ethics, governance, lawfulness, robust AI, security, trustworthy AI.
Abstract:
In recent years, governments of the EU member states have made increasing efforts to manage the scope and speed of the socio-technical transformations driven by rapid advances in Artificial Intelligence (AI). With the expanding deployment of AI in autonomous transportation, healthcare, defense, and surveillance, the topic of ethical and secure AI is coming to the forefront. However, even against the backdrop of a growing body of technical advancement and knowledge, the governance of AI-intensive technologies remains a work in progress, facing numerous challenges in balancing the ethical, legal, and societal aspects of AI technologies against investment, financial, and technological considerations. Guaranteeing and providing access to reliable AI is a prerequisite for the proper development of the sector. One way to approach this challenge is through governance and certification. This article discusses initiatives supporting a better understanding of the magnitude and depth of AI adoption. Given the numerous ethical concerns posed by unstandardized AI, it further explains why the certification and governance of AI are a milestone for the reliability and competitiveness of technological solutions.