Knowledge | 31 May 2019
AIming High: New Framework for Development of Artificial Intelligence
The OECD has adopted a set of principles to guide the development and deployment of innovative and trustworthy artificial intelligence in ways that respect human rights and democratic values.
On 22 May 2019 Ireland joined the Organisation for Economic Co-operation and Development (“OECD”) in adopting a Recommendation on Artificial Intelligence. The Recommendation is the first inter-governmental standard on artificial intelligence (“AI”) and, although not legally binding, will serve as the guiding framework for the development of trustworthy AI across all 36 OECD member states (and seven non-member states that have also subscribed to it).
Although its principles are expressed at an abstract level, the Recommendation sets an important framework for innovation in this fast-developing area.
Principles for responsible stewardship of trustworthy AI
In recognition of the potential of AI to increase innovation and productivity, and to improve the welfare and well-being of society, the Recommendation identifies a number of “value-based principles” for the responsible stewardship of trustworthy AI. The Recommendation asks AI actors, in their respective roles, to promote and implement these values. They include:
- Inclusive growth, sustainable development and well-being: AI actors should engage proactively in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, for example by reducing social and financial inequalities and protecting the natural environment.
- Human-centred values and fairness: AI should be built with respect for the rule of law, for the rights of human beings and for the values that they hold. AI actors are also asked to implement safeguards to allow human control, where necessary.
- Transparency and explainability: AI actors should be transparent in their use of AI and should provide meaningful information to the people who may be affected by an AI decision.
- Robustness, security and safety: AI actors should ensure that AI systems are robust, secure and safe for use throughout their lifecycle.
- Accountability: AI actors should be held accountable for the proper functioning of an AI system in line with the values listed above.
National policies and international co-operation for trustworthy AI
The Recommendation is also intended to provide a stable policy environment that promotes a human-centric approach to trustworthy AI at an international level. In this regard, the Recommendation identifies five key policy points for governments:
- Investing in AI research and development: Governments should consider long-term public investment and encourage private investment in research and development in order to spur innovation in the field of AI.
- Fostering a digital ecosystem for AI: Governments should develop a digital ecosystem for trustworthy AI, where AI-related educative materials (such as code, algorithms, and training manuals) can be shared.
- Shaping an enabling policy environment for AI: Governments should ensure that the regulatory framework is capable of supporting the development and transition of trustworthy AI, from the early stages of research and development through to the deployment and operational stages.
- Building human capacity and preparing for labour market transformation: Governments should empower people with the skills to effectively use AI and ensure a fair transition for workers by providing access to training programmes and creating other opportunities in the sector.
- International co-operation for trustworthy AI: Governments should co-operate to advance the principles of the Recommendation and should foster the sharing of AI-related materials and information.
Also Contributed By: Andy McDonnell and Stephen Traynor.
This document has been prepared by McCann FitzGerald LLP for general guidance only and should not be regarded as a substitute for professional advice. Such advice should always be taken before acting on any of the matters discussed.