How can laws and regulations be adapted to the effects of AI applications? Norway's Data Protection Authority has established a regulatory sandbox that provides a safe environment for testing AI applications and determining whether they adhere to the laws and frameworks governing data protection in the country. The project team provides guidance to carefully selected companies, assessing their compliance with regulatory standards and identifying any challenges these applications may face before they are released to the public.
AI applications pose particular challenges to regulatory bodies, especially with regard to compliance with ethical requirements, transparency, and the protection of user privacy. Given that AI has become an integral part of everyday life, AI developers need to test their solutions in a controlled and monitored environment that ensures compliance with data protection laws and safeguards privacy before they are released to the public. Regulatory sandboxes offer a solution by striking a balance between limiting harmful uses of technology and keeping pace with technological progress. They also allow regulators to sharpen their practical skills and clarify how governing laws are applied in the technology sector.
The AI regulatory sandbox launched by Norway aims to fulfill these purposes, chiefly by facilitating compliance with the provisions of the EU General Data Protection Regulation (GDPR): experiments are run on AI applications and the results are shared with organizations working with AI.
To this end, the Norwegian Data Protection Authority (Datatilsynet) has established a regulatory sandbox that provides free guidance to selected public and private organizations of varying types and sizes across different fields, encouraging the development of solutions that are ethical and responsible from a privacy and data protection perspective.
The sandbox operates according to key principles that set a regulatory framework for the responsible use of AI: compliance with all applicable laws and regulations, respect for ethical values and principles, and transparency and clarity. It also relies on carefully designed technical solutions to prevent security breaches and misuse.
The companies participating in the regulatory sandbox were selected according to a set of criteria. First, a company's project must revolve around AI, whether by developing new AI solutions, using existing ones, or establishing frameworks or policies that govern the use of AI. Second, the project must benefit individuals and society by providing products or services with a health or social benefit, or by proposing innovative data protection solutions. Third, the project should include a challenge directly related to privacy, so that the company can make full use of the guidance provided by the Data Protection Authority. Finally, the participating company must be under the supervision of the Norwegian Data Protection Authority, i.e. registered in Norway and subject to its data protection laws.
After joining the regulatory sandbox, companies are advised and mentored by a multidisciplinary DPA team to ensure that the products or services they provide comply with the relevant laws and respect data privacy. Over a period of three to six months, each participating organization works with the DPA to develop an individual plan outlining the guidance it needs and how that guidance will be provided. In this way, the DPA's contribution is tailored to the needs of each project.
Examples of these activities include carrying out data protection impact assessments, identifying the associated challenges, and providing feedback on legal and technical solutions to those challenges. Participants are also given the opportunity to attend orientation workshops organized by the DPA and tailored to the needs of each company.
To maintain transparency and share experiences with other entities that may benefit from the sandbox, the DPA publishes information periodically before compiling the program's experiences into a final report to be shared with others. To avoid disclosing information that would reveal business secrets, the DPA consults participants before sharing their experiences with external parties.
The regulatory sandbox is expected to benefit organizations by helping them better understand regulatory criteria and how their AI-based services and products can comply with data protection standards. The sandbox will also benefit the DPA itself, which will gain a comprehensive understanding of practical AI applications and thus strengthen its administrative and supervisory processes in all matters related to AI and privacy protection. Finally, the program will lay the foundation for AI services and solutions that customers and society at large can trust, given its emphasis on accountability, transparency, and the protection of users' fundamental rights.
References:
https://www.datatilsynet.no/en/regulations-and-tools/sandbox-for-artificial-intelligence/