Many sectors are witnessing a rapid expansion of AI technologies and algorithms. These technologies are now used in fields ranging from the processing of immigration applications and predictive policing to the pricing of goods and services sold online.
Why, then, are these algorithm-based decisions so controversial? They often lack transparency, making it difficult to assess their compatibility with human rights law, labor law, and other governance frameworks. Digital companies, for example, increasingly use algorithms to set prices for individual users based on inputs such as postcode and browsing history, a practice many find concerning.
Decision-making algorithms have been repeatedly criticized and challenged because they can reinforce discrimination and exclude marginalized groups, such as people with disabilities. This is of particular concern because the decisions they inform can negatively affect individuals' lives, especially the lives of those already marginalized in society.
Artificial intelligence and algorithms tend to amplify social biases around the world. The clearest reason is that these algorithms are often "trained" on historical data, that is, data reflecting pre-existing bias. Examples of algorithmic bias are mounting. In Florida, for example, artificial intelligence has been used to predict the risk level of prisoners, meaning their likelihood of reoffending; the system classified dark-skinned prisoners as future offenders at nearly twice the rate of light-skinned prisoners. Other predictive policing systems have marginalized vulnerable groups by concentrating police patrols in low-income or predominantly non-white neighborhoods, out of proportion to patrols in other neighborhoods. A 2018 Citizen Lab report documented automated decision-making in Canada's immigration and refugee system, highlighting how the complexity of many immigration and refugee cases raises the stakes of using artificial intelligence on decisions affecting such vulnerable groups.
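The mechanism described above can be illustrated with a toy simulation. In this entirely hypothetical sketch, two groups have the same true rate of an underlying behavior, but one group has historically been policed more heavily, so more of its behavior ends up recorded. A naive "risk model" that simply learns each group's recorded rate then reproduces the enforcement bias as a prediction. All names, rates, and the model itself are invented for illustration:

```python
# Toy illustration (hypothetical data): a model trained on historically
# biased records reproduces that bias in its predictions.
import random

random.seed(0)

def make_record(group):
    """Simulate one hypothetical historical record: (group, recorded_offense)."""
    base_behavior = random.random() < 0.3          # same true rate for both groups
    policing_rate = 0.9 if group == "B" else 0.45  # group B is policed more heavily
    recorded = base_behavior and random.random() < policing_rate
    return group, int(recorded)

# 5,000 hypothetical records per group.
records = [make_record(g) for g in ("A", "B") * 5000]

def recorded_rate(group):
    """A naive 'risk model': just the recorded offense rate for the group."""
    rows = [r for g, r in records if g == group]
    return sum(rows) / len(rows)

rate_a, rate_b = recorded_rate("A"), recorded_rate("B")
print(f"Learned risk for group A: {rate_a:.2f}, for group B: {rate_b:.2f}")
# Although the true behavior rate is identical (0.3 for both groups), the
# model assigns group B roughly double the risk of group A, mirroring the
# biased enforcement baked into the historical records.
```

The point of the sketch is that no step of the "model" looks at group membership maliciously; the disparity enters entirely through the biased data it learns from, which is the dynamic the real-world systems above have been criticized for.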
New Zealand is no exception. A 2019 study of algorithm use in the country's public sector found significant variation in how, and how widely, these systems were used.
Accordingly, Statistics New Zealand launched the Algorithm Charter for Aotearoa New Zealand to give New Zealanders confidence that data is being used safely and effectively across government. New Zealand is the first country in the world to establish such a charter, setting standards to regulate the use of algorithms by government entities.
Under the charter, signatory entities pledged to be publicly transparent about how algorithms shape decision-making, including by giving "simple explanations," providing information on the processes used and how data is stored (unless forbidden by law, for example on national security grounds), and identifying and managing biases caused by algorithms.
Entities must also embed a te ao Māori perspective, a Māori worldview, in data collection and consult with the groups affected by their algorithms. In New Zealand, Māori are disproportionately represented in the justice and prison systems.
Entities committed to the charter include New Zealand's accident compensation agency, criticized in 2017 for using algorithms to detect fraud, and the corrections agency, which has used algorithms to assess a prisoner's risk of committing another crime or misdemeanor. The immigration agency, found in March to be using algorithms to profile applicants, is also a signatory.
Other entities have been heavily criticized for their adoption of algorithms, including the police, who came under fire from privacy advocates in 2019 for using facial recognition technology without announcing it. The intelligence agencies have not signed the charter.
The charter has been signed by 21 entities so far. None of the entities that have yet to sign has ruled the charter out, and more were expected to sign later.
Most New Zealanders understand the importance of algorithms in supporting decision-making and implementing government policy, but they want assurance that these systems are used safely and responsibly. This is where the charter comes in: by giving people that confidence, it builds community trust over the long term, unleashing the full potential of data to improve lives.
The Algorithm Charter for Aotearoa New Zealand grew out of a recommendation in a 2018 report by the Government Chief Data Steward and the Government Chief Digital Officer. That report argued that the safe and effective use of operational algorithms requires greater coherence and consistency across government, and it drew on the Principles for the Safe and Effective Use of Data and Analytics developed by the Government Chief Data Steward and the Privacy Commissioner.
The standards do not include an enforcement mechanism at this stage. However, entities that have signed the charter are expected to publish information on their official websites explaining how their algorithms are used and to supply the source code. If they fail to do so, the public can request that information.
The charter is one of several government initiatives to improve transparency, alongside the formation of an independent data ethics advisory group and work to improve data ethics education at the higher education level. New Zealand is also a signatory to the international Open Government Partnership.