Data science and artificial intelligence (AI) have advanced significantly over the last ten years, but many questions are arising about AI-driven decision-making in the public service. New Zealand is piloting an assessment tool that opens the door to more ethical uses of AI by government.
Canada, New Zealand, and many other countries are working toward greater transparency and consistency in the way governments use algorithms to assist or make decisions. For example, both Canada and New Zealand have used a semi-automated process to triage visa applications and inform decision-making, prioritize applications, or assign risk to applications. While semi-automated decision-making can greatly improve administrative efficiency, it may also come with unintended consequences when it directly impacts human lives. Accordingly, both countries are now piloting new ways to assess and mitigate the risk that algorithms might pose: Canada with its novel Algorithm Impact Assessment tool, and New Zealand with an Algorithm Charter and the Risk Matrix within it.
In this conversation, Statistics New Zealand’s Dale Elvy and Jeanne McKnight share their roles in developing the Algorithm Charter, the feedback they received along the way, and their plans for the future. Throughout the conversation, the team discusses some aspects that New Zealand has in common with Canada, such as a commitment to Indigenous involvement, as well as differences, such as size and jurisdictional structure.
For our readers who are unfamiliar with the idea of an algorithm charter, would you mind giving us your elevator pitch version of what it is and what it does?
Yes, of course. The Algorithm Charter came out of our Algorithm Assessment Report, which looked at what algorithms were being used across government. The charter is a commitment by government agencies to improve consistency, transparency, and accountability in their use of algorithms. It does that through the areas specified in the charter, which are transparency, partnership, focus on people, data, privacy, ethics, human rights, and oversight. We launched it in July of this year, and there are currently 26 signatories across government who have made a commitment to apply the charter to their work as kind of a best-practice standard. It’s a work in progress. We know technology is evolving fast and we know that it’s not necessarily the perfect solution to the challenges that we face, but we see it as an important step in a journey.
The charter suggests “clearly explaining how decisions are informed by algorithms.” Could you comment on whether this also means that AI should be “explainable” (in other words, does it preclude “black box” solutions from being used in government applications)?
Our perspective on this is that people should be able to understand the role that the data and analytics — advanced analytics included — play in making decisions that affect them, whether they’re specifically about services or, at a broader level, about prioritization or even environmental concerns. We think that is something that should absolutely be explainable by a government department or agency without needing to go into the complexity of the data itself or the way that the AI has interrogated the data to derive an outcome.
In the New Zealand context, there are very few cases where a completely automated decision is happening. It’s almost always informing a human’s judgment. That said, the role that the data is playing in that space should be absolutely transparent to the public when a significant decision is being made. We think that is pretty unambiguous and not something that’s too hard for agencies to get across. It doesn’t necessarily mean you have to be able to explain the technical stuff. But from our perspective, and the charter kind of alludes to it, we think it would be good practice for people working in government agencies who have the technical skills and are interested to be able to go into some of the detail as well.