Artificial Intelligence (AI) is advancing in the private sector, the public sector, and computer science departments around the world. However, transforming AI research into a working application for the average business is easier said than done. In this conversation, Anna Koop, Managing Director of Applied Science at the Alberta Machine Intelligence Institute (Amii), speaks with ICTC's Faun Rice about Amii, its place in Canada, and its team. In addition, Anna and Faun discuss the challenges that businesses face in applying AI solutions, as well as the degree to which researchers in academia are responsible for considering the social impact and eventual application of their work.
Could you talk a little about what sets Amii apart? What would you say the core difference is between Amii, MILA, and the Vector Institute?
Toronto has the Vector Institute, led by Geoffrey Hinton. Their real core is deep learning, and, being in a major business centre, they have a lot of industry supporting fundamental research on a big scale. MILA's research excellence is a combination of deep learning and reinforcement learning, and they have some talent-matching programs like ours. Then Amii has a lot of breadth. We cover a lot of ground with the expertise in our faculty, including reinforcement learning, and we're a very collegial group. Students who interact with us are always flabbergasted when our Fellows are readily available to chat with them at conferences.
We also have a very "boots on the ground" approach to working with industry in Alberta. Throughout our history, we've been concerned with how we can help industry and how we can help diversify Alberta's economy. A lot of new businesses know that they should care about machine learning but oftentimes have no idea where to start. It's really expensive to get started if you come at it from a perspective of, "OK, I need to hire a PhD graduate with machine learning expertise" and somehow turn that into business value real quick. It's a massive change! A lot of companies are still struggling to digitize their data and then actually look at it. So to go the next step and use predictive models or more advanced techniques is kind of overwhelming.
We're talking about industry applications of AI, so let's pivot to a recent set of events. As I understand it, NeurIPS, one of the leading AI conferences in Canada, recently added a requirement that paper submissions include a social impact statement, and that sparked a bit of a conversation in the discipline. So, blank slate, what has your experience been with the inclusion of social impact assessments in publishing and conferences?
Yeah, it's a really thorny issue, actually. And I am 100 percent on board. We have a really serious responsibility to understand how the technologies we develop are being used and the impact they're having in the world. On the one hand, think of the person who "invented electricity": of course, no one person invented electricity, but my point is that Thomas Edison is not responsible for every electrical device that came after him. On the other hand, the models we release into the world have impacts, and it's just kind of a basic human-decency thing to understand the impact your actions are having. One of the ways that has translated into academic publishing is that several conferences now require impact statements. You have to ask, "What are the ethical implications of your work?"
For an example of this in application, an exercise we do with clients sometimes is this: "Imagine I handed you a box that can, say, tell you exactly the probability that a person is going to stay at your company for two years. I'm going to hand you that box. What are you going to do with it? How does that fit into your process?" Then, once they've actually spent some time thinking about that, what if I say, "But it's only accurate 80 percent of the time"? What are the implications of that? Or, "It's accurate 100 percent of the time, but remember that it's only telling you the probability they'll stay at your company for two years, not that they'll be a good employee, not that they'll contribute positively to your culture." It's telling you that one little fact, which we hope is correlated with other things, but often it's not.
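To see why "accurate 80 percent of the time" is such a loaded caveat, here is a minimal back-of-the-envelope sketch in Python. All of the numbers (the retention base rates, the 80 percent sensitivity and specificity) are hypothetical assumptions chosen for illustration, not figures from Amii or the interview; the point is only that the same headline accuracy can mean very different things depending on how many people actually stay.

```python
# A minimal sketch, assuming a binary "stays two years" classifier
# with 80% sensitivity and 80% specificity (illustrative numbers only).

n = 1000                # hypothetical pool of employees
sensitivity = 0.80      # P(box says "stays" | employee actually stays)
specificity = 0.80      # P(box says "leaves" | employee actually leaves)

def trust_in_a_stays_prediction(base_rate):
    """Given the fraction of employees who truly stay, return
    P(actually stays | box says "stays")."""
    stayers = base_rate * n
    leavers = n - stayers
    true_positives = sensitivity * stayers          # stayers flagged "stays"
    false_positives = (1 - specificity) * leavers   # leavers flagged "stays"
    return true_positives / (true_positives + false_positives)

# Stable employer: 70% of employees truly stay two years.
print(f"70% base rate: {trust_in_a_stays_prediction(0.70):.2f}")  # ~0.90

# High-turnover shop: only 30% truly stay. Same box, same "80% accuracy".
print(f"30% base rate: {trust_in_a_stays_prediction(0.30):.2f}")  # ~0.63
```

In the first case, a "stays" prediction is right about 90 percent of the time; in the second, the very same box is right only about 63 percent of the time. The headline accuracy of the box tells you almost nothing about how much to trust an individual prediction until you know the base rate, which is exactly the kind of question the exercise is meant to surface.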