Artificial Intelligence Is Here Series: What Is AI? (DDN2-V05)

Description

This event recording captures a discussion on the risks and opportunities related to artificial intelligence and its potential to transform government.

Duration: 01:32:12
Published: May 24, 2022
Type: Video

Event: Artificial Intelligence Is Here Series: What Is AI?


Transcript: Artificial Intelligence Is Here Series: What Is AI?

[The CSPS logo appears on screen.]

[A video chat appears with Taki Sarantakis, Peter Loewen and Gillian Hadfield.]

Taki Sarantakis, Canada School of Public Service: Good morning, good afternoon, good evening, depending on when you are joining us. I am Taki Sarantakis, I'm the president of the Canada School of Public Service. And today I am delighted to be introducing a new series that we are bringing forward for Canada's public service. And it is on artificial intelligence. This series is called, "Artificial Intelligence, It Is Here." And as public servants, you may not know, you may not feel that it is here, but it is here. It is not about tomorrow, it is not about the future. The only thing that will happen tomorrow and in the future is the intensification of artificial intelligence.

So, we are bringing this series in partnership with our friends at the University of Toronto. And specifically, in this case, our friends at the University of Toronto are from the Schwartz Reisman Institute for Technology and Society, which is relatively new. And we are very honoured today to have the director, Gillian Hadfield, and the associate director, Peter Loewen. And we'll start with Gillian to give a few remarks and then we'll pass it over to Peter and then we will get on with our video. Gillian.

Gillian Hadfield, Schwartz Reisman Chair in Technology and Society: All right, thanks, Taki. We're really excited to be partnering with you on this series. Schwartz Reisman Institute for Technology and Society is a research and solutions hub dedicated to ensuring that technologies like AI are safe, responsible and make the world better for everyone. We're developing new approaches to better understand the social implications of powerful, new technologies and we are working to ensure that these technologies are harnessed to serve the global goal of cultivating human societies that are vital, peaceful, inclusive and just. With access to a range of leading experts in the field of AI and in public sector management, as well as a whole range of other disciplines, we are uniquely positioned and really pleased to deliver this series of sessions for the Canada School of Public Service, focusing on recent developments in AI and what these mean for our public servants.

So, in this first session of the series, I'll begin by providing an overview of the landscape of AI today, including its widespread applications and how new techniques of machine learning differ from previous approaches towards computing and decision making. And in the second part of the session, my colleague Peter Loewen, who is the director of the University of Toronto's Munk School of Global Affairs and Public Policy, as well as an associate director at the Schwartz Reisman Institute, will discuss the risks and opportunities for AI and machine learning to support and potentially transform the important work and decisions made by governments and public servants. We really hope you enjoy these lectures and the following panel discussion.

Taki Sarantakis: Thank you so much, Gillian. I was going to introduce Peter, but Gillian did it for us.

Peter Loewen, University of Toronto and Schwartz Reisman Chair in Technology and Society: Very smooth, that's great.

Taki Sarantakis: So, Peter is, today, he's here in his capacity as the associate director at Schwartz Reisman, but he has just very recently been appointed as the new director of the Munk School of Global Affairs and Public Policy, one of Canada's top think tanks and educational units within the University of Toronto. Peter.

Peter Loewen: Thanks very much, I'm Peter Loewen and it's my pleasure to be here with the Schwartz Reisman Institute for Technology and Society and to talk about all the opportunities that artificial intelligence is going to offer to governments. But also, the challenges the government will face in building citizen consent and implementing artificial intelligence and in harnessing all of its upsides while managing its downside risks. So, looking forward to the discussion after the lectures and thanks so much for helping us out in this series.

Taki Sarantakis: Not at all. The thanks is ours because, as I kind of hinted at the beginning, it's already here and it's going to intensify, and we need to understand this. So, what you are going to watch is one of the most remarkable little pieces of video I've ever seen on AI. It comes in two chunks. The first chunk is Gillian walking us through the basics of artificial intelligence. And she does this in a very gifted way. She does it in a way that all you have to do is be, you know, an interested generalist. And at the end of this, you will know a lot more about AI than you did before you watched this segment. And then after that, we pass it off to Peter, and Peter takes it directly from Gillian's introduction to AI as a concept, and Peter takes it into your job. And Peter takes it into your job as a public servant. And again, you may not feel or know today that you are dealing with AI, but you are, and you will be in the future. So, let's watch this really, really cool and insightful overview of AI and the start of its application to government. So, can we roll that, please?

[A title screen reads "Artificial Intelligence is Here Series. What is AI?"]

[Gillian stands in front of a blue background, showing representative pictures as she speaks. Key quotes and words drift by.]

Gillian Hadfield: What do you think of when you think of artificial intelligence? Do you think of futuristic robots? Terminators that might take over the planet? A disembodied intelligence someone could fall in love with? We're still a long way off from these kinds of artificial intelligence, and hopefully the Terminator variety is never unleashed.

But in lots of other ways, AI is already here, and making incredibly rapid advances in many, many domains. There is AI in the little robot that zooms around cleaning your floors, AI in your phone, recognizing your face and your voice. AI is deciding what you see on your social media newsfeed, and what recommendations you're given about the next video or TV series or new music release to enjoy. Maybe you're driving a car with driver assist powered by AI, as we wait for autonomous vehicles and drones to move from pilot programs to widespread use. Or relying on AI to adjust the thermostat at home. AI is producing many of the great effects we can now get with our smartphone cameras and is responsible for many of the features we've grown used to in video conferencing, like virtual backgrounds or automatic highlighting of a speaker. Not to mention smoothing out internet hiccups and giving you the option of wacky special effects. AI is powering the apps we use to decide what route to take home or order a meal, groceries or a ride, deciding what options we see, what prices we're offered and how fast the service will be.

And it's not just consumers that are using AI daily. Banks and financial institutions are using AI to detect fraud and money laundering and to score loan applicants. Many businesses use AI powered chatbots to support customer service. Online retailers use AI to decide what products to show. AI is helping to coordinate shipping and delivery logistics, automate payments, monitor employees, review job applications and schedule interviews, read, sort and respond to emails, contracts and other documents.

Much more is on the near horizon. AI systems can already diagnose some diseases faster and more reliably than human doctors. And more robust systems are in development to monitor patients and recommend optimal treatment strategies to improve health care. Cities are exploring the use of AI to make buildings, transit systems and public services smarter. Schools, colleges and universities are looking to AI to improve education with smart tutoring and to evaluate applications. Some places are trying out AI to help identify children at risk for elevated lead levels, or pregnant women at risk for birth complications, or buildings at risk for housing code violations, so that those at risk can be prioritized for community resources and support.

At least one large city is experimenting with AI to help leverage homeless youth social networks by identifying peer leaders who can be most effective in getting the word out about health-related information, such as how to avoid the spread of HIV. AI could help us improve security at the border, track climate change, invent new drugs and materials, and adjudicate claims in everything from tax to immigration to Social Security. AI could transform our courts and dispute resolution processes and how we design public policy and conduct elections.

["How does AI work?"]

The possible transformations coming from AI are truly astonishing. And they present most of us who are not computer scientists (and even some computer scientists) with a challenge. This is new technology and, in many ways, it's not like anything we've seen before. And it presents both opportunities and risks that we haven't really addressed before.
So, one of the things I've learned from talking with people in industry and governments and civil society organizations is that it helps to understand a bit just how AI works. What's under the hood. And that's what I'm going to focus on in this talk.

Let's start with something that is familiar. Computer programming. Now, even if you are not a programmer, even if, like me, you cannot write computer code, you probably have a good, intuitive understanding of how computers work. When we program a computer, we, the humans, tell the computer exactly what to do with the data we give it. The software inside your word processing program says things like: "if the shift key is pressed, make the next letter that is pressed on the keyboard a capital." Or "compare each word in this document to a list of properly spelled words. And if there's no match, put a red line under the word on the screen." Generically, that's all a computer program is. A collection of rules for the computer to follow. If/then statements: if shift key, then capital. If misspelling, then red line. If X, then Y. Up until now, this is how all of even the most amazing and powerful computer programming, the stuff that does things like keep planes in the air, has worked. Somebody told the computer what to look for and what to do with what it found. And even if the code took thousands of lines to write, every line was written by a human. And a programmer could always read the code and see, ah, there. That's where I told the machine to do Y when it saw X in the data.
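To make the idea of human-written rules concrete, here is a minimal illustrative sketch, not part of the talk itself; the word list and function names are invented for the example.

```python
# Every rule below was written by a human: classic "if X, then Y" programming.
KNOWN_WORDS = {"the", "computer", "follows", "rules"}  # invented example word list

def process_keypress(key: str, shift_pressed: bool) -> str:
    # Rule 1: if the shift key is pressed, make the letter a capital.
    return key.upper() if shift_pressed else key

def flag_misspellings(document: list[str]) -> list[str]:
    # Rule 2: if a word is not in the list of properly spelled words, flag it.
    return [word for word in document if word.lower() not in KNOWN_WORDS]

print(process_keypress("a", shift_pressed=True))                    # -> "A"
print(flag_misspellings(["The", "computor", "follows", "rules"]))   # -> ["computor"]
```

A programmer can read these rules line by line and point to exactly where the machine was told to do Y when it saw X.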

Now, think about what that means for how we use computers. We want to do something like find all the people in this city who haven't been vaccinated, and put them on a priority list for reach-out from public health. So, we program, or more likely, we ask someone else to program, a computer to review a data set composed of vaccination records and produce a list. Or better yet, schedule a visit for a public health worker when they are next in the neighbourhood. We are fully in control. It's fairly easy for us to understand what the computer is doing. And if something goes wrong, if there are objections that giving the computer the list of people who have been vaccinated is a violation of privacy, or if the scheduled visits for public health workers are biased against some neighbourhoods, or if they violate employment rules, well, we know who to talk to. We know who is responsible. The person who told the computer, or told the programmer to tell the computer, what to do. And it's reasonable to hold that person or department responsible, because we're not asking them to do something unreasonable when we ask them to take care in what they ask the computer to do.

Some sets of instructions for computers might be really complicated. Like the programs that help planes fly on autopilot. But at some point, these complicated instructions were developed by human experts. Aerospace engineers, perhaps. And they were tested by other experts to make sure they behave like everyone expected them to. Accidents can happen, but the risk is mostly known and manageable. And this is what modern AI is changing.

The AI that is transforming almost everything we do writes its own rules. It accomplishes some pretty amazing feats by doing so, but it also means that we don't have the same kind of control over it that we have over conventional programming. It is not easy to understand why the AI is doing what it's doing. And it is much more challenging to figure out how to hold humans responsible for what the AI does. That's why figuring out how to use AI in government and how to regulate its use in industry and civil society is such a new challenge.

You might be thinking, well, why build or use AI that you can't easily control and understand? Why not stick with AI based on rules written by humans? That is, in fact, how the early efforts to build artificial intelligence back in the 1950s started. Computer scientists tried to write out all the rules for how a computer should solve a problem, like how to drive a car or play chess. And there were some important breakthroughs with this kind of AI, sometimes called "Good Old-Fashioned AI" or "GOFAI." This is the kind of AI that was inside IBM's Deep Blue machine, which beat Garry Kasparov, the world chess champion, in 1997. But it eventually became clear that the world is just too complicated to write down all the rules, and progress in AI was very slow.

...Until 2012, when the age-old strategy of sidestepping an obstacle yielded a new solution. Don't try to write down the rules. See if the machine can figure out the rules. That's what the modern approach to AI, known as machine learning, does. Instead of a human telling the machine what to do, humans give the machine lots of data about the task that they want the machine to do, the goal they'd like the machine to achieve. And then they program the machine to crunch all that data, to figure out what rules would work best to get another machine or program to achieve the goal.

So instead of giving the machine a rule like "if a person is missing from the list of vaccinated people, put them on a priority list for public health to reach out," we give the machine lots of what we call training data. Vaccination rates, infection rates, demographics, medical services, online searches, travel patterns, you name it. This is historical data. Data from the past. And then we ask the machine, "what rule for public health priorities would have achieved our goal of reduced infection rates in this historical data?" The machine decides what "if X, then Y" rules would have been best to use.

So how does it do this? Basically, through trial and error. It starts with a random guess, like, online searches for cough medicine are good predictors of illness. And it sees how well a model built with that guess does at predicting the thing we told the machine to care about: infection rates. Of course, it does terribly at first. So, it adjusts the model. It tunes the dials. Let's put a little more weight on data from medical records about underlying risk factors in the community, such as diabetes. Maybe that does a little better. How about some weight on how much people in this neighbourhood use public transit? It's like experimenting with recipes in your kitchen. A little more salt? Oh, no, less salt! The machine just keeps doing that, tuning the model, millions of times, over and over until, if we've got good enough training data, it gets pretty good at predicting the thing we asked it to. Then we can use the model that the machine learning system built, the one that got good at predicting the training data, by giving it real-world data. The new data we want to make choices about.
So, we could give our public health model current data about infection rates, vaccination, public transit, online searches, etc. and ask it to recommend what we should do. Where to send the public health resources. Give it the Xs and let it tell us the Ys based on the rules it built up. If we think it's really good, we could even convert that recommendation into an automated decision. Straight from the mouth of the Machine Learning Model to the ear of the public health worker's daily schedule.
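Here is a minimal illustrative sketch of that workflow: fit a model on historical data, then hand it current data and ask for a recommendation. The features, the data and the library choice are invented for the example, not taken from the talk.

```python
# Sketch only: the machine "learns" its own if-X-then-Y rules from historical data,
# then those learned rules are applied to new, current data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Historical (training) data per neighbourhood:
# [vaccination rate, public transit use, cough-medicine searches]  (all synthetic)
X_train = rng.random((500, 3))
# The outcome we care about: did the neighbourhood see high infection rates? (synthetic)
y_train = (0.6 * X_train[:, 1] - 0.5 * X_train[:, 0] + 0.2 * rng.random(500) > 0.1).astype(int)

# "Tuning the dials": the fitting procedure adjusts the weights over many iterations.
model = LogisticRegression().fit(X_train, y_train)

# Deployment: feed in today's data and let the learned rules recommend where to send resources.
X_today = rng.random((5, 3))
priority = model.predict_proba(X_today)[:, 1]
print("Neighbourhoods ranked for public health outreach:", np.argsort(-priority))
```

Whether that ranking stays a recommendation to a human or becomes an automated decision is a policy choice, not a technical one.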

Sometimes machine learning looks like it's doing magic, because it starts seeing things, patterns in data that we humans did not see or know about. And that's precisely why it was so hard for us to write down all the rules. We didn't know already that there was a hidden relationship between public transit use and infection rates. Or maybe only in some neighbourhoods, only at times when the buses get more crowded. When IBM's Deep Blue beat Garry Kasparov using good old-fashioned AI, every move the machine made had been thought out at some point by a human. That's where the rules the machine used came from. But when Google's DeepMind built AlphaGo, a machine learning based computer program to play the ancient game of Go, and sent it up against the world champion Lee Sedol in 2016, well, the machine surprised everyone. It made moves, brilliant moves, that no human had ever imagined.
But just a game, right? Well, in 2020, DeepMind's AlphaFold, built using the same AI technology as AlphaGo, achieved a major scientific breakthrough: discovering the secret to predicting the 3D structure of proteins, a challenge that human biologists had been working for 50 years to solve and weren't sure they ever would. As one evolutionary biologist put it, "This will change medicine. It will change research. It will change bioengineering. It will change everything."

So, we face this trade off with AI. It can do things we never dreamed of and potentially bring us enormous benefits. But it does so precisely by doing things we don't predict or understand. So, that's now our challenge. How do we make sure that this powerful technology does not do more harm than good?

["How are models created?"]
           
Let's look a bit at the pipeline that produces AI, and where the choices are being made that influence the way AI works and the balance of benefits and costs it produces. From 30,000 feet, it looks something like this. A computer takes in training data and applies machine learning algorithms, which are rules for figuring out the best rules. And through millions of iterations produces a model, a set of if X, then Y statements. The model is then deployed, given current data to make real predictions and recommendations or decisions. Some people think the pipeline starts with the training data, but I think we have to go one step back to the decision to build AI for a particular purpose in the first place. Who makes that decision? What AI is being built? This is a critical place for people in policy and government and civil society to get involved.

Right now, most of the AI being produced is being built in response to commercial incentives. Market demand inside Big Tech companies. And that's why a huge amount of our current AI is focused on building better techniques for targeted advertising. Because that's the business model for monetizing a lot of the digital economy. Now, I'm an economist. I like markets, and I recognize that better advertising can mean better goods and services and more consumer surplus. But it's become a crazy, powerful driver of some not-so-great stuff, using those same AI technologies to target political messages and spread conspiracy theories and misinformation, for example. Figuring out how to get people addicted to their phones and screens to just keep scrolling.

But if AI is going to help us solve real human problems, improving government services, making our legal and administrative systems fairer and more accessible, making our communities healthier and safer and our planet more sustainable, then we have to start paying attention to the very first step in this pipeline. We need more AI being built to the specs of the public sector, not just the private sector. For governments, that means some combination of public investment, AI policy, AI procurement policy and the types of public/private sector partnerships we use to build public infrastructure. We all need to get creative to make sure the AI we get is the AI we need.

["What's in a data set?"]

Now, let's look at the choices to be made about data. First, there is the data used to build the AI. Suppose we want to build AI to make better predictions about mortality risk in ICUs, to help health care personnel and hospitals make better decisions. A good starting point is data from electronic health records. So perhaps we get a large hospital, or health care system, or insurance company to share those records with our AI developer team, and we ask the team to build a model that can predict mortality events: death in or just after the ICU visit, which is an entry in the electronic health record, using all the other information in a patient's health record.

This data will get divided into three sets. The first and biggest set is the training set. The information in that set about death events is the output variable that the AI model is going to try to predict, using all the other information in the set as input variables. The second data set is the validation set. This is the data used to check how well the guessing is going as the model trains. The errors made by the current version of the model in predicting mortality in the validation set tell the developers that they need to keep tuning the dials, aiming to bring the error rate down. The third set is the test set. This is where the AI developers check what they think is the final model. They don't go back and tune the model after they see those results, because that would be cheating. We want the performance of the model, the error rate in predicting mortality revealed by running the model on the test set, to be a good guide to what is going to happen when we start using the model on new, unseen data out there in the real world.
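As a minimal illustrative sketch of that three-way division, here is one common way developers split a data set in code; the 70/15/15 proportions are a convention assumed for the example, not something specified in the talk.

```python
# Sketch: split records into training, validation and test sets.
from sklearn.model_selection import train_test_split

def three_way_split(records, labels, seed=42):
    # Hold out 30% of the data, then split that holdout in half:
    # roughly 70% training, 15% validation (for tuning), 15% test (touched once, at the end).
    X_train, X_hold, y_train, y_hold = train_test_split(
        records, labels, test_size=0.30, random_state=seed)
    X_val, X_test, y_val, y_test = train_test_split(
        X_hold, y_hold, test_size=0.50, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```

The discipline is as important as the code: the test set is only looked at once, after all the tuning is done.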

Now, I've told you a lot about data, probably more than you wanted to know. But the key thing I want you to see is that there are a lot of decisions being made here about what data to use and how to use it. And those are decisions lots of people other than just the computer scientists need to be involved in. How is a data set being constructed? What's in the electronic health record? What's not in the electronic health record? Who's in the data set? Who is left out? If we train the model on data from a wealthy suburban hospital, it might not perform very well for populations in rural or diverse urban communities. If the data sets are too small or too narrow because of barriers to data sharing put in place by hospitals trying to protect trade secrets, or by lawyers worried about liability or privacy oversight, then they might be missing important variables. Or they just might not be rich enough to yield good insights. And what about those test sets? Who is deciding what is an adequately representative data set? Who is making sure there was no cheating during development? Who is testing to make sure that the performance on the test set is not a fluke? These are all questions that we should not be leaving to computer scientists alone. They are ones that policymakers need to be answering.

Once we have a model that is performing well at prediction, we still have a lot of choices to make about if, where and how the model should be deployed. Who will decide if an AI system is fit for purpose, safe enough, and fair enough to be used? How will they decide that? What will be the relationship between the AI system and the humans? Will the AI system be limited to making recommendations for humans to make decisions? Or will it become an automated decision system? How will humans check to make sure it is working as expected and intended? Will humans over-trust the machine's recommendations? Not trust them enough? If the AI-based system makes a mistake and causes harm, will we know? Who will be held accountable? Will our systems of accountability and responsibility work when the humans using the machine or judging fault don't really understand how it works? These are really hard questions, but they are ones that I firmly believe there are answers to.

We need to get cracking on the research and policy making needed to get those answers. Today we are living in almost a complete regulatory vacuum around the development and deployment of AI. It's happening so fast, and it's a critical reason why governments and policymakers need to get their heads around AI and step up to the plate. Just think about that AI helping to reduce mortality and health care. Almost everyone who works in health care is licensed in some way. Everyone except the AI developers building AI systems. And every drug and medical device in use has been tested and certified by independent companies and agencies with expertise in testing and certification. But we don't have any of those systems in place for AI yet. It's easy to think the answer is for governments to hold up a big stop sign and just ban AI. But there are at least three big problems with that.

First, we'd be foregoing all the benefits AI can bring us. A bit like saying no to the industrial revolution 200 years ago. Our economies work pretty well today, but there are still billions around the globe living in poverty. We are facing enormous challenges to health, sustainability, human rights and peaceful societies. AI won't solve all our problems, but it can definitely help. Think about what's possible when AI can help diagnose illness in remote communities and track water quality better than we can now.

Second, there's really no way to just stop AI, because it's far from easy to draw the line between what is and what is not a use of AI. Suppose we wanted to say, as legislation around the globe is now trying to do, that governments should not use automated decision making. Governments already use loads of automated decision making. An algorithm is just a rule for how to respond to data. A risk scoring system that tells people in a government department how to decide who gets a small business loan or who is released on parole is an algorithm, and it might be an automated decision-making system. Is it better or worse if that system is built by eyeballing the data, by doing conventional data analysis, or by using machine learning?

And third, there's really no way to stop industry from using AI. AI is what we economists call a general-purpose technology. It can help us do just about anything. In fact, lots of people don't even like to think of it as a technology because it's not embodied in a product or a machine, per se. It's an approach to solving problems. It's a way of understanding data. Industry will keep building AI and it won't be possible to make them stop. Governments can ban particular uses of data, like the use of facial recognition software by police or retailers, but it will be almost impossible to keep up with all of the possible uses of this way of analyzing the world.

So, back to our core question. How do we make sure that this powerful technology does not do more harm than good? Answering that question will require new thinking, not only from computer scientists, but also from social scientists, and humanists, and policymakers. In fact, it's critical for those of us who are not computer scientists to understand and be involved now, in how AI is being built and regulated. The raw technology may be hard to predict, but that doesn't mean we have no say in how it gets used or deployed in the world. We have to remain in charge. The biology of complicated drugs, like mRNA vaccines, is hard to predict. But that is why we design effective ways of testing drugs and require testing before we send them out for people to put into their bodies. The challenge in deciding when it is smart to use AI and how, and how best to build and regulate it, is a daunting one. But it is one we can and must respond to. We all have to be involved. We all have to understand better what AI is and how it works. Computer scientists are good at what they do, but not at what the rest of us do. We need to be working together to make AI that benefits us all.

[The video fades back to the title screen.]

["What are the risks and opportunities for AI to transform government?" Peter stands in front of the blue background. Representative pictures and quotes fade in and out as he speaks.]

Peter Loewen: Well, thank you very much for joining me for this talk. My name is Peter Loewen. I'm a professor at the University of Toronto and I'm the Associate Director at the Schwartz Reisman Institute. Along with our director, Gillian Hadfield, I'm very pleased to organize this series on artificial intelligence and government. It's a particularly exciting topic, I think, because artificial intelligence and machine learning hold out so much promise for governments. It's an important topic because with that promise come deep, vexing risks. Manageable ones, to be sure, but risks nonetheless. What I wish to achieve in this short talk is to give you a framework to think about how artificial intelligence and machine learning might change government, how we might think about the risks involved in that, and how we might think about where we can most effectively deploy artificial intelligence. And not only because it will make government more efficient or cost effective, but because it might make government, and the public services that support governments, better.

Before I do that, though, I'd like to ask you to reflect on the work that you do. And I'm actually going to give you a little bit of an assignment as you watch this video. What I want you to do is to think about the work that you do every day. Of course, with hundreds of thousands of public servants, the diversity of tasks you engage in, the fields in which those tasks are situated, and your place within the larger bureaucracy are remarkably varied. I can't begin to imagine all of it. But despite that, I think there are some common elements to the tasks you all engage in. And asking questions about those tasks could help motivate our understanding of how artificial intelligence and machine learning might be employed in government. So here are the questions.

Number one, can you manage the number of decisions you have to make each day? Or do you feel overwhelmed by this? Number two, do you understand the reasons you are asked to take the decisions you are making? Number three, do you understand how to make these decisions in a fashion that is fair and in keeping with the ethics of public service? Do you understand how the decisions you make are consistent with larger democratic values? And fourth, do you learn from the decisions you take? Or do you simply move on to the next set of decisions, unsure if you're advising on the right course of action?

If you're not sure about these things yourself, imagine three different people who are taking decisions. One might be a frontline worker figuring out eligibility for employment insurance for someone whose file was put under review. How many decisions like this are they making in a day? One might be someone determining the suitability of some applications for residency or immigration over other applications. How do they know the exact criteria they should be choosing on, when they know these applicants, and where they're coming from, only imperfectly? Or imagine someone working as a director in the Ministry of Finance seeking to understand whether new tax rules will lead to net increases in revenues. How can they be more certain that there won't be other changes in the behaviour of citizens or in the responses of other states, which might make their revenue forecasts incorrect? These are all the kinds of thorny problems that governments face, and for which AI and machine learning hold out a lot of promise.

["Four fundamental challenges that governments face."]

My hope in having you reflect on the kinds of decisions you make in your own day-to-day work is that you might land on some of the fundamental challenges that governments face when making decisions. There are many, of course, but I want to focus on four in this talk. I think these challenges share important characteristics. They're straightforward, and maybe they're even self-evident. On their own, they're troublesome. But taken together, when you add all of them up, they're much more vexing. And they're challenges not only for individuals, but for all of government. So, what are these challenges?

First, people in government are asked to make a large number of decisions. Second, decisions are to be made in a way that's consistent with policy goals and objectives. Third, decisions, especially those that have bearing on the public, and most especially those that have bearing on individuals, should be made in a way that is procedurally fair and consistent with democratic norms. Fourth, we should learn from the decisions that we make.

Now, I'm not sure that all of you listening to this will face all of these challenges; it might be that you recognize just one or two of them. But I assure you that all of these challenges are faced by your department, and certainly all are faced by the government writ large. These challenges are collective and they are organizational. Let me expand on each of these challenges just a little bit more.

First, people in government are asked to make a large number of decisions. As the old saying goes, to govern is to choose. Now, this is typically meant to capture the idea that politicians have to make choices, big choices, between different policy options. But it's also true, I think, in a very important sense about the work that you do as public servants. Every drafted policy memo which ends with a recommendation, or three recommendations, for policy action is suggesting some course of action. It is, in other words, encouraging one decision over others. Every time an EI case is reviewed, a decision is being made. Now, sometimes that decision will be deterministic, a set of rules will be applied, and there will be no human judgment applied, but a decision has nonetheless been made. This person worked some number of weeks and therefore they qualify. But those rules were themselves a kind of decision. Every time a policy official in finance settles on a course of action for a tax rule change to bring to their minister, and that measure is put into a budget and then realized in legislation, a decision has been made. Of course, these are just a few examples, but it remains true that millions of decisions are made by the Government of Canada each year.

Second, decisions are to be made in a way that is consistent with policy goals and objectives. Government cannot, or at least should not, make decisions arbitrarily. Instead, decisions should be made consistent with some set of policies that have been articulated to guide that decision making. Sometimes these policy guidelines are quite detailed; sometimes there are broader objectives behind them. Let me give you an example, which I hope will make this distinction clear. Suppose we're interested in how EI cases are determined, in particular cases with unclear information about the circumstances under which an individual left their role. Whether they left voluntarily, whether they were fired with cause, whether they were laid off. With a well-developed policy, there will be a detailed set of guidelines or decision rules, there they are again, decisions, which will help an agent work through whether the individual qualifies for EI. But beyond those rules, there may be objectives or principles which are guiding the decision. For example, the reason for denying EI when individuals voluntarily leave a job might be that we want to discourage people from leaving gainful employment and making recourse to EI when they could otherwise continue working. In this case, we want to avoid a certain moral hazard, because it strains the program.

But maybe there's a different reason or objective. Maybe the rule is there to encourage people to work for some other moral or normative reason. Why does it matter? Well, knowing why a rule is there, understanding the values on which the rule is based and the objectives which it is trying to meet, may help the decision maker understand if their decisions are consistent. Not only with sometimes fuzzy rules, but also with the overall goals of a program. Reasons matter, not just decisions.

Third, how we make decisions in government matters. Which is to say, process matters. Decisions, especially those that have bearing on the public, and most especially those that have bearing on an individual, should be made in a way that's procedurally fair and consistent with democratic norms. A challenge for government is not only that a very large number of decisions need to be made, but that the way they are made has to be consistent with what people think is fair and democratic. I understand that this may sound fuzzy. Let me give you an example.

Suppose you're in a rush to park your car one morning, dropping off your child or your grandchild at a new summer camp program. You hurriedly park your car after quickly checking the parking sign, which said ten-minute drop-offs were permitted. After running your child inside, you come back to your car to find a ticket on it. It turns out you're not allowed to make drop-offs at that place on Monday mornings, as this is when the street sweeper drives by to clean the curb lane. You've been fined $100. There is, it also turns out, a sign there saying this, but it's partially obstructed by trees, and you couldn't easily see it. And anyway, you were only in for a bit and the street sweeper didn't come. In most cases like this, you can appeal the decision.

So, suppose you do go to appeal the decision and you get two minutes in front of some city clerk or a justice of the peace. As you go to explain your predicament, the clerk does not appear to be listening carefully. Perhaps he's looking at his phone, or imagine he doesn't look at you at all; instead, he just stares out the window. How would you feel about this? Would you feel like your case was being heard well? I think many of us would say no. We would feel disrespected, unheard, unfairly treated. Indeed, we may even infer from this that the clerk was going to deny us our case. Now, imagine that the clerk, having paid little attention, says, "I've decided to quash your ticket, goodbye." Would you feel good? Probably. After all, you've just saved $100. But you would not, I reckon, feel perfectly fine. Because you would have some sense that the decision was not taken in the right way, whatever the outcome. The way you were treated was not consistent with the level of respect you might expect from that institution, or the level of fairness and seriousness. Humans have deep senses, not only of what outcomes they prefer, but of how those outcomes should be arrived at. This, I hope, does a little bit to illustrate one of the challenges of decision making in government. The public cares not only about what decisions are made, but how decisions are made.

Let's go back to the example for just one more minute. Suppose you decided to hang around in that courtroom for a bit longer to see what other kinds of decisions were being made by this clerk. Suppose that someone who looked just like you entered. Lo and behold, they had received the same ticket a week earlier. Imagine further that the clerk treated them just as he treated you, with the same kind of distracted indifference. How would you feel if you then saw that this individual did not have their ticket quashed? Instead, they were put on the hook for the $100? Would you be bothered by not only the decorum of the clerk, but also his inconsistency? Even if you got the better end of that inconsistency? Of course you would. So, the challenge, then, is not only that decisions are procedurally well executed, but also that they are consistent.

Fourth, we should learn from the decisions that we make. This may seem self-evident, obvious, even daft, but it is important and true. What does it take to learn from the decisions that we make? Well, one rough and ready understanding would be that we evaluate how and maybe why we expect the decision to turn out. We then observe how the decision worked out. We then update our beliefs about that course of action and maybe related courses of action. Easier said than done, however. There are several challenges to this.

First, we have to have data on how our decision turned out. Those data may not always be available to us. Second, and this problem is the one that social scientists call the "fundamental challenge of causal inference" or the "fundamental challenge of separating causation from correlation," we have to know what outcome would have resulted if we had made a different decision. Third, we have to have the time to look at the data. Fourth, we have to figure out what beliefs or what models of the world motivated the decision in the first place. And then we have to update those. Learning is not impossible, but it's very, very hard. Now, this is not to say that good decisions can't be made; of course they can, and they regularly are. But could they be made better? And what promise might AI and machine learning offer for better decision making?

["The promise of AI and machine learning for governments."]

So, let's talk about the promise of AI and machine learning. Artificial intelligence and machine learning have the potential for major advances on all of these challenges. What I wish to do is not only or principally to give you examples of how AI and machine learning can solve these challenges. We'll do that in other talks and in our discussions. But introduce you to the logic of how these technologies can improve decision making. First, AI and machine learning can help us automate decisions. They might do this by developing a series of decision rules or prediction rules. When given certain data inputs, they'll make a decision. For example, we might develop a model of what factors lead to a small business successfully paying back a loan. If these factors can be measured, they can be added into a model, which can decide which small businesses should receive a government loan. Or, we might rely on a series of algorithms to make recommendations and leave the final decision to the decision maker, aided by AI and by machine learning.

Now, this kind of example might sound obvious. Don't we already rely on decision rules? After all, small businesses do apply for loans now. And when they do so, they do submit data about their revenues, their business plans, their credit scores. So, what's new here? Well, there are three things.

First, by automating the consideration of these factors, we could potentially increase the speed of decision making. Machines, after all, do not get tired, and they have substantially more computational power, though sometimes less complexity, than humans. Second, as important as speed is the consistency of decisions made by machines. Under certain parameters, given the same data, a machine will always make the same decisions. We can't actually say the same thing about humans, as we're constantly beset by small changes in our environment, in our mood, or by otherwise irrelevant factors or features of a decision, which change the decisions that we make. Sometimes these irrelevant factors will involve things like bias or prejudice, which have the potential to do great harm. Other times, it might be something as arbitrary as weather or food affecting our mood. In between these are all manner of things that might make us make one decision one day and another decision the next, with no change in the facts, with no change in the inputs. So, one of the promises of automated decision making is the potential to bring to bear data and other information that are chosen because of their consistency with policy goals. At the same time, we can eliminate the influence of irrelevant or potentially biased or biasing sources of information.

Third, automated decision making can incorporate a much larger store of information into any decision. Indeed, one of the great promises of data-driven automated decision making is that it can consider a massive volume of data from past decisions. It can match up outcomes from decisions where different paths were taken and understand whether path A and path B in fact led to different outcomes. And it can constantly improve on its predictions. It can, in a word, learn. And it can do so at a rate much faster than humans.
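To make those properties concrete, here is a minimal illustrative sketch of an automated loan-scoring model; the features, data and model choice are invented for the example and are not from the talk.

```python
# Sketch of speed, consistency and learning in an automated decision aid.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Past loan decisions: [revenue, credit score, years in business] (synthetic),
# and whether each loan was in fact repaid.
X_past = rng.random((1000, 3))
repaid = (X_past @ np.array([0.5, 1.0, 0.3]) + 0.1 * rng.standard_normal(1000) > 0.9).astype(int)

model = GradientBoostingClassifier().fit(X_past, repaid)

# Consistency: the same application always gets the same score (unlike a tired reviewer).
applicant = np.array([[0.4, 0.8, 0.2]])
assert np.allclose(model.predict_proba(applicant), model.predict_proba(applicant))

# Learning: as the outcomes of new decisions are observed, the model is retrained on them.
X_new, repaid_new = rng.random((200, 3)), rng.integers(0, 2, 200)
model = GradientBoostingClassifier().fit(np.vstack([X_past, X_new]), np.hstack([repaid, repaid_new]))
```

Whether such a score should only advise a human or decide outright is exactly the policy question raised in the rest of the talk.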

If the public service faces fundamental challenges in its decision making, AI and machine learning hold out solutions to at least some of these fundamental challenges. They do not do so without risk, however.

["Major risks of AI and machine learning."]

So, what then are the major risks of employing AI and machine learning in government decision making? There are, by my lights, at least three major risks of relying more on automated decision making. And it's important that we're clear-eyed about these, because they have to be front of mind for us if we want to effectively implement automated decision making. By ignoring them, we invite citizens' objections to governments using algorithms. This is something we talk about much more in a future session. First, there's the problem of biased data inputs. Let me give you an example. Suppose people are divided into two groups, Group A and Group B. People in both groups are applying for small business loans. People in these groups are otherwise the same in their talents, their abilities and their likelihood of success. But people in Group B are subject to discrimination in the real world, such that when real humans evaluate or evaluated their potential, they would generally assign them lower scores. What would happen, then, if we used the decisions made by humans to train or to inform our machines or algorithms to make these decisions on our behalf? The machines will, with some likelihood, learn our biases and make equally bad predictions. This is the problem of biased data inputs. Now, this is just a specific instance of a general problem. Problems of poor data, especially if the quality of data varies by group or region or some other factor, will lead to worse predictions for some groups than others. What we put into our models will affect what comes out of our models.
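A minimal illustrative sketch of how that happens, with entirely synthetic data, assumed features and an assumed model choice:

```python
# Sketch: two groups with identical underlying ability, but the historical human
# evaluations held Group B to a higher bar. A model trained on those evaluations
# reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
ability = rng.random(n)            # true potential: same distribution in both groups
group_b = rng.integers(0, 2, n)    # 0 = Group A, 1 = Group B

# Historical human approvals: Group B applicants were (unfairly) held to a higher threshold.
approved = (ability > 0.5 + 0.2 * group_b).astype(int)

X = np.column_stack([ability, group_b])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical ability but different groups get different scores.
same_ability = np.array([[0.6, 0], [0.6, 1]])
print(model.predict_proba(same_ability)[:, 1])   # Group B receives a lower approval probability
```

The model has done nothing wrong by its own lights; it has faithfully learned the pattern in the data it was given, bias included.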

Second, there's the problem of explainability. Why does a machine make the decisions which it makes? You might not think it matters that we understand the reasons a machine makes the decisions it does. After all, we count on any number of machines to function or serve some purpose, and we don't care much about how they get there. Who really knows how an air conditioner works, for example? But think back to the example of the parking ticket. Now, imagine yourself, instead of being in front of a judge, being in front of a machine that's considering your case. Do you feel better that a machine is considering the particularities of your case? Well, in public matters, reasons matter, because they tell us important things about the kinds of decisions an actor will make in other situations. And they tell us a lot about whether we can trust that actor in the future. Reasons matter and explanations matter, because they help us trust decision makers. So, the individual who is rejected for a small business loan may not like the decision, but they will feel much better about it if they can, for example, understand what they would need to change to be successful in the future. Or if they understood what factors were brought to bear in good faith by the decision maker.

Finally, and relatedly, there are problems of consent and procedural unfairness. Emerging research is very clear on this matter. Citizens are more likely to accept a decision when they know that there has been a human in the loop, at least somewhere. They need to feel heard by another human being to believe that their unique factors were included in a decision. We can't do this for everything, but we should think about how AI and machine learning systems can still allow humans, public servants, to bring their judgments to bear, to explain how decisions were made, and to demonstrate that these decisions were aided, rather than impeded, by artificial intelligence and machine learning. We look forward to talking about this more.

[The video fades from a title screen back to the chat between Taki, Gillian and Peter.]

Taki Sarantakis: So that's it, that's artificial intelligence. And again, Gillian gave us a very good introduction and more than an introduction. She gave us an overview of the key issues involved in artificial intelligence from a kind of a conceptual, thematic perspective. And then Peter took that and really kind of started the dive into what artificial intelligence means to you as a public servant. So, I want to go back, Gillian. I want to start a little bit at the beginning. We're starting to think about AI in different ways. And one of the first ways that I really kind of "got" AI was a book written by one of your colleagues at the University of Toronto, or a bunch of your colleagues at the University of Toronto: Prediction Machines. And when I read this book, I was like, oh, now I get it. Now I understand it. And there's a couple of shelves of books back there on AI that I'm kind of plowing through, and I think I'm into about three quarters of one of the shelves. And every time I read another book, I get another significant tweak on AI. And so, Prediction Machines is one way of understanding AI.

Another way of understanding AI, though, is kind of what you and Peter started touching upon, and one of the things that kind of maybe causes us to have a little bit of pause. Which is, in addition to being prediction machines, we might even be better off starting to think of them as decision machines. And maybe, let's kick off the conversation with your thoughts on going back to the big parts of AI, which is: are they prediction machines, which kind of takes us down one path, or are they decision machines, which takes us down a whole different path?

Gillian Hadfield: It's a critical point, and it's actually a bit of a debate I have with my colleagues. Because I think it is really important to understand that the way artificial intelligence is working is: it is making predictions. It's making predictions about what will happen if the wheel of the car moves a little bit, or what will happen, you know, if you make a loan to one person, to the small business. It's also making predictions about how humans will label things. Would they call that thing over there a cat, that picture a cat? Or would they call it a dog? So, it's definitely making predictions. But I really think what's critical there, because that kind of makes it seem kind of passive, like it just predicts things for us, and then we go on about our ordinary work and lives. But the point you're making about them being decision machines is really critical, because we are seeing the introduction of automated decision making through these mechanisms. And effectively, we're seeing machines that are kind of predicting, oh, here's what I think a human who had to decide whether to grant that claim or not, or admit that person or not, would decide. We're seeing not just the prediction of the decision the human would make, but sometimes just straight, you know, let's just-- we'll just implement that decision, send the letter telling somebody that their application has been denied, directly through the machine. So, a really, really important distinction there, between just predicting, which is feeding into human decision making, and having the machine make the decision itself.

Taki Sarantakis: Yeah, and that gets back to: will AI be one of our tools, or will we become one of the tools for AI? And Peter, I've heard you talk a little bit in different contexts over the last couple of years about something called Algorithmic Government, which kind of sounds scary. But if you think about it, it's not, because in some ways we all kind of have been doing algorithmic government for a long, long time, kind of since 1867. What do you mean by algorithmic government? And how is that different today?

Peter Loewen: Yeah, it's a very good question. So, I'll say a couple things on it. The idea of algorithmic government is really, it's really the idea that in the process of government deciding what to do- and that goes all the way from, you know, setting policy to making decisions about the implementation of policy to evaluating it. The notion of algorithmic government is that all those stages do have the potential to implement algorithms or decision rules, or ways of computing to some conclusion that could replace the decision making of a human.

So, an example would be, I mean, Gillian's used the example. Let's suppose that we're deciding who should get CERB or not. Who should get income support through the pandemic? There's a number of criteria that get applied. But those criteria sometimes have gray zones in them, right? And whether they are gray zones or not, you could have a computer simply saying, "given your information, you are eligible." And you need never have a human in that loop, right? Where the computer is making all those decisions. To the extent that those decisions are farmed out to algorithms, computers, rather than humans, you're getting into the domain of algorithmic government.

Now, what's really important here, and I think this is what makes- just to broaden just slightly, what makes AI so potentially powerful for governments, is that governments have actually, as you sort of have noted, Taki, have had algorithmic systems for a long time. Decision making in the public service is highly structured. It involves different times in which some processes are farmed out to some people, gathering information, doing jurisdictional scans, coming to options. Those options are then sent up to someone who's senior to them, who often is then making a decision over, like, an attenuated set of information, over one or two or three options. And then someone might be reviewing that later on and deciding whether that was a good decision or not. It's all kind of different, than, for example, a small businessperson who's just sort of organically going along with their day, making a product, trying to sell it, but is not in a highly structured decision-making environment.

So, the types of things that we think about when we're thinking about algorithms making decisions. Actually have- the groundwork has already been laid for those in government in a lot of ways, because you've got highly structured decision-making processes already which have accountability mechanisms in them. Which have systems for boiling down information to central points which have predictions in them actually, right? About what will happen if you adopt one policy over another. So the idea of algorithmic government is really just a way of saying, we could think systematically of where at all of those different points you could use machines, rather than humans to aid in the decisions that are being made, or aid in the processes that are being enacted to develop a policy, to implement it, or to evaluate it.

Taki Sarantakis: Yeah, because again, like, as you just put very well, in government, or at least in bureaucracy, and let's call it a bureaucracy rather than government, because government's a little messy. But bureaucracy in theory is cleaner because it's more structured. We've been using algorithms forever. Since before Confederation, because really, in public service and in public administration, you have a body of rules and then you have kind of a specific situation. Whether that's "Can I have a passport?" "Can I receive CERB?" "Am I eligible for unemployment insurance?" You know, "Can I get a vaccine?" And that's the particular situation. And then you apply that particular situation against the body of rules, and that is the marriage. And that is public service. And that's what a lot of us do, whether that's adjudicating a tribunal decision, or whether that is public health deciding who receives a vaccine and who doesn't.

So, the notion of an algorithm in our daily work is something that we shouldn't really be afraid of, because an algorithm is just another way of saying you have rules, and you apply something to those rules. But Gillian, something that you said during your video, I think, is what starts to get us into this "oh, this is different" moment. And I really liked your example of the games. In chess, we had to kind of program everything. Back when Deep Blue beat Kasparov for the first time, we had to feed millions, or at least thousands and thousands, of chess games into a computer and program, "oh, you know, if the knight does this, the bishop does this, the queen does that." We, the humans, had to preemptively code that. But now what we're finding more and more is that the real power of AI isn't that it can follow the rules we code into it. The real power of AI is that the machine just kind of goes, "okay, you don't even have to tell me what the rules of chess are. Just feed me chess games and I'll figure it out. Kind of step aside, human, I'll show you how to play chess."

Gillian Hadfield: Yeah, that's exactly right. And that's, I think, the most important first thing to really understand. Well, the first thing is what you emphasized: algorithms, we use them all the time. They're just sets of rules, or recipes for "here's what to do when you have these sets of circumstances or facts or ingredients."

But then there's that next point, that "here's what's different": when we have done conventional programming, where we've written those rules, those algorithms, before, a human has said, "here's what we should do." Like you just said, "if the knight is here, move the bishop there." Whereas when we do machine learning, the machine itself, we just tell it, like you say, we don't even need to give it the rules of the game. We just say, "play, like, millions of games, and we'll tell you when you win. So, you'll get that little bit of information, did you win the game or not, and then you'll figure out, machine, what's the best way to achieve that objective, if what you're trying to do is win more often." So, I think that's the piece that really transforms things, that introduces a whole new set of opportunities and challenges for doing our work. The key point is that the machine is saying, "here's the best way to do this thing you told me you want to do: win chess games." Or, you know, allocate your benefits in the most fair and efficient way.
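That "we'll only tell you when you win" idea can be shown with a toy sketch. In the Python snippet below, the learner is never told the rules of a (made-up) game; after each move it only learns whether it won, and over many plays it settles on the move that wins most often. The game, its moves, and their win probabilities are all invented for illustration:

    import random

    # A toy "game" the machine knows nothing about: each move has a hidden
    # chance of producing a win. The learner only ever sees "did I win?".
    HIDDEN_WIN_PROB = {"a": 0.2, "b": 0.5, "c": 0.8}  # invisible to the learner

    def play(move: str) -> bool:
        return random.random() < HIDDEN_WIN_PROB[move]

    wins = {m: 0 for m in HIDDEN_WIN_PROB}
    plays = {m: 0 for m in HIDDEN_WIN_PROB}

    for _ in range(10_000):
        # Mostly repeat the move that has won most often so far; sometimes explore.
        if random.random() < 0.1 or sum(plays.values()) == 0:
            move = random.choice(list(HIDDEN_WIN_PROB))
        else:
            move = max(plays, key=lambda m: wins[m] / max(plays[m], 1))
        plays[move] += 1
        wins[move] += play(move)

    print(max(plays, key=plays.get))  # almost always "c", the best move

No rule about why "c" is best was ever written down; the win signal alone was enough. That is the shift from programming the rules to letting the machine find them.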

Peter Loewen: If I can just build on what Gillian has noted there, and take the analogy from chess into government, there's a really important thing that we want to underline here. What you were pointing out, Taki, is that often what is going on is that people are making decisions based on rules, but there are judgments involved in those rules, right? And often those judgments have values in them that are not explicitly stated. You know, the way we do things in the public service and the reasons for which we make decisions shouldn't be rooted in things that are prejudicial or biased or arbitrary, right? Those values may not be written into the rules, but there are norms that lead us to them. The challenge with machine learning, right, is that when a machine learns how to win a chess game, it's learning in an environment that is very constrained. But if you think about a machine that starts learning from what humans are doing in the messy real world, the concern becomes that the machine may adopt some of the more nefarious or biased reasons for why humans make decisions. And the capacity of machines to learn our biases and to replicate them at very high speed is actually one of the more concerning things about the use of AI in decision making.

Suppose we feed a machine a bunch of data about who's been given a loan and who hasn't, and then who successfully repaid and who didn't. But for some reason, all those data have been corrupted by the biases of loan officers who, for example, gave money to people from one city over another, or from one group over another, or one income level over another, in a way that was not consistent with our values. The machine won't pick up that those decisions were inconsistent with our values. The machine will say, "ah, this is how you make those decisions, and I'm just going to do it more efficiently."
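A minimal sketch of that failure mode, using entirely made-up data: in the history below, applicants in both neighbourhoods are equally able to repay, but past officers put an extra hurdle in front of one neighbourhood. A "model" that simply imitates the historical decisions learns the hurdle right back:

    import random
    random.seed(0)

    # Hypothetical historical loan decisions. Ability to repay is identical across
    # neighbourhoods, but past officers approved one neighbourhood far less often.
    history = []
    for _ in range(5000):
        neighbourhood = random.choice(["north", "south"])
        can_repay = random.random() < 0.7                    # same in both places
        if neighbourhood == "north":
            approved = can_repay                             # judged on repayment
        else:
            approved = can_repay and random.random() < 0.5   # biased extra hurdle
        history.append((neighbourhood, can_repay, approved))

    # A "model" that imitates the past: approve at the historical approval rate
    # observed for applicants who look the same.
    def learned_rate(neighbourhood, can_repay):
        matches = [a for (n, r, a) in history if n == neighbourhood and r == can_repay]
        return sum(matches) / len(matches)

    print(round(learned_rate("north", True), 2))   # ~1.0
    print(round(learned_rate("south", True), 2))   # ~0.5 -- the old bias, learned back

Nothing in the data tells the machine that the gap between the two neighbourhoods was wrong; it just reproduces it, faster and at scale.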

Taki Sarantakis: Exactly.

Peter Loewen: And so, the concern here is essentially: how do we teach machines, which don't have values or morals of their own, what values and morals we want them to base their decisions on, when, you know, we often have trouble doing that ourselves?

Taki Sarantakis: Yeah, and I want to stay on that for a moment, because we hear a lot about one of the big fears in AI, which is exactly what you said: that the algorithm is biased, or the algorithm reproduces bias. But really, it's not the algorithm that is biased, or the machine. It is the fact that the data it's drawing from has biases, and that data actually comes from human decisions. So, for example, over the years, one of the top examples of this has been Amazon, where it kind of said, well, you know, the way we hire people, it's hard, it's labour intensive. You do interviews, you fly people in. Let's just start having a machine tell us who should be our executives or who we should hire. And if that were a data-neutral environment, that would be kind of good. But if that environment draws on who we've hired in the past, and who we've hired in the past has largely been men, has largely been white, has largely been this, has largely been that, then the machine starts to assume that as a value. It says, "oh, I get it, that's what you want." So, we have to distinguish between what we want and what the machine is giving us. Because the machine, in some respects, gives us back a mirror of who and what we are, whether that's who we're already allowing into the country, or who we're already allowing to get citizenship, or who we are allowing today to get served.

Gillian, is this a challenge that you're seeing– Forget government for a moment. Is this a challenge that you're seeing in the broader society?

Gillian Hadfield: Oh, absolutely. And we'll dive deeply into a lot of these topics in future sessions of this series. But we're absolutely seeing this risk. And just to take it back to this basic point about what's special about machine learning as compared to conventional programming, or even our hand-crafted "here's my spreadsheet: I get these five columns, I fill them in, I add them up, and I decide the score for an applicant." I often say it's kind of unfortunate that we use the language of learning in machine learning, because it leads all of us to think, oh, the machine is like one of us, right? If you took a person and showed them the history of how we decided who to hire in this company, well, we are incredible processors of information, and we come to that decision with tons of ways of incorporating our understanding of the world. You know, if we're well socialized, if we managed to make it out the door on our own after our teen years, we actually pick up a lot. We would not make the mistake of thinking, oh, you hired a bunch of white men in the 80s and 90s, I guess that's what you want to do now. We would know, we've lived in the world, we've seen the world, and a lot of people would have said to us, "no, no, don't assume we want to hire men for this job," even though that wasn't in the past data. But the machine is like somebody who's trying to learn in a dark room, and all you're throwing them is little pieces of information. And because it's just math, you're giving that machine an objective to optimize against with just those little pieces of information.

And I think the key insight here is to recognize that we have lots and lots of work to do to figure out what's the right way to train our machines, and what's the right way to incorporate human oversight. It's like, you know, our little kids. We don't send them out into the world all on their own to make decisions. We're always right there, you know? Oh, no, don't cross the street. That sign is not right, or even if it is the right one, I see a car coming, right? We stand next to our children as they learn. We're going to have to figure out how to stand next to our machines for a while. We eventually hope that they're able to have as rich an understanding of the world as we do. But right now, they're working on limited amounts of information and a specific objective we gave them. They're giving us a lot of benefit from that, but the risk that Peter introduced comes from the fact that, oh, don't think that they're like us, that because they figured that one thing out, they understand everything. They only understand what you asked them to do.

Taki Sarantakis: And I love that analogy, and I'm going to quote you in the future, which is: we have to stand beside our machines as they learn, the same way that we stood beside our children when they learned not to run into traffic. Peter, in the video you had four specific notions that you thought were really applicable to public servants, or the public service, vis-à-vis AI. And I want to talk about each of them in turn. The first that you mentioned was volume. Talk to us a little bit about what you mean by volume.

Peter Loewen: Yeah, the issue here really is that we make a lot of decisions in the public service, and the volume of decisions can be overwhelming. Think about, for example, just the number of people who applied for and got CERB. Millions of people, right? Or the backlog of immigration cases that we have, which is partially about the capacity to absorb people, given annual quotas, but it's also just about the capacity of the people working on those files to consider and evaluate the applications, their capacity to go through them.

So, the first thing, when you think about where we could apply AI, what you want to look for, I think, is cases where there's a high volume of decisions being made by people, and they're being made on repeated criteria, right? The decision you're making is a decision that you're making over and over and over again. And I think that frankly just characterizes a lot of what's going on in government. Now, the really important thing I need to point out is that a lot of the time, when all those decisions are being made, I think people's experience will be that nine out of ten of them are very easy. That algorithms are applied, or rules are applied-

Taki Sarantakis: Yeah, we'll park that. We'll come to that in a moment because–

Peter Loewen: And one in ten is tough, right?

Taki Sarantakis: Yeah, and that's one of your later principles. So, volume is number one. Number two is consistency, which dovetails with your volume. So, give us a word or two on consistency.

Peter Loewen: Yeah, the notion of consistency is just that there's a reason why decisions are made the way they are, and we want there to be consistency in the judgments that are made: two cases with the same fundamental criteria should have the same fundamental outcome. Unfortunately, humans aren't always consistent in how they make decisions.

Taki Sarantakis: Yeah. And so far, this sounds awesome to me, you know, as a taxpayer, as a public servant, as a Canadian. What you're saying is, if I have a set of facts in my tax return and my neighbour has the same set of facts in his or her tax return, we should be treated the same way. I shouldn't have to pay more tax than she does, and she shouldn't have to pay more tax than I do, given that we have the same kind of tax return and the same situation. So that's kind of easy, I would think, right? Volume and consistency. Now we start getting into a little bit of the tricky parts. Your third one is fairness. And I think this is where you started to go into a slightly different notion. So, fairness.

Peter Loewen: Yeah, so with the notion of fairness here, you might think, well, that must just be consistency. But the fairness I'm talking about is a certain kind of procedural fairness. And it's the idea that humans really value being heard by other humans. So, to take the example of a tax return, there are gray areas in our taxes. And when your taxes are disputed with the CRA, the CRA makes provision for you to speak with a human about your case.

Well, why is that provided? There are a couple of reasons, at least. One of them may be that some things put in a tax return aren't fully informative. Whether something should be treated as a capital gain can actually be a point of dispute, for example. What counts as a medical expense can actually be a point of dispute. And sometimes talking it through with a person will help you get more information so that you can properly apply the rule. But the other side of it, especially on the citizen side, is that as humans, I think we have a real moral taste for being heard. And it's fundamentally important in democratic societies that people are heard by other people. So, the example in the talk of a judge, or a justice of the peace, who's not listening while you're disputing a parking ticket is kind of a cute case. But I think it actually captures what it is that we want in government, which is that we're being governed by people. So, at some point in time, we want people to hear our cases, not even because the decision will be different, but because we think it's important that another human is the one making the judgment about us, not a machine. And I think that's a fundamentally important part.

You know, the public's view of government is that government is where you go to get your driver's license, right? It's where you go to get your passport, it's your teachers and the frontline workers. And there's not an appreciation of the sheer, overwhelming volume of what happens in the back office of government, if you will. So, there's a tension there between what people think government is, this forward-facing thing, and what it in fact is: a massive machine processing millions of decisions a day. So, the third challenge, in short, is that when you apply AI, you want to think about ways you can apply it that don't completely remove humans from the loop. Not only because we want to ensure that good decisions are being made, but because we want decisions to be made in a way that is perceived to be procedurally fair, and justifiable.
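One common way of keeping humans in the loop is confidence-based routing: the machine acts only on cases it is very sure about, and everything else goes to an officer who can actually hear the applicant. The sketch below is a hypothetical illustration of that pattern; the case numbers, scores, and threshold are invented, not drawn from any real departmental system:

    # A sketch of human-in-the-loop triage. The threshold and score function are
    # placeholders; "model_score" is assumed to be the model's confidence that
    # the application should be approved.

    REVIEW_THRESHOLD = 0.95

    def route(case_id: str, model_score: float) -> str:
        if model_score >= REVIEW_THRESHOLD:
            return f"{case_id}: auto-approve (logged for later audit)"
        if model_score <= 1 - REVIEW_THRESHOLD:
            return f"{case_id}: flag and send to an officer with reasons"
        return f"{case_id}: route to a human officer for a hearing"

    for case, score in [("A-101", 0.99), ("A-102", 0.60), ("A-103", 0.02)]:
        print(route(case, score))

The easy nine-out-of-ten cases get the speed and consistency benefits, while the hard or contested cases still end up in front of a person.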

Taki Sarantakis: And the last one, the last of your four, is learning. Gillian touched a little bit on learning, and she said it was unfortunate that we put this label of learning on the machine. What are your thoughts on learning vis-à-vis AI in government?

Peter Loewen: Well, I think Gillian is exactly right that humans are able to learn in a way that machines are not. The depth and the breadth of information that we can take in as social creatures is really amazing. The intuitions we can form around things, where we've learned something we don't even know we've learned as we've experienced it. It's a remarkable capacity that humans have, and one that will take machines a very, very long time to emulate, if they ever do.

But the challenge for learning in government is not that we don't have those capacities. I think it's that the people in government don't have time to actually learn from their decisions and to systematically review, in a way that is itself unbiased, whether the decisions they made were the right ones. So, let me give you an example. The person in the CRA who decides to audit a tax return makes that decision, the tax return goes into the audit process, and that person may be removed from the loop from then on; they just continue making that decision. Are they learning about whether the decisions they've made are the right ones? And are they learning not only whether they applied the rules correctly, but whether their intuitions were right, about whether one tax return was in fact fraudulent or not? So, the challenge with learning, to me, goes right back to that first point: the volume is so massive that the capacity to actually reflect on it, to look back on the decisions we made, and to think about what the world would be like had we not made those decisions, which is what learning requires, is something that machines are very good at and humans are not, because it requires time. So, we're very good at that kind of intuitive, visceral level of learning, but we're not good at processing massive amounts of data in a systematic way, the way machines are.
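The systematic look-back Peter describes is, at its simplest, comparing past decisions with what eventually happened. The sketch below uses made-up records to show the shape of that review; in practice the hard part is that outcomes are rarely known for the files that were never selected, which is exactly the counterfactual problem he raises:

    # A sketch of retrospective review: compare past audit-selection decisions
    # with what the audits (or later information) eventually found.
    # All records here are invented for illustration.

    records = [
        {"selected_for_audit": True,  "fraud_found": True},
        {"selected_for_audit": True,  "fraud_found": False},
        {"selected_for_audit": True,  "fraud_found": True},
        {"selected_for_audit": False, "fraud_found": False},  # known from later info
        {"selected_for_audit": False, "fraud_found": True},   # a miss
    ]

    selected = [r for r in records if r["selected_for_audit"]]
    missed = [r for r in records if not r["selected_for_audit"] and r["fraud_found"]]

    hit_rate = sum(r["fraud_found"] for r in selected) / len(selected)
    print(f"hit rate on selected files: {hit_rate:.0%}")   # 67%
    print(f"known frauds not selected: {len(missed)}")     # 1

This is tedious for a person working file by file, but trivial for a machine working across millions of records, which is why machines can close that learning loop where humans rarely have the time.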

Taki Sarantakis: Now, Gillian, in addition to your role at Schwartz Reisman, and in addition to the work you do on AI, you're also a professor of law. And it strikes me that law is very conducive to an AI application, in that you actually have codified rules, you know, pieces of legislation or contracts. And then you have a body of data, which could be precedents from the past, but which could also be the specificities of the case: Bob did sell five widgets, or Mary did buy three widgets. We know in law, over the course of years, that people have done studies. We know that there is a lot of bias in the way that we make decisions today. We know, you know, if you are in front of a judge for a hearing on whether you're going to get parole or a pardon, that judge is a human being. He or she may be hungry, they may be angry, they may be irritated, they may or may not have slept the night before. And then we're hearing studies saying that if you get a judge after lunch, he or she is happier, and you're more likely to get parole just after lunch. If you get a judge just before lunch, their stomach is gurgling, they want to go out. How do we start to make sure that those things now start working together, the judge and the algorithms? Because it seems to me that I don't really want either of them on their own. I might want a judge that's well fed and happy, but I don't really think I want a tired, hungry judge judging me. And I don't really think I want an algorithm judging me. So, what do we do?

Gillian Hadfield: So, I think what we're faced with, and we really are just at the beginning of figuring out how to do this, is creating joint machine-human teams that are engaged in these decision-making processes. So, you know, Peter has emphasized a lot of really important things about volume and about procedural fairness, which is, I think, a really critical insight. There's a lot of discussion around outcome fairness, the consistency piece; I think he called it similarly situated people being treated the same. And of course, that's a core element of our systems of rule of law. And I think that's actually a core piece of what keeps our society stable and moving forward well: our sense that we will be fairly treated and evaluated. But let's come back to that volume point as well. A lot of the work I do, I mean, I work a lot on AI right now, but I've done a lot of work on access to justice in the past. And the volume problem is enormous, right? We can be, we want to be, concerned about the lack of fairness you might get when you actually appear in front of a decision maker. But the fact is, vast numbers of people never actually get in front of a decision maker at all, because we have such a high volume. And that's also a way in which our systems are, you could say, biased.

So, what we're looking for, and this is one of the things we're very focused on at the Schwartz Reisman Institute, is integrating work from the technical side, our computer scientists and machine learning engineers asking how we actually build these systems, working closely with people from political science and law and philosophy to ask: "How do we build those systems so that there's an interaction between them? Who's making the choice, and at what point, about what data is being used? What kinds of procedures are we following to evaluate the decisions that are being reached?" Because we do have this capacity to address our volume problem, and to address the bias problem you're talking about, the hungry judge, the grumpy judge, the person who's just trying to get through the last pile before the end of the day. We might be able to get higher volume and higher rates of consistency, but we will only do that if we have created a system that the public feels is treating them fairly, is treating them with respect and dignity. And I think it's really important not to lose sight of that. And I think that's what we get when we bring our legal scholars, our political scientists, and our philosophers onto the team as well: we get reminded of how important that is. And that's a key thing for us to be designing. So, in thinking about implementing AI in government, I think it's really important to not just say, "oh, well, we'll buy that AI system from the vendor that showed up and showed us these fantastic statistics on how well they could predict outcomes in their dataset." It's: okay, how are we going to design this new process that we need in order to make sure that we're achieving all of our goals by integrating this new–

Taki Sarantakis: And that's kind of the last big point of this series that we're just about to embark on, as we close our first introductory session: it's not so much about the technology. It's about the things around the technology that Gillian just highlighted. So, I want to close our first session in this series by asking each of you to give us a little bit of an analogue for how you personally see AI going forward. Some people have said it will dwarf the industrial revolution, or that it will make the internet look like a little toy. Some people, even Stephen Hawking, I think before he passed away, said AI will be our last invention, and that after it our fundamental relationships with things like each other, biology, or our environment will start to change.

Peter, maybe if I can ask you: how do you see AI in that grand sweep of history? You're a political scientist. You hear people talking about the AI arms race or the geopolitics of artificial intelligence. If you had to make a prediction, how do you see AI as a historical analogue to the things that have come before us?

Peter Loewen: I think it's a great question, Taki, and one where it's an occupational hazard to make forecasts. I'll say two things about it. One is that in its capacity to process data and make decisions, I think AI is going to have pretty impressive immediate and medium-term effects. But I think where the action really is, is in how we interact with machines and how those machines interact with our natural environment. That's not something we talk about too much in the series, but if people have a deep interest in this, thinking about how we're going to, over time, interact with data in a much deeper way, and with machines in a much deeper way than we do now, that is where I think the more profound shifts are going to come from. That could be things like augmentations of our bodies, all the way through to augmentations of how we see the world, whether it's through virtual reality or other things like that. Now, all of this is of course a little bit futuristic, and it always seems a bit science fiction-y to me to talk about, but I think those things are coming faster than we think they are.

I'll just say quickly that for me, the really interesting thing is how we're going to do democracy, and how we're going to do government, in a world in which we are eventually, potentially, changing the way we make decisions. There's a great old Isaac Asimov story. I don't read a lot of science fiction, but it's about an algorithm that seeks out the one human in America who is perfectly representative of all Americans, and he's left to make a decision on behalf of all others about who should be the president of the United States. And it's a profoundly evocative way of thinking about what happens when we take decisions away from humans, and think that we can take humans out of this and make everything just an algorithm. It seems like a great idea until you realize that actually the magic in all of this is that it's all of us being involved in the process of governing ourselves and in making decisions.

So, you know, that's not a very good answer to what you've asked, I think, but I believe this will change us profoundly in ways that we don't really understand. And I think that maintaining the magic of what we've created here, which is self-government, is actually the most profound challenge.

Taki Sarantakis: And Gillian, you get the last word.

Gillian Hadfield: All right. So, I always think about this in the context of the long arc of human development, which I think is mostly a positive one. You know, we've raised our levels of material well-being. We've raised our opportunities for exploring meaning in life. I think we move in that direction. It's a very bumpy road, but we move towards greater levels of fairness and respect for others and equality. Now, artificial intelligence is very different. It's not just a tool. I mean, we use it as a tool, but it's not just that. As an economist would put it, it's a general-purpose technology. It can really transform everything. Because as soon as we say it can make decisions, it can make predictions, well, that's kind of everything we're doing. We're making a prediction when we decide whether to step off the sidewalk right now. So, I agree with Peter: it has the capacity to fundamentally transform just about everything that we do. And I think that, on that long arc, it's a good one. But I also want to think about the way that process of change has sped up so much over the last few hundred years. The difference between five hundred years ago and today is really great. One hundred years ago and today is really pretty great as well. And it's the speed of that transformation that I think makes it really important for everyone to be understanding, thinking about, and learning about what this transformation is bringing. 'Cause it is moving. So yep, the name of the series is right. It's here.

We've already seen the way AI-powered social media platforms have transformed our relationships, our politics, the globe. That's happening very quickly. And I think we need to be figuring out: how do we remain in charge of that process? And how do we become much more agile in our responses to it? So yes, I'm not a science fiction reader myself either, but I spend a lot of time now with people who hold lots of views of where we're headed. And I think the one thing that is certain is: it is transformative. Yeah.

Taki Sarantakis: Gillian Hadfield, Peter Loewen. Thank you so much for bringing your considerable talents, energies and insights from the University of Toronto into this incredibly important area, Artificial Intelligence. It is indeed here, and we so look forward to sharing time with you and your colleagues over the course of this series to better understand what it means to us in the public service of Canada that AI is here. Thank you so much and take care, and we'll talk soon. Be well.

Gillian Hadfield: Thanks, Taki.

Peter Loewen: Thanks, Taki.

[The video chat fades to the CSPS logo.]

[The Government of Canada logo appears and fades to black.]
