
Artificial Intelligence is Here Series: AI Lessons and Predictions for Government (DDN2-V20)

Description

This event recording revisits crucial lessons learned about artificial intelligence as it relates to citizen consent, bias, economic impacts and regulatory frameworks, and offers predictions about what impact AI is likely to have on the future of government.

Duration: 01:17:40
Published: September 16, 2022
Type: Video

Event: Artificial Intelligence Is Here Series: AI Lessons and Predictions for Government


Transcript

Transcript: Artificial Intelligence is Here Series: AI Lessons and Predictions for Government

[The CSPS logo appears on screen alongside text that reads "Webcast".]

[John Medcof appears on his webcam.]

John Medcof: Hello, everyone. Welcome to the Canada School of Public Service. My name is John Medcof. I'm the lead faculty here at the school and I'm going to be your moderator for today's event, which is called Artificial Intelligence Is Here Series: AI Lessons and Predictions for Government.

Before we begin, it's important to me personally to acknowledge that since I'm broadcasting from Ottawa today, I'm in the traditional unceded territory of the Anishinaabe people and while participating in this virtual event, I would invite all of us to recognize that we are located in different places and that therefore we work on different traditional Indigenous territory. So, I'd ask you to take a moment and pause to recognize and acknowledge the territory you are occupying. Thank you.

Today's event is the eighth and final installment of our Artificial Intelligence Is Here series which the Canada School offers in partnership with the Schwartz Reisman Institute for Technology and Society, which is a research and solutions hub based at the University of Toronto dedicated to making sure technologies like A.I. are safe, responsible, and harnessed for good.

And in today's event, we're going to retrace some of the key takeaways that were highlighted in previous events as well as hear some predictions from our expert speakers about what the future of A.I. in government looks like and why all of us in the public service have more to look forward to than to fear.

So, we're going to begin by watching a brief lecture featuring Peter Loewen who's the Director of the University of Toronto Munk School of Global Affairs and Public Policy as well as a professor of political science at the Munk School, and then following that, I'll be joined by Peter and Gillian Hadfield, who's the director of the Schwartz Reisman Institute for Technology and Society, and we're going to have a live panel discussion to explore, in more detail, some of the themes that are addressed during the lecture.

Finally, we're going to invite you to ask questions to our speakers throughout the event. You can do that by scanning the QR code from any mobile device or by entering the URL seen on the screen which is www.wooclap.com with the access code AISERIES.

So, you've got that information there. I'll also say before we play the lecture, you can note that simultaneous interpretation is available for our participants joining us today on the webcast. So, to access that, you can follow the instructions provided in the reminder e-mail which includes a conference number that will allow you to listen to the event in the language of your choice.

So, with that, let's start by playing the lecture.

[A graphic appears with the text "Artificial Intelligence Is Here series". Text appears that reads "The future of AI in government". Peter Loewen is standing, facing the camera.]

Peter Loewen Lecture:

Thank you very much for joining me for this talk. My name is Peter Loewen. I'm a professor of political science at the University of Toronto and the Director of the Munk School of Global Affairs and Public Policy. I'm also the Associate Director at the Schwartz Reisman Institute for Technology and Society.

[Text appears that reads "What is the future of AI in government?"]

Along with our director, Gillian Hadfield, it has been my pleasure to organize this series on A.I. in government, Artificial Intelligence Is Here.

In this final lecture, I want to do two things. First, I want to briefly review what we've learned in this series and perhaps, in doing that, to encourage you to watch some of the earlier lectures that you might have missed.

And second, I want to say some important things about the future of A.I. in government. When I do that, I'll share with you three important insights about the impact of A.I. on your work, on what will be valued in public service, and on the future of democracies versus autocracies.

But first, let's start with what we've learned.

We've explored contemporary approaches to artificial intelligence and how they represent a form of learning by computers or machines that does not require constant human direction.

A.I. is a new technology unlike previous forms of computing. In a traditional computer program, every line of code is written by humans and tested by human experts. We know who's responsible if something goes wrong but with modern A.I. techniques such as machine-learning, the computer writes its own rules. As a result, it's not easy to see or understand why the A.I. is doing what it is doing.

We don't have the same control over it. It's therefore much more challenging to figure out how to hold humans responsible for its outcomes.
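To make that contrast concrete, here is a minimal sketch in Python, with synthetic data and invented feature names: a hand-written rule whose every line a person authored and can review, next to a decision tree whose thresholds are learned from data rather than written by anyone.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def handwritten_rule(income, dependents):
    # Every line of this rule was written, and can be reviewed, by a person.
    return income < 30000 or dependents >= 3

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                  # two made-up input features
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)  # a made-up historical outcome

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# These rules were induced from the data, not authored: we can print them,
# but no person decided where each threshold falls.
print(export_text(tree, feature_names=["feature_0", "feature_1"]))
print(handwritten_rule(income=25000, dependents=1))
```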

We need new regulatory frameworks and practices both to enable and regulate the use of this new technology. This is especially challenging because the technology is moving so quickly, much faster than we can write new legislation.

If we are going to be successful in regulating A.I., we need to think differently about how we craft and implement policy in order to develop innovative approaches that match the speed and complexity of the task at hand.

Next, we learned that government and public forms of decision-making impose special obligations on the use of A.I. Governments have to make a large number of decisions in a way that is consistent with policy goals, is procedurally fair, and provides opportunities for learning and feedback. These challenges are faced by every department, they're collective and are organizational, and solving them within the public service depends not on ignoring A.I. but on tackling these problems head on.

Government needs the consent of citizens for everything it does and we learned that the use of algorithms in government poses particular challenges for citizen consent, and it does so for four key reasons. First, citizens support many different reasons for the use of algorithms in government but they don't support a single coherent set of justifications yet. In fact, support for the use of algorithms by government often varies along partisan and ideological lines, and those need to be bridged if we're to experience wider usage of A.I.

Second, citizens have a status quo bias that leads them to evaluate algorithmic innovations negatively. Citizens, almost as a matter of habit, prefer the way decisions are made now to how they might be made better in the future.

Third, citizen trust in algorithms develops independently of how well those algorithms perform. Citizens are harsh judges of algorithms and they're unlikely to extend them the deep trust that they do to human decisionmakers.

And finally, opposition to algorithmic government is higher among those who fear the broader effects of automation and A.I. In other words, it's tied up in broader debates about what the future of technology in society will be.

We next dug into the risks and opportunities of using A.I. in different circumstances, whether it's assisting or replacing human decisionmakers and whether it's being used internally in government or in a public facing way.

We explored regulatory responses to these uses and how the predictive power of A.I. can be used to support decision-making processes of public servants, enabling them to focus on decisions that require greater judgment, nuance, and even empathy.

We then zoomed out, exploring the broader uses and implications of A.I. Avi Goldfarb taught us about how A.I. is broadly transforming the economy by giving us faster, cheaper, more accurate predictions.

Importantly, this increases the value of other things like good human judgment in areas where machines can not perform as well. Gillian Hadfield provided us with a deeper legal understanding of how bias, fairness, and transparency matter for A.I. in practice and in the law. She gave us a framework for justifiable A.I. which is something different and better than the explainable A.I. that we hear so much about.

And finally, we got a global view of A.I., first through Phil Dawson's lecture about the global effort to regulate the use of A.I. and secondly through Janice Stein's masterclass on the promise and perils of A.I. in foreign policy.

As Dawson notes, dozens of countries have put forward efforts to regulate A.I. and they've used remarkably consistent approaches. Are these working? They will not, unless there's widespread international cooperation to make these regulations testable and coherent, and the private sector is waiting on governments to regulate A.I. because they want a common regulatory framework before they invest in all of the gains that can be unlocked by wider use of A.I.

As Professor Stein showed us, using the case of the United States exit from Afghanistan in 2021, there is major work to be done in effectively employing artificial intelligence in our foreign policy. Nonetheless, it forms a major part of many countries' emerging foreign policy strategies.

A.I. is imperfect and complicated but it's surely here now. What then is the future of artificial intelligence in government? I know this question is on the minds of many of you who have listened to and participated in earlier sessions. You've asked questions like, "What will the use of A.I. in government mean for my job? How will it change it? Will I still have a place? Is there a role for anyone but data scientists and programmers? Will government actually be better if we use algorithms and machine learning?" and "How should government be thinking about how to use and regulate A.I. in a way that is consistent with democratic values and the values of the public service?"

Of course, the answers to these questions are hard to come by because the future is inherently uncertain and in some cases, even unknowable but having thought about this for some time, I thought I might share with you three sets of insights and maybe, dare I say, predictions about the future of artificial intelligence and government.

The first insight has to do with what we might call the distributional fact of artificial intelligence. That is, the use of artificial intelligence is likely to be spread across many tasks and jobs rather than concentrated entirely in a few, but we'll talk about what that means for the public service.

The second insight is based upon what we might call the values premium or the increased importance of values and principles in public decision-making.

And finally, the third insight is what we might call the democratic advantage. That is, the use of artificial intelligence in algorithmic government will be different and better in democracies than it will be in autocracies.

I'd like to go through each of these insights in turn and then, at the end, share one final insight about the unique nature of the public service and why I think this sector is perhaps more ready than any other organization, any other sector, to effectively and ethically employ artificial intelligence.

Let me start first though with you, the public servant. What is the future of a public servant at a time when more and more jobs can be automated? I appreciate the fear you might feel here and you're not alone. In fact, when I surveyed thousands of Canadians in 2019 and asked them about whether they expected to lose their job to a computer or a machine, 10% said that they expected to do so in the next five years, and fully a quarter expected to have their job replaced by a computer or a machine within the decade.

These fears were not limited, I must say, to those working in manufacturing or manual labour positions but are these fears well founded? Well, the answer depends in good part on how we think A.I.-driven automation will be rolled out and incorporated into organizations. Here, I think the distributional fact of technology is important, and what is that fact? It's that for nearly all jobs, some parts can be automated but in nearly no jobs can all parts be automated. Let me tell you what I mean.

When we surveyed Canadians in 2019 about their views and fears over automation, we also asked them detailed questions about their jobs, not only what their jobs were but what tasks their work involved. Does your work involve following instructions closely? Does it involve talking to other people? Does it involve navigation, manual labour, communicating through writing, etc.?

And once we figure out what tasks make up a person's job, we can then get a better grasp of what share of the things they do could be automated using what technologists call currently demonstrable technology, and what do we find?

Well, for the average Canadian respondent, 65% of the tasks they perform could currently be performed by machines, at a median performance level of about 50%. In other words, for about two thirds of the things we do in our jobs, a computer could replace that task with equal performance only about half the time. It hardly seems like a good bargain if you're buying robots.

On the other hand, 93% of our respondents have at least one task in their current occupation for which current technology performs in the top quartile of human performance, or in other words, about 90% of people have some task which a computer can do better than 75% of humans. So, nearly everyone then is at least partially exposed to automation and A.I.
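A small sketch of how those two exposure measures can be computed; the respondents and task-level machine-performance percentiles below are invented purely to illustrate the arithmetic, not drawn from the 2019 survey.

```python
# Hypothetical data: respondent -> machine performance percentile (relative to
# humans) for each task that makes up their job.
respondents = {
    "r1": [55, 80, 30, 10],
    "r2": [62, 48, 90],
    "r3": [20, 35, 40, 45, 15],
}

def share_of_tasks_automatable(percentiles, threshold=50):
    """Fraction of tasks a machine performs at least at the median human level."""
    return sum(p >= threshold for p in percentiles) / len(percentiles)

def has_top_quartile_task(percentiles):
    """True if at least one task is done by machines better than 75% of humans."""
    return any(p >= 75 for p in percentiles)

shares = {r: share_of_tasks_automatable(p) for r, p in respondents.items()}
exposed = sum(has_top_quartile_task(p) for p in respondents.values()) / len(respondents)

print(shares)   # per-respondent share of tasks a machine could match
print(exposed)  # share of respondents with at least one top-quartile task
```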

So, this is the distributional fact: nearly all of us could replace some of the things we do with automation but important parts would inevitably remain. Now, what if those tasks we could replace are the ones we do not enjoy or the ones that cause us stress, or perhaps most importantly, the ones we really do not do all that well, while we keep the functions that are most important to us and, even more crucially, are important to the larger purposes of our work?

Suddenly, the implementation of A.I. appears as less of a threat and more as a welcome innovation that can generate greater efficiency, like the shift from typewriters to computer word processing, paper and pen-based ledgers to spreadsheets, or card catalogues to database software.

The second probable future of A.I., maybe especially in the public service, is what we might call the values premium. Artificial intelligence is, in one very well regarded telling, a form of prediction technology. It is an efficient and potentially ever-improving system for predicting the probable outcome from an action using information that we have about the past. This is a remarkably promising technology then, if our goal is to determine who is most likely to succeed as a selected immigrant, to estimate whether a tax return is in fact covering up a fraud or is honest, or to determine if an EI applicant might be faster to return to work if offered some customized micro-credential.
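As a rough sketch of what such a prediction machine looks like in practice, the following fragment fits a simple classifier on synthetic "past cases" and scores new ones; the features, outcome, and data are hypothetical stand-ins for examples like the EI one above, not any real program.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))  # e.g., past earnings, tenure, training hours (invented)
# Invented past outcome: whether the person returned to work quickly.
y = (X @ np.array([0.8, -0.5, 1.2]) + rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The model outputs a probability, not a decision: the human decisionmaker
# still chooses what to do with a 0.73 versus a 0.41.
print(model.predict_proba(X_test[:5])[:, 1])
print("held-out accuracy:", model.score(X_test, y_test))
```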

What these prediction machines are not good at determining or not good at understanding is how the humans who are involved in and observe these decisions will understand and interpret them. Will they find these decisions justifiable? Will they accept the reasons given for selecting one outcome over another? Will they trust the machine of the future to make more decisions on their behalf?

Why does this matter? It matters precisely because in these complex social systems we live in, that's a fancy way of saying society, the values and reasons that underwrite our decisions and actions matter as much as the decisions and the actions themselves, and this is more important, I might venture, in decisions made in and by governments than those made in the private sector.

How does this matter for the future of A.I.? It matters because the main protectors of those values and principles will not be machines and will almost certainly not be those writing directives from the top of an organizational pyramid. It will instead be those who put these decisions into action, what others have sometimes called street-level bureaucrats. It's here where there will be a premium on values like trust, transparency, and decency. It's this final principle of decency that I'd like to spend a minute on.

In his seminal work, The Decent Society, the philosopher Avishai Margalit asks us to consider the following scenario. Suppose there's a truck delivering food to people in a village during a famine. From the back of the truck, each villager is handed a loaf of bread, enough to fill their stomach at least for the day. Isn't this a generous and noble act, and what does it say about the people delivering the bread? Aren't they generous and noble people?

But now, consider a slight change. Suppose those delivering the bread, instead of handing it out, throw it on the ground so the villagers have to scramble for it in the dust. They all get a loaf in the end and their hunger is sated to the same degree. Why is this different? It's different because it isn't decent. It's different because it involves a humiliation.

The decent society is one in which people are not humiliated. I don't want to overstate the case here but I do want to make it strongly. Government is too often an impersonal organization, one for which many citizens' experiences are of indifference if not contempt. There's a real risk that this experience of indifference will become even more common as more decisions and allocations are left to the seeming caprices of an algorithm.

The important job of public servants in this context is to put a great premium on the values of trust, transparency, and decency, on humanity, you could say, to make sure that A.I. is enhancing the human element of public service rather than completely draining it from the system.

My third point, which is not my own but that of my colleague Henry Farrell, is that the more governments employ algorithmic decision making and artificial intelligence, the more stark the differences between democracies and autocracies will become. The basic idea is this, we're wrong to think that artificial intelligence and machine-learning deployed at scale will make countries like China some sort of technocratic leviathan, ready to outperform and eventually eclipse democracies.

We're wrong, in short, to think that employing A.I. and other technologies will make autocracy stronger. In fact, there's good reason to believe that the implementation of A.I. will amplify their weaknesses.

The core problem of autocracies has always been an inefficient feedback mechanism in which the public can express its dissatisfaction to the state. In the place of the feedback provided by democratic engagement, autocratic states seek control of citizens. Rather than receiving the real and organic expression of happiness or discontent among citizens, the autocratic state imposes an order and assumes that as long as things are working, even minimally, then everyone is happy.

But in this system, the inherent shortcomings of A.I., the multiple opportunities for biases to enter into the process, and a lack of value alignment in those processes will amplify these blind spots of the state, leading to more discrimination of some groups and more repression, and perhaps less outside dissent, breaking down further the one static feedback mechanism these states have.

Democracies are far from perfect but they do have a built in advantage. Democracies invite self-criticism. They create incentives for groups who are marginalized or disadvantaged to mobilize against that marginalization or disadvantage, to point to solutions, and to make political and legal claims to correct those imbalances. This makes decision-making cumbersome, certainly, but it also makes it self-correcting. This feature is what will give democracies the advantage as we try to work out the best ways to employ A.I. and other technologies for social good in the future.

Importantly, it's also the right reason for us to advocate for maximum transparency and explainability, for justifiability in the public use of A.I., precisely so it can be more easily critiqued and corrected.

Having said all of this about the future of A.I., let me say something in particular about the public service. I know government is sometimes viewed as a laggard, behind on the latest trends, management practices, fads, and innovations. This is sometimes a fair criticism but other times it's quite off-base.

But on A.I., I believe the following is true. Democratic public services are perhaps more culturally ready for the adoption of A.I. than any other organization because public services have been set up like human-assisted artificial intelligence systems for a very long period of time. Let me explain.

The work of many public servants is to be part of a prediction machine, to be presented with a problem, to formulate and test multiple solutions using the data at hand, to make recommendations which move through a series of considerations or algorithms, and to eventually reach a human who makes a choice over a small number of options.

The human cannot see all of the deliberations that have led to the decision but they can know the process and they can know the values that guided that process, and they have an obligation to be able to defend and explain not only the decision but how it was arrived at. All of these elements map onto a well-designed system of human-assisted and assisting artificial intelligence.

If this is true, then A.I. can find productive and ethical uses in government as much as in the private sector and maybe even especially in democratic governments.

Thank you very much for your time and your attention.

[John Medcof and Peter Loewen appear in separate webcam windows. Text appears stating "Artificial Intelligence Lessons and Predictions for Government" and "Artificial Intelligence Series is around the corner".]

Panel Discussion

John Medcof: Peter, Gillian, great to have you back. Welcome to the final installment of our A.I. Is Here series, so glad to have you both with us here today to reflect on everything we've learned through these events and to close with this look to the future that Peter was talking about.

And let me start by maybe extending my thanks again to the two of you, to the Schwartz Reisman Institute for Technology and Society, and to the University of Toronto for partnering with us here at the school and making these events happen. We've certainly had some fascinating discussions over the past months.

And maybe, before we get into this part of our event today, I'll briefly remind participants that you can submit your questions or comments for Peter and Gillian using Wooclap, and I think you had the details of that earlier, but let's jump into things.

[Gillian Hadfield appears in a webcam window.]

Peter, your lecture today, I think, set the stage really nicely to integrate our learning to date and cast our sights forward to consider what all this means for our jobs in the public service, for public policymaking more broadly, and then ultimately for our democracies.

So, let me start with a question that builds on one of those points, you know. One of the key recurring themes I think we've heard about in this series has been the tremendous challenges and opportunities of implementing broad-based A.I. in a government context. And a really important point you made today in your lecture, and I know you've talked about this in previous sessions as well, Gillian, is the ongoing need to ensure human inputs and agency at different stages of the application of A.I. in a public policy context, grounded in this values premium, I think, that you talked about today, Peter, which is a more optimistic vision than what I think we sometimes read about A.I. in the workforce and in the media.

So, you know, to get to my question, I for one am feeling very energized and encouraged by the possibility of a future where public servants still play this fundamental role of working with distributed A.I., bringing that human trust, transparency, and decency you talked about, and I think a lot of our viewers would probably share that feeling but if I'm also thinking about how I do my job today and that I might need to expand and enhance my current skill set in order to play this evolving role more effectively, you know, how do I do that?

There's no danger of my becoming a coder overnight but, maybe, what are the broader skills or knowledge or mindset that each of you would say public policy practitioners will need to navigate our new A.I.-enhanced workplaces today and in the future? So, long question but why don't I start with you, Peter, since you introduced the idea in your lecture, and then I'll go to you next, Gillian.

Peter Loewen: Thanks very, very much, John, for the chance. It's a great question and I'm really happy to have the chance to answer it, and just- I think I'll extend my thanks, I'll do it on behalf of Gillian as well, to say that it's wonderful to be doing this series with CSPS, not least because we got to get a bunch of our really smart colleagues to talk about this stuff which is a real pleasure for us to watch and learn from them as well.

So, I think there's an imperative here around at least thinking about how government can use A.I. and there's a couple of reasons for that imperative, you know. One is that, actually, it's going to be used throughout the rest of the economy as time goes on and I think it's actually important that government look like the rest of the world. Like, that's not an end goal but it is an important thing that government should be doing.

And in using A.I. enhancements to making decisions, for example, you know, government's going to be better at thinking about and being familiar with and eventually learning how to regulate other uses of it in the economy. So, it's good for that reason.

I think it can actually help government but I also think that, you know, people are looking for government to get better at what it does constantly. I think there really is an imperative here and it's almost a democratic imperative on public services to get much better at what they're doing as rapidly as possible because part of that kind of- there's a real grand contest of ideas going on, actually, about what the best systems in the world are and public services have to be at the top of their game to be in a position to defend themselves and defend this broader system of kind of democratic decision-making in large public services that we've set up in this country and in other ones.

But let me just talk about your specific question a little bit because I understand that there are two challenges here, maybe three. It's still fuzzy when we talk about this stuff in some ways, right, and it's a little bit scary for some people to think about having to learn new skills and maybe get themselves replaced by something, I don't think they will but- which is the first part of the lecture, and then it's also just kind of hard to know and imagine how it could be done.

But let me give an example and then talk about why I think actually that this is the kind of the last point of the lecture, why public services are ready to do this. So, suppose that your job is to determine- you've got an HR function in the government and you've got the nice job of figuring out who you should approach for additional training for promotion. So, you're not on the- you're not doing the firing stuff today or the disciplining stuff. You're doing the positive stuff where you're tapping people on the shoulder and saying, we see bigger things for you.

Now, there's a lot of different skills that people bring to human resource functions. People who are very good human resource professionals have a real talent for spotting talent and for spotting potential and seeing things in people that other people may not see, but they could be assisted at this by A.I., and we do lots of- government will do lots of testing and lots of probing of people, lots of questionnaires and interviews, but we leave all sorts of information on the table when we do that.

You're interviewing someone for an hour, you're talking all that time, you might miss something they say. You might not pick up on something they say because they used an idiom you weren't familiar with or, excuse me, you were in a moment of distraction or something like that. Well, there are tools that we can use from transcripts to learn about people and learn about the way they talk about things and learn from them based on the way they talk about things. We look at people's job performance and the kind of evaluations that their employers give and we can dig into those with more complex data tools.

I can come up with all sorts of examples here but the point is that all the tools that HR professionals might be using right now to assess potential talent in the future, they could assess that differently and even better or at least in an enhancing way with various A.I. and machine-learning applications.

So, you're the HR professional now and what's your job now? Well, your job is to use your own judgment but also to allow that judgment to be augmented by some other tools that someone might bring into your unit to teach you how to use, a talent scorer or something, you know. It's going to have some name that's going to be two words put together with two capitals, inevitably, but the point is that there's going to be some tool that you're going to have to learn how to use.
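For illustration only, here is a hedged sketch of what such a hypothetical "TalentScorer" might amount to under the hood: text features from interview transcripts plus a model trained on past outcomes. The transcripts and labels are toy data, and any real HR tool would need validation, privacy review, and bias review far beyond this.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "led the project, coordinated three teams, resolved the budget dispute",
    "completed assigned tasks on schedule",
    "mentored new analysts and proposed a new intake process",
    "attended meetings and filed reports",
]
promoted_within_two_years = [1, 0, 1, 0]  # invented labels

scorer = make_pipeline(TfidfVectorizer(), LogisticRegression())
scorer.fit(transcripts, promoted_within_two_years)

# The output is advisory: a probability for the HR professional to weigh,
# alongside their own judgment, not a verdict.
print(scorer.predict_proba(["reorganized the records workflow and trained staff"])[:, 1])
```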

But what is really important here is that public servants and the public service, I think, has inherent in it- and I'd love to hear Gillian's views on this and her expansion on it because she has very interesting ideas around not only what we might call explainability but justifiability. So, I'll set that table for you, Gillian, and leave it to you but inherent in public service is this practice of and this need at an almost ethical level to be able to explain why you've made the decisions that you've made, right?

Because there's such a hierarchy that you're operating in, right, and there's so much input at all levels and then there's all these little decisions made at the top that everybody has to be able to say that they did stuff by the book, so to speak, right? Well, that's actually a virtue of when you're working with A.I. systems because you have to be able to explain how you use the system, where the system contributed, what judgment you made in using the system.

So, all that's a long way of saying that, actually, I think just the everyday practices are built into the ethos of being a public servant. Particularly, being a manager or being a person who's making analytical decisions then making recommendations will make it easier for you to have your work complemented by an A.I., maybe, than for a person who is not operating in such a values-driven environment, in a commercial environment, for example, right, one environment that has less structure.

So, that's a really long wind-up, John, to your question, to say what's exciting about this in some ways is that I think that there is- not only is there need for a lot of human-enhancing A.I. within government but there's actually already sort of the ethos and the framework there for making decisions and for doing things that's kind of ready-made for A.I. It's not competitive to what people are doing now. It's much more complementary to the functions that they're already undertaking in their work.

John Medcof: Great. Thanks very much, Peter, and another really concrete example that I think reflects a situation in which many public servants can see themselves working with A.I. tools. I want to give Gillian a chance to respond as well though. Gillian, anything you would suggest in terms of broader skills or knowledge or mindset that we need to bring to this work on top of Peter's very encouraging words that 'we already know how to do a lot of this'.

Gillian Hadfield: Right, no, it's great. I think I'd emphasize two things. So, one is if there's something new to learn here, and I do take Peter's point, there's a lot that is consistent with sort of existing mindsets in the public service but there's something new here. I think it's gaining some mastery competence about the way A.I. works, and this is why we started the lecture series off with a little bit of what's under the hood, nuts and bolts, to say you don't need to be a programmer, you don't need to be a mathematician, an engineer, you know, that people with skillsets like mine, I'm not a programmer, can understand the way these systems work and can understand, conceptually, how they are doing what they do and therefore have confidence to understand their role, the human's role, in overseeing this.

Because I think one of the dangers we face is that, you know, people will fall down in front of a machine, saying, 'Oh, the machine has said this', and I think that's- the mindset we have to be careful of is that one. We want the humans who are working with A.I. to say, okay, the A.I. is working for me and I can't do what the A.I. is doing but I can understand what's producing the results, I know what the risks are, and I know I have the confidence to be able to evaluate the results, and say, no, I don't think that looked right, and- you know, or that there's something kind of strange here.

So, I think developing that level of competence, that mastery is really quite important and, again, a reason why I think it's really quite critical for everybody to kind of learn a little bit about how it works, and I think we all can.

And the second mindset that Peter has really emphasized, and I love the point about decency in the video and the loaves of bread because I think that is so important, the relational part of, you know, I'm one- two humans together saying, okay, we're- you know, we're in this activity together, we're running this society together, I've got my job, you've got your job, and we treat each other with respect, understanding, and decency, and I like that very much.

And I think that's a very important thing as we look to a world where A.I., you know, will have the capacity for saying, oh, you know, here's how you should evaluate that file, here's how you should evaluate that claim, here's how you should respond to that policy proposal.

It's- again, it's bringing that humanness back in and I think it's just really quite important to focus on that and to say that's where- that's the human part, and that, again, as Peter has emphasized, the capacity to augment what humans can do, to say, okay, you know, this person has walked into my office, you know, it used to be that I had to take a couple of hours, a couple of days to understand their situation, the background, you know, can I have that kind of process in a very competent way quickly by A.I. so that I can now devote my time and my attention to it in a much more efficient way, but not in a sense of okay, yeah, sorry, the machine said, no, you're out of here.

So, it's about having competence over that and then the capacity to really bring that human element in. I think those are the key things that I see here.

John Medcof: Great, yeah, thanks for that and love how you brought it back to the human-centered piece because that's something that, as Peter has suggested, you know, we already bring this to the work that we do and we are maybe using different tools but the mindset remains the same. The value proposition or the value add remains the same.

Let me maybe ask another question then. In the series, we've covered such a broad range of themes and, Peter, you gave a really nice summary, I think, in your lecture today, and we've heard from some really amazing global A.I. experts from different sectors, the public sector, the private sector, the academic sector, and I think today's session really ties together nicely some of these themes and casts our view forward to consider why values and principles in public service decision-making are going to be even more important as A.I. usage increases.

And, you know, one of the things maybe I'd like to ask you is, in Canada, maybe we aren't currently considered to be one of the two or three, let's say, global A.I. superpowers, you know. That said, coming out of this series, I think many of us are seized by the challenges we face but also feeling quite energized by the discussions that we've had and the opportunities that you've presented to us.

And so, let me ask you, the Schwartz Reisman Institute is providing important leadership and is collating A.I. expertise here in Canada, and let me ask, where do you see the space for us as a nation to have global influence or impact in the adoption and implementation of A.I.?

And maybe, Gillian, I'll start with you this time.

Gillian Hadfield: Sure. Thanks, John. That's a great question because I do think that there is opportunity for- you know, the field is still wide open. You pointed to sort of A.I. superpowers that are building lots of the technology but the real challenge that we're seeing both in governments and in industry is the integration of those technologies into our human systems. We see it in health care, we see it in industry, and I think we see it in government. There's a lot of that part of the puzzle that hasn't been solved yet.

So, your- you know, your first question about the mindset and skillsets that are needed in government is really something about- you know, it's going to be true in health care, it's going to be true in industry, it's going to be true in lots and lots of sectors.

I think there's, you know- so, let's just imagine what would the world look like with truly excellent use of A.I. in the public sector, right? It's a wonderful vision of, you know, you're dealing with somebody in the public service and they know lots, they've got lots of expertise at their fingertips, but they're able to devote even more of their time and energies to the respectful, careful interaction with the public, policy choices, communities, and so on.

So, I do think that there's that capacity for building that and we haven't done it yet. There's lots of opportunity to be also in a leadership position with respect to the design of the right kind of regulatory structures around A.I. that's being used in the private sector which is obviously a really important part of government.

So, I think that, you know, again, having the, you know, public service that's smart about A.I., that's enthusiastic about it, that is creative, and is ready to take chances and to design the new systems that we need, I think there's real opportunity for that.

I mean, Canada does have a great reputation globally for its public service, for, you know, the devotion to that public- to the public sector. So, I think there's room to integrate those pieces together.

John Medcof: Great. Thank you for that, and I think that's a really nice frame of how we can use some of the inherent, maybe systemic institutional and social advantages we have in Canada and lever those to be leaders with our artificial intelligence.

But Peter, I'm going to go to you to see if there's anything you'd like to add to that.

Peter Loewen: Yeah, I mean, I think everything Gillian has said- and yet I'm just going to take the role of being Eeyore here for a second. There clearly is something wrong in the federal government in its capacity to- and I think this really matters in this case, but there really- there clearly is something wrong in the federal government's capacity to procure large technology contracts. There's just- and these have to do with data and Phoenix is just the beginning of it, you know.

I mean, we've got- you know, there's- Canada will become kind of a living museum for coding languages based on the legacy systems that we continue to run our systems on, and I don't say it to score a cheap point, right, but there's something in- we have a very good public service. It's one that doesn't make big mistakes very often. It's one that also is not- we've- the United States lost something on the order of $300 billion in payments, in income support payments, through the first year of COVID. We didn't have an equivalent in Canada as far as the AG is concerned up to this point, right?

So, like, we do some things well and there's a natural conservatism there and there's a natural deference to our public service. I think that has something to do with our inability to do big wide, large-scale, rapid kind of data-based innovation at the federal level and certainly in the interaction between federal and provincial governments.

I think it's actually- for what it's worth, I think it's reflected in our banks in the way they think about data and the hoarding of data rather than the sharing of it, right? They don't actually have all that much data but they don't want to share what they've got, you know.

So, there's something cultural in Canada around some mix of privacy and institutional conservatism that just, for whatever reason, has led to a track record of us not being very great at dealing with large degrees of personalized data, and Gillian can talk about this in the health context in Ontario where she's really broken some dams there very usefully.

I just say that because, at the end, data is one of the- you know, data is one of the things you've got to feed a machine if you want to learn with A.I., right?

So, I think that there's some combination of very aggressive, by aggressive, I mean just ambitious, wide-scale experimentation that we need to do. We need to show people that those things work, and at the federal level, we really need to be ready to do two things.

One is to start thinking differently about what we mean by privacy and to not be, I think, as bound as we are by extremely conservative legislative conceptions of what people actually care about, let alone what privacy is, and also allow for some failure, not at the scale of Phoenix but failure in the administration of things, because- and I can always- everyone just says Estonia when this stuff comes up, when they bring up an example of a large and digital society but there's lots of other places in the world where it's much easier to pour out your financial data and it's much easier to file your taxes automatically. There are all sorts of examples of day-to-day things where it's easier to undertake stuff in digital interface with the government.

There's not many places in the world that have more trust in their government. There are not many places in the world that have, you know, a cleaner public service. So, we have those things right but I think we do have to confront that there is something on the technology side where we're just- there is something off there, and I don't mean that in any corrupt sense. I just mean that there's something we need to fix, I think, if we're going to take advantage of all of this and allow for this, like, massive private sector A.I. boom that we're experiencing in Canada to really animate our public services.

John Medcof: Yeah, I think that's a really key point. I mean, we have an opportunity to lead but if we're going to seize that opportunity, we have some inherent challenges in the way we're working now that we are going to need to really tackle in a very serious way, you know. Privacy is not a small thing to tackle but one that is, you know, really important to the path forward. So, thank you for balancing out and reminding us that, yes, there's an opportunity but we need to get some things right if we're really going to seize it.

Peter Loewen: Yes.

John Medcof: We're starting to have some comments and questions come in from our viewers today. So, maybe I'll go to them, and there's actually one comment and one question that are on the same theme. It's one we've talked about a little bit already but that remains, I think, one of the key challenges that many of us are thinking about.

And I'll share the comment first, it's that, "It's interesting that the general position being held is that humans are needed in the loop to maintain trust. However, the other position is that humans are the problem, introducing bias in the hiring process for example. If A.I. were to algorithmically handle hiring then this has the potential to remove human decisions around hiring and remove things like nepotism."

And the related question that another viewer has sent in is, "Can you discuss how A.I. can add more fairness than presently exists or if anyone has studied this benefit of using A.I. and removing the biased human from the equation?" So, challenging question, I know one we talked about in the opening event in this series.

So, Peter, let me go to you for some quick thoughts on this.

Peter Loewen: Yeah, it's a great question because it's a framing issue in a lot of respects, isn't it? You know, that- and Gillian and I have probably heard this talk given from each of two different directions a thousand times, right? There is genuine concern that when biased inputs are put into A.I., biased outputs result, right?

But it is also the case that if you can somehow address those, you can come up with decisions with more rapidity and more consistency, and with less bias in, you know, formal statistical but also informal terms than you would if humans were making those decisions.

And what's remarkable about an A.I. is that it can make decisions very, very, very quickly, right?

I'll give an example that Gillian's heard me use because it animates an important point here which is that, you know, there are millions of veterans in the United States because it's a country with a large military and it's engaged in a lot of war, and those veterans often have a very complex welter of handicaps from the things that they've undertaken, some of them psychological, some of them physical.

And they're actually not easy for a human to assess, you know. I tell you I've got pain in my hip. What does that mean, you know? Like, it could be that it actually is so overbearing that it keeps you from working or it could just be that it nags me when I walk on a rainy day.

So, benefits determinations are made by humans in response to, you know, some form- some share of questionnaires and interviews with people, right, but often, veterans feel as though the benefits they're getting are not what they're owed and in their process of adjudicating this, they allow veterans to go before what's now called a judge, but basically an adjudicator, who will hear their case and decide what benefits they should get.

But there are so many cases that the average person gets a hearing of only 5 minutes, and when you look at the actual relationship between their objective underlying conditions when they're properly measured and then the award that they're making by the adjudicator- the award that they're given by the adjudicator, there's no statistical relationship, which is to say the adjudicator is not actually using any of the information. They're using something else, right?

So, that's one of those cases where you say, well, wouldn't it just make perfect sense to have someone fill out a form, you know, and do a survey and have some measurements done right and then let the computer decide what they're owed? You might think that, right?

But a very large share of those people going before the adjudicator would still rather go before the adjudicator. Why? We like to be heard by humans. There's something in this reasoning process of me telling you what my problem is and just being heard that seems more fair.

So, we could apply that to any number of things and it's one of the reasons why we put a human in the loop, right? It's one of the reasons why whatever tax disputes you've got with CRA, you can always talk to a human at the end, right, or whatever issue you've got with your EI, you can always talk to a person.

We take humans out of the equation and what you take out of them sometimes is unfairness, is bias, but you also take out compassion because it's difficult for a computer to be compassionate, right? You take out that sense that someone in government is listening, that there is a human face there, so- and it doesn't answer the question except to highlight the fact that there's more than one reason to have a human making a decision, and part of it is a deeply procedural reason, right, that you want someone who's actually listening, right?

It might be so good that we can- you know, we might get so good at A.I. and human-machine interfaces that we're able to make a machine genuinely express compassion to people, where they really recognize that we're not tricking them but they understand that the machine has thought about all this stuff, right?

I think we're a long, long way away from that. Ideally, right, what you get is some mix of judges being genuinely aided by a machine which is telling them where the statistical average should be for that allocation, and then they're making, you know, adjudications around that average, for example, a little bit more, a little bit less for this reason.
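One way to picture that judge-plus-machine arrangement is a record that keeps the model's statistical baseline and the adjudicator's adjustment side by side, so the deviation and its reason stay explainable; the field names and numbers in this sketch are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Determination:
    predicted_award: float  # model's baseline from the measured conditions
    adjusted_award: float   # what the human adjudicator actually grants
    reason: str             # recorded justification for the deviation

def adjudicate(predicted: float, adjustment: float, reason: str) -> Determination:
    """Keep the baseline and the human's deviation together so the decision
    can be explained and audited later."""
    return Determination(predicted, predicted + adjustment, reason)

case = adjudicate(predicted=1480.0, adjustment=120.0,
                  reason="claimant described intermittent pain not captured in the questionnaire")
print(case)
```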

John Medcof: Yeah, thank you. I think, again, another really concrete example that illustrates your point very well, and I go back to this, the learning opportunity that you talked about and the ability of the machine to learn and self-correct along the way could be a key to getting us there, right?

And even as we maybe reproduce some of our biases in the system, as the systems get smarter and smarter, will there be a way to overcome those? Gillian-

Peter Loewen: John, if I could answer just for a second. I mean, just- because you're- because it's- learning is the important part and listening is the important part. You can imagine that if you could create- filling out a survey is not fun. I rely on survey data for my whole career and I just- God bless people who fill out my surveys. It's not a lot of fun to click through stuff, right?

But imagine that we could actually come up with a good human computer interface where people genuinely had the sense that the computer was listening and was interested and was probing them further, right, and instead of 5 minutes, they got 20 minutes or a 30-minute, you know, interaction, or there's another human on a screen on the other side, taking the questions and then inputting the data.

There are ways that we can imagine making use of, you know, some combination of humans and computers to really enhance the compassion and the care with which we deliver government services but also to improve the decisions and to remove that bias.

So, you can actually enhance both, right? There's not necessarily- we're not at the edge of the production function yet for what we can do, right? I think we can make things less biased and make them more human at the same time by taking advantage of technology.

John Medcof: Yeah, great point. This isn't a trade-off between the two. There's a world where both can continue to develop at the same time. Thanks for that, and Gillian, anything you'd want to add on this question about biases from the human inputs?

Gillian Hadfield: Yeah. So, both of these questioners are identifying something that does get overlooked as we get very focused on the risk of algorithmic bias, which has drawn a lot of attention, and that is, you know, the reason that our algorithms are biased is that, the way they're currently built, they are built based on the history of human decisions, which are biased.

And so, that kind of puts you back to, you know, can we build systems that are helping to remove that, and of course, it's not just bias. I think this was also part of what Peter is emphasizing, it's not just bias in an intentional sense, of course, it's unconscious bias. It's what happens because you're tired, it's because- it's what happens when you have, really, just a volume of decisions to make that, you know, you can't possibly make in a thoughtful way.
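A minimal illustration of that inheritance problem, using synthetic data and an invented group label: a model fit on historical approvals that were tilted toward one group reproduces roughly the same gap in its own outputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, size=n)  # two invented applicant groups
merit = rng.normal(size=n)          # the thing the decision should turn on
# Historical approvals depended on merit AND, unfairly, on group membership.
historical_approval = (merit + 0.8 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, historical_approval)
pred = model.predict(X)

# The learned model reproduces the gap in approval rates between groups.
for g in (0, 1):
    print(f"group {g}: historical {historical_approval[group == g].mean():.2f}, "
          f"model {pred[group == g].mean():.2f}")
```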

In small claims court, for example, I spent a fair bit of time thinking about access to justice, you know, it's getting 5 minutes from the judge on your case. Well, that's pretty bad from the point of view of the litigant. It's also pretty bad from the point of view of the judge in terms of how they're making those decisions.

So, I do think that has to be our aspiration, to build systems that can get us to this really critical ideal for these kinds of public services, and that is, you know, fairness, neutrality, responsiveness, being heard, you know. We're just now in a world where there's such a massive gap between the ideal of what we're supposed to be getting from a review of our benefits decision or a decision in our family law case or whatever it is, and what we're actually getting, because the volume, the complexity, the speed is just, you know, slowing us down so much on that and creating that gap.

So, I actually think both of these questions are getting at the key reason why it's so important that- just to pick up an earlier point of Peter's as well, that we need to get that confidence and that creativity, that ambition to rework the way we do government, the way we do regulation, and not to, you know, approach it, you know, from a defensive crouch, I guess, keep it out of here.

John Medcof: Yeah, and that maybe is another mindset we need to bring to our approach to A.I. and government.

Look, the conversation we've been having is generating a lot of questions and comments from the viewers. So, I'm going to go back to another one that touches on these related points of data and privacy that you've both talked about.

And we've got one question, "Your point about sharing data and breaking down data silos is a good point and might be one emerging advantage in retaining key talent here in Canada. For example, some very talented health care data researchers have left Canada for the U.S. because the datasets they want to work with are more easily accessed there. Where ultimately does responsibility to improve data access lie?"

And a related question, asking if you could elaborate on how we should be looking at or thinking about privacy and how we can do better.

And Peter, maybe we'll go back to you since I think you were- you brought this privacy point up first as being one of our thorny challenges.

Peter Loewen: Yeah, I'll let Gillian speak to health care data but I'll make just one point about it before that, but on privacy, I'm not an expert on privacy at all and it's actually- we have a couple of very clever, very smart experts on privacy, David Lie and Lisa Austin.

David's a computer scientist and Lisa's a lawyer, and listen to the two of them talk about- a computer engineer and a lawyer, and listen to the two of them work through privacy. You start to realize, at a technical level, it's extremely complicated, right, to figure out how you set up a system that's compliant with what we're trying to achieve with privacy law, right?

I think the difficulty is just- from a political economic perspective, just the way I think about it a little bit is the following, is that private companies have gotten pretty good implicitly at demonstrating to us the value of our data, at least in part, right?

And the fact that they deliver to us advertisements that we want is actually no small thing, right? As a kid who- as a person who has a kid who delivered a Saturday Star up and down Premier Road, one mile each way, and that thing was this thick with advertisements,...

[Loewen holds his thumb and index finger about an inch apart.]

... you realize it's pretty good when you can deliver personalized advertisements. People are giving everybody everything and that's actually revolutionary but beyond that, you know, the delivery of information online and newspapers- you know, The Globe & Mail's got a fully algorithmic and personalized front page in their newspaper. It's a real revolution in people's consumption of information.

So, we implicitly, I think, see the value in the way the private sector leverages our private data, in some cases, to the degree that it can. It doesn't talk about it but they show us it, right?

And there's a paper that came out last week on GDPR in Europe. I confess, I've looked at it quickly but not carefully, but its claims about the effects of GDPR, which is the European data privacy framework, on hampering innovation in the creation of apps are just staggering in terms of what the innovation costs of these data rules are.

So, we're at least having that conversation in the private sector, but I don't feel like government really makes the argument proactively about why it wants data, right?

So, recall the request from Statistics Canada a couple of years ago to access banking data on Canadians, which created a huge uproar in the House of Commons. Some people don't think the government should know what's in their bank account on principle, and some don't want CRA to see what's in their bank account, right? But there wasn't a full-throated argument about why those data were useful to StatsCan and what they could learn from them, right? We just don't talk this way about the benefits we could get by knowing things about data.

I'll give you one more example; Gillian's heard me use it before. The most useless possible design for a contact tracing app is the one that we adopted in Canada, because it was the most privacy-preserving. In the heart of a pandemic that was changing people's lives, leaving them locked down in their houses, our political leadership didn't have the courage to say, we have to track where you're going so we can tell you if you were standing beside a person who had the infection, and we can tell you where you were when it happened, so you can tell others who may not have had the app. They didn't even try to make that argument. Instead, they made the argument that, here's this thing and it's never going to, you know, and it was assurance after assurance after assurance, and it created an ineffective tool.

So, just on the political economics of it, I would just say that we're not in the practice of public servants and public leaders telling us why we might want to make a trade-off between some information that we give up and what we could get from it that would be better.

And I think that that's part of the problem: we presume that people won't want to trade off that data when, in their private lives, they tell us by their behaviours every day that they are willing to trade off those data.

I'll just say one more thing about health care, and then Gillian can talk about it in particular because she knows this well, but here's a curious thing, right? There's- I'll make this claim. I think it's true that the province of Ontario is the largest single purchaser of health care in North America, larger than any single hospital network in the United States, with an incredibly diverse population.

If we could crack the nut on how you use our population, use our system, to learn about novel treatments, to basically run masses of RCTs, randomized controlled trials, for medicine, to test pharmaceuticals and different surgical techniques, and on and on and on, we could be one of the most dynamic and useful and generalizable sites of innovation for health care in the world. We could probably fund a very large share, if not the lion's share, of our health care system through that, right?

But Gillian can tell you how hard it is to actually get at health data.

John Medcof: Great. Gillian, that sounds like a great segue over to you.

Gillian Hadfield: Yeah. So, Peter is thinking about the experience that I had, that Schwartz Reisman had, right from the early days of the pandemic in trying to help solve the data problem, and I think these questions are getting at something really very fundamental. It's a great example of the more general point that we need to change our perspectives on things to gain that creativity.

So, right before the shutdown, Schwartz Reisman ran what we call a solutions workshop on behalf of Diabetes Action Canada, which had brought to us the problem they were facing in trying to build a shared repository for research and so on for diabetes care, detection, and treatment management in Canada: it was almost impossible to access the data.

It would take three years to get hospitals to agree to share clinical data for researchers to work with. And to emphasize the point Peter mentioned, Ontario, for example, is a huge purchaser of health care, but because we have public health care systems throughout the country, we actually have some of the best health care data in the world.

We're a multicultural society. We have this great coverage, right? We don't have the same problems that you're going to have in a private insurance system, with lots and lots of groups that are excluded from care or getting very different quality of care. So, we have some of the best health care data in the world, but the way we've arranged it, and this is fundamentally a consequence of the way we have designed our legal framework around privacy, makes it very hard to use.

The data is held in tens of thousands of silos, and there is such legal risk, as well as public perception risk and reputational risk, around transferring that data that it just basically grinds to a halt. We are not getting that kind of data sharing happening.

So, we had held this workshop right before the pandemic, with the shutdown coming in March of 2020, and, you know, I think there's more to be gained here, there are opportunities here. Peter mentioned David Lie and Lisa Austin, who work with us on approaches to how you could organize that data and collect it in a place where you could do research on it, to inform your contact tracing, inform your treatment options, and inform the building of A.I.-powered tools to help detect and respond to the pandemic.

And we were working day and night for a long time, just trying to get that model in there. You know, we got part way down the road, but, and this now, I think, goes back to Peter's Eeyore point, that we still have some work to be done here, the infrastructure of privacy protection and the norms around it were such that it was almost impossible to get in a new, creative, beautifully privacy-protective, but different, approach, one that had to recognize that our historical approaches to de-identification don't work in the era of A.I. and our historical ways of thinking about data minimization don't work in the era of A.I.

We need new solutions, and there absolutely are new solutions, but everybody has to say, okay, I'm recognizing that the way I've approached this problem, or the way we've institutionally approached the problem for decades, needs to give way to something new. And can we get focused on what we should care about, another point Peter's been emphasizing, which is that it's not only about privacy, right? It's not only about making sure nobody ever finds out what your result was on the antigen test or where you were on Tuesday at 4:30 in the afternoon.
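[For illustration: neither speaker names a specific technique here, but one widely cited "new solution" that goes beyond de-identification is differential privacy, which adds calibrated statistical noise to aggregate results so that no individual's record can be confidently inferred. A minimal Python sketch with hypothetical numbers:]

```python
# Illustrative only: differential privacy releases noisy aggregates rather
# than relying on stripping identifiers from individual records.
import numpy as np

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Return a noisy count satisfying epsilon-differential privacy.

    A counting query has sensitivity 1 (one person changes the count by at
    most 1), so Laplace noise with scale 1/epsilon suffices. Smaller epsilon
    means stronger privacy and more noise.
    """
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: report how many patients in a clinical dataset tested
# positive, without revealing any single individual's result.
print(round(dp_count(true_count=1423, epsilon=0.5), 1))
```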

It's about figuring out: there's public value in that data, public value that, with the pandemic, really was counted in lives, and we have to be able to recognize that. How are we going to grab that public value? How are we going to recruit that data to that value? Data's a totally different thing today than it was 20, 30, 40 years ago when we first developed these systems.

So, it's a great example, and I hope that's giving your questioner some insight into how we need to think differently about privacy, because we need to think differently about data, and into the consequences of sticking with our legacy systems.

John Medcof: Yeah, thank you. I mean, you make a really important point, that our environment is evolving so quickly. We need to find ways to adapt to this, and let me use that as an opportunity to kind of, you know, do a little think back over what we've maybe seen over the past months, you know.

The series is called A.I. Is Here. It's here now, and one of the things we talked about when we launched this learning suite back in November, you know, eight events ago, was just how quickly everything is moving in the system and how we need to be responding to it and seizing the opportunities it presents. But if we think back over even just, let's say, those last six months, I'd maybe ask you: have you seen any particularly significant technological, legal, ideological, or maybe even commercial or public policy development related to A.I. in Canada or internationally that can help serve as a catalyst for this change?

Like, is there anything happening now or that's just happened that is going to be a game-changer and maybe get us over some of these challenges we're facing?

And I'll start with you this time, Gillian. Anything that you'd like to share on that front?

Gillian Hadfield: Well, I've spent a lot of time thinking about the regulatory side of this. So, obviously that's related to the use of A.I. in government, but it's also about how government is going to interface with regulating A.I. in all the places we're going to find it, which is just about everywhere.

I think what I have found encouraging over this period of time is the conversation, the U.K. has given this a name, around paying attention to the idea that we need to build a whole ecosystem for how we make sure that A.I. is safe and built in the ways we want, and create the incentives for building that kind of A.I.

If we're going to build A.I. that can improve the neutrality of decision-making, for example, as some earlier questions brought out, or if we're going to build systems that can ensure that data, even in massive volumes and quantities, is being held securely and in appropriately privacy-protected ways, then we're going to need a whole ecosystem of providers who are certifying these systems, who are figuring out what it means for them to be fair, and so on.

So, I've seen the development of that more advanced way of thinking about the problem we're facing, as opposed to, you know, the EU A.I. Act approach, proposed legislation introduced about a year ago, which is still very top-down. I don't see that as a creative response or a flexible response or something that can deal with the rate and speed of change. So, I've been encouraged to see that development.

On the technological side, it is moving so fast, and I think the key thing we are seeing is that the scale of these systems is massive, that these are not systems you can just kind of pluck out and say, okay, here's a system, we're going to evaluate it, and that many of them are built on top of each other.

So, there's, you know, what the researchers at Stanford have called foundation models: massive models that are being used to analyze text and natural language and that are then being used to build all kinds of other things, including to do coding. It's the scale, the speed; it continues to be really very striking.
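[For illustration: "built on top of each other" typically means reusing a large pretrained model for a downstream task with little or no extra training. The sketch below is an example using the Hugging Face transformers library and its default sentiment model; the library, model, and sample text are assumptions, not anything referenced at the event:]

```python
# Illustrative sketch of building on a pretrained foundation model: the heavy
# lifting (pretraining on massive text corpora) has already been done, and we
# simply reuse it for a downstream task.
from transformers import pipeline

# Downloads a pretrained sentiment model the first time it runs; no
# task-specific training happens locally.
classifier = pipeline("sentiment-analysis")

feedback = [
    "The new online service made renewing my permit painless.",
    "I waited three weeks and never heard back about my application.",
]
for item in feedback:
    result = classifier(item)[0]
    print(f"{result['label']:>8}  {result['score']:.2f}  {item}")
```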

So, I would say those are some developments that I've seen in the last six months since we started this.

John Medcof: Yeah, amazing. Those are game-changers. Peter, anything you've seen recently that you see as a catalyst for a path forward?

Peter Loewen: Well, I'll tell you about a game-changer in our industry, education, the one Gillian and I are in, which is a major one. I read an account this week of the potential for, is it GPT-3, Gillian?

Gillian Hadfield: Yeah.

Peter Loewen: To generate essays. So, basically, as I understand it, it's a massive computer in Silicon Valley: you put in a question and it gives you an answer. But its capacity to synthesize information and turn it into a written essay on some question, an essay that looks different the next time you put in the same question, which is to say it creates a unique answer that you can't run through a plagiarism detector, is potentially very disruptive for education, because students will use it to write their papers.
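[For illustration: a minimal sketch of what "put in a question and it gives you an answer" looked like programmatically with the OpenAI completions API of that period. The model name, prompt, and parameters are illustrative assumptions, and an API key is required:]

```python
# Illustrative sketch, not the specific system described at the event.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-002",   # a GPT-3 family model available in 2022
    prompt="Write a short essay on what increases voter turnout.",
    max_tokens=400,             # cap the length of the generated essay
    temperature=0.7,            # sampling randomness, so repeated runs differ
)

# Because of sampling, the same prompt yields a different essay each run,
# which is exactly the plagiarism-detection problem described above.
print(response["choices"][0]["text"].strip())
```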

At the same time, there are new technologies coming out which are overlaying on top of Google Scholar, which is itself a pretty simple A.I., to give us answers to academic questions. You want to ask, you know, what increases voter turnout, or what is the most effective diet for combating heart disease? There are technologies now that are basically synthesizing every answer that's found in the abstract of an academic paper.
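[For illustration: one common building block of such abstract-synthesizing tools is semantic search, where a question and paper abstracts are embedded as vectors and ranked by similarity. The library, model name, and abstracts below are assumptions, not the specific tools described at the event:]

```python
# Illustrative semantic search over a handful of made-up abstracts.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small pretrained embedding model

abstracts = [
    "We find that door-to-door canvassing raises turnout by several points.",
    "A Mediterranean diet is associated with lower rates of heart disease.",
    "Text-message reminders modestly increase voter turnout among young voters.",
]
question = "What increases voter turnout?"

# Embed the question and the abstracts, then rank abstracts by cosine similarity.
q_emb = model.encode(question, convert_to_tensor=True)
a_emb = model.encode(abstracts, convert_to_tensor=True)
scores = util.cos_sim(q_emb, a_emb)[0]

for score, text in sorted(zip(scores.tolist(), abstracts), reverse=True):
    print(f"{score:.2f}  {text}")
```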

It's really increasing the speed, the second derivative is positive, with which we're able to write things and learn things in our world. It's going to create problems for professors who are trying to catch students who are not writing their own essays, but what it represents, in a more important sense, is how rapidly we're getting to the place where knowledge can be synthesized at a high level of abstraction. That's really positive as a development, right?

And for public servants, it's very positive, right, because the job of a policy analyst, or of a person designing a policy, is often to try to answer, quickly, a whole bunch of very difficult questions that have very large uncertainty bounds, for the purposes of some, you know, political objective, and I don't mean that pejoratively, but for the objective of advancing some file. It's done with all sorts of uncertainty and a lack of expertise, or, when you do want to bring in expertise, it's incredibly slow because it requires careful study and/or widespread consultation.

So, the rate at which we've increased our capacity to ask and answer questions drawn out of the academic literature, which is to say scientific knowledge, really is impressive to me. It's happening faster than I thought it would, and that's interesting for us as academics, but it would also be interesting if people came up with good interfaces for that to be used easily by public servants. It could really change the way you learn about policy choices pretty rapidly.

John Medcof: Yeah, that would show great promise. Maybe it starts in the academic sector but certainly, as per your point, you can see many applications for such a tool more broadly and particularly in the public policy context.

Look, we're down to just a last couple of minutes so I'm going to invite each of you to share, in one sentence, a key takeaway you would like to leave with our learners as we close the series here, and I'll start with you, Gillian. Go ahead.

Gillian Hadfield: Okay, so the key takeaway I'm hoping people get here is that it's really important that all of us, not just computer scientists, be building A.I. and building the systems around it. It needs to remain deeply connected to our human systems and our human ways of relating to one another.

And currently, I'm feeling quite worried about the fact that it is so dominated by computer scientists, whom I love, but it's really critical for all of us who are not computer scientists to get involved and to be part of creating this new world.

John Medcof: Great, thank you. A call for all of us to get involved. Peter, the last word is over to you.

Peter Loewen: Yeah, so the lives that Gutenberg changed weren't only the lives of printers, right? It was pretty widespread, and the pace at which A.I., and technology more broadly, is going to change all of our lives is pretty quick, and it's going to be very, very widespread.

So, I think, you know, just three subpoints to the sentence, right? One is that we should be ready for it to happen quickly; two, we should understand that it's going to be disruptive and we should treat that disruption with some caution but also embrace it; and third, you know, democratic governments need to be at the lead of figuring out how to properly and constructively use A.I., because the private sector won't be seized by the same moral imperatives. The government will be, and God knows China will be. So, it's really a key role for public services, like Canada's in particular, to play.

So, that's one of the great reasons for us to have done this series which we're so happy to have done.

John Medcof: Amazing opportunity for us to lead. Thank you to you both, Peter and Gillian, for your time and for your valuable insights today, and more broadly, on behalf of the Canada School of Public Service, I'd like to thank not just both of you but everyone at the Schwartz Reisman Institute who was involved in bringing the A.I. series to life over the course of the past eight events with the help of some really impressive guest speakers.

Our viewers, I think, have been taken on a really amazing journey as it relates to the many different facets of A.I. and the transformative impact it can have on nearly every aspect of our personal and professional lives. So, we are immensely grateful to have had the opportunity to work with you to bring such high-quality learning material to our public service audience.

And to our learners, we hope you enjoyed today's event in the A.I. Is Here series. We'd love to hear your feedback on today's session via the electronic evaluation you will be sent by e-mail, and you'll also find a link to some of the previous events and series available on the School's web page.

And with that, I will close. Thank you very much again, Peter, thank you very much again, Gillian, and thank you everyone for watching.

Peter Loewen: Cheers. Bye.

Gillian Hadfield: Great. Thanks, John.

[The video chat fades to CSPS logo.]

[The Government of Canada logo appears and fades to black.]
