The New Economy Series: The Data-Driven Economy (LPL1-V02)

Description

This event recording features a panel discussion on data ownership, the governance of emerging technologies and setting standards for big data and artificial intelligence.

Duration: 01:15:48
Published: July 8, 2020
Type: Video

Event: The New Economy Series: The Data-Driven Economy



Transcript

Transcript: The New Economy Series: The Data-Driven Economy

[The animated white Canada School of Public Service logo appears on a purple background. Its pages turn, opening it like a book. A maple leaf appears in the middle of the book that also resembles a flag with curvy lines beneath. Text is beside it.]

Webcast | Webdiffusion

[It fades out, replaced by a Zoom video call. The video window is filled with a man with brown skin and short black hair. He wears a blue suit jacket over a yellow button-down shirt. His camera is angled up slightly, so we see the upper walls and ceiling behind him. A picture of a person on a snowy mountaintop hangs on a yellow-beige wall.]

Anil Arora [AA]: Welcome, everyone, to this second event in the New Economy series, a partnership between the Canada School of Public Service and the Centre for International Governance Innovation, or CIGI. I want to start by thanking the Canada School of Public Service and all of you for the opportunity to join you today. And it's a real privilege for me to be your moderator for this session today. I'm really proud to have been invited to participate in this important program that, through a series of learning events, explores features of today's economy, such as the importance of intellectual property and data strategies in driving economic growth, cyber security, foreign direct investment, and the importance of global standards and governance regimes.

[Anil's audio is slightly tinny and muffled as he speaks.]

Allow me to introduce myself, first of all. My name is Anil Arora and I'm the Chief Statistician of Canada at Statistics Canada. Colleagues, as you know, the last few months have been a challenge for all of us as we navigate the new reality of living with this pandemic and the uncertain future that it presents. You'll have read many times that this is a data-driven pandemic; the need for timely, meaningful and granular data has never been more important as we try to understand the interrelated economic, social and health impacts of COVID-19 on Canadians.

Statistics Canada's role is to provide data and insight for all Canadians, and we at Statistics Canada have had one philosophy over the last few months: how can we be helpful? Helpful to frontline staff, to Canadians, to policy-makers, to the media, to businesses, to NGOs and, of course, on the international scene as one of the leading agencies in the world. Bringing reason and meaning to what we do has been a powerful motivator for all our staff. The insights we provide are important because we've learnt that Canadians are being affected in all areas of life. For example, 90% of Canadians report practicing social distancing, which is impacting their social and working lives. Approximately 5 million work most of their hours from home; 3.3 million of them usually work elsewhere. 36% express strong or extreme anxiety about their health, 54% about the health of others, and 34% report family stress.

The importance of data, standardization and governance has been highlighted in the context of this pandemic. Throughout our history, Statistics Canada has played a leadership role in defining data standards and promoting data governance. As the calls for more data persist from various jurisdictions, it's evident that solid data guardrails are as important as ever. Data must first be generated in a standardized way if we are to provide relevant information and allow meaningful comparisons and insights for all Canadians. Over the past few months, the COVID-19 pandemic has underscored the importance of collaboration amongst organizations and across jurisdictions so that we can collectively address this pandemic. So, as it turns out, data is indeed a team sport. We must work closely with partners in our federal family, the provinces and territories, municipalities and other organizations to bring more data and insights to Canadians at this time. And like all team sports, it is very important that everyone plays by the same rules.

We've had a number of discussions with our partners in the community to find ways to enhance collaboration and create a foundation of common understanding; these provide the ground rules for the effective use of data. Whether it's addressing the COVID-19 pandemic or the many other issues that Canadians face today, we need a consistent set of standards as a foundation that allows for timely, well-informed decision-making. A key undertaking in this regard is our work with the Data Governance Standardization Collaborative. This summer, the Collaborative plans to release a roadmap that will lay the groundwork for responsible use of data and help us determine the functional requirements for data governance from a data lifecycle perspective. Open data and standards go hand in hand: with greater use of standards in data sharing, we'll be making more data available to everyone. And with more data comes greater insight into the Canadian experience. As we face very urgent demands for data, we have to focus on providing the information Canadians need when they need it, instead of worrying about accuracy to the nth degree. And all this, of course, while protecting the privacy and confidentiality of Canadians' trusted data.

Now, I could continue speaking for quite a while on these issues, but before we launch into a discussion and hear from our panelists, I do want to highlight a few housekeeping items for our session today. Simultaneous translation is available in the language of your choice through the portal; instructions were sent to you with your webcast link. Viewers are invited to submit questions for the moderated Q and A near the end of the event. You can submit a question on your desktop computer by clicking on the icon of the person with their hand up at the top right of the screen. I'm joined today by three experts from the Centre for International Governance Innovation, CIGI. And with this, I'd like to introduce Aaron Shull, who is the managing director and general counsel at CIGI, to offer a few words of welcome, introduce CIGI, and introduce our three panelists. Over to you, Aaron.

[A cursor unpins Anil's video, and four more video windows appear. In the bottom middle window is Aaron, a white man with short dark hair and a small beard. He wears a grey suit with a black shirt and tie. Behind him, half of a poster for CIGI is visible. The cursor pins Aaron's window, and it fills the screen.]

Aaron Shull [AS]: Great. Thank you very much, Anil. And it's a pleasure to be with you all this afternoon. As Anil said, my name is Aaron Shull. I am the managing director of the Centre for International Governance Innovation, based in Waterloo, known as CIGI for short. And CIGI is a public policy research institute often called a think tank. We focus on emerging issues and challenges at the intersection of technology and international governance. And we are delighted to partner with the Canada School of Public Service to deliver this important series, the strategic objective of which is to outline some of the fundamental challenges facing Canada as the economy changes from one where the primary drivers of growth were tangibles, things like bricks and mortar, plants and assembly lines, to one where the principal driver of wealth will be intangible in nature, with control over intellectual property and data being the core features of creating prosperity.

Before introducing the speakers, I first wanted to say a special word of thanks to Taki Sarantakis, the president of the Canada School, for his leadership and vision in bringing all of this together under one series.

To introduce my colleagues from CIGI is a real pleasure, because we've brought together some true experts in this area. First, Michel Girard. Michel is a senior fellow at CIGI, where he contributes expertise in the area of standards for big data and artificial intelligence. In addition, Michel provides standardization advice to help innovative companies in their efforts to enter international markets. He contributes to the CIO Strategy Council's standardization activities and was recently appointed to the International Electrotechnical Commission's market strategy board, which helps develop and maintain more than 10,000 international standards for electrical and electronic technologies. He has 22 years of experience as an executive in the public and not-for-profit sectors, and prior to joining CIGI, Michel was vice-president, strategy, at the Standards Council of Canada.

Teresa Scassa joins us. She's also a CIGI senior fellow and is the Canada Research Chair in Information Law and Policy and a full professor at the University of Ottawa's Faculty of Law, where her groundbreaking research explores issues of data ownership and control.

[Aaron's video window is unpinned, and the three panelists' video windows reappear.]

Teresa is an award-winning scholar, and is the author and editor of five books, and over 65 peer-reviewed articles and book chapters. She has a track record of interdisciplinary collaboration to solve complex problems of law and data.

And finally, last but not least, my colleague Taylor Owen. Taylor Owen is also a CIGI senior fellow and cohosts our very own podcast, the Big Tech podcast. If you haven't seen it, I would commend it to you. He's an expert on the governance of emerging technologies, journalism and media studies, and the international relations of digital technology. He's the Beaverbrook Chair in Media, Ethics and Communications and the director of the Centre for Media, Technology and Democracy at the Max Bell School of Public Policy at McGill University, where he is also an associate professor. So with that, I will turn it back to Anil, and let me just finally say a special thanks to Anil. I know how busy you are and how many demands there are on your time. Thank you, Anil, for taking the time to chair this important panel.

[Anil's video appears alongside the others.]

Anil Arora: Thank you very much, Aaron; again, it's a pleasure to be with you. Colleagues, as a public service, we're increasingly being confronted with challenges related to national and international data governance, the ethics of data use and sharing, and how to leverage existing data sources to accelerate innovation.

In collaboration with my colleagues at PCO and TBS, I had the privilege of leading the development of the Government of Canada's data strategy, which seeks to position Canada as a leading jurisdiction in the field. The recommendations in the data strategy are structured around four themes: strong governance, improved data literacy and skills, enabling infrastructure and legislation, and more focused treatment of data as a valuable asset. The goal is to set a foundation so that the Government of Canada creates more value for Canadians from the data that we hold. And I'm pleased to say that government departments have made good progress in establishing and implementing their respective data strategies. And our work continues. We continue to collaborate with the Canada School of Public Service on a number of initiatives supporting the data strategy, including the Data Science Community Initiative, which involves building a data science ecosystem to share expertise and best practices in order to build data science capacity right across the public service.

[Anil's window is pinned.]

We've also worked together to establish data literacy competencies, guide the development of the Discover Data training course, and contribute to the School's data literacy training project to produce online training resources in core data literacy areas. So, as Aaron said, we really are looking forward to the School's continued leadership under Taki.

So as you know, many of the challenges related to data are no longer purely academic in nature. Governments around the world are working to address privacy implications and are challenged by the power of emerging technologies, which may threaten the integrity of disaggregated and anonymized data. Questions of data privacy and security are not new, but the pandemic has brought a new urgency to this issue, as I mentioned in my opening remarks. And so with this, I'd like to invite our panelists to each provide their opening remarks. And I'm going to start with Michel. So, Michel, over to you.

[Michel is a white man with glasses and short white hair. He wears a grey button-down shirt, and he sits in a room with two white doors behind him. Michel's window fills the screen.]

Michel Girard [MG]: Thank you very much, Anil, for this invitation to share with you some of our insights regarding the challenges that we face in transitioning Canada's economy to a digital economy. And as you heard before in this webcast, we really need a national framework in order to succeed. A framework that will allow organizations to collect data, to share data and then to generate insights from that data. And that framework, as was said before, it really needs to be based on sound governance principles.

Digitization to us means harnessing new data sets. It's not only about existing data that we've already mastered, but about the new data sets that will be coming up soon, such as those from Internet of Things devices; there will be billions of them installed across Canada to collect data. It is about transitioning to 5G networks, which will be installed at some point and will triple the amount of data that can be transmitted across the country. And it is about thinking through data access and ensuring that our data scientists have access to the best possible data in order to generate new insights. So, it's coming. We have new data sources coming up and we have a lot of interesting things happening. But we also know that there is demand from across economic sectors in Canada for this to happen. I'm just pointing to some of the reports that came out after Minister Morneau created the economic strategy tables. There were seven of them, and they looked at the challenges faced by various sectors: the manufacturing sector, resources of the future, agriculture, health and biosciences. And one common theme that emerged from all of these reports was the need to develop and implement national digital strategies so that we can help these sectors individually to digitize.

Let me just quote you one sentence from the advanced manufacturing report in terms of the urgency of doing this. And they said, "We will either digitize operations or we will die." So it's not like it's a "nice to have," it is a "must have." We need to find a way to integrate new data sets, new data insights into our existing operations. Otherwise, we're not going to be competitive anymore and we're going to be left behind. So, we need data collaboration to transition to this new economy. But for that, we need a framework. We need a framework to help organizations share data, right? It's not only about one organization generating a data set and then taking advantage of it. It is creating a network, an opportunity for multiple organizations to share data so that they can help each other achieve common goals. It could be social, it could be economic. We also need a framework to create new data platforms. We need access to data. And right now in Canada, it doesn't really exist. We know we have the big tech platforms, the Facebooks and the Twitters and others, but they're not owned and operated in Canada. So the data just flows away, and you don't have a lot of control over what to do with this data.

The third thing that Anil mentioned is that whenever we create those data collaboratives, we need to ensure that the data is trustworthy. We need to be able to say, "OK, I understand the features of the data. I know how to use it so that I can manage it properly." These are exciting times: we're entering a new era and witnessing the creation of a new economy that cuts across sectors, and that hasn't happened a lot in our history, so we've got a really good opportunity here. But it's not only an economic issue. Governments have a huge role to play here in order to get it right, because digitization is not only an economic issue, it's also a public policy issue. So, getting data governance right is extremely important if we want a successful transition to the digital economy. So, those are my opening remarks. I'm going to let my two colleagues from CIGI add theirs, and then we can answer your questions. Thank you.

[The other participants' windows reappear.]

Anil Arora: Thank you very much, Michel. As always, I'm always intrigued by your framing in your comments, so thank you for that. I think it'll generate a lot of good discussion coming up. Next, I'm going to turn to you, Taylor, and get you to give your opening remarks. So, over to you.

[Taylor is a white man with short sandy hair and black glasses. He wears a light blue button-down shirt. Behind him, a white wall is offset with a large red panel, and white fragments of sculptures and carvings hang on it. Taylor's video window is pinned. His audio is slightly tinny, and his video freezes occasionally as he speaks.]

Taylor Owen [TO]: Yeah, sure thing. Thanks, and it's great to be here. So, I'm broadly interested in how we govern emerging technologies generally, and I think data governance is a piece of that. But I'll make a few remarks to make the case that it is a component of what needs to be a much broader agenda. In particular, I'm interested in whether our current legal and regulatory regimes, as we've built them over the past century for a set of policy demands and challenges, are suitable to meeting some of the policy challenges we face due to new technologies. And if they're not, what kind of new framework do we need? I think that's a very big question, because technology touches so many aspects of our lives. And because the problem is so vast, covering so many different policy domains across departments, ministries and areas of expertise, its very comprehensiveness often means we don't have entry points into the conversation. And I think there's a real governance challenge in figuring out new policy agendas in this space.

But one thing that I think can be quite clarifying is moments when we're making policy decisions around very particular technologies for very particular purposes, because these decisions can reveal how complex, broad and multifaceted the governance of technology actually is. And I think the recent experience we just had with contact tracing procurement offers a window into that. Teresa and I are both going to use it, I think, as a sort of entry point into our perspectives on data governance. So, let me just say a few things about that moment we just went through, around governments deciding whether they should procure, or develop themselves, a set of tools for contact tracing or exposure notification, and some of the implications of that.

Just very briefly, as a framing, there was this amazing moment in a press conference when the protests were ongoing in Minneapolis where the public safety commissioner described how they were going to start tracking protesters. And he said, "we're going to track who everyone's associated with, what platforms they're advocating for and on and how they are organized. And we're in the process of building out that informational network now." So it's this kind of crystallizing moment where you have the language of contact tracing being used in this epidemiological sense from governments, including ours, being appropriated for policing surveillance, right? And I think it shows that we're really talking about a much broader set of issues here.

So, what does our experience tell us about data governance generally? One is that it was very clear from the start that not all technologies being proposed and discussed were of the same kind. Under this bubble of discussing a capacity to track the spread of the disease, we were talking about very different technologies with very different implications and infrastructure needs: from exposure notification, which could be very decentralized, to contact tracing, which demands some degree of centralization—and there were some bids that were really proposing that—to a much broader deployment of more invasive surveillance technologies writ large. And each of those three categories of technologies had vastly different needs: the technologies themselves, the infrastructure behind them, the legal and ethical frameworks that need to sit on top of them, the feasibility of them even being possible, and the long-term implications of governing with these technologies. And they really demanded a high level of literacy from the people who were making the decisions around this. And even the fact that we landed on an exposure notification model and kept calling it a contact tracing model shows just how conflated these things became.

A second point is that conflating the promise made of some of these technologies with an assumption that they are possible makes it very difficult for us to judge the trade-offs between them. There are always grand promises when either societies or governments are considering new technology infrastructures, and unless we know how feasible the promises are, we cannot make trade-offs, because all of these technologies, particularly in the surveillance space, encompass large trade-offs between rights, collective securities, levels of democratic accountability and ultimate desired outcomes, particularly in a pandemic. It's very hard to calibrate those when we don't have detailed assessments of their promise, and I don't think that happened in this case.

A couple more quick points here. One is that how we deploy these technologies can also entrench power, and we've seen this over and over again as we've debated how and whether we are going to govern emerging technologies. Joseph Stiglitz makes an argument that I think is really powerful: that coming out of the Great Depression, we went through an era of trust busting because companies had become much bigger during that period. But it wasn't just because they were getting economically more powerful; it was because, during that time, they also became politically much more powerful. They became embedded in the political system. And his argument now, which I think we need to consider, is that in a similar moment of crisis and potentially long-term economic challenges, we may be entrenching a new set of power and interests. So when we talk about deploying individual technologies, we're not just talking about those technologies. We're talking about the economic systems and political systems that are becoming embedded with those technologies, and I think that is really important to keep in mind.

And finally, I think we need to be very aware of how technologies can embed certain norms of governance and can reinforce existing inequalities. A lot of these solutions that on the surface look viable and generally carry minimal risk in either Ottawa or Palo Alto, where they're being talked about, look very different in communities with deep existing structural inequalities. And I think that clearly came out in the debate over contact tracing particularly: when you embed these kinds of systems, they affect people differently, and particularly for technologies that we hope have a collective social benefit, ignoring those inequalities and how they can be exacerbated by technology is really challenging. So I think I'll leave it there. I know Teresa is going to continue on our COVID case study here, and I look forward to discussing it.

[Anil, Michel, and Teresa's video windows reappear alongside Taylor's.]

Anil Arora: Thank you very much, Taylor. That was excellent. You sure have raised a lot of issues that I would think many people may not have even thought about. It gives us a lot to discuss in all those areas that you have just raised.

Last but certainly not least in this set of opening remarks, I'd like to now turn it over to you, Teresa, for your thoughts.

[Teresa smiles. She is a white woman with glasses and greying dark hair. She wears a pale blue and white striped button-down shirt, and her window's background is blurred behind her. Her mouth moves silently, and she gestures with her hands.]

Anil Arora: Teresa? You might be muted.

Teresa Scassa [TS]: Oh, I'm muted.

[Teresa chuckles. Her video window is pinned.]

So, thank you, Anil, and thank you also to Taylor for framing the contact tracing app discussion. I'm going to continue with it, and I'm going to try to give some concrete examples of how I think the debate over contact tracing apps, both globally but also in Canada, raises some interesting issues around digital technology adoption by governments. So, one point is whether the actual need defines the technology or vice versa: whether the technology defines what government is going to get. And with contact tracing apps or exposure notification apps, I think you'll probably find that what public health authorities wanted or needed in Canada is probably different from what they're going to end up with under the Google-Apple exposure notification model that has been adopted. And some of the factors that shaped the ultimate decision-making, that meant they ended up with less than what they ideally wanted, included a set of privacy and security concerns and public trust issues that Taylor's already talked about, as well as another phenomenon: the growing international dominance of the platform, or the approach, suggested by Google and Apple. And I think that's an interesting phenomenon, too, that we might return to later in the discussion.

Another issue which was particularly acute in Canada, and always is, is the issue of digital federalism. Public health and contact tracing are primarily within provincial jurisdiction. The pandemic created a national-level concern. And the question was: when it comes to sort of adopting a digital solution, who is supposed to lead and how does that get done? And obviously technological interoperability was particularly important—you can see its importance in a city like Ottawa where people live on one side of the river and work on the other—if you don't have interoperable contact tracing apps or exposure notification apps, then it's a pointless exercise in that region. So, there are reasons for federal action, but there are challenges that come with that because of the federal-provincial relationships. And so I think that it just exposed once again these challenges that we have of doing digital federalism and laid those bare.

Another issue that comes up is data ownership and control. If you're adopting a technology that is going to be collecting data, I think it's a fundamentally important question: who has control over that data? What was interesting in the context of the exposure notification system that was ultimately adopted at the federal level in Canada is that, for the most part, it is the individual, and not the state or any private sector actor, that has control over the data. I'm just going to mention that briefly, because I think it's an illustration of the control issue, and, of course, who has control has an enormous impact on who can actually access or use the data and for what purposes.

Another issue that I think has been very important and will continue to be important in this space is transparency. Tech adoption requires public trust. Transparency is important to public trust. In the case of contact tracing and exposure notification apps, we saw a trend very much towards open source code so that the code was visible and people could look at it and examine it. We've seen somewhat less transparency around privacy impact assessments. For example, the Government of Alberta has released an app but not released its privacy impact assessment yet. And so there may be a little bit less transparency around that sort of thing. I think there's been mixed transparency around goals and objectives of these systems and why they're being adopted. And we can get into that in more detail but I do think that's an area where there's been less than ideal transparency, and I'm going to talk in a minute about metrics because that's to come. And I think we need transparency around that and I'm not sure that we're going to get that.

Privacy obviously has been an issue with these apps and has shaped the kind of technology we've ultimately ended up with. Privacy is linked to public trust. There are surveillance issues with contact tracing apps, which Taylor has spoken about, and app security issues in the privacy context. But there are many issues that go beyond privacy, and privacy in the context of this technology took up a lot of the oxygen in the room and left some of the other issues under-discussed. And yet these are going to be issues that will be fundamentally important to the adoption of digital technologies generally by governments. In this case, we were talking about a new and unproven technology being unleashed on a vulnerable population. People have lost their jobs. They've lost income. They're in fragile mental health. They're concerned about illness. So, we have a very vulnerable population. There are complex issues of ethics and human rights that we are not yet great at evaluating and assessing at the technology adoption stage.

So for contact tracing apps some of the issues were, should they be voluntary or mandatory? If the technology is adopted and if we're told it's useful and it will protect us, which we have been told, then what about access to the technology for those that don't have smartphones or don't have smartphones with the right operating system, where there are other barriers of language or physical ability? And so these are issues that we need to think about. There's the potential for businesses and employers to make a voluntary app mandatory as a condition of return to work or entry to premises. And how does a government that's decided to make an app voluntary deal with the conversion of this app to an effectively mandatory app? There's also the risk of potential harms from false alerts and who will disproportionately experience those. And I think that it's complicated but it's important to think about. If I download this app and I get false alerts that basically mean I have to go into self-isolation until I can get tested, that doesn't matter too much because I've got a great job that I can do from home. But if I'm the sole breadwinner for my family and I have to go out to work and I'm going to have to stop working if I'm told to self-isolate or I need to get tested or if my employer has made the app mandatory, then this will have consequences for me in very different ways. And I think some of the debates that we've had around equality and equity within society feed into that as well in terms of the disproportionate impact. There are also potential harms from overreliance or false expectations. There are people who actually believe, and they've been told, that this app will protect them. Well, what does that mean, protect them from infection? Not really. That's not what it's designed to do. So what is that messaging and how does that impact the public?

And a final point I'll make before I stop is one about metrics. How are we going to evaluate this technology? Up until now, we've seen fairly crude metrics around the numbers of people who've downloaded the apps and whether or not that makes the app a success. But those are really crude metrics. To understand whether this app is actually useful or successful, we're going to have to have other data about its performance. Well, how do we get that data? Is it possible to build that onto the GAEN system that we've adopted, which minimizes data collection? Is it possible to do it in a privacy-protected way? What are the measures that we need to put in place? Are we going to roll out an app with the capacity to collect the necessary metrics to understand whether it's working as it should be and is successful, or whether we should decommission it because it's having an unduly adverse impact and is not a beneficial technology? And are we going to be transparent about that? So, I'm going to stop there. That's sort of a canvass of a range of different issues that this technology raises, issues that I think are raised in other contexts as well.

[Anil, Michel, and Taylor's windows reappear.]

Anil Arora: Thank you very much, Teresa. Again, some really intriguing questions, and I think we're going to explore a number of these areas now in the next 20 minutes or so, where I'm going to ask you a few questions and get your input, and then I think we'll turn to the audience to see what questions they have. So, on that note, I encourage our audience, our participants, to submit their questions. And we've already got a couple, so that's great; keep them coming. So let me just start here a little bit. Michel, we've talked about the importance of frameworks, and we've talked about the economic strategy tables as well. Tell us, is there another country that's got it right? Is there a framework out there that we could either emulate or use as a foundation to move forward? What does that look like, and what are our legal and policy infrastructure challenges? Are they robust enough? And in fact, if we had to focus our efforts, where are we vulnerable and where do you think we should put some of our effort?

[Michel nods and smiles.]

Michel Girard: Oh, that's a big question, Anil.

[Michel, Anil, and Teresa chuckle.]

In terms of best practices, I think we're seeing a lot of different viewpoints around the world and a lot of different philosophies being used to deploy new technology. So in the US, the market rules, and we're seeing a lot of successes with the deployment of new platforms, but the backstops are not really there. China is taking a very "command and control" approach to this. We're going to be seeing soon from China a document called Standardization 2035, or something like that, and it's about basically creating a set of standards so that 5G is connected to IoT is connected to AI downstream. There will be thousands of standards, not necessarily focusing a lot on some of the core issues that we just heard about earlier this afternoon, but certainly on the technology. China is moving decisively to establish the technical underpinnings for this to work. The only group of countries that has begun to think through and implement the backstops we need, to create a balance between the technology and preserving our rights, is the European Union. And they're doing it almost as a defensive mechanism, because they don't really have the firepower to design and implement new technologies in that field yet. I mean, they're struggling; they're not part of the leading pack. In my world, in standardization, I can tell you that it's trench warfare right now between all of these various groups trying to dominate the agenda and impose their technologies and, through de facto choices, the solutions that we're going to have to live with.

So, I think Canada is really sitting pretty here right now. I think we do have a chance to make a difference when it comes to embedding a set of values in the norms that we need for these technologies to be deployed properly. Alex Benay, who was the Government of Canada's chief information officer until a year or so ago, had a lot of interesting things to say about this. He said, "You know, Canada could become the Switzerland of data governance, marrying the two imperatives: designing the right framework for data value chains to work and, at the same time, embedding that data governance framework with the values that we need to survive as a democracy."

[Anil nods thoughtfully.]

So, there's a lot of stuff happening out there. I don't see a model that we can simply take and apply, and we have an opportunity here, because we're seen as credible, to design something that makes sense for us and that can be used by a lot of other democracies.

Anil Arora: Thank you. Thank you very much, Michel. So, some virgin territory for us to actually take some leadership in and, as you said, some real opportunities for businesses: new kinds of jobs, innovation. So I think having a solid, robust framework is really important. Taylor, I'll turn to you. You talked about how this can impact communities and different groups of people differentially, and how we need to worry about inclusion in adoption.

[Taylor nods.]

You talked about the difficulty of entry points. I wonder if you could give some advice to policy-makers, folks that are in the regulatory area. What advice would you have for them to create those entry points, to create that conversation? How do we do things in a way that respects the principles but also ensures the greatest inclusion, so that those unintended consequences I think you were referring to are things that we can bring up front and make sure are part of our thinking? So, what advice or what thoughts would you have?

[Taylor chuckles.]

Taylor Owen: Again, that is such a broad question, partly because, and this sort of highlights one of the real challenges I think we face here, the adoption of these technology infrastructures, whether they be social platforms or new business models or new computational systems enabling a whole set of economic, social and political activity, touches so many aspects of our lives and affects so many people in potentially negative and positive ways that the scoping exercise is vast. I think the problem we've fallen into with how we treat these topics is that we allow our existing mandates and institutions to make that calculation themselves, deciding what benefits, harms and trade-offs exist, without actually having a much broader conversation as a society about how we want to adjudicate those trade-offs.

One example in particular that I think we're in the middle of right now, and are going to be wrestling with for some time to come, is how our norms of speech, and our laws and regulations around allowable speech, are going to adapt to the digital sphere, or not. I think we are in the middle of a very hard debate about what speech we are going to allow as a society. And at the core of that debate is a tension between two values: a value of protecting free speech and a value of protecting people from the harms of speech. We as a society, taking Canada as an example, have had that debate before, and the results of those debates have led to the institutionalization of certain policies, whether through the Charter or the Criminal Code or just broad norms of acceptable speech and media regulation, and it extends broadly.

But are we having that conversation now about how we want to balance that trade-off, and about how technologies might change speech fundamentally: affect new communities, censor new people, embolden new people? We are in a different landscape of how speech is disseminated, how it's filtered, what speech gets prioritized, and who gets harmed by and who benefits from speech. So even just that one entry point of "how are we going to deal with harmful and hateful speech online" reveals the scope of the challenge, I think.

And when you think about that scope… The other thing I really think we have to grapple with here is that even framing this as data policy is far too limiting. That leads us down a path of talking about privacy regimes, data standards, potentially data rights regimes, policies around the movement and storage of data, and transparency around the use of data and algorithms, all of which are absolutely critical. But if that's not done alongside other sets of policies, whether in the competition space (how big are the companies that are controlling this space, and are we allowing foreign companies to do things in Canada that we wouldn't otherwise allow in other spaces?) or in content policies around what kind of speech and interaction we're going to allow, then we're not going to get at these problems. But that's really hard for governments to do, because those sets of policies and regulatory regimes and legal systems exist in various siloed components of our government, particularly at the federal level. Unless we do it all together, I worry we're going to miss the core opportunity here.

[Anil nods.]

Anil Arora: Thank you very much. So, I'll just continue on with my round of easy questions to Teresa next.

[Taylor and Michel chuckle, and Teresa smiles.]

So, Teresa, you raised a number of really, really important issues. You talked about contact tracing apps; both you and Taylor talked about them. And we've got another example in Sidewalk Labs in Toronto. So, some of these have real opportunities going forward, and yet, if they're not done correctly, you can have some real failures as well. You talked about the importance of trust, the importance of ownership, digital federalism, a number of themes. Do you have a good example of where somebody has done it right? How can we experiment and move forward so that we don't have continual false starts, or look back and go, "oh, darn it, we forgot about the following three aspects, so the only answer now is to kill it"? So, do you have some advice on a road map, or a way we can do it, or somewhere it's been done well, where we can take some of those lessons and embed them into our unique frame, as Michel has talked about and Taylor has also raised?

Teresa Scassa: Well, that really is an easy question.

[Teresa and Anil laugh, and Michel and Taylor smile.]

Yeah, that's… that's quite challenging. So, I'm going to give a federal government example, which I think is an interesting one, and I'm going to confess that I'm giving it because I've also spent some time looking at it, so I feel a little bit more comfortable talking about it. And it's the Directive on Automated Decision-Making and the algorithmic impact assessment tool that has come along with it. There are a lot of things that I found interesting about that initiative. One was the very open way in which it was drafted. In terms of openness and process, it wasn't perfect, but it was still innovative and different from how these things are usually done. Michael Karlin, in drafting the Directive on Automated Decision-Making, published drafts, sought input and feedback, received input and feedback, and used that to refine and develop the tool. I also find it interesting because, although it deals with automated decision-making, which different people will categorize in different ways, I was interested to see the real influence of administrative law principles in the Directive. In other words, it was drawing on a normative framework that went beyond the usual framings you see with artificial intelligence, which are that we have to look at this in terms of an ethics code or in terms of privacy.

[Anil nods.]

Ethics and privacy are part of it, but it's also framed in terms of administrative law, principles of fairness in decision-making. And so I thought that was interesting as well. It's part of this... I think it's an example of breaking outside the silos to look at some fundamental organizing principles of our society, in this case in the context of decision-making. And then it attempted to do all of this in a thoughtful, organized way with a tool that could actually be used to implement this process with the adoption of automated decision-making. And there are parts that still need to be articulated and worked out, and there's lots of ways it would be easy to sort of pull it apart and criticize different aspects of it. But there's a tremendous amount that's in that framework and there's a tremendous amount of thinking and of approaching the issues from a variety of different perspectives and embedding that.

So, I think that is a really interesting example of trying to build a framework that concretely supports fairness in the adoption of automated decision-making. It will be experimented with, used and deployed in the next little while, and we're going to get some experience from it. There are lots of things built into it to evaluate how it's doing: both how the directive itself is doing and how automated decision-making systems implemented under the directive are doing. And so, on a small scale, I think that's a tremendously interesting project on all kinds of levels and one that's worth watching. It's very limited, and that's one of my concerns about it: it applies only to a very specific category of decision-making, a very specific category of deployment of automated systems within government, and we don't have anything for all those other things. But as something to think about, to look at, to study and to see how it progresses, I would give that as an example of something that's really quite innovative and interesting.

Anil Arora: Thank you very much. Now, panelists, feel free to jump in, just kind of raise your hand and if you want to add to any of these, do so. I also encourage our participants to continue to send your questions. And what I'll do is I'll try and embed my questions with some of the questions that we're getting. So if you see me look down, it's not like I'm doing emails on the side, I'm actually looking at your questions as you're sending them.

[Anil and the three other participants chuckle.]

I'm actually paying attention, just for the record. Michel, maybe I'll turn to you. We've had a lot of conversation about data trusts as a bit of a framework for us to think about and move forward with for sharing data. I wonder if you have some thoughts on data trusts? And while you're thinking about that, can I just tee up Teresa and Taylor a little bit? You both touched on the fact that many of the dominant companies in the data era are not based in Canada, the "FAANG," I guess. What is the role for antitrust policies in protecting Canadian data from being controlled by non-Canadian entities? So if you could think about that. Michel, maybe I can turn to you on data trusts as a model: if you could just describe it a little bit for folks, what does it teach us in terms of a framework for Canada?

Michel Girard: OK, and thank you, Anil. Well, as we're thinking through these issues, we're seeing the complexity, and we're going to see the same complexity in data trusts. We can no longer think and act in silos. Data governance is not an engineering problem. It's not a software engineering problem. We have to talk to lawyers.

[He chuckles, and Anil nods.]

We have to negotiate with people who have very different sets of expertise and points of view in order to get to something. And I can tell you, before I get to data trusts, that the UN issued a report last year on data collaboratives, and they said: we built the Internet over the past 25 to 30 years (the three layers, the application layer and the networks and whatnot) and, from an engineering perspective, it works. But we never addressed governance. I mean, there is the Internet Engineering Task Force, the IETF, an international organization that managed the development of thousands of standards so that the Internet works and you can use it, but it never addressed any of the governance issues associated with the new applications that were being deployed over the past 20 to 30 years.

So in a sense, we're now harvesting issues that were planted 30 years ago when the Internet was created. And now it's even worse with big data analytics, because we have a new generation of technologies coming up. So it's no surprise that we're going to be talking about this for a while, and we need to have that conversation. Now, regarding data trusts. A data trust is a model that we could use to manage data access. For any data collaboration project or initiative to work, it's going to be very important for participants to feel secure that the data they send will be managed properly: that it will be protected, and that it will be graded, stored and disposed of in a way that everybody agrees to. The concept of the data trust takes a concept from the financial world, the financial trust, where you put money or other assets in the middle and get a trustee to manage the sum of what you put into the trust, and tries to apply that concept to data.

I think the jury's still out when it comes to data trusts. I haven't seen a lot of organizations successfully launch a data trust with those core principles in use. There are a lot of contracts associated with that; there's a lot of contract law that needs to be applied. I'd prefer a somewhat more flexible approach, where you design a handbook for data governance with a series of core principles and use that as your flexible contract between somebody upstream who is going to share data with you and somebody downstream who is going to access the data, say, the data scientist who will do the AI. So, I think the conversation we need to have is not necessarily about the model itself, but about the principles that we want those new data controllers to use so that they can protect all of us, so that we can have trust. I think Statistics Canada could be a great place to start that conversation.

[Anil nods.]

When you think about the need for data collaboratives in the country, I don't see any organization that can do this right now. But Stats Can probably has the basics to begin to churn out the right frame and make it happen. So that's just a thought, Anil, for you to think about. Canada needs data collaboration. We need data collaboratives. We need a framework so that we can get that data to those who are interested in playing with it. But it needs to be safe. It needs to be trustworthy. And there are many models that we can design that basically meet our values. That help?

[Anil nods.]

Anil Arora: Thank you very much, and folks, I did not tell Michel to put a plug in for Statistics Canada.

[Michel laughs.]

It is a public institution with a very clear mandate, and that's exactly what we do: we collect information from Canadians and turn it into consumable products in a transparent way. Legal structures are in place and the rules of the game are very transparent. And so I think those principles can, in fact, be used. So I totally understand that, but we still have work to do to get into that area, and I like your notion of "let's not be totally wedded to that model, but let's look at guidelines," so that there's broader applicability of this for the benefit of all Canadians. I wonder, Taylor or Teresa, if you wanted to jump in on that conversation before going to the second question that I posed? If you want to make a brief intervention on data trusts, by all means, go ahead. If not, I'll just turn to you, Taylor, to take it from there and see if you wanted to touch on the second question as well.

Taylor Owen: Yeah, I think the issues of market concentration and jurisdiction reveal a lot about this topic right now. So, one, it clearly shows the interconnection of these issues. When we look at the boycott that's happening right now around Facebook, for example, that boycott is a reflection of market pressure, but it is one that will almost surely not work, because we do not have market choice. The reason advertisers have returned and continue to scale their spend on Facebook and Google advertising is that there are ultimately no other choices. If you want to do that kind of advertising globally, you need one of those two companies. So, you can't have market solutions to problems when there's a market failure. And that is why we have antitrust and competition policy: to create the right conditions so that the market can function effectively. And that, I would argue, in many domains of the technology space, is not currently there.

But I think maybe more broadly it shows the limits of our ambition in this space right now in Canada, frankly. I think other governments... So, the argument we've heard for not pursuing competition policy in Canada is that we can't possibly break up companies that sit outside of our jurisdiction and that that's going to fundamentally be an American debate. And that is certainly true. But the domain of competition policy is far broader than that. And we have seen how other governments have very effectively used different levers in that space to force real change on some of these companies.

Just a few quick examples: the German competition authority has effectively changed the definition of consumer harm, or is trying to change it, from financial harm to, potentially, abuse of data, so that they will not have to demonstrate financial harm, which is very difficult to do for what are ultimately free services online. Instead, they can use the abuse of data as a form of consumer harm to apply competition policy against these companies, and that could make the business model of some of these companies very challenging to deploy in Germany. So, that's an example of a government using the tools it has to step into a space that many are saying is solely the domain of, in this case, the United States.

Mergers and acquisitions review. I mean, this is happening. Facebook just tried to buy Giphy, or is in the process of trying to buy them, and the Australian and British governments have started reviews of that purchase, which could end up halting it entirely, particularly if more countries were to collaborate in that investigation, which we certainly could do. And I think, again, getting to this interconnection of issues: if what we're trying to do is create effective market conditions and dynamism in a market to give consumers choices, we don't just have to use traditional competition policies. We can use things like enforced interoperability, and standards that companies need to follow, so that consumers have a choice of where they take their data, how those data are used, and whether they are used at all. So these things that we put in the little bucket of data policy can actually be used to address far greater policy challenges. So, a few comments on competition policy, but fundamentally, these issues are interconnected, and we need to be far more ambitious about how we view policies in these spaces. We have a lot of tools at our disposal if we want to use them.

[Anil nods thoughtfully.]

Anil Arora: Teresa, I turn to you to build on either of those two comments, and just to make sure that people don't think that I'm not picking on you equally…

[The participants chuckle.]

…I'll just add one more question from respondents, which is: when data is collected, who really owns it? You talked about data ownership. So, when somebody volunteers their data in order to use a service, does their data then, in fact, belong to the organization? So, feel free to talk about the other two issues and then see if you have any thoughts on that as well.

Teresa Scassa: Yeah, maybe I'll start with that, because Michel and Taylor have, I think, given really thoughtful answers to the other two issues. So, I'm going to jump into data ownership, which is always a fun topic. I think the term ownership is perhaps misused when it comes to data, although everybody does use it these days, because it's one of those words that gets associated with having some kind of absolute dominion over something. And when you're talking about data, I don't think it's the right vocabulary, because what you actually have are obligations, responsibilities and interests in data. So, for example, where an individual agrees to share their data with a company in order to receive a certain service, the company has an interest in that data because they need it for whatever part of their business operations it was collected for, and they may be using it in various ways to generate value or to operate their business. So, they have an interest in that data. And the individual still has an interest in that data by virtue of data protection laws, which set limits on what the company can do with it and give the individual some rights of access. And if we ever get PIPEDA reform, those interests may actually increase if we get data portability or a right of erasure, some of these other things that may be coming our way.

So, the individual maintains an interest, but the organization also has an interest in the data. And then, of course, if they use that data in other activities, such as profiling or analytics, they're creating or generating new data, which gets really complicated, because then the issue becomes: what really is the level of interest of the individual in this data, which is no longer simply the data they provided but something to which value has been added? It gets more complicated there. And I think, again, the answer remains that they have some interest in it, but it may be a different interest, and the degree or nature of the organization's interest may be different as well. So I think that's one of the challenges in that context: defining and setting the boundaries for those different interests. Data protection law is one of the ways in which that's done, and we currently have weak and out-of-date data protection laws at both the federal public sector and private sector levels. The laws aren't really, I think, fully addressing the different data needs and data interests of all of the parties involved. And so I think that's partly what is so challenging in the Canadian context.

[The other panelists nod.]

Anil Arora: Thank you very much. I was reminded that, in Statistics Canada's context, we're not necessarily interested in Teresa's individual response; Teresa is essentially donating that data as a proxy, and we then use statistical techniques to infer what's going on. So, even in that notion, where does ownership start and end in a context like ours? You're absolutely right: some of it is very contextual. It depends on the organization; it depends on the situation. So even conceptual frameworks for how we look at things like ownership are, I think, really important, because if we stop at the beginning and never evolve to the next stage, then we are doing ourselves a disservice, because we're not getting the full value of the hidden insights in those data. You can, in fact, address the privacy issues that, as you said, are really at the heart of ownership, and still use the value of the data. So it's a bit of a continuum.

Michel, a question from one of our respondents in terms of this national framework. They're asking, what role can regional development agencies play in a data-driven economy? Should we be thinking about these questions nationally? Or does a regional approach make more sense? Maybe I'll take some artistic license and say, what should be that balance, if you like, between national and regional?

[Michel chuckles.]

Because, in the Canadian context, you can't speak about one without the other. So I'll just put that out there and maybe tee up the next question for Teresa and Taylor to think about, which is: we know that AI and the Internet of Things are expanding the volume of data collected at a rapid rate. Michel talked about that. What is the role of government in regulating AI in the private sector and protecting Canadians' data? Maybe you could give that a little bit of thought while Michel is enlightening us on the other easy question.

[Michel laughs.]

Michel Girard: Yeah. Well, I'm trying to reconnect the dots to the economic strategy tables. Whether it was agriculture and agri-food, health, biosciences, high-tech digital industries or the different natural resources, they all said, "We need our own national data strategy." But clearly, these industries are not located equally across regions. So you could think of a scenario where you would take a region and say, "OK, well, we have a lot of mining taking place here, and we have anchor companies that are really interested in establishing a data value chain, upstream and downstream, with governments, for compliance and to get a better handle on transportation bottlenecks and whatnot." You could create a data collaborative for that sector, and it would probably be anchored in Alberta or in B.C. Let's think about the superclusters: we have five of them, and they're not national in scope. They're regional in scope in terms of the special knowledge that is required, and you're seeing regional nodes being created. So maybe there's a way for us to look at those various sectors and figure out what would be the best home for these things to start from. But at the end of the day, we will gain economies of scale and we will get better algorithms if we pull together the data from across the country. And even that is probably not enough. For some algorithms, we'd be better off if we had access to data from other countries. So, thinking about a regional approach to create those collaboratives, but eventually getting access to as much data as possible to get better algorithms, is probably the way to go. Does that make sense?

Anil Arora: Yeah, thank you very much. I've said before, too, that data actually appreciates in value as you start to use it, and you have to use it responsibly, obviously. That's that dialogue with Canadians. But AI isn't going to take care of underlying biases. So you do need high-quality data, in addition to volume, to be able to make good use of it.

Taylor, I put the other question on the table for you to look at. You also wrote a fascinating article arguing that we need another Bretton Woods moment to lay the foundation for global data governance, so perhaps you could weave in a few of your thoughts from that while you're addressing some of the other easy questions that our respondents are sharing with us.

[Taylor smiles.]

Taylor Owen: You're very good at layering multiple complex questions into one. I think that the AI governance question is not about governing the very specific thing of AI. Of course we should be governing the applications of AI, the companies that are building AI, the data that goes into AI, and the particular use cases that may have negative consequences on our society that we deem unacceptable. But it is a window into this much broader conversation. We are seeing real market concentration in the ownership of the data that's needed to deploy machine learning in particular. We are seeing clear social and economic consequences of the deployment of the technology, right? So, all of these things are clearly in the domain of government. A big yes, we are going to have to govern it, but that doesn't mean there is a thing called AI that we can develop a policy for. There is a technical, social, political and economic infrastructure engaged in the development of AI that we need to govern. It shows the complexity of these challenges, I think. Sometimes we end up just saying there's this thing called AI we can govern, and I think that gets us away from these much more challenging problems.

And there is an international dimension to this. When we talk about how we are going to govern the deployment of these new technologies globally, it is not at all clear which institutions should be responsible for that. You're seeing this around the conversation on AI governance right now: there is no natural home for it. The UN is not the right place. The WTO is not the right place. The EU has its own particular perspective on it. The OECD is trying to take on the mantle, but, again, they're very limited. And partly that's because we built an international institutional structure for a certain kind of economy and a certain kind of problem: an industrial economy, the management of goods and services, tangible goods in particular, and the limiting of conflict between nation states. There was a core set of problems and actors and material realities that we were trying to build institutions for. And I think there is a question now of whether those are the right institutions for these kinds of problems. In some cases, we might find that there is a natural home for some of these issues, whether it's AI, global data standards, the bioethical issues in technology that we're going to face, or geoengineering. There's a whole range of technology-fuelled challenges we face that are probably misaligned with our current institutions. So, yeah, I think that is a conversation we have to have.

Anil Arora: Another opportunity for Canada on the international stage as well.

Taylor Owen: Absolutely.

Anil Arora: Thank you very much. Teresa, your thoughts on all this? Michel and Taylor saying things that you agree with, or do you want to sort of counter any of these points?

Teresa Scassa: No, I largely agree with them. We face a lot of challenges in Canada, but one of them, and I think about it in this context of AI regulation and data protection, is this: I still believe, even though the evidence is not overwhelming, that we're about to enter a period of reform of data protection laws in Canada. Sooner or later, we will get a bill to reform PIPEDA, and there may even be one to reform the Privacy Act. In a lot of ways, I think what that's going to do is push many of these issues into that basket by default. Some of them belong there, because some of them are data protection issues, but others may be broader issues that belong elsewhere or need to be considered in another context, as Taylor has been describing.

And just because these are the laws that are most desperately out of date, most in need of amendment and highest up on the agenda, I think we're going to see a lot of the discussion around regulating AI take place in the context of data protection and of these very specific laws, which have their own limitations built in by federalism and by other sets of issues. I think that's unfortunate. It has to happen, and please, no one interpret this as me saying let's not reform those laws; that has to happen. But it's unfortunate that that's going to be how the discussion is largely framed, legally and politically, in Canada, because I do think it needs to be a broader, more comprehensive conversation. And yet what I foresee happening is that we're going to talk about AI regulation purely in terms of what provisions we're going to add to PIPEDA and debates over what the wording of those provisions should be. So I do think that one of the challenges we face is finding the place to have that broader discussion, as opposed to being sidelined into the smaller discussion because that's what's at the top of the agenda.

[Anil and Taylor nod.]

Anil Arora: Well, it's hard to believe, but we're almost out of time. Colleagues, this has been an incredibly rich discussion, and I'm sure it's just the opening act to many more within our respective organizations and communities. I want to say thank you to you all, to Teresa, to Taylor, to Michel, and to Aaron as well, and to CIGI for this incredible work. Again, thanks to Taki and the incredible leadership of the School for putting this together. It was a pleasure for Statistics Canada and for myself to be part of this discussion. So, colleagues, I hope you got a sense of both the complexity of this issue and the opportunity for Canada when it comes to getting this right.

The next event in the New Economy series will be held on August the 13th, and the event will focus on the importance of intellectual property in shaping the new economy, and will be moderated by my colleague Mark Schaan from ISED. I wish you all a fantastic rest of the day. And I want to thank you all for participating in today's conversation. Thank you all. Bye-bye.

Taylor Owen: Thank you.

[Teresa waves as the Zoom call fades out. The animated white Canada School of Public Service logo appears on a purple background. Its pages turn, closing it like a book. A maple leaf appears in the middle of the book that also resembles a flag with curvy lines beneath. The government of Canada Wordmark appears: the word "Canada" with a small Canadian flag waving over the final "a." The screen fades to black.]
