Is Privacy Tech Europe’s Secret Weapon In The AI Race?

DO EUROPE’S PRIVACY LAWS PROVIDE A HIDDEN ADVANTAGE WHEN IT COMES TO ARTIFICIAL INTELLIGENCE?

While many pundits argue that European companies cannot compete with their US and Chinese peers in artificial intelligence (AI), due at least in part to Europe’s attitudes and laws on privacy, the experts we recently spoke with believe that innovations brought to bear by European companies and individuals will have a significant impact in the AI race.

Our webinar featured Ivana Bartoletti, Technical Director for Privacy at Deloitte and Co-Founder of the Women Leading in AI Network; Leo Clancy, Head of Technology, Consumer & Business Services at IDA Ireland; and Dave Lewis, Associate Professor at the School of Computer Science and Statistics at Trinity College Dublin and Deputy Director of the ADAPT SFI Research Centre for Digital Content Technology. Together, they explored the role of European companies, and European values, in AI-based innovation.

View the AI Race webinar

Constraints Faced by European Companies in Artificial Intelligence

At the outset, the panellists agreed that European companies face certain constraints on AI-based innovation that their US and Chinese peers do not. As described by Leo Clancy, US and Chinese companies have an advantage in this space for historical reasons (AI having had its genesis in the US), strategic reasons (the Chinese government making it a strategic priority to successfully develop AI-based applications) and the fact that “they are both large, federated jurisdictions with large populations…and regulatory access to data.” But, as Dave Lewis explained, Europe’s key disadvantage is not just the smaller quantity of data available for AI purposes but rather its reduced “ability to collect and concentrate that data” due to differences in culture and legal regimes, which has resulted in Europe not having “the same large, consumer-driven companies [as the US and China do] that are making the strides in AI.”

Concerns Regarding the Ethical and Privacy Issues Surrounding Artificial Intelligence Are Increasing

We delved into the ethical and privacy risks that come with AI, noting the role of technology in addressing those risks. As described by Ivana Bartoletti, “Privacy-enhancing technologies are crucial – because if we want to manage the risks, while innovating in a European way…such that we’re innovating in a way that respects human rights, then that’s where privacy-enhancing technologies come in.”

But the panellists also cautioned against relying purely on technology. Ms. Bartoletti stated that there is a need to focus both on privacy-enhancing technologies and on the data handling standards and values those technologies are designed to protect, which stem primarily from the principles of “fairness, transparency and explainability.” As Ms. Bartoletti explained, “what we’ve seen happening in Europe is the setting of high standards when it comes to data handling. And these standards are not just European – we’re seeing the entire world following suit, such as with the CCPA in California. This shows the soft power of the EU, which is the idea that a regulatory framework does not necessarily hinder innovation.”

Privacy Tech’s Role in Artificial Intelligence

This leads one to the proverbial “privacy paradox”: consumers state clearly time and again that they want their data used responsibly by companies, yet they continue to use services that handle their data irresponsibly from a privacy perspective. It also raises the question of whether this time is different and whether responsible data usage will become the next conviction-based movement for consumers (like the green, organic and fair trade movements). Ms. Bartoletti thinks this may well be the case: “I think now things are shifting. I think we are in a different era – consumers want to see that companies care.” And, in terms of those companies, she continued, “privacy is becoming a differentiator for companies. Companies want to invest in privacy-enhancing technologies, partly because of course it’s the right thing to do, but also because it’s a great investment in terms of growth for the company.”

Europe’s Impact in Global Artificial Intelligence-based Innovation

To really understand the likely impact of European values and companies on AI globally, you need to think of this not as a global “AI Race” or competition, but rather as a global ecosystem to which Europe has made (and will continue to make) valuable contributions, including via its legal regimes and its ability to help set standards. “I don’t think of this as a ‘race’ in connection with AI, but rather an opportunity for Europe to be a very strong part of the global supply chain set, starting with GDPR (which has been very successful as an export),” says Mr. Clancy. As evidence, Mr. Clancy points out that “some of the very largest companies that have data-oriented business models in the world have been thinking about GDPR in building their own data handling models.”

With that lens, various ways emerge for Europeans to turn their seeming constraints in AI into advantages that will help them bring innovations to AI – in a way that would be much more challenging for US or Chinese companies. “Europe’s an interesting case in point, because we don’t benefit from having these huge consumer-facing companies,” says Professor Lewis. This, he continues, drives European companies to focus on “how can we do more with less data…so [those constraints] enable us to innovate in other areas, which then means that we can apply the benefits of AI to areas where you’ll never get large amounts of data. These constraints also benefit [Europeans] in terms of thinking more about the legal, governance and societal aspects…such as data trusts…relating to AI and we’re therefore probably more likely to reach solutions because we’re working with these constraints, which are really benefits.” Mr. Clancy agreed, saying that “One thing that Europe has certainly led on is the thinking that it’s done on the data impacts on society, and particularly around privacy. This has certainly been a beacon for other jurisdictions in terms of how their companies think of AI policies.”

Summing up these thoughts, Ms. Bartoletti stated that “the recent Schrems II decision highlights something that is important to consider here, which is how the EU brings together the international dimension to data usage while also safeguarding the privacy and human rights in connection with data. This gets back to why the EU introduced the GDPR – because, yes, we wanted to put dignity of humans at the heart of what we do. But we also wanted to enhance the possibility of transferring and sharing data for the advancement of the EU – so there was a strong economic underpinning in this. And [European companies] have to go back to that and ask ‘what else can we do to continue to harvest the value of this data [while also respecting the values enshrined in the GDPR], such as creating new data infrastructures, creating new intermediaries, etc.’”

Perhaps this is how Europe, and European companies, are destined to “compete in the AI Race”: not in some absolute sense, but by applying European values and conditions to enhance AI-based innovation – turning the factors unique to Europe, which may superficially appear to be constraints, into benefits that make European companies more attuned to opportunities to use data innovatively but also responsibly, in a way that resonates with people globally. And, perhaps most critically, Europe can “win” in AI by shaping how companies and people globally (including in the US and China) use data for AI purposes – helping them appreciate and embrace the benefits of using data productively but also responsibly, mindful of the need to ensure at all times that data is used consistently with the global (not just European) desire for fairness, transparency and explainability.