AI’s Most Imminent Problems - Douglas Heintzman, Part 1
By Kitty Chio
For over half a century, humankind has pondered the “singularity”, a doomsday scenario in which an artificial superintelligence changes the fundamental nature of civilization in unpredictable and potentially dire ways. Almost 70 years ago, the pioneering computer scientist Alan Turing asked the question “can machines think?” This question, and the consequences of its answer being “yes”, has not only led to practical questions such as the need for a “kill switch” as discussed in AI control strategies, but has also been fodder for both philosophical debate and science fiction — often combined in tropes like Asimov’s Three Laws of Robotics and Hollywood blockbusters like Blade Runner.
Fast forward to today: technological advances continue to fuel our imagination and challenge our preconceived notions of general intelligence and machine consciousness. We continue to speculate about the potential of AI technology and the societal revolution it will inevitably trigger.
In short, Douglas Heintzman identifies a number of problems that AI is beginning to present:
Data Quality & Bias: An AI engine trained on a biased data set will produce biased recommendations and actions.
Black Box Solutions: Because AI engines learn from large data sets, it can be difficult for humans to understand how they arrived at a given result or recommendation. This opacity results in a lack of accountability.
The Ethics of AI: Reports of fatal accidents involving autonomous cars are just one example that brings to light the ethical implications of AI. Heintzman states that there is a lot of room for growth when it comes to the governance and regulation of AI.
AI & Big Brother: Heintzman addresses how society will react to developments such as the EU's introduction of the GDPR and China's use of AI-driven facial and behavioural recognition systems to quantify social reputation at the individual level.
What happens when we invent an intelligence that is smarter than us?
By definition, this machine will be smart enough to invent a machine smarter than it is, and in turn that machine will, again, invent a machine smarter than it is, and so on. How long will it take for an artificial intelligence to be so different from us that we can’t relate to it in the same way that an ant can’t really relate to a human? Is AI an existential threat or is it a huge boon to human productivity and welfare? And finally, what technological, business and socio-political problems should we think through if its advent is an inevitability?
To consider these questions, the ABD team recently interviewed Douglas Heintzman, a thought leader in disruptive technologies and AI. When we asked him about the singularity he was thoughtful but ultimately dismissive: “There are so many steps between that kind of world and where we are today, and we don’t even have many ideas about what those steps are. I suspect we are at least a century away from having to worry about Skynet.”
In the meantime, Heintzman thinks there are plenty of things to worry about ranging from ethics to governance. Heintzman shared insights about the immediate challenges and opportunities that business and society are facing, and the need for a new social contract as we enter the Fourth Industrial Revolution. He believes that both our economic and political systems will need to substantially evolve in order to fully benefit from AI’s potential, and that the changes driven by AI will be upon us sooner than most people imagine.
Today’s AI Challenges
“Today AI is great at performing many tasks for which it is specifically trained. That’s great. It helps optimize air traffic, helps doctors diagnose cancer and allows us to ask a home appliance when the final season of Game of Thrones is coming out. In the real world, the problems we face are more compound, sophisticated and filled with all kinds of nuances”, Heintzman said.
“We are still a ways from truly generalized engines that we can submit compound problems to. In the meantime we need to work on the potential weaknesses, functional deficiencies, and the ethics questions those weaknesses imply.”
Problem #1: Data quality and biases
Today’s AI engines are designed and trained on a selected data corpus to do specific things. Heintzman shared with us that “one of the biggest problems that AI faces is the bias that may be hidden in training sets”. When an AI engine has been trained on a bias-laden data set — and bias in this context may simply be an under-sampling of a particular group — the engine will come up with the wrong answer or recommendation, or take the wrong action. Creating high-quality training data sets can be a tricky business: there is real complexity and effort in transforming a massive dataset with fully labelled predictors and responses for use in AI. Following the notion of “garbage in, garbage out”, Heintzman is concerned that AI engines can learn the wrong lessons if the sample data is not vetted to adequately contain the patterns of interest, or is simply not large enough. Unless the data is carefully curated, sometimes augmented with synthetic data points and attributes, the engine’s success is hugely susceptible to any biases inherent in its training data. As Heintzman says, “the AI engine might design a substandard marketing plan, make a faulty medical diagnosis, or drive a car off a cliff.”
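The under-sampling effect Heintzman describes can be made concrete with a minimal, entirely hypothetical sketch: when one group makes up only a sliver of the training data, even a trivially naive "engine" can look accurate in aggregate while failing that group every time. The groups, labels, and majority-class model below are invented for illustration.

```python
# Hypothetical illustration of bias from under-sampling: group "B" makes
# up only 5% of the training data.  Labels: group A -> class 0, B -> class 1.
train = [("A", 0)] * 95 + [("B", 1)] * 5

# A naive "engine" that simply learns the most common class in training.
counts = {}
for _, label in train:
    counts[label] = counts.get(label, 0) + 1
majority = max(counts, key=counts.get)   # learns class 0

# Evaluate on data drawn with the same skew as the training set.
test = [("A", 0)] * 95 + [("B", 1)] * 5
overall = sum(majority == label for _, label in test) / len(test)
group_b = sum(majority == label for g, label in test if g == "B") / 5

print(overall)   # 0.95 -- looks excellent in aggregate...
print(group_b)   # 0.0  -- ...but the under-sampled group is always wrong
```

The aggregate accuracy metric hides the failure entirely, which is why vetting a training set for representation, not just size, matters.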
Echoing Heintzman’s concern about the risk of persistent bias, McKinsey’s Risk Insights argues that machine learning algorithms used to predict behavioural outcomes are also prone to bias, because of their heavy dependence on historical patterns and the persistence of prejudices carried forward from criteria initially programmed by their human architects. Predictive models built this way fail to recognize new patterns that are absent from historical data, and reinforce the same biases under the assumption that things will function more or less as before. For example, a social media recommendation engine that filters news based on existing user preferences will naturally encourage confirmation bias in readers, which in turn amplifies stability bias in future recommendations.
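The feedback loop in that recommendation-engine example can be sketched in a few lines. This is a deliberately naive, hypothetical recommender: it always serves the user's most-clicked topic, the user clicks what they are shown, and the initial preference compounds while every other topic is starved.

```python
import collections

# Hypothetical click history: "politics" leads by a single click.
history = collections.Counter({"politics": 6, "science": 5, "sports": 4})

def recommend(history):
    # Naive engine: always serve the single most-clicked topic so far.
    return history.most_common(1)[0][0]

for _ in range(20):
    topic = recommend(history)
    history[topic] += 1          # the user clicks what they are shown

print(history)   # the initial one-click lead compounds; other topics never grow
```

A one-click head start hardens into total dominance of the feed, which is the stability-bias dynamic McKinsey warns about.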
Though it is no easy feat, it is possible to lessen data biases not only through rigorous data cleansing but also through advances in algorithmic design. Heintzman is especially excited about the potential of unsupervised neural network architectures, such as generative adversarial networks (GANs), that show promise in combating such biases. GANs pit two independent neural networks, a generator and a discriminator, against each other in a double feedback loop, creating an adversarial learning environment in which each trains the other.
As Heintzman explained, “The generator tries to fool or prevail over the discriminator, and the discriminator tries to detect when it is being fooled. A generator might, for example, be instructed to create a photo-like representation of a flower. The discriminator tries to detect the fraud. As these two networks iterate this process over and over again, the generator gets better and better at creating flower photos and the discriminator gets better and better at detecting frauds. These two engines can train each other very well, very quickly.”
A notable example of adversarial self-training — though not strictly a GAN — is DeepMind’s AlphaZero, which generates its own self-play games (Go, chess, shogi) and trains its networks on them in parallel, all without access to exhaustive databases of opening and endgame positions.
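The adversarial loop Heintzman describes can be shown with a toy one-dimensional GAN, hand-rolled with explicit gradients rather than a deep learning framework. Everything here is a simplified assumption for illustration: the "real" data is a Gaussian around 4.0, the generator has a single learnable parameter (its mean), and the discriminator is a one-feature logistic classifier. The two still train each other: the discriminator learns to separate real from fake, and its gradient drags the generator's output toward the real distribution.

```python
import math
import random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical "real" data: samples from a Gaussian centred on 4.0.
REAL_MEAN, REAL_STD = 4.0, 0.5

# Generator G(z) = mu + 0.5*z with one learnable parameter mu (init 0).
mu = 0.0
# Discriminator D(x) = sigmoid(a*x + b) with learnable a and b.
a, b = 0.1, 0.0
lr = 0.05

for step in range(5000):
    real = random.gauss(REAL_MEAN, REAL_STD)
    fake = mu + 0.5 * random.gauss(0, 1)

    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    d_real = sigmoid(a * real + b)
    d_fake = sigmoid(a * fake + b)
    a -= lr * (-(1 - d_real) * real + d_fake * fake)
    b -= lr * (-(1 - d_real) + d_fake)

    # --- Generator update: push D(fake) -> 1 (fool the discriminator) ---
    fake = mu + 0.5 * random.gauss(0, 1)
    d_fake = sigmoid(a * fake + b)
    mu -= lr * (-(1 - d_fake) * a)   # non-saturating generator loss

print(round(mu, 2))   # mu drifts from 0 toward the real mean (~4.0)
```

Real GANs replace these scalars with deep networks and images, but the alternating two-player update is exactly this loop.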
Problem #2: Black Box Solutions
Another significant challenge that Heintzman and other AI professionals are concerned about is known as the “black box problem”. Whereas a traditional computer program executes a defined logic tree which can be examined to understand why a certain decision was made, AI engines that learn from large training sets and from experience can be “black boxes”: it may not be clear why such an engine made a certain decision or recommendation. As Heintzman pointed out, “This opacity of decision making introduces some significant challenges around the assignment of accountability”.
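The contrast between a traceable logic tree and an opaque learned model can be sketched directly. The loan-approval scenario, the rules, and the "learned" weights below are all hypothetical stand-ins: the point is that the first function can state *why* it decided, while the second can only report a score.

```python
# Hypothetical contrast: a hand-authored logic tree vs. a "learned" model.

def rule_based_loan_decision(income, debt):
    # Transparent: each branch is an inspectable, human-authored reason.
    if income < 30_000:
        return "deny", "income below 30k threshold"
    if debt / income > 0.4:
        return "deny", "debt-to-income ratio above 40%"
    return "approve", "passed all explicit rules"

def learned_loan_decision(features, weights):
    # Opaque: the decision is a weighted sum of learned coefficients.
    # No individual weight maps cleanly onto a human-readable reason.
    score = sum(w * x for w, x in zip(weights, features))
    return "approve" if score > 0 else "deny"

decision, reason = rule_based_loan_decision(income=50_000, debt=10_000)
print(decision, "-", reason)           # approve - passed all explicit rules

# The same applicant as a feature vector fed through "trained" parameters:
weights = [0.00004, -0.0001, 0.7]      # hypothetical learned coefficients
features = [50_000, 10_000, 1.0]       # income, debt, bias term
print(learned_loan_decision(features, weights))   # approve -- but why?
```

A real learned model has millions of such coefficients rather than three, which is what turns "hard to read" into the accountability problem Heintzman describes.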
Problem #3: The Ethics of AI
The first reported fatal accident involving an autonomous Uber test car and a pedestrian, in Arizona, raises this question of accountability. Putting aside the question of whether the safety driver could have avoided the collision had they been watching the road at the time of the accident, and the argument that autonomous vehicles can see better, react faster and are statistically safer than human drivers, the question of accountability needs to be addressed.
Was the driver responsible because of inattentiveness?
Was the software maker responsible for discarding the sensor reading as a false positive?
Was the sensor maker responsible due to a lack of resolution?
Was the car maker or car owner responsible for putting the car on the road in the first place?
Heintzman argues that “the sensor recordings will likely help in assessing the distribution of responsibility in situations like this, but there will be situations where we don’t understand why an AI made a certain decision. Because of AI’s potential to dramatically improve transportation safety in general, we as a society might simply have to accept some degree of unknowingness and embed that in our insurance and judicial systems.” Still, it is becoming evident to Heintzman and other like-minded technology and policy experts that there is a lot of room for growth on the governance and regulatory front, particularly in handling cross-disciplinary complications.
Problem #4: AI and Big Brother
An even more pressing concern for many people is AI’s role in a “big brother” society.
“China is one of the largest investors in the development and deployment of AI”, Heintzman explained.
“They are using AI-driven facial and behavioural recognition systems to monitor the behaviours and actions of various minority populations. The data collected from this type of surveillance is used to quantify social reputation at an individual level. This directly dictates one’s access to certain levels of housing or education.” Heintzman and government leaders are growing increasingly concerned as the public under surveillance ponders a fundamental question.
“Is this a world we want to live in?” Heintzman argues that the answer is actually “yes”, at least to some extent. “The convenience and value added to public safety, transportation infrastructure and medical outcomes, for example, are very compelling and will have to be balanced against privacy concerns and potential misuse by governments and companies.” To their credit, governments are starting to tackle privacy issues more seriously, but anything approaching a unified standard or an agreed-upon approach has yet to emerge.
After the European Commission introduced the General Data Protection Regulation (GDPR) to give data ownership back to consumers, policy leaders in America and China followed suit. According to the Council on Foreign Relations, American policy, conventionally known for supporting big tech in a laissez-faire manner, is being re-examined by the nation’s top policy makers amid mounting public pressure over a series of data breaches and privacy intrusions involving companies such as Facebook and Cambridge Analytica.
China’s Personal Information Security Specification took effect in May 2018 and is broadly similar to the GDPR, but was designed to be less cumbersome for businesses. With such divergent political and economic agendas, new policies are bound to emerge as world powers and tech giants learn to balance data monetization with privacy regulation. Heintzman argues that progress in this area is critical, especially in light of the possibility that “the vast majority of AI created value and wealth will flow to two super states, China and the United States.”
Conclusion: An AI Driven Economy
Heintzman isn’t alone in his prediction of an AI-driven economy dominated by two economic superpowers. Kai-Fu Lee, the founding president of Google China, also foresees the rise of an AI duopoly in the global political landscape, with China and the United States as the dominant players. China is well beyond its historical dependency on IP theft to power its technical innovation. The world’s second largest economy has gained tremendous strength in AI through massive investment in research and development. Moreover, China has adopted a business model based on rapid iterative product design, informed by vast market data harvested from the world’s largest population. Lee predicts that China will eventually dominate the markets of South East Asia, Africa and parts of South America, leaving the United States with Europe, Australia, and North America. Heintzman postulates a possible implication of this situation: the economics of AI may lead to a superstate-versus-client-state reconfiguration of the world order.
“We are already seeing international trade in data. In the not-so-distant future, data will be treated as a commodity. So much economic value will flow from AI-driven decision making. It is entirely possible we will see a situation where two superstates will agree to fund comprehensive welfare systems for their client states in exchange for market access and access to data.”
How the future of the world’s economic and political landscape plays out is as yet uncertain, but what is certain is that competition between the United States and China will accelerate AI development.
Douglas Heintzman is the Practice Lead for Innovation at the Burnie Group, a management and technology consulting firm in Toronto, Ontario. He is also chair of the selection committee for the NSERC Synergy Awards for Innovation and a jury member of the Robot of the Year, the first international prize rewarding the best innovations in ethical artificial intelligence and robotics beneficial to humans, across 11 industries worldwide.