Artificial Intelligence: Challenges, Risks, Opportunities & Outlooks

Victor Manzueta
Apr 14, 2020

What is Artificial Intelligence?

“It could be terrible, it could be great. It’s not clear. One thing is for sure. We will not control it.” -Elon Musk

Artificial Intelligence (AI) describes intelligence demonstrated by machines, particularly systems that mimic “cognitive” functions humans associate with human minds, such as “learning” and “problem solving”. (Wikipedia) A pointed contemporary characterization: today’s AI is a machine that can make a perfect chess move while the room is on fire. (Li)

Computer scientists have been training information technology to perform complex strategies since the 1950s. Since 1995, internet usage among American adults has gone from 9% to 89%. In 2010 there were 1.8 billion people connected online; today it’s roughly 2.8 billion, and that figure was projected to reach 7 billion by 2020. (Silva) Complementing these internet usage statistics, AI computation is also advancing exponentially. In the last decade, AI was revolutionized by cloud computing, deep learning, neural networks, and backpropagation. (Somers) These technologies and techniques began allowing machines to discern complex patterns in huge quantities of data, build representations of ideas (via pictures, words, recordings, medical data, etc.), and react in real time. (Sechler) They have spawned Google’s search algorithm, self-driving cars, cancer-detecting computers, and machines that instantly translate spoken language. Smart toilets currently perform daily microbiome analysis and generate personal profiles for users. (Duncan) Deep learning algorithms are being used to find patterns in data, detect vulnerable user behaviors, and predict security trends (e.g., Azure Sentinel, Microsoft’s cloud-based Security Information Event Monitor). (Kucharski) Companies are seeing more precise, higher-quality manufacturing with lower operational costs; less downtime thanks to predictive maintenance and intelligence in the supply chain; and fewer injuries on factory floors thanks to more adaptable equipment. (Gallagher)
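Backpropagation, mentioned above, is the workhorse behind many of these advances. For a one-weight model it reduces to plain gradient descent, which can be sketched in a few lines; deep networks chain the same gradient step back through every layer. The data and learning rate below are purely illustrative.

```python
# A minimal sketch of the gradient-descent learning that backpropagation
# performs. For this one-weight model (y_hat = w * x), "backprop" is the
# chain rule applied once; deep networks chain it through every layer.
# The data and learning rate are invented for illustration.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # targets are exactly 2 * input
w = 0.0      # single trainable weight
lr = 0.05    # learning rate

for _ in range(200):
    grad = 0.0
    for x, y in data:
        y_hat = w * x
        grad += (y_hat - y) * x  # d/dw of 0.5 * (y_hat - y)**2
    w -= lr * grad / len(data)   # step downhill on the mean squared error

print(round(w, 3))  # 2.0: the weight has learned the underlying rule
```

Nothing more exotic than this loop, repeated across millions of weights and layers, is what lets the systems above “discern complex patterns in huge quantities of data.”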

The revolution of AI seems destined to infiltrate nearly every aspect of our lives. Revolutions take unexpected turns, however. (Sechler) AI will eventually help us make sense of all manner of health and biomedical data. Within the coming years, mental health, drug discovery, lifestyle management, virtual assistants, hospital management, medical imaging and diagnostics are all poised to reap the rewards of AI technology. (Duncan) While AI may automate many of today’s jobs and relieve us of daily tedious tasks, it may also steal our careers, rob our lives of meaning, or worse. “I think we should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it’s probably that. With artificial intelligence we are summoning the demon.” (Elon Musk)

What are the Existential Challenges and Risks of AI?

Current and prospective AI technologies present distinct challenges, risks, impacts, and opportunities. AI will undoubtedly enhance products and services and optimize internal business operations across the industrialized world. Nevertheless, ever since the beginning of industrial society, people have simultaneously marveled at the power of automation and lamented that human capabilities are being devalued. (Porlan) These laments are proving true as AI economies begin to take shape. Estimates suggest that 60–70% of India’s high-tech economy will be replaced by AI within 10 years. Automation could hit India particularly hard because much of its high-tech economy involves routine work that is automatable. “In the near term, people will lose their livelihoods.” (Subramanian) These shifts in workforce demands could cause conflict and cyber-crime. Autonomous cars will be prime targets for such attacks. “People would want to hack automated cars because AI could leave a lot of people unemployed, and some are going to be angry.” (Garfinkel) These attacks have already begun: security researchers managed to jam various sensors on the Tesla Model S, making objects invisible to its navigation system. (Greenberg) Defending against traditional cyber-attacks as well as advanced adversarial machine learning attacks will be a key security engineering priority for AI researchers moving forward. AI systems need to be engineered to defend against unknown evils. Security engineering considerations for AI applications include data integrity and abuse detection. Systems should be designed to perform in a safe, secure, and reliable manner, with safeguards in place to prevent unintended adverse impacts. Security architectures must incorporate resilience to attack, fallback plans, and general safety, accuracy, reliability, and reproducibility. (Independent European Commission)

While AI platform security is important in order to withstand attack, the integrity of fundamental AI engineering is equally important in order to avoid a distinctly disastrous peril. AI is currently in a nascent, fragile, and brittle state. Even the most advanced machine learning systems, which mine enormous data lakes and require extensive preparation by human researchers, special-purpose code, curated training data, and a new custom learning structure for each problem domain, can still be fooled by simple, primitive noise added to their inputs. Capabilities are similarly uneven: facial recognition software can tell identical twins apart, but ask a computer whether people on a surveillance camera are fighting or being playful, and it has no clue. (Mitchell) AI maturity is still generally low across the board. (Loucks)
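The “simple, primitive noise” attack can be illustrated with a toy example. The sketch below applies the idea behind the fast gradient sign method to a linear classifier: since the gradient of a linear score with respect to the input is just the weight vector, nudging every feature a small step against the gradient’s sign is the worst-case bounded perturbation. The weights and input are invented for illustration; real attacks target trained deep networks the same way.

```python
# A pure-Python sketch of fooling a classifier with crafted "noise"
# (fast-gradient-sign style). Weights and input are illustrative.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def score(w, b, x):
    # Linear classifier: positive score means class 1, negative class 0.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w = [1.0, -2.0, 0.5, 1.5]    # hypothetical trained weights
b = 0.0
x = [0.5, -0.25, 0.5, 0.25]  # an input the model confidently calls class 1

# The gradient of the score w.r.t. x is just w, so bounding each feature
# change by epsilon and moving against the gradient sign is the
# worst-case small perturbation.
epsilon = 0.4
x_adv = [xi - epsilon * sign(wi) for wi, xi in zip(w, x)]

print(score(w, b, x))      # 1.625  -> class 1
print(score(w, b, x_adv))  # -0.375 -> flipped to class 0
```

A per-feature change of 0.4 flips the classification entirely, which is why resilience to adversarial inputs has become a core security engineering concern.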

This nascence leaves AI very susceptible to fundamental flaws that may be difficult to remove as platforms evolve and are built upon, and those flaws might lead us to make wrong decisions based on AI recommendations. A major concern of AI engineering is the propagation of automated social biases. AI is already learning to be prejudiced against certain dialects: people with strong accents have trouble being understood by Siri and Alexa. Baked-in social bias could have notable negative impacts on society, for example Google ads displaying higher-paying job ads to men than to women (Goldstaub), or biased algorithms making life-changing decisions like denying prison inmates parole or granting loans (Dwork). AI systems used for ‘predictive policing’ may help reduce crime, but in ways that entail surveillance activities impinging on individual liberty and privacy. (Hanbury) At a macro level, these kinds of flaws could easily distort decisions in finance, health care, education, and the judicial system.

To address the potential for social bias in AI, the Principle of Explicability should be incorporated into all AI designs: processes should be transparent, the capabilities and purpose of AI systems openly communicated, and decisions explainable to those directly and indirectly affected. Nevertheless, proprietary ‘black box’ algorithms, the workings of which are unknown, are commonplace within the AI industry. The risk that our unconscious sexism or unconscious racism is seeping into the machines we are building is ever-present. (Goldstaub)
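One concrete practice explicability can take is a routine audit of decision rates across groups. The sketch below checks demographic parity, i.e. whether an automated decision such as loan approval is granted at similar rates for two groups; the records and group labels are fabricated for illustration.

```python
# A sketch of one explicability practice: auditing an automated decision
# for demographic parity. All records below are made up.

def approval_rate(records, group):
    outcomes = [r["approved"] for r in records if r["group"] == group]
    return sum(outcomes) / len(outcomes)

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

# Demographic parity asks whether approval rates are similar across groups.
gap = approval_rate(decisions, "A") - approval_rate(decisions, "B")
print(f"approval-rate gap: {gap:.2f}")  # 0.50: large enough to demand an explanation
```

A check this simple cannot prove fairness, but an unexplained gap of this size is exactly the kind of signal the Principle of Explicability obliges a system’s operators to surface and account for.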

The existential threats, risks, challenges, and opportunities of AI have created a sense of urgency among developed nations. According to Vladimir Putin, whoever leads in Artificial Intelligence will rule the world. (Meyer) To that point, “path dependency suggests that the entity that develops rules first may be able to impact / shape the rules of the road for others.” (Dr. Maryann Cusimano Love) World leaders agree, and a technological race among a select number of key players on the global stage of AI has begun.

Who are the Global Players in AI?

The US, China, and the EU are different beasts in the same technological jungle. Each has developed an AI strategy to advance its capabilities through investment, talent development, and risk management. In the US, control of citizen data is privatized; in China, it is the opposite: the Chinese government owns and operates its citizens’ data as it sees fit. At a macro level, the US and China both have centralized data economies. The EU finds itself in the middle, seeking a place on the global AI stage. While these entities are taking distinct approaches to AI integration (e.g., the US Executive Order on AI Leadership, China’s Next Generation Artificial Intelligence Development Plan, AI Made in Germany, etc.), they are all establishing education programs and pursuing R&D to support business within their borders. The US and China are investing the most, but the EU is not far behind. (Loucks)

How do their Approaches to AI Differ?

The US is the global leader in Artificial Intelligence research, development, and deployment. In 2018, venture capitalists pledged $8 billion in investment capital to this field. The US’s AI strategy is based on five principles:

- Drive technological breakthroughs in AI to promote scientific discovery, economic competitiveness, and national security

- Drive technological standards development and reduce barriers to testing and deployment

- Train the US workforce to develop and apply AI technology

- Foster public trust in AI technology

- Promote an international environment that supports American AI research and innovation and open markets for American AI industries

The objectives of the US’s AI strategy are to:

- Promote sustained investment in AI R&D for continued economic and national security

- Enhance access to data/models/computing resources while maintaining safety, security, privacy and confidentiality protections

- Reduce barriers to the use of AI technology

- Ensure tech standards minimize vulnerability to attacks by malware

- Train next generation of American workers through apprenticeships

- Develop/implement an action plan to protect the advantage of the US in AI against strategic competitors and foreign adversaries

Unlike the European Union, which has gone down the path of regulatory influence on AI integration, the US has been relatively permissive toward AI technologies. (O’Sullivan) The US is pursuing decentralized development and experimentation through the establishment of a common foundation of shared data, reusable tools, frameworks, standards, and cloud infrastructure. This is because no US federal agency is dedicated to AI; instead, multiple authorities have vested interests in the ethical development and integration of the technology. For example, the FTC and the National Highway Traffic Safety Administration have hosted workshops to determine how to oversee self-driving car technology, and the DHS has put out advisory notifications about potential AI threats to critical infrastructure. Regulators can only enact policies, guidance, and legislation that relate to their specialized knowledge. Nevertheless, various entities have expressed interest in establishing governance structures for artificial intelligence development within the United States (e.g., a ‘Federal Search Commission’ or ‘Federal Robotics Commission’). These proposals adopt the precautionary principle, which holds that innovation must be decelerated or halted altogether if a regulator determines that the associated risks are too much for society to bear. (Knight) For the US, security is the top concern: specifically, protecting institutional data and guarding critical AI models against biased influence. (Porlan) As a result, the US Department of Defense is betting big on AI. The 2017 Army Robotics and Autonomous Systems Strategy predicts full integration of autonomous systems by 2040, replacing today’s bomb-disposal robots and other machines that are remotely operated by humans. (TRADOC) AI will transform the Department of Defense, spanning operations, training, sustainment, force protection, recruiting, healthcare, and more. (Department of Defense)

The EU believes in human-machine partnerships. Germany and France have pledged over 4 billion euros to AI research institutions, talent development, open data ecosystems, AI ethics, and enhancements of sector-specific AI contributions. These investments aim to improve competitiveness by adding over 32 billion in revenue to manufacturing output. (Loucks) Their vision of AI is one of ‘Industry 4.0’, an industry in which ‘machines communicate with each other, inform each other about defects in the production process, identify and re-order scarce material inventories.’ (Brooks) They have established organizations such as the European Factories of the Future Research Association, which aims to steer the industrialization of AI in the most impactful and profitable direction possible. Optimism about the EU’s AI initiative centers on its car industry: the EU holds a large share of international patents for autonomous vehicles thanks to its mature and technically advanced automotive sector. A further opportunity for the EU to take a leading role on the global AI stage is through policy and governance. The EU has been fairly progressive in cyber-related policy that has impacted international entities; recently, Facebook faced a potential $2.2 billion fine for GDPR infractions. (Hanbury) The EU can build upon this and pursue political and regulatory leadership through AI governance. To this end, the EU recently drafted Ethics Guidelines for Trustworthy AI. These guidelines are based on four ethical principles: Respect for Human Autonomy, Prevention of Harm, Fairness, and Explicability. They are also based on seven key requirements:

- Human Agency and Oversight

- Technical Robustness and Safety

- Privacy and Data Governance

- Transparency

- Diversity, Non-discrimination and Fairness

- Environmental and Societal Well-Being

- Accountability

France, Finland, and Germany have created individual AI strategies; the greater EU will need to condense these efforts into a single plan. The need for such collaboration is one source of skepticism about the EU’s ability to become a global power in AI, because political divisions can hinder progress. Case in point: plans for a joint France/Germany AI research center have been abandoned due to lack of prioritization. Another source of skepticism is the fact that China and the US possess vast amounts of vital data (via Google, Amazon, Huawei, etc.), while the EU does not. The more data AI applications receive, the more accurate their decisions become in identifying threats and responding to them. Because of this, the EU is very likely to adopt mature foreign technology. EU adopters have reported major or extreme concerns over the risks of AI related to unfair bias. (Loucks) They believe AI systems will be able to shape and influence human behavior through sub-conscious processes, deception, herding, and conditioning. (Sechler) To defend against this doomsday scenario, the EU is seeking national debates on regulatory and ethical standards, and urging stakeholders all over the world to build an international consensus and work toward a global framework for Trustworthy AI. (Independent European Commission)

The Chinese government has declared its ambition for China to become the world’s leading AI innovator by 2030. (Loucks) While many countries fear that AI will eliminate jobs and worsen wealth inequality, China believes the opposite. (Knight) The Chinese believe that human workers and AI technologies will augment each other to produce new ways of working. They are the least concerned with AI risks among early AI adopters, and believe they are widening a lead over the competition in this space. In 2017, Chinese AI startups received 48% of global AI venture funding. China has the world’s second-highest number of AI companies and is home to the world’s most highly valued AI company, SenseTime Group Ltd. SenseTime produces computer vision technology; it developed one image-processing technique for automatically removing smog and rain from photographs, and another for tracking full-body motion using a single camera. (Knight) China’s longer-term AI strategy calls for homegrown AI to match that developed in the West within three years. It is pursuing this goal through various avenues. First, education: 80% of the funding at China’s two best universities (Peking University and Tsinghua University) is aimed at commercializing AI. Second, innovation: China is amassing immense data lakes to feed its AI platforms, the results of which can be seen in sophisticated machine-learning facial recognition systems that identify workers at offices, customers in stores, and users authenticating to mobile apps. (Independent European Commission) When China sets its sights on a goal, China succeeds. In the previous decade it set its sights on a high-speed rail network that would spur technological development and improve the country’s transportation system; that network is now the most advanced in the world.

Skepticism about China’s AI ambitions stems from the fact that China is not known for a strong research culture. Its cyber-espionage campaigns have victimized world markets and global entities in order to steal intellectual property, because it is easier to steal a capability than to create it. This is reflected in the fact that, among AI early adopters, China possesses the smallest proportion of companies and commercial entities with mature AI capabilities. While China is the most optimistic and resourceful AI power on the global stage, it will likely come to recognize the risks and challenges of AI integration as its experience increases. (Porlan)

How does it end?

As AI grows to touch more domains of existence, cyberattacks, embedded bias, and centralized governance can have dangerously influential impacts on our daily lives. The global players in AI, with their distinct approaches, priorities, and outlooks on the evolution of this technology, must guard us from these doomsday scenarios while allowing AI to take us into a prosperous future. In order to achieve long-term strategic maturity in this field, these players will need to put AI-related policies, procedures, and metrics in place. Policymakers should embrace humility, collaboration, and transparent solutions instead of antiquated and overly bureaucratic approaches that can stifle innovation. The age of smart machines needs a new age of smart policy. (O’Sullivan)

The benefits of AI systems will exceed the foreseeable risks. AI may eliminate certain jobs, but it will expand the economy and create wealth by making many industries far more efficient and productive. We need to come to grips with exponential change while realizing that humans will not be taken out of the loop. AI will revolutionize how humans work. Tomorrow’s smart machines will be shaped by human historians, sociologists, psychologists, and policymakers who can teach context, unbiased logic, and the rule of law. Global players in AI policy will bring about human-machine partnerships. Through these partnerships, “human consciousness will be freed from the constraints of Darwinian life. No more survival of the fittest. No more fighting to survive; just the highest expressions of our humanness, fully flourishing. It’s art, and poetry, and love, and radical self-expression. It’s meaning spilling over.” — Jason Silva


Brooks, Rodney. “The Seven Deadly Sins of AI Predictions”. MIT Technology Review, Vol 120, No 6, Page 79. 1-Nov-17

Department of Defense. “Summary of DoD AI Strategy — Harnessing AI to Advance Our Security and Prosperity”. Department of Defense. 12-Feb-19

Duncan, David Ewing. “Constant Monitoring + AI = Rx for Personal Health”. MIT Technology Review, Vol 120, No 6, Page 46. 1-Nov-17

Dwork, Cynthia. “How to Root Out Hidden Biases in AI”. MIT Technology Review, Vol 120, No 6, Page 53. 1-Nov-17

Gallagher, Sean. “The Fourth Industrial Revolution emerges from AI and the Internet of Things”. Ars Technica. 18-Jun-19

Garfinkel, Simson. “How Angry Truckers Might Sabotage Self-Driving Cars”. MIT Technology Review, Vol 120, No 6, Page 14. 1-Nov-17

Goldstaub, Tabitha. “The Dangers of Tech-Bro AI”. MIT Technology Review, Vol 120, No 6, Page 45. 1-Nov-17

Greenberg, Andy. “Hackers Fool Tesla S’s Autopilot to Hide and Spoof Obstacles”. Wired. 4-Aug-16

Hall, Louisa. “How We Feel About Robots That Feel”. MIT Technology Review, Vol 120, No 6, Page 74. 1-Nov-17

Hanbury, Mary. “Facebook is looking down the barrel of a $2.2 billion fine for storing millions of passwords insecurely”. Business Insider. 25-Apr-19

Independent European Commission. “High Level Expert Group on Artificial Intelligence — Ethics Guidelines for Trustworthy AI”. Independent European Commission. 8-Apr-19

Knight, Will. “Another Way AI Programs Can Discriminate Against You”. MIT Technology Review, Vol 120, No 6, Page 15. 1-Nov-17

Knight, Will. “China’s AI Awakening — The West Shouldn’t Fear China’s Artificial Intelligence Revolution. It Should Copy It”. MIT Technology Review, Vol 120, No 6, Page 68. 1-Nov-17

Kucharski, Kyle. “Where is AI Used the Most? Cybersecurity”. PCMag. 24-Apr-19

Li, Fei-Fei. “Fei-Fei Li Q+A”. MIT Technology Review, Vol 120, No 6, Page 26. 1-Nov-17

Loucks, Jeff. “Future in the Balance? How Countries are Pursuing an AI Advantage”. Deloitte Insights. 1-May-19

Meyer, David. “Vladimir Putin Says Whoever Leads in Artificial Intelligence Will Rule the World”. Fortune. 4-Sep-17

Mitchell, Gareth. “Can facial recognition software differentiate between identical twins?”. Science Focus.

O’Sullivan, Andrea. “Don’t Let Regulators Ruin AI”. MIT Technology Review, Vol 120, No 6, Page 73. 1-Nov-17

Pettey, Christy. “Global AI Business Value to Reach 1.2 Trillion in 2018”. Gartner. 25-Apr-19

Porlan, Miguel. “Fearsome Machines: A Prehistory”. MIT Technology Review, Vol 120, No 6, Page 37. 1-Nov-17

Sechler, Craig. “A.I. and the Destiny of Mankind”. Curiosity Stream. 5-Sep-18

Silva, Jason. “The Road to the Singularity”. Curiosity Stream. 7-Aug-18

Somers, James. “Is AI Riding a One-Trick Pony?”. MIT Technology Review, Vol 120, No 6, Page 29. 1-Nov-17

Subramanian, Samantha. “India Warily Eyes AI — The Effects Automation is Having on Labor”. MIT Technology Review, Vol 120, No 6, Page 38. 1-Nov-17

The Economist. “Can EU become an AI Superpower”. The Economist. 20-Sep-18

The White House. “Executive Order on Maintaining American Leadership in Artificial Intelligence”. The White House. 11-Feb-19

TRADOC. “US Army Robotics and Autonomous Systems Strategy”. US Army Training and Doctrine Command. 1-Mar-17

Waldrop, Mitchell. “Inside the Moonshot Effort to Finally Figure out the Brain”. MIT Technology Review, Vol 120, No 6, Page 54. 1-Nov-17

Wikipedia. “Artificial Intelligence”. Wikipedia. 27-Jun-19