The Evolution of AI and Human Responsibility

AI applications will primarily serve to bolster, not displace, the work of human professionals. Indeed, as more and more fields are aided by artificial intelligence, human oversight will become more and more imperative. It will be humans who monitor and evaluate the key performance indicators of AI algorithms, decide how AI will be implemented in real-world applications without jeopardizing lives, and face accountability for how AI is deployed.

By Doron Cohen, CEO of Fifth Dimension, an AI-driven investigation platform company for law enforcement and enterprise clients

Artificial intelligence is transforming countless facets of modern life, with significant ramifications for the workplace, the home, and industries ranging from financial services and law enforcement to transportation and the public sector.

Predictably, advances in AI have triggered anxieties about the ethical implications of the technology, its potential impact on employment and social stability, and the shrinking role humans may be playing as AI becomes “smarter and smarter.”

As with all other technologies, the responsible deployment of AI demands an ethical framework in which human accountability and control are paramount.

Critics notwithstanding, AI will not replace humans; done right, it will serve as a platform for a more productive, effective, and enjoyable human experience.

(Learn More. Artificial Intelligence experts gathered in Rome as part of the Allianz Global Explorer Program. Courtesy of Newsplex Now and YouTube. Posted on Apr 17, 2018)

Realizing this vision, however, requires discarding the “black-box” view of AI – in which humans remain willfully ignorant of what goes on inside an AI system, simply focusing on its inputs and outputs.

What is needed instead is a more realistic, hands-on approach – viewing AI as a highly useful technology, albeit with limitations, that can be applied responsibly when paired with human monitoring and accountability.

When pondering how AI can best be developed for the benefit of humanity, and how we can avoid the pitfalls some are predicting, it’s instructive to consider what AI actually is. That question is often clouded by the cool-sounding but misleading second half of the technology’s name: “intelligence.”

Indeed, artificial intelligence is not intelligence at all; it is simply a way for algorithms to model certain phenomena based on a large set of related examples.

In short, it’s not about magic – it’s about modeling a problem with respect to a set of similar examples.

That is why AI inherently relies on huge data sets to ensure the best results.

It is also why AI struggles to produce intelligent answers regarding tasks that have no statistical representation in the training data.

Indeed, if the data AI is trained on is too small, not representative enough, or otherwise biased, then the outcome will be inherently flawed – in other words, inadequate for modeling the real world.

In short, biased or otherwise substandard or limited data inputs will generate biased or substandard outputs.
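
To make the point concrete, here is a minimal sketch (an illustrative toy using scikit-learn, not a description of any real deployment): a model fitted to a skewed sample learns the skew itself, scores well on data that resembles its training set, and then stumbles on representative data.

```python
# Toy illustration of "biased data in, biased model out" (assumed setup, not a real system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def real_world(n):
    """The 'real world': the label depends only on feature 0; feature 1 is noise."""
    x0 = rng.normal(size=n)
    x1 = rng.normal(size=n)
    y = (x0 > 0).astype(int)
    return np.column_stack([x0, x1]), y

# Biased collection: we only kept cases where the irrelevant feature 1 happens
# to agree with the label, so a spurious correlation looks real in the sample.
X_raw, y_raw = real_world(4000)
keep = (X_raw[:, 1] > 0) == (y_raw == 1)
X_train, y_train = X_raw[keep], y_raw[keep]

X_test, y_test = real_world(4000)  # representative data

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on data like the biased sample:", model.score(X_train, y_train))
print("accuracy on representative data:        ", model.score(X_test, y_test))
# The second number is noticeably lower: the model partly learned the sampling
# bias rather than the real-world relationship.
```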

(Professor Andrew Ng is the former chief scientist at Baidu, where he led the company’s Artificial Intelligence Group. He is an adjunct professor at Stanford University. In 2011 he led the development of Stanford University’s main MOOC (Massive Open Online Courses) platform and also taught an online Machine Learning class that was offered to over 100,000 students, leading to the founding of Coursera. Courtesy of The Artificial Intelligence Channel and YouTube. Posted on Dec 15, 2017)

Artificial intelligence, then, is not inherently intelligent. Indeed, as AI pioneer Andrew Ng argues, deep learning algorithms are essentially cartoons – not models – of the human brain.

In fact, the learning structures are not similar at all, in either process or scale.

Though AI algorithms may be potent and sophisticated, they still lack the suppleness of human reasoning and deduction.

Specifically, an “intelligent” response to an outlier – an instance of data that is a stranger to the statistics of the training data – simply cannot be expected from an artificial model.

Successful models can generalize to data instances that were not present in the training data, but not if such new instances are very different in their statistical nature.
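
A toy numerical illustration of that limit (made-up data, no particular system implied): a curve fitted to a narrow training range answers sensibly inside it and nonsensically far outside it.

```python
# Toy illustration: good interpolation, meaningless extrapolation (assumed example).
import numpy as np

rng = np.random.default_rng(1)
x_train = rng.uniform(0.0, 3.0, 200)                  # all training inputs lie in [0, 3]
y_train = np.sin(x_train) + rng.normal(0, 0.05, 200)  # noisy observations of sin(x)

coeffs = np.polyfit(x_train, y_train, deg=4)          # degree-4 polynomial fit

print("at x = 1.5 (inside the training range):",
      np.polyval(coeffs, 1.5), "vs. truth", np.sin(1.5))
print("at x = 12  (a statistical stranger):   ",
      np.polyval(coeffs, 12.0), "vs. truth", np.sin(12.0))
# Inside [0, 3] the fit is close; at x = 12 the polynomial returns a value far
# outside [-1, 1], because nothing in the training data constrains it there.
```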

But just because AI may not be true “intelligence,” that doesn’t mean it isn’t useful. In countless sectors, AI adds tremendous value even without the versatility of thought exhibited by human beings.

To cite just one example from the medical realm: radiologists, charged with spotting disease in CT scans and MRIs, are grappling with overwhelming workloads and lengthening hours, a dangerous combination that leads to more errors in image analysis.

However, AI algorithms have shown the ability to enhance the efficiency and accuracy of such image analysis, enabling radiologists to focus on the images the algorithms have flagged as problematic, ultimately saving lives in the process.

Nevertheless, radiologists are still responsible and accountable for the limitations of the AI systems they utilize, and, likewise, for patient outcomes.
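
In practice, that division of labor can be as simple as the following sketch (hypothetical study IDs and scores, not any hospital’s actual workflow): the algorithm only orders the worklist, while every study is still read, and signed off, by the radiologist.

```python
# Hypothetical triage sketch: the model prioritizes, the human decides.
from typing import List, Tuple

def prioritize_worklist(studies: List[Tuple[str, float]]) -> List[str]:
    """studies: (study_id, model_suspicion_score in [0, 1]).
    Returns study IDs ordered so the most suspicious images are read first."""
    return [sid for sid, _ in sorted(studies, key=lambda s: s[1], reverse=True)]

worklist = [("ct-001", 0.12), ("ct-002", 0.87), ("ct-003", 0.45)]  # made-up scores
for study_id in prioritize_worklist(worklist):
    # Placeholder for the human step: the radiologist reviews every study and
    # remains accountable for the final read, regardless of the model's score.
    print(f"Radiologist reviews {study_id}")
```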

Furthermore, while AI can be employed behind the scenes to enhance efficiency and outcomes, it should not interfere with one crucial element of a doctor’s work – the patient experience.

(Artificial intelligence is everywhere. Let’s look at radiology. The rapid development of artificial narrow intelligence mostly in understanding images, text, and videos will have a significant impact on radiology. Courtesy of The Medical Futurist and YouTube. Posted on Apr 4, 2018)

As in radiology, AI applications in other fields will primarily serve to bolster, not displace, the work of human professionals. Indeed, as more and more fields are aided by artificial intelligence, human oversight will become more and more imperative.

It will be humans who monitor and evaluate the key performance indicators of AI algorithms, decide how AI will be implemented in real-world applications without jeopardizing lives, and face accountability for how AI is deployed.
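
In code, that kind of oversight can be sketched roughly as follows (the function names and threshold values are illustrative assumptions, not a description of any deployed system): measure the model’s key performance indicators against thresholds the accountable team has agreed on, and escalate to a person whenever they slip.

```python
# Hypothetical KPI gate: automation continues only while agreed thresholds hold.
from dataclasses import dataclass

@dataclass
class KpiThresholds:
    min_precision: float = 0.95  # illustrative values, chosen by the accountable team
    min_recall: float = 0.90

def requires_human_review(precision: float, recall: float,
                          thresholds: KpiThresholds = KpiThresholds()) -> bool:
    """True when measured KPIs fall below the thresholds, meaning a person must
    review the model's outputs before they are acted on."""
    return precision < thresholds.min_precision or recall < thresholds.min_recall

# Example: evaluation numbers from a periodic audit set (made up here).
if requires_human_review(precision=0.91, recall=0.93):
    print("KPIs below threshold: escalate to the responsible analyst.")
else:
    print("KPIs within bounds: continue automated operation with spot checks.")
```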

(How is AI dangerous? Let’s find out when Elon Musk gets asked this question in this interview. Courtesy of The School of Self and YouTube. Posted on Dec 30, 2017)

Blind reliance on technology most often spells trouble. To give two rather innocuous examples: we defer to spell check and end up with a paper replete with correctly spelled malapropisms, or we follow GPS navigation unquestioningly and end up at dead ends and road closures.

As AI is integrated into high-stakes environments like hospitals and financial systems, we must remember that calling the technology “intelligent” is not a license to suspend human cognition, judgment, or responsibility.

Although his dire predictions about AI’s existential threat to humanity may be overstated, Elon Musk is right to advocate that stakeholders think critically about how best to ensure public safety and guarantee accountability regarding AI deployment.

As Accenture points out, clarifying these questions will bring greater certainty to the marketplace, promoting the development of responsible AI and catalyzing a virtuous cycle of AI progress.

AI-powered applications can help humans make better decisions more efficiently. In the final analysis, however, AI at its best is an immensely valuable tool – not a responsible agent.

(Learn More. Devin Wenig, CEO of eBay, Dr. David Hanson, CEO of Hanson Robotics, and Paul Daugherty, Chief Technology and Innovation Officer of Accenture, talk with Jessi Hempel of WIRED about the power of Artificial Intelligence. Courtesy of Accenture and YouTube. Posted on Jan 18, 2017)

About the Author:

Doron Cohen

Doron Cohen is the CEO & Co-Founder of Fifth Dimension, an AI-driven investigation platform company for law enforcement and enterprise clients.

Cohen has over 30 years of operational and managerial experience at the forefront of the Israeli intelligence community.

He has vast experience working with governments across the globe, tackling security challenges and managing special operations.

Cohen brings extensive knowledge in intelligence analysis and problem-solving.

Fifth Dimension goes beyond solving specific challenges, leveraging a variety of advanced big data, AI and deep learning technologies to dive deeper, reach core issues and create true value.

Fifth Dimension’s scalable, customizable, and modular investigation platform is built for users, by users, giving customers effective investigation and insight tools that transform their data into insights, and those insights into value.

Fifth Dimension helps law enforcement officials make the maximum use of their mass-scale data, transforming information into game-changing value. It automatically reveals investigation insights and opens up connections inside the data, speeding case resolution.

Fifth Dimension empowers governmental customers, including law enforcement agencies, intelligence agencies, border control organizations, militaries, and more, by producing actionable intelligence and providing the insights that help them respond earlier, faster, and smarter.

Enterprise customers in industries such as banking and insurance can gain a strong competitive advantage from Fifth Dimension’s insights, using them to explore opportunities, investigate business activities, grow revenue, and reduce costs.