Gallagher-Kaiser Corp. builds vehicle-painting facilities for customers that include Ford, General Motors, Honda and Toyota. Like a growing number of enterprises, the Michigan-based company is experimenting with artificial intelligence tools to detect and prevent malware attacks. According to its security leader, AI serves as “an extra layer of defense” against viruses, ransomware and other threats spread to employees through email.

Experiments like these are not only smart – they are imperative, given today’s need to adopt automated systems to compete. Machine-first approaches that harness abundant computing power and advanced technologies such as AI and machine learning help businesses boost product innovation and serve customers better. But the proliferation of automated systems carries a risk: it can accelerate the spread of malware through a company’s networks.

One way to mitigate this risk is to tighten access controls for automated machines, setting rules so that each machine can reach only the systems and data it genuinely needs and nothing more.
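To make the idea concrete, here is a minimal sketch of a deny-by-default allowlist for machine identities; the machine names, resources and actions are hypothetical, and a production system would rely on an identity and access management platform rather than hand-rolled code.

```python
# Minimal sketch of a least-privilege allowlist for automated (machine) accounts.
# Machine names, resources and actions below are hypothetical examples.

MACHINE_POLICY = {
    # machine identity -> resources and actions it is allowed to use
    "paint-line-sensor-01": {"telemetry-queue": {"write"}},
    "report-generator-bot": {"sales-db": {"read"}, "report-store": {"read", "write"}},
}

def is_allowed(machine_id: str, resource: str, action: str) -> bool:
    """Deny by default: a machine may act only on resources explicitly granted to it."""
    allowed_actions = MACHINE_POLICY.get(machine_id, {}).get(resource, set())
    return action in allowed_actions

# Example checks
print(is_allowed("report-generator-bot", "sales-db", "read"))    # True
print(is_allowed("report-generator-bot", "sales-db", "delete"))  # False
print(is_allowed("unknown-bot", "sales-db", "read"))             # False – deny by default
```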

Consider the potential of AI from a hacker’s point of view. Hackers can use AI to personalize phishing emails and gain access to a corporate network. A single click on a malicious link can let malware spread rapidly to other employees who are caught unaware.

But AI also can be a defensive shield. Enterprises could deploy systems using AI and machine learning to analyze various threats—network intrusions, distributed denial-of-service attacks, viruses in emails, phishing scams—and identify patterns of behavior.
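As an illustration of behavior-based detection, the sketch below uses an unsupervised anomaly detector (scikit-learn’s IsolationForest) over a few made-up traffic features; a real deployment would draw on far richer telemetry such as flow logs, email metadata and authentication events.

```python
# Minimal sketch: unsupervised anomaly detection over simple, illustrative traffic features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_minute, bytes_out_mb, failed_logins, unique_dest_ports]
normal_traffic = np.array([
    [12, 0.4, 0, 3],
    [15, 0.6, 1, 4],
    [10, 0.3, 0, 2],
    [14, 0.5, 0, 3],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_traffic)

# New observations: one typical, one resembling data exfiltration or brute force
new_events = np.array([
    [13, 0.5, 0, 3],
    [300, 45.0, 25, 180],
])
print(model.predict(new_events))  # 1 = looks normal, -1 = flagged as anomalous
```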

These approaches can also detect and prevent malware by analyzing its software code rather than matching it against known signatures. In that way, AI strengthens existing cyber security tools, which can be vulnerable to bad actors who use previously unseen methods.
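For example, a classifier trained on static file features can flag suspicious binaries that no signature yet covers. The sketch below is illustrative only: the features, values and labels are invented, and the approach is a generic supervised classifier rather than any specific vendor’s product.

```python
# Minimal sketch: a supervised classifier over static file features instead of signature matching.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [file_size_kb, byte_entropy, num_imported_apis, is_packed]
X = np.array([
    [120, 5.1, 40, 0],   # benign samples
    [300, 4.8, 65, 0],
    [80,  7.6, 5,  1],   # malicious samples: high entropy, few imports, packed
    [150, 7.9, 3,  1],
])
y = np.array([0, 0, 1, 1])  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

unknown_file = np.array([[95, 7.7, 4, 1]])
print(clf.predict(unknown_file))  # expected: [1] -> flagged as likely malicious
```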

The use of AI in IT security is still new, but businesses are showing strong interest. A SANS Institute survey found that 57% of firms across a range of industries were implementing, or planning to implement, security solutions that use AI. Banks worldwide will spend $5.6 billion this year on AI-enabled automated threat identification and prevention, fraud analysis and investigation systems, according to IDC.

One last point: while companies consider AI for security purposes, they should monitor the regulatory environment wherever they do business. Some countries are likely to create laws that penalize cyberattack perpetrators and require corporations to demonstrate they have made every effort to secure their systems. In other words, you will not only have to secure your systems; you will also have to show regulators how you are doing so.

Sundeep Oberoi, Global Head for Delivery of the Enterprise Security and Risk Management Unit at TCS, is co-author of the article “Protect Your Robots: How to Design Security into Your Machines” in the TCS Perspectives management journal.


About the author(s)

Sundeep Oberoi

Dr. Oberoi is Global Head for Delivery of the Enterprise Security and Risk Management Unit at TCS and Head of the Niche Technology Delivery Group, part of TCS' Enterprise Solutions unit. He has more than 30 years of information and communications technology experience and holds a Ph.D. in Computer Science. Dr. Oberoi is responsible for delivering security support and services as well as specialized technologies – including RFID sensors and NFC, Web 2.0, user experience, collaboration and unified communication, cloud computing and next-generation networks – for global client engagements.
