Your company has started to use artificial intelligence (AI), but are you effectively managing the risks involved? AI is a new growth channel with the potential to boost productivity and improve customer service. However, it also introduces specific management risks that need to be assessed, particularly in cybersecurity. Start by considering current AI trends to put this risk in context.
Why Is AI an Emerging Cybersecurity Threat?
Artificial intelligence is a booming industry right now, with large corporations, researchers, and startups all scrambling to make the most of the trend. From a cybersecurity perspective, there are a few reasons to be concerned about AI. Your threat assessment models need to be updated based on the following developments.
Early Cybersecurity AI May Create a False Sense of Security
Most machine learning methods currently in production require users to provide a training data set. With this data in place, the application can make better predictions. However, end-user judgment is a major factor in determining which data to include. This “supervised learning” approach is subject to compromise if hackers discover how the training process works. In effect, hackers could evade machine learning detection by crafting malicious code that mimics the features of safe code.
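To make the risk concrete, here is a minimal sketch of a supervised detector being fooled. The feature set and data points are invented purely for illustration; real detectors use far richer features, but the evasion logic is the same.

```python
# A toy supervised malware detector. Features and data are hypothetical,
# chosen only to illustrate the evasion risk described above.
from sklearn.ensemble import RandomForestClassifier

# Each sample: [file_entropy, suspicious_api_call_count]
benign = [[3.1, 0], [2.8, 1], [3.5, 0], [4.0, 1]]
malicious = [[7.2, 9], [6.9, 8], [7.5, 10], [6.5, 7]]

X = benign + malicious
y = [0] * len(benign) + [1] * len(malicious)  # 0 = safe, 1 = malicious

model = RandomForestClassifier(random_state=0).fit(X, y)

# An attacker who learns the model keys on entropy and API-call counts
# can pack a payload so those features look benign.
evasive_sample = [[3.2, 1]]  # malicious file engineered to mimic safe code
print(model.predict(evasive_sample))  # -> [0]: classified as safe
```

The takeaway: the classifier is only as good as the features and labels humans chose, which is exactly the judgment factor attackers probe.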
AI-based Cybersecurity Creates More Work for Humans
Few companies are willing to trust their security entirely to machines. As a result, machine learning in cybersecurity tends to create more work, at least at first. WIRED magazine summarized this dynamic as follows: “Machine learning’s most common role, then, is additive. It acts as a sentry, rather than a cure-all.” As AI and machine learning tools flag more and more problems for review, human analysts will need to examine that output and decide what to do next.
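In practice, the sentry pattern can be as simple as a scoring threshold that routes events to analysts instead of acting automatically. A minimal sketch, with hypothetical event data and threshold:

```python
# The "sentry" pattern: the model only prioritizes; humans decide.
# Scores, events, and the threshold below are hypothetical.
ALERT_THRESHOLD = 0.7  # events scoring above this go to an analyst

events = [
    {"id": 1, "desc": "login from new country", "ml_score": 0.91},
    {"id": 2, "desc": "routine backup job", "ml_score": 0.05},
    {"id": 3, "desc": "unusual outbound traffic", "ml_score": 0.78},
]

review_queue = [e for e in events if e["ml_score"] >= ALERT_THRESHOLD]
for event in review_queue:
    # A human analyst makes the final call on each flagged event.
    print(f"Analyst review needed: #{event['id']} {event['desc']}")
```

Note that every flagged event still lands on a person's desk, which is precisely why analyst workload grows alongside the tooling.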
Hackers Are Starting to Use AI for Attacks
Like any technology, AI can be used for defense or attack. Researchers at the Stevens Institute of Technology have demonstrated that fact. In 2017, they used AI to successfully guess 25% of LinkedIn passwords after analyzing 43 million user profiles. In the hands of defenders, such a tool could help to educate end users about weak passwords. In the hands of attackers, the same tool could be used to compromise security.
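On the defensive side, that capability reduces to checking whether a user's password appears among the guesses an attacker's model would produce. A minimal sketch, where the guess list is a hypothetical stand-in for AI-generated output:

```python
# Defensive use of AI-generated guesses: warn users whose passwords an
# attacker's model would likely crack. The guess set is hypothetical.
ai_generated_guesses = {"password123", "linkedin2017", "qwerty!", "summer18"}

def is_weak(password: str) -> bool:
    """Return True if the password matches a machine-generated guess."""
    return password.lower() in ai_generated_guesses

print(is_weak("Summer18"))    # True: prompt the user to change it
print(is_weak("g7#Vr2!pQx"))  # False: not in the generated guess list
```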
The Mistakes You Need to Know About
Avoid the following mistakes, and you’re more likely to have success with AI in your organization.
1. You Haven’t Thought Through the Explainability Challenge
When you use AI, can you explain how it operates and makes recommendations? If not, you may be accepting (or rejecting!) recommendations without being able to assess them. You can mitigate this challenge by reverse engineering the AI’s recommendations: vary its inputs systematically and observe how its output changes.
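One practical way to do this is sensitivity analysis: call the model repeatedly, varying one input at a time, and watch how its recommendation moves. A minimal sketch, where the scoring function is a hypothetical stand-in for an opaque model you can call but not inspect:

```python
# Probing a black-box risk model to see which inputs drive its output.
# The function below is a hypothetical placeholder for a vendor model.
def black_box_risk_score(failed_logins: int, data_downloaded_mb: int) -> float:
    return min(1.0, 0.08 * failed_logins + 0.002 * data_downloaded_mb)

baseline = black_box_risk_score(failed_logins=3, data_downloaded_mb=100)

# Vary one input at a time and compare against the baseline score.
probes = [
    ("failed_logins doubled", black_box_risk_score(6, 100)),
    ("data_downloaded_mb doubled", black_box_risk_score(3, 200)),
]
for label, score in probes:
    print(f"{label}: score moves {score - baseline:+.2f}")
```

Even without access to the model's internals, this kind of probing tells you which signals dominate its recommendations.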
2. You Use Vendor-provided AI Without Understanding Their Models
Some companies decide to buy or license AI from others rather than building the technology in-house. As with any strategic decision, there’s a downside to this approach. You can’t blindly trust a vendor’s claims that its AI will be beneficial. You need to ask tough questions about how the systems protect your data and which systems the AI tools can access. Overcome this challenge by asking your vendors to explain their assumptions about data and machine learning.
3. You Don’t Test AI Security Independently
When you use an AI or machine learning tool, you need to entrust a significant amount of data to it. Before you trust the system, test it from a cybersecurity perspective. For example, consider whether the system can be compromised by SQL injection or other hacking techniques. If a hacker can compromise the algorithm or data in an AI system, the quality of your company’s decision making will suffer.
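Independent testing can start with something as simple as sending hostile inputs to the tool's interface and verifying that they are rejected gracefully. A minimal sketch, with a hypothetical endpoint and classic injection probes, intended for a test environment only:

```python
# Probing an AI tool's input handling with hostile payloads.
# The endpoint URL is hypothetical; run this only against a test system.
import requests

INJECTION_PAYLOADS = [
    "' OR '1'='1",                    # classic SQL injection probe
    "Robert'); DROP TABLE users;--",  # statement-termination probe
    "<script>alert(1)</script>",      # stored-XSS probe
]

def probe(endpoint: str) -> None:
    for payload in INJECTION_PAYLOADS:
        resp = requests.post(endpoint, json={"query": payload}, timeout=10)
        # A hardened system should sanitize or reject these, never crash.
        print(f"{payload!r} -> HTTP {resp.status_code}")

# probe("https://ai-tool.test.example.com/api/search")
```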
4. Your Organization Lacks AI Cybersecurity Skills
To carry out AI cybersecurity tests and evaluations, you need skilled staff. Unfortunately, relatively few professionals are competent in both cybersecurity and AI. Fortunately, this mistake can be overcome with a talent development program. Offer your cybersecurity professionals the opportunity to earn certificates, attend conferences, and use other resources to increase their AI knowledge.
5. You Avoid Using AI Completely for Security Reasons
Based on the previous mistakes, you might assume that avoiding AI and machine learning completely is a smart move. That might’ve been an option a decade ago, but AI and machine learning are now part of every tool you use at work. Attempting to minimize AI risk by ignoring this technology trend will only expose your organization to greater risk. It’s better to seek proactive solutions that leverage AI. For instance, you can use security chatbots such as Apollo to make security more convenient for your staff.
6. You Expect Too Much Transformation from AI
Going into an AI implementation with unreasonable expectations will cause security and productivity problems. Resist the urge to apply AI to every business problem in the organization. Such a broad implementation would be very difficult to monitor from a security point of view. Instead, take the low-risk approach: apply AI to one area at a time, such as automating routine security administration tasks, and then build from there.
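For example, a narrow first deployment might be a single anomaly detector watching login hours on one system and nothing else. A minimal sketch, with hypothetical data:

```python
# A deliberately narrow first AI deployment: flag anomalous login hours
# for one service account. Training data below is hypothetical.
from sklearn.ensemble import IsolationForest

# Typical login hours (24-hour clock) observed for the account.
normal_hours = [[9], [10], [9], [11], [10], [9], [10], [11], [9], [10]]

detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_hours)

print(detector.predict([[3]]))   # -> [-1]: 3 a.m. login, flag for review
print(detector.predict([[10]]))  # -> [1]: within the normal pattern
```

Once a narrow deployment like this is monitored and understood, you can extend the same discipline to the next area.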
7. You Hold Back Real Data from Your AI Solution
Most developers and technologists like to reduce risk by setting up test environments. It’s a sound discipline and well worth using. However, when it comes to AI, this approach has its limits. To find out whether your AI system is truly secure, you need to feed it real data: customer information, financial data, or whatever it will handle in production. If all of this information is held back, you’ll never be able to assess the security risks or productivity benefits of embracing AI.
Adopt AI with an Eyes-Wide-Open Perspective
There are certainly dangers and risks associated with using AI in your company. However, these risks can be monitored and managed through training, proactive management oversight, and avoidance of the seven mistakes above.