Hello everyone, last week I had the opportunity to do a webinar titled "Generative AI in Business, a new attack surface?" to the community of CSA - Cyber Security Angels , "... a group of company people usually ICT & Security Managers, established to create a network of direct knowledge in order to face a common front to emerging problems on the Cyber Security front.”
The goal was to talk about how Generative AI poses new challenges to the security of companies' information systems, ideally in line with my motto "Enjoy AI, responsibly!"
But don't stop reading because you think it's too technical. It concerns anyone with a minimum of responsibility in a company.
Below, you will find a summary (AI-Assisted in this case) of my speech and, at the end, the link to the webinar for those who want to find out more.
The hidden presence of AI in organizations
Imagine discovering that a new figure, highly qualified but never officially hired, is working in your offices. Your colleagues contact you remotely based on the skills they declare they have in their 'curriculum'. They pass them their daily problems, combined with the data necessary to solve them, and expect timely answers, which in many cases they don't even read and transform them into activity.
This is the current situation with AI in many companies. Anyone who knows me knows how much I like to compare Generative AI to hyper-qualified interns who join the company and are available to everyone. According to recent studies, up to 50% of users who use AI tools do not officially declare this. And in my workshops this data is almost always achieved when I ask the people I interact with.
The 'AI Interns' who already work for your organization without you knowing it
These “rogue employees” are manipulating company data, interacting with existing processes, and leaking potentially sensitive information to strangers. If you think this is disturbing, it's not enough. There are critical security and governance issues that need to be addressed quickly. How can we handle something we don't even know we have?
Understanding AI: Three key perspectives
To effectively address this challenge, it is essential to understand the different forms that AI can take within an organization:
AI as an integrated subsystem : AI is no longer just an abstract concept, but is becoming an integral part of many enterprise systems. From human resources management software to financial analysis tools, AI is increasingly present as an "invisible" but fundamental component. This integration offers benefits in terms of efficiency and analytical capabilities, but also requires a new awareness in terms of data security and control.
AI as an amplifier of human capabilities : Think of AI, especially generative AI, as an extraordinarily capable collaborator. It's like having an expert in every field, available 24/7, capable of processing huge amounts of data and providing in-depth analysis in record time. And imagine that every employee in your company has this possibility available: what could go wrong? This “super collaborator” requires careful management: its capabilities, if misdirected, could lead to at least bad decisions or security breaches.
AI as an autonomous agent : This is perhaps the most revolutionary and potentially disruptive perspective. Imagine AI systems capable of operating with a certain degree of autonomy, making decisions and performing actions without direct human supervision for longer or shorter periods of time. These agents could manage entire process chains, optimizing operations in real time. But with great power comes great responsibility: How do we ensure these agents operate within well-defined ethical and security parameters?
This is the new landscape we are dealing with, AI present as a subsystem in existing software, available to everyone in the form of a bot or application that increases people's capabilities and increasingly structured as an autonomous agent. I would say we have a few things to deal with in the next few years.
Security Risks
The introduction of AI into the business landscape opens up new scenarios in terms of cybersecurity. Attacks can now be orchestrated using Generative AI and come from inside or outside your organization.
And if you are releasing Generative AI-based solutions you should expect them to be attacked very soon.
Here are some of the most pressing challenges:
AI-Enhanced Attacks : Hackers and cybercriminals are leveraging AI to create more sophisticated and difficult-to-detect attacks. Custom phishing, adaptive malware, and AI-enhanced social engineering attacks are just some of the emerging threats.
Specific Vulnerabilities of AI Systems : AI-based systems have their own peculiar vulnerabilities. One example is “prompt injection,” where an attacker can manipulate the input of an AI system to make it behave in unexpected or malicious ways. Imagine a corporate chatbot that, due to an attack of this type, begins to disclose sensitive information.
Data poisoning : The effectiveness of an AI system depends on the quality of the data it was trained on. “Data poisoning” attacks aim to corrupt this data, affecting system behavior in subtle but potentially catastrophic ways.
Unrestricted AI models : There are open source AI models that can be run locally, without the ethical and security restrictions typically implemented in commercial versions. This opens up worrying scenarios in terms of generating malicious content or manipulating existing systems.
The AI “black market” : A veritable underground ecosystem of malicious AI tools and services is developing. There are already platforms that offer subscriptions for creating malicious prompts or accessing AI models without restrictions. And you don't need to go to the dark web to find them.
Strategies for addressing AI security
Faced with these challenges, it is essential to take a proactive approach. In the book “Hiring artificial intelligence in the company” I talk about it in depth with a proposed framework that I called AI-PLUG; a method of introducing AI into the company which is essential for every responsible figure in the company to understand what is happening in this world and what impact it will have on the immediate future of organizations.
Regarding security, here are some initial key strategies:
Create organizational awareness : The first step is to educate all levels of the organization about the benefits and risks of AI. This is not to create alarmism, but to promote a realistic understanding of the potential and challenges.
Form a multidisciplinary team : AI security is not just a technical issue. A holistic approach is needed that involves IT experts, security specialists, lawyers, ethicists and representatives from various business units.
Develop company policies on the use of AI : Create clear guidelines on the use of AI tools, covering aspects such as data protection, ethics and regulatory compliance. These policies should be flexible to adapt to rapidly evolving technology.
Implement monitoring and testing systems : Adopt AI-specific monitoring tools capable of detecting anomalous behavior or potential violations. Conduct regular penetration testing focused on AI vulnerabilities.
Handle data carefully : Implement strict data management protocols, paying particular attention to the quality and security of the data used to train and power AI systems.
Adopt the principle of "least privilege" : Limit access to AI systems and sensitive data only to those who actually need it, thus reducing the attack surface.
Maintain a “human-in-the-loop” approach : Even with advanced AI systems, it is crucial to maintain some degree of human oversight, especially for critical or high-risk decisions.
But perhaps the most important is to start with small projects , which you can control from all points of view during the conception, development, testing and production phase. This will allow you to better understand how AI can be used in your business and what the associated risks are.
Don't wait for AI to be fast, perfect, safe to start practicing, please!
Make sure you have a dedicated team that includes cybersecurity experts, AI developers, and representatives from various company departments. This team can help identify areas where AI can be most useful and develop strategies to mitigate risks. And if you don't have these figures in the company, start looking for them on the market immediately, perhaps with partners who work in an "almost consultant" way.
So what?
Cybersecurity in the age of AI is not a choice, but a necessity. The risks are real and present, but with proper planning and implementation, you can transform AI from a potential threat to a powerful ally. Remember, the key is to be proactive: develop clear policies, train your staff and use AI to improve your security. Only in this way can you protect your company and prepare for a future in which artificial intelligence is a normal part of business.
Here you can find the complete webinar, dedicate an hour to it, it's worth it even if it's in Italian.
📢 If you have thoughts or comments, or if you just want to help spread these reflections, share this page with anyone you think might appreciate it. Your opinion matters a lot!
🚀 To always keep you updated on my contents:
🗓️ Contact me if you want to organize an AI Workshop or for any idea.
Until next time!
Maximilian
Commenti