News & Insights

Don’t Forget the Security Implications of Adopting AI

By Sean Tickle

Amid the sea of articles on artificial intelligence (AI) and its increasing enmeshment with modern technology, it is perhaps time to consider another perspective on the phenomenon.

After all, along with AI’s transformative benefits (and I’m not denying these; AI has many great uses), it also introduces a whole new landscape of security challenges and considerations for security professionals and organisations alike.

From adversarial attacks and data privacy concerns to the risk of AI-powered cyber threats, organisations utilising AI must also navigate and remediate a new and complex set of risks. These risks, moreover, cannot simply be mitigated with AI-powered security solutions (e.g., Microsoft Copilot for Security), since, at present, these tools still lack the efficiency and scope of a human security professional.

So, as we continue to unlock the power of machine learning and AI-powered productivity in our rapidly evolving digital world, and if we are to remain as cyber resilient as possible, let’s not forget the multifaceted security challenges that come with implementing and rolling out AI tools.

Key security implications of AI

Addressing the security implications of adopting AI requires a comprehensive approach, and it’s worth engaging a professional security service provider that can help your organisation understand its data estate and specific security vulnerabilities before implementing AI tools.

Key security considerations service providers may look out for include:

Data privacy and protection

  • Massive data requirements: AI systems often require large datasets, some of which can contain sensitive personal, financial, or medical information. Poor handling of these datasets may lead to data breaches, compromising user privacy.
  • Inference attacks: even anonymised datasets may be vulnerable to inference attacks, where an attacker deduces private information from seemingly harmless data (a simple linkage example follows this list).
  • Data ownership: the question of who owns and controls data used for training AI models can also lead to conflicts, especially if sensitive or proprietary data is involved.
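
As a simple illustration of a linkage-style inference attack, the sketch below (in Python, with entirely made-up data and hypothetical column names) joins an “anonymised” dataset to a public one on shared quasi-identifiers, re-identifying individuals and their sensitive attributes:

```python
# Minimal illustration (hypothetical data): joining an "anonymised" dataset
# with a public dataset on quasi-identifiers re-identifies individuals.
import pandas as pd

# "Anonymised" training data: names removed, but quasi-identifiers kept.
anonymised = pd.DataFrame({
    "postcode": ["NG1 5FS", "LS1 4AP", "NG1 5FS"],
    "birth_year": [1985, 1990, 1972],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],  # sensitive attribute
})

# A separate, public dataset (e.g., an electoral roll) sharing the same
# quasi-identifiers.
public = pd.DataFrame({
    "name": ["Alice Smith", "Bob Jones", "Carol White"],
    "postcode": ["NG1 5FS", "LS1 4AP", "NG1 5FS"],
    "birth_year": [1985, 1990, 1972],
    "sex": ["F", "M", "F"],
})

# The linkage attack: a simple join on quasi-identifiers restores identities.
reidentified = public.merge(anonymised, on=["postcode", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```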

Adversarial attacks

  • Adversarial examples: malicious actors can make small, targeted changes to input data to deceive AI systems (for example, image classifiers in security systems), causing incorrect or dangerous outcomes; a minimal sketch follows this list.
  • Model poisoning: attackers could introduce malicious data during the training phase to poison AI models, leading to biased or harmful predictions and outputs.
  • Model inversion: by querying an AI system repeatedly, cyber-attackers can reconstruct sensitive data the model was trained on, exposing private information.
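
To make the adversarial-example risk concrete, here is a minimal sketch (in Python, using a toy logistic-regression scorer with made-up weights) of an FGSM-style perturbation: a small, bounded change to each input feature is enough to flip the model’s decision:

```python
# Minimal sketch of an adversarial example (FGSM-style) against a toy
# logistic-regression "classifier". Weights and input are made up; the point
# is that a small, targeted perturbation flips the model's decision.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A trained model's weights and bias (hypothetical).
w = np.array([1.2, -0.8, 0.5, 2.0])
b = -0.3

x = np.array([0.4, 0.1, 0.3, 0.2])          # a legitimate input
p = sigmoid(w @ x + b)
print(f"original score: {p:.3f} -> {'flagged' if p > 0.5 else 'clean'}")

# FGSM: nudge each feature a small step against the sign of the gradient of
# the logit with respect to x (for a linear model, that gradient is just w).
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

p_adv = sigmoid(w @ x_adv + b)
print(f"perturbed score: {p_adv:.3f} -> {'flagged' if p_adv > 0.5 else 'clean'}")
print(f"max feature change: {np.max(np.abs(x_adv - x)):.2f}")
```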

Bias and fairness issues

  • Discriminatory outcomes: AI systems trained on biased data may perpetuate or even amplify those biases, leading to unfair or discriminatory decisions, especially in critical areas such as hiring and lending.
  • Black-box decision-making: AI models, particularly deep learning systems, are often opaque, meaning even their operators may not fully understand the AI’s decision-making process. This lack of transparency can make it difficult to identify or correct biased or malicious behaviour in the system.

Automation and weaponised AI

  • AI-powered cyber-attacks: AI can be used to launch more sophisticated cyber-attacks, such as AI-driven malware that adapts and learns from the environment to avoid detection.
  • Deepfakes and misinformation: AI can generate highly realistic fake content (videos, images, audio), enabling the spread of disinformation and more convincing phishing and impersonation scams.

System vulnerabilities

  • AI dependency: as organisations become more reliant on AI, their systems become more vulnerable to AI-related failures or attacks; for example, an AI system used for fraud detection might be targeted by adversarial attacks in which malicious actors manipulate inputs to make fraudulent transactions appear legitimate.
  • AI software vulnerabilities: as with all software, AI systems can have vulnerabilities that malicious actors can exploit, whether it’s in the AI algorithms or the infrastructure supporting them.

Insider threats and misuse

  • AI misuse: employees or insiders could misuse AI systems to access sensitive information, manipulate results, or engage in malicious activities like surveillance or fraud.
  • HITL bypass: as AI systems automate more decision-making, fewer human checks and balances may remain in place, increasing the risk of misuse or errors going unnoticed; this is known as human-in-the-loop (HITL) bypass (a minimal sketch of an approval gate follows this list).
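
As a sketch of the control that HITL bypass removes, the following Python example (all names and the risk threshold are hypothetical) routes higher-risk AI-proposed actions to a human review queue rather than executing them automatically:

```python
# Minimal sketch of a human-in-the-loop (HITL) gate, using hypothetical
# names: AI-proposed actions above a risk threshold are queued for a human
# reviewer instead of executing automatically.
from dataclasses import dataclass

RISK_THRESHOLD = 0.7  # assumed policy value

@dataclass
class ProposedAction:
    description: str
    risk_score: float  # assumed to come from a separate risk model

review_queue: list[ProposedAction] = []

def handle(action: ProposedAction) -> str:
    if action.risk_score >= RISK_THRESHOLD:
        review_queue.append(action)   # hold for human approval
        return "queued for human review"
    return "auto-approved"            # low-risk: proceed automatically

print(handle(ProposedAction("send routine status email", 0.1)))
print(handle(ProposedAction("bulk-delete customer records", 0.95)))
print(f"{len(review_queue)} action(s) awaiting a human decision")
```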

Ethical and regulatory compliance

  • Regulatory compliance: as governments introduce new laws and regulations governing AI, organisations must ensure that AI systems comply with data protection, ethical use, and transparency standards. Failing to do so may lead to legal penalties.
  • Ethical implications: misuse or mishandling of AI can lead to ethical dilemmas, where systems prioritise profits over fairness, privacy, or human welfare.

As we can see, while AI does indeed offer powerful use cases for enhancing productivity, human oversight and security activity are still necessary if we are to ensure that these systems function ethically, fairly, and safely, providing a safeguard against unpredictable, biased, or erroneous outcomes.

An AI example: Copilot for Microsoft 365

To round off this article, it’s worth considering a real use case of AI, and one that so many organisations have already implemented and are using today: Copilot for Microsoft 365 (M365 Copilot for short).

To be clear, this is not because M365 Copilot is any less secure than other productivity tools (in fact, it benefits from Microsoft’s own security and compliance protocols, which can be very robust when configured correctly); rather, the tool is so prolific that many of us will already have used it or be familiar with it.

Like any tool built on a large language model (LLM), M365 Copilot interacts with data primarily by processing it in response to user prompts, generating content or insights based on the patterns the model has learnt and the data it can access across Microsoft 365 (including SharePoint). Copilot interacts with applications (e.g., Word, Excel, Teams) and company data (chats, emails, documents) to assist users in automating tasks, generating content, prioritising their workload, and so on.

These powerful search capabilities, however, are precisely why – prior to implementing AI tools like Copilot – organisations must establish a strong foundation of data security. In fact, it’s imperative that guardrails are put in place to ensure the AI can only surface data the prompting user is already authorised to access. For instance, without privileged access management controls in place, Copilot could inadvertently expose sensitive and personal data in response to a user prompt, leading to unauthorised access and data leakage. A minimal sketch of this permission-check principle follows.
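
The sketch below illustrates the deny-by-default principle in miniature; the document store, ACLs, and function names are hypothetical, not Copilot’s actual mechanism (Copilot enforces access through existing Microsoft 365 permissions):

```python
# Minimal sketch of the guardrail principle: an AI assistant should only
# retrieve documents the prompting user can already access. All names and
# the ACL structure here are hypothetical.
documents = {
    "q3-roadmap.docx": {"owner": "alice", "allowed": {"alice", "bob"}},
    "payroll-2024.xlsx": {"owner": "hr", "allowed": {"hr"}},
}

def retrieve_for_prompt(user: str, doc_name: str) -> str:
    doc = documents.get(doc_name)
    if doc is None or user not in doc["allowed"]:
        # Deny by default: the assistant never sees data the user cannot.
        return f"[access denied: {doc_name} is not visible to {user}]"
    return f"[contents of {doc_name}]"

print(retrieve_for_prompt("bob", "q3-roadmap.docx"))    # permitted
print(retrieve_for_prompt("bob", "payroll-2024.xlsx"))  # blocked
```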

Along with managing data/user access and data privacy controls, when implementing AI tools such as M365 Copilot, organisations will also need to:

Review data outputs – ensure human oversight in reviewing and validating AI-generated outputs, especially in critical tasks such as drafting legal documents or processing sensitive business data.

Ensure data encryption – Copilot requires real-time access to data, which is transmitted to and from the service; if this data is not properly encrypted in transit and at rest, it could be intercepted by malicious actors (a brief encryption sketch follows this list).

Enact strict sharing policies – without data-sharing policies and monitoring of audit logs, employees may unintentionally share sensitive information outside the organisation or expose proprietary data to unauthorised users (a brief audit-log sketch also follows this list).

Provide training for users – users need to understand the limitations of AI and maintain human control over critical or sensitive decisions. Regular audits and reviews of AI-assisted tasks should be implemented to ensure compliance and accountability.

Consider compliance requirements – Microsoft 365 Copilot needs to be configured in line with the organisation’s data governance and compliance requirements, especially in regulated industries like healthcare, finance, or government. The tool must also comply with data protection laws such as GDPR, HIPAA, or CCPA.

Regularly audit systems – AI systems can be vulnerable to adversarial attacks (e.g., attackers injecting misleading data to manipulate AI-generated outputs). It is important to regularly update and audit AI models for security vulnerabilities and to have internal security protocols in place to monitor and respond to suspicious activity.
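
On the encryption point above, here is a minimal, generic sketch of symmetric encryption at rest using Python’s cryptography package; it illustrates the principle only, and is not how Microsoft 365 protects Copilot traffic (M365 uses TLS in transit and service-side encryption at rest):

```python
# Minimal sketch of symmetric encryption at rest using the 'cryptography'
# package (pip install cryptography). Generic illustration only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # store securely, e.g. in a key vault
cipher = Fernet(key)

plaintext = b"Draft board minutes: confidential"
token = cipher.encrypt(plaintext)  # ciphertext is safe to persist
print(token[:16], b"...")

restored = cipher.decrypt(token)
assert restored == plaintext
print(restored.decode())
```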
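And on sharing policies, a brief sketch of scanning audit records for external shares; the record format and operation name here are assumptions for illustration (in practice this data would come from the Microsoft 365 unified audit log):

```python
# Minimal sketch of scanning audit-log records for external sharing.
# Record format and operation name are hypothetical.
INTERNAL_DOMAIN = "example.com"  # assumed organisation domain

audit_log = [
    {"actor": "alice@example.com", "action": "SharingInvitationCreated",
     "target": "bob@example.com", "item": "q3-roadmap.docx"},
    {"actor": "carol@example.com", "action": "SharingInvitationCreated",
     "target": "eve@gmail.com", "item": "payroll-2024.xlsx"},
]

# Flag any share whose recipient is outside the organisation's domain.
external_shares = [
    record for record in audit_log
    if record["action"] == "SharingInvitationCreated"
    and not record["target"].endswith("@" + INTERNAL_DOMAIN)
]

for record in external_shares:
    print(f"ALERT: {record['actor']} shared {record['item']} "
          f"with external user {record['target']}")
```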

Final Word

Before organisations introduce AI like Microsoft 365 Copilot into their environment, I, like Microsoft, recommend building a strong foundation of security first.

More information about how to build a Zero Trust security strategy for Copilot is available from Microsoft, covering seven layers of protection in your Microsoft 365 tenant. These are: data protection, identity and access, app protection, device management, threat protection, secure collaboration in Teams, and user permissions to data.

However, if you would like to find out more about how Littlefish can assist in securing your IT environment for the adoption of AI tools, please get in touch with our experienced security specialists using the button on this page.
