40% of AI Data Breaches Will Arise from Cross-Border GenAI Misuse by 2027: Gartner
According to Gartner, Inc., by 2027, more than 40% of AI-related data breaches will be caused by the improper use of generative AI (GenAI) across borders.
The swift adoption of GenAI technologies by end users has outpaced the development of data governance and security measures, raising concerns about data localization due to the centralized computing power required to support these technologies.
“Unintended cross-border data transfers often occur due to insufficient oversight, particularly when GenAI is integrated into existing products without clear description or announcement,” said Joerg Fritsch, VP analyst at Gartner. “Organizations are noticing changes in the content produced by employees using GenAI tools. While these tools can be used for approved business applications, they pose security risks if sensitive prompts are sent to AI tools and APIs hosted in unknown locations.”
Global AI Standardization Gaps Drive Operational Inefficiency
The lack of consistent global best practices and standards for AI and data governance exacerbates these challenges by causing market fragmentation and forcing enterprises to develop region-specific strategies. This can limit their ability to scale operations globally and to benefit from AI products and services.
“The complexity of managing data flows and maintaining quality due to localized AI policies can lead to operational inefficiencies,” said Fritsch. “Organizations must invest in advanced AI governance and security to protect sensitive data and ensure compliance. This need will likely drive growth in AI security, governance, and compliance services markets, as well as technology solutions that enhance transparency and control over AI processes.”
Organizations Must Act Before AI Governance Becomes a Global Mandate
Gartner predicts that by 2027, AI governance will become a requirement of all sovereign AI laws and regulations worldwide.
“Organizations that cannot integrate required governance models and controls may find themselves at a competitive disadvantage, especially those lacking the resources to quickly extend existing data governance frameworks,” said Fritsch.
To mitigate the risks of AI data breaches, particularly from cross-border GenAI misuse, and to ensure compliance, Gartner recommends several strategic actions for enterprises:
Enhance Data Governance: Organizations must ensure compliance with international regulations and monitor unintended cross-border data transfers by extending data governance frameworks to include guidelines for AI-processed data. This involves incorporating data lineage and data transfer impact assessments within regular privacy impact assessments.
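A transfer impact assessment of the kind described above can begin with an automated check that flags data flows leaving approved jurisdictions. The sketch below is purely illustrative: the region names, allowlist, and record structure are assumptions, not part of any specific Gartner framework.

```python
# Illustrative cross-border transfer check. The approved-region allowlist
# and the transfer record fields are hypothetical assumptions.
APPROVED_REGIONS = {"eu-west-1", "eu-central-1"}  # hypothetical allowlist

def flag_cross_border_transfers(transfers):
    """Return transfers whose destination region is not in the approved set."""
    return [t for t in transfers if t["destination_region"] not in APPROVED_REGIONS]

transfers = [
    {"dataset": "hr_records", "destination_region": "eu-west-1"},
    {"dataset": "support_prompts", "destination_region": "us-east-1"},
]
flagged = flag_cross_border_transfers(transfers)
# flagged holds the "support_prompts" transfer, surfaced for review
```

In practice such a check would feed its findings into the privacy impact assessment process rather than block transfers outright.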
Establish Governance Committees: Form committees to enhance AI oversight and ensure transparent communication about AI deployments and data handling. These committees should be responsible for technical oversight, risk and compliance management, and communication and decision reporting.
Strengthen Data Security: Use advanced technologies, encryption, and anonymization to protect sensitive data. For instance, verify Trusted Execution Environments in specific geographic regions and apply advanced anonymization technologies, such as differential privacy, when data must leave these regions.
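To make the differential privacy recommendation concrete, one common technique is the Laplace mechanism: calibrated random noise is added to an aggregate before it leaves the data's home region. The sketch below is a minimal, stdlib-only illustration; the epsilon and sensitivity values are example assumptions, not prescribed settings.

```python
# Minimal sketch of the Laplace mechanism, one standard differential-privacy
# technique. Epsilon and sensitivity here are illustrative assumptions.
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via inverse transform sampling."""
    u = 0.0
    while u == 0.0:  # avoid log(0) at the distribution's edge
        u = random.random()
    u -= 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(values, epsilon=1.0, sensitivity=1.0):
    """Release a count with Laplace noise scaled to sensitivity/epsilon."""
    return len(values) + laplace_noise(sensitivity / epsilon)

# Example: a noisy count that can be shared outside the data's home region
noisy = dp_count(["rec1", "rec2", "rec3"], epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier results; choosing the budget is a governance decision, not just an engineering one.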
Invest in TRiSM Products: Plan and allocate budgets for trust, risk, and security management (TRiSM) products and capabilities tailored to AI technologies. This includes AI governance, data security governance, prompt filtering and redaction, and synthetic generation of unstructured data. Gartner predicts that by 2026, enterprises applying AI TRiSM controls will consume at least 50% less inaccurate or illegitimate information, reducing faulty decision-making.
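The prompt filtering and redaction capability mentioned above can be sketched as a pre-processing step that masks sensitive patterns before a prompt reaches an externally hosted GenAI API. The patterns below are simple examples and an assumption of this sketch, not a complete PII detector or any vendor's actual product.

```python
# Illustrative prompt redaction before text is sent to an external GenAI API.
# The two regex patterns are simple examples, not a complete PII detector.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt):
    """Replace each matched sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

redact_prompt("Contact jane.doe@example.com, SSN 123-45-6789")
# → "Contact [EMAIL], SSN [SSN]"
```

Production TRiSM tooling typically layers classifiers and policy engines on top of such pattern matching, but the redact-before-send flow is the same.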