Generative AI's rapid growth outpaces corporate security policies.

Generative AI is being rapidly adopted in workplaces, with many employees using it without formal approval. This swift integration creates significant security risks, as most companies lack formal policies or training to guard against data leaks, intellectual property infringement, and other vulnerabilities, leaving them exposed.

Generative artificial intelligence (AI) has rapidly integrated into daily workflows, offering significant benefits in productivity and efficiency. However, this swift adoption has outpaced the development of critical security policies, leaving many businesses vulnerable to significant risks.

The use of generative AI in the workplace is widespread. According to research from ISACA, nearly three out of four European IT and cybersecurity professionals report that staff are already using generative AI at work. Despite this prevalence, only 31% of organizations have a formal, comprehensive AI policy in place. The gap is alarming: over a quarter (28%) of workers use generative AI on the job, and more than half of them do so without their employer's formal approval.

This unsanctioned use, a form of 'shadow IT,' introduces a host of dangers. One of the greatest risks is data leakage. Employees may unwittingly input sensitive company information, trade secrets, or personal data into public AI tools. That information can then be used to train the model and surface in responses to other users, leading to confidentiality breaches and potential legal consequences. One survey found that over one-third (38%) of employees share sensitive work information with AI tools without their employer's permission.

Intellectual property (IP) risks are equally severe. Generative AI models are trained on vast datasets that may include copyrighted material used without permission. Businesses therefore risk inadvertently incorporating infringing material from generated outputs into their own work, exposing them to legal disputes. Furthermore, the ownership of AI-generated content is often ambiguous, which complicates IP management further.

To address these challenges, experts recommend that businesses urgently develop clear and comprehensive AI governance policies. These policies should define acceptable use of AI tools, establish rules for data handling, and provide guidelines for protecting sensitive information. Employee training is equally critical, yet one survey found that 40% of organizations offer no AI training at all. Without proper guidance, employees cannot safely harness AI's potential. As the technology continues to evolve, creating a framework for responsible AI use is not just a best practice; it is essential for protecting against potential liabilities and ensuring long-term success.