
Employees Using Chinese GenAI Tools Expose Sensitive Data: Report

New research from Harmonic Security reveals that nearly one in 12 employees are using Chinese GenAI tools at work, exposing sensitive data. The research highlights the risks and suggests education and policies to prevent data leaks.


New research from Harmonic Security reveals that nearly one in 12 employees are using Chinese-developed generative AI (GenAI) tools at work, exposing sensitive data in the process. The research, which analyzed the behavior of roughly 14,000 end users in the U.S. and U.K., found that 7.95% of users accessed at least one Chinese GenAI application during a 30-day period.

Of the 1,059 users who interacted with these tools, Harmonic Security identified 535 incidents of sensitive data exposure. The majority of exposure occurred via DeepSeek, which accounted for roughly 85% of the incidents, followed by Moonshot Kimi, Qwen, Baidu Chat, and Manus. In terms of what sensitive data was exposed, code and development artifacts represented the largest category, making up 32.8% of the total. This included proprietary code, access keys, and internal logic.

Engineering-heavy organizations were found to be particularly exposed, as developers increasingly turn to GenAI for coding assistance, potentially without realizing the implications of submitting internal source code, API keys, or system architecture into foreign-hosted models. Alastair Paterson, CEO and co-founder of Harmonic Security, comments: “All data submitted to these platforms should be considered property of the Chinese Communist Party given a total lack of transparency around data retention, input reuse, and model training policies, exposing organizations to potentially serious legal and compliance liabilities.”

Furthermore, the research revealed that employees use an average of 254 AI-enabled applications in the workplace, with 7% experimenting with China-based apps. The analysis examined 176,460 prompts submitted to various generative AI platforms by 8,000 end users across different companies during the first quarter of 2025. The study revealed that 6.7% of all prompts reviewed potentially disclosed company data.

To mitigate these risks, Harmonic Security recommends training employees on the risks of using unsanctioned GenAI tools, especially Chinese-hosted platforms. It also recommends providing alternatives via approved GenAI tools that meet developer and business needs. Finally, it is important to enforce policies that prevent sensitive data, particularly source code, from being uploaded to unauthorized apps. Organizations that avoid blanket blocking and instead implement light-touch guardrails and nudges see up to a 72% reduction in sensitive data exposure, while increasing AI adoption by as much as 300%.
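To illustrate what such a light-touch guardrail might look like in practice, the sketch below scans a prompt for common secret formats (an AWS-style access key ID, a PEM private-key header, a generic `api_key=` assignment) before it is sent to a GenAI tool, and returns the matches so callers can nudge the user rather than hard-block. The pattern names and thresholds are illustrative assumptions, not part of Harmonic Security's product; a real deployment would use a far richer detector.

```python
import re

# Illustrative (hypothetical) secret patterns; real guardrails use
# much broader detection than these three examples.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*\S{16,}"),
}


def find_secrets(prompt: str) -> list[str]:
    """Return the names of all secret patterns matched in the prompt."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(prompt)]


def guardrail(prompt: str) -> tuple[bool, list[str]]:
    """Allow a prompt only if no secret-like strings are detected.

    Returns (allowed, matched_pattern_names) so the caller can show a
    nudge ("this looks like an AWS key") instead of silently blocking.
    """
    hits = find_secrets(prompt)
    return (not hits, hits)
```

For example, `guardrail("Summarize this meeting")` allows the prompt, while a prompt containing a string like `AKIA` followed by 16 uppercase characters would be flagged as a possible AWS access key.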
