New UK government-commissioned research has revealed deep-seated scepticism among the nation's elite offensive cybersecurity professionals, known as red teams, about the current capabilities and promises of artificial intelligence (AI) in enhancing cyber defences.
The study, conducted by cyber consultancy Prism Infosec on behalf of the Department for Science, Innovation and Technology (DSIT), found that the specialists who simulate threat actor attacks to test organisational defences remain largely unimpressed by the hype surrounding AI. According to the report, interviews “overwhelmingly demonstrated the sector remains deeply skeptical of the promises of AI, considering many of its capabilities overstated and overused in products, creating a confused environment as to its true potential and capabilities.”
Red teams are a vital component of the national cybersecurity infrastructure, acting as authorised adversaries who mimic the tactics, techniques, and procedures of real-world attackers. Their goal is to identify vulnerabilities in people, processes, and technology before malicious actors can exploit them. Their scepticism, therefore, serves as a significant bellwether for AI's maturity in the security domain.
Key concerns voiced by the interviewees include risks to data privacy, the high cost of AI deployment, and the security of public models. These factors are seen as hampering widespread adoption of the technology in their service offerings. The experts also perceived that threat actors' most common use of AI at present is to deliver more sophisticated social engineering attacks.
In contrast to the buzz around AI, the study found that cloud adoption has had a far greater impact on offensive cyber services. The migration to the cloud has forced the development of new tooling and practices as the sector has adapted to client organisations' post-pandemic moves to cloud environments.
Despite the current wariness, there is a sense of optimism about the future. Red team professionals expect that AI could eventually become a useful tool in their arsenal, contingent on the emergence of more accessible private models that can be securely hosted and tuned by cybersecurity firms. Until the technology reaches that level of maturity, the sector will continue to rely on specialised manual human effort to deliver offensive cyber services.
The research also highlights that tool development for non-Windows environments has fallen behind: participants felt that investment in offensive cyber tools and capabilities for macOS, Linux, Android, and iOS had lagged significantly.
This cautious stance from front-line experts comes as AI-powered threats spread across the wider business landscape. A separate Darktrace report found that almost three-quarters (71%) of businesses are already seeing a significant impact from AI-powered cyber threats; even so, the vast majority (95%) of UK respondents are not strongly confident in their organisations' ability to defend against such attacks.
The message from the UK's red teamers is clear: while AI holds great promise, its reliable and secure integration into critical cybersecurity functions requires a cautious approach, further development, and a healthy dose of scepticism.