“There’s a huge interest in generative AI for security operations in India. The key is making sure AI is solving real problems, not just being adopted because it sounds cool,” said Steve Santos, senior director analyst at Gartner. As generative AI continues to proliferate rapidly across industries, security operations teams worldwide are harnessing its capabilities to address critical challenges.
On the sidelines of the Gartner Security & Risk Management Summit 2025, indianexpress.com spoke with Steve Santos, who has expertise in security engineering and network security. The executive offered insights into how generative AI is reshaping security operations and what organisations can expect in the future.
When it comes to use cases of generative AI in security operations, Santos affirmed that it has been implemented in the SIEM (Security Information and Event Management) and XDR (Extended Detection and Response) spaces, though he acknowledged that it is still early days. “One of the primary use cases right now is chatbots providing summarisation and useful information. The next phase, which is seeing slow adoption, is AI assistants like copilots. While adoption remains in its early stages within security operations, there is significant potential.”
From a vendor perspective, Santos said that the landscape shows mixed approaches. He observed that “quite a few are looking at it from early adoption, how we get quick wins, and others are cautiously optimistic about getting really good use cases.” From a customer’s perspective, on the other hand, he said there is some scepticism about whether the technology will save money; when customers realise that AI won’t necessarily lead to direct cost savings but rather to improved efficiency and productivity, that realisation can slow adoption.
On the dawn of AI agents
Much of the AI community has dubbed 2025 the year of AI agents, with several predictions suggesting that multi-agent AI in threat detection and incident response may reach 70 per cent adoption by 2028. When asked about the rationale behind the predicted adoption, Santos attributed it to the ability of AI to replicate specific functions and tasks rather than just providing summarisation. “The key driver will be the evolution of AI agents. Security has traditionally lagged in adopting new technologies. However, as generative AI solutions become available, organisations will find them increasingly attractive.”
Santos said that the shift from summarisation to actual task replication will be a turning point. “When AI can replicate functions and tasks beyond productivity enhancements, organisations will see true augmentation. Adoption will likely follow an upward curve as AI agents begin to interact with each other, enabling more complex tasks and workflows.”
When asked what generative AI applications organisations can implement in their security operations, Santos said the biggest use case at the moment is detection engineering, because security operations teams face skill gaps amid the constant need to write detections across a variety of threat detection technologies. “AI assistants can help write detection code, similar to how AI supports software development. This is crucial because every new threat requires new detection rules and correlations. AI can generate about 60-70% of this code, allowing security professionals to refine it further. Some SIEM tools are integrating AI assistants, enabling users to create detection rules using natural language commands,” he said.
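To make that workflow concrete, here is a minimal sketch of the detection-engineering pattern Santos describes: a generative model drafts a candidate rule from a natural-language request, and a human analyst refines it before deployment. The function names, the prompt, and the canned model response are illustrative assumptions, not any vendor's actual API.

```python
# Illustrative sketch (not a vendor API) of AI-assisted detection engineering:
# a model drafts a Sigma-style rule from plain English, an analyst refines it.

DETECTION_PROMPT = (
    "You are a detection engineer. Write a Sigma rule for: {request}\n"
    "Return only valid Sigma YAML."
)

def call_llm(prompt: str) -> str:
    """Placeholder for the organisation's approved generative AI backend.
    Returns a canned rule here so the sketch runs end to end."""
    return (
        "title: Possible SSH brute force\n"
        "logsource:\n"
        "  product: linux\n"
        "  service: sshd\n"
        "detection:\n"
        "  selection:\n"
        "    event: failed_login\n"
        "  condition: selection | count() by src_ip > 10\n"
    )

def draft_detection_rule(request: str) -> str:
    """First pass: the AI assistant generates the bulk of the rule."""
    return call_llm(DETECTION_PROMPT.format(request=request))

def analyst_review(rule_yaml: str) -> str:
    """Second pass: a human tunes thresholds and filters known false positives.
    In practice this step is interactive; here it is only a hook."""
    return rule_yaml

if __name__ == "__main__":
    candidate = draft_detection_rule(
        "Alert on more than 10 failed SSH logins from one source IP in 5 minutes"
    )
    print(analyst_review(candidate))
```

The division of labour mirrors the 60-70% figure Santos cites: the model produces a first draft quickly, while the analyst remains responsible for the correctness of what ships.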
Chatbots, challenges, and risk of errors
When asked how AI chatbots, AI assistants, and AI agents differ in their roles within security operations, Santos emphasised that each of these tools adds unique value; the challenge, according to him, is mapping specific use cases to the right tool. “AI chatbots are good for summarisation and basic queries, while AI assistants help with tasks like writing detection code. AI agents, on the other hand, handle more complex workflows and automation. Organisations must carefully evaluate their needs and choose the right AI tool accordingly,” Santos said.
Speaking about the key challenges companies face when integrating generative AI into security workflows, Santos singled out trust. “Security analysts and teams are naturally sceptical because they operate in a high-stakes environment where mistakes can have serious consequences.” He said teams need to be confident that AI outputs are reliable. The Gartner executive added that automation is another hurdle, as many organisations are hesitant to allow AI to take preventive actions due to concerns about business impact.
Santos said that it was equally challenging for organisations to ensure that AI-generated insights are accurate and reliable. “Gartner has a TRiSM (Trust, Risk, and Security Management) model that provides guidance, but validation processes are key. Organisations need robust data quality and fidelity to minimise hallucinations and false positives. Emerging AI technologies like multimodal AI that integrates text, audio, video, and images could be game changers. By incorporating diverse data sources, we can improve confidence in AI outputs,” Santos said, adding that security teams must still validate AI-generated insights and simplify this process as much as possible.
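One way a security team might operationalise the validation Santos describes is a mechanical gate between the model and the analyst, so that obviously malformed output is rejected before anyone acts on it. The sketch below assumes AI-drafted rules arrive as Sigma YAML and uses the PyYAML library; the required-field list is an illustrative assumption, not a Gartner recommendation.

```python
# Minimal sketch of a validation gate for AI-generated detection rules:
# never pass model output downstream without basic structural checks.
# Assumes rules are Sigma YAML; the required-field list is illustrative.
import yaml  # PyYAML

REQUIRED_FIELDS = {"title", "logsource", "detection"}

def validate_rule(rule_yaml: str) -> list[str]:
    """Return a list of problems; an empty list means the rule passes."""
    try:
        rule = yaml.safe_load(rule_yaml)
    except yaml.YAMLError as exc:
        return [f"not valid YAML: {exc}"]
    if not isinstance(rule, dict):
        return ["rule is not a YAML mapping"]
    problems = []
    missing = REQUIRED_FIELDS - rule.keys()
    if missing:
        problems.append(f"missing required fields: {sorted(missing)}")
    # A detection section without a condition cannot be evaluated.
    detection = rule.get("detection")
    if isinstance(detection, dict) and "condition" not in detection:
        problems.append("detection section has no condition")
    return problems
```

Checks like these do not guarantee a rule is correct, but they cheaply filter out hallucinated or truncated output, which is the kind of simplification of the human validation step that Santos calls for.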
When asked what advice he would give to security leaders and businesses exploring generative AI for security operations, Santos said, “Don’t be afraid to experiment, try new things, and even fail—it’s part of the process. Most importantly, listen to your team.”