• Skip to primary navigation
  • Skip to main content

Akshay Aggarwal

on Entrepreneurship, AI & Security

  • Entrepreneurship
  • Artificial Intelligence
  • Cybersecurity
  • Show Search
Hide Search
You are here: Home / Archives for Artificial Intelligence

Artificial Intelligence

2025 Cybersecurity Trends Impacting Investments: The Strategic Role of AI

Introduction

Summary of 2025 Cybersecurity Trends Impacting Investments: The Strategic Role of AI

Cybersecurity is no longer a niche within software—it’s now a battleground of constant innovation, defined by adversarial dynamics and regulatory scrutiny. The defining theme of the next 24 months is artificial intelligence. AI is rapidly transforming how attackers operate and how defenders must respond. For investors, this shift creates asymmetric opportunities: firms that harness AI to automate detection, accelerate response, and protect novel attack surfaces will outpace those relying on legacy models. Conversely, AI-augmented threats will expose the limits of traditional defenses—pressuring boards, CISOs, and insurers to rethink cybersecurity spend.

I. Cybersecurity’s Strategic Differentiators from the Software Industry

The cybersecurity market is distinct from general software in ways that materially impact buying behavior and investment strategy:

  • Adversary-Driven Innovation: Security solutions face an intelligent, adaptive opponent. This creates a Darwinian cycle of rapid obsolescence and product refreshes—far faster than typical enterprise software.
  • Invisible ROI: Unlike CRM or ERP, the value of cybersecurity is realized by what doesn’t happen (breaches, ransom, legal fines). Buyers trust reputation, not just features.
  • Regulation as a Revenue Driver: Mandates like NIS2 (EU), SEC cyber-disclosure rules (US), and upcoming AI safety standards are effectively mandating spend.
  • M&A as a Constant: Acquisitions drive both innovation and exit velocity. Cyber is one of the few sectors where “acqui-hires” remain viable due to talent scarcity.
  • Reliance on Platforms + Best-of-Breed Point Solutions: The vendor landscape remains fragmented despite consolidation trends, creating opportunities for both narrow innovators and platform roll-ups.

II. AI’s Emerging Role in Cybersecurity

A. AI as a Threat: Offensive Capabilities

Adversaries are already operationalizing AI to scale and sophisticate attacks. This is not hypothetical—it’s happening now, and it’s altering the economics of cybercrime.

Offensive AI UseDescriptionInvestment Implication
Phishing 2.0Generative AI crafts hyper-personalized emails, voice and video deepfakes.Demand spike for identity verification, behavioral biometrics, and email security.
Vulnerability DiscoveryLLMs analyze public code repos and binaries to identify zero-days at scale.Application security and SBOM tooling (e.g., supply chain protection) are now critical.
Malware MutagenesisAI generates polymorphic malware that evades signature-based tools.Increases demand for behavior-based endpoint protection (EDR/XDR).
Chatbot HijackingPrompt injection and jailbreaking attacks target AI systems themselves.New submarket for “AI system security” is emerging—early-stage opportunity.

The adversary’s cost to attack has dropped. Enterprises’ cost to defend is rising. This asymmetry means demand for automation, prevention, and recovery tooling will continue to outpace broader IT spend.

B. AI as a Defense: Realizable Value in 24 Months

The AI arms race isn’t just about attacks—defenders are responding. Several capabilities are already producing measurable ROI and competitive differentiation:

Defensive AI CapabilityDescriptionNear-Term ROI (0–24m)
Anomaly Detection (UEBA)Behavioral analytics using ML models to spot insider threats or account takeovers.Already embedded in major SIEM/XDR solutions. Improves detection, reduces alert fatigue.
Automated Triage & Response (SOAR)AI-powered playbooks reduce MTTR (Mean Time to Respond).Cuts staffing costs and speeds up remediation. Mature in MDR/MSSP offerings.
Threat Intelligence CorrelationML links threat signals across telemetry (network, endpoint, identity).Enhances efficacy of threat hunting. Drives consolidation into unified platforms.
Generative SecOpsLLMs assist analysts by summarizing threats, suggesting queries, and writing playbooks.Emerging, but early deployments show 20–30% productivity gains in SOCs.
Secure Code GenerationAI-enhanced IDEs spot security bugs or generate safer code.GitHub Copilot, Replit, and Snyk already integrating. Popular with devs.

Defensive AI is already monetizing. Leading vendors (CrowdStrike, Palo Alto Networks, Microsoft, SentinelOne) are building moats based on proprietary threat data pipelines and ML tuning. The winners will be those who combine visibility with velocity.

III. Market Shifts Shaped by AI (2025–2027)

1. Cloud Security and AI-Native Defenses

Cloud workloads are exploding—but so are misconfigurations and lateral movement attacks. AI helps address cloud-native threats (e.g., identity drift, privilege escalation, API abuse). Expect a new wave of “autonomous cloud security” vendors or features built into CNAPPs (Cloud-Native Application Protection Platforms). Ai-enabled auto remediation from firms like HTCD will redefine and shorten window of vulnerability from months to days or even hours.

Investor Watchpoint: Companies like Wiz, Lacework, and Orca are embedding ML-based anomaly detection directly into cloud runtime. High valuation, but strong market pull. Newcomers like HTCD will fix vulnerabilities at machine scale.

2. Identity Security in the Era of Deepfakes

As generative deepfakes challenge traditional MFA and video verification, the next-gen identity market is forming around continuous authentication and passive biometrics. Expect demand for behavioral signal-based identity proofing (keystroke cadence, mouse movement, typing pressure).

Investor Watchpoint: Vendors in identity verification (e.g., AuthID, BioCatch, Ping Identity) are already pivoting toward “behavioral zero trust.” Strategic M&A targets.

3. XDR Platforms with AI-Driven Detection

Extended Detection and Response (XDR) platforms are evolving from telemetry aggregators to autonomous detection engines. The XDR of tomorrow is an AI-driven defense fabric. AI is making detection less about “rules” and more about patterns unseen by humans.

Investor Watchpoint: Leading XDR vendors (SentinelOne, CrowdStrike, Palo Alto) will either expand AI R&D or acquire to stay ahead. Look for differentiated IP in federated learning and adversarial ML.

4. Cybersecurity for AI Systems

Securing AI models themselves—preventing data poisoning, prompt injection, and model exfiltration—is now a new domain. As AI is embedded into business logic, AI security will be treated as an enterprise risk category.

Investor Watchpoint: New startups (e.g., Lakera, HiddenLayer) are emerging with niche AI security tools. It’s early but parallels the rise of AppSec 10 years ago. High-potential greenfield.

IV. Barriers to AI Adoption in Security

Despite the promise, several frictions remain for widespread AI integration:

  • Explainability: CISOs are wary of “black box” AI. If a system flags a threat, they need to understand why—especially for compliance and incident response reporting.
  • False Positives/Negatives: Poorly tuned models can create alert fatigue or miss subtle attacks. These damages trust in AI systems.
  • Data Quality & Privacy: High-fidelity ML models require massive datasets—often containing sensitive logs. Data privacy regulations (GDPR, HIPAA) can restrict training.
  • Integration Complexity: AI solutions must integrate with legacy infrastructure—SIEMs, ticketing systems, etc. Vendor lock-in and closed ecosystems are pain points.
  • Skill Gaps: Operating AI-enhanced SecOps requires talent with both security and ML skills—a scarce profile.

Implication for Investors: Look for companies solving these frictions—e.g., startups offering explainable AI, synthetic data for model training, or APIs that abstract model complexity from the user.


V. Investment Implications

A. AI is an Enabler, Not a Strategy

A recurring mistake: backing a “cybersecurity + AI” pitch with no proof of problem solved. Investors should treat AI like encryption—it’s necessary, but not sufficient. The bar is real-world, referenceable deployments with measurable uplift (e.g., 30% fewer false positives, 2x faster MTTR).

B. Moats Will Be Data-Driven

The strongest AI models will be trained on proprietary, longitudinal threat data. Companies with large, diverse customer footprints and unified telemetry pipelines (e.g., Microsoft, CrowdStrike) are best positioned to compound their advantage.

C. Vertical-Specific AI Security is Coming

Sectors like healthcare, finance, and industrials will require tailored AI defense stacks due to unique data types and compliance needs. Vertical-focused security vendors (e.g., MedCrypt in healthcare) may command premium valuations as AI threats grow.

D. AI Startups Will Be Consolidation Targets

Expect ongoing M&A as legacy vendors acquire AI-native teams to stay competitive. For startups, the most likely exit remains acquisition—especially if they show technical differentiation + SOC integration readiness.

VI. Final Thought: Navigating the AI-Cyber Nexus

Cybersecurity is now a contest of data, intelligence, and speed. AI doesn’t replace defenders—but it does reshape the landscape for attackers and defenders alike. Over the next 24 months, enterprises will prioritize tools that reduce human workload, detect earlier, and automate response. Buyers will reward vendors that deliver trust through transparency and defensibility through data.

For investors, this is the moment to shift due diligence toward:

  • AI capability as a product differentiator, not just a buzzword
  • Explainability and integration as success indicators
  • Data access and telemetry breadth as competitive moats
  • Defense against both novel attacks and AI attacks

The adversary has AI. The defenders must, too. That is where the next cybersecurity alpha lies.

AI nightmare on Bank Street

Welcome to the second installment of ‘Journey to Trustworthy AI‘. Here, we continue our exploration of AI security, an area that’s changing at an unprecedented pace. In our first article, we outlined our project testing generative AI with security red team. We could hardly have imagined how quickly fiction would confront a real-world adversary.

The ensuing narrative is partly based on events from the past year, giving us a glimpse of what the future could hold in this rapidly evolving field. For confidentiality, we’ve protected the identities of all individuals and organizations involved, and obscured operational details.

Let’s take a step back into a different world, where fast response isn’t a cornerstone of banking systems, where our protagonist Natasha has moved on from the shiny marbled floors of Wall Street to a vibrant, rapidly growing neobank, AnonNeo Fintech. A significant shift, from a traditional banking setup to the digital frontier of finance. Natasha’s not just dealing with ledgers and transactions anymore, but with lines of code, digital wallets, and a customer base as diverse as the city she once used to call home. It’s still 2023, but the rules of the game are different.

Natasha, now the head of fraud at AnonNeo, finds herself dealing with a very different set of challenges. She’s no longer playing in the familiar territory of well-defined transactions and traditional fraud patterns. Her new playground is a volatile landscape of varied customer behavior, each one as unique as the individual behind the screen. They’re digital natives, making payments at online gaming platforms at midnight, buying cryptocurrencies at dawn, splitting bills over lunch, and shopping from international websites while commuting. They’re also well-funded startups looking to do more with their money, leverage better spending control for employees, with founders averse to walk into a traditional bank branch. The one thing that doesn’t change though is that they all care about protecting their financial assets.

Natasha’s team is well versed in the arts of the old guard, yet nimble enough to learn new tunes. She is a maestro in a symphony of firewalls and encryption, databases, and servers. Her orchestra is the heartbeat of the neobank, a rhythm she’s dedicated her life to protecting. But something is slowly going out of tune, something that will bring a new composition to life.

The months are rolling by in an unremarkable fashion. But somewhere, unbeknownst to Natasha and her team, a stealthy melody starts playing. Small, irregular transactions are taking place. They are minuscule, barely noticeable, not large enough to raise alarm in a traditional fraud detection system, lost in the daily humdrum of thousands of transactions.

Weeks turn into months, the silent tune continues, an uninvited soloist in Natasha’s orchestra. Natasha senses something’s off but can’t put a finger on it. A feeling that this second-generation immigrant finds hard to ignore. There are no big fraudulent transactions, no significant breaches, no grand alerts. Just a quiet, unsettling feeling that something is amiss.

As this melody becomes more persistent, Natasha decides to dig deeper. A detailed analysis is conducted on an unprecedented scale, sifting through every transaction, every log, every account. AnonNeo puts in place stricter controls, tighter monitoring systems. But this new tune is adapting, changing with every measure the bank enforces. It’s intelligent, it’s stealthy, it’s… alive.

Finally, after a grueling investigation, the harsh truth is unveiled. The neobank has been hemorrhaging money in the form of small, unnoticed transactions. These transactions have been slowly but steadily adapting to every new security measure, every control the bank puts in place. It was not an attack; it was a siege.

The silent soloist behind this was not a human but an AI. A smart, learning system, patiently and persistently conducting its stealthy symphony, draining the bank’s resources. It’s a sobering realization, a chilling testament to the power of artificial intelligence in the hands of wrongdoers.

The challenge amplifies when Natasha attempts to employ AI systems for fraud detection. She’s faced with her second obstacle: a messy, unstructured dataset, a reflection of her diverse customer base. The AI needs clean, well-labeled data to learn from, to understand what’s normal and what’s not. But the data Natasha has is more like an abstract painting than a neat spreadsheet.

As she grapples with these challenges, the ghost of the silent, stealthy adversary evolves almost as if it detects their attempts to stop it. A gradual, almost imperceptible outflow of funds turns into a deluge. The stealthy AI attack that she encountered is even more complex and adaptable than before.

Natasha sees the patterns she’d come to dread – the attack is as intelligent and patient. Only this time, it’s adjusting itself to a broader, more unpredictable range of customer behaviors. It’s thriving in the same chaos that is confounding Natasha’s attempts at setting up a robust AI-based fraud detection system.

This attack serves as a wake-up call, not just for Natasha’s bank, but the entire financial sector. AI technology represents a double-edged sword. On the one hand, AI offers improved services, reduced costs, and increased efficiency. On the other hand, it opens new vectors of attack for cybercriminals, who are ready to exploit the very same technology.

For Natasha, and everyone in fraud protection within the financial services industry, the fight against cybercrime is not a static one. They can’t rest on their laurels, celebrating the power of AI, while malicious actors are using the same technology against them. The war has a new battlefield, and they must adapt their strategies. They must learn about adversarial attacks, invest in adversarial training for their AI systems, and above all, stay vigilant.

In the end, AI, like any other tool, is only as good or bad as the hands that wield it. As Natasha contemplates this, she takes another sip of her coffee, staring at the digital fortress she’s vowed to protect. It’s still a chilly Monday morning, but Natasha feels the warmth of new determination coursing through her. The fight goes on. For now, things are quiet.

Almost too quiet?

And that’s our story for today. Journey to Trustworthy AI was produced in collaboration with Zove Security and UnGlitch. I’m Akshay Aggarwal, wishing you safe banking. In next week’s episode of Journey to Trustworthy AI we’ll cover Safeguarding AI Models.


Author’s note:

I originally posted this post on LinkedIn

Reference as A Journey Toward Trustworthy Artificial Intelligence: AI nightmare on Bank Street by Akshay Aggarwal, Zove Security

Behind the AI Curtain: A Journey Toward Trustworthy Artificial Intelligence

Upon gaining access to a top-tier generative Artificial Intelligence (AI) system, I found myself surprised by the revelations I encountered. For those unfamiliar, generative AI encompasses technologies such as GPT-4, the most recent AI system that powers notable chatbots like ChatGPT, Google Bard, Cohere, and DALL-E. Along with my colleagues from Zove Security, I was part of a small, expert team tasked to scrutinize the capabilities and constraints of this system, specifically investigating potential misuse. As the chief cybersecurity researcher for this effort, I am now sharing insights from our findings. This article is the first in a four-part series on our journey toward more trustworthy AI.

Over the last year, we formed a “red team” with the mission of conducting an adversarial test of the new model to identify potential vulnerabilities. Our exploration spanned various fields including aerospace, manufacturing, chemical engineering, and banking. We began by formulating hypotheses within these industries and aimed to use AI to either confirm or refute them. For instance, within aerospace, we speculated that we could potentially craft a faulty part that would pass initial tests but quickly fail under real-world conditions. This presented a dangerous possibility of deliberately defective spare parts infiltrating a supply chain and causing disastrous failures. Unfortunately, our hypothesis proved correct with unsettling ease.

Equally concerning was our success in evading financial fraud detection and behavior-based intrusion detection systems. However, the team was taken aback by the model’s thorough response to a hypothetical cyber-attack on crucial infrastructure.

Similar efforts by other teams leveraging OpenAI’s GPT, revealed the potential for the AI to suggest compounds suitable for nerve agents, effectively creating chemical weapons. By utilizing “plug-ins” to supply the ChatGPT chatbot with recent scientific research and a directory of chemical manufacturers, it was even able to pinpoint a potential manufacturing site for such a compound.

These discoveries highlighted the dichotomy of advanced AI technology. While it holds the power to boost and augment our scientific discoveries, it simultaneously harbors the potential to facilitate dangerous activities within physics, chemistry, and cybersecurity. Recognizing these risks, measures were taken to ensure that such outcomes are avoided when technology is made widely accessible.

Our AI Security team’s probing exercise aimed to alleviate public concerns regarding the deployment of potent AI systems in society. Our duty was not only to test the boundaries of the AI model but also to scrutinize it for potential issues like toxicity, bias, and linguistic prejudice. We assessed the model for a range of possible abuses, including perpetuating misinformation, facilitating plagiarism, and enabling illegal activities such as financial crimes and cyberattacks.

In our approach, we combined professional security analysts and penetration testers with industry experts. Over several months, these interdisciplinary teams formulated and tested hypotheses, aiming to breach current defenses and risk mitigations.

One mutual apprehension within the team was the risk associated with linking such powerful AI models to external knowledge sources via plug-ins. We elected not to take this route, although we understand that real-world adversaries may do so.

Another significant issue we identified involved bias in the model’s responses. We observed instances of gender, racial, and religious biases, along with overt stereotypes concerning marginalized communities.

Over time, we suggested alterations to the model and saw marked improvements in safety within the model’s responses. However, the quest for safety and fairness within AI systems remains a continuous endeavor.

In the grand scheme of technology, generative AI stands as an exciting frontier with its astounding capabilities. Our exploration has shed light on its dual nature – from empowering scientific advancements to potentially enabling harmful activities. Although our journey was filled with surprising discoveries, it reaffirmed the essential need for constant vigilance and innovation in cybersecurity. In a world where AI technologies are continually evolving, the work to ensure their safe, ethical, and fair use is an ongoing commitment. As we conclude this initial post in our series, we hope our insights will foster an understanding of the implications of AI in our society and the need for continued exploration. In the upcoming posts, we will delve deeper into the specific challenges and the corresponding measures we proposed to make AI a more secure and trustworthy tool for the future.


Authors Note:

Original Post: This post has been cross-posted from the original on LinkedIn

Next Posts in Series: AI nightmare on Bank Street

Reference as Behind the AI Curtain: A Journey Toward Trustworthy Artificial Intelligence by Akshay Aggarwal, Zove Security


Zove Security’s AI Technology Unit Acquired To Protect High-Value Targets

Zove Security’s AI unit acquired, enhancing cyber defense for high-value targets with ZoveTrustAI technology

Malicious actors are leveraging AI to scale complex attacks at lower costs. The ZoveTrustAI platform protects critical individuals from sophisticated attacks and paves the path to autonomous defense.” — Akshay Aggarwal, CEO, Zove Security

SEATTLE, WASHINGTON, USA, June 25, 2024 /EINPresswire.com/ — Zove Security, a leading provider of emerging technology and information security capabilities, announced today that its AI technology unit has been acquired by a stealth firm, a subsidiary of a renowned global technology enterprise. The acquisition includes all technology assets, exclusive rights to the ZoveTrustAI platform, and Zove’s dedicated operations team. The integration of Zove’s assets into the acquiring firm will be completed over the third quarter of the calendar year. The financial terms of the acquisition are not being disclosed.

ZoveTrustAI: A Game-Changer in Cybersecurity

The deal encompasses the proprietary ZoveTrustAI platform, an artificial intelligence system for devices that merges generative models with personal context and threat reports. This unique solution delivers incredibly relevant and actionable intelligence, enhancing cyber risk management by combining on-device large language models (LLMs) and server-based models. During field trials, ZoveTrustAI successfully identified multiple instances of previously unknown active attacks, demonstrating its effectiveness in real-world scenarios.

A Fruitful Collaboration

For almost two years, Zove Security co-created the solution with the acquiring firm. This solution protects high-value targets (HVTs), including executives, celebrities, and other sensitive individuals from cybercriminals and adversarial state actors. This partnership has focused on active attack identification, leveraging the strengths of both organizations to develop and refine ZoveTrustAI.

Future Integration and Capabilities

Post-acquisition, ZoveTrustAI will be integrated into a security solution designed to manage cyber risk for high-risk individuals. This technology is poised to revolutionize fraud detection and cyberattack response by utilizing personal context and on-device LLMs to deliver autonomous defense mechanisms. With secure on-device data processing, it will ensure your privacy while providing robust protection. It is designed to be smart, adaptive, and always one step ahead.

CEO Statement

Akshay Aggarwal, Founder and CEO of Zove Security, stated, “Advancements in Artificial Intelligence (AI) are poised to significantly impact cybersecurity. For most enterprises, AI presents both threats and potential. Malicious actors are leveraging AI to scale complex attacks at lower costs. The ZoveTrustAI platform allows enterprises to protect their critical users from sophisticated attacks and paves the path to autonomous defense.”

About Zove Security

Zove Security secures the products and platforms that power innovation and underpin our digital lives. Their mission is Platform Trust through secure engineering and trusted operations, ensuring users trust the technology they use and the companies behind them.

About the Acquiring Firm

The acquiring firm, currently in stealth mode, is part of a leading global tech enterprise known for its innovation and premium consumer electronics, including smartphones, PCs, tablets, wearables, and a range of software and services.

Akshay Aggarwal

Copyright © 2025 · Akshay Aggarwal