Sitemap

Day 21: Learnings from Financial Industry Leaders related to SaaS and Agentic AI —a paper by JPM

100 days of Agentic AI: From foundations to Autonomous workflows

11 min read · Jun 29, 2025


I read this JPM paper and it opened up a few thoughts and ideas — this blog is not a point-to-point mapping of the paper. But Mr. Pat Opet’s (CISO, JPM) view has opened up a few conversation starters on SaaS and Agentic AI, and on how the current state of security and innovation, with security left on the back burner, will impact us in the long run. Especially when we are handing a century’s worth of business into SaaS and AI hands via data.

JP Morgan’s open letter highlights the security vulnerabilities of AI procurement. LINK

This paper invoked a few thoughts

SaaS + Agentic AI = Blind Spot? — What are the risks, challenges and required guardrails:

I believe that given the pace at which AI innovation is happening, the state of security infrastructure, protocols, etc. is at least 5 years behind. This is the same place where we started with Crypto: the world was not ready for Crypto because security was behind. Yet both the application and the security innovation happened in parallel — a few major vulnerabilities happened, yes. It is a balancing act.

SaaS and Agentic AI open two critical areas of discussion.

  1. Introducing Agentic AI in SaaS heavy ecosystem
  2. Introducing SaaS AI in the Organization landscape (key theme of the JPM paper)

View 1: Introducing Agentic AI in SaaS heavy ecosystem

Most organizations have SaaS in their application ecosystem and data landscape. Organizations are not in control of the SaaS architecture: we input data into a vending machine and get output data back. Now deploying Agentic AI directly on this data line, with 40% SaaS and 60% built in-house, requires some deeper architecture and security thought.

With SaaS, you know the input data, the input and output interfaces, and the output data. Agentic AI goes beyond passive prediction — it autonomously takes actions, makes decisions, calls APIs, triggers workflows, and plans steps toward goals, often chaining tools, invoking external systems, and learning iteratively. We need to understand the context much more clearly to be able to do this. It acts with intent, holding the context, not just a response.

Likely, introducing Agentic AI on top of our own 60% in-house-built data and application landscape still requires resolving a lot of the semantic layer, understanding the landscape, etc. That is why every conference and every leader focuses on metadata and the business glossary more than ever.

No SaaS MCP:

If your SaaS vendor has not integrated MCP yet, implementing Agentic AI on top of it may need a lot of tactical work, including infusing API-based agents that carry business context for the agents to operate on. This is still context-aware but quite hard-bound, i.e., APIs with parameters need to be called specifically to build such an agent, and it is not auto-context-aware. This kit might be throw-away work, as all SaaS providers will eventually have to provide MCP: context-aware interfaces for agents to operate on.
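A minimal sketch of what such a hard-bound, non-MCP agent tool looks like. The CRM endpoint, tool names, and parameters below are hypothetical; the point is that without MCP every piece of business context must be wired in by hand and nothing is discoverable.

```python
# A hard-bound "API agent" tool for a SaaS vendor with no MCP.
# All names and the business context are illustrative assumptions.

def crm_get_customer(customer_id: str, region: str) -> dict:
    """Hard-bound wrapper around a hypothetical CRM REST call.

    Every parameter must be spelled out explicitly -- the agent has no
    protocol for discovering what this API does or what it returns.
    """
    # In a real system this would be an HTTP call; stubbed for illustration.
    return {"customer_id": customer_id, "region": region, "tier": "gold"}

# The business context the agent needs is injected manually as a static
# tool description, not discovered via MCP.
TOOL_REGISTRY = {
    "crm_get_customer": {
        "fn": crm_get_customer,
        "description": "Fetch a customer record. Use only for support workflows.",
        "parameters": {"customer_id": "string", "region": "ISO country code"},
    }
}

def call_tool(name: str, **kwargs) -> dict:
    tool = TOOL_REGISTRY[name]  # fails loudly on unknown tools
    return tool["fn"](**kwargs)
```

Every new SaaS surface means another hand-written registry entry, which is exactly the throw-away work described above.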

If there is no SaaS MCP yet, you need to orchestrate to enable Agentic AI in the ecosystem

Sam Altman did publicly urge broader adoption of the Model Context Protocol for AI agents. On X (formerly Twitter), he wrote: “people love MCP and we are excited to add support across our products. available today in the Agents SDK and support for ChatGPT’s desktop app + Responses API coming soon!”

Microsoft has built MCP support into its Windows AI Foundry, calling it the “USB-C of AI apps” and rolling out a controlled registry and consent prompts to ensure SaaS and CSP partners can plug in safely.

Sundar Pichai (Google CEO) also publicly endorsed MCP as a unifying standard for agent-tool interoperability, alongside Altman.

Together, these signals from OpenAI, Microsoft, and Google underscore a coordinated push for SaaS platforms and cloud providers to implement MCP endpoints — so that autonomous agents can operate with shared context, security, and governance.

SaaS MCP:

If the SaaS provider already has MCP, it is a happy problem to solve, as we have the context available. Much has been said about the vulnerabilities MCPs introduce into the ecosystem and the loopholes; still, having MCPs in the near future is inevitable. This means that until we get MCP plus commoditized security, we will have to continuously put in tactical fixes and build with limited exposure. A SaaS MCP will provide us the context — for example, a CRM MCP will help answer:

  • What is this data?
  • What is the AI trying to do with it?
  • What is and isn’t allowed?
  • Who reviews what the AI does?
  • What logs do we need for audit and safety?
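The five questions above can be sketched as a machine-readable context manifest. The field names below are illustrative assumptions, not part of any MCP specification; the point is that each question maps to an explicit, checkable field.

```python
# A hypothetical MCP-style context manifest for a CRM.
# Field names are illustrative, not from the MCP specification.
CRM_CONTEXT = {
    "data": "customer_records",               # What is this data?
    "agent_purpose": "summarize churn risk",  # What is the AI trying to do?
    "allowed_actions": ["read", "summarize"], # What is and isn't allowed?
    "forbidden_actions": ["delete", "export"],
    "human_reviewer": "crm-governance-team",  # Who reviews what the AI does?
    "audit_log_fields": ["prompt", "model_version", "output", "timestamp"],
}

def is_allowed(action: str, context: dict) -> bool:
    """Deny by default: an action must be explicitly allowed."""
    return action in context["allowed_actions"]
```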

Why is it needed?

Without MCP, the AI may:

  • Use confidential data without knowing it
  • Trigger actions that aren’t permitted
  • Act outside compliance or privacy boundaries
  • Leave no trace of what it did

With MCP:

  • You control the what, why, and how of every AI decision
  • You get visibility, governance, and trust
  • You set the stage for safe scaling of AI

Critical lesson on MCP I inferred from the JPM paper: do not leave security on the back burner. MCP will solve the MxN problem; it simplifies the ecosystem; it continuously provides context to your models. But security needs to continuously evolve, as it has been identified as a gap.

Source: By Author

The Security challenge and what could go wrong?

Challenge: Context Leakage

What could go wrong? Sensitive metadata (intent, purpose, policies) leaked in prompts or logs

Few likely Mitigation plans: Encrypt/tokenize context payloads; sanitize inputs/outputs; restrict who can view logs.

Challenge: Spoofed Context, Hallucinated Context, or Agent Washing

What could go wrong? Attackers can inject fake intent to elevate AI privileges. This is one of the biggest risks, especially when we only ring-fence SaaS platforms and do not integrate our organization’s security into the Agentic AI running on the SaaS.
Just by introducing Agentic RAG, ToT, CoT, etc., we cannot be sure the agents won’t hallucinate. In an agentic graph with 50 different actions in the chain and 5 LLM agents, if one LLM model hallucinates, it spreads through the entire nervous system: any agent-to-agent hand-off, agent-to-human hand-off, or agent-to-API hand-off that calls another static script may still carry the hallucinated outcome. Many SaaS providers are doing an agent wash of their underlying components and might bring them out claiming MCPs; deeper understanding and discussion is the effective first step.

Few likely Mitigation plans: Sign and verify context tokens (session or time based); enforce runtime context validation.
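The mitigation above can be sketched with Python’s standard library: a context payload is given an expiry and an HMAC signature, and the agent runtime refuses any token that is tampered with or stale. The shared secret and field names are illustrative assumptions; a real deployment would use managed keys.

```python
import hashlib
import hmac
import json
import time

SECRET = b"demo-shared-secret"  # illustrative; use a managed key in practice

def sign_context(context: dict, ttl_seconds: int = 300) -> dict:
    """Attach an expiry and an HMAC signature to a context payload."""
    payload = dict(context, expires_at=time.time() + ttl_seconds)
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_context(token: dict) -> bool:
    """Reject tampered or expired context before an agent may act on it."""
    body = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False  # spoofed or modified context
    return time.time() < token["payload"]["expires_at"]
```

A spoofed-intent attack then fails at the gate: changing any field in the payload invalidates the signature.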

Challenge: Unauthorized Access, Auditability, and Agent Entropy — Degrees of Freedom

What could go wrong? Agents can exceed their allowed scope (data or actions). The entropy of an agent is high (like a liquid), unlike static applications (like a solid). For in-house-built systems we can keep the degrees of freedom controllable; however, the degrees of freedom, agents colluding, creating more agents, and gaining access from a SaaS-based system cannot be controlled by in-house security or architecture teams.
Agents have higher degrees of freedom than an in-house system. Deploying them on SaaS that is not controlled by the organization, yet where our business data lives, needs new guardrails defined.

Few likely Mitigation plans: Bind context to agent identities; enforce role-based, least-privilege policies. Discuss new configuration plans with your SaaS providers before deploying and using Agentic AI directly from the SaaS platforms — this must include proper observability logs, clearly controllable degrees-of-freedom configuration, kill-switches for the Agentic AI, etc. Otherwise there is no end-to-end trace of “why” and “how” an AI decision was made; you need immutable, append-only logs capturing context, prompt, model version, identity, and outputs.
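The identity-binding and logging ideas above can be sketched together: each agent identity carries a least-privilege scope set, and every authorization decision is appended to an audit trail. Agent names and scope strings are hypothetical.

```python
# Sketch: bind scopes to agent identities, deny by default,
# and append every decision to an audit log. Names are illustrative.

AGENT_SCOPES = {
    "crm-summarizer": {"crm:read"},
    "ticket-router": {"crm:read", "tickets:write"},
}

AUDIT_LOG: list = []  # append-only in spirit; entries are never mutated

def authorize(agent_id: str, scope: str) -> bool:
    """Least-privilege check with an audit entry for every decision."""
    allowed = scope in AGENT_SCOPES.get(agent_id, set())
    AUDIT_LOG.append({  # capture identity, requested scope, and outcome
        "agent": agent_id,
        "scope": scope,
        "decision": "allow" if allowed else "deny",
    })
    return allowed
```

A production version would write to tamper-evident storage rather than an in-memory list, but the shape of the control is the same.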

Challenge: Unvalidated Actions and Invalidated Outcome

What could go wrong? The AI acts on incomplete or incorrect context, triggers unintended API calls, uses unintended data, and produces completely incorrect results that get summarized; going back to check the lineage and correctness is then super difficult.

Mitigation plan: Pre-execution policy checks; real-time validation against a live policy registry; discovery agents to build the context. A context validator with confidence and conviction scores is needed.
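One way to read “a context validator with confidence scores” is a pre-execution gate that scores how complete the context is and blocks the action below a threshold. The scoring heuristic, field names, and threshold below are illustrative assumptions.

```python
# Sketch of a pre-execution context validator with a confidence score.
# Required fields and the threshold are illustrative.

REQUIRED_FIELDS = ("data", "intent", "allowed_actions")

def context_confidence(context: dict) -> float:
    """Score how complete the context is before letting an agent act."""
    present = sum(1 for f in REQUIRED_FIELDS if context.get(f))
    return present / len(REQUIRED_FIELDS)

def validate_before_execution(context: dict, threshold: float = 0.99) -> bool:
    """Block execution unless the context clears the confidence bar."""
    return context_confidence(context) >= threshold
```

A richer validator would also check the context against the live policy registry, not just field presence.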

Challenge: Insecure SaaS Integration — Agentic AI OPS

What could go wrong? If your vendor’s SaaS APIs don’t natively honor MCP constraints, the SaaS is not 100% ready to be introduced into the Agentic AI ecosystem.

Mitigation plan: Introduce an MCP-aware API gateway that enforces context-based policies and redacts or denies violations. Introduce Agentic AI Ops for the vendor SaaS ecosystem — the discussion needs to start before using the Agentic AI’s outcome and bringing it into your environments.
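A minimal sketch of such a gateway: it sits between the agent and the vendor API, denies actions the context does not permit, and redacts fields the context marks sensitive. Field names and the policy shape are hypothetical.

```python
# Sketch of an MCP-aware gateway in front of a vendor SaaS API.
# The request/context shapes below are illustrative assumptions.

def gateway(request: dict, context: dict) -> dict:
    """Enforce context-based policy; redact or deny violations."""
    action = request["action"]
    if action not in context["allowed_actions"]:
        return {"status": "denied", "reason": f"{action} not permitted"}
    # Redact any payload fields the context marks as sensitive.
    redacted = {
        k: ("***" if k in context.get("sensitive_fields", []) else v)
        for k, v in request.get("payload", {}).items()
    }
    return {"status": "allowed", "payload": redacted}
```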

Challenge: Quadratic Complexity of Introducing Agentic AI Directly on SaaS

What could go wrong?

  1. Exponential API Fan-out: Every new agent × SaaS pair multiplies the calls, causing latency, timeouts, and hitting rate limits.
  2. Cost Explosion: API-call billing grows quadratically, blowing budget forecasts.
  3. Cascading Failures: A single agent error ripples through the graph, affecting many downstream actions.
  4. Untestable Combinatorics: It is impossible to pre-validate every path through a large agent/SaaS graph — undiscovered bugs lurk in edge cases.
  5. Constant Monitoring of Blind Spots: Sheer scale makes it hard to see which interaction chain caused an outage or security incident.

Few likely Mitigation plans:

  1. Hierarchical Orchestration: Group related steps under a supervisor agent to reduce direct fan-out. Fan-out Caps: Enforce a max parallel-call budget.
  2. Caching & Batching: Coalesce repeated fetches and batch writes. Budget Alerts & Circuit Breakers: Automatically pause workflows on cost thresholds; introduce kill-switches.
  3. Domain-Scoped Agents: Limit each agent’s “blast radius” to a small subsystem. Circuit Breakers & Fallbacks: Isolate and degrade gracefully on failure.
  4. Static Graph Pruning: Analyze and remove irrelevant branches before runtime. Simulation & Chaos Testing: Randomly inject failures to validate guardrails.
  5. Complexity Metrics & Dashboards: Track graph size, depth, and API-call counts in real time. End-to-End Tracing: Tag every call with a workflow ID.
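Two of the mitigations above, fan-out caps and circuit breakers, can be sketched in one small guard. The thresholds are illustrative; a real orchestrator would add timeouts and a half-open recovery state.

```python
# Sketch: a fan-out cap (max parallel-call budget) combined with a
# simple circuit breaker that trips after repeated failures.

class FanOutGuard:
    def __init__(self, max_calls: int = 5, failure_limit: int = 3):
        self.max_calls = max_calls        # parallel-call budget
        self.failure_limit = failure_limit
        self.in_flight = 0
        self.failures = 0                 # consecutive failures

    def acquire(self) -> bool:
        """Deny new calls once the budget is spent or the breaker is open."""
        if self.failures >= self.failure_limit:
            return False  # circuit open: stop cascading failures
        if self.in_flight >= self.max_calls:
            return False  # fan-out cap reached
        self.in_flight += 1
        return True

    def release(self, ok: bool) -> None:
        """Record the call's outcome; success resets the failure streak."""
        self.in_flight -= 1
        self.failures = 0 if ok else self.failures + 1
```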

Agentic AI and MAESTRO framework — need for a new Threat modelling framework — Link

The 7 Layer Reference Architecture for Agentic AI — Source: CSA
  • Agent Ecosystem. Risk: Malicious or impersonating agents infiltrate the marketplace.

Mitigation: Enforce signed agent manifests, vet listings, and maintain an immutable registry.

  • Security & Compliance. Risk: Attackers evade or compromise AI-driven defenses.

Mitigation: Continuous adversarial testing of security agents and transparent, explainable decision logs.

  • Evaluation & Observability. Risk: Performance metrics or logs are poisoned to mask failures.

Mitigation: Immutable, tamper-proof monitoring with real-time anomaly alerts.

  • Deployment & Infrastructure. Risk: Compromised containers or orchestration attacks grant full agent control.

Mitigation: Sign and version IaC, isolate runtimes, and apply network segmentation.

  • Agent Frameworks. Risk: Supply-chain backdoors or vulnerable libraries introduce hidden exploits.

Mitigation: Strict SBOM checks, code-signing of framework components, and regular dependency audits.

  • Data Operations. Risk: Poisoned or exfiltrated data corrupts agent behavior or leaks secrets.

Mitigation: End-to-end encryption, authenticated data pipelines, and input-sanity validation.

  • Foundation Models. Risk: Adversarial inputs fool models or enable full-model theft.

Mitigation: Adversarial hardening, API rate-limits, and continuous query-pattern monitoring.

View 2: Introducing SaaS AI into your ecosystem

In this view, the Gen AI / agents are directly integrated into the SaaS and provide only the summarized outcome of the data.

MCP enabled ecosystem is the starting condition

Why It Matters

  • Rapid Innovation: SaaS vendors continually bake AI features — chatbots, copilots, intelligent insights — into their platforms.
  • Black-Box Risk: You subscribe to “AI-enhanced” functionality without knowing exactly how your data is being used, stored, or shared.
  • Ecosystem Complexity: With dozens of SaaS apps talking to each other, AI may cross organizational boundaries in unexpected ways.

Key Considerations

Data Ownership & Compliance:

  • Clarify who owns the data you feed into the AI service. This is crucial.
  • Map regulatory requirements (GDPR, CCPA, industry-specific rules etc.) to each SaaS provider’s processing model.

Vendor Risk Assessment

  • Treat every AI-enabled vendor like any other critical infrastructure: demand pen-test reports, SOC 2 or ISO 27001 certifications, and transparent data-handling policies.
  • Verify whether AI is run in-house or on third-party models — ask about hosting, model provenance, and data retention.

Scoped Pilots & Sandboxes

  • Kick off with read-only or suggest-only configurations, not DIRECT ACTION-BASED ones, before granting write or action privileges. Do not give autonomy to agents within a SaaS box that is not owned by you, at least for now.
  • Use isolated environments to test AI behaviors on non-production data, logging every prompt, response, and downstream call.

The Security challenge and what could go wrong?

Challenge: Black-Box Model Behavior

What could go wrong? SaaS-embedded AI may change logic or outputs without notice, leading to unexpected decisions or degraded quality overall.

Likely Mitigation plan: Subscribe only to vendors with model-versioning and change-logs. Start conversations with your vendor now if you are planning for Agentic AI across the Organization landscape. Require notification of breaking changes and run impact tests.

Challenge: Data Privacy & Residency — Agentic-AI Ops

What could go wrong? Sensitive or regulated data sent into the SaaS AI could violate residency or retention policies.

Likely Mitigation plan: A new form of Evaluation-Driven SaaS outcome may need to be defined. Enforce data classification filters on inputs. Use in-region data processing or pseudonymization before invoking the AI. This will help avoid passing data via multiple black-boxes and unable to do lineage or back-track the results.

Challenge: Vendor Service Interruptions

What could go wrong? Outages or throttling of the SaaS AI service halts critical workflows or causes cascading failures.

Likely Mitigation plan: Architect graceful fallbacks (cached responses, local models). Define SLAs with penalties and multi-vendor redundancy.

Challenge: Compliance & Audit Requirements

What could go wrong? Lack of visibility into SaaS AI decision logs hampers regulatory reporting and forensics.

Likely Mitigation plan: Mandate audit-grade logging from the vendor. Mirror critical logs in your own immutable store for end-to-end traceability.

Challenge: Customization & Integration Gaps

What could go wrong? Out-of-the-box AI features don’t align with your workflows — forcing manual overrides or workaround hacks.

Likely Mitigation plan: Maintain an AI integration sandbox to build and validate custom adapters. Use middleware to normalize inputs/outputs to your schema.

Challenge: Cost & Usage Spikes

What could go wrong? Pay-as-you-go AI billing can suddenly balloon under heavy usage or model upgrades.

Likely Mitigation plan: Set up budget alerts and rate limits on API calls. Leverage batching, caching, and off-peak scheduling for non-urgent tasks.
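The budget-alert and rate-limit mitigation can be sketched as a small guard that meters every call against a spend cap and fires an alert before the cap is hit. The cap, alert ratio, and cost figures are illustrative assumptions.

```python
# Sketch: a usage budget guard for pay-as-you-go AI billing.
# Cap and alert threshold are illustrative.

class BudgetGuard:
    def __init__(self, cap_usd: float, alert_ratio: float = 0.8):
        self.cap = cap_usd
        self.alert_at = cap_usd * alert_ratio
        self.spent = 0.0
        self.alerted = False

    def charge(self, cost_usd: float) -> bool:
        """Return False (deny the call) once the cap would be exceeded."""
        if self.spent + cost_usd > self.cap:
            return False
        self.spent += cost_usd
        if not self.alerted and self.spent >= self.alert_at:
            self.alerted = True  # hook a real notification channel here
        return True
```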

At the end of the day, if we are introducing Agentic AI into our ecosystem, there MUST be airspace-like controls.

Action Steps before Agentic AI introduction

  1. Inventory AI Features across all subscribed SaaS platforms.
  2. Classify Data Sensitivity before turning on any AI module.
  3. Define a SaaS AI Procurement Checklist that includes security, privacy, and governance criteria.
  4. Train Teams on how to evaluate AI recommendations versus automatic actions.

Build agents to discover the above for you!

Some interesting reads on the same subject:

  1. Forbes — Link
  2. Salesforce — Link
  3. OwnData — Link
  4. AWS — Link
  5. HCL — Link
  6. SPIRL — Link


Written by LAKSHMI VENKATESH

I learn by writing: Data, AI, Cloud and Technology. All the views expressed here are my own and do not represent the views of the firm I work for.
