Pacific Data Integrators' Technology Insights Blog

Informatica Fall 2025 Release: What “Agentic AI” Means for Data Management

Written by PDI Marketing Team | Oct 30, 2025 7:48:40 PM
 

Informatica Fall ’25: Agentic AI for Data Teams

Informatica’s Intelligent Data Management Cloud (IDMC) Fall 2025 release brings “agentic AI” into everyday data management.
This update introduces CLAIRE Agents, AI Agent Engineering, AI Agent Hub, and unstructured data governance—plus tighter policy enforcement across Databricks, Amazon Redshift, and Microsoft Fabric. For data teams modernizing pipelines, migrating warehouses, or strengthening data quality, these capabilities accelerate discovery, ELT migration, and DQ automation—with built-in governance, lineage, parity testing, and FinOps controls. 🔗 Source

1. CLAIRE® Agents — AI that Works Alongside You

Informatica expands CLAIRE®, its AI engine, into domain-specific “agents” that handle tasks like exploration, ELT drafting, and data quality automation.

  1. Exploration & Discovery Agents: natural-language search across MDM and enterprise sources.
  2. ELT Agents: draft pipelines for data engineers to validate and productionize.
  3. Data Quality Agents (Public Preview): create and operationalize DQ rules from plain-language specs.
  4. Product Help Agents: embedded, context-aware assistance inside IDMC.
  5. Product Experience Agents (Private Preview): enrich product records using unstructured data.
  6. CLAIRE GPT Enhancements: multi-step reasoning via Azure OpenAI and AWS Bedrock (Claude). 🔗 CLAIRE Overview
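Informatica has not published the internals of its Data Quality Agents, but the underlying "plain-language spec becomes an executable rule" idea can be sketched in a few lines. The rule schema, field names, and operators below are purely illustrative assumptions, not Informatica's API:

```python
# Hypothetical "data quality as code" sketch: a declarative rule spec
# (the kind an agent might emit from a plain-language request) is
# compiled into a row-level check and applied to records.
# Rule schema and field names are illustrative, not Informatica's API.

def compile_rule(spec: dict):
    """Turn a declarative rule spec into a row-level check function."""
    field, op, value = spec["field"], spec["op"], spec["value"]
    ops = {
        "not_null": lambda v: v is not None,
        "min_length": lambda v: v is not None and len(v) >= value,
        "in_set": lambda v: v in value,
    }
    check = ops[op]
    return lambda row: check(row.get(field))

# "Customer email must be populated and at least 6 characters long."
rules = [
    compile_rule({"field": "email", "op": "not_null", "value": None}),
    compile_rule({"field": "email", "op": "min_length", "value": 6}),
]

rows = [{"email": "a@b.io"}, {"email": None}, {"email": "x@y"}]
failures = [row for row in rows if not all(r(row) for r in rules)]
print(len(failures))  # → 2 (rows failing at least one rule)
```

Keeping rules as data rather than ad-hoc scripts is what makes them reviewable, versionable, and promotable through an SDLC—the same property the release's "DQ as code" framing emphasizes.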
2. AI Agent Engineering

A no-code workspace to build, connect, orchestrate, and govern custom Informatica agents—with test consoles, logging, monitoring, and SDLC integration. 🔗 AI Agent Engineering  

3. AI Agent Hub

A central gallery of pre-built domain agents and automation recipes for tools like Jira, Salesforce, Dynamics, Snowflake, and Microsoft Teams. It also includes MCP servers for secure enterprise connectivity. 🔗 AI Agent Hub

4. Stronger Governance and Policy Enforcement

  1. Unstructured Data Governance (Private Preview): classify, catalog, and tag unstructured files within CDGC using hierarchical taxonomies.
  2. AI Governance in CDGC: model multi-agent systems and scan AI assets across Google Vertex AI.
  3. Policy Pushdown: apply RBAC/ABAC-based access policies directly on Databricks, Amazon Redshift, and Microsoft Fabric. 🔗 Cloud Data Governance & Catalog
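To make the ABAC idea concrete, here is a minimal sketch of a policy evaluated against user attributes before a query is served. The attribute names, policy shape, and resource identifier are assumptions for illustration; in IDMC the actual enforcement is pushed down into Databricks, Redshift, or Fabric rather than handled in application code:

```python
# Hypothetical policy-as-code sketch for ABAC-style access control.
# Attribute names ("department"), the policy shape, and the resource
# name are illustrative; real pushdown runs inside the data platform.

POLICY = {
    "resource": "sales.orders",
    "allow_if": {"department": {"finance", "sales"}},
    "mask_columns": ["customer_ssn"],
}

def authorize(user_attrs: dict, resource: str, policy: dict = POLICY):
    """Evaluate one ABAC policy: allow/deny plus columns to mask."""
    if resource != policy["resource"]:
        return {"allowed": False, "masked": []}
    for attr, allowed_values in policy["allow_if"].items():
        if user_attrs.get(attr) not in allowed_values:
            return {"allowed": False, "masked": []}
    return {"allowed": True, "masked": policy["mask_columns"]}

print(authorize({"department": "finance"}, "sales.orders"))
# → {'allowed': True, 'masked': ['customer_ssn']}
print(authorize({"department": "hr"}, "sales.orders"))
# → {'allowed': False, 'masked': []}
```

Expressing policies as data like this is what allows one definition to be enforced consistently across multiple platforms instead of re-implemented per engine.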

5. MDM and Product Data Upgrades
 
  1. Agentic PIM for Product 360: conversational stewardship for product data.
  2. Salesforce Agentforce Extension: unifies governed master data with Salesforce-native AI agents. 🔗 Product 360 Overview

The Bigger Picture: Why Agentic AI Matters

“Agentic AI” describes systems that reason, plan, and act autonomously—but with clear human oversight.

In data management, that means:
  • Auto-generated data pipelines and quality rules
  • Intelligent metadata discovery
  • Automated policy enforcement
  • Faster validation and monitoring cycles
This evolution mirrors what leading analysts call the next phase of AI-driven data fabric.

PDI’s Recommendations for Safe Adoption

Start Small — Pilot the Right Use Cases
  • Discovery and glossary curation using CLAIRE Agents.
  • ELT scaffolding for pipeline migration.
  • Data Quality as Code for consistent rule enforcement.
Implement Guardrails
  • Capture lineage for all agent-generated assets.
  • Enforce policy-as-code with RBAC/ABAC and masking.
  • Run KPI parity tests for all agent-authored pipelines.
  • Maintain full audit trails for prompts, revisions, and model IDs.
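A KPI parity test for an agent-authored pipeline can be as simple as comparing key aggregates against the legacy pipeline's output within a tolerance before promoting. The metric names, figures, and tolerance below are hypothetical:

```python
# Hypothetical KPI parity check: compare key aggregates from the legacy
# pipeline against an agent-authored candidate before promotion.
# Metric names, values, and the tolerance are illustrative.

def parity_check(legacy: dict, candidate: dict, rel_tol: float = 0.001):
    """Return the metrics whose relative difference exceeds rel_tol."""
    drift = {}
    for metric, expected in legacy.items():
        actual = candidate.get(metric)
        if actual is None:
            drift[metric] = "missing"
        elif abs(actual - expected) > rel_tol * abs(expected):
            drift[metric] = (expected, actual)
    return drift

legacy_kpis = {"row_count": 1_000_000, "total_revenue": 52_340_100.75}
agent_kpis = {"row_count": 1_000_000, "total_revenue": 52_450_000.00}

drift = parity_check(legacy_kpis, agent_kpis)
print(drift)  # total_revenue exceeds the 0.1% tolerance; row_count matches
```

Wiring a check like this into the promotion gate means an agent-drafted pipeline only ships when its numbers reconcile with the system it replaces.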
How Pacific Data Integrators (PDI) Helps
 
PDI helps you adopt Informatica’s agentic AI quickly and safely — with tested accelerators, FinOps guardrails, and hands-on enablement.

Our Services
  • Advisory & Roadmap (2–4 week readiness sprint)
  • Architecture Foundations: Composable, governed, cost-aware design
  • Implementation: CLAIRE Agents, AI Agent Hub, and Agent Engineering integration
  • Quality & Security: DQ as code, parity testing, and policy pushdown validation
  • FinOps: Budget controls, SLO dashboards, and observability
Implementation Checklist
 

  • Plan: Define success metrics; classify data sensitivity
  • Build: Enable CLAIRE Agents; connect MDM/CDGC; set up lineage
  • Validate: Run parity tests; pilot in a low-risk domain
  • Operate: Define SLOs; monitor cost, lineage, and approvals


Ready to begin? Ask about our 4-week Agentic Data Management Accelerator: Discovery → ELT Draft → DQ Rules → KPI Parity + Policy Checks.

Analyst Perspectives:
 
Analysts agree that “agentic AI” only delivers durable value when it’s anchored to composable architecture, active metadata and lineage, governance as code, and FinOps discipline—all operated across hybrid and multicloud estates. Here’s how the major analysts frame it and what that means for data leaders.
 
🧩 Make metadata active. Composable data fabric + augmented catalogs are the substrate for agent planning, data access, and policy checks. Gartner
 
⚙️ Run agents like products. Assign owners, SLAs, parity tests, and value KPIs; promote only when guardrails pass. McKinsey & Company
 
📈 Bake in FinOps. Enforce scan limits, partitioning/cluster hints, and cost alerts as code in orchestration. McKinsey & Company
 
☁️ Prepare for multicloud reality. Standardize lineage, testing, and policy pushdown across Databricks/Redshift/Fabric/Snowflake. IDC Blog
 
🧩 Govern the agent lifecycle. Use a framework like AEGIS to tie roles, approvals, logging, and audits to regulatory needs. Forrester
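The “FinOps as code” point above can be sketched as a budget guardrail evaluated in orchestration before a job runs. The budget keys, limits, and usage figures are hypothetical:

```python
# Hypothetical FinOps guardrail: check the day's usage against budget
# limits before an orchestrated job is allowed to run.
# Budget keys, thresholds, and usage figures are illustrative.

BUDGET = {"daily_scan_gb": 500, "daily_spend_usd": 120.0}

def within_budget(usage: dict, budget: dict = BUDGET):
    """Return (ok, violations) for the current day's usage."""
    violations = [k for k, limit in budget.items()
                  if usage.get(k, 0) > limit]
    return (not violations, violations)

ok, violations = within_budget(
    {"daily_scan_gb": 640, "daily_spend_usd": 95.0}
)
print(ok, violations)  # → False ['daily_scan_gb']
```

Because the limits live in code alongside the orchestration, they can be reviewed, versioned, and alerted on like any other pipeline asset.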


Ready to modernize with agentic AI? Contact Pacific Data Integrators to start your pilot.