Open access peer-reviewed chapter - ONLINE FIRST

Autonomous Policy Enforcement: AI Agent-Based Access Control Systems for Dynamic Resource Management

Written By

Yoram Segal and Adi Hod

Submitted: 15 August 2025 Reviewed: 13 October 2025 Published: 03 March 2026

DOI: 10.5772/intechopen.1013584


From the Edited Volume

Data Quality Matters - Best Practices for Integrity and Assurance [Working Title]

Sebastian Ventura, José M. Luna and Antonio R. Moya Martín-Castaño


Abstract

This chapter presents an approach to organizational data governance through autonomous Artificial Intelligence agent-based policy enforcement systems, specifically leveraging Data Security Platform technology capabilities. Traditional access control mechanisms often fail to adequately address the dynamic and complex nature of modern enterprise data environments, leading to significant governance gaps that compromise data quality, security, and compliance across distributed organizational infrastructures. The Intelligent Policy Agent Network (IPAN) architecture is introduced as a comprehensive solution where Large Language Model-powered agents operate as autonomous policy enforcers, performing real-time database discovery, enabling conversational access control, implementing dynamic data masking, and ensuring cross-database policy enforcement. Through empirical evaluation across diverse industry sectors including finance, insurance, and healthcare, the framework demonstrates substantial improvements in policy compliance rates, significant reductions in unauthorized access incidents, and dramatic decreases in manual governance overhead.

Keywords

  • AI agents
  • data governance
  • access control
  • policy enforcement
  • Large Language Models
  • enterprise security
  • database discovery
  • dynamic data masking
  • compliance management
  • multi-agent systems
  • regulatory compliance
  • adaptive resource management
  • autonomous systems

1. Introduction

In 2025, effective data governance has emerged as a fundamental determinant of organizational survival, competitive advantage, and regulatory compliance, yet the capacity of traditional governance frameworks to manage the complexity and velocity of modern enterprise data environments has reached a critical breaking point. The consequences of governance failures have escalated from operational inconveniences to existential threats, exemplified by the 2023 MOVEit Transfer vulnerability exploitation that compromised data from over 2,700 organizations worldwide, affecting approximately 95 million individuals and resulting in aggregate damages exceeding eight billion dollars in breach response costs, regulatory penalties, and litigation settlements. This incident starkly illustrates how a single governance weakness in third-party software can cascade across entire industry sectors, exposing sensitive data ranging from healthcare records and financial information to personally identifiable information across government agencies, Fortune 500 enterprises, and critical infrastructure providers. The traditional approach to data governance, characterized by manual policy creation, static role assignments, and reactive monitoring systems, has proven fundamentally inadequate for environments where data proliferates across organizational boundaries, regulatory jurisdictions shift rapidly, and threat actors exploit governance gaps with increasing sophistication.

In the contemporary, hyper-connected enterprise landscape, organizations are increasingly contending with an unprecedented proliferation of data sources, diverse user types, and complex access patterns that traditional data governance frameworks are ill-equipped to manage effectively. Modern enterprises typically manage data across multiple cloud platforms, including Amazon Web Services, Microsoft Azure, and Google Cloud Platform; on-premises databases spanning relational and NoSQL systems; data lakes storing petabytes of structured and unstructured information; real-time streaming platforms processing continuous data flows; and edge computing environments distributing processing capabilities to thousands of geographically dispersed locations. Each technological layer presents unique security models, access control mechanisms, authentication protocols, and compliance requirements that must be harmonized within a cohesive governance framework while maintaining performance, availability, and user experience standards. The inherent challenge extends significantly beyond simple user authentication and authorization, now encompassing dynamic policy interpretation that adapts to changing business contexts, real-time risk assessment that evaluates threat indicators across multiple dimensions, contextual access decisions that balance security requirements with operational needs, and continuous compliance monitoring across heterogeneous database systems that must satisfy diverse regulatory mandates simultaneously.

The complexity is further compounded by the need to support diverse user types, including full-time employees, temporary contractors, third-party partners, external customers, and automated systems, each with different access patterns and risk profiles that change dynamically based on business context, project requirements, geographical location, time of access, and temporal factors such as employment status changes or project lifecycle phases. Recent studies highlight the severity of these challenges, indicating that approximately 73 percent of organizations experience at least one significant data governance failure annually, with the average cost of such incidents reaching millions of dollars in direct costs, regulatory fines, litigation expenses, and reputational damage that can persist for years following the initial breach. Traditional governance systems suffer from several critical limitations, including policy drift, where actual access patterns diverge from intended policies over time due to ad-hoc exceptions and workarounds; scalability challenges as the number of data sources and users grows exponentially while governance resources remain constrained; compliance gaps due to the inability to monitor and enforce policies in real-time across distributed environments; and user friction caused by overly restrictive or poorly designed access controls that impede legitimate business operations and drive users toward risky workarounds such as unauthorized data copying or shadow IT solutions. Furthermore, the regulatory landscape continues to evolve rapidly, with new privacy regulations, industry-specific compliance requirements, and cross-jurisdictional data protection laws creating an increasingly complex compliance environment that requires sophisticated governance systems capable of adapting to changing requirements without extensive manual reconfiguration. 
Organizations must simultaneously navigate regulations such as the General Data Protection Regulation (GDPR), with its stringent consent requirements and right-to-erasure provisions; the California Consumer Privacy Act (CCPA) and its amendments expanding consumer data rights; the Health Insurance Portability and Accountability Act (HIPAA), governing protected health information with severe penalties for violations; the Sarbanes-Oxley Act (SOX), mandating financial data controls and audit trails; the Payment Card Industry Data Security Standard (PCI DSS), requiring specific technical safeguards for payment information; and numerous industry-specific requirements, including financial services regulations like Basel III, government data handling standards such as FedRAMP, and emerging artificial intelligence governance frameworks. Each regulatory regime imposes different data handling requirements, access control specifications, audit trail mandates, breach notification timelines, and documentation standards that must be consistently applied across diverse technological environments while maintaining evidence of compliance through comprehensive audit trails and policy enforcement documentation [1].

These governance inadequacies manifest through five fundamental challenges that render traditional approaches insufficient for contemporary enterprise environments and create the conditions under which catastrophic failures occur. First, policy drift represents the gradual but inevitable divergence between documented access control policies and actual access patterns, which emerges as organizations grant exceptions for urgent business needs, accommodate temporary project requirements, and respond to individual user requests without systematically updating policy definitions or removing outdated permissions when circumstances change. Second, scalability limitations constrain the effectiveness of governance systems as the number of data sources grows exponentially from dozens to thousands of databases, the user population expands to include not only employees but contractors, partners, and automated systems, and the volume of access requests increases from hundreds to millions of daily authorization decisions, overwhelming manual review processes and exceeding the capacity of rule-based systems to maintain consistent policy application. Third, real-time compliance gaps arise because traditional governance systems operate on periodic audit cycles rather than continuous monitoring, discovering policy violations hours or days after they occur rather than preventing unauthorized access at the moment of request, and generating compliance reports that document historical access patterns but provide no capability to intervene before sensitive data exposure occurs. 
Fourth, user friction from overly restrictive controls leads to productivity losses when legitimate business operations are blocked by governance systems that cannot distinguish between authorized and unauthorized access patterns, drives users toward dangerous workarounds such as sharing credentials, copying sensitive data to uncontrolled locations, or establishing shadow IT systems that bypass governance controls entirely, and creates organizational resistance to security policies that are perceived as obstacles rather than enablers of business objectives. Fifth, and perhaps most critically, traditional systems demonstrate a fundamental inability to handle dynamic, context-dependent access decisions that require evaluating multiple dimensions, including user role, data sensitivity, business justification, temporal factors, geographical location, current threat levels, and regulatory requirements, to determine appropriate access levels, masking strategies, or denial responses that align with both security requirements and operational needs.

Addressing these challenges requires a paradigm shift from traditional rule-based governance to intelligent, autonomous systems capable of adapting to the dynamic requirements of modern enterprise environments [2]. This chapter presents the Intelligent Policy Agent Network (IPAN) architecture as a comprehensive solution to the fundamental challenges of enterprise data governance, introducing a multi-agent artificial intelligence system that transforms policy enforcement from manual, reactive processes into autonomous, intelligent operations that adapt dynamically to changing organizational requirements and threat conditions. IPAN represents a distributed network of specialized AI agents, each powered by Large Language Models (LLMs) and designed to perform specific governance functions, including autonomous policy enforcement that continuously monitors and controls data access without human intervention, real-time database discovery that automatically identifies and classifies organizational data assets across heterogeneous environments, conversational access control that allows users to request permissions and receive explanations in natural language rather than technical specifications, and dynamic data masking that adaptively protects sensitive information based on user context, data classification, and regulatory requirements. 
The fundamental distinction between IPAN and traditional governance approaches lies in three critical capabilities that were previously impossible to achieve: first, the use of LLMs for natural language policy interpretation enables business users to author and maintain governance policies without requiring specialized technical expertise or formal policy languages; second, autonomous operation allows the system to make intelligent access control decisions, discover new data sources, and enforce policies without constant human oversight while maintaining comprehensive audit trails and explainability for regulatory compliance; third, real-time adaptation enables the system to respond immediately to changing threat conditions, evolving regulatory requirements, and dynamic business contexts by adjusting enforcement strategies, masking levels, and access permissions based on the current organizational state rather than relying on periodic manual updates.

Beyond introducing this novel architectural approach, the research reported in this chapter makes several empirically validated contributions that advance both the theoretical understanding and practical deployment of AI-driven governance systems. The primary contributions of this work include four significant advances that collectively address the critical gap between traditional access control systems and the dynamic requirements of modern enterprise data environments. First, architectural innovation through the IPAN framework represents the first comprehensive architecture for AI agent-based policy enforcement that integrates natural language policy interpretation, autonomous database discovery, dynamic access control, and continuous compliance monitoring within a unified, scalable system designed for enterprise deployment. Second, the integration methodology provides detailed technical guidance and proven approaches for integrating AI agent capabilities with existing Data Security Platform technologies, demonstrating how organizations can enhance their current governance infrastructure incrementally without requiring complete system replacement, thereby protecting existing investments while adding advanced autonomous capabilities. Third, industry validation through comprehensive empirical evaluation across multiple sectors, including financial services, insurance, and healthcare organizations, provides evidence-based demonstration of the approach’s effectiveness and adaptability to diverse regulatory environments, operational contexts, and organizational scales. 
Fourth, quantitative performance results demonstrate substantial improvements over traditional governance approaches, with empirical measurements showing a 94% improvement in policy compliance rates, a 67% reduction in unauthorized access incidents, and an 85% decrease in manual governance overhead, translating to measurable return on investment through reduced compliance costs, enhanced security posture, and improved operational efficiency.

The remainder of this chapter is organized as follows: Section 2 establishes the theoretical foundations of AI agent-based governance, introducing multi-agent systems principles, LLM capabilities for policy interpretation, and the detailed architecture of the IPAN, including its four primary agent types and their coordination mechanisms. Section 3 presents the technical implementation and integration approaches, covering agent architecture, communication protocols, LLM deployment strategies, database discovery techniques, dynamic data masking implementation, and integration with existing Data Security Platform infrastructure. Section 4 provides comprehensive industry applications and performance evaluation, detailing IPAN implementations in financial services and healthcare contexts, presenting the rigorous experimental methodology employed for performance assessment, reporting quantitative results and business impact, and discussing challenges, limitations, and lessons learned from real-world deployments. Section 5 explores future directions and provides implementation roadmaps, examining advanced AI capabilities under development, extended integration with emerging technologies, and phased deployment strategies for organizations adopting AI agent-based governance. Section 6 concludes the chapter by synthesizing key findings, reflecting on the transformative potential of autonomous AI agents for enterprise data governance, and identifying critical research directions for advancing the field.


2. Theoretical foundation and AI agent architecture

2.1 AI agents and autonomous governance systems

An AI agent is defined as a rational software entity that senses an environment, reasons about goals, and acts autonomously to achieve them [3, 4]. Modern agents rely on three core components that enable sophisticated autonomous behavior in complex environments. An LLM supplies generative reasoning, interpreting instructions and forming plans through natural language understanding and generation capabilities [5]. A persistent memory module stores context and past interactions, preserving long-horizon coherence and enabling continual learning from historical patterns and organizational knowledge [6]. A tool interface links the agent to external APIs, databases, and execution kernels so that it can read files, call services, and alter digital or physical systems, providing grounding in real-world operations [7].

The synergy of language models, memory, and tools allows an agent to run continuously, execute sophisticated tasks without constant supervision, scale across time zones and geographical boundaries, and optimize resources by selecting the most appropriate tool for each step based on task requirements and environmental conditions [5, 7]. Memory accelerates repeated operations by caching learned patterns and organizational context, the LLM adapts to novel situations through its pre-trained knowledge and reasoning capabilities, and tool use grounds reasoning in real-time data, producing adaptive and cost-effective automation that combines flexibility with reliability.
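The plan-act-remember loop described above can be sketched in a few lines. This is a minimal illustration, not a production agent: the "LLM" planner is a keyword stub standing in for a model call, and the `lookup` tool is a hypothetical example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Agent:
    tools: Dict[str, Callable[[str], str]]           # tool interface
    memory: List[str] = field(default_factory=list)  # persistent memory

    def plan(self, instruction: str) -> str:
        # Stubbed generative reasoning: in a real agent an LLM would
        # produce a plan; here we just pick a tool named in the text.
        for name in self.tools:
            if name in instruction:
                return name
        return "noop"

    def act(self, instruction: str) -> str:
        tool = self.plan(instruction)
        result = self.tools.get(tool, lambda _: "no tool matched")(instruction)
        # Memory caches the interaction so repeated operations can reuse it.
        self.memory.append(f"{instruction} -> {result}")
        return result

agent = Agent(tools={"lookup": lambda q: "looked up: " + q})
print(agent.act("lookup user role for alice"))
print(len(agent.memory))
```

The same skeleton extends naturally: swapping the stub planner for an actual model call changes `plan` without touching the tool or memory machinery.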

Agents require fast, semantically rich context to make informed decisions in governance scenarios (see Figure 1). The Model Context Protocol (MCP) provides uniform, low-latency access to files, databases, and functions without intermediary embedding pipelines, enabling direct communication between agents and data sources [8, 9]. MCP messages flow directly between host and server, enabling agents to read structured data or execute functions in a single round-trip without the overhead of traditional API architectures. Compared with conventional REST or gRPC APIs, MCP removes bespoke wrappers and preserves native file semantics, simplifying development and reducing error surfaces while maintaining security and access control [8]. Figure 1 shows the end-to-end IPAN workflow orchestrating data access decisions by processing natural language policy documents, user roles, and request reasons through specialized agents to generate a comprehensive security report and a complete HTML access decision webpage.

Figure 1.

Comprehensive data access decision orchestration in the IPAN architecture, showing end-to-end request processing and policy enforcement workflow.
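Because MCP frames its messages as JSON-RPC 2.0, a single read can be illustrated as one request/response pair. The handler below is a stand-in for a real MCP server, and the `db://hr/employees` URI is hypothetical; the method and result shapes loosely follow the published specification, and only the single-round-trip pattern is the point.

```python
import json

# Host-side request: one JSON-RPC message asking the server to read a resource.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/read",
    "params": {"uri": "db://hr/employees"},
}

def handle(msg: dict) -> dict:
    # Server stand-in: answers directly with native data in the same
    # round-trip, with no embedding pipeline or bespoke wrapper between.
    assert msg["method"] == "resources/read"
    return {
        "jsonrpc": "2.0",
        "id": msg["id"],
        "result": {"contents": [{"uri": msg["params"]["uri"],
                                 "text": "employee rows (masked)"}]},
    }

# Round-trip through JSON to mimic serialization over the wire.
response = handle(json.loads(json.dumps(request)))
print(response["result"]["contents"][0]["uri"])
```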

Within a multi-agent system, agents employ the Agent-to-Agent (A2A) protocol to share goals, partial outputs, and status updates securely across frameworks and runtimes, enabling sophisticated collaboration patterns [10, 11, 12]. A2A defines authentication mechanisms to verify agent identity, message schemas that structure information exchange, and hand-off semantics that govern task delegation, so that a planning agent can delegate subtasks to specialists, merge their outputs into cohesive results, and maintain end-to-end traceability for auditing and debugging purposes.
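The three concerns named above (agent identity, message schema, and hand-off semantics) can be sketched as a small data structure. The field names are illustrative, not the A2A wire format.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    DELEGATED = "delegated"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"

@dataclass
class A2AMessage:
    sender: str     # authenticated agent identity
    recipient: str
    task_id: str    # shared across hand-offs for end-to-end traceability
    status: Status
    payload: dict

def delegate(planner: str, specialist: str, task_id: str, subtask: dict) -> A2AMessage:
    # Hand-off semantics: the planner transfers ownership of a subtask
    # to a specialist while keeping the task identifier for auditing.
    return A2AMessage(planner, specialist, task_id, Status.DELEGATED, subtask)

msg = delegate("planning-agent", "masking-agent", "req-42",
               {"action": "mask", "columns": ["ssn"]})
print(msg.status.value, msg.task_id)
```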

Coupling MCP for data access with A2A for coordination allows organizations to build autonomous policy-enforcement engines that combine data awareness with collaborative intelligence. Each access request triggers an evaluation agent that gathers contextual signals through MCP, including user attributes, data classifications, and environmental conditions; consults policy agents for risk assessment based on organizational policies and regulatory requirements; and uses A2A to order enforcement agents to grant, deny, or revoke privileges in real time based on the collective analysis [13, 14]. The outcome is dynamic, fine-grained resource management that adapts to evolving threats while preserving auditability and compliance through comprehensive logging and explainable decision-making.
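The request flow just described can be sketched as three stages: gather signals, assess risk, enforce. The signal names, risk weights, and thresholds below are illustrative assumptions, with plain functions standing in for the MCP lookups and A2A enforcement orders.

```python
def gather_signals(user: str) -> dict:
    # Stand-in for MCP lookups of user attributes and data classification.
    directory = {"alice": {"role": "analyst", "data_class": "pii",
                           "off_hours": False}}
    return directory[user]

def assess_risk(signals: dict) -> float:
    # Stand-in for policy-agent risk assessment; weights are assumptions.
    risk = 0.0
    if signals["data_class"] == "pii":
        risk += 0.5
    if signals["off_hours"]:
        risk += 0.3
    if signals["role"] not in {"analyst", "dba"}:
        risk += 0.4
    return risk

def enforce(risk: float) -> str:
    # Stand-in for an A2A order to an enforcement agent.
    if risk < 0.6:
        return "grant"
    if risk < 0.9:
        return "grant_with_masking"
    return "deny"

print(enforce(assess_risk(gather_signals("alice"))))  # low-risk request
```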

2.2 Multi-agent systems in enterprise environments

Multi-agent systems theory provides the fundamental framework for understanding how autonomous agents can collaborate effectively to achieve complex governance objectives within an enterprise environment. Individual agents within a multi-agent system must coordinate their activities while maintaining independence, allowing them to exhibit adaptive behavior and demonstrate emergent intelligence that surpasses the capabilities of any single component. The application of multi-agent systems principles to data governance represents a significant departure from traditional centralized approaches, which typically rely on a single, monolithic policy decision point that becomes a bottleneck and a single point of failure as organizational complexity increases.

Instead, a multi-agent system-based approach distributes intelligence and decision-making capabilities across multiple specialized agents, each optimized for specific governance functions such as data discovery, policy interpretation, access enforcement, and compliance monitoring. This distribution of capabilities provides several key advantages, including horizontal scalability, where additional agents can be added to handle increased load without architectural changes; fault tolerance through redundancy and failover mechanisms that ensure continued operation even when individual agents fail; specialized optimization, where each agent type can be optimized for its specific function; and emergent intelligence, where the collective behavior of the agents leads to enhanced overall system performance.

The agent system produces capabilities that exceed the sum of individual agent capabilities (see Figure 2).

Figure 2.

Multi-agent system architectural advantages demonstrating horizontal scalability, fault tolerance, specialized optimization, and emergent intelligence capabilities.

Healthcare environments particularly benefit from multi-agent systems that integrate AI technologies to create paradigm shifts in data security and patient care, where the distributed nature of healthcare data, combined with strict regulatory requirements and the need for real-time access in emergency situations, makes healthcare an ideal domain for demonstrating the effectiveness of multi-agent governance systems [15, 16]. The scalability advantages of multi-agent systems become particularly apparent in large enterprise environments, where centralized systems would become bottlenecks, as distributing governance functions across multiple agents allows the system to handle increased load by adding additional agents rather than requiring architectural changes to a monolithic system.

Key theoretical concepts from multi-agent systems that apply directly to governance frameworks include agent autonomy, which allows individual agents to make decisions independently based on their specialized knowledge and capabilities while contributing to overall system objectives. Agent communication enables coordination and information sharing between agents through standardized protocols and message formats that ensure interoperability and enable complex collaborative behaviors. Emergent behavior occurs when the collective behavior of the agent system produces capabilities and intelligence that exceed what any individual agent could achieve, resulting in sophisticated governance capabilities that adapt to changing conditions and requirements.

2.3 Large Language Models in policy interpretation

Recent advancements in LLMs have fundamentally transformed how automated systems can understand and interpret human-authored policies, with foundation models and LLMs, trained on massive amounts of data to perform various downstream tasks, emerging as particularly promising drivers for Natural Language Processing and other AI-related applications in governance contexts [17]. Traditional access control systems typically necessitate that policies be expressed in formal languages or rigid rule structures that require specialized expertise to create and maintain, creating barriers to policy authoring and maintenance that often require dedicated security professionals to translate business requirements into technical policy specifications.

This translation process introduces opportunities for errors and misunderstandings that can compromise the effectiveness of governance systems, while also creating delays and inefficiencies in policy updates and modifications. In contrast, LLM-powered agents possess the capacity to process natural language policy documents, comprehend subtle contextual nuances, and render decisions based on semantic understanding rather than merely syntactic pattern matching, enabling business users to author policies in natural language while reducing the barrier to policy creation and maintenance, and improving the alignment between business intent and technical implementation.

The application of LLMs to policy interpretation involves several sophisticated techniques, including semantic parsing to extract meaning from policy statements even when they are expressed in ambiguous or incomplete natural language; context understanding to consider the broader organizational and regulatory context when interpreting policies; ambiguity resolution to handle unclear or conflicting policy statements through intelligent inference and contextual analysis; and intent inference to understand the underlying purpose of policy requirements even when they are not explicitly stated (see Figure 3).

Figure 3.

LLM-powered policy interpretation workflow demonstrating natural language policy processing, semantic analysis, and automated decision generation.
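The input/output contract of policy interpretation (natural language in, enforceable rule out) can be illustrated with a deliberately naive extractor. A production Policy Interpretation Agent would prompt an LLM rather than match keywords, and the role and data vocabularies below are hypothetical.

```python
import re

def interpret(policy_text: str) -> dict:
    # Keyword stand-in for LLM semantic parsing: extract effect,
    # affected roles, and affected data categories from one sentence.
    text = policy_text.lower()
    return {
        "effect": "deny" if "must not" in text else "allow",
        "roles": re.findall(r"\b(contractors?|analysts?|employees?)\b", text),
        "data": re.findall(r"\b(salary|ssn|health) (?:data|records?)\b", text),
    }

rule = interpret("Contractors must not view salary data outside the EU.")
print(rule["effect"], rule["roles"], rule["data"])
```

Even this toy version makes the design point: the governance pipeline downstream consumes a structured rule and is indifferent to whether a regex or an LLM produced it, which is what allows the interpretation layer to be upgraded independently.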

Advanced neural network architectures, including EfficientNet-B7 Convolutional Neural Networks, variational autoencoders (VAEs), and Siamese networks, demonstrate sophisticated pattern recognition capabilities that can enhance policy interpretation through deep learning-based behavioral analysis, enabling the identification of patterns in user behavior, data access patterns, and policy compliance that would be difficult or impossible for traditional rule-based systems to detect [18]. The integration of LLMs into governance systems also enables sophisticated explanation capabilities, allowing the system to provide natural language explanations for access decisions that help users understand why their access requests were approved or denied and administrators understand the reasoning behind automated decisions.

2.4 The Intelligent Policy Agent Network architecture

The IPAN architecture represents a comprehensive framework for implementing AI agent-based governance systems that address the complex requirements of modern enterprise data environments through a distributed, intelligent approach to policy enforcement and compliance management. The architecture consists of four primary agent types, each specialized in specific governance functions while maintaining the ability to coordinate and collaborate with other agents to achieve comprehensive governance coverage across diverse organizational systems and data sources.

Discovery Agents serve as the foundational intelligence layer of the IPAN architecture, responsible for the autonomous identification, connection, and analysis of database systems across enterprise environments. These agents employ sophisticated techniques for database discovery, including network scanning, configuration management database integration, cloud resource discovery through APIs, and manual registration processes for systems that cannot be automatically discovered. Discovery Agents must handle heterogeneous database technologies, varying security configurations, and diverse data schemas while providing comprehensive discovery and classification of organizational data assets that form the foundation for effective governance decision-making.
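A discovery-and-classification pass can be sketched against a mock inventory. The inventory and the sensitive-column list are stand-ins for the network scanning, CMDB integration, and cloud-API techniques described above.

```python
# Hypothetical inventory that a Discovery Agent might assemble from
# network scans and cloud resource APIs.
MOCK_INVENTORY = [
    {"host": "db1.corp", "engine": "postgres", "columns": ["name", "ssn"]},
    {"host": "db2.corp", "engine": "mongodb", "columns": ["sku", "price"]},
]

# Illustrative sensitive-column vocabulary; real classifiers would use
# richer signals (data sampling, regex profiles, ML classification).
SENSITIVE = {"ssn", "dob", "salary", "diagnosis"}

def classify(source: dict) -> dict:
    hits = SENSITIVE.intersection(source["columns"])
    source["classification"] = "restricted" if hits else "general"
    source["sensitive_columns"] = sorted(hits)
    return source

catalog = [classify(s) for s in MOCK_INVENTORY]
print([(s["host"], s["classification"]) for s in catalog])
```

The resulting catalog is the artifact the rest of the network consumes: Policy Interpretation and Enforcement Agents key their decisions off the classification labels rather than raw schemas.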

Policy Interpretation Agents leverage LLM capabilities to process and understand organizational policies expressed in natural language, enabling business users to author and maintain policies without requiring specialized technical expertise [19]. These agents employ advanced natural language processing techniques to extract semantic meaning from policy documents, resolve ambiguities through contextual analysis, and translate business requirements into enforceable technical specifications. The agents maintain organizational context and terminology through fine-tuning processes that ensure accurate interpretation of organization-specific policies and requirements.

Enforcement Agents implement dynamic access control and data protection mechanisms based on policy interpretations and contextual analysis provided by other agents in the network. These agents employ sophisticated decision-making algorithms that consider multiple factors, including user context, data sensitivity, business justification, regulatory requirements, and current threat conditions, to make appropriate access control decisions. Enforcement Agents implement various protection mechanisms, including dynamic data masking, selective redaction, format-preserving encryption, and tokenization, based on the specific requirements of each access request.
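Two of the protection mechanisms named above, tokenization and redaction, can be sketched for a single record. The role-to-mechanism mapping is an illustrative assumption rather than a prescribed policy.

```python
import hashlib

def tokenize(value: str) -> str:
    # Deterministic token: the same input always maps to the same token,
    # so masked columns remain joinable across queries.
    return "tok_" + hashlib.sha256(value.encode()).hexdigest()[:8]

def mask_record(record: dict, role: str) -> dict:
    out = dict(record)
    if role == "dba":
        return out                          # full visibility (assumed)
    if role == "analyst":
        out["ssn"] = tokenize(out["ssn"])   # joinable but unreadable
    else:
        out["ssn"] = "***-**-****"          # full redaction
    return out

row = {"name": "Alice", "ssn": "123-45-6789"}
print(mask_record(row, "analyst")["ssn"])
print(mask_record(row, "contractor")["ssn"])
```

Deterministic tokenization is the design choice worth noting: it preserves referential integrity for analytics while keeping the underlying identifier unreadable, whereas redaction destroys joinability entirely.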

Monitoring Agents provide continuous oversight and analytics capabilities that track governance effectiveness, identify policy violations, and generate comprehensive audit trails for compliance and forensic analysis. These agents employ machine learning algorithms to identify patterns in user behavior, detect anomalous access patterns, and provide predictive analytics that enable proactive governance responses to emerging threats and compliance risks (see Figure 4).

Figure 4.

Comprehensive IPAN architecture framework depicting four primary agent types, their specialized functions, and coordination mechanisms.
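As a minimal stand-in for the learning-based detectors described above, a baseline comparison already illustrates the idea: flag a user whose daily access volume deviates sharply from their own history. The 3-sigma threshold is an assumption.

```python
import statistics

def is_anomalous(history, today, sigmas=3.0):
    # Compare today's access count against the user's own baseline.
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero spread
    return abs(today - mean) > sigmas * stdev

history = [10, 12, 11, 9, 13, 10, 12]   # typical daily access counts
print(is_anomalous(history, 11))        # ordinary day
print(is_anomalous(history, 400))       # bulk-export-style spike
```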

2.5 Agent types and coordination

The coordination mechanisms employed in IPAN implementations draw from established multi-agent systems coordination patterns, including contract net protocols for task allocation, where agents bid on governance tasks based on their capabilities and current load; blackboard systems for shared information management, where agents can share information and coordinate activities through shared data structures; and consensus algorithms for critical decision-making that ensure important governance decisions are made collaboratively with appropriate validation and verification.
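The contract-net pattern can be sketched as capability-filtered bidding in which the least-loaded capable agent wins the task. The agent roster and scoring rule are illustrative assumptions.

```python
# Hypothetical roster of agents advertising skills and current load.
AGENTS = [
    {"name": "enforcer-1", "skills": {"masking"}, "load": 0.7},
    {"name": "enforcer-2", "skills": {"masking", "tokenization"}, "load": 0.2},
    {"name": "monitor-1", "skills": {"audit"}, "load": 0.1},
]

def award(task_skill: str) -> str:
    # Contract-net round: capable agents bid their current load;
    # the lowest bid (least-loaded agent) wins the contract.
    bids = [(a["load"], a["name"]) for a in AGENTS if task_skill in a["skills"]]
    if not bids:
        raise ValueError(f"no capable agent for {task_skill!r}")
    return min(bids)[1]

print(award("masking"))
```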

Formal verification approaches demonstrate significant potential for enhancing controllable agent behavior and constraint satisfaction, with recent research showing substantial performance improvements in agent control systems that provide mathematical guarantees of correct behavior [20]. Machine learning applications in access control require a comprehensive taxonomic understanding of the dynamic nature of users, resources, and environments, where traditional systems fail to capture the complexity and variability of real-world access patterns and requirements [21].

The agent coordination framework ensures that individual agents can operate autonomously while contributing to overall system objectives through standardized communication protocols, shared data structures, and collaborative decision-making processes. This coordination enables the system to handle complex governance scenarios that require input from multiple agents while maintaining consistency and reliability in governance decisions across diverse organizational systems and contexts (see Figure 5).

Figure 5.

Agent-type coordination hierarchy showing discovery, policy interpretation, enforcement, and monitoring agents with their interaction patterns.



3. Technical implementation and integration

This section describes the proof-of-concept implementation of the IPAN architecture to demonstrate the feasibility of agent-based governance systems. The implementation employs simulated environments and serves to validate core architectural principles rather than provide production-ready deployment specifications. Organizations can adapt these general approaches using diverse technology stacks, deployment models, and infrastructure configurations based on their specific requirements and constraints (Figure 6).

Figure 6.

Core technical implementation components of IPAN including agent architecture, communication protocols, LLM integration, and data masking subsystems.

3.1 Agent architecture and communication protocols

The IPAN proof-of-concept implements a distributed multi-agent architecture, where each agent type operates as an independent service, communicating through standard messaging protocols. Agent implementation follows microservices design patterns, using modern programming languages (our implementation used Python, though Java, Go, or other languages would work equally well) with asynchronous processing capabilities to handle concurrent governance operations [22].

Inter-agent communication employs message-based protocols to enable loose coupling and scalability. Our implementation utilized distributed message queuing systems for asynchronous event propagation and remote procedure call frameworks for synchronous request-response patterns, though alternative messaging technologies (message brokers, event streaming platforms, or direct API calls) could achieve similar functionality (see Figure 7). Messages carry correlation identifiers, timestamps, and payload data structured using schema definition languages to ensure type safety and compatibility across agent versions.
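A minimal message envelope along these lines, with correlation identifier, timestamp, and schema version, might look as follows. The field names are hypothetical; a real deployment would likely define the schema in a language such as Protocol Buffers or Avro rather than ad hoc JSON.

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class AgentMessage:
    message_type: str  # e.g. "access_request" or "policy_update"
    payload: dict
    correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)
    schema_version: str = "1.0"  # guards compatibility across agent versions

def serialize(msg: AgentMessage) -> str:
    return json.dumps(asdict(msg))

def deserialize(raw: str) -> AgentMessage:
    return AgentMessage(**json.loads(raw))

msg = AgentMessage("access_request", {"user": "alice", "table": "claims"})
assert deserialize(serialize(msg)) == msg  # lossless round trip
```

The correlation identifier lets monitoring agents stitch a request's path across queue hops into a single audit trail entry.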

Figure 7.

Communication protocol architecture demonstrating message-based patterns, authentication mechanisms, and security components ensuring IPAN system reliability.

Agent deployment leverages containerization for portability and isolation, with orchestration platforms managing lifecycle operations, scaling, and failure recovery. Resource requirements vary by agent type based on workload characteristics, ranging from lightweight agents requiring minimal compute resources (2–4 CPU cores, 4–8GB memory) for policy interpretation to resource-intensive agents (8–16 CPU cores, 16–64GB memory) for discovery and monitoring operations processing large data volumes. Organizations can deploy agents on cloud infrastructure, on-premises hardware, or hybrid environments depending on data residency requirements and operational preferences.

Security architecture incorporates authentication and authorization for all inter-agent communications, encrypted transport channels, and comprehensive audit logging of agent activities. The proof-of-concept demonstrates these principles using industry-standard approaches, though specific security implementations should align with organizational security frameworks and compliance requirements [23].

3.2 Large Language Model integration

Large Language Model integration enables Policy Interpretation Agents to understand natural language policies, and Enforcement Agents to generate human-readable explanations for access control decisions. The proof-of-concept implementation demonstrates LLM integration through cloud-based API services (our implementation used one commercially available LLM platform, though organizations could utilize alternative providers or self-hosted models).

Large Language Model invocation follows standard patterns: agents construct prompts incorporating policy text, user context, and access request details; submit requests to LLM inference endpoints with configured generation parameters (temperature settings balancing determinism and flexibility, token limits, and other model-specific controls); and parse responses to extract structured decisions or explanations. Different agent types employ different parameter configurations reflecting their operational requirements – policy interpretation benefits from conservative settings favoring consistency, while explanation generation employs more flexible settings enabling natural language variation.
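The invocation pattern above can be sketched as follows. The parameter values and prompt wording are illustrative assumptions, and `llm_call` is a placeholder for whichever inference endpoint an organization configures, not a specific provider's API.

```python
import json

# Generation parameters per agent role (values are illustrative):
# interpretation favors determinism; explanation allows variation.
PARAMS = {
    "policy_interpretation": {"temperature": 0.0, "max_tokens": 256},
    "explanation": {"temperature": 0.7, "max_tokens": 512},
}

def build_prompt(policy_text: str, user_context: str, request: str) -> str:
    return (
        'Decide the access request. Reply with JSON {"decision": ..., "reason": ...}.\n'
        f"Policy: {policy_text}\nUser: {user_context}\nRequest: {request}"
    )

def interpret(policy_text, user_context, request, llm_call):
    # llm_call stands in for whichever inference endpoint is configured.
    raw = llm_call(build_prompt(policy_text, user_context, request),
                   **PARAMS["policy_interpretation"])
    result = json.loads(raw)  # parse the structured decision
    if result.get("decision") not in {"allow", "deny", "mask"}:
        raise ValueError("unparseable decision")
    return result
```

Validating the parsed decision against a closed set of verbs is important in practice, since free-form model output cannot be enforced directly.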

Organizations may optionally adapt pre-trained models to governance domains through fine-tuning on organizational policies, though the proof-of-concept relies on general-purpose models demonstrating adequate performance in simulated scenarios. Fine-tuning approaches, when employed, typically involve compiling representative policy corpora, conducting supervised training with appropriate regularization, and validating adapted models against held-out test cases [24]. Production deployments should evaluate whether general-purpose models suffice or if domain adaptation provides meaningful accuracy improvements (see Figure 8).

Figure 8.

LLM integration process showing prompt construction, API invocation, and response parsing for policy interpretation and explanation generation.

3.3 Database discovery and analysis

Discovery agents identify and catalog database systems across organizational infrastructure through a combination of discovery techniques. The proof-of-concept demonstrates principles using simulated database environments, though production deployments could employ network scanning tools (with appropriate authorization), integration with IT asset management systems, cloud provider APIs, or manual registration processes, depending on organizational policies and infrastructure characteristics. Important caveat: Automated discovery techniques require explicit authorization from organizational security teams before deployment, as unauthorized scanning may violate security policies.

Once discovered, agents establish connections using appropriate authentication mechanisms (service accounts, certificate-based authentication, federated identity integration, or API keys) and retrieve metadata about database schemas, table structures, and data types. Connection approaches vary based on database technologies and security requirements – relational databases typically expose schema metadata through standard query interfaces, while NoSQL and cloud databases may require vendor-specific API calls (see Figure 9).
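As a concrete sketch of metadata retrieval through a standard query interface, the following uses SQLite as a stand-in for an `information_schema`-style catalog; other relational engines expose equivalent views, and the function shape here is an assumption for illustration.

```python
import sqlite3

def catalog_tables(conn: sqlite3.Connection) -> dict:
    # SQLite stand-in for information_schema-style metadata queries.
    meta = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        meta[table] = [(c[1], c[2]) for c in cols]  # (column name, declared type)
    return meta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, ssn TEXT, diagnosis TEXT)")
print(catalog_tables(conn))
# {'patients': [('id', 'INTEGER'), ('ssn', 'TEXT'), ('diagnosis', 'TEXT')]}
```

The returned column inventory is what downstream classification agents consume.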

Figure 9.

Database discovery workflow illustrating automated identification, metadata extraction, and sensitivity classification across heterogeneous database systems.

Data classification employs pattern matching and content analysis to identify sensitive information types. The proof-of-concept implementation demonstrates rule-based classification, matching column names, data patterns, and sample values against sensitivity indicators, though production systems could incorporate machine learning classifiers for improved accuracy. Classification results inform policy enforcement and masking decisions [25].
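A minimal rule-based classifier in this spirit, matching column names first and sampled values second, could look like the sketch below. The sensitivity labels, name hints, and regular expressions are illustrative assumptions, not the platform's actual rule set.

```python
import re

# Illustrative sensitivity indicators: column-name hints and value patterns.
NAME_HINTS = {"ssn": "PII", "email": "PII", "card_number": "PCI", "diagnosis": "PHI"}
VALUE_PATTERNS = {
    "PII": re.compile(r"^\d{3}-\d{2}-\d{4}$"),  # SSN-like values
    "PCI": re.compile(r"^\d{13,19}$"),          # card-number-like values
}

def classify_column(name: str, sample_values: list) -> str:
    # Column-name match takes precedence; fall back to sampled values.
    key = name.lower()
    if key in NAME_HINTS:
        return NAME_HINTS[key]
    for label, pattern in VALUE_PATTERNS.items():
        if sample_values and all(pattern.match(v) for v in sample_values):
            return label
    return "PUBLIC"

assert classify_column("SSN", []) == "PII"
assert classify_column("account", ["123-45-6789"]) == "PII"
```

Requiring every sampled value to match keeps false positives down at the cost of missing mixed-quality columns, one reason production systems layer ML classifiers on top.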

3.4 Dynamic data masking implementation

Enforcement agents apply context-aware data masking to protect sensitive information while preserving data utility for authorized use cases. Masking techniques include format-preserving encryption, which maintains data format characteristics; tokenization, which replaces sensitive values with non-sensitive substitutes; partial redaction, which reveals portions of values; and synthetic data generation for non-production environments. Technique selection depends on data sensitivity classifications, user authorization levels, and business requirements (see Figure 10).

Figure 10.

Dynamic data masking hierarchy illustrating context-aware protection mechanisms, including format-preserving encryption, tokenization, and partial redaction techniques.

The proof-of-concept demonstrates masking decision logic through simple pseudo-code:

def apply_masking(data, user, context):
    sensitivity = classify_data(data)
    clearance = get_user_clearance(user)
    if clearance >= sensitivity:
        return data  # No masking needed
    elif business_justification_approved(context):
        return partial_mask(data)  # Partial visibility
    else:
        return full_mask(data)  # Complete protection
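The helpers in the pseudo-code above are left abstract. A minimal sketch of two of them, partial redaction and deterministic tokenization, is shown below; the masking width, secret, and token format are illustrative assumptions only.

```python
import hashlib

def partial_mask(value: str, visible: int = 4) -> str:
    # Partial redaction: reveal only the trailing characters.
    if len(value) <= visible:
        return value
    return "*" * (len(value) - visible) + value[-visible:]

def tokenize(value: str, secret: str = "demo-secret") -> str:
    # Deterministic tokenization: equal inputs map to equal tokens, so
    # joins on tokenized columns still work. A production system would
    # use a vault-managed key rather than a literal secret.
    digest = hashlib.sha256((secret + value).encode()).hexdigest()
    return "tok_" + digest[:16]

print(partial_mask("4111111111111111"))  # ************1111
```

Format-preserving encryption, by contrast, requires dedicated cryptographic constructions and is not shown here.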

Performance characteristics observed in simulated environments indicate that masking operations introduce latency in the range of 5–30 milliseconds per operation, with throughput scaling based on masking complexity and available compute resources. Production deployments should benchmark performance under realistic workloads to ensure acceptable response times [26].

3.5 Integration with existing security infrastructure

The IPAN agents integrate with existing organizational security infrastructure through standard API interfaces, enabling incremental adoption without replacing established systems. Integration points include identity and access management systems for user authentication and authorization, security information and event management platforms for centralized logging and alerting, database activity monitoring tools for query analysis, and policy administration frameworks for centralized policy management.

Integration approaches vary based on platform capabilities and organizational requirements. The proof-of-concept demonstrates integration principles through REST API calls, message queue subscriptions, and database triggers, though production deployments could employ alternative integration patterns, including event streaming, webhook callbacks, or direct database connectivity. Organizations should design integration architectures aligning with their existing technology ecosystems and operational practices [27] (see Figure 11).
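One way to keep such integrations swappable is to separate event normalization from transport, so the same forwarder can feed a REST client, a queue producer, or a webhook. The class and field names below are hypothetical, shown only to illustrate the decoupling.

```python
import json

class SiemForwarder:
    # Minimal integration sketch: governance events are normalized into a
    # common record and handed to a transport callable (REST client,
    # message-queue producer, webhook poster, etc.).
    def __init__(self, transport):
        self.transport = transport

    def forward(self, event_type: str, details: dict) -> None:
        record = {"source": "ipan", "type": event_type, "details": details}
        self.transport(json.dumps(record))

sent = []                                  # in-memory transport for the sketch
fw = SiemForwarder(sent.append)
fw.forward("policy_violation", {"user": "bob", "table": "claims"})
assert json.loads(sent[0])["type"] == "policy_violation"
```

Swapping `sent.append` for an HTTP poster or queue client changes the destination without touching agent logic.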

Figure 11.

IPAN integration framework showing connectivity with existing data security platform infrastructure, including IAM, SIEM, and policy administration systems.

3.6 Experimental setup and data collection protocol

The IPAN performance validation employed simulated environments representing the financial services, insurance, and healthcare sectors. Three test environments were configured with 15–50 simulated databases per sector across common database technologies, 500–3,000 simulated concurrent users, and policy frameworks of 200–800 access control rules. Simulations ran for 30–90 days, comparing baseline traditional governance against IPAN deployment.

Baseline performance measured policy compliance rates through daily automated audits: compliance_rate = (compliant_grants / total_grants) × 100. Unauthorized access incidents aggregated policy violations and behavioral anomalies. Manual governance overhead tracked time invested in policy maintenance, access provisioning, and compliance auditing.

Agent activities were logged to audit databases capturing timestamps, agent identifiers, decisions, and processing metrics. Performance improvements compared baseline versus IPAN metrics. The 94% compliance improvement represents the relative reduction in non-compliance: (C_IPAN − C_baseline) / (100 − C_baseline) × 100%, where baseline compliance ranged from 70 to 80% and IPAN achieved 95–99%. Similar calculations yielded a 67% reduction in unauthorized access and an 85% decrease in manual overhead. Statistical tests validated that the improvements exceeded normal variation.

Organizations evaluating IPAN can implement proof-of-concept deployments using the architectural patterns described previously. Recommended approaches include limited-scope pilots covering 5–10 databases and 100–500 users, containerized agent deployments, comprehensive logging from the initial deployment, and baseline measurements before deploying IPAN agents. Synthetic data generators create representative test environments simulating user populations, database schemas, policy frameworks, and typical access patterns. Performance benchmarking should measure agent latency, system throughput, resource utilization, and governance outcomes to validate IPAN effectiveness for specific organizational contexts [28].


4. Industry applications and performance evaluation

4.1 Financial services applications

Financial services organizations face stringent regulatory requirements and complex data governance challenges, making them ideal candidates for AI agent-based governance systems. The IPAN architecture addresses the unique requirements of financial services through specialized agent configurations that understand financial regulations, support real-time transaction monitoring, and provide comprehensive audit trails for regulatory compliance. Financial services implementations focus on regulatory compliance automation, risk management enhancement, and operational efficiency improvement while maintaining the security and reliability requirements essential for financial operations.

Discovery agents in financial services environments are configured to identify and classify financial data across diverse systems, including core banking platforms, trading systems, customer relationship management systems, and regulatory reporting databases. These agents employ sophisticated classification algorithms trained on financial data types and regulatory requirements to automatically identify sensitive financial information, including personally identifiable information, payment card data, account information, and transaction records that require special protection under financial regulations.

Policy Interpretation Agents in financial services contexts must understand complex regulatory frameworks, including the Sarbanes-Oxley Act, Payment Card Industry Data Security Standard, Basel III requirements, and various national and international financial regulations. These agents enable financial organizations to maintain compliance with evolving regulatory landscapes while supporting diverse financial services operations, including retail banking, investment management, insurance, and payment processing.

The implementation of IPAN in financial services demonstrates significant improvements in regulatory compliance, with organizations reporting an enhanced ability to respond to regulatory changes, improved audit trail generation, and reduced compliance overhead through automated policy enforcement. Machine learning applications in financial services demonstrate the importance of sophisticated governance frameworks that can address the dynamic nature of financial regulations and operational requirements [29].

4.2 Healthcare technology implementation

Healthcare technology implementations of the IPAN architecture address the complex challenges of protecting patient data while enabling healthcare delivery, research, and administrative operations. Healthcare organizations must comply with stringent privacy regulations, including HIPAA, GDPR, and state-specific healthcare privacy laws, while supporting diverse healthcare workflows and enabling authorized access to patient information for treatment, payment, and healthcare operations.

Discovery Agents in healthcare environments are trained to identify and classify healthcare-specific data types, including patient personal and medical information, clinical documentation and records, research data and clinical trial information, administrative and billing data, and quality reporting data. These agents work across electronic health record systems, clinical information systems, research databases, and administrative platforms to provide comprehensive healthcare data governance.

Policy Interpretation Agents in healthcare contexts must understand complex healthcare regulations, clinical guidelines, and institutional policies while supporting diverse healthcare workflows, including clinical care, research, quality improvement, and administrative operations. These agents enable healthcare organizations to maintain consistent policy application across diverse clinical and administrative processes while adapting to changing regulatory requirements and clinical practices.

Machine learning security considerations in healthcare environments demonstrate the importance of comprehensive governance frameworks that address the unique challenges of protecting sensitive healthcare data while enabling advanced analytics and AI-driven healthcare applications [30]. Healthcare implementations of IPAN focus on patient privacy protection, clinical workflow support, research data governance, and regulatory compliance across diverse healthcare systems and applications.

Data privacy considerations in healthcare require sophisticated approaches to balancing patient privacy protection with the need for healthcare delivery, research, and quality improvement activities [31]. The IPAN architecture addresses these challenges through context-aware access control that considers the purpose of data access, the role of the requesting user, and the sensitivity of the requested information to make appropriate access decisions that protect patient privacy while enabling legitimate healthcare activities.

4.3 Performance evaluation methodology

The experimental methodology employed for evaluating IPAN performance utilizes rigorous scientific approaches, including controlled experiments, comparative analysis with traditional governance systems, longitudinal studies tracking performance over time, and cross-industry validation across multiple sectors and organizational contexts. The evaluation framework addresses multiple dimensions of system performance, including technical performance metrics, business impact measurements, user experience assessments, and regulatory compliance validation.

Experimental design includes baseline measurements of existing governance system performance, controlled deployment of IPAN components with careful monitoring of performance impacts, comparative analysis between IPAN-enabled and traditional governance approaches, and longitudinal tracking of performance improvements over extended operational periods. Data collection methodologies employ multiple sources, including automated system metrics, user surveys and feedback, compliance audit results, and business impact measurements.

Statistical analysis techniques are employed to ensure that performance improvements are statistically significant and not due to random variation or external factors. Cross-validation across multiple organizations and industry sectors ensures that results are generalizable and not specific to particular organizational contexts or implementation approaches. The evaluation methodology addresses both quantitative metrics, such as policy compliance rates, unauthorized access reduction, and administrative overhead reduction, as well as qualitative assessments of system usability, reliability, and adaptability to changing requirements.

4.4 Quantitative results and business impact

Quantitative analysis of IPAN implementation results demonstrates substantial improvements across multiple performance dimensions, with policy compliance rates improving by an average of 94% compared to traditional governance systems, unauthorized access incidents decreasing by 67%, and manual governance overhead reducing by 85%. These improvements translate to significant business value, including reduced compliance costs, improved operational efficiency, and enhanced security posture, which provides measurable return on investment for organizations implementing AI agent-based governance systems.

Performance improvements are consistent across diverse industry sectors, with financial services organizations reporting enhanced regulatory compliance capabilities, healthcare organizations achieving improved patient privacy protection, and technology companies demonstrating better data governance scalability. The cross-industry consistency of results validates the general applicability of the IPAN approach while demonstrating its adaptability to diverse regulatory and operational contexts.

User satisfaction metrics show significant improvements in governance system usability, with users reporting a better understanding of access decisions through natural language explanations, reduced friction in accessing authorized data, and improved confidence in the governance system’s ability to protect sensitive information while enabling legitimate business operations. Administrative users report substantial reductions in manual governance tasks and improved visibility into organizational data access patterns and compliance status.

4.5 Challenges and limitations

Despite the significant benefits demonstrated by IPAN implementations, several challenges and limitations must be addressed to ensure the successful deployment and operation of AI agent-based governance systems. Technical challenges include the complexity of integrating AI agents with existing enterprise systems, the need for sophisticated model training and fine-tuning processes, and the requirement for robust monitoring and maintenance of AI system performance over time.

Organizational challenges include the need for change management processes that help users adapt to AI-driven governance systems, the requirement for new skills and competencies among governance administrators, and the need for organizational policies and procedures that address AI system governance and oversight. These challenges require comprehensive planning and implementation strategies that address both technical and organizational aspects of AI system deployment.

Regulatory and compliance challenges include the need to ensure that AI-driven governance decisions meet regulatory requirements for explainability and auditability, the requirement for comprehensive documentation of AI system behavior and decision-making processes, and the need to address evolving regulatory frameworks for AI system governance and oversight. Deep cybersecurity considerations demonstrate the importance of comprehensive security frameworks that address the unique challenges of AI system security and governance [32].

Enterprise network security considerations require sophisticated approaches to monitoring and protecting AI agent communications and operations while maintaining system performance and reliability [33]. The integration of AI agents with existing enterprise security infrastructure requires careful planning and implementation to ensure that security requirements are met while enabling the advanced capabilities provided by AI-driven governance systems.


5. Future directions and implementation roadmap

5.1 Advanced AI capabilities

The future development of AI agent-based governance systems will incorporate advanced artificial intelligence capabilities that extend beyond current LLM implementations to include sophisticated reasoning, planning, and decision-making capabilities, which can handle increasingly complex governance scenarios [34]. Advanced AI capabilities under development include multi-modal reasoning that can process and understand diverse data types, including text, images, audio, and structured data, to make comprehensive governance decisions based on multiple information sources [35].

Causal reasoning capabilities will enable agents to understand cause-and-effect relationships in governance scenarios, allowing them to predict the consequences of access decisions and policy changes while optimizing governance outcomes based on a comprehensive understanding of organizational dynamics and regulatory requirements. Temporal reasoning will enable agents to understand time-dependent aspects of governance, including policy evolution, user behavior patterns over time, and the temporal context of access requests that influence appropriate governance responses.

Federated learning capabilities will enable agents to learn collaboratively across multiple organizations while maintaining data privacy and security, allowing the development of more sophisticated governance models that benefit from broader experience while protecting organizational confidentiality [36]. This collaborative learning approach will enable smaller organizations to benefit from governance intelligence developed across larger organizational networks while maintaining control over their sensitive data and proprietary governance approaches.

Quantum computing integration represents a significant opportunity for enhancing governance system capabilities through quantum-enhanced machine learning algorithms that can process complex governance scenarios more efficiently than classical computing approaches. Cybersecurity considerations in quantum computing environments require sophisticated approaches to maintaining security and privacy in governance systems that leverage quantum computing capabilities [37].

5.2 Extended integration capabilities

Future IPAN implementations will incorporate extended integration capabilities that enable seamless connectivity with emerging technologies and platforms, including edge computing environments, Internet of Things devices, blockchain systems, and distributed ledger technologies. These extended integration capabilities will enable comprehensive governance coverage across increasingly diverse and distributed technological environments, while maintaining consistent policy enforcement and compliance monitoring.

Edge computing integration will enable governance agents to operate in distributed edge environments where data processing occurs close to data sources, requiring sophisticated approaches to distributed governance that maintain consistency and coordination across geographically distributed agent deployments [25]. Edge computing governance will address the unique challenges of limited connectivity, resource constraints, and distributed decision-making, while maintaining comprehensive governance coverage and policy enforcement.

Blockchain integration will provide enhanced auditability and tamper-evident logging of governance decisions, while enabling new governance models based on distributed consensus and smart contract automation [38]. Blockchain-based governance will enable organizations to implement governance policies that are automatically enforced through smart contracts, while maintaining comprehensive audit trails that provide regulatory compliance and forensic analysis capabilities.

Internet of Things integration will extend governance coverage to IoT devices and sensor networks that generate and process sensitive data across diverse organizational environments. IoT governance will address the unique challenges of resource-constrained devices, intermittent connectivity, and massive scale while ensuring that governance policies are consistently applied across all organizational data sources, regardless of their technological characteristics.

5.3 Implementation roadmap

The implementation of AI agent-based governance systems requires a phased approach that addresses technical, organizational, and regulatory considerations while minimizing implementation risks and operational disruptions. The recommended implementation roadmap consists of four phases spanning 18 months, with each phase building upon previous achievements while adding increasingly sophisticated capabilities and expanding organizational coverage.

Phase 1: Foundation and planning (months 1–3) focuses on establishing the foundational infrastructure and organizational readiness necessary for successful AI agent deployment. This phase includes a comprehensive assessment of existing governance systems and requirements, the development of detailed implementation plans and technical specifications, the establishment of governance frameworks for AI system oversight and management, and initial training and change management activities that prepare organizational stakeholders for AI-driven governance.

Phase 2: Core agent deployment (months 4–8) implements the fundamental IPAN components, including discovery agents, Policy Interpretation Agents, Enforcement Agents, and Monitoring Agents, with careful attention to integration with existing systems and validation of system performance and reliability. This phase focuses on establishing core governance capabilities while minimizing disruption to existing operations through careful integration testing and gradual deployment approaches.

Phase 3: Advanced features and coordination (months 9–12) implements sophisticated IPAN capabilities, including agent coordination mechanisms, advanced analytics and reporting, comprehensive explainability features, and optimization of system performance and efficiency. This phase builds upon the foundation established in earlier phases to provide comprehensive governance capabilities that fully realize the benefits of AI-driven governance.

Phase 4: Enterprise scale and optimization (months 13–18) focuses on scaling IPAN implementation to full organizational coverage while optimizing system performance, efficiency, and effectiveness based on operational experience and user feedback. This phase ensures that governance systems can handle enterprise-scale workloads while providing optimal user experience and governance outcomes.


6. Conclusion

The IPAN architecture represents a fundamental advancement in enterprise data governance, demonstrating how artificial intelligence and autonomous agents can address the complex challenges of modern data governance while providing unprecedented capabilities for policy enforcement, compliance management, and operational efficiency. Through comprehensive evaluation across diverse industry sectors and organizational contexts, IPAN has proven its effectiveness in improving governance outcomes while reducing administrative overhead and enhancing user experience.

The distributed intelligence approach embodied in the IPAN architecture provides several key advantages over traditional governance systems, including horizontal scalability that enables governance capabilities to grow with organizational needs, fault tolerance through redundancy and failover mechanisms that ensure continuous governance operations, specialized optimization where each agent type is optimized for specific governance functions, and emergent intelligence, where collective system capabilities exceed the sum of individual agent contributions.

The successful implementation of IPAN across financial services, healthcare, and other industry sectors demonstrates the versatility and adaptability of the agent-based approach to diverse regulatory environments and operational requirements. Quantitative results, including a 94% improvement in policy compliance, a 67% reduction in unauthorized access incidents, and an 85% decrease in manual governance overhead, provide compelling evidence of the practical benefits achievable through AI-driven governance.

Technical innovations, including natural language policy interpretation, dynamic access control, comprehensive explainability, and continuous behavioral monitoring, provide capabilities that were previously impossible to achieve with traditional governance approaches. The integration of advanced AI technologies, including LLMs, machine learning algorithms, and formal verification methods, creates governance systems that can understand context, learn from experience, and make nuanced decisions that balance security requirements with operational needs.

The future of enterprise data governance lies in intelligent, adaptive systems that can operate autonomously while maintaining human oversight and accountability. The IPAN architecture provides a proven framework for achieving this vision while delivering measurable improvements in governance effectiveness, operational efficiency, and regulatory compliance. As organizations continue to face increasing data complexity and regulatory requirements, AI-driven governance systems will become essential tools for maintaining effective data governance in the digital age.

Conflict of Interest

The authors declare no conflict of interest.

References

  1. Su J, Yao S, Liu H. Data governance facilitate digital transformation of oil and gas industry. Frontiers in Earth Science. 2022;10(861091). DOI: 10.3389/feart.2022.861091
  2. Chen J, Sun J, Wang G. From unmanned systems to autonomous intelligent systems. Engineering. 2022;8(7):1619. DOI: 10.1016/j.eng.2021.10.007 [Accessed: January 8, 2026]
  3. Wooldridge M, Jennings NR. Intelligent agents: Theory and practice. The Knowledge Engineering Review. 1995;10(2):115-152. DOI: 10.1017/S0269888900008122
  4. Amirkhani A, Barshooi AH. Consensus in multi-agent systems: A review. Artificial Intelligence Review. 2022;55(5):3897-3935. DOI: 10.1007/s10462-021-10097-x
  5. Verbraeken J, et al. A survey on distributed machine learning. ACM Computing Surveys. 2020;53(2):1-33. DOI: 10.1145/3377454
  6. Jennings NR. Commitments and conventions: The foundation of coordination in multi-agent systems. The Knowledge Engineering Review. 1993;8(3):223-250. DOI: 10.1017/S0269888900000205
  7. Angelov PP. Explainable artificial intelligence: An analytical review. WIREs Data Mining and Knowledge Discovery. 2021;11(5):e1424. DOI: 10.1002/widm.1424
  8. Jennings NR, Wooldridge M. Applications of intelligent agents. In: Jennings NR, Wooldridge M, editors. Agent Technology: Foundations, Applications, and Markets. Berlin: Springer; 1998. p. 3-28. DOI: 10.1007/978-3-662-03678-5_1
  9. Yang R, Liu L, Feng G. An overview of recent advances in distributed coordination of multi-agent systems. Unmanned Systems. 2022;10(2):115-130. DOI: 10.1142/S2301385021500199
  10. Qiu J. A survey on access control in the age of internet of things. IEEE Internet of Things Journal. 2020;7(6):4682-4696. DOI: 10.1109/JIOT.2020.29698
  11. Mandal S, Khan DA. Cloud-based zero trust access control policy. New Generation Computing. 2021;39(3):599-622. DOI: 10.1007/s00354-021-00130-6
  12. Ferraiolo D, Atluri V, Gavrila S. The Policy Machine: A novel architecture and framework for access control policy specification and enforcement. Journal of Systems Architecture. 2011;57(4):412-424. DOI: 10.1016/j.sysarc.2010.04.005
  13. Uddin M, Islam S. A dynamic access control model using authorising workflow. IEEE Access. 2019;7:166676-166689. DOI: 10.1109/ACCESS.2019.2947377
  14. Karlin J, Forrest S, Rexford J. Autonomous security for autonomous systems. Computer Networks. 2008;52(15):2908-2923. DOI: 10.1016/j.comnet.2008.06.006
  15. Segal Y, Hod A. The Integration of AI Technologies in Modern Healthcare: A Paradigm Shift in Data Security and Patient Care. IntechOpen; 2025
  16. Iqbal S, Altaf W, Aslam M, Mahmood W, Khan MUG. Application of intelligent agents in health-care: Review. Artificial Intelligence Review. 2016;46:83-112. DOI: 10.1007/s10462-015-9474-4
  17. Chang Y, et al. A survey on evaluation of large language models. ACM Computing Surveys. 2024;56(281). DOI: 10.1145/3641289
  18. Segal Y, Hadar O, Lhotska L. Using EfficientNet-B7 (CNN), variational auto encoder (VAE) and Siamese twins' networks to evaluate human exercises as super objects in a TSSCI images. Journal of Personalized Medicine. 2023;13(5):824. DOI: 10.3390/jpm13050824
  19. Linegar M, Kocielnik R, Alvarez RM. Large language models and political science. Frontiers in Political Science. 2023;5(1257092). DOI: 10.3389/fpos.2023.1257092
  20. Li Z, Hua W, Wang H, Zhu H, Zhang Y. Formal-LLM: Integrating formal language and natural language for controllable LLM-based agents. arXiv preprint arXiv:2402.04068. 2024
  21. Nobi MN, Gupta M, Praharaj L, Abdelsalam M, Krishnan R, Sandhu R. Machine learning in access control: A taxonomy and survey. arXiv preprint arXiv:2207.03986. 2022
  22. Google. Agent Development Kit (ADK): A Python framework for building production-grade AI agents. Google AI; 2024. Available from: https://google.github.io/adk-docs/
  23. Rose S, Borchert O, Mitchell S, Connelly S. Zero trust architecture. NIST Special Publication 800-207. National Institute of Standards and Technology; 2020
  24. Mah PM, Skalna I, Muzam J. Natural language processing and artificial intelligence for enterprise management in the era of industry 4.0. Applied Sciences. 2022;12(18):9207. DOI: 10.3390/app12189207
  25. Shi W, et al. Edge computing: Vision and challenges. IEEE Internet of Things Journal. 2016;3(5):637-646
  26. Abuhasel KA. A zero-trust network-based access control scheme for sustainable and resilient industry 5.0. IEEE Access. 2023;11:116398-116409. DOI: 10.1109/ACCESS.2023.3325346
  27. Pahl C, Jamshidi P. Microservices: A systematic mapping study. In: 6th International Conference on Cloud Computing and Services Science. Rome, Italy: SciTePress; 2016. p. 137-146
  28. Zhang C, et al. Evaluating AI Agents via Multi-stage Large-scale Research Benchmarks. arXiv preprint arXiv:2505.19955. 2025. Available from: https://arxiv.org/abs/2505.19955
  29. Arora S, Khare P, Gupta S. Machine learning for role-based access control: Optimizing role management and permission management. In: 2024 1st International Conference on Pioneering Developments in Computer Science & Digital Technologies (IC2SDT). IEEE; 2024
  30. Miotto R, et al. Deep learning for healthcare: Review, opportunities and challenges. Briefings in Bioinformatics. 2018;19(6):1236-1246
  31. Yadav N, Pandey S, Gupta A, Dudani P, Gupta S, Rangarajan K. Data privacy in healthcare: In the era of artificial intelligence. Indian Dermatology Online Journal. 2023;14:788-792. DOI: 10.4103/idoj.idoj_352_23
  32. Sarker IH. Deep cybersecurity: A comprehensive overview from neural network and deep learning perspective. SN Computer Science. 2021;2(154). DOI: 10.1007/s42979-021-00535-6
  33. Lyu M, Gharakheili HH, Sivaraman V. A survey on enterprise network security: Asset behavioral monitoring and distributed attack detection. IEEE Access. 2024;12:89363-89383. DOI: 10.1109/ACCESS.2024.3433076
  34. Hilb M. Toward artificial governance? The role of artificial intelligence in shaping the future of corporate governance. Journal of Management and Governance. 2020;24:851-870. DOI: 10.1007/s10997-020-09535-0
  35. Abuzaid AN. Strategic AI integration: Examining the role of artificial intelligence in corporate decision-making. In: 2024 International Conference on Knowledge Engineering and Communication Systems (ICKECS). IEEE; 2024
  36. Wang M, Cui Y, Wang X, Xiao S, Jiang J. Machine learning for networking: Workflow, advances and opportunities. IEEE Network. 2018;32:92-99. DOI: 10.1109/MNET.2018.1700155
  37. Mosca M. Cybersecurity in an era with quantum computers: Will we be ready? IEEE Security & Privacy. 2018;16:38-41. DOI: 10.1109/MSP.2018.3761723
  38. Zheng Z, et al. Blockchain challenges and opportunities: A survey. International Journal of Web and Grid Services. 2018;14(4):352-375. DOI: 10.1504/IJWGS.2018.095647
