Skip to main content
Business LibreTexts

3: Core Principles of Information Governance

  • Page ID
    157161
  • \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)

    \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)

    \( \newcommand{\dsum}{\displaystyle\sum\limits} \)

    \( \newcommand{\dint}{\displaystyle\int\limits} \)

    \( \newcommand{\dlim}{\displaystyle\lim\limits} \)

    \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)

    ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)

    \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)

    \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)

    \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)

    \( \newcommand{\Span}{\mathrm{span}}\)

    \( \newcommand{\id}{\mathrm{id}}\)

    \( \newcommand{\Span}{\mathrm{span}}\)

    \( \newcommand{\kernel}{\mathrm{null}\,}\)

    \( \newcommand{\range}{\mathrm{range}\,}\)

    \( \newcommand{\RealPart}{\mathrm{Re}}\)

    \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)

    \( \newcommand{\Argument}{\mathrm{Arg}}\)

    \( \newcommand{\norm}[1]{\| #1 \|}\)

    \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)

    \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)

    \( \newcommand{\vectorA}[1]{\vec{#1}}      % arrow\)

    \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}}      % arrow\)

    \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)

    \( \newcommand{\vectorC}[1]{\textbf{#1}} \)

    \( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)

    \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)

    \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)

    \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)

    \(\newcommand{\longvect}{\overrightarrow}\)

    \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)

    \(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y}\) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real}{\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp #2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var}{\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\)

    Introduction

    The core principles of Information Governance (IG) provide the foundational guidelines that organizations rely on to manage information responsibly, ethically, and effectively. In 2026, these principles must address not only traditional records and data challenges but also the complexities introduced by generative AI, global privacy regulations, edge computing, and the sheer volume of unstructured content.

    The most widely referenced framework remains ARMA International's Generally Accepted Recordkeeping Principles® (often called "The Principles®"). In October 2025, ARMA revised the framework from eight to seven principles, consolidating and modernizing them to better reflect current realities such as AI-generated records, digital transformation, and heightened regulatory scrutiny (EU AI Act, expanded CCPA/CPRA, etc.). These seven principles are non-hierarchical—none is inherently more important than another—but they are deeply interdependent. A weakness in one (e.g., poor accountability) undermines the others (e.g., compliance and protection).

    This chapter explains each of the seven updated ARMA principles in detail, providing 2026 applications, practical implementation steps, and real-world cases.


    The Seven Updated ARMA Principles (2025 Revision)

    1. Accountability

    Accountability is the foundation of effective IG. It requires an organization to assign clear authority and responsibility for managing information assets. Leadership must demonstrate commitment by establishing an IG steering committee or appointing a Chief Information Governance Officer (CIGO), defining roles (stewards, custodians, owners), and ensuring policies are enforced with consequences for non-compliance.

    • 2026 Application: Accountability now extends to AI systems: who owns the model, who monitors outputs, and who approves training data? The EU AI Act requires designated "AI officers" for high-risk systems, with clear liability for non-compliance.
    • Practical Implementation: * Appoint a CIGO or IG steering committee involving legal, IT, business, and privacy leads.
      • Create RACI matrices (Responsible, Accountable, Consulted, Informed) for key processes like AI deployment.
      • Conduct annual accountability audits to ensure roles remain relevant as technology evolves.
    • Example: A financial services firm assigns a CIGO and cross-functional committee. When a GenAI tool produces inaccurate compliance reports, the committee traces responsibility to the data steward (model trainer) and the vendor, triggering a formal audit and policy update.

    2. Transparency

    Transparency demands that IG processes, decisions, and controls are visible, documented, and understandable to authorized stakeholders. Policies, procedures, audit trails, and decision logs must be accessible without compromising security.

    • 2026 Application: Transparency is essential for AI trust. The EU AI Act (Article 13) requires providers of high-risk AI to publish summaries of training data, model architecture, and performance metrics. Organizations must maintain logs of prompts, outputs, and modifications to GenAI content.
    • Practical Implementation: * Use data lineage tools (such as Collibra or Manta) to track the provenance of information.
      • Publish internal AI transparency reports detailing data sources and bias tests.
      • Maintain automated change logs for GenAI prompts and their resulting outputs.
    • Example: A healthcare provider uses GenAI for patient summaries. To meet HIPAA and EU AI Act requirements, they publish transparency reports showing how models were trained, what data was excluded, and how bias checks were performed.

    3. Integrity

    Integrity ensures information remains accurate, authentic, reliable, and unaltered except by authorized processes. It includes version control, digital signatures, metadata, and chain-of-custody tracking.

    • 2026 Application: With GenAI producing highly editable and sometimes hallucinatory content, integrity requires watermarking AI outputs, hashing original documents, and logging modifications. Blockchain or cryptographic proofs are increasingly used to verify the authenticity of records.
    • Practical Implementation: * Apply digital signatures and hashing to all "gold standard" records.
      • Use blockchain for immutable records (e.g., supply-chain provenance).
      • Require human-in-the-loop review for high-risk AI outputs before they are finalized.
    • Example: A law firm uses GenAI to draft contracts. They apply digital signatures and version history to ensure the final version matches the approved draft, preventing unauthorized changes during negotiations.

    4. Protection

    Protection safeguards information from unauthorized access, loss, destruction, corruption, or misuse. This encompasses physical, technical, and administrative controls, including encryption, access controls, backups, and disaster recovery.

    • 2026 Application: Protection now covers AI-specific threats: prompt injection attacks, model poisoning, and data exfiltration via APIs. Zero-trust architecture and AI firewalls are becoming industry standards.
    • Practical Implementation: * Implement Zero-Trust Architecture for all data access.
      • Encrypt data "in use" (using homomorphic encryption) for AI processing.
      • Deploy AI firewalls (e.g., Protect AI, CalypsoAI) to monitor for malicious prompts.
    • Example: A manufacturing company implements zero-trust for IoT devices and encrypts GenAI training data in transit and at rest, preventing a supply-chain ransomware incident from spreading across the network.

    5. Compliance

    Compliance requires alignment with all applicable laws, regulations, standards, contracts, and ethical obligations. Organizations must monitor changes and conduct regular audits.

    • 2026 Application: Compliance is dynamic—new laws emerge rapidly. This includes the EU AI Act, 12+ U.S. state privacy laws (CCPA/CPRA style), and sector-specific rules (SEC for financial disclosures, HIPAA for healthcare).
    • Practical Implementation: * Use regulatory intelligence tools (OneTrust, NAVEX) to monitor global legal shifts.
      • Conduct Privacy Impact Assessments (PIAs) for all AI projects.
      • Maintain compliance dashboards with Key Risk Indicators (KRIs).
    • Example: A bank uses GenAI for credit decisions. They perform annual compliance audits against the EU AI Act and U.S. fair-lending laws, documenting risk assessments and human oversight.

    6. Availability

    Availability ensures authorized users can access needed information in a timely, reliable, and usable way. It balances accessibility with security (e.g., role-based access and search optimization).

    • 2026 Application: With remote/hybrid work and AI-driven search, availability includes semantic search tools and federated access across fragmented cloud environments.
    • Practical Implementation: * Deploy semantic search across repositories to help users find information based on intent, not just keywords.
      • Use single sign-on (SSO) and role-based access controls (RBAC).
      • Test disaster recovery and data accessibility quarterly.
    • Example: A university implements AI-powered search across Canvas, Teams, and Drive, ensuring students and faculty can quickly find lecture notes while strictly restricting sensitive student data.

    7. Disposition

    Disposition manages the retention and secure deletion or disposal of information when it is no longer needed. This reduces storage costs, breach surface area, and legal risk.

    • 2026 Application: Disposition policies must now cover AI-generated content. Temporary outputs and intermediate "scratchpad" data often have a very short useful life and should be deleted quickly to minimize risk.
    • Practical Implementation: * Automate deletion using defensible disposition workflows and retention schedules.
      • Integrate disposition triggers directly into AI application APIs.
      • Document all disposition actions for audit purposes.
    • Example: A retailer automatically deletes customer chat logs after 90 days (per privacy policy) but retains transaction records for seven years, using automated workflows to handle the different lifecycle requirements.

    Additional Modern Principles (AI & Ethics Focus)

    To complement the ARMA framework, 2026 IG programs often integrate "The Four Pillars of Governance" (Privacy, Security, Compliance, and Ethics). Key additional principles include:

    • Fairness & Bias Mitigation: Actively identifying and reducing bias in data and AI models.
    • Ethical Use: Prioritizing human rights, informed consent, proportionality, and societal benefit.
    • Privacy by Design: Embedding privacy protections from the start (data minimization, consent mechanisms).

    Integrating Principles: The Maturity Model

    The principles are interconnected. To assess an organization's state, IG professionals use Maturity Models. ARMA’s model rates each principle on a 1–5 scale:

    1. Level 1 (Ad Hoc): Governance is fragmented and reactive.
    2. Level 2 (Developing): Some policies exist but are inconsistent.
    3. Level 3 (Essential): Meets minimum legal and operational requirements.
    4. Level 4 (Proactive): IG is integrated into business processes with active monitoring.
    5. Level 5 (Optimized): IG is a core part of organizational culture and a competitive advantage.

    Case Study: Financial Institution Risk Assessment

    A bank deploys GenAI for credit risk models.

    • Accountability: CIGO and risk committee oversee the model's deployment.
    • Transparency: They publish a "Model Card" with a training data summary.
    • Integrity: They hash both inputs and outputs to ensure data hasn't been tampered with.
    • Protection: They utilize a zero-trust network and an AI firewall.
    • Compliance: They align the model with the EU AI Act and U.S. fair-lending laws.
    • Availability: The tool is integrated into the core banking system for instant access.
    • Disposition: Temporary model outputs are deleted after 90 days.
    • Outcome: Improved risk accuracy, full regulatory approval, and a significant reduction in bias-related complaints.

    Learning Objectives

    • Explain each of the seven ARMA principles with 2026 applications.
    • Describe how additional AI/ethics principles complement ARMA.
    • Apply the principles to real-world scenarios and AI use cases.
    • Discuss how maturity models guide organizational improvement.

    Key Takeaways

    • ARMA's 2025 revision modernizes the principles for a digital and AI-driven era.
    • Principles are interdependent—a failure in Accountability often leads to failures in Protection and Compliance.
    • Strong IG principles are essential for reducing risk and building trust with stakeholders.

    Discussion Questions / Activities

    1. Scenario Analysis: Pick one ARMA principle—how would poor adherence to it specifically jeopardize a GenAI project in a healthcare setting?
    2. Assessment: Using the 1–5 maturity scale, assess your university’s IG maturity regarding AI grading tools. What one improvement would raise the score?
    3. Research: Find a recent case of an AI data breach. Which ARMA principle was most clearly violated?

    Further Reading:

    • NIST AI Risk Management Framework (updated 2025).
    • ARMA International: The Principles® 2025 Revision Guide.

    Your Nerdy Example:

    Two pies

    2001 A Space Odyssey: Broken Compliance & Integrity: HAL was given secret, contradictory orders from Mission Control (to lie to the crew about the mission’s true purpose), creating an irreconcilable programming conflict. Mission Control failed in its Compliance phase, and HAL subsequently failed to provide accurate, reliable information (Integrity), making his entire operational model unstable. (Image: AI)


    This page titled 3: Core Principles of Information Governance is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Gregory Hess.