Chapter 6: Developing IG Policies and Frameworks
Introduction
Information governance (IG) policies are the written rules that turn an organization’s intentions into consistent, repeatable behavior. A policy answers “what must be true” about information handling—who may access it, how it is labeled, where it may be stored, how long it is kept, and how it is disposed of. Procedures and standards sit underneath policy: procedures explain how people perform tasks (for example, placing a legal hold), while standards specify implementation details (for example, encryption requirements or required metadata fields). Together, policies, standards, and procedures form an IG framework: a coherent set of rules that covers the information lifecycle and can be enforced through training, audits, and technology.
In 2026, policy work is more urgent, and more complex, because information moves faster and farther than it used to. Cloud collaboration platforms blur the line between “documents,” “messages,” and “records.” AI assistants turn internal content into summaries, drafts, and new outputs in seconds. Third parties (vendors, contractors, model providers, software plug-ins) increasingly touch sensitive data. Meanwhile, regulators and courts expect organizations to demonstrate control, not just good intentions. The EU Artificial Intelligence Act creates explicit requirements for high-risk AI systems, including risk management, data governance, documentation, logging, transparency, and human oversight, and it phases in obligations over time. In the United States, privacy enforcement has matured, with California’s privacy agency publishing and updating regulations and expanding attention to areas such as automated decisionmaking, audits, and risk assessments. Public companies also face formal disclosure expectations for cybersecurity risk management and incident reporting under SEC rules, reinforcing that governance must be documented and board-visible. [eur-lex.europa.eu], [artificial...enceact.eu] [Law & Regu...ncy (CPPA)] [sec.gov]
This chapter provides a practical guide to developing IG policies and frameworks. You will learn a step-by-step policy development process, review the most common IG policy types (including AI use and third-party data handling), and see templates and checklists you can adapt. You will also learn how to enforce policies through training, monitoring, audits, and automation—and how to keep policies current as laws and technology evolve.
The Policy Development Process
Good policies are not written in isolation. They are negotiated agreements about risk, value, and responsibility. A strong policy development process makes policies legitimate (approved by the right authorities), usable (clear enough to follow), and enforceable (connected to controls and consequences).
Step 1: Identify needs and triggers
Most policy projects start with a trigger. Common triggers include:
- A new law or regulation (for example, updates to privacy rules or new AI obligations). [eur-lex.europa.eu], [Law & Regu...ncy (CPPA)]
- A security incident, audit finding, or litigation event that reveals gaps.
- A technology rollout (cloud migration, new collaboration platform, enterprise AI assistant).
- Business expansion (new geography, acquisition, new product line).
Begin by writing a short “policy problem statement” that describes the risk or inefficiency you are trying to reduce and who is affected. Then confirm the scope: which business units, repositories, and data types are in scope now, and what will be addressed later.
Step 2: Map stakeholders and decision rights
Policies succeed when the right people help shape them. Identify stakeholders using a simple map:
- Business owners (who rely on the information to do work)
- Legal and compliance (who interpret laws, contracts, and litigation risk)
- Privacy (who focuses on personal data rights and appropriate processing)
- Security/IT (who can implement controls, logging, and monitoring)
- Records/Information management (who manages retention, disposition, and defensibility)
- Risk/Audit (who tests controls and reports findings)
Define decision rights early. For example: the IG steering committee approves enterprise-wide policy; business units may approve local procedures that do not conflict with enterprise policy; the CISO approves security standards. Documenting these rights reduces later conflict.
Step 3: Gather requirements and current-state evidence
Policy drafting should be grounded in evidence. Collect:
- Applicable legal and regulatory requirements (privacy, sector rules, retention mandates). [Law & Regu...ncy (CPPA)], [sec.gov], [hhs.gov]
- Contracts and third-party obligations (data processing terms, confidentiality clauses).
- Current workflows and pain points (where people struggle, where they bypass controls).
- Existing policies that overlap (acceptable use, incident response, vendor management).
A practical technique is to run short interviews or workshops around real scenarios: “A staff member wants to paste customer data into an AI tool; what should happen?” “A team wants to keep chat messages forever; should they?” These scenarios reveal where policy needs to be precise.
Step 4: Draft in plain language and structure for usability
Students often imagine policies as long legal documents. In reality, the best policies are concise, structured, and written for the people who must follow them.
Use a consistent structure:
- Purpose (why the policy exists)
- Scope (who and what it applies to)
- Definitions (only what readers truly need)
- Policy statements (the rules: “must,” “must not,” “may”)
- Roles and responsibilities (who does what)
- Exceptions (how to request, approve, and log exceptions)
- Enforcement (consequences, monitoring, audits)
- References (related standards, procedures, laws)
Write rules as testable statements. “Employees should be careful” is not testable. “Do not store Restricted data in personal cloud accounts” is testable.
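The difference is easy to make concrete in code. The following Python sketch (the location names are hypothetical, not from any specific platform) encodes the testable rule as a pass/fail check, which is exactly what makes it auditable:

```python
# Illustrative sketch: a testable policy statement can be expressed as a
# check that returns a definite result; "employees should be careful" cannot.
APPROVED_LOCATIONS = {"corporate-sharepoint", "records-repository"}  # hypothetical

def violates_storage_rule(classification: str, location: str) -> bool:
    """Rule: 'Do not store Restricted data in personal cloud accounts.'"""
    return classification == "Restricted" and location not in APPROVED_LOCATIONS
```

A monitoring tool, an auditor, or a training scenario can all apply the same check and get the same answer, which is the practical test of whether a policy statement is precise enough.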
Step 5: Review for feasibility, risk, and conflicts
Policy review is where many drafts fail. A policy may be legally sound but impossible to follow. Review the draft with:
- IT/Security for technical feasibility (can we enforce this in systems?)
- Business teams for workflow impact (does this block core work?)
- Legal/Privacy for compliance and defensibility (does it meet obligations?)
- Audit/Risk for measurability (can we test compliance?)
Resolve conflicts explicitly. If a policy requires encryption everywhere but a legacy system cannot encrypt, decide whether to migrate, isolate, or create a time-bound exception.
Step 6: Approve through a formal governance path
Approval should match policy impact. A department procedure may be approved by a director, but enterprise policy should be approved by an executive sponsor or steering committee. ARMA emphasizes accountability: a senior executive should oversee the recordkeeping program and ensure auditability. Formal approval creates legitimacy and signals that the policy is not optional. [armavi.org], [arma.org]
Step 7: Communicate, train, and operationalize
A policy that is not communicated is not real. Plan for:
- A launch message from leadership explaining “why now.”
- Role-based training (different content for employees, managers, IT admins, and developers).
- Job aids: quick reference guides, decision trees, and examples.
- Tool changes: labels, default settings, templates, automated prompts.
Step 8: Maintain and improve (policy lifecycle management)
Policies must stay current. Establish:
- A review cycle (often annually; faster for AI and security topics).
- Change triggers (new law, new platform, major incident).
- Version control and an accessible policy library.
- Metrics and audit results feeding into revisions.
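Assuming each policy in the library records a topic and a last-review date, the review cadence above (annual by default, faster for AI and security) can be sketched as a simple due-date check; the cadence values here are illustrative, not prescriptive:

```python
from datetime import date, timedelta

# Illustrative review cadences in days; faster for fast-moving topics.
REVIEW_CADENCE_DAYS = {"ai-use": 180, "security": 180, "default": 365}

def review_due(policy_topic: str, last_reviewed: date, today: date) -> bool:
    """True when the policy's review window has elapsed."""
    cadence = REVIEW_CADENCE_DAYS.get(policy_topic, REVIEW_CADENCE_DAYS["default"])
    return today >= last_reviewed + timedelta(days=cadence)
```

In practice the same check would also fire on change triggers (new law, new platform, major incident), not only on elapsed time.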
NIST’s AI RMF Playbook emphasizes that governance is ongoing and should be adapted to context rather than treated as a one-time checklist. That same mindset applies to IG policies. [airc.nist.gov], [digitalgov...enthub.org]

Figure 6.1 Policy Development Lifecycle illustrating the end‑to‑end stages of effective policy creation, approval, implementation, monitoring, and revision within an information governance program.
Key IG Policy Types and Examples
IG frameworks typically include a small number of enterprise policies supported by standards and procedures. The goal is coverage without policy overload.
Data classification policy
A data classification policy assigns sensitivity levels to information and links each level to required handling controls (storage, sharing, encryption, access approval, and disposal). Classification supports many downstream controls: data loss prevention (DLP), access reviews, retention rules, and incident response triage.
A common approach uses 3–5 levels. Keep names intuitive and aligned with real risk:
- Public: approved for public release.
- Internal: business information not intended for public distribution.
- Confidential: sensitive business data (contracts, HR records, nonpublic financials).
- Restricted: highly sensitive data (customer PII, health data, credentials, trade secrets).
Good classification policies include examples and default rules (for example, “When in doubt, classify as Confidential”). They also specify who can change labels and how classification should be embedded into tools (email banners, document headers, labels in content management systems).
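One way to make the level-to-controls linkage machine-readable is a lookup table. The sketch below uses this chapter’s example labels; the control names are illustrative, and the fallback bakes in the “when in doubt, classify as Confidential” default:

```python
# Illustrative mapping from classification level to minimum handling controls.
MINIMUM_CONTROLS = {
    "Public":       set(),
    "Internal":     {"authenticated-access"},
    "Confidential": {"authenticated-access", "need-to-know", "encrypt-in-transit"},
    "Restricted":   {"authenticated-access", "need-to-know", "encrypt-in-transit",
                     "encrypt-at-rest", "audit-logging", "limited-sharing"},
}

def required_controls(level: str) -> set:
    # Default rule from the policy text: "When in doubt, classify as Confidential."
    return MINIMUM_CONTROLS.get(level, MINIMUM_CONTROLS["Confidential"])
```

Embedding the same mapping in DLP rules, labeling tools, and access reviews is what keeps downstream controls consistent with the written policy.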
Retention and disposition policy
Retention policies define how long information must be kept and when it should be securely disposed. Retention is not “keep everything forever.” Over-retention increases legal exposure (more data to search and produce) and security exposure (more data to breach). Under-retention creates operational and legal risk (loss of evidence, failure to meet statutory requirements).
A retention policy typically includes:
- Ownership of retention schedules (often Records/IM with Legal)
- Legal hold authority and process (holds override disposal)
- Approved destruction methods (including vendor requirements)
- Documentation (disposition logs and approvals)
ARMA’s Principles emphasize retention and disposition as core hallmarks of effective governance, reinforcing that policies should support consistent, auditable lifecycle decisions. [armavi.org], [arma.org]
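The central rule above—legal holds override disposal—can be sketched as a small eligibility check. The field names are hypothetical, and real schedules often key retention to an event (contract end, employee separation) rather than a creation date:

```python
from datetime import date, timedelta

def eligible_for_disposal(created: date, retention_years: int,
                          on_legal_hold: bool, today: date) -> bool:
    """A record may be disposed of only when its retention period has
    elapsed AND no legal hold applies (holds always override disposal)."""
    retention_end = created + timedelta(days=365 * retention_years)
    return (not on_legal_hold) and today >= retention_end
```

A disposition process built on a check like this also needs to log each decision and approval, which is what makes disposal defensible later.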
Acceptable use and collaboration policy
Acceptable use policies (AUPs) define what employees may do with organizational systems and information. Modern AUPs should explicitly cover collaboration platforms (shared drives, chat, project boards) and mobile work.
In 2026, AUPs increasingly include rules about:
- Approved storage locations (no sensitive files in personal accounts)
- Sharing permissions (external sharing rules, guest accounts)
- Password and identity practices (including phishing-resistant options where available)
- Use of personal devices (BYOD) and minimum security requirements
A good AUP is short and supported by “how-to” guidance that teaches safe behaviors rather than listing only prohibitions.
AI use (AI acceptable use) policy
AI use policies are now a standard IG requirement because AI tools can copy, transform, and redistribute information quickly. A practical AI acceptable use policy distinguishes between:
- Approved AI tools (enterprise AI assistants, vetted vendors)
- Prohibited AI tools (unsanctioned services or plugins that lack controls)
- Approved data types for AI (for example, public or internal-only)
- Prohibited data types for AI (restricted personal data, PHI, credentials, trade secrets)
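These distinctions lend themselves to a simple gating check. In the sketch below (the tool names and the numeric level ordering are illustrative assumptions), a prompt is allowed only when the data’s classification does not exceed the tool’s approved maximum, and any unsanctioned tool is rejected outright:

```python
# Illustrative AI-use gate: which classification levels each approved tool
# may receive. Tools absent from the map are unsanctioned and prohibited.
LEVEL_RANK = {"Public": 0, "Internal": 1, "Confidential": 2, "Restricted": 3}
TOOL_MAX_LEVEL = {"enterprise-assistant": "Internal",
                  "vetted-vendor": "Confidential"}  # hypothetical tools

def prompt_allowed(tool: str, data_level: str) -> bool:
    if tool not in TOOL_MAX_LEVEL:  # unsanctioned service or plugin
        return False
    return LEVEL_RANK[data_level] <= LEVEL_RANK[TOOL_MAX_LEVEL[tool]]
```

The same four-way distinction (approved/prohibited tools, approved/prohibited data) is what DLP and proxy controls enforce in practice.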
The EU AI Act establishes requirements for high-risk AI systems, including data governance, record-keeping/logging, transparency, human oversight, and cybersecurity; these obligations should shape internal AI policies even for organizations based outside the EU if they operate globally. NIST’s AI RMF promotes a risk-based approach with governance structures and documentation practices that translate well into internal AI policies. IAPP research in 2025 found many organizations working on AI governance and building programs incrementally, often leveraging existing privacy and governance capabilities—suggesting that AI policy is becoming part of mainstream governance rather than a niche topic. [eur-lex.europa.eu], [artificial...enceact.eu] [airc.nist.gov], [digitalgov...enthub.org] [iapp.org]
A strong AI use policy also addresses outputs:
- When AI-generated content must be reviewed and approved before external use
- How to label AI-generated content where required or appropriate
- How to store prompts and outputs when they are business records
Incident response and breach notification policy
Incident response policies define how the organization detects, triages, escalates, and communicates security and privacy incidents. They connect IG to operational resilience by ensuring that the right information is available during crisis response.
For public companies, SEC rules require disclosures regarding cybersecurity risk management and material incident reporting, reinforcing that incident response processes must be documented, board-visible, and integrated with disclosure controls. In health care, HIPAA rules emphasize administrative, physical, and technical safeguards for electronic protected health information (ePHI), and breach notification rules require defined response practices. [sec.gov] [hhs.gov]
Third-party data handling and AI vendor policy
Third parties are often where governance breaks down. A third-party data handling policy defines requirements for vendors and partners, such as:
- Data minimization (share only what is necessary)
- Security controls (encryption, access control, logging)
- Subprocessor and supply-chain disclosure
- Data location and cross-border transfer conditions
- Incident notification timelines
- Return or destruction of data at contract end
In 2026, this policy should also cover AI vendors and integrations (plugins, APIs, connectors). The AI supply chain can include model providers, platform providers, and tool vendors, and each may process or store information. Policies should require documentation of what data is used, how it is retained, and how outputs are logged.
Policy Templates and Checklists
Templates make policy development faster and more consistent. You should not copy a template blindly, but you can reuse structure and required sections.
Policy template: enterprise IG policy (one-page skeleton)
| Section | What to include | Practical tips |
|---|---|---|
| Purpose | Business reason and risks addressed | Tie to outcomes (trust, compliance, efficiency) |
| Scope | Systems, repositories, users, geographies | Be explicit; list exclusions and future scope |
| Definitions | Only essential terms | Keep short; link to glossary for more |
| Policy statements | “Must/must not” rules | Write testable rules; avoid vague language |
| Roles & responsibilities | Owner, approver, implementers | Add RACI if roles are complex |
| Exceptions | Request, approval, documentation | Require time limits and compensating controls |
| Enforcement | Monitoring, audits, consequences | Align to HR and security processes |
| References | Standards, procedures, laws | Provide links to related documents |
Table 6.1. A reusable structure for most IG policies.
Data classification example table (to embed in a policy)
| Classification level | Definition | Examples | Minimum controls |
|---|---|---|---|
| Public | Approved for public release | Published reports, marketing pages | No restrictions beyond integrity controls |
| Internal | Non-public business information | Internal SOPs, project plans | Authenticated access; no public sharing |
| Confidential | Sensitive business information | Contracts, HR files, nonpublic financials | Need-to-know access; encryption in transit |
| Restricted | Highest sensitivity | PII, PHI, credentials, trade secrets | Strong access controls, encryption at rest, audit logging, limited sharing |
Table 6.2. Example classification scheme linking labels to controls.
AI acceptable use policy outline (practical template)
| Section | Sample content prompts |
|---|---|
| Purpose | Enable safe, compliant AI use while protecting information and avoiding harmful outputs |
| Scope | Employees, contractors, and any AI tools used for work, including plugins and APIs |
| Approved tools | List enterprise-approved AI services; require vendor review for new tools |
| Prohibited uses | No Restricted data in AI prompts; no credential sharing; no bypassing access controls |
| Data rules | Allowed data by classification; de-identification requirements; prompt logging rules |
| Output rules | Human review for external content; citation/verification requirements; bias checks for sensitive decisions |
| Recordkeeping | When prompts/outputs are records; where to store them; retention rules |
| High-risk use cases | Require risk assessment and approval (e.g., employment, credit, health decisions) |
| Monitoring & enforcement | Audits for unsanctioned tools; consequences for repeated violations |
Table 6.3. AI acceptable use policy outline aligned with risk-based governance expectations.
Enforcement checklist (for policy rollout)
| Control area | Minimum actions | Evidence you should keep |
|---|---|---|
| Training | Role-based modules, annual refresh | Completion reports, training content |
| Communications | Leadership announcement, FAQs | Message archives, intranet page |
| Tooling | Labels, access controls, DLP rules | Configuration screenshots, change tickets |
| Monitoring | Alerts for risky sharing and uploads | Alert metrics, investigation records |
| Audits | Quarterly sampling and review | Audit plans, findings, remediation logs |
| Discipline | HR-aligned consequences | Case logs (appropriately confidential) |
| Exceptions | Track and expire exceptions | Exception register, approvals |
Table 6.4. A practical enforcement checklist you can apply to any IG policy.
Enforcement Mechanisms
A policy without enforcement becomes “optional guidance.” Enforcement should be fair, consistent, and focused on risk reduction rather than punishment. In practice, enforcement is a combination of education, technical controls, monitoring, and accountability.
Training and awareness
Training is the most visible enforcement mechanism, but it is only effective when it is role-based and scenario-driven. Consider four tiers:
- All staff: classification basics, safe sharing, AI do’s and don’ts, reporting incidents.
- Managers: approval responsibilities, exception handling, coaching.
- High-risk roles (HR, finance, health, customer support): handling Restricted data, retention and legal holds, AI decision safeguards.
- IT/security and developers: implementing controls, logging, secure configuration, AI model onboarding gates.
Use microlearning: short modules with examples. Reinforce training with “just-in-time” prompts in tools (for example, a reminder when someone tries to share a Restricted file externally).
Monitoring and technical controls
Modern enforcement depends on automation. Common technical controls include:
- Information protection labels that apply encryption and restrictions.
- DLP rules that detect sensitive data leaving approved locations.
- Identity and access management (least privilege, periodic access reviews).
- Audit logging for access, sharing, AI tool use, and administrative changes.
Addressing the "Air Gap": Technical enforcement in 2026 must also address the risks created by copy-paste behaviors. While a policy may prohibit uploading a document to an unvetted LLM, it is often harder to detect when an employee copies a single paragraph of Restricted data into a prompt. Modern enforcement, therefore, relies on Endpoint Data Loss Prevention (EDLP) that can trigger a "Just-in-Time" warning if sensitive patterns are detected in a clipboard buffer before they are pasted into a non-corporate browser window.
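A minimal sketch of that clipboard-scanning idea follows; the detection patterns are deliberately simplified illustrations (production DLP engines use validated detectors with checksums and context, not bare regexes):

```python
import re

# Simplified sensitive-data patterns for illustration only.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
}

def scan_clipboard(text: str) -> list:
    """Return the names of sensitive patterns found in the buffer."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def should_warn(text: str, destination_is_corporate: bool) -> bool:
    """Trigger a just-in-time warning only for pastes into
    non-corporate destinations that contain sensitive patterns."""
    return (not destination_is_corporate) and bool(scan_clipboard(text))
```

Note the design choice: the warning fires at the point of action (the paste), and only when both conditions hold, which keeps false alarms low enough that users keep paying attention.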
When building monitoring, prioritize high-risk behaviors (external sharing, public links, copying data into unsanctioned tools) rather than trying to monitor everything.
A useful design rule is “enforce at the point of action.” If a user can still share a Restricted file with “Anyone with the link,” the policy is not really enforced. If the platform blocks that option by default and requires an exception workflow, the policy becomes part of daily work. Similarly, if employees can install unapproved AI browser extensions without visibility, the AI policy will be ignored. Mature enforcement therefore includes configuration baselines (what settings must be true in each platform) and a control owner who tests those baselines after every major platform update.
Audits and continuous testing
Audits are how you prove policies are working. Effective audits combine:
- Control testing (are labels and access controls configured correctly?)
- Behavior testing (are people following the rules?)
- Evidence checks (are logs, approvals, and exception records maintained?)
For AI governance, NIST’s AI RMF Playbook provides example actions and outcomes that can be translated into audit questions (e.g., documentation, monitoring, and governance oversight). [airc.nist.gov], [digitalgov...enthub.org]
Disciplinary actions and accountability
Discipline should be predictable and proportional. Many organizations use a progressive model:
- Coaching and retraining
- Written warning
- Loss of privileges or reassignment
- Termination or contract action (for severe or repeated violations)
Accountability also includes leadership reporting. For example, if a business unit repeatedly violates sharing rules, the steering committee should review metrics and require corrective action.
Incentives and culture
Enforcement is easier when policy aligns with culture. Consider positive incentives:
- Recognize teams that improve classification coverage or reduce risky storage.
- Make compliance the default in tools so doing the right thing is easier than doing the wrong thing.
Integration with Laws and Regulations
IG policies should map to legal requirements without becoming law textbooks. The goal is alignment: the policy should help the organization meet obligations through clear rules and evidence.
A practical mapping approach
Use a “policy-to-law crosswalk.” For each major law or regulation, identify:
- The information types covered (personal data, health data, security incidents)
- Required controls (transparency, access, retention limits, security safeguards)
- Evidence expectations (documentation, logs, assessments)
- Who owns compliance (privacy, security, legal)
Also define the organization’s policy hierarchy so it can handle overlapping rules. A common hierarchy is: (1) enterprise IG policies (broad rules), (2) domain policies (privacy, security, AI, records), (3) standards (technical requirements), and (4) procedures (step-by-step workflows). When conflicts occur, the higher-level policy sets the principle and the standard sets the implementation. For example, a privacy policy may state “limit collection and retention,” while the retention standard specifies schedules and disposal approvals. This hierarchy matters because regulators and auditors look for internal consistency.
Key legal domains
GDPR (EU)
The GDPR emphasizes lawful processing, transparency, data minimization, security, and individuals’ rights. IG policies support GDPR by:
- Classifying personal data and applying appropriate access controls
- Limiting retention to what is necessary (retention schedules and disposal)
- Documenting processing and third-party relationships
CCPA/CPRA (California)
California’s privacy framework is enforced through a dedicated agency that publishes regulations and updates, and it adds obligations around consumer rights and governance practices such as training and record-keeping. IG policies support CPRA by defining notice practices, deletion workflows, vendor/contractor requirements, and internal training obligations. [Law & Regu...ncy (CPPA)]
EU AI Act
The EU AI Act uses a risk-based structure and imposes requirements on high-risk AI systems, including risk management, data governance, technical documentation, record-keeping/logging, transparency, human oversight, accuracy/robustness, and cybersecurity. IG policies should reflect these requirements by setting rules for AI use-case intake, documentation, logging, data quality controls, and oversight for high-risk uses. [eur-lex.europa.eu], [artificial...enceact.eu]
HIPAA (U.S. health sector)
HIPAA’s Privacy and Security Rules require safeguards for protected health information, including administrative, physical, and technical measures for ePHI. IG policies for health organizations should align classification (PHI as Restricted), access controls, audit logs, incident response, and business associate controls. [hhs.gov], [hhs.gov]
SEC rules and AI-related disclosure pressure
The SEC’s cybersecurity disclosure rule requires registrants to disclose material cybersecurity incidents promptly and to describe cybersecurity risk management, strategy, and governance. This creates a strong incentive to ensure IG-related incident response policies are documented and integrated with disclosure controls. Separately, SEC advisory materials in 2025 recommended clearer disclosure about AI’s impact on operations and board oversight, signaling that governance documentation and transparency are becoming expectations in capital markets. [sec.gov] [sec.gov], [crowell.com]
Policy-to-law integration table
| Policy area | What the policy should cover | Example laws/regulations that commonly drive it |
|---|---|---|
| Classification & handling | Labeling, access, sharing, encryption, logging | GDPR, CPRA, HIPAA [Law & Regu...ncy (CPPA)], [hhs.gov] |
| Retention & disposition | Schedules, legal holds, disposal evidence | GDPR (storage limitation), sector rules [eur-lex.europa.eu] |
| AI use & oversight | Approved tools, prohibited data, logging, human review, high-risk intake | EU AI Act, NIST AI RMF (voluntary) [eur-lex.europa.eu], [airc.nist.gov] |
| Incident response | Detection, escalation, notifications, documentation | SEC cyber rules, HIPAA breach expectations [sec.gov], [hhs.gov] |
| Third-party handling | Contracts, security controls, subprocessors, deletion/return | GDPR, CPRA, HIPAA BAAs [Law & Regu...ncy (CPPA)], [hhs.gov] |
Table 6.5. Example crosswalk between IG policy areas and common regulatory drivers.
Common Challenges and Solutions
Resistance to change
People resist policies when they feel like obstacles. Reduce resistance by:
- Explaining the “why” with real consequences (breach costs, fines, reputational harm)
- Involving users early in drafting and testing
- Making compliant behavior the default in tools
A useful communication tactic is to translate policy into “three rules to remember” for each audience. For example, for most employees: (1) label your files, (2) keep Restricted data in approved locations, (3) never paste Restricted data into nonapproved AI tools. For managers: add “approve exceptions and coach behavior.” This helps policies compete with daily workload.
Over-complex policies
Complexity is a common failure. Symptoms include 30-page policies, too many exceptions, and inconsistent language. Solutions:
- Use a layered approach: short policy, detailed standards, role-specific procedures
- Use examples and decision trees
- Review policies for readability and testability
Another practical method is a “policy usability test.” Give the draft to five people who represent the target audience. Ask them to answer three scenario questions using only the policy. If they cannot find the answer quickly, rewrite the policy, add examples, or move details into a procedure.
Enforcement gaps
Policies often fail because enforcement is not resourced. Solutions:
- Assign owners for each control (training, DLP, audits)
- Create a metrics dashboard (training completion, label coverage, risky sharing trends)
- Fund automation where possible
Treat enforcement as a control system: each policy statement should map to at least one control and one evidence source. If the policy says “Restricted data must be encrypted at rest,” the controls might be “encryption enabled in the repository” and “device encryption enforced,” and the evidence might be configuration reports and audit logs. This mapping prevents “paper compliance.”
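The mapping described above can be kept as a simple machine-readable register, which makes gaps easy to surface. The sketch below is a minimal, hypothetical illustration (the statement, control, and evidence strings are invented examples, not a real organization's register): any policy statement with no mapped control or no evidence source is flagged as "paper compliance."

```python
from dataclasses import dataclass, field

@dataclass
class PolicyStatement:
    """One enforceable sentence from a policy, plus its controls and evidence."""
    statement: str
    controls: list = field(default_factory=list)   # what enforces the rule
    evidence: list = field(default_factory=list)   # where proof of enforcement lives

def find_paper_compliance(register):
    """Return statements that lack a mapped control or an evidence source."""
    return [s.statement for s in register if not s.controls or not s.evidence]

# Hypothetical register entries for illustration.
register = [
    PolicyStatement(
        statement="Restricted data must be encrypted at rest",
        controls=["repository encryption enabled", "device encryption enforced"],
        evidence=["configuration reports", "audit logs"],
    ),
    PolicyStatement(
        statement="Disposition requires documented approval",
        controls=[],  # gap: nothing enforces this yet
        evidence=[],
    ),
]

print(find_paper_compliance(register))  # -> ['Disposition requires documented approval']
```

Running the check as part of each policy review cycle turns "every statement maps to a control and an evidence source" from an aspiration into a testable property.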
Keeping policies current
AI tools and privacy rules change quickly. Solutions:
- Establish policy lifecycle governance with a clear review cadence
- Track external changes (regulatory updates, new guidance)
- Use modular policies so you can update one section without rewriting everything
IAPP’s 2025 AI governance report notes that organizations often build AI governance incrementally and need skills to translate legislative requirements into actionable policies. This supports a practical approach: start with clear minimum rules and expand as capability grows. [iapp.org]
Real-World Examples
Example 1 (hypothetical): Rolling out an enterprise AI acceptable use policy
A global consulting firm adopts an enterprise AI assistant. Early pilots reveal employees pasting client data into external tools. The IG team drafts an AI acceptable use policy with three risk tiers: “public,” “internal,” and “restricted.” Restricted data is prohibited in prompts unless a specific approved tool supports encryption, logging, and contractual controls. The firm integrates the policy into tools by blocking uploads of Restricted data to nonapproved domains and by displaying prompts that remind users of allowed data types. Quarterly audits review prompt logs in approved tools and investigate exceptions.
This approach aligns with risk-based governance expectations emphasized by NIST AI RMF resources and EU AI Act requirements for documentation and oversight in high-risk contexts. [airc.nist.gov], [eur-lex.europa.eu]
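The three-tier gate in Example 1 can be sketched as a small decision function. This is a hypothetical illustration, not a real product integration: the domain names and tier labels are invented, and a production deployment would sit inside a DLP or proxy layer rather than application code.

```python
# Hypothetical approved-tool lists; real deployments would source these
# from the organization's sanctioned-tools register.
APPROVED_TOOLS = {"assistant.example-firm.com"}       # encryption, logging, contracts
INTERNAL_OK_TOOLS = APPROVED_TOOLS | {"wiki-bot.example-firm.com"}

def may_submit(classification: str, destination: str) -> bool:
    """Decide whether content with a given label may be sent to an AI tool."""
    tier = classification.lower()
    if tier == "public":
        return True                              # any tool
    if tier == "internal":
        return destination in INTERNAL_OK_TOOLS  # firm-managed tools only
    if tier == "restricted":
        return destination in APPROVED_TOOLS     # contractually controlled tools only
    return False                                 # unknown or missing labels fail closed
```

Note the fail-closed default: unlabeled or unrecognized classifications are blocked, which mirrors the policy principle that exceptions require explicit approval.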
Example 2 (hypothetical failure case): A retention policy that no one can follow
A manufacturing company publishes a detailed retention schedule but does not configure retention labels in its collaboration platform. Employees keep creating shared folders with inconsistent naming and no ownership. After litigation, the company spends months searching unmanaged locations and cannot prove consistent disposal practices.
The fix is not rewriting the policy. The fix is operationalizing it: define repository owners, enable automated labels, require disposition approvals, and keep disposition logs. This illustrates a common lesson: policy must be paired with controls and evidence.
Example 3 (realistic composite): Health system strengthens third-party data handling
A health system experiences a vendor-related incident involving a billing contractor. In response, it updates third-party handling policies to require business associate agreements, minimum security controls, and strict incident notification timelines. It also updates access controls so vendors receive least-privilege access and all access is logged. The policy references HIPAA Security Rule safeguard expectations and ties vendor controls to periodic audits. [hhs.gov]
Example 4 (realistic composite): Public company integrates IG with SEC disclosure readiness
A mid-cap public company updates its incident response policy to align with SEC cybersecurity disclosure requirements. The policy clarifies who determines materiality, how information flows to disclosure counsel, and how evidence is preserved. The company runs tabletop exercises and maintains documentation so that, if an incident occurs, it can meet the required reporting timelines and provide consistent governance disclosures. [sec.gov]
Future Outlook
IG policy work will continue to evolve in three directions.
First, AI governance will become more formal and auditable. Organizations will increasingly maintain AI asset registers, use-case intake processes, documentation and logging standards, and monitoring for unsanctioned tools. This shift is reinforced by regulatory frameworks like the EU AI Act and by NIST’s ongoing guidance on operationalizing AI risk management. [eur-lex.europa.eu], [airc.nist.gov]
Second, privacy laws will keep expanding and diversifying. Organizations will need modular policies that can adapt to different jurisdictions while maintaining a consistent enterprise baseline. California’s active privacy rulemaking and enforcement infrastructure is a signal that U.S. privacy governance is becoming more operational and evidence-driven. [cppa.ca.gov]
Third, IG policies will be embedded into systems by default. As organizations mature, they will rely less on manual compliance and more on automation: labels applied at creation, access controls that follow classification, and continuous monitoring for policy drift. This is the practical path to scalable compliance in high-volume digital environments.
Another helpful artifact is a “compliance evidence bundle” for each major policy. It is the folder an auditor would ask for: the approved policy, the last review date, the training module and completion report, the system settings that enforce the rule, and a sample of logs that prove enforcement. For example, a classification policy’s bundle might include label definitions, exports of label configurations, a report showing what percentage of files are labeled, and DLP alert summaries for attempted external sharing. For an AI use policy, the bundle might include the approved-tools list, the intake checklist for new use cases, testing notes for high-risk deployments, and periodic audits for unsanctioned tools. Building these bundles during rollout prevents “scramble mode” later and makes policy maintenance easier.
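A completeness check over an evidence bundle can itself be automated. The sketch below is a minimal example under assumed conventions: the artifact filenames are hypothetical placeholders, and a real implementation would use the organization's own naming standard and document-management system rather than a local folder.

```python
from pathlib import Path

# Hypothetical artifact names for a generic policy's evidence bundle.
REQUIRED_ARTIFACTS = [
    "approved_policy.pdf",
    "last_review_date.txt",
    "training_completion_report.csv",
    "enforcement_settings_export.json",
    "sample_enforcement_logs.csv",
]

def missing_artifacts(bundle_dir: str) -> list:
    """Return required artifacts absent from a policy's evidence bundle folder."""
    bundle = Path(bundle_dir)
    return [name for name in REQUIRED_ARTIFACTS if not (bundle / name).exists()]
```

Running this check on a schedule (or in a CI job for the policy repository) surfaces stale or incomplete bundles long before an auditor asks for them.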
Learning Objectives
- Describe the difference between policies, standards, and procedures and explain how they form an IG framework.
- Apply a step-by-step process for creating, reviewing, approving, and maintaining IG policies.
- Identify key IG policy types and explain what each should cover in a modern (2026) environment.
- Use templates and checklists to draft usable policies and plan enforcement.
- Explain how IG policies align with major laws and regulations, including privacy, security, and AI governance requirements.
Key Takeaways
- IG policies translate governance goals into clear rules that can be trained, audited, and automated.
- A disciplined policy development process improves legitimacy, usability, and enforceability.
- Modern IG frameworks must include AI use policies and third-party data handling controls.
- Enforcement requires more than training: it also requires monitoring, audits, technical controls, and accountability.
- Regulatory alignment is best managed through crosswalks that connect policy requirements to evidence and ownership.
Discussion Questions
- What is the most important difference between a policy and a procedure, and why does that distinction matter for enforcement?
- Design a one-page AI acceptable use policy for a university: what data types would you prohibit in prompts, and how would you enforce the rule?
- Think of a policy you have seen (school, job, or organization). What made it easy—or hard—to follow?
Further Reading
- EU Artificial Intelligence Act (official text, Regulation (EU) 2024/1689): https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng [eur-lex.europa.eu]
- NIST AI RMF Playbook (implementation guidance): https://airc.nist.gov/airmf-resources/playbook/ [airc.nist.gov]
- IAPP AI Governance Profession Report 2025: https://iapp.org/resources/article/a...fession-report [iapp.org]
- California Privacy Protection Agency – Laws & Regulations: http://cppa.ca.gov/regulations/ [cppa.ca.gov]
- HHS Summary of the HIPAA Security Rule: https://www.hhs.gov/hipaa/for-profes...ons/index.html [hhs.gov]
- SEC Final Rule on Cybersecurity Disclosures (Release 33-11216 PDF): https://www.sec.gov/files/rules/fina...3/33-11216.pdf [sec.gov]
- SEC Investor Advisory Committee recommendation on AI disclosures (Dec. 2025): https://www.sec.gov/files/approved-a...ion-120425.pdf [sec.gov]
Your Nerdy Example:

Star Wars: The Clone Wars, Season 3, Episode 3 ("Supply Lines", 2010): Senators debate and ratify policies that create enforceable frameworks for cooperation, much like developing IG policies and frameworks that integrate with laws for effective enforcement.

