AI Enablement

Free team deliverable: copy, print, and adopt in 15 minutes

AI Rules of the Road

A free, practical team governance doc for Copilot, ChatGPT, Gemini, Claude, and any AI tool your team uses.

Most teams adopt AI tools without any shared rules — which leads to data leaks, unreliable outputs, and inconsistent quality. This one-page document gives your team a shared standard: what AI is good for, what to avoid, how to verify outputs, and who is accountable. No legal jargon. No IT overhead. Just a clear, practical guide you can adopt immediately.

Data safety first

Clear rules for what can and cannot be pasted into AI tools, with no guesswork for your team.

Verification built in

A two-pass review method so external-facing outputs are always checked before they go out.

Human accountability

Every output has an owner. AI is the drafting assistant and a human is always responsible.

Rules (core)

Copy, print, and share with your team in 15 minutes.

Data safety (sanitization required)

  • Do not paste confidential, proprietary, regulated, or customer-identifying information into AI tools unless explicitly allowed.
  • When in doubt, sanitize: remove names, emails, IDs, internal URLs, account numbers, credentials, and sensitive details.
  • Prefer structure + placeholders (ClientName, Date, PolicyName) over raw data.
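
The sanitize-before-pasting rule above can be sketched as a small helper that swaps sensitive-looking tokens for placeholders. This is a minimal illustration only; the patterns and placeholder names are assumptions, and real sanitization should match your own data formats.

```python
import re

# Illustrative patterns only: replace email addresses, long digit runs,
# and internal URLs with the placeholder style recommended above.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[Email]"),
    (re.compile(r"\b\d{8,16}\b"), "[AccountNumber]"),
    (re.compile(r"https?://intranet\.\S+"), "[InternalURL]"),
]

def sanitize(text: str) -> str:
    """Replace sensitive-looking tokens with placeholders before pasting into an AI tool."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(sanitize("Contact jane.doe@acme.com about account 123456789."))
# → Contact [Email] about account [AccountNumber].
```

A helper like this catches the obvious cases, but the "when in doubt, sanitize" rule still applies: a human should scan the text before pasting.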

Good fits

  • Drafting: emails, SOPs, agendas, summaries, macros
  • Structuring: messy notes into a clean outline
  • Editing: tone, clarity, concision
  • Brainstorming: options and pros/cons (with verification)

Avoid or escalate

  • Legal/compliance interpretation or formal advice
  • Medical or financial advice
  • Security-sensitive material (credentials, incident details)
  • High-impact decisions without verification

Verification is mandatory

  • External-facing outputs require verification of facts, numbers, dates, and policy references.
  • Use a two-pass method: generate first, then review against a checklist before sending.
  • Prefer draft-with-placeholders over invented specifics.

Quality standards

  • Use clear structure: headings, bullets, short paragraphs.
  • Ask for assumptions when info is missing.
  • Keep tone professional and aligned to your brand voice.

Accountability

  • A human is always accountable for final output quality and correctness.
  • AI is a drafting assistant, not a source of truth.

How teams use this

1. Adopt the rules in a 15-minute team meeting
2. Use the verification checklist before sending external content
3. Store prompt systems in a shared space
4. Review and refine monthly

Governance-lite

Practical internal standards, not a compliance project

This document is the starting point for a team governance-lite posture: safe-use rules, verification requirements, escalation paths, and accountability assignments. It is not a substitute for formal legal, security, or regulatory compliance work.

  • Print or share this doc as your team’s AI usage agreement
  • Assign an owner (manager or ops lead) responsible for maintaining it
  • Review and update quarterly as tools and policies evolve
  • For formal governance, AI policies, or compliance frameworks, consult your legal or security team

Mini FAQ

Does this replace an official AI policy?

No. It's a practical starter guide. For formal policies, consult internal security/legal.

Do we need Copilot?

No. Copilot-first is recommended for Microsoft shops, but these rules apply across tools.

How do you reduce hallucinations?

Use the verification checklist, ground outputs in source documents, and keep a human review step before anything goes out.