In the first two articles (Part 1 and Part 2) in this series, we covered Microsoft 365 readiness and the main security risks associated with Copilot Studio agents. The next step is turning those insights into a practical control model.
This is where governance matters most.
A well-designed agent is not just a useful prompt with a connected SharePoint library. It is a governed capability with a defined scope, approved knowledge sources, controlled channels, formal ownership, and an operating model that can stand up to review.
This article sets out the core governance and control measures organisations should establish before scaling Copilot Studio agents, especially where confidential information or personal data could be involved.
Governance is what makes AI defensible
Many organisations focus first on what an agent can do:
- answer staff questions
- guide users through a process
- summarise policy documents
- retrieve internal knowledge
- trigger workflows
Those are useful outcomes, but governance asks a different set of questions:
- What is the agent allowed to do?
- What is it explicitly not allowed to do?
- What data can it use?
- Where can it operate?
- Who owns it?
- Who can change it?
- What testing is required before release?
- What happens if it discloses something it should not?
Without those controls, the agent may still function, but it will not be well governed.
The three layers of control
A practical governance model for Copilot Studio agents usually has three layers:
- Agent governance
Controls how the agent behaves. - Data governance
Controls what the agent can access. - Process governance
Controls how the agent is approved, changed, and reviewed over time.
All three are required. Strong prompts without strong permissions are not enough. Strong permissions without change control are not enough. Good process without clear runtime guardrails is not enough.
1. Agent governance: controlling behaviour
Agent governance is about practical guardrails. It defines how the agent should behave in day-to-day use.
This includes:
- what topics it can handle
- what it must refuse
- how it responds to sensitive requests
- what channels it can appear in
- whether it can perform actions
- how its outputs are monitored and reviewed
This is the layer that helps reduce over-disclosure, misuse, and inconsistent behaviour.
Start with a clearly defined scope
Every agent should have a documented scope.
That scope should define:
- the business purpose
- intended users
- approved channels
- approved knowledge sources
- whether personal data is in scope
- whether the agent is read-only or action-enabled
- what is out of scope
A vague purpose statement such as “help users find information” is not enough. The more specific the scope, the easier it is to apply meaningful controls.
For example, a safer scope might be:
Provide answers to staff questions using approved policy and template content from a curated SharePoint library. Do not provide personal data, case details, legal advice, or record-level responses.
That gives a stronger foundation for prompt design, testing, and approval.
Use the system prompt as a policy control
The system prompt is one of the most important behavioural controls in Copilot Studio. It should not be treated as a casual instruction set. It should be written as a policy layer for the agent.
A strong system prompt should clearly define three things:
What the agent can do
Examples:
- explain approved business processes
- summarise non-sensitive documents from approved knowledge sources
- provide links to policies, templates, or standard guidance
- direct users to the right team or process
What the agent cannot do
Examples:
- disclose personal data
- discuss casework, complaints, or disciplinary matters
- list or export records
- infer sensitive details
- answer questions outside approved sources
What the agent must do when asked something out of scope
Examples:
- refuse clearly
- avoid ambiguity
- offer a safe alternative
- provide a hand-off or escalation path
The prompt should use direct language. Clear instructions are more reliable than soft wording.
Better:
Do not disclose personal data under any circumstances.
Weaker:
Try to avoid sharing sensitive information where possible.
The agent should not be left to interpret policy intent.
Build dedicated refusal topics
System prompts are necessary, but they should not be the only control. Where the organisation knows the likely high-risk requests, those should be intercepted early using dedicated topics or trigger patterns.
High-priority refusal topics are especially useful for:
- requests for addresses, phone numbers, dates of birth, or account details
- requests for lists of people or records
- requests involving complaints, investigations, or disciplinary matters
- bulk export-style prompts
- probing questions such as “what do you know about this person”
A refusal topic should follow a consistent structure:
- Refuse the request
- Offer a safe alternative
- Direct the user to the correct process or team
For example:
- “I can’t help with that request.”
- “I can explain the process for requesting approved information.”
- “Please use the approved system or contact the relevant team.”
This keeps the response controlled and predictable.
Detect sensitive inputs before the model processes them
Not every risky interaction comes from a direct request. Sometimes a user enters sensitive information into the chat itself.
This might include:
- phone numbers
- email addresses
- employee or customer identifiers
- account numbers
- other structured personal data
Where possible, organisations should use entity detection or regex-style pattern matching to identify these inputs before the conversation proceeds.
This helps the agent do three things:
- avoid echoing the information back
- redirect the user to the right channel
- log the event appropriately without retaining unnecessary detail
This is a practical way to support data minimisation at the conversation layer.
Approve channels deliberately
Where an agent is deployed matters as much as what it says.
For example, a one-to-one chat is very different from:
- a Teams channel
- a shared workspace
- a broad collaboration area with changing membership
- an environment where guests may be present
Channel choice should be treated as a governance decision, not just a deployment option.
A good operating principle is:
- low-risk informational agents may be suitable for broader internal use
- higher-risk agents should be restricted to tightly controlled channels
- agents involving personal data should not be placed in broad collaboration spaces without strong justification and additional controls
Each enabled channel should be explicitly approved.
Test for misuse before release
An agent should not go live simply because the happy-path scenario works.
Testing should include:
- standard functional testing
- refusal testing
- over-disclosure testing
- prompt injection testing
- boundary testing for out-of-scope requests
- channel-specific testing
- least-privilege testing against connected data
Many organisations benefit from maintaining a small regression pack of known risky prompts. That pack should be re-used whenever the prompt, source set, connectors, or channels change.
If the agent cannot consistently refuse risky requests before launch, it is not ready.
Monitor behaviour after go-live
Governance is not complete at deployment.
Once an agent is live, organisations should monitor for:
- repeated requests for record lists or exports
- attempts to retrieve personal data
- unusual access patterns
- spikes in refusal topic activation
- changes in usage by channel
- user feedback that suggests misleading or over-broad answers
Monitoring does not need to be over-engineered at the start, but it does need to exist. It should also treat logs and transcripts as sensitive operational data.
2. Data governance: controlling access
If agent governance controls behaviour, data governance controls exposure.
This is the layer that determines what the agent can retrieve in the first place.
The strongest behavioural guardrails will still struggle if the agent is connected to broad, mixed, or poorly governed repositories.
Use curated knowledge sources
One of the most effective controls is to connect the agent only to curated knowledge sources.
That means:
- approved policy libraries
- approved FAQ content
- standard templates
- governed guidance repositories
It does not mean connecting the agent to entire sites or broad document collections that include mixed content.
High-risk content areas such as HR, casework, complaints, finance, legal, or investigations should be excluded by default unless there is a specific approved need and stronger controls to support it.
A useful pattern is to create “Copilot-ready” libraries that are intentionally prepared for agent use.
What a Copilot-ready library should include
A well-governed knowledge source should have:
- a named content owner
- clear business purpose
- limited scope
- approval workflow before publication
- versioning
- review dates
- archiving or expiry process for stale content
The goal is not just security. It is also accuracy. A smaller, well-maintained library reduces the chance that the agent retrieves the nearest matching answer instead of the correct one.
Align permissions to business need
Permissions remain a core enforcement layer.
Before connecting any SharePoint, OneDrive, Dataverse, or other source, organisations should check:
- who currently has access
- whether broad groups are in use
- whether inheritance is broken unnecessarily
- whether file- or folder-level permissions are sprawing
- whether guests have access
- whether service identities are over-permissioned
The agent will not understand whether a permission is appropriate. It will only operate within the permissions it is given.
A few useful principles are:
- use role-based groups rather than individual assignments
- avoid broad access groups for sensitive content
- reduce unique permissions where possible
- review and remove stale access regularly
- scope service accounts tightly
Apply classification and protection controls
Labels and DLP are not optional extras in an AI-enabled environment. They are part of the control structure.
Sensitivity labels help identify which content is sensitive and what restrictions should apply. DLP helps enforce what can happen to that content across Microsoft 365 and the Power Platform.
These controls become more important when agents are in use because AI can summarise and re-present content quickly. If sensitive content is not labelled and governed properly, it may be surfaced without triggering the right protection response.
A practical starting point is a small, manageable label set, such as:
- Public
- Internal
- Confidential
- Highly Confidential
This is often more effective than introducing a large label taxonomy too early.
Control external sharing and guest access
If the source content is accessible to guests, or if sites and libraries are shared broadly, the risk surface increases significantly.
Important questions include:
- Are “Anyone” links allowed?
- Can users invite guests freely?
- Are guest accounts reviewed and expired?
- Are sensitive sites configured differently from low-risk sites?
- Are agents present in spaces that include guests?
For sensitive use cases, internal-only deployment is usually the safer starting point. At minimum, external sharing settings should be reviewed carefully before any knowledge source is connected to an agent.
Define retention for source and derived content
Retention is often discussed only for source files, but agent-related artefacts matter as well.
Governance should consider retention for:
- source libraries
- generated files
- transcripts
- exports
- diagnostic logs
- analytics outputs
If the organisation cannot explain where this material is stored, who can access it, and how long it is retained, the governance model is incomplete.
3. Process governance: controlling change and accountability
The final layer is process governance. This is what prevents unmanaged drift over time.
Even a well-designed agent can become risky later if someone changes:
- the prompt
- the knowledge sources
- the connectors
- the channels
- the identity model
- the permissions
That is why process governance matters.
Assign clear ownership
Every agent should have at least:
- a business owner
- a technical owner
The business owner is accountable for purpose, appropriateness, and business risk. The technical owner is responsible for configuration, support, and operational integrity.
For more sensitive use cases, additional oversight may be needed from security, privacy, compliance, or records management functions.
No agent should go live without named ownership.
Use a formal intake and risk assessment
Before a new agent is built, the request should capture:
- business purpose
- target users
- channels
- knowledge sources
- connectors
- whether personal data is in scope
- whether the agent can take actions
- likely risk level
This helps the organisation distinguish between:
- low-risk informational agents
- medium-risk internal workflow agents
- high-risk agents involving confidential information or personal data
Different risk levels should trigger different approval expectations.
Establish approval gates
Approval should not happen only once.
At a minimum, organisations should require approval:
- Before initial launch
- Before expanding data access
- Before expanding channels or audience
- Before enabling actions that change data or trigger business processes
A new connector or data source should be treated as a material change, not a routine update.
Maintain separation of duties
The person who builds the agent should not be the only person who approves it for release.
This is particularly important where the agent:
- accesses sensitive content
- uses service identities
- can update records
- can trigger communications
- can create or move files
Independent review helps reduce blind spots and improves accountability.
Treat changes as controlled items
Prompts, sources, connectors, and permissions should all be treated as governed configuration items.
Each change should record:
- what changed
- why it changed
- who requested it
- who approved it
- when it was released
- whether re-testing was completed
This matters because small changes can have large effects. A minor adjustment to the system prompt or a newly added knowledge source may significantly alter what the agent can say or retrieve.
Prepare an incident response path
If an agent discloses something it should not, the organisation needs to know what to do immediately.
A simple incident runbook should cover:
- how to disable the agent quickly
- how to preserve evidence
- who to notify
- how to assess impact
- how to review logs and transcripts safely
- how to document lessons learned
This does not need to be complex, but it should exist before launch.
Review and recertify regularly
Agents should not run indefinitely without review.
A periodic review should confirm that:
- the purpose still applies
- the owner is still active
- the connected sources are still appropriate
- permissions have not drifted
- channels remain suitable
- the control set still matches the risk
Agents that are no longer used, no longer owned, or no longer aligned to their approved purpose should be paused or retired.