
STB Creative Solutions — AI Data Policy

Internal policy for employees, contractors, and approved subcontractors

Effective date: 15 January 2026

Policy owner: AI Data Governance Officer / Data Protection Lead (or delegated compliance owner)

Applies to: STB Creative Solutions (sybersolution.com)

Data protection governance for AI-assisted work, including Private Agentic AI flows
 

Aligned with the GDPR and EU AI Act, the Malaysia PDPA and national AI governance guidance, and the Singapore PDPA and national AI governance guidance

0. Policy principles (read first)

1. Purpose

This AI Data Policy defines how STB Creative Solutions (“STB”) collects, uses, stores, shares, secures, and deletes data when using AI tools and AI-enabled workflows, including autonomous or semi-autonomous Agentic AI systems. The policy focuses exclusively on data handling and data protection controls (not general AI usage ethics).

 

2. Scope
 

This policy applies to:

  • All AI-assisted client work and internal operations (research, drafting, editing, audits, analytics, reporting, agent automations).

  • All data categories processed via AI tools: prompts, uploads, transcripts, datasets, logs, embeddings/vector stores, agent memory, tool-call parameters, intermediate reasoning artifacts, and outputs.

  • All third-party vendors, platforms, plugins, extensions, and integrations used for AI processing or agent execution.

  • All deployments of Agentic AI, including Retrieval-Augmented Generation (RAG), tool-using agents, workflow automations, and private agentic AI flows (on-prem/private cloud).

 

7. Data minimisation & purpose limitation
 

7.1 Minimisation rules (mandatory)

  • Only use data strictly necessary for the task; prefer summaries/excerpts over full documents.

  • Redact identifiers (names, emails, phone numbers, account numbers) unless required and approved.

  • Never place credentials, API keys, secrets, passwords, or access tokens in prompts, uploads, or agent tool calls.

  • Avoid ingesting comments/user identifiers from external platforms unless necessary; aggregate insights where possible.

  • Disable persistent memory for client work by default; enable only with documented purpose and retention controls.
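As an illustration of the redaction rule above, a minimal pre-processing sketch follows. The patterns and placeholder labels are illustrative assumptions only; a production deployment would use a vetted PII-detection tool plus human review, not ad-hoc regexes.

```python
import re

# Illustrative redaction patterns; not exhaustive and not an approved STB tool.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),
}

def redact(text: str) -> str:
    """Replace common identifiers with typed placeholders before prompting."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The typed placeholders (e.g. `[EMAIL]`) preserve the shape of the text for the model while keeping the identifier itself out of the prompt.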

 

7.2 Purpose limitation rules (mandatory)
 

AI processing must match the documented purpose agreed with the client or internal owner.

Client data must not be reused for other clients, demonstrations, benchmarking, or model training without explicit written approval.

Any change in purpose requires a new approval and updated project documentation.

 

4. Definitions
 

  • Personal Data: information relating to an identified or identifiable individual (including business contact details where applicable).

  • Sensitive / Special Category Data: sensitive data (e.g., health data) requiring higher protections; treat as restricted by default.

  • Client Confidential Information: any non-public client information (strategies, documents, product roadmaps, credentials, financials, customer lists).

  • AI System: any software using ML/LLMs to generate or transform content, retrieve information, or support decisions.

  • Agentic AI / AI Agent: an AI system that can plan and execute multi-step tasks by invoking tools/APIs and taking actions.

  • Private Agentic AI Flow: an agentic AI system designed to keep sensitive data within a private environment (on-prem or private cloud), with strict access control, logging, and guardrails.

  • RAG: workflow retrieving data from a knowledge base (documents/vector store) to augment model responses.

  • AI Artifacts: prompts, outputs, intermediate steps, tool calls, retrieval results, logs, memory entries, embeddings, plans, and execution traces.

 

5. Roles and responsibilities
 

  • AI Data Governance Officer / Data Protection Lead: owns this policy; approves high-risk workflows; maintains vendor register; coordinates DSAR and incident response.

  • Project Lead: ensures project-level data mapping, lawful basis confirmation, permissions, tool configuration, logging, and retention are implemented.

  • All staff/contractors: follow approved workflows; apply minimisation; do not bypass controls; report incidents immediately.

  • Vendors/subprocessors: must meet STB contractual and security requirements and support deletion/retention obligations.

 

3. Regulatory alignment (overview)

STB operates with a multi-jurisdiction approach. This policy is designed to meet or exceed requirements under:

  • European Union: General Data Protection Regulation (GDPR) and the EU AI Act (risk-based AI governance).

  • Malaysia: Personal Data Protection Act 2010 (PDPA) and relevant national AI governance guidance (e.g., Malaysia National Guidelines on AI Governance & Ethics and National AI Office governance direction).

  • Singapore: Personal Data Protection Act 2012 (PDPA) and national AI governance guidance (e.g., Model AI Governance Framework, AI Verify).

If client contracts or sector rules (financial services, healthcare, legal) impose stricter requirements, the stricter requirement prevails.

 

 

6. Data classification for AI workflows
 

STB classifies data used with AI into the following categories (highest risk classification governs):
 

  • Public: lawfully available public information.

  • Internal: STB internal operational data (non-public).

  • Client Confidential: non-public client data.

  • Personal Data: any data about an identifiable person.

  • Sensitive/Restricted: sensitive personal data; regulated data; credentials/secrets; minors’ data; high-impact datasets.

  • Agentic Artifacts: logs, memory, intermediate steps, embeddings, and tool-call traces (classified by the most sensitive data they contain).
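The rule that the highest risk classification governs (including for agentic artifacts that mix several categories) can be sketched as an ordered enumeration; the names below are illustrative:

```python
from enum import IntEnum

class DataClass(IntEnum):
    """Ordered so that a higher value means stricter handling."""
    PUBLIC = 0
    INTERNAL = 1
    CLIENT_CONFIDENTIAL = 2
    PERSONAL = 3
    SENSITIVE_RESTRICTED = 4

def governing_class(classes: list[DataClass]) -> DataClass:
    """An artifact mixing several categories inherits the strictest one."""
    return max(classes)
```

For example, an agent log containing both public research notes and a client's personal data would be handled as Personal Data.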

 

8. Private Agentic AI flows (required standard for sensitive work)

 

For sensitive sectors (healthcare, finance, legal, defence) and any work involving restricted data, STB adopts a Private Agentic AI approach whenever feasible to prevent sensitive data from leaving controlled environments.
 

8.1 Three-layer private agentic architecture (reference model)
 

  • Foundation Layer (LLM runtime): run models on-premises or in a private cloud environment where data remains within the organisation/network boundary.

  • Augmentation Layer (private knowledge): use RAG, vector databases, and curated knowledge bases hosted in private environments with strict tenancy separation.

  • Action Layer (tools/APIs): invoke only approved internal tools and APIs using least-privilege credentials, with strong monitoring and approval gates.
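The three layers above can be summarised as a reference configuration. Every endpoint, store, and tool name below is a placeholder assumption, not a real STB service:

```python
# Illustrative reference configuration for the three-layer model.
PRIVATE_AGENT_STACK = {
    "foundation": {            # LLM runtime stays inside the network boundary
        "runtime": "on_prem",
        "model_endpoint": "https://llm.internal.example/v1",
    },
    "augmentation": {          # private RAG with per-client tenancy separation
        "vector_store": "pgvector",
        "tenancy": "per_client_namespace",
    },
    "action": {                # only allowlisted tools, least-privilege scopes
        "allowed_tools": ["crm_read", "report_draft"],
        "approval_required": ["external_send", "publish"],
    },
}
```

Keeping the approval-gated actions explicit in configuration makes the human checkpoints auditable alongside the tool allowlist.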

 

8.2 Risks unique to private agentic systems
 

  • Model memorisation: fine-tuning or training on private data can embed sensitive content within model weights.

  • Prompt injection and data exfiltration: agents may be manipulated via retrieved content or external instructions.

  • Over-broad tool access: agent autonomy increases the blast radius of mistakes.

  • Insider threats and misuse: authorised users can still misuse systems without strong governance.

 

8.3 Mitigation requirements for private agentic deployments
 

  • Anonymisation/tokenisation: scrub or tokenise PII before it reaches the model, where feasible; keep re-identification mapping in a separate, secured store.

  • Least privilege + allowlists: restrict agents to approved tools, datasets, endpoints, and actions; block unknown destinations.

  • Human approval gates: require explicit human approval for publishing, external sending, exports, system changes, and client-system actions.

  • Kill switch: capability to pause/disable agents instantly and revoke credentials; preserve logs for investigation.

  • Environment isolation: per-client isolated workspaces; segregate dev/test/prod; prevent cross-client retrieval.

  • DLP controls: outbound filters/redaction; attachment restrictions; prevent copying restricted content into unapproved tools.

  • Logging-by-design: agent-run, tool-call, and retrieval logs are mandatory for agentic workflows that handle personal or client-confidential data.

  • No silent fine-tuning on client data: fine-tuning requires explicit client approval, documented lawful basis, and a Model Training Register entry.
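The tokenisation requirement above (scrub PII before it reaches the model, with the re-identification map held separately) can be sketched as follows; in production the map would live in a separately secured store, never the model environment:

```python
import secrets

class Pseudonymiser:
    """Illustrative sketch: swap PII values for opaque tokens.

    The re-identification map is held in this object; a real deployment
    would keep it in a separate, access-controlled store.
    """

    def __init__(self) -> None:
        self._map: dict[str, str] = {}

    def tokenise(self, value: str) -> str:
        # Reuse the existing token for a value already seen.
        for token, original in self._map.items():
            if original == value:
                return token
        token = f"TKN_{secrets.token_hex(4)}"
        self._map[token] = value
        return token

    def reidentify(self, token: str) -> str:
        return self._map[token]
```

Only the `TKN_…` placeholders are exposed to the model; re-identification is a privileged, logged operation on the separate store.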

 

10. Data subject rights & requests (GDPR, Malaysia PDPA, Singapore PDPA)
 

10.1 Intake and escalation

  • Immediately route all data subject requests (access, correction, deletion/erasure, objection, restriction) to the Data Protection Lead.

  • If STB is a processor, coordinate with the client/controller and assist per contract and law.

 

12. Security and confidentiality controls (baseline + agentic hardening)
 

12.1 Baseline controls (mandatory)

  • Encryption in transit and at rest.

  • Role-based access control (RBAC) and least privilege; MFA for all accounts.

  • Separate client workspaces and strict tenancy boundaries; no commingling.

  • Secure storage for uploads and transcripts; time-bound sharing links where feasible.

  • Secure device posture for staff (updated OS, endpoint protection) where applicable.

 

10.2 Rights handling across AI and agentic artifacts

 

  • System mapping: maintain a project data map listing where personal data may exist (files, prompts, transcripts, logs, memory, vector stores, vendor systems).

  • Searchability: logs and stores must be searchable to locate personal data without undue delay (where lawful).

  • Rectification: correct personal data in the authoritative source and propagate downstream; re-run relevant steps if outputs depended on incorrect data.

  • Erasure/deletion: delete personal data from active systems and request deletion from vendors where feasible; document completion.

  • Embeddings: if deletion from embeddings is not technically feasible, apply compensating controls (tombstoning, access blocking, re-indexing) and document the limitation and mitigation.

  • Automated decision-making: if any workflow could materially affect individuals (e.g., profiling), it requires additional approvals, human oversight, and risk assessment.
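Where hard deletion from a vector store is not feasible, the tombstoning compensating control above can be approximated by excluding blocked IDs at retrieval time until re-indexing. This is a minimal sketch of the idea, not a substitute for vendor-supported deletion:

```python
class TombstoningIndex:
    """Sketch: tombstoned vector IDs are excluded from retrieval
    until the store can be re-indexed without them."""

    def __init__(self, vectors: dict[str, list[float]]) -> None:
        self.vectors = vectors
        self.tombstoned: set[str] = set()

    def tombstone(self, vec_id: str) -> None:
        # Record the block immediately; hard deletion happens at re-index.
        self.tombstoned.add(vec_id)

    def retrievable_ids(self) -> set[str]:
        return set(self.vectors) - self.tombstoned
```

The tombstone list itself should be covered by the same access controls and retention records as the data it blocks.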

 

13. Data provenance, logging & traceability
 

13.1 Required project records

  • Project Data Map (sources, categories, systems, vendors).

  • AI Activity Log (tools used, purpose, data categories, outputs, reviewer).

  • External Source Register (public sources and private sources, including access level and permission).

  • Retention & Deletion Record (what was deleted, when, and confirmations).

 

13.2 Agent Run Log (mandatory for agentic workflows)
 

Minimum fields:

  • Run ID, date/time, owner, client/project, documented purpose, risk tier.

  • Tools invoked (and versions), permission scope, endpoints accessed.

  • Data inputs and sources used; flags for personal/confidential/restricted data.

  • Retrieval results provided to the agent, actions proposed, actions executed, outputs produced.

  • Human approvals granted/denied and by whom.

  • Errors, policy violations, overrides, and remediation actions.
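The minimum fields above map naturally onto a structured record; the field names below are illustrative, not a mandated schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRunLogEntry:
    """Illustrative Agent Run Log record covering the minimum fields."""
    run_id: str
    owner: str
    client_project: str
    purpose: str
    risk_tier: str
    tools_invoked: list[str] = field(default_factory=list)
    endpoints_accessed: list[str] = field(default_factory=list)
    data_flags: list[str] = field(default_factory=list)       # personal/confidential/restricted
    actions_executed: list[str] = field(default_factory=list)
    approvals: list[str] = field(default_factory=list)        # who approved/denied what
    errors: list[str] = field(default_factory=list)           # violations, overrides, remediation
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

Structured entries like this keep run logs searchable, which section 10.2 requires for locating personal data without undue delay.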

 

14. External data sources (including private sources)

Agents may interact with external sources (web content, videos, platforms). External sources can be public, semi-public, or private (e.g., client-provided private content). Treat derived transcripts/notes as client confidential when sourced from private materials.

  • Use only sources STB has the right to access and process; respect platform terms and copyright.

  • Minimise ingestion: prefer metadata/transcripts/excerpts; avoid downloading full media unless approved.

  • Redact personal data from derived notes before sending to third-party tools unless explicitly approved and contractually covered.

  • Log provenance: URLs/IDs, access level, date accessed, and purpose in the External Source Register.

 

 

11. Vendor management & third-party AI tools

 

  • Only use approved vendors for client work. Maintain an Approved AI Vendor List and Vendor Register.

  • Conduct due diligence: security controls, retention/deletion, subprocessors, cross-border transfers, audit logging, prompt/output handling, and training data use.

  • Execute Data Processing Agreements (DPAs) where required; ensure cross-border safeguards are in place (e.g., SCCs where relevant).

  • Prefer private cloud or data-residency options when required by the client or by jurisdiction.

 

12.2 Agentic hardening (mandatory for agentic workflows)
 

  • Scoped, short-lived credentials and secret management (never in prompts).

  • Egress restrictions and endpoint allowlists; block unknown domains and exfil routes.

  • Monitoring and alerting for abnormal tool usage (bulk exports, unusual retrievals, repeated failures).

  • Outbound content filtering/redaction for restricted data.
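The endpoint-allowlist requirement above can be sketched as an application-level check; hostnames here are placeholders, and a real deployment would enforce egress at the network layer (proxy/firewall), not only in code:

```python
from urllib.parse import urlparse

# Illustrative allowlist of approved internal endpoints.
ALLOWED_HOSTS = {"api.internal.example", "vector.internal.example"}

def egress_permitted(url: str) -> bool:
    """Block any agent tool call whose destination host is not allowlisted."""
    return urlparse(url).hostname in ALLOWED_HOSTS
```

Denied destinations should also be logged as potential exfiltration attempts for the monitoring requirement above.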

9. Lawful basis, transparency, and processing role (GDPR & PDPA)
 

  • For each project, document whether STB is acting as Data Controller, Joint Controller, or Data Processor.

  • For client projects in which STB is a processor, the client/controller confirms the lawful basis and notice obligations; STB provides the information needed to explain AI processing.

  • Where required, maintain a Record of Processing Activities (ROPA) for AI-assisted workflows.

 

 

13.3 Model Training Register (required if fine-tuning/training occurs)
 

  • Training purpose, dataset provenance, lawful basis/permissions, and data minimisation steps.

  • Training environment, access controls, evaluation, and rollback plan.

  • Retention/deletion plan for training datasets and artifacts.

 

15. EU AI Act alignment (risk-based controls)

 

STB applies risk tiering and governance controls consistent with the EU AI Act. Agentic workflows involving personal data, profiling, or automated actions receive elevated scrutiny and documentation.

  • Risk screening: determine whether a workflow is high-risk (e.g., profiling, credit/employment/health-related decisions).

  • Transparency: disclose to clients the role of AI tools in deliverables, limitations, and required human review where applicable.

  • Technical documentation and logs: retain sufficient records to demonstrate traceability and accountability.

  • AI literacy: ensure staff are trained to use AI responsibly and securely.

 

16. Malaysia and Singapore alignment (practical governance)
 

16.1 Malaysia (PDPA + national AI governance guidance)
 

  • Apply PDPA principles: consent/notice (where applicable), purpose limitation, security safeguards, and access/correction handling.

  • Follow national AI governance guidance (e.g., fairness, safety, privacy/security, transparency, accountability) for AI deployments.

  • Where PDPA amendments introduce new requirements (e.g., DPO, breach notification, cross-border rules), STB implements these as baseline controls.

 

16.2 Singapore (PDPA + AI governance guidance)
 

  • Apply Singapore PDPA obligations for collection, use, disclosure, protection, retention, and access/correction requests.

  • Adopt responsible AI governance guidance (Model AI Governance Framework, AI Verify) for system documentation, transparency, and testing where feasible.

  • Use sector-specific rules where relevant (finance, healthcare, employment) as higher standards.

 

 

17. Data retention & deletion
 

Default retention (unless a stricter client contract applies):

  • Active project working files (including retained prompts/outputs): up to 24 months after project completion for support and auditability.

  • Client-confidential datasets: only for the project duration unless the client approves longer retention in writing.

  • Transcripts/recordings: up to 12 months unless required longer for engagement support.

  • AI Activity Logs and Agent Run Logs: up to 24 months; redact where feasible.

  • Agent memory/caches: disabled by default; if enabled, retain only for the shortest operational need and no longer than project duration unless explicitly approved.

  • Vector stores/embeddings containing personal data: retain only as long as needed for documented purpose; implement DSAR deletion/rectification procedures.

Deletion requirements:

  • Delete data from STB systems at the end of retention or upon valid request; request deletion from vendors where feasible.

  • Record deletion actions and confirmations in the Retention & Deletion Record.
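The default retention periods above can be expressed as a simple schedule with a due-date check; the category keys and day counts (24 months ≈ 730 days) are illustrative approximations:

```python
from datetime import date, timedelta

# Default retention periods from section 17, in days.
RETENTION_DAYS = {
    "working_files": 730,   # up to 24 months after project completion
    "transcripts": 365,     # up to 12 months
    "activity_logs": 730,   # up to 24 months
}

def deletion_due(category: str, completed: date, today: date) -> bool:
    """True once the default retention window for a category has elapsed."""
    return today > completed + timedelta(days=RETENTION_DAYS[category])
```

A periodic job using such a check could feed the Retention & Deletion Record, flagging items whose window has elapsed for deletion and confirmation.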

 

18. Incident management
 

  • Report suspected data breach, unintended agent action, prompt leakage, or unauthorised access immediately to the Data Protection Lead.

  • Containment: activate kill switch, revoke credentials, disable tool access, isolate workspace, and preserve logs.

  • Assess notification obligations (client/controller + regulators) based on applicable law and contract; document decisions and actions.

 

 

19. Training, audits, and continuous improvement
 

  • Mandatory onboarding and annual refreshers: data minimisation, confidentiality, vendor/tool use, DSAR handling, agentic security, and external-source governance.

  • Quarterly review of Approved AI Vendor List, permissions, and logging coverage for agentic workflows.

  • Annual policy review (or sooner if laws/tools change).

 

20. Enforcement

Non-compliance may result in access removal, corrective training, contractual action, or termination of engagement, depending on severity.
 

Annex A — Minimum templates/registers (recommended)

Project Data Map template

AI Activity Log template

Agent Run Log template

External Source Register template

Model Training Register template

Retention & Deletion Record template

Incident runbook for agentic workflows (kill switch, credential revocation, evidence preservation)
 

Annex B — Reference sources (non-exhaustive)

  • EU: GDPR and the EU AI Act implementation timeline and guidance (European Commission).

  • Malaysia: Personal Data Protection Act 2010 (PDPA) and Malaysia National Guidelines on AI Governance & Ethics / National AI Office guidance.

  • Singapore: Personal Data Protection Act 2012 (PDPA) and AI governance guidance (Model AI Governance Framework, AI Verify).

 

C.3.3 Enhanced-control triggers (apply stricter controls)
 

  • Financial service providers or projects touching customer transaction data, AML/fraud, or credit-related workflows.

  • Agents that can trigger external communications or system writes.

  • High-volume processing of personal data or use of analytics that could be considered profiling.

 

Annex C — Regional Alignment (ASEAN + APAC)

This Annex provides jurisdiction alignment clauses that can be referenced in project documentation. It does not replace legal advice. Where client contracts or sector rules are stricter, the stricter requirement prevails.
 

C.0 ASEAN (regional) — voluntary governance baseline

 

C.0.3 Enhanced-control triggers (apply stricter controls)
 

  • Any use of restricted data (health/financial/legal/credentials/minors) or regulated-client engagements.

  • Any agentic workflow that can take actions (write to systems, send messages, export data).

  • Any cross-border processing where local residency/transfer safeguards are required by the client or law.

 

 

C.1.2 STB required controls (minimum baseline)

 

  • PDPA-aligned purpose limitation and minimisation; keep a project data map for personal data flows.

  • Security safeguards: encryption, RBAC/MFA, segregated client workspaces, and controlled sharing.

  • Vendor governance: approved vendors only; retention/deletion controls; cross-border transfer checks where applicable.

  • DSAR handling: documented intake, verification, access/correction workflow, and deletion verification.

  • Incident readiness: internal escalation and evidence preservation for AI-related data events.

 

 

C.0.1 Applicable instruments (indicative)


STB aligns ASEAN-wide work with ASEAN-level voluntary guidance on AI governance and ethics (including generative AI), and applies the stricter national requirements of each client jurisdiction where applicable.

C.0.2 STB required controls (minimum baseline)
 

  • Risk-tier workflows: classify AI use (low/medium/high impact) and apply proportional controls.

  • Human oversight: required for client-facing outputs and any action-taking agent workflows.

  • Transparency: maintain an AI Activity Log and provide clear client disclosures of AI assistance where required by contract.

  • Security safeguards: encryption, RBAC/MFA, tenancy separation, and incident response readiness.

  • Accountability: maintain documented approvals for higher-risk workflows and vendor onboarding.

 

C.1.3 Enhanced-control triggers (apply stricter controls)
 

  • Banking/fintech, healthcare, legal, or public-sector style engagements.

  • Restricted data categories or client requirements for local containment/private processing.

  • Agentic workflows with external browsing, integrations, or automation actions.

 

C.2 Singapore — AI governance + privacy alignment
 

C.2.1 Applicable instruments (indicative)
 

STB aligns Singapore-related AI and data handling to Singapore PDPA obligations and Singapore’s national AI governance guidance for both traditional AI and generative AI, including assurance/testing approaches when required by clients.

 

 

C.1 Malaysia — AI governance + privacy alignment

C.1.1 Applicable instruments (indicative)
 

STB aligns Malaysia-related AI and data handling with Malaysia’s PDPA requirements and national AI governance guidance and coordination directions.

C.2.2 STB required controls (minimum baseline)

 

  • PDPA-ready data governance: document collection/use/disclosure purpose; maintain a project data map and retention schedule.

  • Generative AI governance mapping: document human oversight points, limitations, and risk controls for outputs.

  • Vendor due diligence: data-use terms (no silent training), subprocessors, deletion, audit logs, and security posture.

  • Assurance evidence (when requested): test cases, failure mode checks (hallucination/leakage), and audit-ready logs.

  • Agentic controls: allowlisted tools/endpoints, least-privilege credentials, kill switch, and Agent Run Logs.

 

C.2.3 Enhanced-control triggers (apply stricter controls)
 

  • Workflows involving AI recommendations/decisions that could materially affect individuals.

  • Regulated-sector clients or engagements requiring formal assurance/testing artifacts.

  • Persistent memory/vector stores containing personal data.

 

 

C.3 Thailand — sector-led AI governance (notably financial services)
 

C.3.1 Applicable instruments (indicative)

STB aligns Thailand-related AI and data handling to Thailand’s PDPA as the privacy baseline and to sector guidance (especially for financial services) where the client is regulated.

C.3.2 STB required controls (minimum baseline)
 

  • PDPA-aligned minimisation, purpose limitation, and secure handling of personal data.

  • For fintech clients: apply strong model/AI risk governance—documentation, monitoring, and oversight.

  • Maintain AI Activity Logs and Agent Run Logs for agentic workflows involving personal/confidential data.

  • Vendor governance: approved tools only; deletion/retention controls; cross-border safeguards as required.

 

C.4.2 STB required controls (minimum baseline)
 

  • Document lawful basis/permission pathway for personal data handling per client/controller responsibilities.

  • Strengthen traceability: AI Activity Logs + Agent Run Logs, including provenance for datasets and external sources.

  • Implement stricter retention limits for client-confidential and personal data; verify and document deletion.

  • Vendor governance: prefer controlled environments; restrict cross-border processing unless safeguarded and approved.

 

C.4 Vietnam — emerging AI law + strong personal data rules
 

C.4.1 Applicable instruments (indicative)
 

STB aligns Vietnam-related AI and data handling with Vietnam’s personal data protection rules and its evolving AI legal framework. When Vietnam AI-specific obligations apply, STB follows a risk-based compliance posture and enhanced documentation.

 

C.4.3 Enhanced-control triggers (apply stricter controls)
 

  • Use cases that may be classified as higher-risk under Vietnam’s AI governance direction.

  • Any processing involving sensitive personal data or large-scale datasets.

  • Any agentic workflow that integrates with internal systems or executes automated actions.

 

C.5 Indonesia — ethics guidance + personal data protection law baseline
 

C.5.1 Applicable instruments (indicative)
 

STB aligns Indonesia-related AI and data handling with Indonesia’s Personal Data Protection Law baseline and national ethics guidance for AI adoption, treating ethics guidance as the minimum standard and applying stricter STB controls for client safety.

 

C.5.3 Enhanced-control triggers (apply stricter controls)
 

  • Regulated/financial sector clients or work involving sensitive transaction data.

  • Any fine-tuning/training on private datasets (requires explicit approval and training register).

  • Autonomous agents with external browsing or broad tool permissions.

 

C.6 Hong Kong — privacy-first workplace GenAI governance
 

C.6.1 Applicable instruments (indicative)

STB aligns Hong Kong-related AI and data handling to Hong Kong personal data protection obligations and to workplace-oriented GenAI guidance, emphasising strict prompt hygiene and ‘do not upload’ rules for sensitive data.

 

C.5.2 STB required controls (minimum baseline)
 

  • Minimise personal data use; keep data mapped to purpose; prevent reuse outside scope.

  • Security safeguards: encryption, RBAC/MFA, tenancy separation, and restricted sharing.

  • Vendor controls: approved vendors only; clear retention/deletion and auditability; cross-border checks.

  • Agentic governance: tool allowlisting, least privilege, kill switch, and Agent Run Logs.

 

C.6.3 Enhanced-control triggers (apply stricter controls)
 

  • Use of unlisted/private content provided by clients (treat as client confidential; prefer private agentic flows).

  • Work involving regulated industries or large datasets of individuals.

  • Any agentic workflow that could export or share content externally.

 

 

C.6.2 STB required controls (minimum baseline)
 

  • Workplace controls: staff guidance on what data must never be entered into external AI tools; mandatory redaction templates.

  • Minimisation: avoid collecting identifiers from comments/community content; aggregate insights.

  • Security safeguards and access controls for any stored transcripts/notes and logs.

  • Vendor governance: retention/deletion controls, audit logs, and explicit data-use restrictions.

 

C.7 South Korea — GenAI privacy lifecycle guidance + AI legal framework readiness
 

C.7.1 Applicable instruments (indicative)

STB aligns South Korea-related AI and data handling to South Korea’s personal information protection requirements and to GenAI privacy lifecycle guidance from the privacy regulator. STB also adopts readiness controls for South Korea’s AI legal framework obligations as they take effect.

 

 

C.7.2 STB required controls (minimum baseline)
 

  • Lifecycle privacy controls: document data collection, training, deployment, monitoring, and deletion pathways.

  • Strict minimisation and PII scrubbing before model exposure; keep token maps separately if used.

  • Agentic governance: tool allowlists, least privilege, kill switch, and Agent Run Logs.

  • Vendor controls: audit logs, retention/deletion, and security posture verification.

 

 

C.7.3 Enhanced-control triggers (apply stricter controls)
 

  • Any processing that could be considered profiling or that results in high-impact recommendations/decisions.

  • Any fine-tuning/training on private datasets or use of persistent memory/vector stores containing personal data.

  • Any integration with enterprise systems where actions could alter records or trigger communications.

 

 

C.9 Japan — APPI alignment + business GenAI governance expectations
 

C.9.1 Applicable instruments (indicative)

STB aligns Japan-related AI and data handling with Japan’s APPI requirements and business-oriented AI governance guidance, and applies public-sector governance/assurance where projects require higher assurance or procurement-like controls.

 

C.8 Taiwan (ROC) — AI governance + PDPA alignment
 

C.8.1 Applicable instruments (indicative)
 

STB aligns Taiwan-related AI and data handling with Taiwan’s Personal Data Protection Act (PDPA) and the government’s AI governance direction, including safe-adoption approaches for controlled environments when clients request local containment.

 

 

C.9.2 STB required controls (minimum baseline)
 

  • APPI-aligned governance: document purpose, limit disclosures, secure handling, retention, and deletion verification for personal data.

  • Governance & human oversight: accountable roles, review checkpoints, and documented limitations for outputs.

  • Vendor due diligence: no silent training on client data; subprocessors transparency; deletion/retention; audit logs.

  • Agentic controls: allowlisted tools/endpoints, least-privilege credentials, environment isolation, kill switch, and Agent Run Logs.

 

C.8.2 STB required controls (minimum baseline)

  • Purpose & minimisation: use de-identified or redacted data where feasible; limit full-document ingestion.

  • Access controls: strict tenancy separation, RBAC/MFA, and time-bound access for shared resources.

  • Agentic traceability: Agent Run Logs for any agentic workflow that touches personal or client-confidential data.

  • Vendor and cross-border governance: approved vendors only; record safeguards and client approvals for cross-border processing.

 

 

C.8.3 Enhanced-control triggers (apply stricter controls)
 

  • Sensitive/restricted data categories or client requirement for ‘sovereign’/local processing.

  • Tool-using agents that can write to systems, export data, or contact third parties.

  • Any model fine-tuning/training using private data (requires explicit approval and training register entry).

 

C.9.3 Enhanced-control triggers (apply stricter controls)
 

  • Regulated-sector clients (finance/healthcare/telecom) or government-adjacent engagements.

  • Any action-taking agent workflow (system writes, exports, outbound communications).

  • Any use of sensitive/restricted data or persistent memory/vector stores containing personal data.