Operata Pty Ltd Under Agreement: Operata End User License Agreement and Terms of Service v2.6 between Operata Pty Ltd ("Provider") and ("Customer")
Effective Date: 1st May 2026
The following terms ("AI Terms") are hereby added to and become part of the Agreement as Additional Terms. Capitalized terms not defined in these AI Terms have the meanings given in the Agreement. The Agreement applies to the AI Features as part of the Cloud Service with the following modifications.
1. Use of AI Features
Customer may submit Customer Data (including in the form of prompts or queries) to the AI Features ("Inputs") and receive outputs from the AI Features ("Outputs").
"AI Features" means the artificial intelligence and machine learning capabilities integrated into the Operata CX observability platform, branded collectively as Tenor AI™, including but not limited to:
CX Copilot - a natural language conversational interface that allows users to query CX observability data, generate on-the-fly visualisations, receive AI-generated summaries, drill into root causes, and receive recommended next steps. CX Copilot is powered by an array of purpose-built AI Agents (Data Query, Knowledge, Summarisation, Charting & Visualisation, Next Steps, Routing) that utilise Anthropic's Claude family of large language models via Amazon Bedrock.
Agent Copilot - Operata's real-time Agent guidance tool that sits inside the agent's browser tab and proactively surfaces alerts and self-fix recommendations.
CX Insights Graph® - CX-specific, actionable, real-time insights generated through AI-powered analysis of observability data, including anomaly detection and CX Risk Scores.
Customer Journey Trace® - end-to-end trace visualisation across CX services with AI-generated summaries powered by CX Copilot.
Operata MCP Server® - a Model Context Protocol (MCP) server that enables Customer's own AI agents and tools (e.g. Claude, Cursor, OpenAI Codex) to query Operata observability data through a standardised interface, subject to Operata's permissions and governance rules. Available for Enterprise+ Plan customers.
Proprietary ML Models - Operata-built machine learning models running within Operata's own AWS infrastructure, used for anomaly detection, CX Risk Scores, and pattern analysis. These models do not send Customer Data to any third-party provider.
Other AI-enabled features - additional artificial intelligence and machine learning capabilities that Provider may introduce, enable, rename or rebrand from time to time as part of the Cloud Service. Such capabilities are subject to these AI Terms upon release. Where the introduction of a new capability constitutes a material change under 12.2, Provider will notify Customer in accordance with 12.3; otherwise, new capabilities will be documented in standard product release notes.
For context, the data processed by the AI Features consists of application and network event logs, WebRTC data, agent system telemetry (memory usage, CPU utilisation), network quality metrics (jitter, packet loss, latency/RTT, MOS), and CCaaS platform logs. Operata does not record customer or agent audio, nor does it capture Customer Application data. Elements of data considered Personally Identifiable Information (for example, Agent Login Name, Agent IP address, Customer Calling number) can be blocked from transmission to Operata by the Customer from their CCaaS environment.
2. Training on Inputs/Outputs prohibited Provider may not use Inputs or Outputs to train or otherwise improve AI Features.
3. Intellectual Property A. Inputs - Customer owns Inputs as Customer Data. Inputs are deemed to be Customer Data, subject to these AI Terms.
B. Outputs - Customer granted right to use Outputs. Customer is authorized to use Outputs subject to the Agreement, including the AUP and these AI Terms.
4. Similar Outputs Customer acknowledges that Outputs provided to Customer may be similar or identical to Outputs independently provided by Provider to others.
5. Infringement by Outputs - Provider disclaims infringement liability for Outputs
Due to the nature of the AI Features, Provider does not represent or warrant that (a) any Output does not incorporate or reflect third-party content or materials or (b) any Output will not infringe third-party intellectual property rights. Claims of intellectual property infringement or misappropriation by Outputs are not included in Provider-Covered Claims.
6. Disclaimer Outputs are generated through machine learning processes and are not tested, verified, endorsed or guaranteed to be accurate, complete or current by Provider. Customer should independently review and verify all Outputs as to appropriateness for any or all Customer use cases or applications. The warranty disclaimers and limitations of liability in the Agreement for the Cloud Service apply to the AI Features.
7. Third-Party Providers
- Provider has specified in Exhibit A any third parties that provide the AI Features.
- Customer agrees to abide by any third-party terms and conditions relating to the AI Features specified in Exhibit A ("Third-Party Terms").
This section does not modify the subcontractor provisions of the Agreement.
8. Special Restrictions on Use of AI Features
The following restrictions apply to Customer's use of the AI Features. Without limiting any restrictions on use of the Cloud Service in the Agreement, Customer will not and will not permit anyone else to:
(a) use the AI Features or any Output to infringe any third-party rights;
(b) use the AI Features or any Output to develop, train or improve any AI or ML models (separate from authorized use of the Cloud Service under this Agreement);
(c) represent any Output as being approved or vetted by Provider;
(d) represent any Output as being an original work or a wholly human-generated work;
(e) use the AI Features for automated decision-making that has legal or similarly significant effects on individuals, unless Customer does so with adequate human review and in compliance with Laws; or
(f) use the AI Features for purposes or with effects that are discriminatory, harassing, harmful or unethical.
9. Data Processing & Security
9.1 Platform Overview The Operata platform is built on and deployed to Amazon Web Services (AWS) cloud infrastructure, hosted in ap-southeast-2 (Asia Pacific - Sydney) and us-east-2 (US East - Ohio). The platform inherits the hardware, software and operational controls provided by AWS under the AWS Shared Responsibility Model.
The Operata platform is a monitoring and observability platform that captures performance and event data from the Customer's CCaaS environment. Operata:
Does not record customer or agent audio, nor require access to audio recordings
Does not capture Customer Application data
Captures logs, events and metadata only
Is a read-only monitoring platform that has no impact on the operation of the Customer's contact centre
Customer Data (application/network event logs, application and network metrics, logs and traces, agent telemetry, CCaaS platform logs) is processed within the Operata platform hosted in ap-southeast-2 (Asia Pacific - Sydney) and us-east-2 (US East - Ohio). Where AI Features require LLM inference (CX Copilot, AI Summaries), data is transmitted to sub-processor/AI vendor models via Amazon Bedrock. Bedrock instances are deployed in ap-southeast-2 (Asia Pacific - Sydney) and us-east-2 (US East - Ohio). Each request is sent individually over a TLS-encrypted connection. Amazon Bedrock does not store or use prompts or completions for model training or service improvement.
The Knowledge Agent within CX Copilot uses the OpenAI Embeddings API for vector similarity search (RAG pattern). Only Operata domain knowledge text and decomposed topic queries are sent to OpenAI - no Customer Data is processed through this path. OpenAI does not use API inputs for model training (per OpenAI API Data Usage Policy).
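Purely as an illustrative sketch (not a description of Operata's implementation), the vector similarity search step of a RAG pattern reduces to embedding a query and ranking knowledge-base passages by cosine similarity against their stored embeddings. The vectors below are hard-coded 3-dimensional stand-ins for what an embeddings API would return:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float], passages: list[tuple[str, list[float]]], k: int = 1) -> list[str]:
    """Rank (text, vector) passages by similarity to the query vector."""
    ranked = sorted(passages, key=lambda p: cosine_similarity(query_vec, p[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Hypothetical embeddings; real embeddings APIs return vectors with ~1,500+ dimensions.
kb = [
    ("What is MOS?", [0.9, 0.1, 0.0]),
    ("How are CX Risk Scores computed?", [0.1, 0.9, 0.2]),
]
print(top_k([0.88, 0.15, 0.05], kb))  # the MOS passage ranks first
```

Note that in the flow described above, only the domain-knowledge passages and decomposed queries would be embedded; Customer Data never enters this path.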
Operata's proprietary ML models for anomaly detection, CX Risk Scores and pattern analysis run entirely within Operata's own AWS infrastructure in ap-southeast-2. No Customer Data is transmitted to third-party providers for these features.
AI-generated Outputs (including CX Copilot responses, AI summaries and visualisations) and the associated Inputs (prompts and conversation turns) are persisted within the Customer's isolated tenant database in Amazon RDS for PostgreSQL. Conversation turns are retained to provide context when queries are handed off between AI Agents within a session. AI-generated Outputs are retained for 30 days, after which they are automatically purged (see §9.5).
9.2 Data Flow for AI Features When Customer uses the AI Features, the data processing described in Section 9.1 occurs: Customer Data is processed within the Operata platform; LLM inference requests for CX Copilot and AI Summaries are sent to Amazon Bedrock; Operata domain knowledge text (not Customer Data) is embedded via the OpenAI Embeddings API; and proprietary ML models run within Operata's own AWS infrastructure.
9.3 Data Residency The Operata platform is hosted in ap-southeast-2 (Asia Pacific - Sydney) and us-east-2 (US East - Ohio).
Where AI Features require LLM inference via Amazon Bedrock, Customer Data may be processed in:
ap-southeast-2 (Asia Pacific - Sydney)
us-east-2 (US East - Ohio)
Such transfers are governed by AWS's data processing and transfer mechanisms and are subject to the data transfer provisions set out in the Data Processing Addendum to the Agreement. Amazon Bedrock does not store or retain Customer Data beyond the duration required to complete each individual inference request.
9.4 Encryption
In transit: All communication to and from Operata, including to AI sub-processors, is encrypted using HTTPS and Transport Layer Security (TLS) in accordance with industry best practice.
At rest: Operata encrypts data at rest using the Advanced Encryption Standard with 256-bit keys (AES-256), with keys managed by AWS Key Management Service (AWS KMS).
9.5 Data Retention for AI Features AI-generated Outputs (including CX Copilot responses, AI summaries and visualisations) are persisted within the Customer's isolated tenant database for a period of 30 days, after which they are automatically purged. This is in addition to the retention policies applicable to the underlying observability data as described in the Agreement.
9.6 Multi-Tenancy & Isolation Each Customer CCaaS instance onboarded with Operata is placed into an independent Group. Each Group maintains an independent database, which is the sole repository for all data collected from the associated CCaaS instance. AI Features operate within these same tenant isolation boundaries - data from one Customer Group is never used to generate Outputs for another Customer Group.
9.7 Security Controls The AI Features inherit the security controls applicable to the Operata Cloud Service, including:
Encryption in transit (TLS) and at rest (AES-256 via AWS KMS)
Physical security inherited from AWS (certified to SSAE16 SOC 1, 2 and 3, ISO 27001 and FedRAMP/FISMA)
Authentication managed via Auth0 (SOC 2 and ISO 27001/27018 compliant)
Role-based access controls (Group Admin / Group User)
Production access restricted on need-to-know basis with two-factor authentication
Weekly vulnerability scanning and periodic independent third-party penetration testing
Secure SDLC with automated test suites, peer reviews, and automated deployments
Environment separation - no customer data used in development or test environments
Background police checks on all employees and contractors; NDA/confidentiality agreements in place
9.8 Customer Controls
AI Features are enabled as part of the Operata platform across Core, Enterprise and Enterprise+ Editions and cannot currently be disabled independently of the platform.
Elements considered PII can be blocked from transmission to Operata by the Customer from their CCaaS environment, which also excludes this data from AI processing.
The Operata MCP Server® is available only to Enterprise+ Plan customers and requires explicit configuration by the Customer to connect external AI agents.
AI Features respect all existing tenant isolation and role-based access controls - Users have read-only access; Administrators can manage configuration and system parameters.
10. Compliance Provider maintains the following compliance certifications and commitments applicable to the Cloud Service, including the AI Features:
SOC 2 Type II - Provider has completed an independent SOC 2 Type II audit covering the Operata platform. The current audit scope does not explicitly cover the AI Features; Provider intends to include AI Features in future audit cycles.
GDPR - Provider processes Customer Data in accordance with applicable EU/UK data protection laws and the Data Processing Addendum to the Agreement.
CCPA - Provider complies with the California Consumer Privacy Act where applicable.
EU AI Act - Provider has assessed its AI Features against the risk classification framework established by the EU Artificial Intelligence Act (Regulation (EU) 2024/1689). Provider's AI Features are classified as minimal risk / limited risk based on the following assessment: AI Features are used for CX observability analytics and operational decision support - they do not perform biometric identification, social scoring, emotional inference, autonomous decision-making affecting individuals, or any use case classified as "high risk" or "unacceptable risk" under the Act. All AI-generated outputs are advisory in nature and are designed to augment human decision-making by qualified CX operations personnel. Provider meets the transparency obligations applicable to limited-risk AI systems: AI-generated content is clearly identified within the platform, and the use of AI is disclosed in product documentation and these AI Terms. Provider does not deploy AI Features for any purpose listed under Article 5 (Prohibited AI Practices) of the Act. Provider monitors the implementation timeline and delegated acts under the EU AI Act and will update this assessment as guidance evolves.
Provider will process Customer Data in connection with the AI Features in accordance with the Data Processing Addendum to the Agreement and applicable data protection laws.
11. Responsible AI Principles Provider is committed to the responsible development and deployment of AI Features in accordance with the following principles:
Transparency - Provider will maintain clear documentation of which product features use AI and how Customer Data is processed.
Human Oversight - AI Features are designed to augment human decision-making, not replace it. Outputs are recommendations and insights intended for review by qualified personnel.
Fairness - Provider will use reasonable efforts to identify and mitigate bias in AI Features.
Privacy - Provider processes only the minimum Customer Data necessary to deliver the AI Features. PII can be excluded from collection at the Customer's discretion.
Continuous Improvement - Provider will monitor AI Features for quality and accuracy and will notify Customer of material changes to AI Features or the Third-Party Providers used to deliver them.
12. Model Versioning & Change Notification
12.1 Model Version Control Provider pins LLM-based AI Features to specific Anthropic Claude model versions available through Amazon Bedrock. Prompt template versions and OpenAI Embeddings API versions are tracked in source control. Proprietary ML models are versioned using semantic versioning (MAJOR.MINOR.PATCH) and logged alongside inference results for auditability.
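As a minimal sketch of the semantic-versioning scheme described above (illustrative only; the model name, version numbers, and record fields are invented, not Operata's actual schema), a MAJOR.MINOR.PATCH version can be parsed into comparable components and logged alongside each inference result:

```python
import json
from typing import NamedTuple

class ModelVersion(NamedTuple):
    major: int
    minor: int
    patch: int

    @classmethod
    def parse(cls, version: str) -> "ModelVersion":
        major, minor, patch = (int(part) for part in version.split("."))
        return cls(major, minor, patch)

def log_inference(model_name: str, version: str, score: float) -> str:
    """Attach the model version to an inference result for auditability."""
    record = {
        "model": model_name,
        "version": ModelVersion.parse(version)._asdict(),
        "cx_risk_score": score,
    }
    return json.dumps(record, sort_keys=True)

# Tuples compare element-wise, so MAJOR takes precedence over MINOR and PATCH.
assert ModelVersion.parse("2.0.0") > ModelVersion.parse("1.9.9")
print(log_inference("anomaly-detector", "1.4.2", 0.87))
```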
12.2 Definition of Material Change A material change to AI Features includes: change of foundation model family or major version (e.g., Claude 3.5 → Claude 4); addition or removal of a third-party AI provider; change in data processing region for AI inference; significant change to proprietary ML model architecture or scoring methodology; or change to data types processed by AI Features.
The following are not material changes: minor model version updates within the same family (e.g., patch releases); prompt template optimisations that do not change feature scope; knowledge base content updates; or bug fixes and performance improvements.
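The classification rules in 12.2 can be read as a simple predicate over change types. The sketch below encodes them for illustration only; the change-type labels are invented and this code is not part of the Agreement:

```python
# Change types that 12.2 defines as material.
MATERIAL = {
    "model_family_or_major_version",          # e.g. Claude 3.5 -> Claude 4
    "third_party_provider_added_or_removed",
    "inference_region_change",
    "ml_architecture_or_scoring_change",
    "data_types_processed_change",
}

# Change types that 12.2 expressly excludes.
NOT_MATERIAL = {
    "minor_model_version_update",             # patch releases within the same family
    "prompt_template_optimisation",
    "knowledge_base_content_update",
    "bug_fix_or_performance",
}

def is_material_change(change_type: str) -> bool:
    """Return True when 12.2 classifies the change as material (triggering 12.3 notice)."""
    if change_type in MATERIAL:
        return True
    if change_type in NOT_MATERIAL:
        return False
    raise ValueError(f"unclassified change type: {change_type}")

assert is_material_change("inference_region_change")
assert not is_material_change("prompt_template_optimisation")
```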
12.3 Change Notification Provider will notify Customer of material changes (as defined in 12.2) to the AI Features or to the Third-Party Providers used to deliver them.
Exhibit A
Third-Party AI Providers
The following third parties provide the AI Features:
- Amazon Web Services (Amazon Bedrock) - managed hosting and inference for Anthropic's Claude family of large language models (CX Copilot, AI Summaries), deployed in ap-southeast-2 (Asia Pacific - Sydney) and us-east-2 (US East - Ohio).
- OpenAI (Embeddings API) - vector similarity search for the Knowledge Agent within CX Copilot; only Operata domain knowledge text and decomposed topic queries are sent, and no Customer Data is processed through this path.
Additionally, Operata operates proprietary ML models for anomaly detection, CX Risk Scores and pattern analysis within its own AWS infrastructure (ap-southeast-2). These models do not transmit Customer Data to any third-party provider and are not listed as sub-processors.