Operata is building the world’s first CX Observability platform for Cloud Contact Centers. Our platform is designed to measure, optimise and maintain voice performance to deliver the best customer and agent experience.
Headquartered in Melbourne, Australia, Operata is an AWS Select Technology Partner and our customers include leading insurers, telcos, banks and managed service providers.
About the role
We are seeking a highly skilled and hands-on Senior Data Engineer to take ownership of the data engineering practice. This is a practical, delivery-focused role—we are looking for someone who is deeply involved in building, optimising, and maintaining our data pipelines and infrastructure.
The ideal candidate has a software or product engineering background with hands-on experience designing, building, and operating ELT/ETL pipelines and data lakehouse solutions in production environments. You will play a critical role in shaping the way data is collected, processed, and delivered across our CX observability platform, working directly with technologies such as Databricks, Kinesis, Apache Flink, and AWS services.
You will work closely with product managers, software engineers, and business stakeholders to ensure our data systems are robust, scalable, and aligned with customer needs. This is an opportunity to have a direct impact, contributing through hands-on coding, system design, and the delivery of production-grade solutions.
How you will make an impact
Design, build, and optimise ELT/ETL pipelines to support the needs of a real-time CX Observability platform, ensuring the efficient, cost-effective, reliable, and scalable flow of data.
Develop and manage production-grade data pipelines using Databricks, Apache Flink, AWS, and OpenTelemetry for both batch and real-time processing.
Integrate and maintain cloud-based storage and infrastructure services such as AWS S3, RDS, ClickHouse, and EKS to support scalable, performant data access across the platform.
Work collaboratively with software engineers to integrate data collection systems, develop data models, and build seamless data flows across the product stack.
Design, implement, and manage modern data warehousing solutions to enable analytics, BI, and data visualisation capabilities.
Develop and uphold data governance practices to ensure data quality, security, cost-efficiency, and regulatory compliance.
Proactively monitor, support, and troubleshoot data pipelines and infrastructure to maintain high performance and operational reliability.
Continuously explore emerging technologies, data engineering practices, and industry trends to strengthen and evolve our data engineering capabilities.
Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Extensive, hands-on experience designing, building, and operating ELT/ETL pipelines and data lakehouse solutions in production environments.
Proven, practical experience with Databricks and supporting technologies, with the ability to deliver scalable, high-performance data solutions.
Strong working knowledge of real-time data processing platforms such as Apache Flink, Spark, or similar technologies.
Proficiency in SQL and experience working with relational, columnar, and search-oriented data stores such as PostgreSQL, Redshift, OpenSearch, or ClickHouse.
Strong programming skills in Python, Java, Scala, or similar languages. Golang experience is a plus.
Proficient with CI/CD tooling (e.g., GitHub Actions, Buildkite Pipelines, or AWS Workflows) and modern software engineering practices.
Hands-on experience with AWS services, particularly S3, RDS, EKS, and associated data technologies.
Experience with Infrastructure as Code and cloud-native technologies, including Kubernetes and Terraform.
Familiarity with building data models and schemas to support business intelligence and data visualisation tools such as Superset or Tableau.
What you’ll bring to the role
You promote openness, diversity of opinions, and inclusive discussion at all times to evaluate a wide variety of ideas and perspectives in solving challenging problems.
You demonstrate clear decision-making and sound trade-offs in complex situations involving multiple opinions, needs, teams, technologies, cloud providers, and architectural settings.
You communicate effectively with stakeholders ranging from founders and executives to junior engineers across the breadth and depth of the business.
You exemplify high accountability, integrity, and resilience, maintaining focus on both big-picture goals and the milestones to get there.
You enable the engineering organization to innovate and deliver with greater speed and safety.
Additional Information
At Operata, we embrace inclusion and diversity. We believe in work/life balance and bringing our true selves to work. To that end, we offer best-in-class flexibility that supports our Operatas along their career journey with us. For us, better connection is something we live, with our people, customers, partners and our team. Read more about our culture and values at operata.com/story
Operata is an equal opportunity employer and makes employment decisions on the basis of merit. Operata prohibits discrimination based on race, color, religion, sex, sexual identity, gender identity, marital status, veteran status, nationality, citizenship, age, disability, medical condition, pregnancy, or any other unlawful consideration.