Senior DataOps Engineer - #1691950

Glencore


Date: 4 hours ago
City: London
Contract type: Full time
Work schedule: Full day
We are seeking a Senior DataOps Engineer to design and implement reusable data engineering frameworks that accelerate end-to-end data product development and deployment across projects, with a focus on automation and efficiency. The role drives innovation by integrating AI into development workflows and building robust testing and observability frameworks.

In detail, the position encompasses the following duties and responsibilities:

  • Enhance Glencore’s reusable data engineering frameworks to boost the productivity of data engineers in data product delivery teams, ensuring reusability, scalability, and efficiency
  • Build and maintain a data observability SDK or API to track metadata, enabling reusable data health dashboards that give real-time insight into data pipelines, quality, and performance metrics (a minimal illustrative sketch follows this list)
  • Build, extend, and maintain data ingestion frameworks that support standardised external data ingestion patterns at scale
  • Implement GenAI-enabled data engineering productivity tooling, e.g., AI for code review and code documentation
  • Ensure a standardised and efficient data engineering approach is followed across data product delivery teams
  • Develop and maintain reusable testing frameworks that enable effective, config-driven, and automated unit, integration, and user acceptance testing (UAT), promoting test-driven development practices; these frameworks should improve the mean time to detect and resolve data product issues (see the config-driven sketch after this list)
  • Conduct thorough code reviews to uphold high standards, consistency, and adherence to best practices across all data product teams
  • Implement governance-by-design principles, embedding metadata management, data quality (DQ), and lineage tracking into frameworks and processes
  • Guide data engineers on framework usage, observability tools, and governance practices, promoting technical excellence and team collaboration
  • Collaborate with cross-functional teams to pinpoint inefficiencies in data workflows and propose innovative solutions that speed up delivery timelines
  • Stay up to date on emerging trends in DataOps, observability, and AI technologies to continually enhance engineering practices and tools
  • Collaborate with data product engineering teams to understand the challenges behind long development cycle times and build reusable solutions using a combination of open-source and enterprise tooling as needed
  • Document frameworks, APIs, and governance processes comprehensively to ensure maintainability and seamless adoption across teams
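
The data observability SDK referenced above is internal to Glencore. Purely as an illustration of the emit-and-track pattern such an SDK typically exposes, here is a minimal Python sketch; all names (PipelineRun, emit_metric) are invented for this example and do not refer to an existing library:

```python
# Hypothetical sketch of a data observability SDK surface (illustrative only;
# class and method names are invented, not an existing Glencore or OSS API).
import json
import time
from dataclasses import dataclass, field


@dataclass
class PipelineRun:
    """Collects health metadata for one pipeline run and emits it as JSON."""
    pipeline: str
    started_at: float = field(default_factory=time.time)
    metrics: dict = field(default_factory=dict)

    def emit_metric(self, name: str, value) -> None:
        # e.g. row counts, null rates, schema hash, freshness lag
        self.metrics[name] = value

    def close(self, status: str = "success") -> dict:
        record = {
            "pipeline": self.pipeline,
            "status": status,
            "duration_s": round(time.time() - self.started_at, 2),
            "metrics": self.metrics,
        }
        # In practice this record would be shipped to a metadata store backing
        # the reusable data health dashboards; here we just print it.
        print(json.dumps(record))
        return record


run = PipelineRun("trades_daily")
run.emit_metric("rows_ingested", 1_204_311)
run.emit_metric("null_rate_trade_id", 0.0)
run.close()
```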
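
"Config-driven" testing, as mentioned in the testing-frameworks duty above, usually means declaring expectations as data and applying them with a generic runner. A minimal sketch, assuming a plain dictionary stands in for a YAML config (the rule vocabulary is invented, not a specific framework's schema):

```python
# Minimal sketch of a config-driven data test runner (illustrative only;
# the rule names "not_null" and "positive" are invented for this example).
rows = [
    {"trade_id": "T1", "qty": 100},
    {"trade_id": "T2", "qty": 250},
]

# In practice this config would live in YAML alongside the data product.
config = {
    "not_null": ["trade_id"],
    "positive": ["qty"],
}


def run_checks(rows, config):
    failures = []
    for col in config.get("not_null", []):
        if any(r[col] is None for r in rows):
            failures.append(f"{col}: null values found")
    for col in config.get("positive", []):
        if any(r[col] <= 0 for r in rows):
            failures.append(f"{col}: non-positive values found")
    return failures


assert run_checks(rows, config) == []
```

Keeping the rules declarative lets delivery teams add checks without touching the runner, which is what makes such a framework reusable across data products.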

The ideal candidate brings:

  • A builder mindset: able to create modular, reusable frameworks rather than one-off solutions
  • An innovative approach to designing cutting-edge, reusable data engineering frameworks that streamline data workflows and enhance scalability
  • A forward-thinking perspective on data observability, with a proven track record of implementing frameworks that provide real-time visibility into data health and performance
  • A DevOps-first approach, automating deployment, monitoring, and testing for data engineering pipelines
  • A track record of measuring impact on productivity, quality, and performance using DataOps metrics (e.g., lead time for data product delivery, DQ score, observability coverage)
  • Strategic thinking to align DataOps initiatives with the wider data strategy, anticipating future needs and designing solutions that are both flexible and future-proof
  • Exceptional communication skills to work effectively with cross-functional teams
  • Strong problem-solving and analytical skills
  • A passion for mentoring and driving best practices in DataOps, MLOps, and software engineering
  • The ability to bridge the gap between software engineering and data engineering to deliver resilient, maintainable systems

Skills:

  • Strong experience with Azure cloud infrastructure, especially Azure Databricks, AKS, and ADLS
  • Expertise in designing reusable, scalable data engineering frameworks in PySpark/Python
  • Proficiency in Airflow, Dagster, or Databricks Jobs for workflow orchestration, and GitHub Actions for CI/CD (a minimal Airflow sketch appears after this list)
  • Experience with shift-left testing, developing unit, integration, and end-to-end tests, and using Great Expectations or an equivalent for data quality validation (a short validation sketch also appears after this list)
  • Ability to implement a data observability Python SDK by leveraging open-source solutions
  • Familiarity with efficient approaches to integrating AI/LLMs into data engineering development workflows to improve productivity
  • Proficiency in Python, PySpark, and SQL, with strong knowledge of clean coding principles
  • Strong understanding of data governance best practices, with a focus on governance by design (DataGovOps)
  • Experience with cost-efficient, scalable data architectures and query performance tuning
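
For the orchestration skill above, a minimal Airflow DAG chaining an ingestion step and a validation step might look like the following sketch. It assumes Airflow 2.4+ (where the `schedule` argument replaced `schedule_interval`); the DAG id, schedule, and task bodies are placeholders:

```python
# Minimal Airflow 2.x DAG sketch (ids, schedule, and task bodies are placeholders).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    print("pull external data into the ADLS landing zone")


def validate():
    print("run config-driven data quality checks")


with DAG(
    dag_id="external_ingestion_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)

    # Validation runs only after ingestion succeeds.
    ingest_task >> validate_task
```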
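
And for the data quality skill, a short validation sketch using the classic (pre-1.0) Great Expectations API; the fluent API in GX 1.x differs substantially, so treat this as illustrative, with placeholder column names:

```python
# Data quality validation sketch with the classic Great Expectations API
# (pre-1.0; newer GX releases replaced this with a fluent API).
import pandas as pd
import great_expectations as ge

df = pd.DataFrame({"trade_id": ["T1", "T2"], "qty": [100, 250]})
ge_df = ge.from_pandas(df)

ge_df.expect_column_values_to_not_be_null("trade_id")
ge_df.expect_column_values_to_be_between("qty", min_value=1)

result = ge_df.validate()
assert result.success
```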
