
Senior Data Engineer

BeZero Carbon


Posted over 30 days ago...

Join the Climate Revolution as a Senior Data Engineer at BeZero Carbon and Shape the Future of Carbon Markets

Overview

Salary: No salary declared 😔
Location: London or UK based
Nomad Friendly? Yes
Expires: at any time

Organisation summary: BeZero Carbon is a pioneering ratings agency transforming the Voluntary Carbon Market with its innovative SaaS product. With a diverse team of over 170 professionals across four continents, the company blends climate science, financial research, technology, and policy expertise to drive the Net-Zero transition. Fresh off a substantial Series B funding round, BeZero Carbon is in a phase of exciting growth, offering a dynamic environment for talented individuals passionate about making a real-world impact.

  • Role Summary:
    • Develop carbon offset-related data products for clients.
    • Build internal data tools to enhance the efficiency of Ratings teams.
    • Collaborate with product, ratings, and software engineering teams.
    • Create back-end API services and manage data workflows.
    • Implement AI tools for knowledge management.
    • Standardize data models for internal and client platforms.
    • Aggregate data using web crawlers for a comprehensive data warehouse.
  • Role Requirements:
    • Passion for climate change solutions and carbon markets.
    • Minimum 5 years of experience with ELT/ETL pipelines, Python, and SQL.
    • Proficiency in workflow orchestration tools, Docker, and AWS.
    • Experience in writing scalable code and using CI/CD practices.
    • Capability to design and deploy back-end services and APIs.
    • Knowledge of cloud resource deployment and maintenance.
    • Ambition for technical leadership and management roles.

Full time, London or UK-based.

About us

BeZero Carbon is a global ratings agency for the Voluntary Carbon Market. We distribute our ratings via our SaaS Product, BeZero Carbon Markets, informing all market participants on how to price and manage risk. Our ratings and research tools support buyers, intermediaries, investors and carbon project developers.

Founded in April 2020, our 170+ strong team combines climate and earth sciences, sell-side financial research, earth observation, machine learning, data and technology, engineering, and public policy expertise. We work from four continents. Having raised a significant Series B funding round in late 2022, we are rapidly growing as a company, accelerating the Net-Zero transition through ratings.


Job Description

BeZero is looking for a senior data engineer to join our existing data products and tooling team, which sits within the broader data organisation. The team is focussed on developing carbon offset-related data products for our clients, as well as building internal data tools to increase the efficiency of our Ratings teams.

You’ll be responsible for building data products and tools that directly affect the way our ratings teams analyse carbon offset projects. This is a cross-functional role: you will work with colleagues from our product, ratings, and software engineering teams every day.

To give you a flavour of the kind of work this team does, here are some of the projects members of our team have been working on recently:

  • Developing robust back-end API services that power our in-house central data portal, enabling ratings analysts to access prepared and curated data essential for evaluating carbon offset projects.
  • Introducing an in-house knowledge management tool, with generative AI capabilities, to help ratings analysts navigate the large volume of unstructured document data that exists in the carbon market.
  • Designing cross-team data flows and service architectures to deliver data consistently to our client-facing platform.
  • Deploying a system of web crawlers to aggregate carbon project-related data into our data warehouse, alongside developing a standardised data model so the data can be used internally and displayed on our client-facing platform (see the sketch after this list for a flavour of this pattern).
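To illustrate the crawling-and-standardisation pattern in the last item, here is a minimal Python sketch. The registry URL, table layout, and field names are hypothetical, invented for this example; they are not BeZero's actual sources or schema.

    import requests
    from bs4 import BeautifulSoup

    REGISTRY_URL = "https://registry.example.org/projects"  # hypothetical source

    def crawl_projects(url: str) -> list[dict]:
        # Fetch a registry listing page and normalise each row into a
        # standardised project record (field names are illustrative).
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        projects = []
        for row in soup.select("table.projects tr")[1:]:  # skip the header row
            cells = [td.get_text(strip=True) for td in row.select("td")]
            projects.append({
                "registry_id": cells[0],
                "name": cells[1],
                "country": cells[2],
            })
        return projects

    if __name__ == "__main__":
        # In production the records would land in a warehouse staging
        # table rather than on stdout.
        for project in crawl_projects(REGISTRY_URL):
            print(project)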

If you’re excited by working on such problems and making impactful contributions to data in the climate space, then we’re looking for you.


Tech stack

As a data team, we have a bias towards shipping products, staying close to our internal and external customers, and owning our infrastructure and deployments end to end. This is a team that follows software engineering best practices closely. Our data stack includes the following technologies:

  • AWS serves as our cloud infrastructure provider.
  • Snowflake acts as our central data warehouse for tabular data; AWS S3 holds our geospatial raster data, and we use PostGIS for storing and querying geospatial vector data.
  • We use dbt for building SQL-style data models and Python jobs for non-SQL data transformations.
  • Our computational jobs are executed in Docker containers on AWS ECS, and we use Prefect as our workflow orchestration engine (see the sketch after this list).
  • GitHub Actions handles CI/CD.
  • Metabase serves as our dashboarding solution for end users.
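As a taste of how these pieces fit together, here is a minimal sketch of a Prefect flow that runs an ingestion task and then rebuilds dbt models. It assumes Prefect 2.x and the dbt CLI on the path; the flow name, task bodies, and the "staging+" selector are illustrative, not our actual pipelines.

    import subprocess

    from prefect import flow, task

    @task(retries=2, retry_delay_seconds=60)
    def ingest_registry_data() -> None:
        # Placeholder for a real extraction step: in practice this would
        # pull raw carbon-project records into a warehouse staging table.
        print("ingesting registry data...")

    @task
    def run_dbt_models() -> None:
        # Rebuild the SQL-style data models once fresh data has landed.
        subprocess.run(["dbt", "run", "--select", "staging+"], check=True)

    @flow(name="carbon-registry-refresh")
    def refresh_pipeline() -> None:
        ingest_registry_data()
        run_dbt_models()

    if __name__ == "__main__":
        refresh_pipeline()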

We are a remote-friendly company and many of our colleagues work fully remotely; however, for this position, we will only consider applications from candidates based in the UK. If you live in or near London, you are welcome (but not required!) to work from our London office.


Responsibilities:

  • You will be an individual contributor in our data engineering team, focused on designing and building robust data pipelines for the ingestion and processing of carbon offset-related data.
  • You will contribute to and maintain our analytical data models in our data warehouse.
  • You will work with our product engineering teams to architect robust data flows, systems, and APIs to deliver data to our internal and external customers.
  • You will work with other teams in the business to enable them to be more efficient, by building data tools and automations.


You’ll be our ideal candidate if:

  • You care deeply about the climate and carbon markets and are excited by solutions for decarbonising our economy.
  • You are a highly collaborative individual who wants to solve problems that drive business value.
  • You have at least 5 years of experience building ELT/ETL pipelines in production for data engineering use cases, using Python and SQL.
  • You have hands-on experience with workflow orchestration tools (e.g., Airflow, Prefect, Dagster), containerization using Docker, and a cloud platform like AWS.
  • You can write clean, maintainable, scalable, and robust code in Python and SQL, and are familiar with collaborative coding best practices and continuous integration tooling.
  • You are well-versed in code version control and have experience working in team setups on production code repositories.
  • You’ve designed back-end services and deployed APIs yourself, ideally using a framework like FastAPI (a minimal sketch of such a service follows this list).
  • You have experience in deploying and maintaining cloud resources in production using tools such as AWS CloudFormation, Terraform, or others.
  • You have ambitions to grow into a technical leadership role, and are willing to take on line management responsibilities of 1-2 engineers.
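For context on the kind of back-end service mentioned above, here is a minimal FastAPI sketch exposing a single read endpoint. The route, the Project model, and the in-memory data are invented for illustration; a real service would query the warehouse instead.

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="Carbon project data API (illustrative)")

    class Project(BaseModel):
        project_id: str
        name: str
        rating: str

    # In-memory stand-in for a real warehouse-backed query layer.
    _PROJECTS = {
        "proj-001": Project(project_id="proj-001", name="Example Forestry Project", rating="BBB"),
    }

    @app.get("/projects/{project_id}", response_model=Project)
    def get_project(project_id: str) -> Project:
        # Return the requested project, or a 404 if it is unknown.
        project = _PROJECTS.get(project_id)
        if project is None:
            raise HTTPException(status_code=404, detail="Unknown project")
        return project

Saved as app.py, this would run locally with "uvicorn app:app --reload".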


Our interview process:

  • Initial screening interview with recruiter (15 mins)
  • Introduction call with senior team lead (30 mins)
  • Two technical interviews with members of the data engineering team (60-90 mins each)
  • Reference checks + offer


We value diversity at BeZero Carbon. We need a team that brings different perspectives and backgrounds together to build the tools needed to make the voluntary carbon market transparent. We’re therefore committed not to discriminate based on race, religion, colour, national origin, sex, sexual orientation, gender identity, marital status, veteran status, age, or disability.
