
Data Engineer - SC Cleared

London

Posted 7 hours ago

Work Type

Contract

Salary/Rate

£430-480 per day

Remote Work

No

IR35 Status

Inside IR35

Job Title: Data Engineer
Location: Remote
Rate: Up to £480 a day
Start Date: Mid-February
Job Type: Contract

Length: 12 months

About the Role

We're looking for a skilled Data Engineer to design, build, and optimize scalable data pipelines in a cloud-first environment. You'll play a key role in transforming raw data into reliable, high-quality datasets that power analytics, reporting, and downstream applications.

This is a hands-on role for someone who enjoys working across modern ETL tooling, AWS cloud services, and distributed data processing frameworks, while collaborating closely with architects, analysts, and business stakeholders.

What You'll Be Doing

  • Design, develop, and optimize ETL pipelines using tools such as Informatica, Azure Data Factory, AWS Glue, or similar platforms

  • Build and manage cloud-based data pipelines leveraging AWS services including EMR, S3, Lambda, and Glue

  • Implement scalable data processing workflows using PySpark, Python, and SQL

  • Design and support data ingestion, transformation, and integration across structured and unstructured data sources

  • Collaborate with data architects, analysts, and business stakeholders to translate requirements into robust data solutions

  • Monitor pipeline performance, troubleshoot issues, and ensure data quality, reliability, and availability

  • Contribute to best practices across data engineering, including version control, CI/CD, automation, and monitoring

What We're Looking For

  • Strong hands-on experience with ETL development and orchestration (Informatica, Azure, or AWS-based tooling)

  • Solid experience working in AWS cloud environments, particularly core data services

  • Proven experience building distributed data pipelines using EMR, PySpark, or similar technologies

  • Strong background in data processing and transformation across large-scale datasets

  • Proficiency in PySpark, Python, and SQL for data manipulation and automation

  • Good understanding of data modelling, data warehousing concepts, and performance optimization

  • Experience with CI/CD and version control (e.g. GitHub, GitLab, DevOps tooling)

  • Exposure to data governance, metadata management, and data quality frameworks

  • Comfortable working in Agile delivery environments (a plus, not a blocker)

Why Join?

  • Work on modern, cloud-native data platforms

  • Build pipelines that directly support real business and analytics outcomes

  • Collaborate with experienced data professionals in a supportive, delivery-focused environment

  • Flexible working and a strong engineering culture that values quality and automation

Apply

Refer a Friend for £250!

Refer someone you feel is right for the role and you could receive £250. If you would like to find out more, contact us today.

Get in touch