SC Cleared Python Data Engineer – Azure Cloud

Posted 2 days ago by Montash

£51 per hour
Inside IR35
Remote
United Kingdom

Summary: The SC Cleared Python Data Engineer role focuses on designing and delivering scalable data pipelines using Python and PySpark within an Azure cloud environment. The position requires active SC clearance and emphasises strong technical expertise in cloud services, data architecture, and testing methodologies. The engineer will collaborate with Cloud, DevOps, and Data teams to deliver maintainable, efficient solutions. This is a fully remote, 12-month contract starting in January 2026.

Detailed Description From Employer:

Job Title: SC Cleared Python Data Engineer – Azure Cloud

Contract: 12 months

Rate: Up to £410/day (Inside IR35)

Location: UK-based, remote

Start: January 2026

Clearance: Active SC Clearance required

Role Overview

We are seeking an SC Cleared Python Data Engineer with strong hands-on experience in PySpark, Delta Lake, and Azure cloud services. The role focuses on designing and delivering scalable, well-tested data pipelines, with particular emphasis on the ability to understand, explain, and design PySpark architectures, backed by deep, production-grade Python expertise. You will work in a containerised, cloud-native environment, delivering maintainable, configurable, and test-driven solutions as part of a multi-disciplinary engineering team.

Key Responsibilities

  • Design, develop, and maintain data ingestion and transformation pipelines using Python and PySpark (a minimal pipeline sketch follows this list).
  • Clearly articulate PySpark architecture, execution models, and performance considerations to both technical and non-technical stakeholders.
  • Implement unit and BDD testing (Behave or similar), including effective mocking and dependency management (see the Behave sketch after this list).
  • Design and optimise Delta Lake tables to support ACID transactions, schema evolution, and incremental processing.
  • Build and manage Docker-based environments for development, testing, and deployment.
  • Develop configuration-driven, reusable codebases suitable for multiple environments.
  • Integrate Azure services including Azure Functions, Key Vault, and Blob/Data Lake Storage.
  • Optimise Spark jobs for performance, scalability, and reliability in production.
  • Collaborate with Cloud, DevOps, and Data teams to support CI/CD pipelines and environment consistency.
  • Produce clear technical documentation and follow cloud security and data governance best practices.
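
To ground the pipeline, Delta Lake, and incremental-processing responsibilities above, here is a minimal sketch of what such an ingestion step can look like. It assumes pyspark and delta-spark are installed; the paths, table location, and columns (order_id, amount) are hypothetical illustrations, not details taken from this role.

```python
# Minimal ingestion sketch: read raw CSV, derive typed columns, and upsert
# into a Delta table. Paths and column names are hypothetical examples.
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

builder = (
    SparkSession.builder.appName("orders-ingest")
    # Delta's SQL extension and catalog provide ACID transactions on the table.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

raw = (
    spark.read.option("header", "true").csv("/data/raw/orders/")
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("ingested_at", F.current_timestamp())
)

target = "/data/delta/orders"
if DeltaTable.isDeltaTable(spark, target):
    # Incremental processing: MERGE makes re-runs idempotent upserts.
    (
        DeltaTable.forPath(spark, target).alias("t")
        .merge(raw.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
else:
    # First load; mergeSchema opts in to controlled schema evolution later.
    raw.write.format("delta").option("mergeSchema", "true").save(target)
```

The MERGE path is what makes incremental runs safe: replaying the same source data updates matching order_ids rather than appending duplicates.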
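
Likewise, a small sketch of the kind of BDD testing the role describes, using Behave step definitions with unittest.mock to isolate the job's logic from storage. The pipeline module and its read_raw_orders/run_ingestion functions are hypothetical stand-ins for code under test.

```python
# features/steps/orders_steps.py: Behave step definitions that patch the
# storage layer so the job's logic runs without Spark or Azure dependencies.
#
# Matching feature file (features/orders.feature):
#   Scenario: small batch is ingested
#     Given the raw store returns 3 order rows
#     When the ingestion job runs
#     Then 3 rows are written out
from unittest.mock import patch

from behave import given, then, when

import pipeline  # hypothetical module under test


@given("the raw store returns {count:d} order rows")
def step_rows(context, count):
    context.rows = [{"order_id": i, "amount": 10.0} for i in range(count)]


@when("the ingestion job runs")
def step_run(context):
    # Patching keeps the test hermetic: no real storage dependency is hit.
    with patch.object(pipeline, "read_raw_orders", return_value=context.rows):
        context.result = pipeline.run_ingestion()


@then("{count:d} rows are written out")
def step_written(context, count):
    assert len(context.result) == count
```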

Required Skills & Experience

  • Strong Python expertise, with a demonstrable depth of experience in designing modular, testable, production-quality code.
  • Proven experience explaining and designing PySpark architectures, including distributed processing and performance tuning.
  • Hands-on experience with Behave or similar BDD frameworks, including mocking and patching techniques.
  • Solid understanding of Delta Lake concepts, transactional guarantees, and optimisation strategies.
  • Experience using Docker across development and deployment workflows.
  • Practical experience with Azure services (Functions, Key Vault, Blob Storage, ADLS Gen2).
  • Experience building configuration-driven applications (illustrated, together with Key Vault access, in the sketch after this list).
  • Strong problem-solving skills and ability to work independently in agile environments.
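
As an illustration of the configuration-driven and Azure-integration skills listed above, here is one possible shape for an entry point that keeps environment differences in config files and secrets in Key Vault. The vault URL key, secret name, and container names are hypothetical; the sketch assumes azure-identity, azure-keyvault-secrets, and azure-storage-blob are installed.

```python
# Configuration-driven entry point: environment-specific values come from a
# JSON config plus Key Vault, so the same code runs in dev, test, and prod.
import json
import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.storage.blob import BlobServiceClient


def load_config(env: str) -> dict:
    # One JSON file per environment, e.g. config/dev.json, config/prod.json.
    with open(os.path.join("config", f"{env}.json")) as fh:
        return json.load(fh)


def main() -> None:
    cfg = load_config(os.environ.get("APP_ENV", "dev"))

    # DefaultAzureCredential works unchanged across local dev (az login),
    # managed identity in Azure Functions, and CI service principals.
    credential = DefaultAzureCredential()

    # Secrets stay in Key Vault; only the vault URL lives in config.
    secrets = SecretClient(vault_url=cfg["key_vault_url"], credential=credential)
    storage_conn = secrets.get_secret(cfg["storage_secret_name"]).value

    blobs = BlobServiceClient.from_connection_string(storage_conn)
    container = blobs.get_container_client(cfg["landing_container"])
    for blob in container.list_blobs(name_starts_with=cfg["landing_prefix"]):
        print(blob.name)  # placeholder for the real ingestion trigger


if __name__ == "__main__":
    main()
```

Because nothing environment-specific is hard-coded, promoting the job from dev to prod is a config change rather than a code change, which is the point of the configuration-driven requirement.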

Desirable Experience

  • Databricks or Synapse with Delta Lake.
  • CI/CD pipelines (Azure DevOps or similar) and infrastructure-as-code.
  • Knowledge of Azure data security and governance best practices.
  • Experience working in distributed or multi-team environments.