Summary: We're seeking a Data Engineer to join a structured technology environment focused on a multi-year data transformation for a key Financial Services client. The role involves building scalable data ingestion pipelines and ensuring high-quality data access across business domains. Ideal candidates will have hands-on experience in engineering and data pipeline operations, tackling real-world challenges in a mature setting. Collaboration within a modern engineering squad is essential for success in this position.
Salary (Rate): undetermined
City: Singapore
Country: Singapore
Working Arrangements: hybrid
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
description:
We're hiring a Data Engineer to join a highly structured, enterprise-grade technology environment undergoing a multi-year data transformation initiative for one of our key Financial Services clients. You'll work within a modern, collaborative engineering squad responsible for building scalable data ingestion pipelines and enabling high-quality, centralised data access across critical business domains.
This role is ideal for someone who enjoys hands-on engineering, data pipeline operations, and tackling real-world production challenges in a mature environment with strong engineering standards.
What You'll Do
- Design, develop, and maintain robust data ingestion pipelines (batch & streaming) using Python.
- Integrate data from APIs, file transfers, and relational databases (e.g., Oracle, MSSQL) into a cloud-based data platform.
- Build and optimise ETL/ELT processes, ensuring reliability, scalability, and clean end-to-end data flow.
- Implement automated data quality checks, operational monitoring, and rerun/reprocessing capabilities (see the illustrative sketch after this list).
- Work across the full SDLC: requirements, development, testing (SIT/UAT), deployment, and BAU support.
- Collaborate closely with data stewards, analysts, and cross-functional stakeholders to deliver high-quality outcomes.
- Participate in operational duties, including occasional low-touch weekend support for deployment-related issues.
- Uphold strong engineering discipline around compliance, documentation, version control, and CI/CD processes.
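To give a flavour of the day-to-day work described above, here is a minimal, illustrative Python sketch of a batch ingestion step with an automated data quality check and an idempotent rerun guard. The file paths, column names, thresholds, and use of pandas/pyarrow are assumptions for illustration only, not details of the client's actual platform.

```python
"""Illustrative sketch only: one batch ingestion step with a basic data
quality gate and an idempotent rerun guard. Paths, columns, and thresholds
are hypothetical placeholders."""

from pathlib import Path
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ingest")

RAW_FILE = Path("landing/trades_2024-06-01.csv")   # hypothetical source extract
CURATED_DIR = Path("curated/trades")               # hypothetical curated zone
REQUIRED_COLUMNS = {"trade_id", "book", "notional", "trade_date"}


def quality_checks(df: pd.DataFrame) -> None:
    """Fail fast on basic issues so bad loads never reach the warehouse."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    if df["trade_id"].duplicated().any():
        raise ValueError("duplicate trade_id values found")
    if df["notional"].isna().mean() > 0.01:  # tolerate at most 1% null notionals
        raise ValueError("too many null notionals")


def ingest(raw_file: Path, curated_dir: Path) -> Path:
    """Load one batch file; safe to re-run because output is keyed by input name."""
    out_path = curated_dir / f"{raw_file.stem}.parquet"
    if out_path.exists():
        log.info("already processed %s, skipping (idempotent rerun)", raw_file.name)
        return out_path

    df = pd.read_csv(raw_file, parse_dates=["trade_date"])
    quality_checks(df)

    curated_dir.mkdir(parents=True, exist_ok=True)
    df.to_parquet(out_path, index=False)
    log.info("wrote %d rows to %s", len(df), out_path)
    return out_path


if __name__ == "__main__":
    ingest(RAW_FILE, CURATED_DIR)
```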
What You'll Bring
Must-Have Skills
- Strong hands-on experience with Python for data engineering and pipeline development.
- Proficiency in SQL and experience with large, structured datasets.
- Exposure to Snowflake or similar cloud data warehouse platforms (see the loading sketch after this list).
- Experience with AWS services commonly used in data environments.
- Familiarity with CI/CD pipelines (e.g., GitHub Actions).
- Strong understanding of data engineering fundamentals: ingestion patterns, orchestration, data lifecycle management.
- Ability to troubleshoot production issues independently with an operational mindset.
- Strong communication and collaboration skills within structured, cross-team environments.
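As a hedged illustration of the Snowflake exposure listed above, the sketch below copies a staged parquet file into a warehouse table using plain SQL via the snowflake-connector-python package. The account, warehouse, stage, table, and credential handling shown are hypothetical placeholders, not the client's configuration.

```python
"""Illustrative sketch only: load a staged parquet file into a Snowflake-style
cloud warehouse with plain SQL. All object names and credentials are
hypothetical; in practice secrets would come from a secrets manager."""

import os

import snowflake.connector  # assumes the snowflake-connector-python package


def load_to_warehouse(parquet_key: str) -> None:
    """COPY a staged parquet file into a target table, then report row counts."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="INGEST_WH",   # hypothetical compute warehouse
        database="ANALYTICS",    # hypothetical database
        schema="CURATED",
    )
    try:
        with conn.cursor() as cur:
            # @TRADES_STAGE is a hypothetical external stage over cloud storage.
            cur.execute(
                f"COPY INTO CURATED.TRADES FROM @TRADES_STAGE/{parquet_key} "
                "FILE_FORMAT = (TYPE = PARQUET) MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE"
            )
            cur.execute("SELECT COUNT(*) FROM CURATED.TRADES")
            print("rows in CURATED.TRADES:", cur.fetchone()[0])
    finally:
        conn.close()


if __name__ == "__main__":
    load_to_warehouse("trades_2024-06-01.parquet")
```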
Nice-to-Have Skills
- Test-driven development (TDD) or production-grade coding practices.
- Experience with pipeline monitoring and observability tools.
- Background working in large, mature organisations with established governance frameworks.
- Prior exposure to enterprise-scale data transformation or centralised data platform projects.
Who You Are
- You enjoy building and fixing pipelines end-to-end, not just writing scripts.
- You're comfortable working in environments where engineering hygiene, traceability, and compliance matter.
- You thrive in collaborative, mission-driven teams with shared ownership.
- You're proactive, structured, and able to navigate complex data ecosystems confidently.
- You're open to light operational duty (very minimal weekend touchpoints during deployment cycles).
Team & Ways of Working
- You'll join a tight-knit engineering squad working as one team.
- The wider group includes data management partners who support data quality, operations, and user engagement.
- Work is delivered in Agile sprints, with strong emphasis on communication and clear ownership.
- The culture is inclusive, supportive, and highly collaborative, with team members helping each other across development and operations.
- Hybrid work arrangement with rotational in-office days depending on team schedule.
Why This Role Is Appealing
- Be part of a high-impact, enterprise-level data initiative that's central to organisational decision-making.
- Enjoy the stability and structure of a large organisation while working in a modern, engineering-driven team.
- Work on meaningful data domains with real operational importance and visibility.
- Opportunity to grow your cloud data engineering skills and gain exposure to end-to-end platform operations.
- The team invests heavily in mentoring, learning, and continuous improvement.
- Tech readiness is complete, so you can make an immediate impact from day one.
EA registration number: ANDREW JONAS MATTHEW, R21103843 Allegis Group Singapore Pte Ltd, Company Reg No. 200909448N, EA Licence No. 10C4544