Data Warehouse and Reporting Developer
Job description
At Uni Systems, we are working towards turning digital visions into reality. We are continuously growing, and we are looking for a Data Warehouse and Reporting Developer to join our UniQue team.
What will you be doing in this role?
- Develop, deploy, and maintain scalable and incremental data pipelines from REST APIs and databases using Python, PySpark, Azure Synapse, KNIME, SQL, and ETL tools to ingest, transform, and prepare data (an illustrative ingestion sketch follows this list).
- Process and transform complex JSON and GIS data into structured datasets optimized for analysis and reporting. This includes parsing, transforming, and validating JSON data to ensure data quality and consistency (see the JSON-flattening sketch after this list).
- Load, organize, and manage data in Azure Data Lake Storage and Microsoft Fabric OneLake, ensuring accessibility, performance, and efficient storage using lakehouse and Delta Lake patterns.
- Document ETL processes, metadata definitions, data lineage, and technical specifications to ensure transparency and reusability.
- Collaborate with data analysts, BI developers, and business stakeholders to understand data requirements and deliver reliable, well-documented datasets aligned with organizational needs.
- Implement data quality checks, logging, monitoring, and automated incremental load mechanisms within data pipelines to support maintainability, observability, and troubleshooting (a minimal quality-gate sketch follows).
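
To give a flavor of the pipeline work described above, here is a minimal sketch of incremental ingestion from a paginated REST API into a Delta table. The endpoint, payload shape, watermark value, and lake paths are illustrative assumptions, not part of any actual system.

```python
# Hypothetical illustration: incremental ingestion from a paginated REST API.
# The endpoint, field names, and watermark column are assumptions.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rest-ingest").getOrCreate()

API_URL = "https://api.example.com/v1/records"  # hypothetical endpoint

def fetch_since(last_watermark: str) -> list[dict]:
    """Pull only records modified after the stored watermark (incremental load)."""
    records, page = [], 1
    while True:
        resp = requests.get(
            API_URL,
            params={"modified_after": last_watermark, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("items", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records

rows = fetch_since("2024-01-01T00:00:00Z")
if rows:
    df = spark.createDataFrame(rows)  # infer schema from the JSON payload
    # append the new slice to a bronze-layer Delta table
    df.write.mode("append").format("delta").save("/lake/bronze/records")
```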
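A similarly hedged sketch of flattening nested JSON, including a GeoJSON-style geometry, into structured columns with PySpark; the field names and nesting shape are assumed for illustration.

```python
# A minimal sketch of flattening nested JSON into a tabular dataset with PySpark.
# Column names and the nesting shape are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("json-flatten").getOrCreate()

raw = spark.read.json("/lake/landing/records/*.json")  # hypothetical path

flat = (
    raw
    # promote nested struct fields to top-level columns
    .withColumn("city", F.col("address.city"))
    # split a GeoJSON-style [lon, lat] coordinate array into two columns
    .withColumn("lon", F.col("geometry.coordinates").getItem(0))
    .withColumn("lat", F.col("geometry.coordinates").getItem(1))
    # explode an array of tags into one row per tag, keeping rows with no tags
    .withColumn("tag", F.explode_outer("tags"))
    # basic validation: drop rows missing a primary key
    .filter(F.col("id").isNotNull())
    .drop("address", "geometry", "tags")
)
flat.write.mode("overwrite").format("delta").save("/lake/silver/records")
```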
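Finally, a minimal sketch of the kind of quality gates and logging a pipeline step might run before publishing data; the checks, column names, and paths are again assumptions.

```python
# A sketch of simple data quality gates with logging, assuming a Delta table
# produced by the earlier steps; checks and column names are illustrative.
import logging

from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dq")

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.format("delta").load("/lake/silver/records")

# record row counts, null keys, and duplicate keys for observability
total = df.count()
null_ids = df.filter(F.col("id").isNull()).count()
dupes = total - df.dropDuplicates(["id"]).count()

log.info("rows=%d null_ids=%d duplicates=%d", total, null_ids, dupes)
if total == 0 or null_ids > 0:
    raise ValueError("Data quality gate failed: empty load or null keys")
```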
Requirements
What will you be bringing to the team?
- Master's degree and 12 years of experience in IT, or Bachelor's degree and 16 years of experience in IT.
- A Microsoft Azure Data Engineer Associate certification.
- At least 5 years of experience in Azure Data Lake Storage, Microsoft Fabric OneLake, and Oracle databases.
- Minimum 5 years of experience developing data pipelines from REST APIs and in data integration tools such as Azure Synapse, PySpark, Microsoft Fabric, Python, SQL, and KNIME.
- No less than 5 years of experience in processing JSON and GIS data.
- Excellent knowledge of data engineering tools: Azure Synapse Analytics, Microsoft Fabric, PySpark, and Python.
- Experience designing incremental loads, CDC processes, and automated schema evolution (a merge-based sketch follows this list).
- Ability to implement robust data quality checks, logging, and monitoring in ETL processes.
- Ability to document ETL workflows, metadata, and technical specifications clearly and consistently.
- Familiarity with DevOps and version control best practices. Experience with CI/CD pipelines.
- Experience working in an Agile/Scrum framework.
- Proficiency in English at a C1/C2 level.
- Proficiency in French is considered an advantage.
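
By way of illustration of the incremental-load and CDC requirement above, here is a hedged sketch of a CDC-style upsert using Delta Lake's MERGE with automatic schema evolution; the table paths and key column are assumptions.

```python
# Hedged sketch of a CDC-style upsert with automatic schema evolution using
# Delta Lake's MERGE; table paths and the key column are assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("cdc-merge")
    # let MERGE add new columns arriving from the source (schema evolution)
    .config("spark.databricks.delta.schema.autoMerge.enabled", "true")
    .getOrCreate()
)

updates = spark.read.format("delta").load("/lake/staging/records_changes")
target = DeltaTable.forPath(spark, "/lake/silver/records")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.id = s.id")   # match change rows to the target by key
    .whenMatchedUpdateAll()                     # apply updates for existing keys
    .whenNotMatchedInsertAll()                  # insert brand-new keys
    .execute()
)
```

The `autoMerge` setting is Delta Lake's mechanism for letting new source columns flow into the target table during MERGE, which is one common way to automate schema evolution in incremental pipelines.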