Our partner creates an environment with opportunities for their people to succeed, backed by a culture and support system that enables them to truly own their careers. They are motivated individuals who tackle unique technical challenges at scale and solve them as a team. Together, they deliver innovative and ethical solutions that help businesses achieve their ambitions faster.
This is Compliance Data
They are seeking a Data Engineer to join their Compliance Data team in Amsterdam. This team is part of their Data Solutions organisation and works to enhance the risk management strategy for their customers. They aim to set a new standard for building a data-driven approach to managing risk, while simultaneously having a direct impact on reducing those risks through technology. Therefore they’re looking for a trailblazer with an entrepreneurial mindset to innovate, tinker, and craft scalable data solutions from the ground up.
*Please note that this is an office-first role*
- Develop High-Quality Data Pipelines: Design, develop, deploy, and operate ETL/ELT pipelines in PySpark. Your work will directly contribute to the creation of visually stunning reports, analytics, and datasets for both internal and external use.
- Collaborative Solution Development: Partner with compliance teams, engineers, and data analysts to understand data requirements and transform these insights into effective data pipelines.
- Orchestrate Data Flow: Utilize orchestration tools to manage data pipelines efficiently; experience with Airflow is a significant advantage.
- Champion Data Best Practices: Advocate for performance, code quality, data validation, data governance, and discoverability. Ensure that the data provided is accurate, performant, and reliable.
- Ensure Code Quality: Implement rigorous testing protocols for your code, with experience in Pytest being highly valued.
- Knowledge Sharing and Training: Scale your knowledge throughout the organization, enhancing overall data literacy.
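As a rough flavour of the pipeline and testing work described above, here is a minimal sketch in plain Python (a real pipeline on this team would likely use PySpark; the function and field names are invented for illustration) of a small transformation with a Pytest-style check:

```python
# Hypothetical sketch: a small ETL-style transformation paired with a
# Pytest-style validation test. Plain Python is used for brevity.

def normalise_transactions(rows):
    """Drop incomplete records and standardise amount and currency fields."""
    cleaned = []
    for row in rows:
        if row.get("amount") is None or row.get("currency") is None:
            continue  # skip records missing required fields
        cleaned.append({
            "amount": float(row["amount"]),
            "currency": row["currency"].upper(),
        })
    return cleaned


def test_normalise_transactions():
    # Pytest discovers functions named test_* and runs their assertions.
    raw = [
        {"amount": "10.5", "currency": "eur"},
        {"amount": None, "currency": "usd"},  # incomplete: should be dropped
    ]
    assert normalise_transactions(raw) == [
        {"amount": 10.5, "currency": "EUR"}
    ]
```

In a Pytest setup, this test file would run via `pytest` on each change, which is one common way teams enforce the code-quality and data-validation practices listed above.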
- Experienced in Big Data Technologies: You bring at least 3 years of hands-on experience, with a focus on ETL/ELT development, ideally in PySpark and Airflow.
- Technically Proficient: You have a strong command of Python and SQL, coupled with a good understanding of software engineering and data engineering principles.
- A Clear Communicator: You excel in articulating complex data-related concepts and outcomes to a diverse range of stakeholders.
- Self-starter: You are able to independently recognize areas of opportunity, figure out solutions, and build with a “launch quickly and iterate” mentality. A strong background in statistics, mathematics, or engineering is a plus.
- Skilled in the following tools/languages: Git, S3, Airflow, Docker, Python, SQL. Looker is a plus.
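As a rough sketch of the Python-plus-SQL proficiency implied above, the following uses Python's built-in `sqlite3` module to illustrate an ELT-style step: load raw records, then transform them with SQL. The table and column names are invented for illustration.

```python
import sqlite3

# Load: insert raw records into a staging table (in-memory for this sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (customer_id TEXT, risk_score REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [("c1", 0.2), ("c1", 0.8), ("c2", 0.5)],
)

# Transform: aggregate the highest risk score observed per customer.
rows = conn.execute(
    """
    SELECT customer_id, MAX(risk_score) AS max_risk
    FROM raw_events
    GROUP BY customer_id
    ORDER BY customer_id
    """
).fetchall()
# rows is [("c1", 0.8), ("c2", 0.5)]
```

In practice the same load-then-transform pattern would run against a warehouse rather than SQLite, with a tool such as Airflow scheduling each step.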
- A competitive salary
- An inclusive environment where everyone will support your growth
- A welcoming team and a positive atmosphere