Roles & Responsibilities
- Design and develop ETL pipelines that integrate with external services, including building client wrappers.
- Participate in designing and implementing foundational layers, including data models, workflows, and storage systems.
- Perform DataOps, including data corrections and schema-evolution migrations.
Job Requirements:
- 1+ years of experience with Python and a basic understanding of Java.
- Basic knowledge of Git and *NIX; Kubernetes is a plus.
- An eager learner and a strong writer. You will become the subject-matter expert for a variety of integrated external services, so adaptability and a commitment to clear communication and documentation are significant advantages.
- An appreciation for sound, evolvable data schema design, paired with a keen ability to detect schema design smells.
Special Benefits:
- Be trained by data engineering experts and specialists.
- Be part of the incredible team building PrimeData CDP – the pioneering science-first customer data & experience platform.
Deadline for submission: 31/12/2023