Principal Data Engineer
Phoenix, Arizona, United States
Requisition Number 13420
September 28, 2022
Required Skills:
Python, PySpark/Spark, SQL, data lakes (on-prem or cloud), AWS, Snowflake.
Job Summary:
The Principal Data Engineer plays a lead role in developing a high-performance data platform, integrating data from a variety of internal and external sources to support data and analytics activities. This is a technical role that involves defining changes to the warehouse data model and building scalable, efficient processes to populate or modify warehouse data. The successful candidate will have hands-on data processing and data modeling experience in both cloud and on-prem environments.
Responsibilities:
- Serve as a technical lead in the development of high-volume data platforms that drive decisions and have a tangible, beneficial impact on our clients and on business results.
- Design and implement efficient data pipelines (ETLs) to integrate data from a variety of sources into the data warehouse (a minimal sketch follows this list).
- Design and implement data model changes that align with warehouse standards.
- Design and implement backfill or other warehouse data management processes.
- Develop and execute testing strategies to ensure high-quality warehouse data.
- Provide documentation, training, and consulting for data warehouse users.
- Perform requirements and data analysis to support warehouse project definition.
- Provide input and feedback to support continuous improvement in team processes.
- Experience working in both on-prem and cloud environments (AWS preferred).
- Lead the onsite and offshore team, providing technical leadership and guidance.
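For illustration, here is a minimal sketch of the kind of ETL pipeline this role builds, written in PySpark. The bucket path, column names, and warehouse table names (warehouse.fact_orders, etc.) are hypothetical placeholders, not part of this posting:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

    # Extract: read raw order events from a data lake location (path is illustrative).
    raw = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Transform: normalize types, derive a load date, and deduplicate on the key.
    orders = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("load_date", F.current_date())
           .dropDuplicates(["order_id"])
    )

    # Load: append into the warehouse table, partitioned by load date so that
    # targeted backfills and reprocessing stay cheap.
    (orders.write
           .mode("append")
           .partitionBy("load_date")
           .saveAsTable("warehouse.fact_orders"))

Partitioning by load date is one common way to support the backfill and data-management responsibilities listed above.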
Qualifications:
- 5+ years in a Data Engineering role
- 7+ years of hands-on experience with SQL:
  - Ability to write and interpret complex SQL queries and joins (see the example after this list)
  - Execution plans and SQL optimization (Oracle SQL Profiler)
- 3+ years of coding experience (Python and/or PySpark)
- 3+ years of hands-on experience with big data and cloud technologies (Snowflake, EMR, Redshift, or similar) is highly preferred
- Schema design and architecture on Snowflake
- Architecture and design experience with AWS cloud
- AWS services expertise: S3, RDS, EC2, ETL services (Glue, Kinesis)
- Consumption-layer design experience for reporting and dashboards
- Expert-level understanding of ETL fundamentals and experience building efficient data pipelines
- 3+ years with Enterprise GitHub: branching, releases, DevOps, and CI/CD pipelines
- Team player with strong communication and collaboration skills
- Experience with Agile methodologies.
- Master's degree (or a B.S. with relevant industry experience) in math, statistics, computer science, or an equivalent technical field
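As a rough illustration of the SQL skills listed above (complex joins plus execution-plan review), here is a sketch using Spark SQL; all table and column names are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql_skills_example").getOrCreate()

    # A multi-table join with aggregation and an anti-join style filter.
    query = """
        SELECT c.customer_id,
               c.region,
               SUM(o.amount) AS total_spend
        FROM warehouse.fact_orders o
        JOIN warehouse.dim_customer c
          ON o.customer_id = c.customer_id
        LEFT JOIN warehouse.dim_promotion p
          ON o.promo_id = p.promo_id
        WHERE o.load_date >= DATE '2022-01-01'
          AND p.promo_id IS NULL  -- keep only orders without a promotion
        GROUP BY c.customer_id, c.region
        HAVING SUM(o.amount) > 1000
    """

    result = spark.sql(query)
    result.explain()  # review the physical execution plan before running at scale
    result.show()

Calling explain() before executing a heavy query is one simple way to exercise the execution-plan and optimization skills this posting asks for.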
Other Details
- Job start date: September 28, 2022
- Job end date: October 28, 2022
- Phoenix, Arizona, United States