Full-Time Onsite Position with Paid Relocation
Our client, a renowned leader in the IT industry, is seeking a highly skilled Principal Consultant – AWS + Snowflake to join their team. This opportunity offers the chance to work with award-winning clients worldwide.
Responsibilities: In this role, you will be responsible for various tasks and deliverables, including:
- Crafting and developing scalable analytics product components, frameworks, and libraries.
- Collaborating with business and technology stakeholders to devise and implement product enhancements.
- Identifying and resolving challenges related to data management to enhance data quality.
- Cleaning and preparing data to optimize it for ingestion and consumption.
- Collaborating on new data management initiatives and the restructuring of existing data architecture.
- Implementing automated workflows and routines using workflow scheduling tools.
- Building frameworks for continuous integration, test-driven development, and production deployment.
- Profiling and analyzing data to design scalable solutions.
- Conducting root cause analysis and troubleshooting data issues proactively.
Requirements: To excel in this role, you should possess the following qualifications and attributes:
- A strong grasp of data structures and algorithms.
- Proficiency in solution and technical design.
- Strong problem-solving and analytical skills.
- Effective communication abilities for collaboration with team members and business stakeholders.
- Quick adaptability to new programming languages, technologies, and frameworks.
- Experience in developing cloud-scalable, real-time, high-performance data lake solutions.
- Sound understanding of complex data solution development.
- Experience in end-to-end solution design.
- A willingness to acquire new skills and technologies.
- A genuine passion for data solutions.
Required and Preferred Skill Sets:
- Hands-on experience with AWS services, including EMR (Hive, PySpark), S3, and Athena, or equivalent cloud services.
- Familiarity with Spark Structured Streaming.
- Experience handling substantial data volumes in a scalable manner within the Hadoop stack.
- Proficiency with SQL, ETL, data transformation, and analytics functions.
- Python proficiency, including batch scripting, data manipulation, and distributable packages.
- Experience with batch orchestration tools such as Apache Airflow or equivalent (Airflow preferred).
- Proficiency with code versioning tools such as GitHub or Bitbucket, and an advanced understanding of repository design and best practices.
- Familiarity with deployment automation tools such as Jenkins.
- Experience designing and building ETL pipelines, with expertise in data ingestion, change data capture, and data quality, along with hands-on API development experience.
- Experience crafting and developing relational database objects, with knowledge of logical and physical data modeling concepts (some exposure to Snowflake).
- Familiarity with use cases for Tableau or Cognos.
- Familiarity with Agile methodologies (experience in Agile environments preferred).
If you’re ready to embrace this exciting opportunity and contribute to our client’s success, we encourage you to apply and become part of our dynamic team.