Summary:
Our client is looking for a Cloud Data Engineer to join the Data Pipeline team. As a member of the team, you will work at the leading edge of modern technology, building on platforms like Databricks, Azure Synapse Analytics, and Azure Data Explorer while developing ETL processes in Databricks and Data Factory.
Here are some of the specific details:
Job Title: Cloud Data Engineer
Location: Remote
Duration: 6 Months
Description:
You will help build state-of-the-art pipelines and data
models that are at the heart of the studio decision-making process. You will be
working on large-scale Lakehouse and warehouse analytics systems that process
data feeds in real-time and batch processes. Your customers will be the
business, design, test and development teams and they will look to you to help
shape how we capture data to improve our test-driven methodologies and build a
culture around data-driven development.
Key Responsibilities:
• Help lead tech writing team goals, planning, and projects
• Help teams write and edit project documentation in a clear, concise,
easily understandable manner
• Create documentation and knowledge-sharing processes to keep teams organized
• Organize documentation in an intuitive manner
• Align teams on, and be an advocate for, knowledge sharing across studios
• Help research and push forward sharing initiatives or solutions to current
pain points
Minimum Qualifications:
• 5+ years’ experience with SQL required.
• 5+ years’ experience designing and implementing scalable ETL processes, including data movement (Azure Data Factory) and data quality tools.
• 3+ years’ experience with modern big data analytics using Data Lake, Spark, and formats like Parquet.
• 2+ years’ experience building cloud-hosted data systems; Azure highly preferred.
Preferred Qualifications:
• Building data pipelines in Azure Data Factory/Synapse
Analytics/Spark.
• Working with data in delta lake format and from Azure Data Explorer/Kusto
• Applying AI/ML to data engineering use cases (feature engineering, feature
stores, model training/serving datasets, and model monitoring data pipelines).
• Experience preparing and governing datasets for modern AI applications
(LLM/RAG, experimentation/A-B testing, and privacy-aware data access).
A reasonable, good-faith estimate of the pay range for this position is $50 to $55 per hour.
Benefits will also be available; details can be found
at the following link: https://britehr.app/HarveyNashContractorsNH2025
I look forward to speaking with you soon.
About us:
Harvey Nash is a national, full-service talent management
firm specializing in technology positions. Our company was founded with a
mission to serve as the talent partner of choice for the information technology
industry.
Our company vision has led us to incredible growth and success in a relatively short period of time and continues to guide us today. We are committed to operating with the highest standards of honesty and integrity, and with a passionate commitment to our clients, consultants, and employees.
We are part of Nash Squared Group, a global professional
services organization with over forty offices worldwide.
For more information, please visit us at https://www.harveynashusa.com/
Thanks & Regards,
Srinath Kumbala
(510) 984 1503
Srinath.kumbala@harveynash.com