Please find the job description below.
Summary of the position:
One of our customers is urgently looking for an experienced Data Engineer.
Here are the specific details:
Data Engineer | 12+ months with possible extension | Remote opportunity
The client is looking for a developer with strong hands-on experience developing big data solutions on Hadoop and AWS to help with the migration to the cloud, along with solid Python coding skills.
Required qualifications:
Bachelor's degree in Computer Science, Information Technology, or another relevant field
7+ years of software development experience related to data engineering
Hands-on experience in Java, Python, or Scala, with the ability to understand and write complex SQL queries
Proficiency with S3, Glue, Athena, Lambda, Redshift, EC2, EMR, Spark, and DynamoDB using Java, Python, Scala, or PySpark
Experience deploying software solutions to the cloud through CI/CD in a DevOps model
General understanding of application and data security concepts, with exposure to AWS IAM, CloudTrail, CloudWatch, AWS Config, Secrets Manager, and KMS
Hands-on experience with Hadoop-related technologies such as HDFS, Oozie, Impala, and Hive
Experience working with Hadoop-based big data architectures and solutions
Experience working in an Agile development environment using tools such as Jira or Rally
Experience with UNIX commands and shell scripts
Ability to communicate and collaborate effectively in a team environment while delivering high-quality work independently
Nice to have: experience with Ansible, Terraform, or CloudFormation scripts to develop or support Infrastructure as Code
Nice to have: experience with RDBMSs and data warehouses
Nice to have: experience with machine learning technologies and data visualization tools
Responsibilities:
Design, develop, and operationalize large-scale enterprise data solutions using AWS data and analytics services: S3, Glue, Athena, Lambda, Redshift, EMR, Spark, and DynamoDB
Analyze, re-design, and re-platform on-premises data solutions from Cloudera Hadoop platforms to an AWS-native data stack
Design, develop, and deploy data pipelines from ingestion to consumption within a big data architecture using Java, Python, or Scala
Participate in architecture, design, and product development discussions to understand business requirements and translate them into technical solutions
Work on one or more projects as a technical team member, taking responsibility for complete user stories through analysis, design, development, and testing per project timelines
Develop and maintain scripts to automate batch jobs
Support and maintain existing applications, troubleshooting and resolving technical issues as needed
Follow appropriate technical best practices and internal processes, complying with applicable information security controls
Maintain existing data solutions running on the Hadoop stack (HDFS, Oozie, Impala, Hive), handling enhancements until the cloud migration is complete
Harvey Nash is a national, full-service talent management firm specializing in technology positions. Our company was founded with a mission to serve as the talent partner of choice for the information technology industry.
Our company vision has led us to incredible growth and success in a relatively short period of time and continues to guide us today. We are committed to operating with the highest standards of honesty and integrity, and with a passionate commitment to our clients, consultants, and employees.
Utilizing our proprietary Predictive Staffing model, our company has enjoyed more than a decade of rapid growth, earning our reputation as a client-focused, efficient provider across a broad range of industries. Today, we serve top Fortune 1000 and successful privately held companies all over the country, still operating under the simple idea that great people aligned under a common vision can achieve tremendous results.
We are part of Harvey Nash Group, a global professional services organization with over forty offices worldwide.
For more information, please visit us at https://www.harveynashusa.com/