Senior Big Data Engineer

Stamford, CT

Job Description

Full Time Employment opportunity with WWE in Stamford, CT

We are seeking a Senior Big Data Engineer to join a dynamic data platform engineering team responsible for building the WWE data management platform. You will play a key role in the architecture, design, and development of the data pipeline using Big Data technologies on AWS/Google Cloud. You will work closely with the WWE data engineering, analytics, and science teams, leveraging a data lake built from multiple internal and external media datasets.

Key Responsibilities:

  • Build and validate large-scale batch and real-time data pipelines with Big Data technologies such as Talend Real-Time Big Data Platform, Python, Spark, Hadoop, Hive, Pig, Redshift, Snowflake, and NoSQL databases on AWS/Google Cloud
  • Evaluate, configure, and implement new technologies, methodologies, and architecture design patterns to build data processing ETL pipelines
  • Design and develop processes for data discovery, modeling, mining, and archival
  • Collaborate with the data analytics team to ensure the integrity and availability of the data necessary for business analytics and reporting
  • Think strategically and bring new ideas to the ETL pipeline architecture and to scaling it as the business grows
  • Build reusable components and frameworks to speed up data pipeline development
  • Provide guidance and direction to the data engineering team, and implement best practices and standards across all data pipelines

Education & Technical Experience Requirements:

  • Bachelor’s degree in computer science or a similar field of study
  • 8+ years of Data Warehousing, OLAP, SQL Queries, ETL/ELT design and development experience
  • 3+ years of experience with AWS services including S3, Redshift, EMR, Lambda and RDS
  • 3+ years of solid experience developing and performance-tuning data pipelines with Hadoop, Hive, Spark, and Talend Big Data Platform
  • 3+ years of experience in programming languages such as Python, Scala, R, Java, or C#
  • 2+ years of experience with parallel computing, batch processing, and stream processing using tools such as Kinesis or Kafka
  • 2+ years of experience with columnar databases such as Redshift and Snowflake, as well as NoSQL databases such as MongoDB and DynamoDB
  • Self-starter, highly motivated to add value to the team and platform through innovation around data and data solutions
  • Experience working with structured, semi-structured, and unstructured datasets
  • Excellent communication skills for collaborating with the data engineering, analytics, and science teams
  • Experience with social media datasets such as Twitter, YouTube, Facebook, and Instagram is a plus
  • Experience with Google Clickstream, DFP, or Adobe Analytics datasets is a plus
  • Experience with subscription-based media content datasets is a plus
  • Experience creating RESTful APIs is a plus
  • Experience in AI, machine learning, and statistics is a plus
  • Experience in the media and entertainment industry is a plus


Please email your resume to 
