Data Engineer

Job Description

Our Role:

We are looking for an astute, determined professional like you to fill a Data Engineering (Scala) role within our Technology Solutions Group. You will showcase your success in an Agile environment through collaboration, ownership, and innovation. Your expertise in emerging trends and practices will spark stimulating discussions around optimization and change, helping us keep our competitive edge. This rewarding opportunity will enable you to make a big impact in our organization, so if this sounds exciting, then PGIM might be the place for you.


Your Impact:

  • Build and maintain new and existing applications in preparation for a large-scale architectural migration within an Agile function.
  • Align with the Product Owner and Scrum Master in assessing business needs and transforming them into scalable applications.
  • Build and maintain code to manage data received from heterogeneous sources, including web-based sources, internal/external databases, and flat files in varied formats (binary, ASCII).
  • Help build the new enterprise data warehouse and maintain the existing one.
  • Design and support effective storage and retrieval of very large internal and external data sets, and think ahead about the convergence strategy with our AWS cloud migration.
  • Assess the impact of scaling up and scaling out and ensure sustained data management and data delivery performance.
  • Build interfaces for supporting evolving and new applications and accommodating new data sources and types of data.


Your Required Skills:

  • 5+ years of experience building data pipelines in Scala
  • 3+ years of experience working in the AWS cloud, especially services such as S3, EMR, Lambda, and Kinesis
  • 3+ years of experience with Spark
  • Experience working in an Agile environment with a Scrum Master/Product Owner, and the ability to deliver
  • Experience with data lakes, data marts, and data warehouses
  • Ability to communicate status and challenges to the team
  • Demonstrated ability to learn new skills and work as part of a team


Your Desired Skills:

  • Experience working with Hadoop or other big data platforms
  • Exposure to deploying code through a deployment pipeline
  • Good exposure to containers (e.g., Docker) and container services such as ECS
  • Direct experience supporting multiple business units in foundational data work, and a sound understanding of capital markets, particularly Fixed Income
  • Knowledge of Jira, Confluence, SAFe development methodology & DevOps
  • Excellent analytical and problem-solving skills with the ability to think quickly and offer alternatives both independently and within teams.
  • Proven ability to work quickly in a dynamic environment.
  • Bachelor's degree in Computer Science or a related field.
