Agentic AI Consultant (#7196)

Job Title: Agentic AI Consultant

Location: 15-20% travel to NYC (all expenses paid)

Duration: Contract-to-hire or permanent (both options open)

US citizens, Green Card holders, and those authorized to work in the US are encouraged to apply. We are unable to sponsor H-1B candidates at this time.

Job Description


Summary

Agentic AI Data Engineer:

  • Looking for a strong Data Engineer who knows AI concepts. This is a consultant-style role: the person will wear multiple hats, acting not as an order-taking data engineer but as a creative partner on Agentic AI proofs of concept.
  • Experience in finance and asset management.
  • AWS, API integration, Python, and SQL.
  • Ability to connect to APIs at a Senior Data Engineer level (see the sketch after this list).
  • Experience with agentic AI; building it is not required, but supporting it is. Knowing the concepts is good enough.
  • Very strong Data Engineer.
  • A consultant-style person who can be dropped into a banking client and work with data scientists and AI engineers to create AI proofs of concept.
  • Must understand data pipelines and the underlying data, be comfortable with ambiguity, and contribute ideas and POCs. Not an order taker.
  • Production-level deployment experience, a flexible mindset, and the ability to support the AI team during creation.
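
To make the API and AWS expectations concrete, here is a minimal Python sketch of the kind of ingestion task this engineer would own. It is illustrative only: the endpoint URL, bucket name, and key layout are hypothetical, not part of any client stack.

import json
from datetime import datetime, timezone

import boto3
import requests

API_URL = "https://api.example.com/v1/positions"  # hypothetical endpoint
BUCKET = "example-raw-data"                       # hypothetical S3 bucket

def ingest_to_s3() -> str:
    """Fetch one batch of records from the API and land it in S3 as raw JSON."""
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    records = resp.json()

    # Partition raw landings by UTC timestamp so reruns never overwrite.
    key = f"raw/positions/{datetime.now(timezone.utc):%Y/%m/%d/%H%M%S}.json"
    boto3.client("s3").put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(records).encode("utf-8"),
    )
    return key

if __name__ == "__main__":
    print(f"wrote {ingest_to_s3()}")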

 

AI Role

 

Responsibilities:

  • Work side-by-side with AI/ML Engineers to understand data requirements for agentic AI workflows, iterating on data schemas and storage strategies.
  • Perform data audits to find inconsistencies, gaps, and quality issues in how data is ingested, transformed, and used.
  • Develop and maintain ETL/ELT pipelines to collect, ingest, and integrate data from diverse sources (databases, APIs, event streams).
  • Implement automated routines for data validation, cleansing, de-duplication, and anomaly detection to ensure high-quality inputs for AI models.
  • Optimize data pipelines and storage for performance and cost-efficiency in large-scale environments (AWS cloud).
  • Leverage tools like Apache Airflow, Prefect, or similar to schedule, monitor, and manage data workflows (see the sketch after this list).
  • Maintain clear documentation of data schemas, pipeline architectures, and operational runbooks; champion data governance and version control.
  • Establish monitoring dashboards and alerts; rapidly diagnose and resolve data pipeline issues to minimize downtime.
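
As a rough sketch of the orchestration described above (assuming Airflow 2.4+ for the schedule argument; the DAG name, sample rows, and validation rules are illustrative, not a prescribed design):

from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def positions_pipeline():
    @task
    def extract() -> list[dict]:
        # Stand-in for the real source (API, database, event stream).
        return [{"id": 1, "qty": 100}, {"id": 1, "qty": 100}, {"id": 2, "qty": None}]

    @task
    def validate(rows: list[dict]) -> list[dict]:
        # De-duplicate and drop rows with missing required fields.
        seen, clean = set(), []
        for row in rows:
            if row["qty"] is None or row["id"] in seen:
                continue  # a real pipeline would route these to quarantine
            seen.add(row["id"])
            clean.append(row)
        return clean

    @task
    def load(rows: list[dict]) -> None:
        # Stand-in for a warehouse write (Redshift, Snowflake, etc.).
        print(f"loading {len(rows)} clean rows")

    load(validate(extract()))

positions_pipeline()

In practice the extract and load tasks would talk to the client's actual sources and warehouse; the point is the extract, validate, load shape and the quality gate before any AI-facing table.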

 

Qualifications:

 

Technical skills

  • 5+ years of experience in data engineering or software development, with a strong focus on building data pipelines and infrastructure.
  • A demonstrated understanding of real-world commercial data across one or more industries.
  • Firm understanding of GenAI and MLOps, and an ability to discuss them beyond a superficial level.
  • Experience building and deploying data pipelines for machine learning models or AI-driven features in production environments.
  • Proficiency in Python, Java, or other relevant programming languages.
  • Hands-on experience with cloud platforms (AWS) and data management tools like Airflow, DBT, or Kubernetes.
  • Expertise in relational and NoSQL databases, including MySQL, PostgreSQL, MongoDB, or Cassandra.
  • Experience with distributed data processing frameworks (Spark, Hadoop) and big data tools.
  • Strong understanding of data governance, compliance, and security best practices.
  • Excellent problem-solving skills and ability to communicate technical concepts to non-technical stakeholders.
  • Ability to work both independently and as part of a team.
  • Effective time management skills to balance deliverables with fast-paced deadlines.
  • Good verbal/written communication and presentation skills, with the ability to successfully articulate your work and its intent to your team.

A reasonable, good-faith estimate of the minimum and maximum base salary for this position is $130K to $150K per year.

Hi, I'm Ripudaman. I manage this role.