GCP Data and AI Engineer, AVP

Deutsche Bank

Deutsche Bank, Financial Services


Pune, India

Position Overview

Role Description:

As an AVP, Senior GCP Data and AI Engineer, you will be a lead technologist and individual contributor at the forefront of our data and artificial intelligence initiatives. This role requires an expert builder with a deep, practical understanding of system architecture and modern design patterns. You will be responsible for the hands-on development of our most complex data pipelines and generative AI solutions, from design through to production deployment, ensuring the solutions you build are scalable, resilient, and forward-thinking.

What we’ll offer you

As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:

  • Best-in-class leave policy
  • Gender-neutral parental leave
  • 100% reimbursement under the childcare assistance benefit (gender neutral)
  • Sponsorship for industry-relevant certifications and education
  • Employee Assistance Program for you and your family members
  • Comprehensive hospitalization insurance for you and your dependents
  • Accident and term life insurance
  • Complimentary health screening for employees aged 35 and above

Your key responsibilities:

As a Senior Engineer, your hands-on responsibilities will include:

AI & Application Development:

  • Design, build, and operationalize sophisticated AI applications, including production-grade RAG pipelines on GCP.
  • Leverage the Vertex AI platform to train, fine-tune, and deploy machine learning and generative AI models.
  • Design and implement complex agentic workflows to automate and optimize business processes.
  • Develop, consume, and host mission-critical REST APIs using Python, deployed as containerized applications on Cloud Run.
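As a rough illustration of the Python REST services this role involves, the stdlib-only sketch below exposes a single JSON health endpoint. A production service bound for Cloud Run would typically use a framework such as FastAPI or Flask inside a container; the route and payload here are purely hypothetical.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal JSON endpoint of the kind typically containerized for Cloud Run."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to port 0 so the OS picks any free port, then serve in a daemon thread.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    print(resp.status, json.loads(resp.read()))  # → 200 {'status': 'ok'}
```

On Cloud Run the container would instead listen on the port supplied by the `PORT` environment variable.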

Data Engineering & Pipelines:

  • Design, develop, and maintain scalable batch and streaming data pipelines using Python, SQL, Cloud Composer, and Pub/Sub.
  • Develop and optimize complex SQL queries in BigQuery for large-scale data analysis, extraction, and transformation.
  • Automate data quality and ETL testing procedures using Python and SQL.
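To illustrate the kind of data-quality automation described above, here is a minimal sketch combining Python and SQL. It runs against an in-memory SQLite database standing in for a warehouse table (BigQuery in practice); the table, column, and check names are hypothetical.

```python
import sqlite3

def run_quality_checks(conn, table):
    """Run simple data-quality checks against a table and return failed check names."""
    failures = []
    cur = conn.cursor()
    # Check 1: the table must not be empty.
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    if cur.fetchone()[0] == 0:
        failures.append("empty_table")
    # Check 2: no NULLs in the key column.
    cur.execute(f"SELECT COUNT(*) FROM {table} WHERE id IS NULL")
    if cur.fetchone()[0] > 0:
        failures.append("null_key")
    # Check 3: no duplicate keys.
    cur.execute(f"SELECT COUNT(*) - COUNT(DISTINCT id) FROM {table}")
    if cur.fetchone()[0] > 0:
        failures.append("duplicate_key")
    return failures

# Demo with an in-memory database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [(1, 100.0), (2, 250.5), (2, 250.5)])
print(run_quality_checks(conn, "trades"))  # → ['duplicate_key']
```

In a real pipeline the same checks would run as SQL against BigQuery inside a Cloud Composer task, failing the DAG when any check fails.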

Infrastructure & Operations:

  • Develop and deploy all cloud infrastructure as code using Terraform.
  • Implement and manage CI/CD pipelines to ensure smooth and reliable deployment of data and AI applications.
  • Serve as a key escalation point for complex L3 production issues, providing expert troubleshooting and resolution.

Your skills and experience

Mandatory Engineering Skills:

  • 10+ years of IT experience as a hands-on engineer with a proven track record of building and deploying large-scale data systems.
  • Expert-Level Languages: Deep proficiency in Python and advanced SQL, including complex query optimization and data modeling.
  • Cloud Platform: Extensive hands-on experience building solutions on GCP. Experience with Azure or AWS is also valuable.
  • Core GCP Services: Mastery of BigQuery, Cloud Composer (or Apache Airflow), and Cloud Run in production environments.
  • Infrastructure & Automation: Proficient in defining infrastructure as code using Terraform and designing robust CI/CD pipelines.

Advanced Technical & Design Expertise: We expect candidates to have deep, practical experience in the following areas, with a portfolio of projects demonstrating their expertise.

  • System Design: Strong understanding of modern data patterns, distributed systems, and architectural best practices.
  • Vertex AI Platform: Extensive, hands-on experience building solutions using the Vertex AI platform, including model fine-tuning, custom training, and scalable endpoint deployment.
  • Generative AI Systems: Proven experience building and deploying production-grade RAG (Retrieval-Augmented Generation) systems. Deep understanding of LLMs (like Gemini), vector databases, and embedding models.
  • Agentic Architecture: Strong practical knowledge of agentic design patterns, multi-agent systems, and complex workflows. Awareness of emerging frameworks (e.g., Google's Agent Development Kit) and concepts such as the A2A (agent-to-agent) protocol and agent cards is highly desirable.

MLOps & Application Hosting: Demonstrable experience in operationalizing AI models and hosting applications using containers (Docker).
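To illustrate the retrieval half of the RAG systems mentioned above, the toy sketch below ranks documents against a query by cosine similarity over bag-of-words vectors. A real pipeline would use an embedding model and a vector database (for example, on Vertex AI); the documents and query here are invented.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': a term-frequency vector as a Counter.
    A real RAG system would call a trained embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query — the 'retrieval'
    step of Retrieval-Augmented Generation."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "BigQuery stores and queries large analytical datasets",
    "Cloud Run hosts containerized applications",
    "Pub/Sub delivers streaming messages between services",
]
print(retrieve("where do I host a containerized application", docs, k=1))
# → ['Cloud Run hosts containerized applications']
```

The retrieved passages would then be injected into the prompt of a generative model (such as Gemini) to ground its answer.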

How we’ll support you

  • Training and development to help you excel in your career
  • Coaching and support from experts in your team
  • A culture of continuous learning to aid progression
  • A range of flexible benefits that you can tailor to suit your needs

About us and our teams

Please visit our company website for further information:

https://www.db.com/company/company.html

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively.
Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group.
We welcome applications from all people and promote a positive, fair and inclusive work environment.
