
Staff Data Engineer - Data Hub Platforms

  • Location: India - Maharashtra - Pune
  • Travel required: Yes - up to 10%
  • Job category: IT&S Group
  • Relocation available: Yes - Domestic (in country) only
  • Job type: Professionals
  • Job code: 126345BR
  • Experience level: Senior

Job summary

Role Synopsis

Critical to achieving bp’s digital ambitions is the delivery of our high value data and analytics initiatives, and the enablement of the technologies and platforms that will support those objectives.

As a Data Engineer you will be developing and maintaining data infrastructure and writing, deploying, and maintaining software to build, integrate, manage, maintain, and quality-assure data at bp. You are passionate about planning and building compelling data products and services, in collaboration with business stakeholders, Data Managers, Data Scientists, Software Engineers and Architects in bp.

You will be part of bp’s Data & Analytics Platform organisation, the group responsible for the platforms and services that underpin bp’s data supply chain. The portfolio covers technologies that support the life cycle of critical data products in bp, bringing together data producers and consumers through enablement and industrial scale operations of data ingestion, processing, storage, and publishing, including data visualisation, advanced analytics, data science and data discovery platforms.

For this role specifically, you will design and deliver the data workflows and pipelines needed to collect and process data from bp’s enterprise systems, e.g., the cloud engineering platform, networks, and data & analytics platforms, to provide visibility and insights into platform operations and performance. This role is key to ensuring the availability and reliability of quality data that bp’s enterprise organisation can use to detect, anticipate, and prevent issues; to generate insights that improve our operations; and to give our platform engineers timely access to the data they need to build automation for operational efficiency, self-healing, and similar capabilities.

Key Accountabilities
  • Design and develop industrial-scale data pipelines on Azure and AWS data platforms and services: build data ingestion and publishing pipelines, develop and provision data nodes and telemetry for performance and utilisation analytics, and support the development and automation of system performance metrics
  • Collaborate with enterprise platform teams to use existing data products, ingestion patterns, or automations, avoiding bespoke development, while contributing to the enhancement and creation of these shared assets when gaps are identified
  • Own the end-to-end technical data lifecycle and corresponding data technology stack for your data domain, and maintain a deep understanding of the bp technology stack.
  • Write, deploy, and maintain software to build, integrate, manage, maintain, and quality-assure data; take responsibility for deploying secure, well-tested software that meets privacy and compliance requirements; develop, maintain, and improve CI/CD pipelines.
  • Adhere to and advocate for software engineering best practices (e.g., technical design, technical design review, unit testing, monitoring & alerting, checking in code, code review, documentation).

Essential Education

Bachelor's (or higher) degree in Computer Science/IT, Mathematics, or a hard science.

Job Requirements

Years of relevant experience: 12 to 15

Required Criteria
  • Deep and hands-on experience designing, planning, implementing, maintaining, and documenting reliable and scalable data infrastructure and data products in complex environments.
  • Development experience in one or more object-oriented programming languages (e.g., Python, Go, Java, C++)
  • Deep knowledge and hands-on experience in technologies across all data lifecycle stages

Desirable Criteria
  • Data Manipulation: debug and maintain the end-to-end data engineering lifecycle of data products; design and implement the end-to-end data stack, including complex data systems, e.g., interoperability across cloud platforms; experience with various types of data (streaming, structured, and unstructured) is a plus.
  • Software Engineering: hands-on experience with SQL and NoSQL database fundamentals, query structures, and design best practices, including scalability, readability, and reliability; proficiency in at least one object-oriented programming language, e.g., Python (specifically data-manipulation packages such as pandas, seaborn, and matplotlib), Apache Spark, or Scala.
  • Scalability, Reliability, Maintenance: proven experience building scalable, reusable systems used by others; knowledge and experience in automating operations wherever possible, prioritising long-term productivity over short-term gains, and executing on opportunities to improve products or services.
  • Data Domain Knowledge: proven understanding of data sources, data and analytics requirements, and the typical SLAs associated with data provisioning and consumption at enterprise scale, with interest and experience in the analysis of data or other enterprise platform operations activities.
  • Cloud Engineering: recent experience with data analytics offerings and services from Azure and AWS

Additional Information
Key Behaviors:
  • Empathetic: Cares about our people, our community and our planet
  • Curious: Seeks to explore and excel
  • Creative: Imagines the extraordinary
  • Inclusive: Brings out the best in each other

