Senior Data Engineer

  • Location: India - Maharashtra - Pune
  • Travel required: Yes - up to 10%
  • Job category: IT&S Group
  • Relocation available: Yes - Domestic (In country) only
  • Job type: Professionals
  • Job code: 133113BR
  • Experience level: Intermediate

Job summary

Role Synopsis:

Critical to achieving bp’s digital ambitions is the delivery of our high value data and analytics initiatives, and the enablement of the technologies and platforms that will support those objectives.
As a Data Engineer you will develop and maintain data infrastructure and products. You will write, deploy and maintain software to build, integrate, manage, maintain and quality-assure data at bp. You are passionate about planning and building compelling data products and services, in collaboration with business stakeholders, Data Managers, Data Scientists, Software Engineers and Architects at bp.

You will be part of bp’s Data & Analytics Platform organisation, the group responsible for the platforms and services that underpin bp’s data supply chain. The portfolio covers technologies that support the life cycle of critical data products in bp, bringing together data producers and consumers through enablement and industrial scale operations of data ingestion, processing, storage and publishing, including data visualisation, advanced analytics, data science and data discovery platforms.

For this role specifically, you will be involved in designing and delivering the data workflows and pipelines needed to collect and process data from bp’s enterprise systems, e.g. the cloud engineering platform, networks, and data & analytics platforms, in order to provide visibility and insight into platform operations and performance. This role is key to ensuring the availability and reliability of quality data that bp’s enterprise organisation can use to detect, anticipate and prevent issues and to gain insights that improve our operations, and to giving our platform engineers timely access to the data they need to build automation for operational efficiency, self-healing and more.

Essential Criteria:

  • Bachelor's (or higher) degree, preferably in Computer Science, MIS/IT, Mathematics or a hard science.
  • Experience Range: 8 to 12 years

Key responsibilities:

  • Design complex software components, services, and applications. You’ll write design documents and review them with your software engineering and architecture peers, incorporating and quickly iterating on the feedback.
  • Own the delivery of your projects to production. You’ll follow best practices including writing high quality code, developing unit, functional, and performance tests, and creating end-to-end deployment pipelines to production to maintain a fast velocity.
  • Design and develop industrial-scale data pipelines on Azure and AWS data platforms and services: build data ingestion and publishing pipelines, develop and provision data nodes and telemetry for performance and utilisation analytics, and support the development and automation of system performance metrics.
  • Collaborate with enterprise platform teams to use existing data products, ingestion patterns or automations and so avoid bespoke development, while contributing to the enhancement and creation of these shared assets when gaps are identified.
  • Own the end-to-end technical data lifecycle and the corresponding data technology stack for your data domain, and maintain a deep understanding of the bp technology stack.
  • Write, deploy and maintain software to build, integrate, manage, maintain and quality-assure data. Take responsibility for deploying secure, well-tested software that meets privacy and compliance requirements, and develop, maintain and improve the CI/CD pipeline.

Desirable Criteria:
  • You’ll have expertise in at least one language (C/C++, Java, C#, Python) including object-oriented design. You should be proficient in data structures, algorithms, runtime complexity, API and database design, as well as unit and functional test methodologies.
  • Deep and hands-on experience designing, planning, implementing, maintaining and documenting reliable and scalable data infrastructure and data products in complex environments
  • Data Manipulation: ability to debug and maintain the end-to-end data engineering lifecycle of data products; experience designing and implementing the end-to-end data stack, including complex data systems and interoperability across cloud platforms; experience with various types of data (streaming, structured and unstructured) is a plus.
  • Software Engineering: hands-on experience with SQL and NoSQL database fundamentals, query structures and design best practices, including scalability, readability and reliability; proficiency in at least one object-oriented programming language, e.g. Python (particularly data manipulation and plotting packages such as pandas, seaborn and matplotlib), Apache Spark or Scala.
  • Scalability, Reliability, Maintenance: proven experience building scalable, reusable systems that others depend on; experience automating operations wherever possible, identifying opportunities to build for long-term productivity over short-term gains, and executing on those opportunities to improve products or services.
  • Data Domain Knowledge: proven understanding of data sources, data and analytics requirements, and the typical SLAs associated with data provisioning and consumption at enterprise scale, with interest and experience in analysing data or other enterprise platform operations activities.
  • Cloud Engineering: recent experience using data analytics offerings and services from Azure and AWS.
