
Senior Data Engineer - dataWorx

  • Location: India - Maharashtra - Pune
  • Travel required: Yes, up to 10%
  • Job category: IT&S Group
  • Relocation available: Yes - Domestic (in-country) only
  • Job type: Professionals
  • Job code: 126335BR
  • Experience level: Intermediate

Job summary

Role Synopsis:
As part of bp “reinvent”, we have created a major new business line called “Innovation & Engineering” (I&E). One key remit of this group is to drive the transformation of the company through its use of digital and data. A major digital sub-team within I&E is Digital Production & Business Services (DP&BS). DP&BS are responsible for all digital and data initiatives and operations across the following areas of the bp business:
  • Production & Projects including Health, Safety, Environment & Carbon
  • Refining & Operations
  • Wells & Subsurface
  • Business Services including Finance, Procurement, People & Culture, Performance Management
  • Strategy & Sustainability
DataWorx is the name of the data team responsible for all data within these areas. We are developing deep data capabilities to transform the access, supply, control and quality of our vast and ever-growing data reserves, which are measured in petabytes. The DataWorx team covers many data sub-disciplines, including data science, data analytics, data engineering and data management, as well as specialist areas such as geospatial, remote sensing, knowledge management and digital twin. The team works with a wide variety of data, from structured to unstructured, and handles both real-time streaming and batch data processing.

Key Responsibilities:
  • Architects, designs, implements and maintains reliable and scalable data infrastructure
  • Leads the team to write, deploy and maintain software to build, integrate, manage, maintain, and quality-assure data
  • Architects, designs, develops, and delivers large-scale data ingestion, data processing, and data transformation projects on the Azure cloud
  • Mentors and shares knowledge with the team to provide design reviews, discussions and prototypes
  • Leads customer discussions from a technical standpoint to deploy, manage, and audit best practices for cloud products
  • Leads the team to follow software & data engineering best practices (e.g. technical design and review, unit testing, monitoring, alerting, source control, code review & documentation)
  • Leads the team to deploy secure and well-tested software that meets privacy and compliance requirements; develops, maintains and improves the CI/CD pipeline
  • Leads the team in following site-reliability engineering best practices: participates in on-call rotations for the services it maintains; defines and maintains SLAs; designs, builds, deploys and maintains infrastructure as code; containerizes server deployments
  • Actively contributes to improving developer velocity
  • Works as part of a cross-disciplinary team, collaborating closely with other data engineers, software engineers, data scientists, data managers and business partners in a Scrum/Agile setup

Job Requirements:

Education:
Bachelor's or higher degree in Computer Science, Engineering, Information Systems or another quantitative field

Experience:
  1. Years of experience: 8 to 12 years, with a minimum of 5 to 7 years of relevant experience
  2. Deep and hands-on experience (typically 5+ years) designing, planning, productionizing, maintaining and documenting reliable and scalable data infrastructure and data products in complex environments
  3. Hands-on experience with:
    1. Databricks and using Spark for data processing (batch and/or real-time)
    2. Configuring Delta Lake on Azure Databricks
    3. Languages: Python, Scala, SQL
    4. Cloud platforms: Azure (ideally) or AWS
    5. Azure Data Factory
    6. Azure Data Lake, Azure SQL DB, Synapse, and Cosmos DB
    7. Data Management Gateway, Azure Storage options, Stream Analytics and Event Hubs
    8. Designing data solutions in Azure, including data distributions and partitions, scalability, disaster recovery and high availability
    9. Data modeling with relational or data-warehouse systems
    10. Advanced hands-on experience with different query languages
    11. Azure DevOps (or similar tools) for source control and building CI/CD pipelines
  4. Understanding of data structures and algorithms and their performance
  5. Experience designing and implementing large-scale distributed systems
  6. Deep knowledge and hands-on experience in technologies across all data lifecycle stages
  7. Stakeholder management and ability to lead large organizations through influence

Desirable Criteria:
  • Strong stakeholder management
  • Continuous learning and improvement mindset
  • Boy Scout mindset to leave the system better than you found it

Key Behaviours:
  • Empathetic: Cares about our people, our community and our planet
  • Curious: Seeks to explore and excel
  • Creative: Imagines the extraordinary
  • Inclusive: Brings out the best in each other

