Lead Engineer - Data Processing (m/f)

  • Engineering
  • Berlin, Germany


Job description

ChartMogul's engineering operates as two teams: our Data Processing team, which is responsible for the ingestion and processing of our customers' data, and our Analytics and Display team, which is responsible for generating the analytics our customers use to grow their businesses.

The Lead Engineer (Data Processing) will be responsible for scaling our data processing pipeline, which takes the raw transaction data from our customers' billing providers or their API integrations and produces the higher-level data that underlies the analytics we provide. At present, the data processing pipeline consists of two monolithic Ruby on Rails applications and sees peak loads of around 100 transactions per second. Over the coming year, we're going to break this down into high-performance microservices, most likely written in Go, to massively increase throughput and decrease the latency our customers see.

The Data Processing team currently consists of 5 engineers, but we're planning to grow it to 8 engineers plus this Lead Engineer role. We're looking for someone with experience scaling web applications that deal with large amounts of data, and who has been through a monolith-to-microservices transition. This role carries a lot of architectural responsibility, so you'll need the ability to design scalable systems and to choose the technologies that will help us deliver an amazing product to our customers.


What we're looking for:

  • Significant experience in high-throughput production systems
  • Experience delivering production software in at least two different languages (Go and Ruby preferred, but not mandatory)
  • Several years' production experience in service-based environments
  • Strong architectural skills
  • Experience leading a team of engineers
  • Fluent professional English

Nice to have:

  • Experience with message queues such as RabbitMQ, ZeroMQ or Kafka
  • Exposure to stream-processing frameworks such as Apache Storm or Kafka Streams
  • DevOps experience