Another update on streaming work

This is an update on my previous post about work by Typesafe to add resiliency to Spark Streaming.

The part of this work that may enter Spark 1.5.0 consists of adding a dynamic throttle to streaming execution: it continuously estimates the maximum number of elements per second the system is able to take in, and regulates data ingestion based on that estimate. My colleague Luc, the author of a cool test bed for this feature, has written a summary of the main results; you should go read it. The internal pull requests, tied to SPARK-7398, are out; you may want to look at them, or at the design docs, if you're technically inclined.
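To give a feel for the idea (not the actual Spark implementation), such a throttle can be sketched as a small feedback rule: after each batch, derive a new rate bound from the rate the system actually sustained, correct for any accumulated scheduling delay, and keep the user's old fixed rate limit as an upper bound. All names and constants below are illustrative assumptions.

```python
# Illustrative sketch of a batch-feedback rate estimator, in the spirit of
# SPARK-7398. This is NOT Spark's code; names and constants are made up.

def estimate_rate(num_elements, processing_delay_s, scheduling_delay_s,
                  batch_interval_s, fixed_rate_limit):
    """Return a new per-second ingestion bound for the next batch."""
    if processing_delay_s <= 0:
        return fixed_rate_limit
    # Rate the system actually sustained on the last batch.
    processing_rate = num_elements / processing_delay_s
    # If batches queued up, aim to drain the backlog over one interval.
    backlog_correction = scheduling_delay_s * processing_rate / batch_interval_s
    new_rate = processing_rate - backlog_correction
    # Keep the old fixed limit as an upper bound (for smooth fault
    # recovery), and never drop to zero.
    return max(1.0, min(new_rate, fixed_rate_limit))

# Example: 10,000 elements took 2.5 s to process in a 2 s batch interval,
# with 1 s of scheduling delay and a fixed limit of 5,000 elements/s.
print(estimate_rate(10_000, 2.5, 1.0, 2.0, 5_000.0))  # prints 2000.0
```

The clamp on the last line encodes Gérard Maas's suggestion mentioned below: the estimator may lower the rate freely, but never raises it past the previously configured fixed limit.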

This differs from the previous PoC implementation in that it does not include congestion management strategies other than throttling (e.g. sampling or other lossy strategies), and does not offer a connection to Reactive-Streams-compliant data producers.

These pull requests have not yet been issued against the Spark repository itself, but I'd still like to take this occasion to mention the great OSS work done so far on the subject. Besides Luc's great test bed, my colleagues Christopher, Dean, Iulian, and the whole Akka team (notably Björn, Konrad, Endre, Jonas, Viktor, and Roland) have continuously provided great comments, criticism, and discussion since the first internal proposals in December 2014. Jerry Shao from Intel provided early work and a PR in April that were a great inspiration. Gérard Maas from Virdata suggested the essential idea that the old fixed-rate limits should be kept as an upper bound, to ensure smooth fault recovery. Monal Daxini, Chris Fregly, and Reynold Xin provided tough questions and comments in a video meeting during the Spark Summit in June. Helena Edelson from DataStax offered to interface with the ReactiveReceiver present at an early stage of this work. And of course Patrick Wendell and Tathagata Das from Databricks have been arguing, discussing, improving, and shepherding this for a long time (especially the latter), ever since a meeting in San Francisco in early March of this year. That's not to mention the fantastic environment Typesafe offers its developers, including in ways the above doesn't make obvious, for which I'm extremely grateful.