A different approach to batch processing, and how to harness the power of data pipelines through the Go concurrency model.

Introduction to pipelines

In computer science, a pipeline is nothing more than a series of stages that take data in, perform some operation on that data, and pass the processed data back out as a result.

Thus, when using this pattern, you can encapsulate the logic of each stage and scale your features quickly by adding, removing, or modifying stages. Each stage also becomes easy to test in isolation, not to mention the huge benefit of leveraging concurrency, which was the motto for this article.

Previous problem and solution

A few years back, I had a chance to work for a food and CPG delivery company as…

Lucas Godoy

IT Project Lead at Mercadolibre
