An easy explanation of how all the components of this pattern work together to concurrently process a batch of jobs.

Concurrent WorkerPool Pattern

[TL;DR] Skip the intro and go straight to the Implementation Details if you want to.

When I first came to the Go language I was a bit put off by its syntax and verbosity. After a couple of months, though, I slowly began to fall in love with its simplicity, readability, performance, and small memory footprint compared with other languages.

I was particularly drawn to the language for its rich concurrency model. But despite the “new language honeymoon” phase, I also struggled a bit trying to figure out how this model actually works.

Because I was running behind a deadline…


Synchronize worker executions by using the Semaphore pattern instead of sync.WaitGroup.

Before you move forward, I want to let you know that this article won’t claim semaphores are better than WaitGroups or vice versa; it simply presents a different approach you can choose for our WorkerPool implementation, purely using channels.
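To make the channels-only idea concrete, here is a minimal sketch (the maxWorkers variable and the process function are illustrative placeholders, not taken from the article) that bounds concurrency with a buffered channel used as a counting semaphore and waits for completion without a sync.WaitGroup:

```go
package main

import "fmt"

// process stands in for whatever work each job requires.
func process(job int) {
	fmt.Println("processed job", job)
}

func main() {
	jobs := []int{1, 2, 3, 4, 5, 6, 7, 8}
	maxWorkers := 3

	// A buffered channel acts as a counting semaphore:
	// sending acquires a slot, receiving releases it.
	sem := make(chan struct{}, maxWorkers)

	for _, job := range jobs {
		sem <- struct{}{} // acquire a slot (blocks while maxWorkers jobs are in flight)
		go func(j int) {
			defer func() { <-sem }() // release the slot when done
			process(j)
		}(job)
	}

	// Wait for all workers to finish by filling the semaphore to capacity:
	// each send only succeeds once an in-flight worker releases its slot.
	for i := 0; i < maxWorkers; i++ {
		sem <- struct{}{}
	}
}
```

Topping the semaphore back up to its capacity at the end is what replaces the WaitGroup: it can only complete once every worker has released its slot.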

Introduction

[TL;DR] You can skip this and jump right to the implementation.

Last week I published the “Explain to me Go Worker Pool Pattern like I’m five” post, where we went through the pattern, its components, and how they work together. …


A theoretical and practical approach

This is the second part of the “Optimize your data access by using CQRS Architecture Pattern — A theoretical and practical approach” series. In Part I of this series, we went over the CQRS pattern’s concepts and benefits, and also when this solution might be suitable and convenient to implement. Hence, if this is the first article of the series you have reached, it is advisable to read Part I before moving forward with this one. Otherwise, you can skip the first part if you are just interested in walking through the PoC (Proof of Concept). …


A theoretical and practical approach

Have you ever started developing a new service with a simple CRUD architecture for a certain domain object in your bounded context, and it was “ok” at the time, while the surrounding ecosystem kept growing and scaling until you started noticing either (or both) of the following:

  1. The queries you need to execute around your object become complex to deal with (multiple HTTP calls to other services, expensive joins across tables, etc.).
  2. Performance degradation on your writes, since your service ended up handling far more read operations than writes.

Certainly, some of the issues…


A different approach to batch processing, and how to unlock the power of data pipelines through the Go concurrency model.

The term pipeline, applied to computer science, is nothing more than a series of stages that take data in, perform some operation on it, and pass the processed data back out as a result.

Thus, when using this pattern, you can encapsulate the logic of each stage and scale your features quickly by adding, removing, or modifying stages; each stage becomes easy to test, not to mention the huge benefit of leveraging concurrency, which was the motivation for this article.
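As a rough sketch of what a stage can look like in Go (the gen and square stage names are mine, purely for illustration), each stage owns an outbound channel, does its work in a goroutine, and closes the channel when it finishes, so stages compose naturally and run concurrently:

```go
package main

import "fmt"

// gen is a source stage: it emits the input values on a channel.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square is a processing stage: it reads values, transforms them,
// and passes the results downstream.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

func main() {
	// Compose the stages into a pipeline; each runs in its own goroutine.
	for v := range square(gen(1, 2, 3, 4)) {
		fmt.Println(v)
	}
}
```

Because every stage follows the same shape, adding, removing, or reordering stages is mostly a matter of re-wiring channels.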

A few years back, I had a chance to work for a food and CPG delivery company as…

Lucas Godoy

Staff Software Engineer at Pomelo
