Wait Groups

This is a simple way of handling lots of jobs in parallel with error checking.

It's only suitable if the jobs are lightweight, as it puts no limit on the number of goroutines running at once.

// WaitGroup, plus an errors channel that doubles up to indicate when the jobs are complete
var wg sync.WaitGroup
errc := make(chan error)

for j := range jobs {
	wg.Add(1)
	go func(j string) {
		defer wg.Done()
		// Do some work...

		// Report errors
		if err != nil {
			errc <- err
		}
	}(j)
}

// Wait for all jobs to complete, then close errc
go func() {
	wg.Wait()
	close(errc)
}()

// Range over errc until it is closed - which will occur after all jobs have completed
var errs error
for err := range errc {
	if err != nil {
		errs = multierror.Append(errs, err) // Hashicorp go-multierror package; could just log instead
	}
}
if errs != nil {
	// ...
}

Worker Pool

This is better if you're resource-constrained and don't want too many goroutines running at once.

A simple approach is the one at https://gobyexample.com/worker-pools, but it requires you to know the number of jobs ahead of time.

This is the technique I used in pixel-slicer: it allows an unknown number of jobs with a fixed number of workers, monitors for errors, and ensures that all workers have finished before terminating.

jobs := make(chan JobType, 1024)
errc := make(chan error)
completion := make(chan bool)
for w := 1; w <= numWorkers; w++ {
	go StartWorker(jobs, errc, completion)
}

// Queue tasks on jobs...
jobs <- task
// Once we're done queueing jobs
close(jobs)

go func() {
	for i := 1; i <= numWorkers; i++ {
		<-completion
	}
	fmt.Println("All workers have finished")
	close(errc)
}()

for err := range errc {
	fmt.Printf("Error processing job: %s\n", err)
}

// Worker definition
func StartWorker(jobs <-chan JobType, errc chan<- error, completion chan<- bool) {
	for j := range jobs {
		// Do work...

		// Report errors
		if err != nil {
			errc <- errors.Wrap(err, "Error processing job")
		}
	}

	// Called when jobs is closed
	completion <- true
}