Table of contents
- A Language Where Concurrency Is Easy
- Goroutines — Lightweight Concurrent Execution
- Channels — Communication Between Goroutines
- Directional Channels
- Buffered Channels
- select — Waiting on Multiple Channels
- WaitGroup — Waiting for Goroutines to Complete
- Comprehensive Example — Concurrent URL Checker
A Language Where Concurrency Is Easy
In most languages, concurrency is an advanced topic. You create threads, apply locks, and rack your brains to avoid deadlocks. Go solved this problem at the language level. Concurrent programming is possible with just two tools — goroutines and channels — and the syntax is remarkably concise.
Go’s creators have a favorite saying: “Don’t communicate by sharing memory; share memory by communicating.” This philosophy is baked directly into channels.
Goroutines — Lightweight Concurrent Execution
A goroutine is a lightweight thread managed by the Go runtime. Goroutines are much lighter than OS threads (the initial stack size is just a few KB), so you can spin up thousands simultaneously without problems.
Starting one is as simple as putting go before a function call.
```go
package main

import (
	"fmt"
	"time"
)

func sayHello(name string) {
	for i := 0; i < 3; i++ {
		fmt.Printf("%s: Hello! (%d)\n", name, i)
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	go sayHello("goroutineA")
	go sayHello("goroutineB")

	// main must wait since all goroutines terminate when main exits
	time.Sleep(500 * time.Millisecond)
	fmt.Println("done")
}
```
go sayHello("goroutineA") runs sayHello in a new goroutine. The main function doesn’t wait and immediately proceeds to the next line, while the two goroutines run concurrently. The output order may vary each time, which is a characteristic of concurrency.
Using time.Sleep to wait is a stopgap. In real code you shouldn’t do this — use channels or WaitGroup, which are covered shortly.
Channels — Communication Between Goroutines
A channel is a pipe for sending and receiving values between goroutines. One side sends a value into the channel (ch <- v), and the other receives from it (<-ch).
Here’s a diagram of the structure where multiple goroutines exchange data through a channel.
```mermaid
sequenceDiagram
    participant Main as main goroutine
    participant Ch as chan int
    participant G1 as Goroutine A<br/>(sum first half)
    participant G2 as Goroutine B<br/>(sum second half)
    Main->>Ch: make(chan int)
    Main->>G1: go sum(numbers[:5], ch)
    Main->>G2: go sum(numbers[5:], ch)
    G1-->>G1: 1+2+3+4+5 = 15
    G2-->>G2: 6+7+8+9+10 = 40
    G1->>Ch: ch <- 15
    Ch->>Main: a := <-ch (15)
    G2->>Ch: ch <- 40
    Ch->>Main: b := <-ch (40)
    Main-->>Main: a + b = 55
```
```go
package main

import "fmt"

func sum(numbers []int, ch chan int) {
	total := 0
	for _, n := range numbers {
		total += n
	}
	ch <- total // Send result to channel
}

func main() {
	numbers := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
	ch := make(chan int)

	// Split in half and compute concurrently
	go sum(numbers[:5], ch)
	go sum(numbers[5:], ch)

	a := <-ch              // Receive first result
	b := <-ch              // Receive second result
	fmt.Println(a, b, a+b) // 15 40 55 (order of a and b may vary)
}
```
make(chan int) creates a channel for sending and receiving int values. ch <- total sends a value, and <-ch receives one. Crucially, the sending side blocks until the receiving side is ready, and the receiving side blocks until the sending side is available. This synchronization property enables safe communication without separate locks.
Directional Channels
You can restrict a channel’s direction in function parameters. Distinguishing between send-only and receive-only channels lets you catch mistakes at compile time.
```go
package main

import "fmt"

// Send-only channel
func produce(ch chan<- string) {
	ch <- "data 1"
	ch <- "data 2"
	close(ch)
}

// Receive-only channel
func consume(ch <-chan string) {
	for msg := range ch {
		fmt.Println("received:", msg)
	}
}

func main() {
	ch := make(chan string)
	go produce(ch)
	consume(ch) // Consume directly in the main goroutine
}
```
chan<- string is send-only, and <-chan string is receive-only. If produce accidentally tries <-ch, it gets a compile error. Restricting direction isn’t mandatory, but it’s a good practice for making your code’s intent clear and preventing bugs.
close(ch) closes the channel, signaling that no more values will be sent. range keeps receiving values until the channel is closed.
Buffered Channels
Default channels are unbuffered, meaning both the sender and receiver must be ready simultaneously for communication to occur. With a buffered channel, you can send values up to a certain count without a receiver being ready.
```go
package main

import "fmt"

func main() {
	// Buffer size 3
	ch := make(chan string, 3)

	ch <- "first"  // Doesn't block
	ch <- "second" // Doesn't block
	ch <- "third"  // Doesn't block
	// ch <- "fourth" // Would block here (buffer full)

	fmt.Println(<-ch) // first
	fmt.Println(<-ch) // second
	fmt.Println(<-ch) // third
}
```
The second argument to make(chan string, 3) is the buffer size. When the buffer is full, sending blocks until space frees up; when empty, receiving blocks.
When should you use buffered channels? They’re useful when you want to cushion speed differences between producers and consumers. For example, collecting logs for batch processing or queuing requests for sequential handling. However, setting the buffer too large wastes memory and can hide problems that only surface much later, so choose an appropriate size thoughtfully.
select — Waiting on Multiple Channels
select processes whichever channel operation is ready first. It looks similar to switch, but each case is a channel operation.
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch1 := make(chan string)
	ch2 := make(chan string)

	go func() {
		time.Sleep(100 * time.Millisecond)
		ch1 <- "channel 1 done"
	}()
	go func() {
		time.Sleep(200 * time.Millisecond)
		ch2 <- "channel 2 done"
	}()

	// Process whichever arrives first
	for i := 0; i < 2; i++ {
		select {
		case msg := <-ch1:
			fmt.Println(msg)
		case msg := <-ch2:
			fmt.Println(msg)
		}
	}
}
```
select runs whichever case's channel operation (send or receive) becomes ready first. If multiple cases are ready simultaneously, it picks one at random. This property makes implementing timeouts clean.
```go
select {
case result := <-ch:
	fmt.Println("result:", result)
case <-time.After(3 * time.Second):
	fmt.Println("timeout!")
}
```
time.After returns a channel that sends a value after the specified duration. If the result doesn’t arrive within 3 seconds, the timeout case executes.
WaitGroup — Waiting for Goroutines to Complete
Earlier we used time.Sleep to wait for goroutines, but that shouldn’t be used in real code. You can’t know exactly when they’ll finish. sync.WaitGroup solves this problem.
```go
package main

import (
	"fmt"
	"sync"
)

func worker(id int, wg *sync.WaitGroup) {
	defer wg.Done() // Decrement counter when function exits
	fmt.Printf("worker %d started\n", id)
	// Do some work...
	fmt.Printf("worker %d done\n", id)
}

func main() {
	var wg sync.WaitGroup

	for i := 1; i <= 5; i++ {
		wg.Add(1) // Increment counter
		go worker(i, &wg)
	}

	wg.Wait() // Block until counter reaches 0
	fmt.Println("all workers done")
}
```
The pattern is simple. Register the number of goroutines to wait for with Add, call Done when each goroutine finishes, and Wait blocks until everything completes. Using defer wg.Done() ensures the counter decreases regardless of how the function exits (normally or via panic).
One important note: wg.Add(1) must be called before the go keyword. If Add is called inside the goroutine, there’s a race condition where Wait might run first and pass immediately when the count is still 0.
Comprehensive Example — Concurrent URL Checker
Let’s combine everything we’ve learned to build a program that checks the response status of multiple URLs concurrently.
```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

type Result struct {
	URL    string
	Status string
	Took   time.Duration
}

func checkURL(url string, ch chan<- Result, wg *sync.WaitGroup) {
	defer wg.Done()
	start := time.Now()
	resp, err := http.Get(url)
	took := time.Since(start)
	if err != nil {
		ch <- Result{URL: url, Status: "error: " + err.Error(), Took: took}
		return
	}
	defer resp.Body.Close()
	ch <- Result{URL: url, Status: resp.Status, Took: took}
}

func main() {
	urls := []string{
		"https://go.dev",
		"https://github.com",
		"https://example.com",
	}

	ch := make(chan Result, len(urls))
	var wg sync.WaitGroup

	for _, url := range urls {
		wg.Add(1)
		go checkURL(url, ch, &wg)
	}

	// Close the channel in a separate goroutine once all are done
	go func() {
		wg.Wait()
		close(ch)
	}()

	for result := range ch {
		fmt.Printf("%-30s %s (%v)\n", result.URL, result.Status, result.Took)
	}
}
```
Goroutines issue the HTTP requests concurrently, a directional channel delivers the results (buffered to len(urls) so no sender ever blocks), and WaitGroup waits for all requests to finish before closing the channel so that range can terminate. Sequential processing would take the sum of all URL response times, but with concurrent requests it takes only about as long as the slowest URL. That's the power of concurrency.
Goroutines and channels are the core features that define Go’s identity. You start concurrent execution with a single go keyword, safely exchange data through channels, and elegantly branch multiple paths with select. A solid grasp of these basics is necessary to understand the practical concurrency patterns covered in the next part.
The next part covers worker pools, fan-out/fan-in, timeout and cancellation with context, and race conditions with mutexes.