
Concurrency is not Parallelism

Parallelism makes it faster. Concurrency makes it work.


We - as developers - love to talk about “running things at once” - but not everyone means the same thing. Some chase speed, others chase structure, and that’s where the confusion starts.

Parallelism makes it faster. Concurrency makes it work.

  • Parallelism is about using more cores to finish tasks sooner.

  • Concurrency is about organizing your code so multiple things can happen - even if they don’t all run at the same time.

If you’ve ever built an API that juggles thousands of requests or a background worker that consumes messages non-stop, you’ve touched concurrency. And if you’ve tried to squeeze every ms out of CPU-bound code - that’s parallelism.

Quick Defs (Practical, Not Academic)

This always confuses a lot of engineers.

The simplest way to explain the difference is to imagine yourself writing with both hands at once on different pages - this is parallelism.

When you write with only one hand and swap between pages - this is concurrency.

Easy, right?

  • Parallelism = true simultaneity / executing multiple things physically at once

  • Concurrency = multitasking / managing multiple things

What Actually Limits Us

  • CPU cores → true parallel speedup

  • Blocking I/O → this is where concurrency shines

  • Context switching cost → too many “workers” means overhead

  • Coordination cost → locks, queues, channels can bottleneck

Use Cases

Based on typical workloads:

  • Thousands of outbound HTTP calls:

    • We want high throughput, acceptable latency

    • Concurrency overlaps the waiting time on sockets

  • Message consumers (Kafka, Rabbit):

    • We want steady throughput

    • Concurrency scales workers while controlling commit/ack operations

  • CPU-bound (image resize, JSON schema validation or huge payloads):

    • We want raw parallel speedup

    • Concurrency alone won’t help, we need more cores

Misconceptions

Nine women can’t give birth to a child in one month

  • “More threads/goroutines → faster” - not if you’re CPU-bound or thrashing the scheduler

  • “Async always beats sync” - async done poorly adds latency and complexity

  • “Locks are bad, channels are good” - both are tools; use them wisely, otherwise you end up with deadlocks or stalls

A Tiny Thought Experiment

We must call 1000 slow 3rd party APIs (avg 200 ms).

  • Serial is about 200 s wall-clock

  • Concurrent (well-tuned pool) is close to the slowest batch (~200 - 400 ms), plus overhead and rate-limits

The win comes from overlapping waits, not raw CPU.

Write for Concurrency, Optimize for Parallelism

It’s easy to mix them up, but here’s the practical takeaway:

  • Concurrency is how you structure your program to handle multiple things at once

  • Parallelism is how the hardware executes them faster

  • You can have one w/o the other - and most of the time, you start with concurrency and earn parallelism later

When you write Go code, every go func() you spawn is a bet that your runtime will handle the juggling well.

When you write Java code, every thread pool or async executor is your manual way of telling the system how much to juggle.

But neither concurrency nor parallelism is a silver bullet. They won’t fix a slow algorithm or bad I/O patterns - they’ll just help you manage how the slowness happens.

So, before adding more threads, goroutines, or async magic, ask one simple question:

Am I trying to make this faster, or just make it work better?

Knowing that difference is the real skill.

Cheers!