Concurrency in Go vs Java
Why most concurrency problems are design mistakes, not language issues

Concurrency is one of those topics that feels solved, until it breaks in production.
Go gives us goroutines and channels. Java gives us executors, futures and virtual threads.
Different tools, different syntax… and the same mistakes.
This article is not about benchmarks or language wars ;-) It’s about the real concurrency bugs in production systems - both in Go and in Java.
Concurrency is Not Parallelism (Still)
This mistake never goes away.
Concurrency is about structuring work.
Parallelism is about executing work at the same time.
We’ve discussed Concurrency is not Parallelism recently, but - as a reminder - you can write highly concurrent code that:
runs on a single core
blocks on I/O
performs worse than a sequential version
How does this show up?
Go: “Goroutines are cheap, so I’ll just spawn one per request”
Java: “Virtual threads are lightweight, so I don’t need to think about limits anymore”
Both are wrong for the same reason: you didn’t analyze the workload.
CPU-bound and I/O-bound workloads behave very differently under concurrency.
“Fire and Forget” is a Production Bug
This is probably the most common concurrency bug:
Go:
go processOrder(order)
Java:
executor.submit(() -> processOrder(order));
At first glance, this looks harmless.
In reality, you just lost:
lifecycle control
error handling
cancellation
observability
What happens in production:
goroutines / threads keep running after the request is gone
errors disappear into logs (or nowhere)
resource usage slowly climbs until something collapses
Rule of thumb: “If you start concurrent work, you must also define who owns it and who stops it”.
No Backpressure = Self-Inflicted DoS
Go:
for req := range requests {
    go handle(req)
}
Java:
requests.forEach(req ->
    executor.submit(() -> handle(req))
);
The bug:
no limits
no queue size
no load scheduling
The result:
traffic spike → thread explosion
memory pressure
GC storms (especially in Java)
latency spikes everywhere
Backpressure is not an optimization. It’s a survival mechanism.
Backpressure in Go (conceptually):
workers := make(chan struct{}, 10) // limit concurrency

func handle(req Request) {
    workers <- struct{}{}        // acquire slot
    defer func() { <-workers }() // release slot
    process(req)
}
If all slots are taken:
the caller blocks
work slows down naturally
the system stays stable
The blocking is backpressure.
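Here is a runnable sketch of the same semaphore idea, counting peak concurrency to show the limit actually holds - the `runBatch` helper and its bookkeeping are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

const maxWorkers = 10

// runBatch launches n tasks through a semaphore of maxWorkers slots and
// returns the peak number of tasks that ever ran at the same time.
// The send on `slots` blocks when all slots are taken - that blocking
// is the backpressure.
func runBatch(n int) (peak int) {
	slots := make(chan struct{}, maxWorkers)
	var (
		mu       sync.Mutex
		inFlight int
		wg       sync.WaitGroup
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			slots <- struct{}{}        // acquire a slot (blocks when full)
			defer func() { <-slots }() // release it

			mu.Lock()
			inFlight++
			if inFlight > peak {
				peak = inFlight
			}
			mu.Unlock()

			// ... real work (process(req)) would run here ...

			mu.Lock()
			inFlight--
			mu.Unlock()
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	fmt.Println("peak concurrency for 100 tasks:", runBatch(100))
}
```

No matter how many tasks arrive, concurrency never exceeds the slot count - the excess simply waits.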
Backpressure in Java (conceptually):
ExecutorService executor =
    new ThreadPoolExecutor(
        10, 10,
        0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<>(100)
    );
Here:
max 10 workers
queue limited to 100 tasks
when full → rejection policy kicks in
The rejection is backpressure.
Cancellation Is Treated as Optional (It’s Not)
Cancellation is one of those features everyone “supports” and almost no one uses correctly.
Go:
context.Context passed around “just in case”
nobody checks ctx.Done()
Java:
InterruptedException ignored
virtual threads assumed to auto-cancel everything
Why this hurts:
request times out
work continues anyway
side effects happen after the client is gone
Cancellation must be:
explicit
propagated
actively checked
If cancellation is an afterthought, your system will behave unpredictably under load.
Shared State: Different Tools, Same Pain
Go and Java take different approaches here, but developers still manage to get it wrong.
Go:
channels used as a magic solution
mutexes added later, without clear ownership
Java:
synchronized everywhere
mutable shared objects crossing thread boundaries
The real problem: Not synchronization. Ownership.
If it’s unclear:
who owns the data
who is allowed to mutate it
when it can be accessed
…then concurrency bugs are inevitable.
Prefer:
immutable data
clear boundaries
single-writer patterns
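A single-writer sketch in Go: one goroutine owns the map, everyone else talks to it through channels, and readers get copies. The `counter` type and its methods are illustrative:

```go
package main

import "fmt"

// counter demonstrates the single-writer pattern: exactly one goroutine
// owns the map; no mutex, no shared mutable state crossing boundaries.
type counter struct {
	inc  chan string
	read chan chan map[string]int
}

func newCounter() *counter {
	c := &counter{
		inc:  make(chan string),
		read: make(chan chan map[string]int),
	}
	go func() {
		counts := map[string]int{} // owned by this goroutine only
		for {
			select {
			case key := <-c.inc:
				counts[key]++
			case reply := <-c.read:
				snapshot := make(map[string]int, len(counts))
				for k, v := range counts { // hand out a copy, never the original
					snapshot[k] = v
				}
				reply <- snapshot
			}
		}
	}()
	return c
}

func (c *counter) Inc(key string) { c.inc <- key }

func (c *counter) Snapshot() map[string]int {
	reply := make(chan map[string]int)
	c.read <- reply
	return <-reply
}

func main() {
	c := newCounter()
	c.Inc("orders")
	c.Inc("orders")
	c.Inc("errors")
	fmt.Println(c.Snapshot()) // map[errors:1 orders:2]
}
```

Ownership is now unambiguous: only the inner goroutine mutates `counts`, and everyone else communicates.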
Blocking Where It Hurts Most
Blocking is not evil. Blocking in the wrong place is.
Go:
blocking I/O in unlimited goroutines
time.Sleep used for coordination
Java:
blocking calls inside virtual threads
mixing async and blocking APIs blindly
Symptoms:
thread pools stuck
request queues growing
sudden latency cliffs
Virtual threads reduce the cost of blocking, but they do not eliminate it.
Debugging Concurrency “By Logs”
Logs don’t explain concurrency issues. They only confirm that something already went wrong.
Common mistakes:
no metrics
no visibility into queues or workers
debugging via stack traces only
What actually helps:
Go: pprof, goroutine dumps
Java: JFR, thread dumps
metrics like:
queue depth
active workers
execution time distribution
If you can’t see concurrency behavior, you can’t fix it.
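A minimal sketch of such metrics in Go, using atomic counters you could then export via expvar, Prometheus, or similar - the `queueMetrics` type and its method names are illustrative:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// queueMetrics gives the visibility logs cannot: live counters for
// queue depth and active workers, readable at any moment.
type queueMetrics struct {
	queueDepth    atomic.Int64
	activeWorkers atomic.Int64
}

func (m *queueMetrics) Enqueue()    { m.queueDepth.Add(1) }
func (m *queueMetrics) StartWork()  { m.queueDepth.Add(-1); m.activeWorkers.Add(1) }
func (m *queueMetrics) FinishWork() { m.activeWorkers.Add(-1) }

func main() {
	var m queueMetrics
	m.Enqueue()
	m.Enqueue()
	m.StartWork()
	fmt.Println("queue depth:", m.queueDepth.Load(), "active:", m.activeWorkers.Load())
	// queue depth: 1 active: 1
}
```

Wire these calls into your enqueue/dequeue paths and the "is the queue growing?" question becomes a lookup, not an archaeology dig through logs.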
The Biggest Shared Mistake: Thinking Too Low-Level
Most concurrency bugs don’t come from goroutines or threads.
They come from thinking in terms of how work runs, instead of how work flows.
Stop designing systems around:
goroutines
threads
and executors.
Start designing around:
data flow
limits
ownership
failure models
Design Over Language
Go and Java look very different on the surface.
But in production, concurrency failures usually come from the same place: design, not language.
Go makes it easy to start concurrent work
Java makes you think harder before you do
neither will save you from bad assumptions
Concurrency is not a feature.
It’s a responsibility.
Cheers!
Sources
Virtual Threads: https://docs.oracle.com/en/java/javase/21/core/virtual-threads.html
Concurrency: https://docs.oracle.com/en/java/javase/21/core/concurrency.html
No K.I.S.S.! samples for today ¯\_(ツ)_/¯



