Intro to virtual threads: A new approach to Java concurrency

No matter how much heap you allocate, you have to factor in the extra memory consumed by your threads. Creating a thread carries a significant cost, which is why we have thread pools, and why we were taught not to create too many threads on the JVM: the context switching and memory consumption will kill us.

  • We would also want to obtain a fiber’s stack trace for monitoring/debugging, as well as its state (suspended/running), etc.
  • When these features are production ready, it will be a big deal for libraries and frameworks that use threads or parallelism.
  • It seems like RestTemplate, or any other blocking API, is exciting again.
  • Besides, the lock-free scheduling implementation greatly reduces scheduling overhead compared to the kernel implementation.
  • After all, the smaller the degree of concurrency, the easier the system is to understand.

Using structured concurrency, it’s actually fairly simple. Once we reach the last line, it will wait for all images to download. Once again, contrast that with your typical code, where you would have to create a thread pool and make sure it’s fine-tuned. Notice that with a traditional thread pool, what you had to do was essentially make sure that your thread pool is not too big: 100 threads, 200 threads, 500, whatever.
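As a sketch of that structure: the `downloadImage` method below is a stand-in for a real download, and this uses the virtual-thread-per-task executor from JDK 21 rather than the dedicated structured-concurrency API, whose shape may still change.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DownloadAll {
    // Stand-in for a real image download; just simulates blocking I/O.
    static String downloadImage(String url) throws InterruptedException {
        Thread.sleep(10);
        return "bytes-of-" + url;
    }

    static List<String> downloadAll(List<String> urls) throws Exception {
        List<Future<String>> futures = new ArrayList<>();
        // close(), run implicitly by try-with-resources, waits for all tasks.
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (String url : urls) {
                futures.add(exec.submit(() -> downloadImage(url)));
            }
        } // once we reach this line, every download has completed
        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) results.add(f.get());
        return results;
    }
}
```

The try-with-resources block plays the role of the “last line” described above: the block does not exit until every submitted download has finished.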

Java to get goroutine-like virtual threads!

There are also chances for memory leaks, thread locking, etc. The first eight threads took a wall-clock time of about two seconds to complete, the next eight took about four seconds, and so on. Because the executed code doesn’t hit any of the JDK’s blocking methods, the threads never yield and thus usurp their carrier threads until they have run to completion. This represents an unfair scheduling scheme of the threads.
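A minimal sketch of that behavior, assuming a default scheduler whose parallelism equals the number of cores: the tasks below never call a blocking JDK method, so each one holds its carrier thread until it finishes, and with more tasks than cores they complete in batches.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CpuBoundDemo {
    // A purely CPU-bound task: no JDK blocking call, so the virtual thread
    // running it never yields its carrier thread.
    static long spin(long iterations) {
        long acc = 0;
        for (long i = 0; i < iterations; i++) acc += i;
        return acc;
    }

    static long runAll(int tasks, long iterations) throws InterruptedException {
        AtomicLong total = new AtomicLong();
        Thread[] threads = new Thread[tasks];
        for (int i = 0; i < tasks; i++) {
            threads[i] = Thread.startVirtualThread(() -> total.addAndGet(spin(iterations)));
        }
        for (Thread t : threads) t.join();
        // With the default scheduler, only about as many of these tasks make
        // progress at once as there are cores; the rest wait, which produces
        // the batched completion times described above.
        return total.get();
    }
}
```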

This is a replacement for instantiating a thread and calling thread.start(). As we want fibers to be serializable, continuations should be serializable as well. If they are serializable, we might as well make them cloneable, as the ability to clone continuations actually adds expressivity. It is, however, a very serious challenge to make continuation cloning useful enough for such uses, as Java code stores a lot of information off-stack, and to be useful, cloning would need to be “deep” in some customizable way. In the literature, nested continuations that allow such behavior are sometimes called “delimited continuations with multiple named prompts”, but we’ll call them scoped continuations.

Virtual Threads: New Foundations for High-Scale Java Applications.

Posted: Fri, 23 Sep 2022 07:00:00 GMT [source]

In that case, it’s actually fairly easy to get into a situation where your garbage collector will have to do a lot of work, because you have a ton of virtual threads. You don’t pay the price of platform threads running and consuming memory, but you do get the extra price when it comes to garbage collection. The garbage collection may take significantly more time. This was actually an experiment done by the team behind Jetty.

Tasks, Not Threads

StructuredTaskScope also ensures the following behavior automatically. Preemption by the Java scheduler is not currently supported, but it may be added in the future; after all, goroutines were not preempted in the early days of Go either. Beyond that, one thing the Loom authors have suggested is that when you want to limit concurrency, the better way to do that is with concurrency constructs like semaphores, rather than relying on a fixed pool size. They’re not critical in the same way “functions/procedures” are not critical, but you’re going to end up with a messy code base when it comes to reality.
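A sketch of the semaphore approach: spawn one virtual thread per task and let a `Semaphore`, not the pool size, cap how many run at once. The helper below is illustrative; it tracks the peak concurrency so the cap can be observed.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class LimitWithSemaphore {
    static int maxObservedConcurrency(int tasks, int permits) throws InterruptedException {
        Semaphore gate = new Semaphore(permits);   // the concurrency limit
        AtomicInteger active = new AtomicInteger();
        AtomicInteger maxActive = new AtomicInteger();
        Thread[] threads = new Thread[tasks];
        for (int i = 0; i < tasks; i++) {
            threads[i] = Thread.startVirtualThread(() -> {
                try {
                    gate.acquire();
                    try {
                        int now = active.incrementAndGet();
                        maxActive.accumulateAndGet(now, Math::max);
                        Thread.sleep(5); // simulate blocking work
                        active.decrementAndGet();
                    } finally {
                        gate.release();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        for (Thread t : threads) t.join();
        return maxActive.get();
    }
}
```

Unlike a fixed pool, this keeps one cheap virtual thread per task while the semaphore alone decides how many of them do work simultaneously.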

java loom vs golang

When I ran this code and timed it, I got the numbers shown here. I get better performance when I use a thread pool with Executors.newCachedThreadPool(). Let’s try to run 100,000 tasks using platform threads. Let’s look at some examples that show the power of virtual threads.
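As a scaled-down sketch of the virtual-thread side of that experiment (the task body is a simulated blocking call, and the count is reduced so the example runs quickly):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ManyTasks {
    // Run `count` blocking tasks, one virtual thread per task. At 100,000
    // platform threads this would exhaust memory; virtual threads handle it.
    static int runTasks(int count) throws InterruptedException {
        AtomicInteger done = new AtomicInteger();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < count; i++) {
                exec.submit(() -> {
                    try {
                        Thread.sleep(10); // simulated blocking call
                    } catch (InterruptedException ignored) {
                    }
                    done.incrementAndGet();
                });
            }
        } // the executor's close() waits for all submitted tasks
        return done.get();
    }
}
```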

Will your application benefit from Virtual Threads?

Hence ZIO has much more control over how interruptions are handled. Our code, when written using ZIO, simply has no way of “catching” an interruption and recovering from it. It is possible to define uninterruptible regions, but once the interpreter leaves such a region, any pending interruption requests will be processed. You can opt out of automatic supervision, but if you stick to the defaults, it’s simply not possible to use the API incorrectly. With Loom, you have to make additional effort to ensure that no threads leak.


For example, when a kernel thread runs for too long, it will be preempted so that other threads can take over. A thread can also more or less voluntarily give up the CPU so that other threads may use it. This is much easier when you have multiple CPUs, but you will almost never have as many CPUs as you have kernel threads running. This mechanism happens at the operating system level. This is far more performant than using platform threads with thread pools. Of course, these are simple use cases; both thread pools and virtual thread implementations can be further optimized for better performance, but that’s not the point of this post.


Unlike continuations, the contents of the unwound stack frames are not preserved, and there is no need for any object reifying this construct. The capitalized words Thread and Fiber refer to particular Java classes, and will be used mostly when discussing the design of the API rather than the implementation. A working application on how to implement the Record Patterns and Pattern Matching for switch APIs may be found in this GitHub repository, java-19 folder, by Wesley Egberto, Java technical lead at Global Points. JEP 405 and JEP 427 fall under the auspices of Project Amber, a project designed to explore and incubate smaller Java language features to improve productivity.


This helps programs written in a synchronous style run in an asynchronous mode. Essentially, both user-mode and kernel-mode context switches are very lightweight operations with hardware support. For example, PUSHA is used to store the general-purpose registers. Therefore, context-switching overhead is generally only caused by either storing registers or switching stack pointers. The call instruction automatically pushes the program counter, and the switch completes in dozens of instructions. Another important note is that virtual threads are always daemon threads, meaning they will not keep the containing JVM process alive.

The only difference between kernel threads and Loom’s virtual threads is how the threads are initially created. This is overblown, because everyone says millions of threads, and I keep saying that as well. That’s the piece of code that you can run even right now. You can download Project Loom with Java 18 or Java 19, if you’re cutting edge at the moment, and just see how it works.

What exactly makes Java Virtual Threads better

Another interesting related presentation is Daniel Spiewak’s “Case for effect systems”, where he argues that Loom obsoletes Future, but not IO. As mentioned earlier, proper sequencing needs a bit more mindfulness in the Loom implementation than in the ZIO one, so that at no point do we inadvertently run the side effects at the moment of their construction. Other than that, both implementations are similar here as well.


There’s also a different initiative coming as part of Project Loom called structured concurrency. Essentially, it allows us to create an ExecutorService that waits for all tasks that were submitted to it in a try-with-resources block. This is just a minor addition to the API, and it may change. Next, here is a main function that calls foo, and then foo calls bar. There’s nothing really exciting here, except for the fact that the foo function is wrapped in a continuation. Wrapping a function in a continuation doesn’t actually run that function; it just wraps a Lambda expression, nothing specific to see here.


Again, threads — at least in this context — are a fundamental abstraction, and do not imply any programming paradigm. In particular, they refer only to the abstraction allowing programmers to write sequences of code that can run and pause, and not to any mechanism of sharing information among threads, such as shared memory or passing messages. A few use cases that are actually insane these days, but they will be maybe useful to some people when Project Loom arrives. For example, let’s say you want to run something after eight hours, so you need a very simple scheduling mechanism.
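A sketch of that scheduling use case: since a sleeping virtual thread is nearly free, “run this after eight hours” can be just a thread that sleeps and then runs the task. The `runAfter` helper below is hypothetical.

```java
import java.time.Duration;

public class DelayedRun {
    // Hypothetical helper: start a virtual thread, sleep for the delay,
    // then run the task. Interrupting the returned thread cancels it.
    static Thread runAfter(Duration delay, Runnable task) {
        return Thread.startVirtualThread(() -> {
            try {
                Thread.sleep(delay); // e.g. Duration.ofHours(8)
                task.run();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // cancelled before firing
            }
        });
    }
}
```

With platform threads, parking a whole OS thread for eight hours would be wasteful; with virtual threads, the parked thread costs only a small amount of heap.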

I leave you with a few materials which I collected, more presentations and more articles that you might find interesting. Quite a few blog posts that explain the API a little bit more thoroughly. A few more critical or skeptic points of view, mainly around the fact that Project Loom won’t really change that much. It’s especially for the people who believe that we will no longer need reactive programming because we will all just write our code using plain Project Loom.

However, if I now run the continuation, that is, if I call run on that object, execution enters the foo function and continues. It runs the first line, then goes into the bar function, which continues running. Then, on line 16, something really exciting and interesting happens: the bar function voluntarily says it would like to suspend itself.
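The Continuation class used in that demo is internal to the JDK and not part of the public API, so here is a stand-in sketch: a virtual thread that parks shows the same suspend-and-resume behavior from the outside.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class SuspendResume {
    static String run() throws InterruptedException {
        StringBuffer trace = new StringBuffer(); // synchronized, safe across threads
        AtomicBoolean resume = new AtomicBoolean(false);
        Thread vt = Thread.startVirtualThread(() -> {
            trace.append("foo;");
            trace.append("bar;");
            while (!resume.get()) LockSupport.park(); // voluntarily suspend
            trace.append("resumed;");
        });
        // Wait until the virtual thread has parked itself.
        while (vt.getState() != Thread.State.WAITING) Thread.onSpinWait();
        trace.append("suspended;"); // control is back with the caller
        resume.set(true);
        LockSupport.unpark(vt);     // resume it where it left off
        vt.join();
        return trace.toString();
    }
}
```

The park call is the “voluntary suspend” from the walkthrough: the virtual thread stops mid-method, its carrier is freed, and unpark picks it up exactly where it left off.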

If you do not do anything exotic, it does not matter, in terms of performance, whether you submit all tasks with one executor or with two. The try-with-resources construct allows you to introduce “structure into your concurrency”. If you want to get more exotic, then Loom provides possibilities to restrict virtual threads to a pool of carrier threads. However, this feature can lead to unexpected consequences, as outlined in Going inside Java’s Project Loom and virtual threads. To be able to execute many parallel requests with few native threads, a virtual thread voluntarily hands over control and pauses when waiting for I/O. However, it doesn’t block the underlying native thread, which executes the virtual thread as a “worker”.


If you’ve already heard of Project Loom a while ago, you might have come across the term fibers. In the first versions of Project Loom, fiber was the name for the virtual thread. It goes back to a previous project of the current Loom project leader Ron Pressler, the Quasar Fibers. However, the name fiber was discarded at the end of 2019, as was the alternative coroutine, and virtual thread prevailed.

Why Is the Overhead Small in the Case of Scheduled Coroutines?

The methods for changing priority and daemon status are no-ops. But why would user-mode threads be in any way better than kernel threads, and why do they deserve the appealing designation of lightweight? It is, again, convenient to separately consider both components, the continuation and the scheduler. Many applications written for the Java Virtual Machine are concurrent, meaning programs, like servers and databases, that are required to serve many requests occurring concurrently and competing for computational resources. Project Loom is intended to significantly reduce the difficulty of writing efficient concurrent applications, or, more precisely, to eliminate the tradeoff between simplicity and efficiency in writing concurrent programs. Should you just blindly install the new version of Java whenever it comes out and just switch to virtual threads?
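A small sketch of the no-op behavior mentioned above, assuming a JDK 21 or later runtime where these semantics are specified:

```java
public class VirtualThreadProps {
    // Virtual threads are always daemon threads, and setPriority is a no-op:
    // the priority stays at NORM_PRIORITY regardless of what is requested.
    static boolean[] check() throws InterruptedException {
        Thread vt = Thread.ofVirtual().unstarted(() -> { });
        boolean alwaysDaemon = vt.isDaemon();            // true for virtual threads
        vt.setPriority(Thread.MAX_PRIORITY);             // silently ignored
        boolean priorityUnchanged = vt.getPriority() == Thread.NORM_PRIORITY;
        vt.start();
        vt.join();
        return new boolean[] { alwaysDaemon, priorityUnchanged };
    }
}
```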