GIL-free Python and the GPU: hands-on experience
- Track: Python Core, Internals, Extensions
- Type: Tutorial
- Level: Intermediate
- Duration: 180 minutes
Abstract
Because of the Global Interpreter Lock (GIL), Python threads have never truly run in parallel. Even on multi-core systems, they are forced to take turns rather than running simultaneously, limiting performance in compute-heavy applications. The optional free-threaded build of Python 3.13, which removes the GIL, is unlocking new levels of concurrency and efficiency, redefining what's possible with Python in high-performance computing.
In this hands-on tutorial, we will demystify parallel programming in Python by showcasing how to tackle common concurrency challenges. Starting from the ground up, we will introduce the two common parallel-programming approaches in Python—multithreading and multiprocessing—ensuring that attendees of all experience levels can successfully participate in the tutorial.
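To make this concrete, here is a minimal sketch of the kind of comparison the tutorial starts from: the same CPU-bound workload run once with a thread pool and once with a process pool. The count_primes function and the 4-way split are illustrative choices, not part of the tutorial material.

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def count_primes(limit):
    """CPU-bound toy workload: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [50_000] * 4

    # Threads: with the GIL these take turns; on a free-threaded build
    # (e.g. python3.13t) they can run on separate cores simultaneously.
    with ThreadPoolExecutor(max_workers=4) as pool:
        print("threads:  ", sum(pool.map(count_primes, chunks)))

    # Processes: parallel on any build, at the cost of starting extra
    # interpreters and pickling arguments and results between them.
    with ProcessPoolExecutor(max_workers=4) as pool:
        print("processes:", sum(pool.map(count_primes, chunks)))
```

Run under a stock CPython build, the threaded version is no faster than serial code for this kind of pure-Python work; under a free-threaded build the two variants become directly comparable.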
From there, we will dive into real-life use cases and demonstrate how to leverage free-threaded Python to tap into the power of GPUs. By pairing Python’s parallel libraries with CUDA, you will learn how to accelerate both typical computing tasks and more advanced work, such as deep learning. We will also explore the best tools available for debugging, monitoring, and optimizing multi-threaded and GPU-accelerated applications, all while highlighting proven best practices.
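As a taste of the GPU portion, the following hedged sketch shows one way free-threaded Python and CUDA can be paired: several Python threads each submit matrix multiplications to the GPU on their own CUDA stream via CuPy. It assumes a CUDA-capable GPU with the cupy package installed; the matrix sizes and thread count are arbitrary.

```python
import threading
import cupy as cp  # assumption: CuPy is installed and a CUDA GPU is present

def worker(idx, results):
    # Each thread uses its own CUDA stream so kernel launches can overlap.
    with cp.cuda.Stream(non_blocking=True) as stream:
        a = cp.random.random((2048, 2048)).astype(cp.float32)
        b = cp.random.random((2048, 2048)).astype(cp.float32)
        c = a @ b                      # matrix multiply runs on the GPU
        stream.synchronize()           # wait only for this thread's work
        results[idx] = float(c.sum())  # copy a single scalar back to the host

results = {}
threads = [threading.Thread(target=worker, args=(i, results)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

One design note: because each thread synchronizes only its own stream, GPU work submitted by different threads is free to overlap rather than being serialized behind a single global queue.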
Throughout the tutorial, you will have the chance to work through exercises, from simple parallel calls to complex GPU integrations, so make sure to bring your laptop. Ideally, your laptop should have a GPU; if it does not, we will show you how to use one that is available online.
By the end, you will walk away with not only a solid understanding of GIL-free Python but also the confidence to implement, debug, and optimize parallel solutions in your own projects.