Key Features
- Learn to write data processing programs in Python that are highly available, reliable, and fault tolerant
- Make use of Amazon Web Services along with Python to establish a powerful remote computation system
- Train Python to handle data-intensive and resource-hungry applications
Book Description
CPU-intensive data processing tasks have become crucial given the complexity of today's big data applications. Reducing the CPU utilization of each process is key to improving the overall speed of an application.
This book will teach you how to perform parallel execution of computations by distributing them across multiple processors in a single machine, thus improving the overall performance of a big data processing task. We will cover synchronous and asynchronous models, shared memory and file systems, communication between various processes, synchronization, and more.
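The flavour of the book can be captured in a few lines of standard-library Python. The sketch below is not taken from the book; it is a minimal illustration of the single-machine parallelism described above, using `multiprocessing.Pool` to spread a CPU-bound function (a hypothetical `count_primes` helper) across all available cores.

```python
import math
from multiprocessing import Pool, cpu_count

def count_primes(limit):
    """CPU-bound task: count the primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(math.sqrt(n)) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    workloads = [50_000, 60_000, 70_000, 80_000]
    # Distribute the independent workloads across one worker process per core.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(count_primes, workloads)
    print(dict(zip(workloads, results)))
```

Because each workload is independent, `pool.map` can hand one to each worker process and collect the results in order, sidestepping the GIL for CPU-bound work.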
What You Will Learn
- Get an introduction to parallel and distributed computing
- Compare synchronous and asynchronous programming models
- Explore parallelism in Python
- Build distributed applications with Celery
- Run Python in the cloud
- Run Python on an HPC cluster
- Test and debug distributed applications
About the Author
Francesco Pierfederici is a software engineer who loves Python. He has been working in the fields of astronomy, biology, and numerical weather forecasting for the last 20 years.
He has built large distributed systems that make use of tens of thousands of cores at a time and run on some of the fastest supercomputers in the world. He has also written a lot of applications of dubious usefulness but that are great fun. Mostly, he just likes to build things.
Table of Contents
- An Introduction to Parallel and Distributed Computing
- Asynchronous Programming
- Parallelism in Python
- Distributed Applications with Celery
- Python in the Cloud
- Python on an HPC Cluster
- Testing and Debugging Distributed Applications