Wednesday, 12 January 2011

The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications



Published 2009-05-19 | Programming


If you're looking to take full advantage of multi-core processors with concurrent programming, this practical book provides the knowledge and hands-on experience you need. The Art of Concurrency is one of the few resources to focus on implementing algorithms in the shared-memory model of multi-core processors, rather than just theoretical models or distributed-memory architectures. The book provides detailed explanations and usable samples to help you transform algorithms from serial to parallel code, along with advice and analysis for avoiding mistakes that programmers typically make when first attempting these computations. Written by an Intel engineer with over two decades of parallel and concurrent programming experience, this book will help you:

- Understand parallelism and concurrency
- Explore differences between programming for shared-memory and distributed-memory architectures
- Learn guidelines for designing multithreaded applications, including testing and tuning
- Discover how to make the best use of different threading libraries, including Windows threads, POSIX threads, OpenMP, and Intel Threading Building Blocks
- Explore how to implement concurrent algorithms that involve sorting, searching, graphs, and other practical computations

The Art of Concurrency shows you how to keep algorithms scalable so they can take advantage of new processors with even more cores. For developers writing parallel algorithms and concurrent code, this book is a must.

User review
Not a bad book if you can get over the style and sloppy code
This book is a rather dull read for what is (in my opinion) interesting material. The writing style is informal, but self-importantly so (lots of `I did this` and `I've said this before`). Even discounting that, the writer cannot make the subject very interesting: partly because he eschews figures, flowcharts, and itemized steps in favor of walls of text, and partly because the writing itself is disjointed and not very good. Remember, informal != good.


Plus, the code is sloppy. Almost everywhere a main routine has a loop pthread_create'ing new threads, `new` is used to allocate a pointer and `free` is used to deallocate it. That is the most egregious problem; one can take issue with plenty of others (for example, why is the code C++ if 99% of it is C? Why is the pointer allocated in the main thread but freed in the worker thread?).


The thing I liked about the book is that it covers a lot of relevant topics, including modern ones such as MapReduce. It should reward someone with the patience to overlook its flaws.

User review
An excellent book, but...
I thought this was an excellent book. The previous reviewer was correct: it's an outstanding overview of how to create parallel algorithms without being too academic. My only real complaint is that the font is too small. Perhaps it's my age (I'm 46), but frankly, if I'm going to spend hours with a book and use it as a reference, I want the font to be easy on my eyes. If, in the second edition, the author makes the font bigger and adds a few more pages, then this will be *the* book to read on parallel algorithms.

User review
Solid book on concurrent programming
This is a new book on concurrent programming that splits the difference between academic tomes on the subject and cookbook-style code-dump texts.


The author assumes that readers have some basic knowledge of data structures, algorithms, and asymptotic efficiency of algorithms (Big-Oh notation) as typically taught in an undergraduate computer science curriculum. Familiarity with Introduction to Algorithms should do the trick. He also assumes that the reader is an experienced C programmer (he uses C throughout the book), knows something about the OpenMP, Intel Threading Building Blocks, POSIX threads, or Windows Threads libraries, and has a good idea of which of these tools applies in their own situation. The author does not focus on a single programming paradigm here, since, for the most part, the functionality of these libraries overlaps. Instead, he presents a variety of threading implementations across the wide spectrum of algorithms featured in the latter portion of the book.


The current product description does not show the table of contents, so I provide a chapter-by-chapter summary next:


Chapter 1, `Want to go faster`, anticipates and answers some of the questions you might have about concurrent programming. It explains the differences between parallelism and concurrency and describes the four-step threading methodology. The chapter ends with some background on concurrent programming and some of the differences and similarities between distributed-memory and shared-memory programming and execution models.


Chapter 2, `Concurrent or Not Concurrent`, collects a lot of the author's wisdom on initial approaches that apply to a large percentage of the code you're likely to encounter. The author also makes clear that not every bit of computation can be made concurrent, no matter how hard you try. Section 2.2 gives examples of the kinds of algorithms and computations that are not friendly to concurrency; where such examples can be modified to allow concurrent execution, hints and tips on how to do so are included.


Chapter 3, `Proving Correctness and Measuring Performance`, looks at topics related to the final two steps of the threading methodology. The first is knowing when your concurrent algorithms will run correctly, or at least having good reason to believe you've designed an error-free concurrent algorithm. The second covers some of the ways you can measure how well your concurrent code is executing in parallel.


Chapter 4, `Eight Simple Rules`, should give you more success in writing the best and most efficient threaded implementations of your applications. In later chapters, when discussing the design and implementation of specific algorithms, the author drops in references to one or more of these eight rules.


Chapter 5, `Threading Libraries`, reviews some of the details of the threading libraries used in subsequent chapters to implement the algorithms. The author assumes that you are already familiar with at least one of these threading methods. Proficiency is not expected, but you should at least have tried your hand at some threaded coding before.


If you are unfamiliar with any of the threading libraries, this chapter should give you enough detail to understand the algorithms implemented with that library and any library-specific features they use. Otherwise, it will be a review. You can skip this chapter and come back when you have a question about syntax or threading, without fear of losing continuity.


Chapter 6, `Parallel Sum and Prefix Scan`, covers two basic algorithmic problems: summing the elements of an array and finding all partial sums of the elements in an array. The solutions are easy to describe in a sentence or two, and the concurrent versions of these algorithms, known as parallel sum and prefix scan respectively, are simple and easy to understand. Because they are so simple, these problems have been analyzed extensively and serve as bellwether algorithms within the parallel programming community. The description, design, analysis, and implementation of these two algorithms are intended to prepare you for the rest of the algorithms in the text.


Chapter 7, `MapReduce`, is about an algorithmic framework, like divide-and-conquer or backtracking, rather than a specific algorithm. The pair of operations, map and reduce, is found in LISP and other functional languages, and MapReduce has been getting a lot of interest lately as an algorithmic framework that can be executed concurrently.


Chapter 8, `Sorting`, examines concurrent sorting. First considered are compare-exchange sorts such as bubble sort: sorting algorithms that compare two keys to determine the relative order of the elements holding those keys. Data movement is based on those comparison results and consists of exchanging the positions of the two items under consideration. The final algorithm considered is radix sort, which examines bits within keys to determine data movement.


Chapter 9, `Searching`, discusses two algorithms you can use to search through a collection of data and shows how to make them concurrent to decrease the time needed to locate items of interest. For simplicity, all keys within a data set to be searched are assumed to be unique; strategies for dealing with duplicate keys are mentioned only briefly.


Chapter 10, `Graph Algorithms`, talks briefly about what a graph is, how it is represented, and how it is traversed using both depth-first and breadth-first search. It is very well illustrated.


Chapter 11, `Threading Tools`, mentions some debugging and performance tools you can use on threaded applications. The set of tools covered is not exhaustive; it is meant to show you ways to track down problems much more quickly than hunting for them by hand.


All in all, this is a very useful yet not overly academic book. I would say you should probably have read something along the lines of Using OpenMP: Portable Shared Memory Parallel Programming (Scientific and Engineering Computation) first, or at least read that book in parallel with this one. If you're not interested in OpenMP, you would of course read a book on a different concurrent programming technique instead.

