
Threads Websites

A thread is a context of execution within a program. Multithreaded programming deals with designing a program to have parts of it execute concurrently. (Category ID: 58805)
1 -

Bibliography on Threads and Multithreading

Part of the Computer Science Bibliography Collection.
2 -

State Threads Library

Small application library for writing fast, highly scalable Internet programs on Unix-like platforms. Open source, MPL or GPL.
3 -

The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software

The biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency.
4 -

Generic Synchronization Policies in C++

Most uses of synchronization code in multi-threaded applications fall into a small number of high-level “usage patterns”, or what can be called generic synchronization policies (GSPs). This paper illustrates how the use of such GSPs simplifies the writing of thread-safe classes. In addition, the paper presents a C++ class library that implements commonly used GSPs.
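The policy idea described above can be sketched as a small policy-based class. This is a minimal illustration, not the paper's actual library; the names `MutexPolicy` and `Counter` are invented here.

```cpp
#include <mutex>

// Hypothetical synchronization policy: how to enter/exit a critical section.
struct MutexPolicy {
    void enter() { m.lock(); }
    void exit()  { m.unlock(); }
private:
    std::mutex m;
};

// A thread-safe counter whose synchronization strategy is a template
// parameter, so the same class body works with different policies
// (a null policy for single-threaded use, a reader/writer policy, etc.).
template <class SyncPolicy>
class Counter {
public:
    void increment() {
        policy.enter();
        ++value;
        policy.exit();
    }
    int get() {
        policy.enter();
        int v = value;
        policy.exit();
        return v;
    }
private:
    SyncPolicy policy;
    int value = 0;
};
```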
5 -

Deadlock: The Problem and a Solution

This article explains what deadlocks are and describes ways of circumventing deadlocks.
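One well-known way of circumventing deadlock (not necessarily the one the article describes) is to acquire all needed locks in a single atomic step. A minimal C++17 sketch:

```cpp
#include <mutex>

std::mutex m1, m2;
int a = 0, b = 0;

// std::scoped_lock acquires both mutexes using a deadlock-avoidance
// algorithm (std::lock), so two threads that name the locks in opposite
// orders still cannot form a waiting cycle.
void transfer() {
    std::scoped_lock lock(m1, m2);
    ++a;
    ++b;
}
```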
6 -

Multi-threaded Algorithm Implementations

Explores effective uses of threads by looking at a multi-threaded implementation of the QuickSort algorithm and reports on situations where using threads will not help.
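The trade-off the article reports on can be seen in a sketch like the following (assumed code, not the article's implementation): spawn a task for one partition while sorting the other in the current thread, but fall back to sequential recursion below a size threshold, where thread overhead outweighs any gain.

```cpp
#include <algorithm>
#include <future>
#include <vector>

// Parallel quicksort sketch with a sequential cutoff.
void quicksort(std::vector<int>& v, int lo, int hi, int threshold = 1000) {
    if (hi - lo < 2) return;
    int pivot = v[lo + (hi - lo) / 2];
    int i = lo, j = hi - 1;
    while (i <= j) {                       // Hoare-style partition
        while (v[i] < pivot) ++i;
        while (v[j] > pivot) --j;
        if (i <= j) std::swap(v[i++], v[j--]);
    }
    if (hi - lo > threshold) {
        // Large range: sort the left half on another thread.
        auto left = std::async(std::launch::async,
                               [&] { quicksort(v, lo, j + 1, threshold); });
        quicksort(v, i, hi, threshold);
        left.get();
    } else {
        // Small range: threads would cost more than they save.
        quicksort(v, lo, j + 1, threshold);
        quicksort(v, i, hi, threshold);
    }
}
```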
7 -

Avoiding the Perils of C++0x Data Races

Find out what dangers race conditions in general and C++0x data races in particular pose to concurrent code, as well as the strategies for avoiding them.
8 -

Concurrency in the D Programming Language

Andrei Alexandrescu explains recent hardware changes allowing concurrency and how the D programming language addresses these possibilities.
9 -

Software and the Concurrency Revolution

Focuses on the implications of concurrency for software and its consequences for both programming languages and programmers. (Herb Sutter and James Larus)
10 -


A site devoted to lock-free algorithms, scalable architecture, multicore design patterns, parallel computations, threading libraries, tooling support and related topics.
11 -

Welcome to the Jungle

Herb Sutter looks at how mainstream hardware is becoming permanently parallel, heterogeneous, and distributed.
12 -

A Thread Performance Comparison

Compares Windows NT and Solaris on a symmetric multiprocessor machine.
13 -


Higher order threads for C++; tutorial and reference manual.
14 -


Very lightweight stackless threads; give linear code execution for event-driven systems, designed to use little memory; library is pure C, no platform-specific Assembly; usable with or without OS. Open source, BSD-type license.
15 -

Apply Critical Sections Consistently

Critical sections are the One True Tool for guaranteeing mutual exclusion on shared variables. Like most tools, they must be applied consistently, and with their intended meanings.
16 -

Use Lock Hierarchies to Avoid Deadlock

Explains how to use lock hierarchies to avoid deadlock by assigning each shared resource a level that corresponds to its architectural layer.
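The leveling scheme described above can be sketched as a mutex wrapper that checks, at lock time, that the thread only acquires locks at strictly lower levels than the one it already holds. This is an illustrative sketch (the class name and interface are invented here, and `previous_` is kept per-mutex for brevity):

```cpp
#include <cassert>
#include <climits>
#include <mutex>

// A mutex with an architectural level: acquiring levels in strictly
// descending order makes a waiting cycle (and thus deadlock) impossible.
class HierarchicalMutex {
public:
    explicit HierarchicalMutex(int level) : level_(level) {}
    void lock() {
        assert(level_ < current_level());   // enforce the hierarchy
        mutex_.lock();
        previous_ = current_level();
        current_level() = level_;
    }
    void unlock() {
        current_level() = previous_;        // restore the thread's level
        mutex_.unlock();
    }
    int level() const { return level_; }
private:
    static int& current_level() {
        thread_local int level = INT_MAX;   // no lock held yet
        return level;
    }
    std::mutex mutex_;
    int level_;
    int previous_ = INT_MAX;
};
```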
17 -

Application-Level Abstractions for Lock-Free Data Sharing

Describes lock-free data sharing, otherwise known as "wait-free data sharing" as an alternative to the use of locks.
18 -

Lock-free Interprocess Communication

Interprocess communication is an essential component of modern software engineering. Often, lock-free IPC is accomplished via special processor instructions. This article proposes a communication scheme that requires only the atomic write of a processor word from the processor cache into main memory and the atomic read of a processor word from main memory into a processor register or cache.
19 -

The Pillars of Concurrency

This article makes the case that a consistent mental model is needed to talk about concurrency.
20 -

Multi-threaded Debugging Techniques

Describes a number of general purpose debugging techniques for multi-threaded applications.
21 -

Maximize Locality, Minimize Contention

Explains why, in the concurrent world, locality is a first-order issue that trumps most other performance considerations. Locality is no longer just about fitting well into cache and RAM; it is also about avoiding scalability busters by keeping tightly coupled data physically close together and separately used data far, far apart.
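The "separately used data far apart" point is often illustrated with false sharing: two counters updated by different threads but sharing one cache line contend with each other. A common remedy (a sketch under the assumption of 64-byte cache lines; names are illustrative) is to pad each counter onto its own line:

```cpp
#include <atomic>

// Padding each per-thread counter to a full (assumed 64-byte) cache line
// keeps one thread's writes from invalidating the line another thread is
// updating, i.e. avoids false sharing.
struct alignas(64) PaddedCounter {
    std::atomic<long> value{0};
};

PaddedCounter counters[4];          // one per worker thread

void work(int id, int n) {
    for (int i = 0; i < n; ++i)
        counters[id].value.fetch_add(1, std::memory_order_relaxed);
}
```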
22 -

The Many Faces of Deadlock

Explains that deadlock can happen whenever there is a blocking (or waiting) cycle among concurrent tasks.
23 -

Writing Lock-Free Code: A Corrected Queue

Explores lock-free code by focusing on creating a lock-free queue.
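For flavor, here is a minimal single-producer/single-consumer ring buffer, one of the simplest lock-free queue designs. This is a generic sketch, not the article's corrected queue: the producer writes only `tail`, the consumer writes only `head`, and acquire/release ordering publishes each element before its index.

```cpp
#include <atomic>
#include <cstddef>
#include <optional>

// Lock-free SPSC ring buffer (one slot left empty to distinguish
// full from empty).
template <typename T, size_t N>
class SpscQueue {
public:
    bool push(const T& item) {
        size_t t = tail.load(std::memory_order_relaxed);
        size_t next = (t + 1) % N;
        if (next == head.load(std::memory_order_acquire))
            return false;                           // full
        buf[t] = item;
        tail.store(next, std::memory_order_release); // publish the element
        return true;
    }
    std::optional<T> pop() {
        size_t h = head.load(std::memory_order_relaxed);
        if (h == tail.load(std::memory_order_acquire))
            return std::nullopt;                    // empty
        T item = buf[h];
        head.store((h + 1) % N, std::memory_order_release);
        return item;
    }
private:
    T buf[N] = {};
    std::atomic<size_t> head{0}, tail{0};
};
```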
24 -

Understanding Parallel Performance

Explains how to accurately analyze the real performance of parallel code and lists some basic considerations and common costs.
25 -

Lock Options

Presents a solution to races and deadlocks based on a well-known deadlock-avoidance protocol and shows how it can be enforced by the compiler. It can be applied to programs in which the number of locks is fixed and known up front.
26 -

Measuring Parallel Performance: Optimizing a Concurrent Queue

Shows several ways to write a fast, internally synchronized queue, one that callers can use without any explicit external locking or other synchronization, and compares their performance.
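The baseline variant of such a queue (a sketch, not one of the article's measured implementations) keeps all locking inside the class, so callers never take an external lock around `push` or `pop`:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Internally synchronized blocking queue.
template <typename T>
class SyncQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(m);
            q.push(std::move(value));
        }                       // release the lock before waking a waiter
        cv.notify_one();
    }
    T pop() {                   // blocks until an element is available
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return !q.empty(); });
        T value = std::move(q.front());
        q.pop();
        return value;
    }
private:
    std::mutex m;
    std::condition_variable cv;
    std::queue<T> q;
};
```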
27 -

Multithreaded File I/O

Multithreaded file I/O is so far an under-researched field. Although it is simple to measure, there is not much common knowledge about it. The measurements presented here show that multithreading can improve the performance of file access directly, as well as indirectly by utilizing available cores to process the data read.
28 -

Sharing Is the Root of All Contention

Sharing requires waiting and overhead, and is a natural enemy of scalability. This article focuses on one important case, namely mutable (writable) shared objects in memory, which are an inherent bottleneck to scalability on multicore systems.
29 -

Lock-Free Code: A False Sense of Security

Writing lock-free code can confound anyone, even expert programmers, as Herb Sutter shows in this article.
30 -

Real-world Concurrency

Describes some key principles that help in mastering the "black art" of writing multithreaded code.
31 -

Fundamental Concepts of Parallel Programming

Explains fundamental concepts for moving from a linear to a parallel programming model.
32 -

Introduction to Priority Inversion

Gives an introduction to priority inversion and shows a pair of techniques for avoiding it.
33 -

Use Threads Correctly = Isolation + Asynchronous Messages

Motivates and illustrates best practices for using threads, techniques that make concurrent code easier to write correctly and to reason about with confidence.
34 -

Practical Lock-Free Buffers

Looks at how lock-free programming avoids system failure by tolerating individual process failures.
35 -

Avoid Exposing Concurrency: Hide It Inside Synchronous Methods

Explains where to start when trying to add concurrency to a mass of existing code.
36 -

Break Up and Interleave Work to Keep Threads Responsive

Breaking up is hard to do, but interleaving can be even subtler.
37 -

Use Thread Pools Correctly: Keep Tasks Short and Nonblocking

A thread pool hides a lot of details, but to use one effectively you need some awareness of what a pool does under the covers, in order to avoid inadvertently hitting performance and correctness pitfalls.
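What a pool does under the covers can be seen in a minimal fixed-size implementation (a sketch with invented names, not a production pool). The structure also shows why tasks must be short and nonblocking: a task that blocks ties up one of only a few worker threads, and tasks that wait on each other can starve the whole pool.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal fixed-size thread pool: workers loop, pulling tasks off a
// shared queue; the destructor drains the queue and joins the workers.
class ThreadPool {
public:
    explicit ThreadPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back([this] {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(m);
                        cv.wait(lock, [&] { return stop || !tasks.empty(); });
                        if (stop && tasks.empty()) return;
                        task = std::move(tasks.front());
                        tasks.pop();
                    }
                    task();   // run outside the lock
                }
            });
    }
    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(m);
            tasks.push(std::move(task));
        }
        cv.notify_one();
    }
    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(m);
            stop = true;
        }
        cv.notify_all();
        for (auto& w : workers) w.join();
    }
private:
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> tasks;
    std::vector<std::thread> workers;
    bool stop = false;
};
```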

Subcategories under Threads

All Languages