Concurrent Programming

Giorgio Brajnik - Antonio D'Angelo

  1. Aims

    The importance of Concurrent Programming within an Operating Systems course is widely recognized. The aim of this unit is to give students a sound grounding in the process interaction paradigms used both inside and outside the kernel. After a brief summary of the principles of sequential programming, the basic features of dataflow programming are introduced, some general definitions are given, and their properties are discussed. The data dependency graph is introduced, providing the student with simple and effective programming paradigms for rewriting some classical sequential algorithms in parallel form.

    The general paradigm of message passing follows naturally from the dataflow principles by viewing the instruction nodes at a coarser grain. Semaphore-controlled shared memory is then obtained as a special case of message passing in which the messages exchanged are empty.
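
    The derivation above can be sketched in, say, Python (the course does not prescribe a language; the class and method names below are ours): a counting semaphore built on a channel that carries only empty messages, so P receives a token and V sends one.

```python
import queue
import threading

class ChannelSemaphore:
    """A counting semaphore realized as message passing:
    the channel holds empty messages (tokens)."""
    def __init__(self, initial):
        self._channel = queue.Queue()
        for _ in range(initial):
            self._channel.put(None)   # preload one empty message per unit

    def P(self):
        self._channel.get()           # blocks until an empty message arrives

    def V(self):
        self._channel.put(None)       # send an empty message back

# usage: with an initial value of 1 the semaphore enforces mutual exclusion
mutex = ChannelSemaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(1000):
        mutex.P()
        counter += 1                  # critical section
        mutex.V()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000
```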

    The last part concerns the deadlock problem, which is introduced and discussed within the framework of resource allocation graphs. Some important cases are defined, some formally, yielding interesting results on the structure of the corresponding graphs.

  2. Program

    1. Abstract Models:

      1. dependency graphs
      2. program representation
      3. computation organization and execution
      4. dataflow machine
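
    The dataflow execution model listed above can be sketched as follows (an illustrative fragment; the node naming is ours): each node fires as soon as all of its input tokens are present, so independent nodes, such as the two inner operations of (a+b)*(c-d), could run in parallel.

```python
import operator

nodes = {                     # node -> (operation, names of its inputs)
    "s1": (operator.add, ("a", "b")),
    "s2": (operator.sub, ("c", "d")),
    "p":  (operator.mul, ("s1", "s2")),
}
tokens = {"a": 2, "b": 3, "c": 7, "d": 4}    # initial input tokens

pending = dict(nodes)
while pending:
    for name, (op, ins) in list(pending.items()):
        if all(i in tokens for i in ins):    # all inputs arrived: fire
            tokens[name] = op(*(tokens[i] for i in ins))
            del pending[name]

print(tokens["p"])  # (2+3)*(7-4) = 15
```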

    2. Concurrent Processes:

      1. parallel control of execution;
      2. multisequential models;
      3. processes;
      4. process creation and interaction;
      5. coroutines implementation;
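
    One possible coroutine implementation, sketched here with Python generators (an assumption, not the course's own code): each yield is an explicit transfer of control back to a round-robin scheduler, which interleaves the coroutines deterministically.

```python
def coroutine(name, steps, log):
    for i in range(steps):
        log.append((name, i))
        yield                        # suspend; the scheduler resumes us later

def round_robin(coros):
    """Resume each coroutine in turn until all have finished."""
    while coros:
        co = coros.pop(0)
        try:
            next(co)
            coros.append(co)         # not finished: requeue it
        except StopIteration:
            pass                     # finished: drop it

log = []
round_robin([coroutine("A", 2, log), coroutine("B", 2, log)])
print(log)  # [('A', 0), ('B', 0), ('A', 1), ('B', 1)]
```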

    3. Message Passing:

      1. send and receive and communication channels;
      2. blocking and non-blocking process communication;
      3. selective communication;
      4. atomic transactions;
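
    The blocking/non-blocking distinction above can be illustrated with Python's queue.Queue standing in for a communication channel (an assumption; the course fixes no concrete API): a non-blocking receive on an empty channel fails immediately, while a blocking receive waits for a sender.

```python
import queue
import threading

channel = queue.Ueue() if False else queue.Queue()   # the channel

# non-blocking receive on an empty channel fails immediately
try:
    channel.get_nowait()
    got = True
except queue.Empty:
    got = False
print(got)  # False

# blocking receive waits until a sender delivers a message
def sender():
    channel.put("hello")

threading.Thread(target=sender).start()
msg = channel.get()            # blocks until the message arrives
print(msg)  # hello
```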

    4. Semaphores:

      1. definition and implementation within message passing;
      2. abstract definitions;
      3. correctness of semaphores;
      4. kernel implementation;
      5. critical sections and mutual exclusion;
      6. private semaphores;
      7. (conditional) critical regions;
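
    The private-semaphore scheme listed above can be sketched with Python's threading.Semaphore (the function names are ours): a semaphore initialized to 0 on which exactly one process waits, and which other processes release to signal it.

```python
import threading

done = []
private = threading.Semaphore(0)   # initially 0: the owner will block on it

def owner():
    private.acquire()              # wait on the private semaphore
    done.append("owner resumed")

def signaller():
    done.append("work finished")
    private.release()              # wake the single waiting owner

t1 = threading.Thread(target=owner)
t2 = threading.Thread(target=signaller)
t1.start(); t2.start()
t1.join(); t2.join()
print(done)  # ['work finished', 'owner resumed']
```

    Because the owner cannot proceed until the signaller releases the semaphore, the ordering of the two events is guaranteed.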

    5. Monitor:

      1. definitions;
      2. Hoare monitor;
      3. process scheduling;
      4. conditional synchronization;
      5. conditional wait;
      6. Kessel monitor;
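
    A monitor with conditional waits can be sketched using Python's condition variables (an assumption; note that Python's semantics is signal-and-continue rather than Hoare's signal-and-wait, hence the `while` re-check around each wait):

```python
import threading

class BoundedBuffer:
    """A monitor: one lock for mutual exclusion, two condition variables."""
    def __init__(self, size):
        self._items = []
        self._size = size
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)

    def put(self, item):
        with self._lock:                       # enter the monitor
            while len(self._items) >= self._size:
                self._not_full.wait()          # conditional wait
            self._items.append(item)
            self._not_empty.notify()

    def get(self):
        with self._lock:
            while not self._items:
                self._not_empty.wait()
            item = self._items.pop(0)
            self._not_full.notify()
            return item

buf = BoundedBuffer(2)
results = []
consumer = threading.Thread(
    target=lambda: results.extend(buf.get() for _ in range(3)))
consumer.start()
for x in (1, 2, 3):
    buf.put(x)                                 # blocks when the buffer is full
consumer.join()
print(results)  # [1, 2, 3]
```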

    6. Deadlock:

      1. introduction;
      2. resource allocation graph;
      3. necessary conditions for deadlock;
      4. deadlock detection and prevention;
      5. deadlock avoidance;

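
    Deadlock detection on a resource allocation graph reduces to cycle search; with single-instance resources a cycle is both necessary and sufficient for deadlock. A minimal sketch (the graph encoding is ours): edges run from a process to a resource it requests and from a resource to the process holding it.

```python
def has_cycle(graph):
    """DFS cycle detection; graph maps each node to the nodes it points to."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GREY
        for m in graph.get(n, ()):
            if color.get(m, WHITE) == GREY:    # back edge: cycle found
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: deadlock
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))                          # True
print(has_cycle({"P1": ["R1"], "R1": []}))     # False: no cycle
```
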
  3. Suggested Texts

    All the preceding topics are treated at length in the lecture notes Paradigmi di Programmazione Concorrente (Paradigms of Concurrent Programming), whose contents are briefly summarized here, chapter by chapter.