CCC Parallel Programming Language
TRANSCRIPT
Deepesh Lekhak (072/MSCSKE/654)
IOE, Pulchowk Campus
Introduction
• CCC (Chung Cheng C) is an extension of C that supports both control and data parallelism
• Developed by Nai-Wei Lin, Associate Professor at National Chung Cheng University, Taiwan
• A CCC program consists of a set of concurrent and cooperative tasks
• Control parallelism runs in MIMD mode and communicates via shared variables and/or message passing
• Data parallelism runs in SIMD mode and communicates via shared variables
Data Parallelism
• The concurrency abstraction for data parallelism is specified by the definition of the domain construct:

domain name[size] {
    data_declarations;
    data_parallel_functions;
};

• An invocation of a data-parallel function concurrently creates size tasks.
• The synchronization abstraction is implicitly specified by the synchronous semantics of the SIMD model.
• The communication abstraction is implicitly specified by a global name space.
An Example – Matrix Multiplication
[Figure: matrix product C = A × B]
domain matrix_op[16] {
    int a[16], b[16], c[16];
    multiply(distribute in int [16][16] A,
             distribute in int [16][16] B,
             distribute out int [16][16] C);
};
task::main( )
{
    int A[16][16], B[16][16], C[16][16];
    domain matrix_op m;

    read_array(A);
    read_array(B);
    m.multiply(A, B, C);
    print_array(C);
}
matrix_op::multiply(distribute in int [16][16] A,
                    distribute in int [16][16] B,
                    distribute out int [16][16] C)
{
    int i, j;

    a := A;
    b := B;
    for (i = 0; i < 16; i++)
        for (c[i] = 0, j = 0; j < 16; j++)
            c[i] += a[j] * matrix_op[i].b[j];
    C := c;
}
Control Parallelism
• Concurrency
  • task
  • par and parfor
• Synchronization and communication
  • shared variables – monitors
  • message passing – channels
Task
• The concurrency abstraction is specified via the definition of task-parallel functions and the parallel section constructs.

task task_parallel_function();

• The par construct is used to concurrently invoke a group of task-parallel functions:

par {
    func_1;
    func_2;
    …
    func_n;
}

• The parfor construct is used to concurrently invoke multiple instances of a group of task-parallel functions:

parfor (init_expr; exit_expr; step_expr) {
    func_1;
    func_2;
    …
    func_n;
}
Monitors
• The monitor construct is a modular and efficient construct for synchronizing access to shared variables among concurrent tasks
• It provides data abstraction, mutual exclusion, and conditional synchronization
Channels
• The channel construct is a modular and efficient construct for asynchronous message passing among concurrent tasks
• The messages in the channels can be accessed via the following two functions:

msg = receive(channel);
send(channel, msg);
An Example - Barber Shop
[Figure: one barber, a barber's chair, and several waiting customers]
task::main( )
{
    monitor Barber_Shop bs;
    int i;

    par {
        barber( bs );
        parfor (i = 0; i < 10; i++)
            customer( bs );
    }
}
task::barber(monitor Barber_Shop in bs)
{
    while ( 1 ) {
        bs.get_next_customer( );
        bs.finished_cut( );
    }
}

task::customer(monitor Barber_Shop in bs)
{
    bs.get_haircut( );
}
monitor Barber_Shop {
    int barber, chair, open;
    cond barber_available, chair_occupied;
    cond door_open, customer_left;

    Barber_Shop( );
    void get_haircut( );
    void get_next_customer( );
    void finished_cut( );
};
Barber_Shop( )
{
    barber = 0; chair = 0; open = 0;
}

void get_haircut( )
{
    while (barber == 0)
        wait(barber_available);
    barber -= 1;
    chair += 1;
    signal(chair_occupied);
    while (open == 0)
        wait(door_open);
    open -= 1;
    signal(customer_left);
}
void get_next_customer( )
{
    barber += 1;
    signal(barber_available);
    while (chair == 0)
        wait(chair_occupied);
    chair -= 1;
}

void finished_cut( )
{
    open += 1;
    signal(door_open);
    while (open > 0)
        wait(customer_left);
}
Structure
The system is layered, from top to bottom:
• CCC applications
• CCC compiler
• CCC runtime library
• Virtual shared memory machine interface
• Pthread (on SMPs) / Millipede (on SMP clusters)
Platforms for the CCC Compiler
• PCs and SMPs
  • Pthread: shared memory + dynamic thread creation
• PC clusters and SMP clusters
  • Millipede: distributed shared memory + dynamic remote thread creation
Virtual Shared Memory Machine Interface
• Processor management
• Thread management
• Shared memory allocation
• Mutex locks
• Read-write locks
• Condition variables
The CCC Runtime Library
• The CCC runtime library contains a collection of functions that implement the salient abstractions of CCC on top of the virtual shared memory machine interface
The CCC Compiler
• Tasks → threads
• Monitors → mutex locks, read-write locks, and condition variables
• Channels → mutex locks and condition variables
• Domains → sets of synchronous threads
Conclusions
• A high-level parallel programming language that uniformly integrates
  • both control and data parallelism
  • both shared variables and message passing
• A modular parallel programming language
• A retargetable compiler
Reference
1. Nai-Wei Lin, "Design and Implementation of the CCC Parallel Programming Language," National Chung Cheng University, Chiayi, Taiwan.
Thank You