Parallel Programming with Intel MPI
|Chapter 1: Introduction|Chapter 2|Chapter 3|Chapter 4|
Download and install Intel MPI
Please check the hardware, software, and development-tool requirements for your needs on Intel's webpage.
For my system, which has 16 GB RAM and a 2.1 GHz quad-core Intel Core i5, install the following tools from the corresponding websites.
1. Microsoft Visual Studio 2019 (Community Edition).
We will select only the Desktop development with C++ workload during installation.
2. Intel OneAPI Base toolkit
3. Intel OneAPI HPC Toolkit
We need to install both the Base and HPC toolkits from the Intel oneAPI website. You must have (or create) an account to be able to download. Select the appropriate OS, installer type, etc. on the webpage; the appropriate download button will then appear. Just click it.
The HPC toolkit provides a set of compilers and libraries. The DPC++/C++ compilers and the Intel MPI Library are required for this tutorial; the others are optional.
After the downloads are complete, the installations are straightforward: just keep clicking Next or the appropriate options in the prompts. You can choose the installation path on your system while installing; a default installation path is "C:\Program Files (x86)\Intel\oneAPI". To be able to compile and run using any of the Intel compilers, we need to run the following command in our command prompt.
This will set all environment variables and paths needed for compiling and running a program with MPI.
"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
MPI: Compile and Run your first program.
Let us write our first MPI program. Here is mphello.c, a Hello World program using MPI.
Note: It will help to go through the explanation of the following code on Dr. Wes Kendall's original website.
#include <mpi.h>
#include <stdio.h>
/* mphello.c: Hello World program using MPI. */
int main(int argc, char** argv) {
// Initialize the MPI environment
MPI_Init(NULL, NULL);
// Get the number of processes
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
// Get the rank of the process
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
// Get the name of the processor
char processor_name[MPI_MAX_PROCESSOR_NAME];
int name_len;
MPI_Get_processor_name(processor_name, &name_len);
// Print off a hello world message
printf("Hello world from processor %s, rank %d out of %d processors\n",
processor_name, world_rank, world_size);
// Finalize the MPI environment.
MPI_Finalize();
return 0;
}
To compile the program:
mpicc mphello.c -o mphello
This will create mphello.obj and mphello.exe in your local directory.
To run the program (the -n flag sets the number of MPI processes; here we launch two, matching the output below):
mpiexec -n 2 mphello
output:
Hello world from processor COMPUTER-USER, rank 1 out of 2 processors
Hello world from processor COMPUTER-USER, rank 0 out of 2 processors
Example : Trapezium Rule
We can use the trapezoidal rule to approximate the area between the graph of a function, y = f(x), two vertical lines, and the x-axis. The basic idea is to divide the interval on the x-axis into n equal subintervals. We then approximate the area lying between the graph and each subinterval by a trapezoid whose base is the subinterval, whose vertical sides are the vertical lines through the endpoints of the subinterval, and whose fourth side is the secant line joining the points where the vertical lines cross the graph. See Figure 3.4. If the endpoints of the subinterval are x_i and x_(i+1), then the length of the subinterval is h = x_(i+1) − x_i. Also, if the lengths of the two vertical segments are f(x_i) and f(x_(i+1)), then the area of the trapezoid is

Area of one trapezoid = (h/2) * [ f(x_i) + f(x_(i+1)) ]
/* Serial trapezoidal rule: approximate the integral of func over
   [left, right] using trap_count trapezoids. The integrand is assumed
   to be defined elsewhere as: double func(double x); */
double Trap(double left, double right, int trap_count) {
    double estimate, x;
    int i;
    double base_len = (right - left) / trap_count;
    estimate = (func(left) + func(right)) / 2.0;
    for (i = 1; i < trap_count; i++) {  /* interior points only; the
                                           endpoints are already counted */
        x = left + i * base_len;
        estimate += func(x);
    }
    estimate = estimate * base_len;
    return estimate;
}
Example : Galerkin Method
Advanced level routines
More Resources: