This presentation is on the topic of system security. Several common topics are introduced:
1. Firewall
2. Antivirus
3. Malware
4. IoT
These are the subtopics.
Methods for Sentiment Analysis: A Literature Study (vivatechijri)
Sentiment analysis is a trending topic, as everyone has an opinion on everything. The systematic study of these opinions can yield information that may prove valuable to many companies and industries in the future. A huge number of users are online, and they share their opinions and comments regularly; this information can be mined and used efficiently. Companies can review their own products using sentiment analysis and make the necessary changes. The data is huge, so efficient processing is required to collect it, analyze it, and produce the required results.
This paper discusses the various methods used for sentiment analysis. It covers techniques such as the lexicon-based approach, SVM [10], convolutional neural networks, the morphological sentence pattern model [1], and the IML algorithm. It surveys studies on various data sets such as the Twitter API, Weibo, movie reviews, IMDb, a Chinese micro-blog database [9], and more, and reports the accuracy results obtained by each system.
Classification and Detection of Vehicles using Deep Learning (ijtsrd)
Vehicle classification and license plate detection are important tasks in intelligent security and transportation systems. Traditional methods of vehicle classification and detection are highly complex and produce coarse-grained results because they suffer from limited viewpoints. Following the latest achievements of deep learning, it has been successfully applied to image classification and object detection. This paper presents a method based on a convolutional neural network that consists of two steps: vehicle classification and vehicle license plate recognition. Several typical modules have been applied in training and testing the classification and license plate detection model, such as convolutional neural networks (CNNs), TensorFlow, and Tesseract OCR. The proposed method can accurately identify the vehicle type, number plate, and other information. The model provides security and log details regarding vehicles by using AI surveillance; it guides surveillance operators and assists human resources. With the help of the original training dataset and an enriched testing dataset, the algorithm obtains an average accuracy of about 97.32% in the classification and detection of vehicles. As the amount of data increases, the mean error and misclassification rate gradually decrease, so this deep learning algorithm shows good superiority and adaptability. Compared to the leading methods on challenging image datasets, the deep learning approach obtains highly competitive results. Finally, the paper proposes methods for improving the algorithm and outlines the development direction of deep learning within machine learning and artificial intelligence. Madde Pavan Kumar | Dr. K. Manivel | N. Jayanthi, "Classification & Detection of Vehicles using Deep Learning", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-3, April 2020. URL: https://www.ijtsrd.com/papers/ijtsrd30353.pdf Paper URL: https://www.ijtsrd.com/engineering/software-engineering/30353/classification-and-detection-of-vehicles-using-deep-learning/madde-pavan-kumar
We currently have a project in the Human Computer Interaction (HCI) course in which we are developing a mobile app named "Announcer".
This is the project report for our "Announcer" mobile app.
Visit our blog to learn more:
yujinnohikari.blogspot.com
Prototyping software credit: justinmind.com
A deep learning facial expression recognition based scoring system for restaurants (CloudTechnologies)
What are URI handlers? The relationship between URI, URL, and URN; various URI handlers; Server-Side Includes (SSI); CGI/FastCGI; server-side scripting; Servlets; JSP.
Since its launch in mid-January, the Data Science Bowl Lung Cancer Detection Competition has attracted more than 1,000 submissions. To be successful in this competition, data scientists need to be able to get started quickly and make rapid iterative changes. In this talk, we show how to compute features of the scanned images in the competition with a pre-trained Convolutional Neural Network (CNN) using Cognitive Toolkit (previously named CNTK), and use these features to classify the scans as cancerous or not cancerous with a boosted tree using the LightGBM library, all in one hour.
Blog post: https://blogs.technet.microsoft.com/machinelearning/2017/02/17/quick-start-guide-to-the-data-science-bowl-lung-cancer-detection-challenge-using-deep-learning-microsoft-cognitive-toolkit-and-azure-gpu-vms/
The "E-learning Management System" has been developed to override the problems prevailing in the practicing manual system. This software is supported to eliminate and in some cases reduce the hardships faced by this existing system. Moreover this system is designed for the particular need of the company to carry out operations in a smooth and effective manner.
Use case, activity, sequence, and class diagrams of a Bus Ticket Management System.
Poster design of the Bus Ticket Management System.
By CSE students of East West University.
Lung Cancer Detection using Machine Learning (ijtsrd)
Modern three-dimensional (3D) medical imaging offers the potential and promise of major advances in science and medicine as higher-fidelity images are produced. Due to advances in computer-aided diagnosis and continuous progress in computerized medical image visualization, this has become one of the most important fields within scientific imaging. Early reports on cancer patients show that more people die of lung cancer than of colon, breast, and prostate cancers combined. Lung cancers are related to smoking or secondhand smoke, or less often to exposure to radon or other environmental factors, and so may be preventable; however, it is not yet clear whether these cancers can be prevented. In this research work, segmentation, feature extraction, and a convolutional neural network (CNN) are applied to locate and characterize the cancerous portion. Harpreet Singh | Er. Ravneet Kaur, "Lung Cancer Detection using Machine Learning", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-6, October 2020. URL: https://www.ijtsrd.com/papers/ijtsrd33659.pdf Paper URL: https://www.ijtsrd.com/computer-science/computer-architecture/33659/lung-cancer-detection-using-machine-learning/harpreet-singh
SDN (Software Defined Network) and NFV (Network Function Virtualization) for IoT (Sagar Rai)
Software, Software Defined Network, Network Function Virtualization, SDN, NFV, Internet of things, Basics of Internet of things, Network Basics, Virtualization, Limitation of Conventional Network, Open flow, Basics of conventional network
Software agents are very useful in the modern software development process. This presentation introduces agents and discusses their use in software development.
Pest Control in Agricultural Plantations Using Image Processing (IOSR Journals)
Abstract: Monocropped plantations are unique to India and a handful of countries throughout the globe. Essentially, the FOREST approach of growing coffee in India has enabled plantations to fight many outbreaks of pests and diseases. Monocropped plantations are under constant threat of pest and disease incidence because monocropping favours the build-up of pest populations. To cope with these problems, an automatic pest detection algorithm using image processing techniques in MATLAB is proposed in this paper. Image acquisition devices are used to acquire images of plantations at regular intervals. These images are then subjected to pre-processing, transformation, and clustering.
The Hostel Management System monitors and records a variety of information covering hostel attendance, disciplinary logs, and room charge status. The hostel software module includes many features such as fee collection, room allotment, room management (categorization of rooms), a daily hostel attendance register, and hostel reports. It also produces many reports, such as the room allotment register, room-left report, charge due reports and receipts, room transfer register, and room status report.
Our unique 1U GPU servers allow you to use the latest GPUs (Tesla, GTX 285, Quadro FX5800) for visualization or offloading processing in a small form factor. These are built on Intel's latest Nehalem processors.
14. Data Parallelism (2/3)
Code example: matrix multiplication (OpenMP)
Serial Code

DO K=1,N
  DO J=1,N
    DO I=1,N
      C(I,J) = C(I,J) + A(I,K)*B(K,J)
    END DO
  END DO
END DO

Parallel Code

!$OMP PARALLEL DO
DO K=1,N
  DO J=1,N
    DO I=1,N
      C(I,J) = C(I,J) + A(I,K)*B(K,J)
    END DO
  END DO
END DO
!$OMP END PARALLEL DO
15. Data Parallelism (3/3)
Data decomposition (4 processors, K = 1 to 20):

Process   Iterations of K   Data Elements
Proc0     K = 1:5           A(I,1:5),   B(1:5,J)
Proc1     K = 6:10          A(I,6:10),  B(6:10,J)
Proc2     K = 11:15         A(I,11:15), B(11:15,J)
Proc3     K = 16:20         A(I,16:20), B(16:20,J)
38. Speedup (5/7)
f = 0.2, n = 4
[Figure: serial execution takes 20 + 80 units of time, of which 20 cannot be parallelized and 80 can; in the parallel run, each of the 4 processes does 20 units of the parallelizable work.]
S(4) = 1 / (0.2 + (1 - 0.2)/4) = 2.5
43. Considerations for Practical Speedup
The actual speedup achieved is limited by communication overhead and load-balancing problems.
[Figure: the same 20 + 80 workload as before; in the parallel run on 4 processes, communication overhead and load imbalance stretch each process's execution beyond the ideal 80/4 = 20 units.]
69. OpenMP Programming Model (2/4)
Fork-Join
- Multiple threads are created for the parts that need to be parallelized.
- When the parallel computation finishes, execution continues sequentially again.
[Figure: a master thread forks into a team of threads at each parallel region and joins back to a single master thread afterwards.]
70. OpenMP Programming Model (3/4)
Compiler directives are inserted.

Serial Code

PROGRAM exam
...
ialpha = 2
DO i = 1, 100
  a(i) = a(i) + ialpha*b(i)
ENDDO
PRINT *, a
END

Parallel Code

PROGRAM exam
...
ialpha = 2
!$OMP PARALLEL DO
DO i = 1, 100
  a(i) = a(i) + ialpha*b(i)
ENDDO
!$OMP END PARALLEL DO
PRINT *, a
END
71. OpenMP Programming Model (4/4)
Fork-Join
※ export OMP_NUM_THREADS=4
[Figure: the master thread executes ialpha = 2, forks into four threads (the master plus three slaves) that run DO i=1,25 / DO i=26,50 / DO i=51,75 / DO i=76,100 in parallel, then joins back to the master thread, which executes PRINT *, a.]
76. Directives (3/5)
Specifying a parallel region

Fortran

!$OMP PARALLEL
DO i = 1, 10
  PRINT *, 'Hello World', i
ENDDO
!$OMP END PARALLEL

C

#pragma omp parallel
for(i=1; i<=10; i++)
  printf("Hello World %d\n", i);
77. Directives (4/5)
Parallel region and work sharing

Fortran

!$OMP PARALLEL
!$OMP DO
DO i = 1, 10
  PRINT *, 'Hello World', i
ENDDO
[!$OMP END DO]
!$OMP END PARALLEL

C

#pragma omp parallel
{
  #pragma omp for
  for(i=1; i<=10; i++)
    printf("Hello World %d\n", i);
}
78. Directives (5/5)
Parallel region and work sharing

Fortran

!$OMP PARALLEL
!$OMP DO
DO i = 1, n
  a(i) = b(i) + c(i)
ENDDO
[!$OMP END DO]      (optional)
!$OMP DO
...
[!$OMP END DO]      (optional)
!$OMP END PARALLEL

C

#pragma omp parallel
{
  #pragma omp for
  for (i=1; i<=n; i++) {
    a[i] = b[i] + c[i];
  }
  #pragma omp for
  for(...){
    ...
  }
}
90. What is MPI?
MPI = Message Passing Interface
MPI is a specification for the developers and users of message passing libraries. By itself, it is NOT a library, but rather the specification of what such a library should be.
MPI primarily addresses the message-passing parallel programming model: data is moved from the address space of one process to that of another process through cooperative operations on each process.
Simply stated, the goal of the Message Passing Interface is to provide a widely used standard for writing message passing programs. The interface attempts to be:
- Portable
- Efficient
- Practical
- Flexible
91. What is MPI?
The MPI standard has gone through a number of revisions, with the most recent version being MPI-3.
Interface specifications have been defined for C and Fortran90 language bindings:
- C++ bindings from MPI-1 are removed in MPI-3
- MPI-3 also provides support for Fortran 2003 and 2008 features
Actual MPI library implementations differ in which version and features of the MPI standard they support. Developers/users will need to be aware of this.
92. Programming Model
Originally, MPI was designed for distributed memory architectures, which were becoming increasingly popular at the time (1980s to early 1990s).
As architecture trends changed, shared memory SMPs were combined over networks, creating hybrid distributed memory/shared memory systems.
93. Programming Model
MPI implementers adapted their libraries to handle both types of underlying memory architectures seamlessly. They also adapted/developed ways of handling different interconnects and protocols.
Today, MPI runs on virtually any hardware platform:
- Distributed Memory
- Shared Memory
- Hybrid
The programming model clearly remains a distributed memory model, however, regardless of the underlying physical architecture of the machine.
94. Reasons for Using MPI
Standardization
- MPI is the only message passing library which can be considered a standard. It is supported on virtually all HPC platforms. Practically, it has replaced all previous message passing libraries.
Portability
- There is little or no need to modify your source code when you port your application to a different platform that supports (and is compliant with) the MPI standard.
Performance Opportunities
- Vendor implementations should be able to exploit native hardware features to optimize performance.
Functionality
- There are over 440 routines defined in MPI-3, which includes the majority of those in MPI-2 and MPI-1.
Availability
- A variety of implementations are available, both vendor and public domain.
95. History and Evolution
MPI has resulted from the efforts of numerous individuals and groups that began in 1992.
1980s to early 1990s: Distributed memory parallel computing develops, as do a number of incompatible software tools for writing such programs, usually with tradeoffs between portability, performance, functionality and price. Recognition of the need for a standard arose.
Apr 1992: Workshop on Standards for Message Passing in a Distributed Memory Environment, sponsored by the Center for Research on Parallel Computing, Williamsburg, Virginia. The basic features essential to a standard message passing interface were discussed, and a working group was established to continue the standardization process. A preliminary draft proposal was developed subsequently.
96. History and Evolution
Nov 1992: The working group meets in Minneapolis. An MPI draft proposal (MPI1) from ORNL is presented. The group adopts procedures and an organization to form the MPI Forum. It eventually comprised about 175 individuals from 40 organizations, including parallel computer vendors, software writers, academia and application scientists.
Nov 1993: Supercomputing 93 conference; the draft MPI standard is presented.
May 1994: The final version of MPI-1.0 is released.
MPI-1.0 was followed by versions MPI-1.1 (Jun 1995), MPI-1.2 (Jul 1997) and MPI-1.3 (May 2008).
MPI-2 picked up where the first MPI specification left off and addressed topics which went far beyond the MPI-1 specification. It was finalized in 1996.
MPI-2.1 (Sep 2008) and MPI-2.2 (Sep 2009) followed.
Sep 2012: The MPI-3.0 standard was approved.
99. A Header File for MPI Routines
Required for all programs that make MPI library calls.

C include file:        #include "mpi.h"
Fortran include file:  include 'mpif.h'

With MPI-3 Fortran, the USE mpi_f08 module is preferred over using the include file shown above.
100. The Format of MPI Calls
C names are case sensitive; Fortran names are not.
Programs must not declare variables or functions with names beginning with the prefix MPI_ or PMPI_ (profiling interface).

C Binding
Format:     rc = MPI_Xxxxx(parameter, ...)
Example:    rc = MPI_Bsend(&buf, count, type, dest, tag, comm)
Error code: Returned as "rc"; MPI_SUCCESS if successful.

Fortran Binding
Format:     CALL MPI_XXXXX(parameter, ..., ierr)
            call mpi_xxxxx(parameter, ..., ierr)
Example:    call MPI_BSEND(buf, count, type, dest, tag, comm, ierr)
Error code: Returned as the "ierr" parameter; MPI_SUCCESS if successful.
101. Communicators and Groups
MPI uses objects called communicators and groups to define which collection of processes may communicate with each other.
Most MPI routines require you to specify a communicator as an argument.
Communicators and groups will be covered in more detail later. For now, simply use MPI_COMM_WORLD whenever a communicator is required; it is the predefined communicator that includes all of your MPI processes.
102. Rank
Within a communicator, every process has its own unique integer identifier assigned by the system when the process initializes. A rank is sometimes also called a "task ID". Ranks are contiguous and begin at zero.
Ranks are used by the programmer to specify the source and destination of messages, and are often used conditionally by the application to control program execution (if rank = 0 do this / if rank = 1 do that).
103. Error Handling
Most MPI routines include a return/error code parameter, as described in the "Format of MPI Calls" section above.
However, according to the MPI standard, the default behavior of an MPI call is to abort if there is an error. This means you will probably not be able to capture a return/error code other than MPI_SUCCESS (zero).
The standard does provide a means to override this default error handler. You can also consult the error handling section of the MPI Standard located at http://www.mpiforum.org/docs/mpi-11-html/node148.html .
The types of errors displayed to the user are implementation dependent.
104. Environment Management Routines
MPI_Init
- Initializes the MPI execution environment. This function must be called in every MPI program, must be called before any other MPI function, and must be called only once in an MPI program. For C programs, MPI_Init may be used to pass the command line arguments to all processes, although this is not required by the standard and is implementation dependent.

C:       MPI_Init(&argc, &argv)
Fortran: MPI_INIT(ierr)

Input parameters:
- argc: pointer to the number of arguments
- argv: pointer to the argument vector
- ierr: the error return argument (Fortran)
105. Environment Management Routines
MPI_Comm_size
- Returns the total number of MPI processes in the specified communicator, such as MPI_COMM_WORLD. If the communicator is MPI_COMM_WORLD, then it represents the number of MPI tasks available to your application.

C:       MPI_Comm_size(comm, &size)
Fortran: MPI_COMM_SIZE(comm, size, ierr)

Input parameters:
- comm: communicator (handle)
Output parameters:
- size: number of processes in the group of comm (integer)
- ierr: the error return argument (Fortran)
106. Environment Management Routines
MPI_Comm_rank
- Returns the rank of the calling MPI process within the specified communicator. Initially, each process is assigned a unique integer rank between 0 and (number of tasks - 1) within the communicator MPI_COMM_WORLD. This rank is often referred to as a task ID. If a process becomes associated with other communicators, it will have a unique rank within each of these as well.

C:       MPI_Comm_rank(comm, &rank)
Fortran: MPI_COMM_RANK(comm, rank, ierr)

Input parameters:
- comm: communicator (handle)
Output parameters:
- rank: rank of the calling process in the group of comm (integer)
- ierr: the error return argument (Fortran)
107. Environment Management Routines
MPI_Finalize
- Terminates the MPI execution environment. This function should be the last MPI routine called in every MPI program; no other MPI routines may be called after it.

C:       MPI_Finalize()
Fortran: MPI_FINALIZE(ierr)

- ierr: the error return argument (Fortran)
108. Environment Management Routines
MPI_Abort
- Terminates all MPI processes associated with the communicator. In most MPI implementations it terminates ALL processes regardless of the communicator specified.

C:       MPI_Abort(comm, errorcode)
Fortran: MPI_ABORT(comm, errorcode, ierr)

Input parameters:
- comm: communicator (handle)
- errorcode: error code to return to the invoking environment
- ierr: the error return argument (Fortran)
109. Environment Management Routines
MPI_Get_processor_name
- Returns the processor name. Also returns the length of the name. The buffer for "name" must be at least MPI_MAX_PROCESSOR_NAME characters in size. What is returned into "name" is implementation dependent; it may not be the same as the output of the "hostname" or "host" shell commands.

C:       MPI_Get_processor_name(&name, &resultlength)
Fortran: MPI_GET_PROCESSOR_NAME(name, resultlength, ierr)

Output parameters:
- name: a unique specifier for the actual (as opposed to virtual) node. This must be an array of size at least MPI_MAX_PROCESSOR_NAME.
- resultlen: length (in characters) of the name
- ierr: the error return argument (Fortran)
110. Environment Management Routines
MPI_Get_version
- Returns the version (either 1 or 2) and subversion of MPI.

C:       MPI_Get_version(&version, &subversion)
Fortran: MPI_GET_VERSION(version, subversion, ierr)

Output parameters:
- version: major version of MPI (1 or 2)
- subversion: minor version of MPI
- ierr: the error return argument (Fortran)
111. Environment Management Routines
MPI_Initialized
- Indicates whether MPI_Init has been called; returns the flag as either logical true (1) or false (0).

C:       MPI_Initialized(&flag)
Fortran: MPI_INITIALIZED(flag, ierr)

Output parameters:
- flag: true if MPI_Init has been called and false otherwise
- ierr: the error return argument (Fortran)
112. Environment Management Routines
MPI_Wtime
- Returns an elapsed wall clock time in seconds (double precision) on the calling processor.

C:       MPI_Wtime()
Fortran: MPI_WTIME()

Return value:
- Time in seconds since an arbitrary time in the past.

MPI_Wtick
- Returns the resolution in seconds (double precision) of MPI_Wtime.

C:       MPI_Wtick()
Fortran: MPI_WTICK()

Return value:
- Time in seconds of the resolution of MPI_Wtime.
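As an illustration (not from the original slides), here is a minimal C sketch that times a section of code with MPI_Wtime on each rank and reports the clock resolution with MPI_Wtick; the "work" in the middle is a placeholder:

/* Minimal timing sketch: measure elapsed wall-clock time per rank. */
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    t0 = MPI_Wtime();
    /* ... work to be timed goes here ... */
    t1 = MPI_Wtime();

    printf("rank %d: elapsed %f s (clock resolution %e s)\n",
           rank, t1 - t0, MPI_Wtick());

    MPI_Finalize();
    return 0;
}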
114. Example: Hello World
Execute an MPI program:

$ module load [compiler] [mpi]
$ mpicc hello.c
$ mpirun -np 4 -hostfile [hostfile] ./a.out

Make a hostfile:

ibs0001 slots=2
ibs0002 slots=2
ibs0003 slots=2
ibs0003 slots=2
...
115. Example: Environment Management Routines

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[]) {
    int numtasks, rank, len, rc;
    char hostname[MPI_MAX_PROCESSOR_NAME];

    rc = MPI_Init(&argc, &argv);
    if (rc != MPI_SUCCESS) {
        printf("Error starting MPI program. Terminating.\n");
        MPI_Abort(MPI_COMM_WORLD, rc);
    }

    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(hostname, &len);
    printf("Number of tasks= %d My rank= %d Running on %s\n", numtasks, rank, hostname);

    /******* do some work *******/

    rc = MPI_Finalize();
    return 0;
}
116. Types of Point-to-Point Operations
MPI point-to-point operations typically involve message passing between two, and only two, different MPI tasks. One task performs a send operation and the other task performs a matching receive operation.
There are different types of send and receive routines used for different purposes:
- Synchronous send
- Blocking send / blocking receive
- Non-blocking send / non-blocking receive
- Buffered send
- Combined send/receive
- "Ready" send
Any type of send routine can be paired with any type of receive routine.
MPI also provides several routines associated with send/receive operations, such as those used to wait for a message's arrival or probe to find out if a message has arrived.
117. Buffering
In a perfect world, every send operation would be perfectly synchronized with its matching receive. This is rarely the case. Somehow or other, the MPI implementation must be able to deal with storing data when the two tasks are out of sync.
Consider the following two cases:
- A send operation occurs 5 seconds before the receive is ready: where is the message while the receive is pending?
- Multiple sends arrive at the same receiving task, which can only accept one send at a time: what happens to the messages that are "backing up"?
118. Buffering
The MPI implementation (not the MPI standard) decides what happens to data in these types of cases. Typically, a system buffer area is reserved to hold data in transit.
119. Buffering
System buffer space is:
- Opaque to the programmer and managed entirely by the MPI library
- A finite resource that can be easy to exhaust
- Often mysterious and not well documented
- Able to exist on the sending side, the receiving side, or both
- Something that may improve program performance because it allows send/receive operations to be asynchronous
120. Blocking vs. Non-blocking
Most of the MPI point-to-point routines can be used in either blocking or non-blocking mode.
Blocking
- A blocking send routine will only "return" after it is safe to modify the application buffer (your send data) for reuse. Safe means that modifications will not affect the data intended for the receive task. Safe does not imply that the data was actually received; it may very well be sitting in a system buffer.
- A blocking send can be synchronous, which means there is handshaking occurring with the receive task to confirm a safe send.
- A blocking send can be asynchronous if a system buffer is used to hold the data for eventual delivery to the receive.
- A blocking receive only "returns" after the data has arrived and is ready for use by the program.
Non-blocking
- Non-blocking send and receive routines behave similarly: they will return almost immediately. They do not wait for any communication events to complete, such as message copying from user memory to system buffer space or the actual arrival of the message.
- Non-blocking operations simply "request" the MPI library to perform the operation when it is able. The user cannot predict when that will happen.
- It is unsafe to modify the application buffer (your variable space) until you know for a fact that the requested non-blocking operation was actually performed by the library. There are "wait" routines used to do this.
- Non-blocking communications are primarily used to overlap computation with communication and exploit possible performance gains.
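To make the non-blocking pattern concrete, here is a minimal sketch (not from the original slides) of a ring exchange with MPI_Isend/MPI_Irecv; the neighbor arithmetic and message contents are illustrative assumptions:

/* Non-blocking ring exchange: post both operations, compute, then wait. */
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, numtasks, prev, next, tag = 1;
    int sendbuf, recvbuf;
    MPI_Request reqs[2];
    MPI_Status stats[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    prev = (rank - 1 + numtasks) % numtasks;  /* left neighbor in the ring */
    next = (rank + 1) % numtasks;             /* right neighbor in the ring */
    sendbuf = rank;

    /* Both calls return almost immediately. */
    MPI_Irecv(&recvbuf, 1, MPI_INT, prev, tag, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendbuf, 1, MPI_INT, next, tag, MPI_COMM_WORLD, &reqs[1]);

    /* ... computation that does not touch sendbuf/recvbuf could go here ... */

    /* Only after the waits complete is it safe to reuse the buffers. */
    MPI_Waitall(2, reqs, stats);
    printf("rank %d received %d from rank %d\n", rank, recvbuf, prev);

    MPI_Finalize();
    return 0;
}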
121. MPI Message Passing Routine Arguments
MPI point-to-point communication routines generally have an argument list that takes one of the following formats:

Blocking send:        MPI_Send(buffer, count, type, dest, tag, comm)
Non-blocking send:    MPI_Isend(buffer, count, type, dest, tag, comm, request)
Blocking receive:     MPI_Recv(buffer, count, type, source, tag, comm, status)
Non-blocking receive: MPI_Irecv(buffer, count, type, source, tag, comm, request)

Buffer
- Program (application) address space that references the data that is to be sent or received. In most cases, this is simply the variable name that is to be sent/received. For C programs, this argument is passed by reference and usually must be prepended with an ampersand: &var1
Data count
- Indicates the number of data elements of a particular type to be sent.
122. MPI Message Passing Routine Arguments
Data type
- For reasons of portability, MPI predefines its elementary data types. The table below lists those required by the standard.

C Data Types
MPI_CHAR             signed char
MPI_SHORT            signed short int
MPI_INT              signed int
MPI_LONG             signed long int
MPI_SIGNED_CHAR      signed char
MPI_UNSIGNED_CHAR    unsigned char
MPI_UNSIGNED_SHORT   unsigned short int
MPI_UNSIGNED         unsigned int
MPI_UNSIGNED_LONG    unsigned long int
MPI_FLOAT            float
MPI_DOUBLE           double
MPI_LONG_DOUBLE      long double
123. MPI Message Passing Routine Arguments
Destination
- An argument to send routines that indicates the process where a message should be delivered. Specified as the rank of the receiving process.
Tag
- An arbitrary non-negative integer assigned by the programmer to uniquely identify a message. Send and receive operations should match message tags. For a receive operation, the wild card MPI_ANY_TAG can be used to receive any message regardless of its tag. The MPI standard guarantees that integers 0-32767 can be used as tags, but most implementations allow a much larger range than this.
Communicator
- Indicates the communication context, or set of processes, for which the source or destination fields are valid. Unless the programmer is explicitly creating new communicators, the predefined communicator MPI_COMM_WORLD is usually used.
124. MPI Message Passing Routine Arguments
Status
- For a receive operation, indicates the source of the message and the tag of the message.
- In C, this argument is a pointer to the predefined structure MPI_Status (e.g. stat.MPI_SOURCE, stat.MPI_TAG).
- In Fortran, it is an integer array of size MPI_STATUS_SIZE (e.g. stat(MPI_SOURCE), stat(MPI_TAG)).
- Additionally, the actual number of elements received is obtainable from Status via the MPI_Get_count routine.
Request
- Used by non-blocking send and receive operations.
- Since non-blocking operations may return before the requested system buffer space is obtained, the system issues a unique "request number".
- The programmer uses this system-assigned "handle" later (in a WAIT type routine) to determine completion of the non-blocking operation.
- In C, this argument is a pointer to the predefined structure MPI_Request.
- In Fortran, it is an integer.
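A minimal sketch (not from the original slides) showing how these arguments fit together in a blocking MPI_Send/MPI_Recv pair; the tag and message value are illustrative:

/* Blocking point-to-point exchange between ranks 0 and 1. */
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, tag = 1, msg;
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &stat);
        /* The status object records the actual source and tag. */
        printf("rank 1 got %d (source=%d, tag=%d)\n",
               msg, stat.MPI_SOURCE, stat.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}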
130. Advanced Example: Monte-Carlo Simulation
<Problem>
- Monte-Carlo simulation
- Uses random numbers
- PI = 4 * Ac/As (Ac: area inside the quarter circle of radius r; As: area of the square)
<Requirement>
- Use N processes (ranks)
- Point-to-point communication
[Figure: a quarter circle of radius r inscribed in a square; random points are classified as inside or outside the circle.]
131. Advanced Example: Monte-Carlo Simulation for PI

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main() {
    const long num_step=100000000;
    long i, cnt;
    double pi, x, y, r;

    printf("-----------------------------------------------------------\n");
    pi = 0.0;
    cnt = 0;
    r = 0.0;
    for (i=0; i<num_step; i++) {
        x = rand() / (RAND_MAX+1.0);
        y = rand() / (RAND_MAX+1.0);
        r = sqrt(x*x + y*y);
        if (r<=1) cnt += 1;
    }
    pi = 4.0 * (double)(cnt) / (double)(num_step);
    printf("PI = %17.15lf (Error = %e)\n", pi, fabs(acos(-1.0)-pi));
    printf("-----------------------------------------------------------\n");
    return 0;
}
132. Advanced Example: Numerical Integration for PI
<Problem>
- Compute PI using numerical integration:
  integral from 0 to 1 of 4/(1+x^2) dx = PI
  With the midpoint rule on n subintervals of width 1/n, taking x_i = (i - 0.5)/n:
  PI is approximately the sum over i = 1..n of (1/n) * 4/(1 + ((i - 0.5)/n)^2)
<Requirement>
- Point-to-point communication
[Figure: the curve f(x) = 4/(1+x^2) on [0,1], evaluated at the midpoints x1 = (1-0.5)/n, x2 = (2-0.5)/n, ..., xn = (n-0.5)/n.]
133. Advanced Example: Numerical Integration for PI

#include <stdio.h>
#include <math.h>

int main() {
    const long num_step=100000000;
    long i;
    double sum, step, pi, x;

    step = (1.0/(double)num_step);
    sum = 0.0;
    printf("-----------------------------------------------------------\n");
    for (i=1; i<=num_step; i++) {      /* i = 1..n so that x is the midpoint (i-0.5)/n */
        x = ((double)i - 0.5) * step;
        sum += 4.0/(1.0+x*x);
    }
    pi = step * sum;
    printf("PI = %17.15lf (Error = %e)\n", pi, fabs(acos(-1.0)-pi));
    printf("-----------------------------------------------------------\n");
    return 0;
}
134. Types of Collective Operations
Synchronization
- Processes wait until all members of the group have reached the synchronization point.
Data Movement
- Broadcast, scatter/gather, all-to-all.
Collective Computation (reductions)
- One member of the group collects data from the other members and performs an operation (min, max, add, multiply, etc.) on that data.
135. Programming Considerations and Restrictions
With MPI-3, collective operations can be blocking or non-blocking. Only blocking operations are covered in this tutorial.
Collective communication routines do not take message tag arguments.
Collective operations within subsets of processes are accomplished by first partitioning the subsets into new groups and then attaching the new groups to new communicators.
They can only be used with MPI predefined datatypes, not with MPI Derived Data Types.
MPI-2 extended most collective operations to allow data movement between intercommunicators (not covered here).
136. Collective Communication Routines
MPI_Barrier
- Synchronization operation. Creates a barrier synchronization in a group. Each task, when reaching the MPI_Barrier call, blocks until all tasks in the group reach the same MPI_Barrier call. Then all tasks are free to proceed.

C:       MPI_Barrier(comm)
Fortran: MPI_BARRIER(comm, ierr)
137. Collective Communication Routines
MPI_Bcast
- Data movement operation. Broadcasts (sends) a message from the process with rank "root" to all other processes in the group.

C:       MPI_Bcast(&buffer, count, datatype, root, comm)
Fortran: MPI_BCAST(buffer, count, datatype, root, comm, ierr)
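A minimal usage sketch (not from the original slides), assuming rank 0 holds a parameter that every process needs; the variable name and value are illustrative:

/* Broadcast one integer from rank 0 to all ranks. */
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, nsteps = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) nsteps = 1000;               /* only the root knows the value */
    MPI_Bcast(&nsteps, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d: nsteps = %d\n", rank, nsteps);  /* every rank now prints 1000 */

    MPI_Finalize();
    return 0;
}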
138. Collective Communication Routines
MPI_Scatter
- Data movement operation. Distributes distinct messages from a single source task to each task in the group.

C:       MPI_Scatter(&sendbuf, sendcnt, sendtype, &recvbuf, recvcnt, recvtype, root, comm)
Fortran: MPI_SCATTER(sendbuf, sendcnt, sendtype, recvbuf, recvcnt, recvtype, root, comm, ierr)
139. Collective Communication Routines
MPI_Gather
- Data movement operation. Gathers distinct messages from each task in the group to a single destination task. This routine is the reverse operation of MPI_Scatter.

C:       MPI_Gather(&sendbuf, sendcnt, sendtype, &recvbuf, recvcount, recvtype, root, comm)
Fortran: MPI_GATHER(sendbuf, sendcnt, sendtype, recvbuf, recvcount, recvtype, root, comm, ierr)
140. Collective Communication Routines
MPI_Allgather
- Data movement operation. Concatenation of data to all tasks in a group. Each task in the group, in effect, performs a one-to-all broadcasting operation within the group.

C:       MPI_Allgather(&sendbuf, sendcount, sendtype, &recvbuf, recvcount, recvtype, comm)
Fortran: MPI_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierr)
141. Collective Communication Routines
MPI_Reduce
- Collective computation operation. Applies a reduction operation on all tasks in the group and places the result in one task.

C:       MPI_Reduce(&sendbuf, &recvbuf, count, datatype, op, root, comm)
Fortran: MPI_REDUCE(sendbuf, recvbuf, count, datatype, op, root, comm, ierr)
142. Collective Communication Routines
The predefined MPI reduction operations appear below. Users can also define their own reduction functions by using the MPI_Op_create routine.

MPI Reduction Operation   Operation                 C Data Types
MPI_MAX                   maximum                   integer, float
MPI_MIN                   minimum                   integer, float
MPI_SUM                   sum                       integer, float
MPI_PROD                  product                   integer, float
MPI_LAND                  logical AND               integer
MPI_BAND                  bit-wise AND              integer, MPI_BYTE
MPI_LOR                   logical OR                integer
MPI_BOR                   bit-wise OR               integer, MPI_BYTE
MPI_LXOR                  logical XOR               integer
MPI_BXOR                  bit-wise XOR              integer, MPI_BYTE
MPI_MAXLOC                max value and location    float, double and long double
MPI_MINLOC                min value and location    float, double and long double
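A minimal sketch (not from the original slides) of a global sum with MPI_Reduce and MPI_SUM; using each task's rank as its contribution is an illustrative choice:

/* Sum one integer per rank onto rank 0. */
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, numtasks, localval, globalsum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

    localval = rank;  /* each task's contribution */
    MPI_Reduce(&localval, &globalsum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)  /* only the root holds the result */
        printf("sum of ranks 0..%d = %d\n", numtasks - 1, globalsum);

    MPI_Finalize();
    return 0;
}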
143. Collective Communication Routines
MPI_Allreduce
- Collective computation operation + data movement. Applies a reduction operation and places the result in all tasks in the group. This is equivalent to an MPI_Reduce followed by an MPI_Bcast.

C:       MPI_Allreduce(&sendbuf, &recvbuf, count, datatype, op, comm)
Fortran: MPI_ALLREDUCE(sendbuf, recvbuf, count, datatype, op, comm, ierr)
144. Collective Communication Routines
MPI_Reduce_scatter
- Collective computation operation + data movement. First does an element-wise reduction on a vector across all tasks in the group. Next, the result vector is split into disjoint segments and distributed across the tasks. This is equivalent to an MPI_Reduce followed by an MPI_Scatter operation.

C:       MPI_Reduce_scatter(&sendbuf, &recvbuf, recvcount, datatype, op, comm)
Fortran: MPI_REDUCE_SCATTER(sendbuf, recvbuf, recvcount, datatype, op, comm, ierr)
145. Collective Communication Routines
MPI_Alltoall
- Data movement operation. Each task in a group performs a scatter operation, sending a distinct message to all the tasks in the group in order by index.

C:       MPI_Alltoall(&sendbuf, sendcount, sendtype, &recvbuf, recvcnt, recvtype, comm)
Fortran: MPI_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf, recvcnt, recvtype, comm, ierr)
146. Collective Communication Routines
MPI_Scan
- Performs a scan operation with respect to a reduction operation across a task group.

C:       MPI_Scan(&sendbuf, &recvbuf, count, datatype, op, comm)
Fortran: MPI_SCAN(sendbuf, recvbuf, count, datatype, op, comm, ierr)
147. Collective Communication Routines
[Figure: data movement patterns of the collective operations across four processes P0..P3, where * denotes some operator.
- broadcast: P0's value A is copied to every process.
- reduce: the combined value A*B*C*D is placed on one process.
- scatter: P0's elements A, B, C, D are distributed one per process; gather is the reverse operation.
- allreduce: every process receives A*B*C*D.
- allgather: every process receives the full sequence A, B, C, D.
- scan: each process receives the prefix result (P0: A, P1: A*B, P2: A*B*C, P3: A*B*C*D).
- alltoall: process Pi's j-th element goes to process Pj, so each process ends up with the i-th elements of all processes (e.g. P0 receives A0, B0, C0, D0).
- reduce scatter: element-wise reductions A0*B0*C0*D0, A1*B1*C1*D1, ... are computed and segment i is placed on process Pi.]
148. Example: Collective Communication (1/2)
Perform a scatter operation on the rows of an array.

#include "mpi.h"
#include <stdio.h>
#define SIZE 4

int main(int argc, char *argv[]) {
    int numtasks, rank, sendcount, recvcount, source;
    float sendbuf[SIZE][SIZE] = {
        {1.0, 2.0, 3.0, 4.0},
        {5.0, 6.0, 7.0, 8.0},
        {9.0, 10.0, 11.0, 12.0},
        {13.0, 14.0, 15.0, 16.0} };
    float recvbuf[SIZE];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

    /* Continuation (slide 2/2 was not captured in this extraction); the usual
       completion of this example scatters one row to each of the SIZE tasks: */
    if (numtasks == SIZE) {
        source = 1;
        sendcount = SIZE;
        recvcount = SIZE;
        MPI_Scatter(sendbuf, sendcount, MPI_FLOAT, recvbuf, recvcount,
                    MPI_FLOAT, source, MPI_COMM_WORLD);
        printf("rank= %d  Results: %f %f %f %f\n", rank,
               recvbuf[0], recvbuf[1], recvbuf[2], recvbuf[3]);
    } else
        printf("Must specify %d processors. Terminating.\n", SIZE);

    MPI_Finalize();
    return 0;
}
150. Advanced Example: Monte-Carlo Simulation for PI
Use the collective communication routines!

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main() {
    const long num_step=100000000;
    long i, cnt;
    double pi, x, y, r;

    printf("-----------------------------------------------------------\n");
    pi = 0.0;
    cnt = 0;
    r = 0.0;
    for (i=0; i<num_step; i++) {
        x = rand() / (RAND_MAX+1.0);
        y = rand() / (RAND_MAX+1.0);
        r = sqrt(x*x + y*y);
        if (r<=1) cnt += 1;
    }
    pi = 4.0 * (double)(cnt) / (double)(num_step);
    printf("PI = %17.15lf (Error = %e)\n", pi, fabs(acos(-1.0)-pi));
    printf("-----------------------------------------------------------\n");
    return 0;
}
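One possible solution sketch (not from the original slides), assuming num_step divides evenly among the ranks and using a simplistic per-rank srand() seeding; each rank counts its own hits and MPI_Reduce sums them on rank 0:

/* Parallel Monte-Carlo estimate of PI using MPI_Reduce. */
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(int argc, char *argv[]) {
    const long num_step = 100000000;
    long i, cnt = 0, total_cnt = 0, my_steps;
    int rank, numtasks;
    double pi, x, y;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

    srand(rank + 1);                  /* crude way to give each rank a different stream */
    my_steps = num_step / numtasks;   /* assumes numtasks divides num_step */

    for (i = 0; i < my_steps; i++) {
        x = rand() / (RAND_MAX + 1.0);
        y = rand() / (RAND_MAX + 1.0);
        if (x*x + y*y <= 1.0) cnt += 1;
    }

    /* Sum all local hit counts onto rank 0. */
    MPI_Reduce(&cnt, &total_cnt, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        pi = 4.0 * (double)total_cnt / (double)(my_steps * numtasks);
        printf("PI = %17.15lf (Error = %e)\n", pi, fabs(acos(-1.0) - pi));
    }

    MPI_Finalize();
    return 0;
}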
151. Advanced Example: Numerical Integration for PI
Use the collective communication routines!

#include <stdio.h>
#include <math.h>

int main() {
    const long num_step=100000000;
    long i;
    double sum, step, pi, x;

    step = (1.0/(double)num_step);
    sum = 0.0;
    printf("-----------------------------------------------------------\n");
    for (i=1; i<=num_step; i++) {      /* i = 1..n so that x is the midpoint (i-0.5)/n */
        x = ((double)i - 0.5) * step;
        sum += 4.0/(1.0+x*x);
    }
    pi = step * sum;
    printf("PI = %17.15lf (Error = %e)\n", pi, fabs(acos(-1.0)-pi));
    printf("-----------------------------------------------------------\n");
    return 0;
}
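One possible solution sketch (not from the original slides): each rank sums a round-robin subset of the midpoints and MPI_Reduce combines the partial integrals on rank 0:

/* Parallel numerical integration of 4/(1+x^2) on [0,1] using MPI_Reduce. */
#include "mpi.h"
#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[]) {
    const long num_step = 100000000;
    long i;
    int rank, numtasks;
    double sum = 0.0, step, pi = 0.0, x;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

    step = 1.0 / (double)num_step;
    /* Round-robin distribution of the midpoints across the ranks. */
    for (i = rank + 1; i <= num_step; i += numtasks) {
        x = ((double)i - 0.5) * step;
        sum += 4.0 / (1.0 + x*x);
    }
    sum *= step;  /* local partial integral */

    MPI_Reduce(&sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("PI = %17.15lf (Error = %e)\n", pi, fabs(acos(-1.0) - pi));

    MPI_Finalize();
    return 0;
}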